Sample records for selected key parameters

  1. Estimation of end point foot clearance points from inertial sensor data.

    PubMed

    Santhiranayagam, Braveena K; Lai, Daniel T H; Begg, Rezaul K; Palaniswami, Marimuthu

    2011-01-01

    Foot clearance parameters provide useful insight into tripping risks during walking. This paper proposes a technique for estimating key foot clearance parameters from inertial sensor (accelerometer and gyroscope) data. Fifteen features were extracted from raw inertial sensor measurements, and a regression model was used to estimate two key foot clearance parameters: the first maximum vertical clearance (mx1) after toe-off and the Minimum Toe Clearance (MTC) of the swing foot. Comparisons were made against measurements obtained using an optoelectronic motion capture system (Optotrak) at 4 different walking speeds. General Regression Neural Networks (GRNN) were used to estimate the desired parameters from the sensor features. Eight subjects' foot clearance data were examined, and a leave-one-subject-out (LOSO) method was used to select the best model. The best average Root Mean Square Errors (RMSE) across all subjects, obtained using all sensor features at the maximum speed, were 5.32 mm for mx1 and 4.04 mm for MTC. Further application of a hill-climbing feature selection technique improved RMSE by 0.54-21.93% and required fewer input features. The results demonstrated that using raw inertial sensor data with regression models and feature selection can accurately estimate key foot clearance parameters.
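
    The GRNN-plus-LOSO evaluation described above is straightforward to reproduce. Below is a minimal, hedged sketch (all data synthetic; the 15-feature layout, subject count and kernel bandwidth are illustrative stand-ins, not the paper's values): a GRNN is essentially Nadaraya-Watson kernel regression, scored here with leave-one-subject-out RMSE.

    ```python
    # Hypothetical sketch: GRNN regression of MTC from inertial features,
    # scored with leave-one-subject-out (LOSO) RMSE. All data are synthetic.
    import numpy as np

    def grnn_predict(X_train, y_train, X_test, sigma=3.0):
        """GRNN prediction: kernel-weighted average of training targets."""
        d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    def loso_rmse(X, y, subjects, sigma=3.0):
        """Average RMSE over leave-one-subject-out folds."""
        errs = []
        for s in np.unique(subjects):
            test = subjects == s
            pred = grnn_predict(X[~test], y[~test], X[test], sigma)
            errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
        return float(np.mean(errs))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(800, 15))                 # 15 inertial features
    y = 20 + 3 * X[:, 0] + rng.normal(0, 2, 800)   # MTC in mm (synthetic)
    subjects = np.repeat(np.arange(8), 100)        # 8 subjects, 100 strides each
    print(f"LOSO RMSE: {loso_rmse(X, y, subjects):.2f} mm")
    ```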

  2. What is the Optimal Strategy for Adaptive Servo-Ventilation Therapy?

    PubMed

    Imamura, Teruhiko; Kinugawa, Koichiro

    2018-05-23

    Clinical advantages of adaptive servo-ventilation (ASV) therapy have been reported in selected heart failure patients with or without sleep-disordered breathing, whereas multicenter randomized controlled trials could not demonstrate such advantages. Considering this discrepancy, optimal patient selection and device setting may be key to successful ASV therapy. Hemodynamic and echocardiographic parameters indicating pulmonary congestion, such as elevated pulmonary capillary wedge pressure, have been reported as predictors of a good response to ASV therapy. Recently, parameters indicating right ventricular dysfunction have also been reported as good predictors. An optimal device setting, with appropriate pressure applied for an appropriate time, may also be key. A large-scale prospective trial with optimal patient selection and optimal device setting is warranted.

  3. Selection Dynamics in Transient Compartmentalization

    NASA Astrophysics Data System (ADS)

    Blokhuis, Alex; Lacoste, David; Nghe, Philippe; Peliti, Luca

    2018-04-01

    Transient compartments have been recently shown to be able to maintain functional replicators in the context of prebiotic studies. Here, we show that a broad class of selection dynamics is able to achieve this goal. We identify two key parameters, the relative amplification of nonactive replicators (parasites) and the size of compartments. These parameters account for competition and diversity, and the results are relevant to similar multilevel selection problems, such as those found in virus-host ecology and trait group selection.
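
    The abstract's two key parameters map directly onto a toy simulation. The sketch below is my own minimal construction, not the paper's exact model: compartments of seeding size N_SEED are drawn from a shared pool, parasites amplify LAMBDA-fold faster during growth, and only the compartments richest in functional replicators survive to seed the next pool.

    ```python
    # Toy Monte Carlo of transient compartmentalization (my own minimal
    # construction, not the paper's exact model). The two knobs from the
    # abstract: LAMBDA, the relative amplification of parasites, and
    # N_SEED, the compartment seeding size.
    import numpy as np

    rng = np.random.default_rng(1)
    LAMBDA, N_SEED, N_COMP, ROUNDS = 2.0, 5, 1000, 30
    GROWTH = 10.0                     # common growth factor per round
    parasite_frac = 0.5               # parasite fraction in the shared pool

    for _ in range(ROUNDS):
        parasites = rng.binomial(N_SEED, parasite_frac, size=N_COMP)
        hosts = N_SEED - parasites
        # Growth: hosts grow GROWTH-fold, parasites LAMBDA times faster.
        h_grown = hosts * GROWTH
        p_grown = parasites * GROWTH * LAMBDA
        # Selection: keep the quarter of compartments richest in functional
        # replicators (a stand-in for selecting on compartment activity).
        keep = np.argsort(h_grown)[-N_COMP // 4:]
        pooled_h, pooled_p = h_grown[keep].sum(), p_grown[keep].sum()
        parasite_frac = pooled_p / (pooled_p + pooled_h)

    print(f"parasite fraction after {ROUNDS} rounds: {parasite_frac:.3f}")
    ```

    With these illustrative settings, compartment-level selection drives the parasite fraction down despite the parasites' twofold growth advantage; raising LAMBDA or N_SEED tips the balance the other way, which is the competition the abstract describes.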

  4. User's design handbook for a Standardized Control Module (SCM) for DC to DC Converters, volume 2

    NASA Technical Reports Server (NTRS)

    Lee, F. C.

    1980-01-01

    A unified design procedure is presented for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt. All key results and performance indices for buck, boost, and buck/boost switching regulators that are relevant to SCM design considerations are included to facilitate frequent reference.

  5. Comparative investigation on magnetic capture selectivity between single wires and a real matrix

    NASA Astrophysics Data System (ADS)

    Ren, Peng; Chen, Luzheng; Liu, Wenbo; Shao, Yanhai; Zeng, Jianwu

    2018-03-01

    High gradient magnetic separation (HGMS) achieves effective separation of fine, weakly magnetic minerals through a magnetic matrix. In practice, the matrix is made of numerous magnetic wires, so an insight into the magnetic capture characteristics of single wires would provide a basic foundation for the optimum design and choice of a real matrix. The magnetic capture selectivity of cylindrical and rectangular single wires in concentrating ilmenite minerals was investigated in a cyclic pulsating HGMS separator with its key operating parameters (magnetic induction, feed velocity and pulsating frequency) varied, and their capture selectivity characteristics were compared in parallel with those of a real 3.0 mm cylindrical matrix. It was found that the cylindrical single wires have superior capture selectivity to the rectangular ones, and that the single wires and the real matrix show basically the same capture trend with changes in the key operating parameters, but the single wires have a much higher capture selectivity than the real matrix.

  6. The selection criteria elements of X-ray optics system

    NASA Astrophysics Data System (ADS)

    Plotnikova, I. V.; Chicherina, N. V.; Bays, S. S.; Bildanov, R. G.; Stary, O.

    2018-01-01

    When designing new modifications of X-ray tomography systems, difficulties arise in the correct choice of the elements of the X-ray optical system. At present this problem is solved by practical trial, selecting values of the corresponding parameters, such as the X-ray tube voltage, taking into account the thickness and type of the studied material. To reduce design time and labor, it is necessary to create selection criteria and to determine the key parameters and characteristics of the elements. In this article, two main elements of the X-ray optical system, the X-ray tube and the X-ray detector, are considered. Criteria for the choice of elements, their key characteristics, the main dependences of their parameters, quality indicators, and recommendations on the choice of elements of X-ray systems are presented.

  7. Parameter as a Switch Between Dynamical States of a Network in Population Decoding.

    PubMed

    Yu, Jiali; Mao, Hua; Yi, Zhang

    2017-04-01

    Population coding is a method to represent stimuli using the collective activities of a number of neurons. Nevertheless, it is difficult to extract information from these population codes given the noise inherent in neuronal responses. Moreover, it is a challenge to identify the right parameter of the decoding model, which plays a key role in convergence. To address this problem, a population decoding model is proposed for parameter selection. Our method successfully identified the key conditions for a nonzero continuous attractor. Both the theoretical analysis and the application studies demonstrate the correctness and effectiveness of this strategy.

  8. ENHANCING THE STABILITY OF POROUS CATALYSTS WITH SUPERCRITICAL REACTION MEDIA. (R826034)

    EPA Science Inventory

    Adsorption/desorption and pore-transport are key parameters influencing the activity and product selectivity in porous catalysts. With conventional reaction media (gas or liquid phase), one of these parameters is generally favorable while the other is not. For instance, while ...

  9. Parameter Selection Methods in Inverse Problem Formulation

    DTIC Science & Technology

    2010-11-03

    Two examples are treated: a recently developed in-host model for HIV dynamics, which has been successfully validated with clinical data and used for prediction [4, 8], and a model for the reaction of the cardiovascular system to an ergometric workload. Key Words: Parameter selection...

  10. In vivo quantitative evaluation of vascular parameters for angiogenesis based on sparse principal component analysis and aggregated boosted trees

    NASA Astrophysics Data System (ADS)

    Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie

    2014-12-01

    To address the multicollinearity issue and the unequal contributions of vascular parameters in the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number and vascular length, indicating that these were the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs directly using the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs directly using the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.
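
    A hedged sketch of this pipeline is below (synthetic data; sklearn's SparsePCA and GradientBoostingRegressor stand in for the paper's sparse PCA and aggregated boosted trees): correlated parameters are assembled into sparse components, the components are ranked by tree importance, and the loadings map that importance back to the original parameters.

    ```python
    # Sketch: sparse PCs to de-alias collinear vascular parameters, then
    # boosted-tree importances mapped back through the loadings. Synthetic data.
    import numpy as np
    from sklearn.decomposition import SparsePCA
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 9))                    # 9 vascular parameters
    X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)   # induce multicollinearity
    y = X[:, 0] + 0.5 * X[:, 4] + rng.normal(0, 0.1, n)   # vascular volume proxy

    spca = SparsePCA(n_components=4, alpha=0.5, random_state=0)
    Z = spca.fit_transform(X)                      # sparse PCs

    gbr = GradientBoostingRegressor(random_state=0).fit(Z, y)
    # Map component importances back to the original parameters via loadings.
    param_importance = np.abs(spca.components_).T @ gbr.feature_importances_
    print(np.round(param_importance / param_importance.sum(), 3))
    ```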

  11. Management of physical health in patients with schizophrenia: practical recommendations.

    PubMed

    Heald, A; Montejo, A L; Millar, H; De Hert, M; McCrae, J; Correll, C U

    2010-06-01

    Improved physical health care is a pressing need for patients with schizophrenia. It can be achieved by means of a multidisciplinary team led by the psychiatrist. Key priorities should include: selection of antipsychotic therapy with a low risk of weight gain and metabolic adverse effects; routine assessment, recording and longitudinal tracking of key physical health parameters, ideally by electronic spreadsheets; and intervention to control CVD risk following the same principles as for the general population. A few simple tools to assess and record key physical parameters, combined with lifestyle intervention and pharmacological treatment as indicated, could significantly improve physical outcomes. Effective implementation of strategies to optimise physical health parameters in patients with severe enduring mental illness requires engagement and communication between psychiatrists and primary care in most health settings.

  12. Secure and Efficient Signature Scheme Based on NTRU for Mobile Payment

    NASA Astrophysics Data System (ADS)

    Xia, Yunhao; You, Lirong; Sun, Zhe; Sun, Zhixin

    2017-10-01

    Mobile payment is becoming more and more popular; however, traditional public-key encryption algorithms place high demands on hardware and are therefore unsuitable for mobile terminals with limited computing resources. In addition, these public-key encryption algorithms are not resistant to quantum computing. This paper studies the quantum-resistant public-key algorithm NTRU by analyzing the influence of the parameters q and k on the probability of generating a reasonable signature value. Two methods are proposed to improve this probability: first, increasing the value of the parameter q; second, adding, during the signature phase, an authentication condition that the reasonable-signature requirements are met. Experimental results show that the proposed signature scheme achieves zero leakage of the private-key information from the signature value and increases the probability of generating a reasonable signature value. It also improves the signature rate and avoids the propagation of invalid signatures in the network, although the scheme places certain restrictions on parameter selection.

  13. Optimum allocation of test resources and comparison of breeding strategies for hybrid wheat.

    PubMed

    Longin, C Friedrich H; Mi, Xuefei; Melchinger, Albrecht E; Reif, Jochen C; Würschum, Tobias

    2014-10-01

    The use of a breeding strategy combining the evaluation of line per se performance with testcross performance maximizes annual selection gain in hybrid wheat breeding. Recent experimental studies confirmed the high commercial potential of hybrid wheat, which requires the design of optimum breeding strategies. Our objectives were to (1) determine the optimum allocation of the type and number of testers, the number of test locations and the number of doubled haploid lines for different breeding strategies, (2) identify the best breeding strategy and (3) elaborate key parameters for an efficient hybrid wheat breeding program. We performed model calculations using the selection gain for grain yield as the target variable to optimize the number of lines, testers and test locations in four different breeding strategies. A breeding strategy (BS2) combining the evaluation of line per se performance and general combining ability (GCA) had a far larger annual selection gain across all considered scenarios than a breeding strategy (BS1) focusing only on GCA. In the combined strategy, producing testcross seed in parallel with the first yield trial for line per se performance (BS2rapid) resulted in a further increase of the annual selection gain. For the current situation in hybrid wheat, the relative superiority of the strategy BS2rapid amounted to 67% in annual selection gain compared to BS1. Varying a large number of parameters, we identified the high costs of hybrid seed production and the low variance of GCA in hybrid wheat breeding as the key parameters limiting selection gain in BS2rapid.

  14. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
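
    The BIC mechanics are easy to demonstrate on a stand-in problem (my illustration, not the paper's Markov-modulated Poisson machinery: a Gaussian mixture is used because sklearn exposes BIC = k ln n − 2 ln L̂ directly; the "intensity levels" are synthetic):

    ```python
    # Illustration of BIC-based model selection: the lowest BIC should pick
    # the true three-state model from 2-, 3- and 4-state competitors.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(1.0, 0.15, 700),    # synthetic samples
                           rng.normal(2.0, 0.15, 700),    # from a true
                           rng.normal(3.5, 0.15, 600)]).reshape(-1, 1)  # 3-state process

    for k in (2, 3, 4):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
        print(f"{k}-state model: BIC = {gm.bic(data):.1f}")
    ```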

  15. Influence of Time-Pickoff Circuit Parameters on LiDAR Range Precision

    PubMed Central

    Wang, Hongming; Yang, Bingwei; Huyan, Jiayue; Xu, Lijun

    2017-01-01

    A pulsed time-of-flight (TOF) measurement-based Light Detection and Ranging (LiDAR) system is more effective for medium-long range distances. As a key ranging unit, a time-pickoff circuit based on automatic gain control (AGC) and constant fraction discriminator (CFD) is designed to reduce the walk error and the timing jitter for obtaining the accurate time interval. Compared with Cramer–Rao lower bound (CRLB) and the estimation of the timing jitter, four parameters-based Monte Carlo simulations are established to show how the range precision is influenced by the parameters, including pulse amplitude, pulse width, attenuation fraction and delay time of the CFD. Experiments were carried out to verify the relationship between the range precision and three of the parameters, exclusing pulse width. It can be concluded that two parameters of the ranging circuit (attenuation fraction and delay time) were selected according to the ranging performance of the minimum pulse amplitude. The attenuation fraction should be selected in the range from 0.2 to 0.6 to achieve high range precision. The selection criterion of the time-pickoff circuit parameters is helpful for the ranging circuit design of TOF LiDAR system. PMID:29039772
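
    The CFD stage lends itself to a compact numerical illustration. The sketch below is my own construction (Gaussian return pulse; the sampling rate and the fraction/delay values are illustrative, not the paper's): the CFD signal is the attenuated pulse minus a delayed copy, and the timestamp is its zero crossing, which is what makes the pickoff insensitive to pulse amplitude.

    ```python
    # Sketch of constant-fraction-discriminator (CFD) timing on a simulated
    # return pulse. All parameter values are illustrative.
    import numpy as np

    fs = 1e9                                   # 1 GS/s sampling
    t = np.arange(0, 200e-9, 1 / fs)
    pulse = np.exp(-((t - 80e-9) ** 2) / (2 * (10e-9) ** 2))   # Gaussian return

    frac, delay_s = 0.4, 8e-9                  # attenuation fraction, delay time
    delay_n = int(delay_s * fs)
    delayed = np.concatenate([np.zeros(delay_n), pulse[:-delay_n]])
    cfd = frac * pulse - delayed               # bipolar CFD signal

    # First negative-going zero crossing, refined by linear interpolation.
    i = np.where((cfd[:-1] > 0) & (cfd[1:] <= 0))[0][0]
    t0 = t[i] + (t[i + 1] - t[i]) * cfd[i] / (cfd[i] - cfd[i + 1])
    print(f"CFD timestamp: {t0 * 1e9:.2f} ns")
    ```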

  16. A key factor to the spin parameter of uniformly rotating compact stars: crust structure

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Zhang, Nai-Bo; Sun, Bao-Yuan; Wang, Shou-Yu; Gao, Jian-Hua

    2016-04-01

    We study the dimensionless spin parameter j ≡ cJ/(GM²) of different kinds of uniformly rotating compact stars, including traditional neutron stars, hyperonic neutron stars and hybrid stars, based on relativistic mean field theory and the MIT bag model. It is found that j_max ≈ 0.7, as had been suggested for traditional neutron stars, also holds for hyperonic neutron stars and hybrid stars with M > 0.5 M⊙. Not the interior but rather the crust structure of the stars is the key factor determining j_max for the three kinds of selected compact stars. Furthermore, a universal formula j = 0.63(f/f_K) − 0.42(f/f_K)² + 0.48(f/f_K)³ is suggested to determine the spin parameter at any rotational frequency f smaller than the Keplerian frequency f_K.
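
    The quoted fit is directly usable; a one-function transcription (coefficients taken from the abstract):

    ```python
    # The suggested universal fit, with x = f / f_K (Keplerian frequency).
    def spin_parameter(f_over_fK: float) -> float:
        """j(x) = 0.63 x - 0.42 x^2 + 0.48 x^3."""
        x = f_over_fK
        return 0.63 * x - 0.42 * x ** 2 + 0.48 * x ** 3

    print(spin_parameter(1.0))   # ~0.69 at the Keplerian limit, consistent with j_max ~ 0.7
    ```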

  17. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  18. Efficient design based on perturbed parameter ensembles to identify plausible and diverse variants of a model for climate change projections

    NASA Astrophysics Data System (ADS)

    Karmalkar, A.; Sexton, D.; Murphy, J.

    2017-12-01

    We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as the basis for selecting model variants for a PPE suitable for future projections: (a) acceptable model performance at two different timescales, and (b) maintained diversity in the model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on the ranges in forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% of it can be filtered out without affecting diversity in global-scale climate change responses. This leads to the identification of plausible parts of parameter space from which model variants can be selected for projection studies.

  19. Will Quantitative Proteomics Redefine Some of the Key Concepts in Skeletal Muscle Physiology?

    PubMed

    Gizak, Agnieszka; Rakus, Dariusz

    2016-01-11

    Molecular and cellular biology methodology is traditionally based on the reasoning called "the mechanistic explanation". In practice, this means identifying and selecting correlations between biological processes which result from our manipulation of a biological system. In theory, a successful application of this approach requires precise knowledge of all parameters of a studied system. However, in practice, due to the systems' complexity, this requirement is rarely, if ever, fulfilled. Typically, it is limited to quantitative or semi-quantitative measurements of selected parameters (e.g., concentrations of some metabolites) and a qualitative or semi-quantitative description of changes in expression/post-translational modifications of selected proteins. A quantitative proteomics approach offers the possibility of quantitatively characterizing the entire proteome of a biological system, in terms of both protein titer and post-translational modifications. This enables not only more accurate testing of novel hypotheses but also provides tools that can be used to verify some of the most fundamental dogmas of modern biology. In this short review, we discuss some of the consequences of using quantitative proteomics to verify several key concepts of skeletal muscle physiology.

  20. Features and selection of vascular access devices.

    PubMed

    Sansivero, Gail Egan

    2010-05-01

    To review venous anatomy and physiology, discuss assessment parameters before vascular access device (VAD) placement, and review VAD options. Journal articles, personal experience. A number of VAD options are available in clinical practice. Access planning should include comprehensive assessment, with attention to patient participation in the planning and selection process. Careful consideration should be given to long-term access needs and preservation of access sites. Oncology nurses are uniquely suited to perform a key role in VAD planning and placement. With knowledge of infusion therapy, anatomy and physiology, device options, and community resources, nurses can be key leaders in preserving vascular access and improving the safety and comfort of infusion therapy.

  1. Guiding automated left ventricular chamber segmentation in cardiac imaging using the concept of conserved myocardial volume.

    PubMed

    Garson, Christopher D; Li, Bing; Acton, Scott T; Hossack, John A

    2008-06-01

    The active surface technique using gradient vector flow allows semi-automated segmentation of ventricular borders. The accuracy of the algorithm depends on the optimal selection of several key parameters. We investigated the use of conservation of myocardial volume for quantitative assessment of each of these parameters using synthetic and in vivo data. We predicted that for a given set of model parameters, strong conservation of volume would correlate with accurate segmentation. The metric was most useful when applied to the gradient vector field weighting and temporal step-size parameters, but less effective in guiding an optimal choice of the active surface tension and rigidity parameters.
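
    A hedged sketch of the metric itself follows (the GVF segmentation is out of scope here; the masks below are random stand-ins for per-frame myocardium segmentations): score a candidate parameter set by how well the segmented myocardial volume is conserved across the cardiac cycle, using the coefficient of variation of the per-frame volumes.

    ```python
    # Conservation-of-volume metric: lower CV of per-frame myocardial volume
    # suggests a better segmentation parameter set. Masks are synthetic.
    import numpy as np

    def volume_cv(masks, voxel_volume=1.0):
        """masks: (n_frames, Z, Y, X) boolean myocardium segmentations."""
        vols = masks.reshape(masks.shape[0], -1).sum(axis=1) * voxel_volume
        return vols.std() / vols.mean()

    rng = np.random.default_rng(0)
    masks = rng.random((20, 32, 64, 64)) < 0.2    # 20 frames of fake segmentations
    print(f"volume CV: {volume_cv(masks):.4f}")
    ```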

  2. Morphological effects on the selectivity of intramolecular versus intermolecular catalytic reaction on Au nanoparticles.

    PubMed

    Wang, Dan; Sun, Yuanmiao; Sun, Yinghui; Huang, Jing; Liang, Zhiqiang; Li, Shuzhou; Jiang, Lin

    2017-06-14

    It is hard for metal nanoparticle catalysts to control the selectivity of a catalytic reaction in a simple process. In this work, we obtain active Au nanoparticle catalysts with high selectivity for the hydrogenation of aromatic nitro compounds by simply employing spine-like Au nanoparticles. Density functional theory (DFT) calculations further elucidate that the morphological effect on thermal selectivity control is a key internal parameter modulating the nitro hydrogenation process on the surface of the Au spines. These results show that controlled morphological effects may play an important role in catalytic reactions of noble metal NPs with high selectivity.

  3. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.

  4. Adverse Selection and an Individual Mandate: When Theory Meets Practice*

    PubMed Central

    Hackmann, Martin B.; Kolstad, Jonathan T.; Kowalski, Amanda E.

    2014-01-01

    We develop a model of selection that incorporates a key element of recent health reforms: an individual mandate. Using data from Massachusetts, we estimate the parameters of the model. In the individual market for health insurance, we find that premiums and average costs decreased significantly in response to the individual mandate. We find an annual welfare gain of 4.1% per person or $51.1 million annually in Massachusetts as a result of the reduction in adverse selection. We also find smaller post-reform markups. PMID:25914412

  5. Launch Vehicle Propulsion Design with Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey D.; Frederick, Robert A.; Wilhite, Alan W.

    2005-01-01

    The approach and techniques described herein define an optimization and evaluation approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system. The method uses Monte Carlo simulations, genetic algorithm solvers, a propulsion thermo-chemical code, power series regression curves for historical data, and statistical models in order to optimize a vehicle system. The system, including parameters for engine chamber pressure, area ratio, and oxidizer/fuel ratio, was modeled and optimized to determine the best design for seven separate design weight and cost cases by varying design and technology parameters. Significant model results show that a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Other key findings show the sensitivity of propulsion parameters, technology factors, and cost factors and how these parameters differ when cost and weight are optimized separately. Each of the three key propulsion parameters (chamber pressure, area ratio, and oxidizer/fuel ratio) is optimized in the seven design cases, and results are plotted to show the impacts on engine mass and overall vehicle mass.

  6. Two-Stage Modeling of Formaldehyde-Induced Tumor Incidence in the Rat—analysis of Uncertainties

    EPA Science Inventory

    This work extends the two-stage cancer modeling of tumor incidence in formaldehyde-exposed rats carried out at the CIIT Centers for Health Research. We modify key assumptions, evaluate the effect of selected uncertainties, and develop confidence bounds on parameter estimates. Th...

  7. U.S. EPA/ORD LARGE BUILDINGS STUDY: RESULTS OF THE INITIAL SURVEY OF RANDOMLY SELECTED GSA BUILDINGS

    EPA Science Inventory

    The Atmospheric Research and Exposure Assessment Laboratory (AREAL), Office of Research and Development (ORD), U.S. Environmental Protection Agency (EPA), is initiating a research program to collect fundamental information on the key parameters and factors that influence indoor a...

  8. Model selection as a science driver for dark energy surveys

    NASA Astrophysics Data System (ADS)

    Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin

    2006-07-01

    A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.
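
    For readers unfamiliar with Bayes factors, a toy computation conveys the idea (this is my own one-parameter toy, not the survey forecasting machinery): each model's evidence is the likelihood integrated against its prior, and the Bayes factor is the ratio of evidences, which automatically penalises the extra parameter of the evolving model.

    ```python
    # Toy Bayes factor: M0 fixes the equation of state at w = -1; M1 gives
    # w a Gaussian prior. All data and widths are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 0.1
    data = rng.normal(-1.0, sigma, size=20)      # toy "measurements" of w

    def loglike(w):
        return -0.5 * np.sum((data - w) ** 2) / sigma ** 2

    logZ0 = loglike(-1.0)                        # M0 evidence: no free parameter

    w = np.linspace(-3.0, 1.0, 4001)             # M1 evidence: integrate over prior
    prior = np.exp(-0.5 * ((w + 1.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
    like = np.exp(np.array([loglike(wi) for wi in w]) - logZ0)  # rescaled for stability
    logZ1 = logZ0 + np.log(np.sum(like * prior) * (w[1] - w[0]))

    print(f"ln B01 = {logZ0 - logZ1:.2f}   (> 0 favours the non-evolving model)")
    ```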

  9. Aqueous enzymatic extraction of Moringa oleifera oil.

    PubMed

    Mat Yusoff, Masni; Gordon, Michael H; Ezeh, Onyinye; Niranjan, Keshavan

    2016-11-15

    This paper reports on the extraction of Moringa oleifera (MO) oil using an aqueous enzymatic extraction (AEE) method. The effects of different process parameters on the oil recovery were investigated using statistical optimization, as was the effect of selected parameters on the formation of oil-in-water cream emulsions. Within the pre-determined ranges, the use of pH 4.5, a moisture/kernel ratio of 8:1 (w/w), and a shaking speed of 300 strokes/min at 40°C for a 1 h incubation time resulted in the highest oil recovery of approximately 70% (g oil/g solvent-extracted oil). These optimized parameters also resulted in a very thin emulsion layer, indicating that only a minute amount of emulsion formed. Zero oil recovery with a thick emulsion was observed when the used aqueous phase was re-utilized for another AEE process. The findings suggest that the critical selection of AEE parameters is key to high oil recovery with minimum emulsion formation, thereby lowering the load on the de-emulsification step.

  10. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

    This paper presents a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failures of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (R_e) and the charge transfer resistance (R_ct) as the key parameters for state estimation. Then, from actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (R_X) and the healthy residual value (R_L) of the LIBs based on MSET state estimation, and from these residual values (R_X and R_L) we detected anomaly states based on SPRT anomaly detection. Lastly, we conducted an example of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
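
    The SPRT stage is compact enough to sketch. Below is a minimal version (my construction; a Gaussian mean-shift on the MSET residuals, thresholds from Wald's classic bounds, all numbers illustrative):

    ```python
    # Minimal SPRT for residual-based anomaly detection: normal operation
    # residuals ~ N(0, s); fault hypothesis shifts the mean to m1.
    import numpy as np

    def sprt(residuals, m1=1.0, s=1.0, alpha=0.01, beta=0.01):
        A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
        llr = 0.0
        for k, r in enumerate(residuals):
            # Log-likelihood ratio increment for N(m1, s) vs N(0, s).
            llr += (m1 * r - m1 ** 2 / 2.0) / s ** 2
            if llr >= A:
                return k, "anomaly"
            if llr <= B:
                return k, "healthy"
        return len(residuals) - 1, "undecided"

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, 200)
    faulty = rng.normal(1.0, 1.0, 200)     # e.g. drifting R_e / R_ct residuals
    print(sprt(healthy), sprt(faulty))
    ```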

  11. Statistical Analyses of Femur Parameters for Designing Anatomical Plates.

    PubMed

    Wang, Lin; He, Kunjin; Chen, Zhengming

    2016-01-01

    Femur parameters are key prerequisites for scientifically designing anatomical plates. Meanwhile, individual differences in femurs present a challenge to designing well-fitting anatomical plates. Therefore, to design anatomical plates more scientifically, analyses of femur parameters were performed in this study with statistical methods. The specific steps were as follows. First, taking eight anatomical femur parameters as variables, 100 femur samples were classified into three classes with factor analysis and Q-type cluster analysis. Second, based on the mean parameter values of the three classes of femurs, three sizes of average anatomical plates corresponding to the three classes were designed. Finally, based on Bayes discriminant analysis, a new femur could be assigned to the proper class, and the average anatomical plate suitable for that femur then selected from the three available sizes. Experimental results showed that the classification of femurs was quite reasonable from the anatomical point of view. For instance, three sizes of condylar buttress plates were designed, 20 new femurs were assigned to their classes, and suitable condylar buttress plates were then determined and selected.
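
    A hedged sketch of this three-step pipeline (synthetic data; sklearn stand-ins: KMeans for Q-type cluster analysis, and LinearDiscriminantAnalysis, which is the Bayes discriminant under equal-covariance Gaussian assumptions):

    ```python
    # Pipeline sketch: factor reduction -> clustering into 3 classes ->
    # discriminant assignment of a new femur to its plate size.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))            # 8 anatomical parameters, 100 femurs

    factors = FactorAnalysis(n_components=3, random_state=0).fit_transform(X)
    classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)

    # A discriminant trained on the labelled classes assigns any new femur,
    # which then receives the matching plate size.
    lda = LinearDiscriminantAnalysis().fit(X, classes)
    new_femur = rng.normal(size=(1, 8))
    print(f"new femur -> class {lda.predict(new_femur)[0]}")
    ```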

  12. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    PubMed

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors that exert the most significant influence on percutaneous absorption, and to compare the resulting models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures of model quality were used. The inherently nonlinear nature of the skin dataset was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over the SLN and QSPR models with regard to predictivity (the rank order was GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters, suggesting that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity compared with the SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, such that certain descriptors (i.e., molecular weight and melting point) could be interchanged without incurring a loss of model quality. Such synergy suggests that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understanding of skin absorption.
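
    The ARD idea is worth making concrete. In the sketch below (synthetic data; the descriptor names come from the abstract but the values are invented; sklearn's anisotropic RBF kernel implements ARD via one length-scale per input), descriptors the model can ignore are pushed to long learned length-scales:

    ```python
    # GPR with automatic relevance determination: one RBF length-scale per
    # descriptor; long learned length-scales flag irrelevant descriptors.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    names = ["logP", "melting_point", "H_bond_donors", "mol_weight"]
    X = rng.normal(size=(120, 4))
    y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 120)

    kernel = RBF(length_scale=[1.0] * 4) + WhiteKernel(noise_level=0.1)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

    for name, ls in zip(names, gpr.kernel_.k1.length_scale):
        print(f"{name:15s} length-scale = {ls:8.2f}")   # long scale ~ irrelevant
    ```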

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tao; Niu, Zhenbin; Hu, Xunxiang

    The development of high performance materials for CO2 separation and capture will significantly contribute to a solution for climate change. In this work, (bicycloheptenyl)ethyl-terminated polydimethylsiloxane (PDMSPNB) membranes with varied cross-link densities were synthesized via ring-opening metathesis polymerization. The developed polymer membranes show higher permeability and better selectivity than those of conventional cross-linked PDMS membranes. The achieved performance (CO2 permeability ~ 6800 Barrer and CO2/N2 selectivity ~ 14) is very promising for practical applications. The key to achieving this high performance is the use of an in-situ cross-linking method for the difunctional PDMS macromonomers, which provides lightly cross-linked membranes. By combining positron annihilation lifetime spectroscopy, broadband dielectric spectroscopy and gas solubility measurements, we have elucidated the key parameters necessary for achieving their excellent performance.

  14. Transmission line design for the lunar environment

    NASA Technical Reports Server (NTRS)

    Gaustad, Krista L.; Gordon, Lloyd B.

    1990-01-01

    How the mass, operating temperature, and efficiency of a transmission line operating on the moon are affected by its operating parameters, the lunar environment, and the choice of materials is examined. The key transmission line parameters which have an effect on mass, operating temperature, and efficiency are voltage, power loss, and waveform. The choice of waveform for transmission will be influenced by the waveform of the source and load, and therefore an analysis of both DC and AC transmission is necessary for a complete understanding of lunar power transmission. The data presented are for the DC case only; however, the discussion of the environmental effects and of material selection is pertinent to both AC and DC transmission. The operating voltage is shown to be a key parameter in transmission line design. The role efficiency plays in transmission line design is also examined. The analyses include above- and below-the-surface operation for both a vacuum-insulated, two-wire, transmission line, and a solid-dielectric-insulated, coaxial, transmission line.
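
    The claim that operating voltage is a key parameter follows from simple scaling: for a fixed delivered power P, the current is I = P/V, the resistive loss is I²R, and the conductor cross-section (hence mass) required to hold a given loss budget falls rapidly with V. A back-of-the-envelope sketch (all values invented, two-wire DC line, losses in the conductors only):

    ```python
    # Conductor mass vs operating voltage for a fixed power and loss budget.
    rho_al = 2.7e-8        # aluminium resistivity, ohm*m (illustrative choice)
    density = 2700.0       # aluminium density, kg/m^3
    P, L, loss_frac = 100e3, 1000.0, 0.02   # 100 kW over 1 km, 2% loss budget

    for V in (200.0, 1000.0, 5000.0):
        I = P / V
        R_max = loss_frac * P / I ** 2      # largest resistance meeting the budget
        area = rho_al * (2 * L) / R_max     # two-wire line: out-and-back length
        mass = density * area * (2 * L)
        print(f"V = {V:6.0f} V -> conductor mass ~ {mass:10.1f} kg")
    ```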

  15. Image processing methods in two and three dimensions used to animate remotely sensed data. [cloud cover

    NASA Technical Reports Server (NTRS)

    Hussey, K. J.; Hall, J. R.; Mortensen, R. A.

    1986-01-01

    Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.

  16. Analysis and design of a standardized control module for switching regulators

    NASA Astrophysics Data System (ADS)

    Lee, F. C.; Mahmoud, M. F.; Yu, Y.; Kolecki, J. C.

    1982-07-01

    Three basic switching regulators: buck, boost, and buck/boost, employing a multiloop standardized control module (SCM) were characterized by a common small signal block diagram. Employing the unified model, regulator performances such as stability, audiosusceptibility, output impedance, and step load transient are analyzed and key performance indexes are expressed in simple analytical forms. More importantly, the performance characteristics of all three regulators are shown to enjoy common properties due to the unique SCM control scheme which nullifies the positive zero and provides adaptive compensation to the moving poles of the boost and buck/boost converters. This allows a simple unified design procedure to be devised for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt.

  17. Speeding Up Microevolution: The Effects of Increasing Temperature on Selection and Genetic Variance in a Wild Bird Population

    PubMed Central

    Husby, Arild; Visser, Marcel E.; Kruuk, Loeske E. B.

    2011-01-01

    The amount of genetic variance underlying a phenotypic trait and the strength of selection acting on that trait are two key parameters that determine any evolutionary response to selection. Despite substantial evidence that, in natural populations, both parameters may vary across environmental conditions, very little is known about the extent to which they may covary in response to environmental heterogeneity. Here we show that, in a wild population of great tits (Parus major), the strength of the directional selection gradients on timing of breeding increased with increasing spring temperatures, and that genotype-by-environment interactions also predicted an increase in additive genetic variance, and heritability, of timing of breeding with increasing spring temperature. We therefore tested for an association between the annual selection gradients and the levels of additive genetic variance expressed each year; this association was positive but non-significant. However, there was a significant positive association between the annual selection differentials and the corresponding heritability. Such associations could potentially speed up the rate of micro-evolution and offer a largely ignored mechanism by which natural populations may adapt to environmental changes. PMID:21408101
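
    The two quantities the study covaries, selection strength and heritability, combine through the textbook breeder's equation; writing it out makes the proposed mechanism explicit (a standard quantitative-genetics identity, not a result of this paper):

    ```latex
    % Response to selection R, selection differential S, heritability h^2:
    R = h^2 S, \qquad h^2 = \frac{V_A}{V_P}
    % If warmer springs increase both S and h^2, the response
    % R(T) = h^2(T)\,S(T) grows faster than either factor alone.
    ```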

  18. A Bayesian Framework for Coupled Estimation of Key Unknown Parameters of Land Water and Energy Balance Equations

    NASA Astrophysics Data System (ADS)

    Farhadi, L.; Abdolghafoorian, A.

    2015-12-01

    The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which enters the energy balance as latent heat flux. Accurate estimation of the fluxes of heat and moisture is of significant importance in many fields, such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the evaporation flux. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e., moisture and temperature) with respect to observations and on the parameters with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function approximates the posterior uncertainty of the parameter estimates. The uncertainty of the estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the First Order Second Moment (FOSM) method. Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions at different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
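
    The FOSM step admits a minimal numerical sketch (my construction; the flux model g, the parameter values and the covariance are placeholders): propagate the posterior parameter covariance Σ through the flux model via cov(g) ≈ J Σ Jᵀ, with J the Jacobian at the estimate.

    ```python
    # Minimal FOSM uncertainty propagation with a finite-difference Jacobian.
    import numpy as np

    def g(theta):
        """Placeholder flux model mapping 2 parameters to 2 fluxes."""
        return np.array([theta[0] * np.sqrt(abs(theta[1])), theta[0] + theta[1]])

    theta_hat = np.array([2.0, 4.0])             # parameter estimates
    Sigma = np.array([[0.04, 0.01],              # posterior covariance,
                      [0.01, 0.09]])             # e.g. the inverse Hessian

    eps = 1e-6                                   # forward-difference Jacobian
    J = np.column_stack([(g(theta_hat + eps * np.eye(2)[i]) - g(theta_hat)) / eps
                         for i in range(2)])
    flux_cov = J @ Sigma @ J.T
    print("flux standard deviations:", np.sqrt(np.diag(flux_cov)))
    ```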

  19. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization.

    PubMed

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-03-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors' memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm.
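
    The winning combination, constriction coefficient plus ring topology, is compact enough to sketch. The version below is my own minimal implementation on a toy objective (a real localization objective would sum squared range errors to anchor nodes; the swarm sizes and iteration counts are illustrative):

    ```python
    # PSO with Clerc's constriction coefficient and a ring neighbourhood.
    import numpy as np

    def pso_ring(f, dim=2, n=20, iters=200, lo=-10.0, hi=10.0, seed=0):
        rng = np.random.default_rng(seed)
        c1 = c2 = 2.05
        phi = c1 + c2
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4 * phi))   # ~0.7298
        x = rng.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest, pval = x.copy(), np.array([f(p) for p in x])
        for _ in range(iters):
            # Ring topology: each particle's neighbourhood best comes from
            # itself and its two ring neighbours, not the global best.
            nbr = np.stack([np.roll(pval, 1), pval, np.roll(pval, -1)])
            choice = np.argmin(nbr, axis=0)            # 0: left, 1: self, 2: right
            lbest = pbest[(np.arange(n) + choice - 1) % n]
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (lbest - x))
            x = np.clip(x + v, lo, hi)
            val = np.array([f(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
        return pbest[pval.argmin()], pval.min()

    sphere = lambda p: float((p ** 2).sum())   # toy stand-in for the residual
    print(pso_ring(sphere))
    ```

    The ring neighbourhood slows information flow through the swarm, which is the usual explanation for its robustness against premature convergence relative to the global-best topology.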

  20. Parameter Selection and Performance Comparison of Particle Swarm Optimization in Sensor Networks Localization

    PubMed Central

    Cui, Huanqing; Shu, Minglei; Song, Min; Wang, Yinglong

    2017-01-01

    Localization is a key technology in wireless sensor networks. Faced with the challenges of the sensors’ memory, computational constraints, and limited energy, particle swarm optimization has been widely applied in the localization of wireless sensor networks, demonstrating better performance than other optimization methods. In particle swarm optimization-based localization algorithms, the variants and parameters should be chosen elaborately to achieve the best performance. However, there is a lack of guidance on how to choose these variants and parameters. Further, there is no comprehensive performance comparison among particle swarm optimization algorithms. The main contribution of this paper is three-fold. First, it surveys the popular particle swarm optimization variants and particle swarm optimization-based localization algorithms for wireless sensor networks. Secondly, it presents parameter selection of nine particle swarm optimization variants and six types of swarm topologies by extensive simulations. Thirdly, it comprehensively compares the performance of these algorithms. The results show that the particle swarm optimization with constriction coefficient using ring topology outperforms other variants and swarm topologies, and it performs better than the second-order cone programming algorithm. PMID:28257060

  1. Consumerism in prenatal diagnosis: a challenge for ethical guidelines

    PubMed Central

    Henn, W.

    2000-01-01

    The ethical guidelines for prenatal diagnosis proposed by the World Health Organisation (WHO), as well as by national regulations, only refer to paternity and gender of the fetus as unacceptable, disease-unrelated criteria for prenatal selection, as no other such parameters are at hand so far. This perspective is too narrow because research on complex genetic systems such as cognition and ageing is about to provide clinically applicable tests for genetic constituents of potentially desirable properties such as intelligence or longevity which could be misused as parameters for prenatal diagnosis. Moreover, there is an increasing number of prenatally testable genetic traits, such as heritable deafness, which are generally regarded as pathological but desired by some prospective parents and taken into account as parameters for pro-disability selection. To protect prenatal diagnosis from ethically unacceptable genetic consumerism, guidelines must be clarified as soon as possible and updated towards a worldwide restriction of prenatal genetic testing to immediately disease-determining traits. Key Words: Genetics • prenatal diagnosis • ethics • consumerism PMID:11129845

  2. International Solar-Terrestrial Program Key Parameter Visualization Tool Data: USA_NASA_DDF_ISTP_IM_KP_0161

    NASA Technical Reports Server (NTRS)

    Ocuna, M. H.; Ogilvie, K. W.; Baker, D. N.; Curtis, S. A.; Fairfield, D. H.; Mish, W. H.

    2000-01-01

    The Global Geospace Science Program (GGS) is designed to improve greatly the understanding of the flow of energy, mass and momentum in the solar-terrestrial environment with particular emphasis on "Geospace". The Global Geospace Science Program is the US contribution to the International Solar-Terrestrial Physics (ISTP) Science Initiative. This CD-ROM issue describes the WIND and POLAR spacecraft, the scientific experiments carried onboard, the Theoretical and Ground Based investigations which constitute the US Global Geospace Science Program and the ISTP Data Systems which support the data acquisition and analysis effort. The International Solar-Terrestrial Physics Program (ISTP) Key Parameter Visualization Tool (KPVT), provided on the CD-ROM, was developed at the ISTP Science Planning and Operations Facility (SPOF). The KPVT is a generic software package for visualizing the key parameter data produced from all ISTP missions, interactively and simultaneously. The tool is designed to facilitate correlative displays of ISTP data from multiple spacecraft and instruments, and thus the selection of candidate events and data quality control. The software, written in IDL, includes a graphical/widget user interface, and runs on many platforms, including various UNIX workstations, Alpha/Open VMS, Macintosh (680x0 and PowerPC), and PC/Windows NT, Windows 3.1, and Windows 95.

  3. International Solar-Terrestrial Program Key Parameter Visualization Tool Data: USA_NASA_DDF_ISTP_KP_0192

    NASA Technical Reports Server (NTRS)

    Ocuna, M. H.; Ogilvie, K. W.; Baker, D. N.; Curtis, S. A.; Fairfield, D. H.; Mish, W. H.

    2001-01-01

    The Global Geospace Science Program (GGS) is designed to improve greatly the understanding of the flow of energy, mass and momentum in the solar-terrestrial environment with particular emphasis on "Geospace". The Global Geospace Science Program is the US contribution to the International Solar-Terrestrial Physics (ISTP) Science Initiative. This CD-ROM issue describes the WIND and POLAR spacecraft, the scientific experiments carried onboard, the Theoretical and Ground Based investigations which constitute the US Global Geospace Science Program and the ISTP Data Systems which support the data acquisition and analysis effort. The International Solar-Terrestrial Physics Program (ISTP) Key Parameter Visualization Tool (KPVT), provided on the CD-ROM, was developed at the ISTP Science Planning and Operations Facility (SPOF). The KPVT is a generic software package for visualizing the key parameter data produced from all ISTP missions, interactively and simultaneously. The tool is designed to facilitate correlative displays of ISTP data from multiple spacecraft and instruments, and thus the selection of candidate events and data quality control. The software, written in IDL, includes a graphical/widget user interface, and runs on many platforms, including various UNIX workstations, Alpha/Open VMS, Macintosh (680x0 and PowerPC), and PC/Windows NT, Windows 3.1, and Windows 95.

  4. International Solar-Terrestrial Program Key Parameter Visualization Tool Data: USA_NASA_DDF_ISTP_KP_0139

    NASA Technical Reports Server (NTRS)

    Ocuna, M. H.; Ogilvie, K. W.; Baker, D. N.; Curtis, S. A.; Fairfield, D. H.; Mish, W. H.

    1999-01-01

    The Global Geospace Science Program (GGS) is designed to improve greatly the understanding of the flow of energy, mass and momentum in the solar-terrestrial environment with particular emphasis on "Geospace". The Global Geospace Science Program is the US contribution to the International Solar-Terrestrial Physics (ISTP) Science Initiative. This CD-ROM issue describes the WIND and POLAR spacecraft, the scientific experiments carried onboard, the Theoretical and Ground Based investigations which constitute the US Global Geospace Science Program and the ISTP Data Systems which support the data acquisition and analysis effort. The International Solar-Terrestrial Physics Program (ISTP) Key Parameter Visualization Tool (KPVT), provided on the CD-ROM, was developed at the ISTP Science Planning and Operations Facility (SPOF). The KPVT is a generic software package for visualizing the key parameter data produced from all ISTP missions, interactively and simultaneously. The tool is designed to facilitate correlative displays of ISTP data from multiple spacecraft and instruments, and thus the selection of candidate events and data quality control. The software, written in IDL, includes a graphical/widget user interface, and runs on many platforms, including various UNIX workstations, Alpha/Open VMS, Macintosh (680x0 and PowerPC), and PC/Windows NT, Windows 3.1, and Windows 95.

  5. International Solar-Terrestrial Program Key Parameter Visualization Tool Data: USA_NASA_DDF_ISTP_IM_KP_0185

    NASA Technical Reports Server (NTRS)

    Ocuna, M. H.; Ogilvie, K. W.; Baker, D. N.; Curtis, S. A.; Fairfield, D. H.; Mish, W. H.

    2000-01-01

    The Global Geospace Science Program (GGS) is designed to improve greatly the understanding of the flow of energy, mass and momentum in the solar-terrestrial environment with particular emphasis on "Geospace". The Global Geospace Science Program is the US contribution to the International Solar-Terrestrial Physics (ISTP) Science Initiative. This CD-ROM issue describes the WIND and POLAR spacecraft, the scientific experiments carried onboard, the Theoretical and Ground Based investigations which constitute the US Global Geospace Science Program and the ISTP Data Systems which support the data acquisition and analysis effort. The International Solar-Terrestrial Physics Program (ISTP) Key Parameter Visualization Tool (KPVT), provided on the CD-ROM, was developed at the ISTP Science Planning and Operations Facility (SPOF). The KPVT is a generic software package for visualizing the key parameter data produced from all ISTP missions, interactively and simultaneously. The tool is designed to facilitate correlative displays of ISTP data from multiple spacecraft and instruments, and thus the selection of candidate events and data quality control. The software, written in IDL, includes a graphical/widget user interface, and runs on many platforms, including various UNIX workstations, Alpha/Open VMS, Macintosh (680x0 and PowerPC), and PC/Windows NT, Windows 3.1, and Windows 95.

  6. At-line monitoring of key parameters of nisin fermentation by near infrared spectroscopy, chemometric modeling and model improvement.

    PubMed

    Guo, Wei-Liang; Du, Yi-Ping; Zhou, Yong-Can; Yang, Shuang; Lu, Jia-Hui; Zhao, Hong-Yu; Wang, Yao; Teng, Li-Rong

    2012-03-01

    An analytical procedure has been developed for at-line (fast off-line) monitoring of 4 key parameters, namely nisin titer (NT), the concentration of reducing sugars, cell concentration and pH, during the nisin fermentation process. The procedure is based on near infrared (NIR) spectroscopy and Partial Least Squares (PLS). Samples without any preprocessing were collected at intervals of 1 h during fifteen batch fermentations. These fermentations were carried out in 3 different 5 L fermentors under various conditions. NIR spectra of the samples were collected in 10 min. PLS was then used to model the relationship between the NIR spectra and the key parameters, which were determined by reference methods. Monte Carlo Partial Least Squares (MCPLS) was applied to identify outliers and to select the most efficacious spectral preprocessing methods, wavelengths and number of latent variables (n_LV). The optimum models for determining NT, concentration of reducing sugars, cell concentration and pH were then established, with calibration-set correlation coefficients (R_c) of 0.8255, 0.9000, 0.9883 and 0.9581, respectively. These results demonstrate that the method can be successfully applied to at-line monitoring of NT, concentration of reducing sugars, cell concentration and pH during nisin fermentation.
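
    The core calibration step is easy to sketch (synthetic spectra; sklearn's PLSRegression with a cross-validated choice of n_LV stands in for the MCPLS machinery, which additionally screens outliers and preprocessing):

    ```python
    # PLS calibration from NIR spectra to a key parameter (nisin titer),
    # with the number of latent variables chosen by cross-validation.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_samples, n_wavelengths = 90, 400
    X = rng.normal(size=(n_samples, n_wavelengths))           # NIR absorbances
    y = X[:, :5].sum(axis=1) + rng.normal(0, 0.2, n_samples)  # nisin titer proxy

    scores = {k: cross_val_score(PLSRegression(n_components=k), X, y,
                                 scoring="r2", cv=5).mean()
              for k in range(1, 9)}
    best = max(scores, key=scores.get)
    print(f"best n_LV = {best}, CV R^2 = {scores[best]:.3f}")
    ```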

  7. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    NASA Astrophysics Data System (ADS)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-06-01

    Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of ductile iron and the thermal cycle) on key features of the heat treatment, such as the minimum time required for austenitization and austempering and the microstructure of the final product. A computational simulation of the austempering heat treatment is reported, which accounts for coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being treated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by examining sensitivity indices and scatter plots; the sensitivity indices are determined using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline for selecting process-parameter values that yield parts with a required microstructural characteristic.
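
    Variance-based sensitivity indices of the kind used above can be approximated as in the following first-order (Sobol-style) sketch; the function austemper_time is an invented placeholder for the thermo-metallurgical simulation, not the authors' model.

        # First-order variance-based sensitivity index (illustrative model).
        import numpy as np

        rng = np.random.default_rng(1)

        def austemper_time(nodule_count, austemp_temp):
            # hypothetical response surface standing in for the simulation
            return 100.0 / nodule_count + 0.05 * (austemp_temp - 350.0) ** 2

        n = 20000
        x1 = rng.uniform(100, 600, n)   # nodule count, 1/mm^2 (assumed range)
        x2 = rng.uniform(260, 400, n)   # austempering temperature, deg C
        y = austemper_time(x1, x2)

        # First-order index of x1: variance of the conditional mean E[y | x1],
        # estimated by equal-probability binning, divided by total variance.
        bins = np.quantile(x1, np.linspace(0, 1, 51))
        idx = np.clip(np.digitize(x1, bins) - 1, 0, 49)
        cond_mean = np.array([y[idx == k].mean() for k in range(50)])
        s1 = cond_mean.var() / y.var()
        print(f"first-order sensitivity of nodule count: {s1:.2f}")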

  8. Autonomous Parameter Adjustment for SSVEP-Based BCIs with a Novel BCI Wizard.

    PubMed

    Gembler, Felix; Stawicki, Piotr; Volosyak, Ivan

    2015-01-01

    Brain-Computer Interfaces (BCIs) transfer human brain activities into computer commands and enable a communication channel without requiring movement. Among other BCI approaches, steady-state visual evoked potential (SSVEP)-based BCIs have the potential to become accurate, assistive technologies for persons with severe disabilities. Such systems require customization of several kinds of parameters (e.g., stimulation frequencies). Calibration usually requires selection of predefined parameters by experienced/trained personnel, though in real-life scenarios an interface allowing people with no programming experience to set up the BCI would be desirable. Another problem affecting BCI performance is BCI illiteracy (also called BCI deficiency): many articles have reported that BCI control could not be achieved by a non-negligible number of users. In order to bypass these problems, we developed an SSVEP-BCI wizard, a system that automatically determines user-dependent key parameters to customize SSVEP-based BCI systems. The wizard was tested and evaluated with 61 healthy subjects. All subjects were asked to spell the phrase "RHINE WAAL UNIVERSITY" with a spelling application after key parameters were determined by the wizard. Results show that all subjects were able to control the spelling application. A mean (SD) accuracy of 97.14 (3.73)% was reached; all subjects reached an accuracy above 85%, and 25 subjects even reached 100% accuracy.

  9. Cryptanalysis of SFLASH with Slightly Modified Parameters

    NASA Astrophysics Data System (ADS)

    Dubois, Vivien; Fouque, Pierre-Alain; Stern, Jacques

    SFLASH is a signature scheme which belongs to a family of multivariate schemes proposed by Patarin et al. in 1998 [9]. The SFLASH scheme itself was designed in 2001 [8] and was selected in 2003 by the NESSIE European Consortium [6] as the best known solution for implementation on low-cost smart cards. In this paper, we show that slight modifications of the parameters of SFLASH within the general family initially proposed render the scheme insecure. The attack uses simple linear algebra and allows an adversary to forge a signature for an arbitrary message in a matter of minutes for practical parameters, using only the public key. Although SFLASH itself is not amenable to our attack, it is worrying that no rationale was ever offered for this "lucky" choice of parameters.

  10. Controlling Continuous-Variable Quantum Key Distribution with Entanglement in the Middle Using Tunable Linear Optics Cloning Machines

    NASA Astrophysics Data System (ADS)

    Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying

    2017-02-01

    Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and to resist the effects of a malicious eavesdropper, we propose a controllable CVQKD protocol that applies a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that the optimal secret key rates can be achieved by selecting the parameters of the tuned LOCM in the derived regions.

  11. Design of Diaphragm and Coil for Stable Performance of an Eddy Current Type Pressure Sensor.

    PubMed

    Lee, Hyo Ryeol; Lee, Gil Seung; Kim, Hwa Young; Ahn, Jung Hwan

    2016-07-01

    The aim of this work was to develop an eddy current type pressure sensor and investigate how its fundamental characteristics are affected by the mechanical and electrical design parameters of the sensor. The sensor has two key components: the diaphragm and the coil. Given an outer sensor diameter of 10 mm, these two parts must be designed to maintain good linearity and sensitivity. Experiments showed that aluminum is the best target material for eddy current detection. A round-grooved diaphragm is suggested in order to measure more precisely its deflection under applied pressures; the design parameters of a round-grooved diaphragm can be selected depending on the measuring requirements. A developed pressure sensor with a diaphragm of t = 0.2 mm and w = 1.05 mm was verified to measure pressures up to 10 MPa with very good linearity and errors of less than 0.16%.

  12. Continuous-variable measurement-device-independent quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Xu, Bingjie; Yu, Song; Guo, Hong

    2018-04-01

    The method of improving the performance of continuous-variable quantum key distribution protocols by postselection has recently been proposed and verified. In continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) protocols, the measurement results are obtained from an untrusted third party, Charlie, and there has been no effective method of improving CV-MDI QKD by postselection with untrusted measurement. We propose a method to improve the performance of the coherent-state CV-MDI QKD protocol by virtual photon subtraction via non-Gaussian postselection. The non-Gaussian postselection of transmitted data is equivalent to an ideal photon subtraction on the two-mode squeezed vacuum state, which is favorable for enhancing the performance of CV-MDI QKD. In the CV-MDI QKD protocol with non-Gaussian postselection, the two users select their own data independently. We demonstrate that the optimal performance of the renovated CV-MDI QKD protocol is obtained when the transmitted data are selected only by Alice. By setting appropriate parameters of the virtual photon subtraction, the secret key rate and tolerable excess noise are both improved at long transmission distance. The method provides an effective optimization scheme for the application of CV-MDI QKD protocols.

  13. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    PubMed

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. Predicting the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and establishing its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, consisting of a linear model and a nonlinear model; the least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, and the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used for system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. Experimental results using real plant data and a comparison of the two approaches are discussed; the results show the effectiveness of the proposed approaches.
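
    To make the minimum-entropy idea concrete, the toy sketch below tunes a single model parameter by minimizing a kernel-density estimate of the modeling-error entropy; the data and the model form are hypothetical, not the plant model of the paper.

        # Minimum-entropy parameter selection on a toy linear model.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(2)
        x = rng.uniform(0, 1, 300)
        y = 2.0 * x + 0.1 * rng.standard_normal(300)   # "plant" data

        def error_entropy(theta):
            e = y - theta * x                          # modeling error
            kde = gaussian_kde(e)
            pts = np.linspace(e.min(), e.max(), 512)
            p = kde(pts)
            p /= np.trapz(p, pts)
            return -np.trapz(p * np.log(p + 1e-12), pts)  # differential entropy

        thetas = np.linspace(1.0, 3.0, 81)
        best = min(thetas, key=error_entropy)
        print(f"minimum-entropy estimate of theta: {best:.3f}")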

  14. A deliberative framework to identify the need for real-life evidence building of new cancer drugs after interim funding decision.

    PubMed

    Leung, Leanne; de Lemos, Mário L; Kovacic, Laurel

    2017-01-01

    Background With the rising cost of new oncology treatments, it is no longer sustainable to base initial drug funding decisions primarily on prospective clinical trials, as their performance in real-life populations is often difficult to determine. In British Columbia, an approach to evidence building is to retrospectively analyse patient outcomes using observational research on an ad hoc basis. Methods The deliberative framework was constructed in three stages: framework design, framework validation and treatment programme characterization, and key informant interviews. Framework design was informed by a literature review and analyses of provincial and national decision-making processes. Treatment programmes funded between 2010 and 2013 were used for framework validation; a selection concordance rate of 80% amongst three reviewers was considered to validate the framework. Key informant interviews were conducted to determine the utility of the deliberative framework. Results A multi-domain deliberative framework with 15 assessment parameters was developed. A selection concordance rate of 84.2% was achieved for content validation of the framework. Nine treatment programmes from five different tumour groups were selected for retrospective outcomes analysis, and five contributory factors to funding uncertainties were identified. Key informants agreed that the framework is a comprehensive tool that targets the key areas involved in the funding decision-making process. Conclusions The oncology-based deliberative framework can be routinely used to assess treatment programmes from the major tumour sites for retrospective outcomes analysis. Key informants indicate this is a value-added tool that will provide insight into the current prospective funding model.

  15. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data, which is challenging because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, suggesting that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed for model selection, highlighting the capability of the proposed method to choose a plausible model based on the experimental data. In conclusion, this paper demonstrates the effectiveness of the proposed method for parameter estimation and model selection using noisy and incomplete experimental data, and should provide new insight into developing more accurate and reliable biological models from limited, low-quality experimental data.
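
    The two tasks addressed above, parameter estimation from noisy data and AIC-based model selection, can be sketched as follows; a stock differential-evolution optimizer stands in for the authors' hybrid swarm method, and both candidate models are invented.

        # Fit two candidate models to noisy data and compare them by AIC.
        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 60)
        data = 3.0 * np.exp(-0.4 * t) + 0.05 * rng.standard_normal(t.size)

        models = {
            "one-exponential": lambda p: p[0] * np.exp(-p[1] * t),
            "with-offset":     lambda p: p[0] * np.exp(-p[1] * t) + p[2],
        }
        bounds = {"one-exponential": [(0, 10), (0, 2)],
                  "with-offset":     [(0, 10), (0, 2), (-1, 1)]}

        for name, f in models.items():
            sse = lambda p, f=f: np.sum((data - f(p)) ** 2)
            res = differential_evolution(sse, bounds[name], seed=0)
            k, n = len(bounds[name]), t.size
            aic = n * np.log(res.fun / n) + 2 * k   # Gaussian-error AIC
            print(f"{name}: AIC = {aic:.1f}")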

  16. The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot

    NASA Astrophysics Data System (ADS)

    Hwang, Donghyeok; Tahk, Min-Jea

    2018-04-01

    The autopilot must respond quickly to intercept a maneuvering target and must be reasonably robust to un-modeled dynamics and noise. In the conventional approach, the three-loop autopilot design is specified by time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. General optimal control theory can also be used to obtain the same gains as the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression relating the weight parameters appearing in the quadratic performance index to design parameters such as open-loop crossover frequency, phase margin, damping factor and time constant. Since not every choice of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to identify all sets of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, in which the time constant is the design objective and the open-loop crossover frequency and phase margin are design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
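
    The forward problem underlying this inverse analysis, computing an optimal feedback gain from a quadratic performance index, can be sketched as below; the double-integrator plant and the weights are illustrative placeholders, not the three-loop autopilot model.

        # LQR gain from a quadratic performance index (toy plant).
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy plant dynamics
        B = np.array([[0.0], [1.0]])
        Q = np.diag([10.0, 1.0])                  # state weights in the index
        R = np.array([[1.0]])                     # control weight

        P = solve_continuous_are(A, B, Q, R)      # Riccati solution
        K = np.linalg.solve(R, B.T @ P)           # optimal gain, u = -K x
        print("LQR gain:", K)

    Inverse optimal control asks the reverse question: given a gain chosen for time constant and phase margin, which weights (Q, R) make that gain optimal, and do such weights exist at all?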

  17. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    PubMed

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains against other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics to variations of selected parameters by extending the tree generation process with full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we present a case study on predicting critical power grid states and report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without a statistical background to identify suitable decision trees confidently and efficiently.
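
    A bare-bones sketch of the candidate-generation and Pareto-filtering idea follows: decision-tree hyperparameters are sampled, each tree is scored on accuracy and size, and only non-dominated trade-offs are kept. The dataset and parameter ranges are arbitrary placeholders, and nothing here reproduces TreePOD's visual interface.

        # Sample tree hyperparameters, keep the Pareto-optimal (accuracy, size) set.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=600, n_features=12, random_state=0)
        rng = np.random.default_rng(4)

        candidates = []
        for _ in range(60):
            params = {"max_depth": int(rng.integers(2, 12)),
                      "min_samples_leaf": int(rng.integers(1, 40))}
            tree = DecisionTreeClassifier(random_state=0, **params).fit(X, y)
            acc = cross_val_score(DecisionTreeClassifier(random_state=0, **params),
                                  X, y, cv=5).mean()
            candidates.append((acc, tree.get_n_leaves(), params))

        # Keep candidates not dominated in (higher accuracy, fewer leaves).
        pareto = [c for c in candidates
                  if not any(o[0] >= c[0] and o[1] < c[1] or
                             o[0] > c[0] and o[1] <= c[1] for o in candidates)]
        for acc, leaves, params in sorted(pareto):
            print(f"acc={acc:.3f} leaves={leaves} {params}")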

  18. Cross-validation pitfalls when selecting and assessing regression and classification models.

    PubMed

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
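
    The repeated grid-search cross-validation argued for above might look like the following in scikit-learn, where the grid search is averaged over several random V-fold splits so that parameter selection does not hinge on a single split; the data and the parameter grid are placeholders.

        # Repeated grid-search V-fold cross-validation for parameter tuning.
        from sklearn.datasets import make_regression
        from sklearn.model_selection import GridSearchCV, RepeatedKFold
        from sklearn.linear_model import Ridge

        X, y = make_regression(n_samples=200, n_features=30, noise=5.0,
                               random_state=0)

        cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
        search = GridSearchCV(Ridge(),
                              param_grid={"alpha": [0.01, 0.1, 1.0, 10.0, 100.0]},
                              cv=cv, scoring="neg_mean_squared_error")
        search.fit(X, y)
        print(search.best_params_)  # selected over 50 train/test splits

    Wrapping this search in an outer repeated K-fold loop gives the repeated nested cross-validation the authors define for model assessment.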

  19. Effect of cross-link density on carbon dioxide separation in polydimethylsiloxane-norbornene membranes

    DOE PAGES

    Hong, Tao; Niu, Zhenbin; Hu, Xunxiang; ...

    2015-10-20

    The development of high performance materials for CO2 separation and capture will significantly contribute to a solution for climate change. In this work, (bicycloheptenyl) ethyl terminated polydimethylsiloxane (PDMSPNB) membranes with varied cross-link densities were synthesized via ring-opening metathesis polymerization. The developed polymer membranes show higher permeability and better selectivity than those of conventional cross-linked PDMS membranes. The achieved performance (CO2 permeability ~6800 Barrer and CO2/N2 selectivity ~14) is very promising for practical applications. The key to achieving this high performance is the use of an in-situ cross-linking method of the difunctional PDMS macromonomers, which provides lightly cross-linked membranes. By combining positron annihilation lifetime spectroscopy, broadband dielectric spectroscopy and gas solubility measurements, we have elucidated the key parameters necessary for achieving their excellent performance.

  20. Rotary Wing Deceleration Use on Titan

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Steiner, Ted J.

    2011-01-01

    Rotary wing decelerator (RWD) systems were compared against other methods of atmospheric deceleration and were determined to show significant potential for application to a system requiring controlled descent, low-velocity landing, and atmospheric research capability on Titan. Design space exploration and down-selection resulted in a system with a single rotor utilizing cyclic pitch control. Models were developed for selection of an RWD descent system for use on Titan and to determine the relationships between the key design parameters of such a system and the time of descent. The possibility of extracting power from the system during descent was also investigated.

  1. Selection of key ambient particulate variables for epidemiological studies - applying cluster and heatmap analyses as tools for data reduction.

    PubMed

    Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef

    2012-10-01

    The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from the measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. Such a large number of measured variables obviously cannot be used in health effect analyses simultaneously, so the aim of this study is a pre-screening and selection of the key variables to be used as input in forthcoming epidemiological studies. We present two methods of parameter selection and apply them to data from the two-year period 2007-2008. First, we used the agglomerative hierarchical cluster method to find groups of similar variables; in total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. Second, we applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix, by which 12 key variables were selected. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize possible particle sources, and correlations between the variables and PMF factors were used to interpret the cluster and heatmap analyses.
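
    The variable-clustering step can be sketched roughly as follows: a Spearman correlation matrix over the particulate variables is converted to a distance, clustered hierarchically, and one representative is kept per cluster. The data frame pm of measured variables is hypothetical.

        # Cluster correlated variables and keep one representative per cluster.
        import numpy as np
        import pandas as pd
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(5)
        pm = pd.DataFrame(rng.normal(size=(500, 12)),
                          columns=[f"var{i}" for i in range(12)])

        rho = pm.corr(method="spearman").to_numpy()
        dist = 1.0 - np.abs(rho)            # similar variables -> small distance
        np.fill_diagonal(dist, 0.0)
        Z = linkage(squareform(dist, checks=False), method="average")
        labels = fcluster(Z, t=0.5, criterion="distance")

        for k in np.unique(labels):
            members = pm.columns[labels == k]
            print(f"cluster {k}: {list(members)} -> keep {members[0]}")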

  2. A hybrid genetic algorithm-extreme learning machine approach for accurate significant wave height reconstruction

    NASA Astrophysics Data System (ADS)

    Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.

    2015-08-01

    Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
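
    A stripped-down sketch of the wrapper idea is given below: a tiny genetic algorithm (selection and mutation only) searches binary masks over neighboring-buoy parameters, scoring each mask by cross-validated error; ridge regression stands in for the extreme learning machine, and all data are synthetic.

        # GA-style feature-subset selection for Hs reconstruction (toy data).
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X = rng.normal(size=(300, 20))   # wave parameters at nearby buoys
        y = X[:, [2, 7, 11]].sum(axis=1) + 0.1 * rng.standard_normal(300)  # Hs

        def fitness(mask):
            if not mask.any():
                return -np.inf
            return cross_val_score(Ridge(), X[:, mask], y, cv=3,
                                   scoring="neg_mean_squared_error").mean()

        pop = rng.integers(0, 2, size=(30, 20)).astype(bool)
        for _ in range(25):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]       # keep best masks
            children = parents[rng.integers(0, 10, 30)].copy()
            flip = rng.random(children.shape) < 0.05      # mutation
            pop = np.where(flip, ~children, children)

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected parameters:", np.flatnonzero(best))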

  3. Fast clustering using adaptive density peak detection.

    PubMed

    Wang, Xiao-Feng; Xu, Yifan

    2017-12-01

    Common limitations of clustering methods include slow algorithm convergence, instability with respect to the pre-specification of a number of intrinsic parameters, and lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities; however, the selection of the key intrinsic parameters in that algorithm was not systematically investigated, and it is relatively difficult to estimate the "optimal" parameters since the original definition of the local density is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameter can then be calculated from equations with statistical-theoretical justification. We also develop an automatic cluster centroid selection method that maximizes an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method performs in a single step without any iteration; it is therefore fast and has great potential for big data analysis. A user-friendly R package, ADPclust, has been developed for public use.
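
    The core density-and-delta computation might be sketched as below, with a kernel density estimate replacing the original truncated count: each point receives a local density and the distance to the nearest denser point, and cluster centers are points for which both are large. The two-blob data are synthetic, and the silhouette-based automatic centroid selection is omitted.

        # Density-peak detection with a nonparametric kernel density estimate.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(0, 0.3, (100, 2)),
                       rng.normal(3, 0.3, (100, 2))])

        density = gaussian_kde(X.T)(X.T)        # nonparametric local density
        d = np.linalg.norm(X[:, None] - X[None, :], axis=2)

        delta = np.empty(len(X))
        for i in range(len(X)):
            denser = density > density[i]
            delta[i] = d[i, denser].min() if denser.any() else d[i].max()

        score = density * delta                 # large for cluster centers
        centers = np.argsort(score)[-2:]
        print("center points:", centers, X[centers])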

  4. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties

    PubMed Central

    Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field due to the intrinsic properties of the material. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced using SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing. PMID:29707073

  5. SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Knowledge Advancement.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.

    2014-02-01

    This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.

  6. Selective laser melting of high-performance pure tungsten: parameter design, densification behavior and mechanical properties.

    PubMed

    Tan, Chaolin; Zhou, Kesong; Ma, Wenyou; Attard, Bonnie; Zhang, Panpan; Kuang, Tongchun

    2018-01-01

    Selective laser melting (SLM) additive manufacturing of pure tungsten encounters nearly all of the intractable difficulties of the SLM metals field due to the intrinsic properties of the material. The key factors for SLM of high-density tungsten, including powder characteristics, layer thickness and laser parameters, are elucidated and discussed in detail. The main parameters were designed from theoretical calculations prior to the SLM process and experimentally optimized. Pure tungsten products with a density of 19.01 g/cm3 (98.50% of theoretical density) were produced using SLM with the optimized processing parameters. A high-density microstructure is formed without significant balling or macrocracks. The formation mechanisms for pores and the densification behaviors are systematically elucidated. Electron backscattered diffraction analysis confirms that the columnar grains stretch across several layers, parallel to the maximum temperature gradient, which ensures good bonding between the layers. The mechanical properties of the SLM-produced tungsten are comparable to those produced by conventional fabrication methods, with hardness values exceeding 460 HV0.05 and an ultimate compressive strength of about 1 GPa. This finding offers new potential applications of refractory metals in additive manufacturing.

  7. Modelling the potential role of forest thinning in maintaining water supplies under a changing climate across the conterminous United States

    Treesearch

    Ge Sun; Peter V. Caldwell; Steven G. McNulty

    2015-01-01

    The goal of this study was to test the sensitivity of water yield to forest thinning and other forest management/disturbances and climate across the conterminous United States (CONUS). Leaf area index (LAI) was selected as a key parameter linking changes in forest ecosystem structure and functions. We used the Water Supply Stress Index model to examine water yield...

  8. Analysis of the economic impact of the national unified carbon trading market mechanism Hebei province, for example

    NASA Astrophysics Data System (ADS)

    Sun, Yuxing

    2018-05-01

    In this paper, a grey prediction model is used to predict carbon emissions in Hebei province, and an impact analysis model based on TermCo2 is established. We also survey the CGE literature and examine scenario construction, the selection of key parameters, and sensitivity analysis of the application scenarios, as a reference for industry.

  9. Absorption Coefficient of Alkaline Earth Halides.

    DTIC Science & Technology

    1980-04-01

    not observed at low energy levels, are developed at high power levels. No matter how low the absorption is, the effect is objectionable at high energy levels. As a natural consequence, the magnitude of the absorption coefficient is the key parameter in selecting laser window materials. Over the past... Presence of impurities can complicate the exponential tail, particularly at low absorption levels. The impurities may enter the lattice singly or...

  10. Evaluation of power block arrangements for 100MW scale concentrated solar thermal power generation using top-down design

    NASA Astrophysics Data System (ADS)

    Post, Alexander; Beath, Andrew; Sauret, Emilie; Persky, Rodney

    2017-06-01

    Concentrated solar thermal power generation poses a unique situation for power block selection, in which a capital-intensive heat source is subject to daily and seasonal fluctuations in intensity. In this study, a method is developed to easily evaluate, on the basis of several key parameters, how favourable different power blocks are for converting the heat supplied by a concentrated solar thermal plant into power at the 100 MWe scale. The method is then applied to a range of commercially available power cycles that operate at different temperatures and efficiencies and with differing capital costs, each with performance and economic parameters selected to be typical of their technology type as reported in the literature. Using this method, the power cycle most likely to yield the minimum levelised cost of energy for a solar thermal plant is identified among those examined.
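
    The kind of comparison metric implied above can be sketched as a levelised cost of energy computed from capital cost, efficiency and yearly heat supply; the formula is a textbook simplification, and every number below is an invented assumption, not a value from the study.

        # Simplified LCOE comparison of two hypothetical power blocks.
        def lcoe(capital_cost, o_and_m_per_year, heat_mwh_per_year,
                 efficiency, discount_rate=0.07, lifetime_years=25):
            # capital recovery factor annualizes the up-front cost
            crf = (discount_rate * (1 + discount_rate) ** lifetime_years /
                   ((1 + discount_rate) ** lifetime_years - 1))
            energy = heat_mwh_per_year * efficiency   # MWh_e per year
            return (capital_cost * crf + o_and_m_per_year) / energy

        # e.g. a cheaper, less efficient block vs. a dearer, more efficient one
        print(lcoe(2.0e8, 3.0e6, 8.0e5, 0.38))
        print(lcoe(2.6e8, 2.5e6, 8.0e5, 0.45))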

  11. Thermal design, rating and second law analysis of shell and tube condensers based on Taguchi optimization for waste heat recovery based thermal desalination plants

    NASA Astrophysics Data System (ADS)

    Chandrakanth, Balaji; Venkatesan, G; Prakash Kumar, L. S. S; Jalihal, Purnima; Iniyan, S

    2018-03-01

    The present work discusses the design and selection of a shell and tube condenser used in Low Temperature Thermal Desalination (LTTD). To optimize the key geometrical and process parameters of the condenser across multiple parameters and levels, a design-of-experiments approach using the Taguchi method was chosen, with an orthogonal array (OA) of 25 designs selected for this study. The condenser was designed and analysed using HTRI software, which was also used to compute the heat transfer area and the corresponding tube-side pressure drop, since these two objective functions determine the capital and running costs of the condenser. The analysis revealed a complex trade-off between heat transfer area and pressure drop; a second-law analysis was therefore carried out to determine the optimal heat transfer area versus pressure drop for condensing the required heat load.

  12. The size-reduced Eudragit® RS microparticles prepared by solvent evaporation method - monitoring the effect of selected variables on tested parameters.

    PubMed

    Vasileiou, Kalliopi; Vysloužil, Jakub; Pavelková, Miroslava; Vysloužil, Jan; Kubová, Kateřina

    2018-01-01

    Size-reduced microparticles were successfully obtained by the solvent evaporation method. Different parameters were applied to each sample and their influence on the microparticles was evaluated. The poorly soluble drug ibuprofen was selected as a model drug for encapsulation with Eudragit® RS. The obtained microparticles were inspected by optical microscopy and scanning electron microscopy. The effects of aqueous phase volume (600, 400, 200 ml) and of polyvinyl alcohol (PVA) concentration (1.0% and 0.1%) were studied, evaluating how these variations, as well as particle size, affect microparticle characteristics such as encapsulation efficiency, drug loading, burst effect and morphology. The sample prepared with 600 ml aqueous phase and 1% polyvinyl alcohol gave the most favorable results. Key words: microparticles, solvent evaporation, sustained drug release, Eudragit® RS.

  13. A risk-based approach to management of leachables utilizing statistical analysis of extractables.

    PubMed

    Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M

    2015-04-01

    To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle.

  14. Tracking slow modulations in synaptic gain using dynamic causal modelling: validation in epilepsy.

    PubMed

    Papadopoulou, Margarita; Leite, Marco; van Mierlo, Pieter; Vonck, Kristl; Lemieux, Louis; Friston, Karl; Marinazzo, Daniele

    2015-02-15

    In this work we propose a proof of principle that dynamic causal modelling can identify plausible mechanisms at the synaptic level underlying brain state changes over a timescale of seconds. As a benchmark example for validation we used intracranial electroencephalographic signals in a human subject. These data were used to infer the (effective connectivity) architecture of synaptic connections among neural populations assumed to generate seizure activity. Dynamic causal modelling allowed us to quantify empirical changes in spectral activity in terms of a trajectory in parameter space - identifying key synaptic parameters or connections that cause observed signals. Using recordings from three seizures in one patient, we considered a network of two sources (within and just outside the putative ictal zone). Bayesian model selection was used to identify the intrinsic (within-source) and extrinsic (between-source) connectivity. Having established the underlying architecture, we were able to track the evolution of key connectivity parameters (e.g., inhibitory connections to superficial pyramidal cells) and test specific hypotheses about the synaptic mechanisms involved in ictogenesis. Our key finding was that intrinsic synaptic changes were sufficient to explain seizure onset, where these changes showed dissociable time courses over several seconds. Crucially, these changes spoke to an increase in the sensitivity of principal cells to intrinsic inhibitory afferents and a transient loss of excitatory-inhibitory balance.

  15. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
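
    A toy sketch of the branching idea follows: each round spawns several candidates per surviving branch inside a shrinking trust region, accepts some worse moves according to a temperature, and keeps the best branches. The Rastrigin test objective and every schedule constant are arbitrary choices, not values from the NASA implementation, and the optional gradient-search refinement is omitted.

        # Branching simulated annealing on a multimodal test function.
        import numpy as np

        rng = np.random.default_rng(8)

        def objective(x):                  # Rastrigin: many local minima
            return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

        branches = [rng.uniform(-5, 5, 2) for _ in range(4)]
        radius, temp = 5.0, 10.0

        for _ in range(60):
            candidates = []
            for b in branches:
                for _ in range(5):         # spawn children in the trust region
                    c = np.clip(b + rng.uniform(-radius, radius, 2), -5.12, 5.12)
                    worse_by = objective(c) - objective(b)
                    # Metropolis-style acceptance of occasionally worse children
                    if worse_by <= 0 or rng.random() < np.exp(-worse_by / temp):
                        candidates.append(c)
            candidates.extend(branches)
            candidates.sort(key=objective)
            branches = candidates[:4]      # keep the best branches
            radius *= 0.93                 # shrink the search region
            temp *= 0.93                   # cool

        print("best found:", branches[0], objective(branches[0]))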

  16. Starting Block Performance in Sprinters: A Statistical Method for Identifying Discriminative Parameters of the Performance and an Analysis of the Effect of Providing Feedback over a 6-Week Period

    PubMed Central

    Fortier, Sylvie; Basset, Fabien A.; Mbourou, Ginette A.; Favérial, Jérôme; Teasdale, Normand

    2005-01-01

    The purpose of this study was twofold: (a) to examine if kinetic and kinematic parameters of the sprint start could differentiate elite from sub-elite sprinters and, (b) to investigate whether providing feedback (FB) about selected parameters could improve the starting block performance of intermediate sprinters over a 6-week training period. Twelve male sprinters, assigned to an elite or a sub-elite group, participated in Experiment 1. Eight intermediate sprinters participated in Experiment 2. All athletes were required to perform three sprint starts at maximum intensity followed by a 10-m run. To detect differences between the elite and sub-elite groups, comparisons were made using t-tests for independent samples. Parameters reaching a significant group difference were retained for the linear discriminant analysis (LDA). The LDA yielded four discriminative kinetic parameters. Feedback about these selected parameters was given to sprinters in Experiment 2. For this experiment, data acquisition was divided into three periods: the first six sessions were without specific FB, whereas the following six sessions were enriched by kinetic FB. Finally, athletes underwent a retention session (without FB) 4 weeks after the twelfth session. Even though differences were found in the time to front peak force, the time to rear peak force, and the front peak force in the retention session, the results of the present study showed that providing FB about selected kinetic parameters differentiating elite from sub-elite sprinters did not improve the starting block performance of intermediate sprinters. Key Points: The linear discriminant analysis allows the identification of starting block parameters differentiating elite from sub-elite athletes. Six weeks of feedback did not alter starting block performance in a training context. The present results failed to confirm previous studies, since feedback did not improve targeted kinetic parameters of the complex motor task in a real-world context. PMID:24431969

  17. Starting Block Performance in Sprinters: A Statistical Method for Identifying Discriminative Parameters of the Performance and an Analysis of the Effect of Providing Feedback over a 6-Week Period.

    PubMed

    Fortier, Sylvie; Basset, Fabien A; Mbourou, Ginette A; Favérial, Jérôme; Teasdale, Normand

    2005-06-01

    (a) to examine if kinetic and kinematic parameters of the sprint start could differentiate elite from sub-elite sprinters and, (b) to investigate whether providing feedback (FB) about selected parameters could improve the starting block performance of intermediate sprinters over a 6-week training period. Twelve male sprinters, assigned to an elite or a sub-elite group, participated in Experiment 1. Eight intermediate sprinters participated in Experiment 2. All athletes were required to perform three sprint starts at maximum intensity followed by a 10-m run. To detect differences between the elite and sub-elite groups, comparisons were made using t-tests for independent samples. Parameters reaching a significant group difference were retained for the linear discriminant analysis (LDA). The LDA yielded four discriminative kinetic parameters. Feedback about these selected parameters was given to sprinters in Experiment 2. For this experiment, data acquisition was divided into three periods: the first six sessions were without specific FB, whereas the following six sessions were enriched by kinetic FB. Finally, athletes underwent a retention session (without FB) 4 weeks after the twelfth session. Even though differences were found in the time to front peak force, the time to rear peak force, and the front peak force in the retention session, the results of the present study showed that providing FB about selected kinetic parameters differentiating elite from sub-elite sprinters did not improve the starting block performance of intermediate sprinters. Key Points: The linear discriminant analysis allows the identification of starting block parameters differentiating elite from sub-elite athletes. Six weeks of feedback did not alter starting block performance in a training context. The present results failed to confirm previous studies, since feedback did not improve targeted kinetic parameters of the complex motor task in a real-world context.

  18. Analysis of Design Parameters Effects on Vibration Characteristics of Fluidlastic Isolators

    NASA Astrophysics Data System (ADS)

    Deng, Jing-hui; Cheng, Qi-you

    2017-07-01

    The control of vibration in helicopters, i.e., reducing vibration levels below an acceptable limit, is one of the key problems. Fluidlastic isolators are increasingly widely used because their fluids are non-toxic, non-corrosive, nonflammable, and compatible with most elastomers and adhesives. In fluidlastic isolator design, the selection of design parameters is critical to achieving effective vibration suppression. To determine the effect of design parameters on isolator performance, a dynamic equation is established based on the theory of dynamics and a dynamic analysis is carried out, in which the influence of each design parameter on isolator behavior is calculated. The results show that the fluidlastic isolator can reduce vibration effectively, and that design parameters such as fluid density, viscosity coefficient, stiffness (K1 and K2) and loss coefficient have an obvious influence on isolator performance. Effective vibration suppression can therefore be obtained by design optimization of these parameters.

  19. Chaotic Dynamics of Linguistic-Like Processes at the Syntactical and Semantic Levels: in the Pursuit of a Multifractal Attractor

    NASA Astrophysics Data System (ADS)

    Nicolis, John S.; Katsikas, Anastassis A.

    Collective parameters such as Zipf's-law-like statistics, the transinformation, the block entropy and the Markovian character are compared for natural, genetic, musical and artificially generated long texts from generating partitions (alphabets) on homogeneous as well as multifractal chaotic maps. It appears that minimal requirements for a language at the syntactical level, such as memory, selectivity of a few keywords and broken symmetry in one dimension (polarity), are more or less met by dynamically iterating simple maps or flows, e.g. very simple chaotic hardware. The same selectivity is observed at the semantic level, where the aim is to partition a set of impinging environmental stimuli onto coexisting attractor-categories. Under the regime of pattern recognition and classification, a few key features of a pattern, or a few categories, claim the lion's share of the information stored in that pattern, and practically only these key features are persistently scanned by the cognitive processor. A multifractal attractor model can in principle explain this high selectivity, both at the syntactical and the semantic levels.
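
    One of the collective parameters named above, the block entropy, is straightforward to compute for a symbol sequence; the sketch below generates symbols from the logistic map with a two-letter partition at 0.5, an illustrative choice of generating partition rather than one taken from the paper.

        # Block entropy of a symbol sequence from a chaotic map.
        import numpy as np
        from collections import Counter

        x, n = 0.4, 20000
        symbols = []
        for _ in range(n):
            x = 4.0 * x * (1.0 - x)          # logistic map at r = 4
            symbols.append("1" if x > 0.5 else "0")
        text = "".join(symbols)

        def block_entropy(s, k):
            counts = Counter(s[i:i + k] for i in range(len(s) - k + 1))
            total = sum(counts.values())
            p = np.array([c / total for c in counts.values()])
            return float(-(p * np.log2(p)).sum())

        for k in range(1, 6):
            print(k, round(block_entropy(text, k), 3))   # ~k bits for this map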

  20. The STAR Data Reporting Guidelines for Clinical High Altitude Research.

    PubMed

    Brodmann Maeder, Monika; Brugger, Hermann; Pun, Matiram; Strapazzon, Giacomo; Dal Cappello, Tomas; Maggiorini, Marco; Hackett, Peter; Bärtsch, Peter; Swenson, Erik R; Zafren, Ken

    2018-03-01

    Brodmann Maeder, Monika, Hermann Brugger, Matiram Pun, Giacomo Strapazzon, Tomas Dal Cappello, Marco Maggiorini, Peter Hackett, Peter Bärtsch, Erik R. Swenson, Ken Zafren (STAR Core Group), and the STAR Delphi Expert Group. The STAR data reporting guidelines for clinical high altitude research. High Alt Med Biol. 19:7-14, 2018. The goal of the STAR (STrengthening Altitude Research) initiative was to produce a uniform set of key elements for research and reporting in clinical high-altitude (HA) medicine. The STAR initiative was inspired by research on treatment of cardiac arrest, in which the establishment of the Utstein Style, a uniform data reporting protocol, substantially contributed to improving data reporting and subsequently the quality of scientific evidence. The STAR core group used the Delphi method, in which a group of experts reaches a consensus over multiple rounds using a formal method. We selected experts in the field of clinical HA medicine based on their scientific credentials and identified an initial set of parameters for evaluation by the experts. Of 51 experts in HA research who were identified initially, 21 experts completed both rounds. The experts identified 42 key parameters in 5 categories (setting, individual factors, acute mountain sickness and HA cerebral edema, HA pulmonary edema, and treatment) that were considered essential for research and reporting in clinical HA research. An additional 47 supplemental parameters were identified that should be reported depending on the nature of the research. The STAR initiative, using the Delphi method, identified a set of key parameters essential for research and reporting in clinical HA medicine.

  1. Complex Dynamics of Droplet Traffic in a Bifurcating Microfluidic Channel: Periodicity, Multistability, and Selection Rules

    NASA Astrophysics Data System (ADS)

    Sessoms, D. A.; Amon, A.; Courbin, L.; Panizza, P.

    2010-10-01

    The binary path selection of droplets reaching a T junction is regulated by time-delayed feedback and nonlinear couplings. Such mechanisms result in complex dynamics of droplet partitioning: numerous discrete bifurcations between periodic regimes are observed. We introduce a model based on an approximation that makes this problem tractable. This allows us to derive analytical formulae that predict the occurrence of the bifurcations between consecutive regimes, establish selection rules for the period of a regime, and describe the evolution of the period and complexity of the droplet pattern in a cycle with the key parameters of the system. We discuss the validity and limitations of our model, which describes semiquantitatively both numerical simulations and microfluidic experiments.

  2. Earthquake ground motion: Chapter 3

    USGS Publications Warehouse

    Luco, Nicolas; Kircher, Charles A.; Crouse, C. B.; Charney, Finley; Haselton, Curt B.; Baker, Jack W.; Zimmerman, Reid; Hooper, John D.; McVitty, William; Taylor, Andy

    2016-01-01

    Most of the effort in seismic design of buildings and other structures is focused on structural design. This chapter addresses another key aspect of the design process—characterization of earthquake ground motion into parameters for use in design. Section 3.1 describes the basis of the earthquake ground motion maps in the Provisions and in ASCE 7 (the Standard). Section 3.2 has examples for the determination of ground motion parameters and spectra for use in design. Section 3.3 describes site-specific ground motion requirements and provides example site-specific design and MCER response spectra and example values of site-specific ground motion parameters. Section 3.4 discusses and provides an example for the selection and scaling of ground motion records for use in various types of response history analysis permitted in the Standard.

  3. Steric parameters, molecular modeling and hydropathic interaction analysis of the pharmacology of para-substituted methcathinone analogues

    PubMed Central

    Sakloth, F; Kolanos, R; Mosier, P D; Bonano, J S; Banks, M L; Partilla, J S; Baumann, M H; Negus, S S; Glennon, R A

    2015-01-01

    Background and Purpose There is growing concern over the abuse of certain psychostimulant methcathinone (MCAT) analogues. This study extends an initial quantitative structure–activity relationship (QSAR) investigation that demonstrated important steric considerations of seven 4- (or para-)substituted analogues of MCAT. Specifically, the steric character (Taft's steric parameter ES) of the 4-position substituent affected in vitro potency to induce monoamine release via dopamine and 5-HT transporters (DAT and SERT) and in vivo modulation of intracranial self-stimulation (ICSS). Here, we have assessed the effects of other steric properties of the 4-position substituents. Experimental Approach Definitive steric parameters that more explicitly focus on the volume, width and length of the MCAT 4-position substituents were assessed. In addition, homology models of human DAT and human SERT based upon the crystallized Drosophila DAT were constructed and docking studies were performed, followed by hydropathic interaction (HINT) analysis of the docking results. Key Results The potency of seven MCAT analogues at DAT was negatively correlated with the volume and maximal width of their 4-position substituents, whereas potency at SERT increased as substituent volume and length increased. SERT/DAT selectivity, as well as abuse-related drug effects in the ICSS procedure, also correlated with the same parameters. Docking solutions offered a means of visualizing these findings. Conclusions and Implications These results suggest that steric aspects of the 4-position substituents of MCAT analogues are key determinants of their action and selectivity, and that the hydrophobic nature of these substituents is involved in their potency at SERT. PMID:25522019

  4. Design for Natural Breast Augmentation: The ICE Principle.

    PubMed

    Mallucci, Patrick; Branford, Olivier Alexandre

    2016-06-01

    The authors' published studies have helped define breast beauty by outlining key parameters that contribute to breast attractiveness. The "ICE" principle puts this design into practice. It is a simplified formula for inframammary fold incision planning, as part of the process of determining implant selection and placement, to reproduce the 45:55 ratio previously described as fundamental to natural breast appearance. The formula is as follows: implant dimensions (I) - capacity of the breast (C) = excess tissue required (E). The aim of this study was to test the accuracy of the ICE principle for producing consistently natural, beautiful results in breast augmentation. A prospective analysis of 50 consecutive women undergoing primary breast augmentation by means of an inframammary fold incision with anatomical or round implants was performed. The ICE principle was applied to all cases to determine implant selection, placement, and incision position. Changes in parameters between preoperative and postoperative digital clinical photographs were analyzed. The mean upper pole-to-lower pole ratio changed from 52:48 preoperatively to 45:55 postoperatively (p < 0.0001). Mean nipple angulation was also statistically significantly elevated, from 11 degrees to 19 degrees skyward (p ≤ 0.0005). Accuracy of incision placement in the fold was 99.7 percent on the right and 99.6 percent on the left, with a standard error of only 0.2 percent. There was a reduction in variability for all key parameters. The authors have shown that, by using the simple ICE principle for surgical planning in breast augmentation, attractive natural breasts may be achieved consistently and with precision. Therapeutic, IV.
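
    Read as a single subtraction, the planning formula can be made concrete with hypothetical numbers (illustrative only, not a case from the study):

    ```latex
    E = I - C, \qquad \text{e.g. } I = 13\ \text{cm},\; C = 11\ \text{cm}
    \;\Rightarrow\; E = 2\ \text{cm}
    ```

    i.e. roughly 2 cm of excess tissue would need to be recruited, which in turn informs the position of the inframammary fold incision.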

  5. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, James C.

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, while still allowing the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful addition to the metabolic engineering toolkit, and that they can yield actionable insights. Key concepts are developed, and deliverable publications and results are presented.

  6. Steps to consider for effective decision making when selecting and prioritizing eHealth services.

    PubMed

    Vimarlund, Vivian; Davoody, Nadia; Koch, Sabine

    2013-01-01

    Making the best choice for an organization when selecting IT applications or eHealth services is not always easy as there are a lot of parameters to take into account. The aim of this paper is to explore some steps to support effective decision making when selecting and prioritizing eHealth services prior to implementation and/or procurement. The steps presented in this paper were identified by interviewing nine key stakeholders at Stockholm County Council. They are supposed to work as a guide for decision making and aim to identify objectives and expected effects, technical, organizational, and economic requirements, and opportunities important to consider before decisions are taken. The steps and their respective issues and variables are concretized in a number of templates to be filled in by decision makers when selecting and prioritizing eHealth services.

  7. Immobilized magnetic beads-based multi-target affinity selection coupled with HPLC-MS for screening active compounds from traditional Chinese medicine and natural products.

    PubMed

    Chen, Yaqi; Chen, Zhui; Wang, Yi

    2015-01-01

    Screening and identifying active compounds from traditional Chinese medicine (TCM) and other natural products plays an important role in drug discovery. Here, we describe a magnetic beads-based multi-target affinity selection-mass spectrometry approach for screening bioactive compounds from natural products. Key steps and parameters including activation of magnetic beads, enzyme/protein immobilization, characterization of functional magnetic beads, screening and identifying active compounds from a complex mixture by LC/MS, are illustrated. The proposed approach is rapid and efficient in screening and identification of bioactive compounds from complex natural products.

  8. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
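
    For reference, the Jeffreys rule that C-MaxEnt recovers assigns a prior proportional to the square root of the Fisher information; for a one-dimensional parameter,

    ```latex
    p(\theta) \propto \sqrt{I(\theta)}, \qquad
    I(\theta) = \mathbb{E}\left[\left(\frac{\partial \ln L(x \mid \theta)}{\partial \theta}\right)^{2}\right],
    ```

    so that, for example, a pure scale parameter σ receives the familiar prior p(σ) ∝ 1/σ.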

  9. On Nb Silicide Based Alloys: Alloy Design and Selection.

    PubMed

    Tsakiropoulos, Panos

    2018-05-18

    The development of Nb-silicide based alloys is frustrated by the lack of composition-process-microstructure-property data for the new alloys, and by the shortage of and/or disagreement between thermodynamic data for key binary and ternary systems that are essential for designing (selecting) alloys to meet property goals. Recent publications have discussed the importance of the parameters δ (related to atomic size), Δχ (related to electronegativity) and valence electron concentration (VEC) (number of valence electrons per atom filled into the valence band) for the alloying behavior of Nb-silicide based alloys (J Alloys Compd 748 (2018) 569), their solid solutions (J Alloys Compd 708 (2017) 961), the tetragonal Nb₅Si₃ (Materials 11 (2018) 69), and hexagonal C14-NbCr₂ and cubic A15-Nb₃X phases (Materials 11 (2018) 395) and eutectics with Nb ss and Nb₅Si₃ (Materials 11 (2018) 592). The parameter values were calculated using actual compositions for alloys, their phases and eutectics. This paper is about the relationships that exist between the alloy parameters δ, Δχ and VEC, and creep rate and isothermal oxidation (weight gain) and the concentrations of solute elements in the alloys. Different approaches to alloy design (selection) that use property goals and these relationships for Nb-silicide based alloys are discussed and examples of selected alloy compositions and their predicted properties are given. The alloy design methodology, which has been called NICE (Niobium Intermetallic Composite Elaboration), enables one to design (select) new alloys and to predict their creep and oxidation properties and the macrosegregation of Si in cast alloys.
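
    For orientation, these three parameters have standard definitions in the alloy-design literature (the paper should be consulted for the exact conventions used):

    ```latex
    \delta = \sqrt{\sum_i c_i \left(1 - r_i/\bar{r}\right)^2}, \qquad
    \Delta\chi = \sqrt{\sum_i c_i \left(\chi_i - \bar{\chi}\right)^2}, \qquad
    \mathrm{VEC} = \sum_i c_i \,(\mathrm{VEC})_i,
    ```

    where c_i, r_i and χ_i are the atomic fraction, atomic radius and electronegativity of element i, and r̄ and χ̄ are the concentration-weighted means.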

  10. On Nb Silicide Based Alloys: Alloy Design and Selection

    PubMed Central

    Tsakiropoulos, Panos

    2018-01-01

    The development of Nb-silicide based alloys is frustrated by the lack of composition-process-microstructure-property data for the new alloys, and by the shortage of and/or disagreement between thermodynamic data for key binary and ternary systems that are essential for designing (selecting) alloys to meet property goals. Recent publications have discussed the importance of the parameters δ (related to atomic size), Δχ (related to electronegativity) and valence electron concentration (VEC) (number of valence electrons per atom filled into the valence band) for the alloying behavior of Nb-silicide based alloys (J Alloys Compd 748 (2018) 569), their solid solutions (J Alloys Compd 708 (2017) 961), the tetragonal Nb5Si3 (Materials 11 (2018) 69), and hexagonal C14-NbCr2 and cubic A15-Nb3X phases (Materials 11 (2018) 395) and eutectics with Nbss and Nb5Si3 (Materials 11 (2018) 592). The parameter values were calculated using actual compositions for alloys, their phases and eutectics. This paper is about the relationships that exist between the alloy parameters δ, Δχ and VEC, and creep rate and isothermal oxidation (weight gain) and the concentrations of solute elements in the alloys. Different approaches to alloy design (selection) that use property goals and these relationships for Nb-silicide based alloys are discussed and examples of selected alloy compositions and their predicted properties are given. The alloy design methodology, which has been called NICE (Niobium Intermetallic Composite Elaboration), enables one to design (select) new alloys and to predict their creep and oxidation properties and the macrosegregation of Si in cast alloys. PMID:29783707

  11. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support the model receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since the evidence is not particularly easy to estimate in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, coined Gaussian Mixture Importance Sampling (GMIS), which provides robust and unbiased estimates of the marginal likelihood. GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
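
    The idea behind such estimators can be illustrated with plain importance sampling from a proposal fitted to posterior samples. The sketch below is a conceptual stand-in for GMIS, not an implementation of it: it uses a single Gaussian instead of a mixture, a toy one-dimensional model whose evidence is known exactly, and synthetic posterior samples in place of DREAM output.

    ```python
    # Toy evidence (marginal likelihood) estimate by importance sampling.
    # Model: x ~ N(theta, 1), prior theta ~ N(0, 1); the exact evidence is
    # N(x_obs; 0, sqrt(2)), which lets us check the estimator.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x_obs = 0.8

    def log_post_unnorm(theta):
        return stats.norm.logpdf(x_obs, theta, 1) + stats.norm.logpdf(theta, 0, 1)

    # Stand-in for DREAM output: samples from the (here, known) posterior.
    post = rng.normal(x_obs / 2, np.sqrt(0.5), size=4000)

    # Fit the proposal to the posterior samples (Gaussian, not a mixture).
    mu, sd = post.mean(), post.std()
    theta = rng.normal(mu, sd, size=20000)
    log_w = log_post_unnorm(theta) - stats.norm.logpdf(theta, mu, sd)
    evidence = np.exp(log_w).mean()

    print(evidence, stats.norm.pdf(x_obs, 0, np.sqrt(2)))  # estimate vs exact
    ```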

  12. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that, it is important to identify the model parameters that can change spatial patterns before performing satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET); second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison with the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function that employs remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow improves only the streamflow simulations and does not reduce the spatial errors in AET. We will further examine the results of model calibration using only multiple spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.

  13. Development of Bread Board Model of TRMM precipitation radar

    NASA Astrophysics Data System (ADS)

    Okamoto, Ken'ichi; Ihara, Toshio; Kumagai, Hiroshi

    The active array radar was selected as a reliable candidate for the TRMM (Tropical Rainfall Measuring Mission) precipitation radar after trade-off studies performed by the Communications Research Laboratory (CRL) in the US-Japan joint feasibility study of TRMM in 1987-1988. The main system parameters and a block diagram of the TRMM precipitation radar are shown as results of the feasibility study. In 1988-1990, CRL developed key devices for the active array precipitation radar, such as an 8-element slotted waveguide array antenna, 5-bit PIN diode phase shifters, solid state power amplifiers and low noise amplifiers. These key devices were integrated to compose an 8-element breadboard model of the TRMM precipitation radar.

  14. Analytical template protection performance and maximum key size given a Gaussian-modeled biometric source

    NASA Astrophysics Data System (ADS)

    Kelkboom, Emile J. C.; Breebaart, Jeroen; Buhan, Ileana; Veldhuis, Raymond N. J.

    2010-04-01

    Template protection techniques are used within biometric systems in order to protect the stored biometric template against privacy and security threats. A large portion of template protection techniques is based on extracting a key from, or binding a key to, a biometric sample. The achieved protection depends on the size of the key and its closeness to being random. In the literature it can be observed that there is large variation in the reported key lengths at similar classification performance of the same template protection system, even when based on the same biometric modality and database. In this work we determine the analytical relationship between the system performance and the theoretical maximum key size given a biometric source modeled by parallel Gaussian channels. We consider the case where the source capacity is evenly distributed across all channels and the channels are independent. We also determine the effect of parameters such as the source capacity, the number of enrolment and verification samples, and the operating point selection on the maximum key size. We show that a trade-off exists between the privacy protection of the biometric system and its convenience for its users.
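
    For orientation, the capacity-style bound implied by the stated model (N independent Gaussian channels with the source capacity evenly distributed) takes the standard information-theoretic form below; the paper's exact expression additionally depends on the number of enrolment and verification samples and the operating point.

    ```latex
    k_{\max} \le \sum_{i=1}^{N} \tfrac{1}{2} \log_2\left(1 + \mathrm{SNR}_i\right)
    = \tfrac{N}{2} \log_2\left(1 + \mathrm{SNR}\right) \ \text{bits}
    ```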

  15. Evaluation of GCMs in the context of regional predictive climate impact studies.

    NASA Astrophysics Data System (ADS)

    Kokorev, Vasily; Anisimov, Oleg

    2016-04-01

    Significant improvements in the structure, complexity, and general performance of earth system models (ESMs) have been made in the recent decade. Despite these efforts, the range of uncertainty in predicting regional climate impacts remains large. The problem is two-fold. Firstly, there is an intrinsic conflict between the local and regional scales of climate impacts and adaptation strategies, on one hand, and the larger scales at which ESMs demonstrate better performance, on the other. Secondly, there is a growing understanding that the majority of impacts involve thresholds and are thus driven by extreme climate events, whereas the emphasis in climate projections is conventionally placed on gradual changes in means. In this study we assess the uncertainty in projecting extreme climatic events within a region-specific and process-oriented context by examining the skills and ranking of ESMs. We developed a synthetic regionalization of Northern Eurasia that accounts for the spatial features of modern climatic changes and major environmental and socio-economic impacts. Elements of such fragmentation can be considered natural focus regions that bridge the gap between the spatial scales adopted in climate-impact studies and the patterns of climate change simulated by ESMs. In each focus region we selected several target meteorological variables that govern the key regional impacts, and examined the ability of the models to replicate their seasonal and annual means and trends by testing them against observations. We performed a similar evaluation with regard to extremes and statistics of the target variables. Lastly, we used the results of these analyses to select sets of models that demonstrate the best performance in selected focus regions with regard to selected sets of target meteorological parameters. Ultimately, we ranked the models according to their skills, identified top-end models that reproduce the behavior of climatic parameters better than average, and eliminated the outliers. Since the criteria for selecting the "best" models are somewhat loose, we constructed several regional ensembles consisting of different numbers of high-ranked models and compared results from these optimized ensembles with observations and with the ensemble of all models. We tested our approach in a specific regional application to the terrestrial Russian Arctic, considering permafrost and Arctic biomes as key regional climate-dependent systems, and the temperature and precipitation characteristics governing their state as target meteorological parameters. Results of this case study are deposited on the web portal www.permafrost.su/gcms

  16. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.
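
    As a point of reference for the architecture being improved, a permutation-diffusion cipher derives its key stream from iterates of a chaotic map. The sketch below is the generic fixed-parameter baseline criticized above, not the proposed dynamic selection scheme:

    ```python
    # Generic logistic-map keystream for a permutation-diffusion cipher.
    # This is the fixed-parameter baseline discussed above, not the
    # dynamic state-variable selection scheme itself.
    def keystream(x0, r, n, burn_in=200):
        x = x0
        for _ in range(burn_in):            # discard the transient
            x = r * x * (1 - x)
        out = []
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(int(x * 256) % 256)  # quantize the state to a byte
        return out

    print(keystream(x0=0.3789, r=3.99, n=8))
    # A diffusion stage would then combine each plain pixel with the
    # previous cipher pixel and one keystream byte (mod 256).
    ```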

  17. Variance-based selection may explain general mating patterns in social insects.

    PubMed

    Rueppell, Olav; Johnson, Nels; Rychtár, Jan

    2008-06-23

    Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
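
    The model's core prediction follows from how the mating number changes the variance of colony performance. A Monte Carlo sketch under simplifying assumptions (colony performance is the mean of n patriline draws, fitness is success on a threshold task; all values hypothetical):

    ```python
    # Monte Carlo illustration: multiple mating narrows the distribution of
    # colony performance. Narrowing helps when the average colony exceeds
    # the task threshold and hurts when it falls short.
    import numpy as np

    rng = np.random.default_rng(0)

    def p_success(n_matings, mean, threshold=1.0, sd=1.0, n_colonies=200_000):
        perf = rng.normal(mean, sd, (n_colonies, n_matings)).mean(axis=1)
        return (perf > threshold).mean()

    for mean, label in [(1.2, "average colony succeeds"),
                        (0.8, "average colony fails")]:
        print(label, p_success(1, mean), p_success(10, mean))
    # Multiple mating wins when the mean lies above the threshold; single
    # mating wins when it lies below -- the selection rule stated above.
    ```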

  18. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate sets of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method to choose a plausible model based on the experimental data. In conclusion, this paper demonstrates the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data; it is hoped that it provides new insight into developing more accurate and reliable biological models from limited, low-quality experimental data. PMID:23593445

  19. Commentary: Why Pharmaceutical Scientists in Early Drug Discovery Are Critical for Influencing the Design and Selection of Optimal Drug Candidates.

    PubMed

    Landis, Margaret S; Bhattachar, Shobha; Yazdanian, Mehran; Morrison, John

    2018-01-01

    This commentary reflects the collective view of pharmaceutical scientists from four different organizations with extensive experience in the field of drug discovery support. Herein, engaging discussion is presented on the current and future approaches for the selection of the most optimal and developable drug candidates. Over the past two decades, developability assessment programs have been implemented with the intention of improving physicochemical and metabolic properties. However, the complexity of both new drug targets and non-traditional drug candidates provides continuing challenges for developing formulations for optimal drug delivery. The need for more enabled technologies to deliver drug candidates has necessitated an even more active role for pharmaceutical scientists to influence many key molecular parameters during compound optimization and selection. This enhanced role begins at the early in vitro screening stages, where key learnings regarding the interplay of molecular structure and pharmaceutical property relationships can be derived. Performance of the drug candidates in formulations intended to support key in vivo studies provides important information on chemotype-formulation compatibility relationships. Structure modifications to support the selection of the solid form are also important to consider, and predictive in silico models are being rapidly developed in this area. Ultimately, the role of pharmaceutical scientists in drug discovery now extends beyond rapid solubility screening, early form assessment, and data delivery. This multidisciplinary role has evolved to include the practice of proactively taking part in the molecular design to better align solid form and formulation requirements to enhance developability potential.

  20. Quantitative structure–activity relationship analysis of the pharmacology of para-substituted methcathinone analogues

    PubMed Central

    Bonano, J S; Banks, M L; Kolanos, R; Sakloth, F; Barnier, M L; Glennon, R A; Cozzi, N V; Partilla, J S; Baumann, M H; Negus, S S

    2015-01-01

    Background and Purpose Methcathinone (MCAT) is a potent monoamine releaser and parent compound to emerging drugs of abuse including mephedrone (4-CH3 MCAT), the para-methyl analogue of MCAT. This study examined quantitative structure–activity relationships (QSAR) for MCAT and six para-substituted MCAT analogues on (a) in vitro potency to promote monoamine release via dopamine and serotonin transporters (DAT and SERT, respectively), and (b) in vivo modulation of intracranial self-stimulation (ICSS), a behavioural procedure used to evaluate abuse potential. Neurochemical and behavioural effects were correlated with steric (Es), electronic (σp) and lipophilic (πp) parameters of the para substituents. Experimental Approach For neurochemical studies, drug effects on monoamine release through DAT and SERT were evaluated in rat brain synaptosomes. For behavioural studies, drug effects were tested in male Sprague-Dawley rats implanted with electrodes targeting the medial forebrain bundle and trained to lever-press for electrical brain stimulation. Key Results MCAT and all six para-substituted analogues increased monoamine release via DAT and SERT and dose- and time-dependently modulated ICSS. In vitro selectivity for DAT versus SERT correlated with in vivo efficacy to produce abuse-related ICSS facilitation. In addition, the Es values of the para substituents correlated with both selectivity for DAT versus SERT and magnitude of ICSS facilitation. Conclusions and Implications Selectivity for DAT versus SERT in vitro is a key determinant of abuse-related ICSS facilitation by these MCAT analogues, and steric aspects of the para substituent of the MCAT scaffold (indicated by Es) are key determinants of this selectivity. PMID:25438806

  1. The logic of comparative life history studies for estimating key parameters, with a focus on natural mortality rate

    USGS Publications Warehouse

    Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.

    2016-01-01

    There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.
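
    A concrete, widely cited example of such a predictive equation (given here for illustration; not necessarily the form analyzed in this paper) is the longevity-based natural mortality estimator from the Then et al. (2015) update of Hoenig's regression:

    ```python
    # Longevity-based natural mortality estimator, M = 4.899 * t_max^-0.916
    # (Then et al. 2015), an example of an indirect comparative estimator.
    def natural_mortality(t_max_years: float) -> float:
        return 4.899 * t_max_years ** -0.916

    print(round(natural_mortality(20.0), 3))  # ~0.32 per year for t_max = 20 y
    ```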

  2. Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong

    2017-08-01

    We propose a method to improve the performance of the two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. Virtual photon subtraction, implemented via non-Gaussian post-selection, not only enhances the entanglement of the two-mode squeezed vacuum state but also has advantages in simplifying physical operation and promoting efficiency. In the two-way protocol, virtual photon subtraction can be applied to the two sources independently. Numerical simulations show that the optimal performance of the modified two-way protocol is obtained with photon subtraction used only by Alice. The transmission distance and tolerable excess noise are improved by using virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value as the distance increases, so that the robustness of the two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distance.

  3. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    NASA Astrophysics Data System (ADS)

    Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables and the dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
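
    For orientation, scaling relations of this type are typically regressions in magnitude and distance with additive site terms; a generic form (the study's exact functional form and coefficients may differ) is

    ```latex
    \log_{10} V = a + b\,M - c \log_{10} r + \sum_j d_j S_j,
    ```

    where V is a strong-motion peak (e.g. peak velocity), M the earthquake magnitude, r the hypocentral distance, and S_j the dummy variables identifying the local site condition.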

  4. Laser dimpling process parameters selection and optimization using surrogate-driven process capability space

    NASA Astrophysics Data System (ADS)

    Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz

    2017-08-01

    Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness, along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration inherent variability of the process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization, considering a multiple-input (i.e. key control characteristics) and multiple-output (i.e. key performance indicators) system, by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space), based on the developed surrogate model, that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii) optimize process parameters under competing quality requirements, such as maximizing the dimple height while minimizing the dimple lower surface area.

  5. [Experimental research of turbidity influence on water quality monitoring of COD in UV-visible spectroscopy].

    PubMed

    Tang, Bin; Wei, Biao; Wu, De-Cao; Mi, De-Ling; Zhao, Jing-Xiao; Feng, Peng; Jiang, Shang-Hai; Mao, Ben-Jiang

    2014-11-01

    Eliminating the influence of turbidity is a key technical problem in the direct spectroscopic detection of COD. UV-visible detection of key water quality parameters depends on an accurate and effective analytical model, and turbidity is an important parameter affecting that model. In this paper, formazine turbidity solutions and standard potassium hydrogen phthalate solutions were selected to study how turbidity affects the UV-visible absorption spectroscopy detection of COD. At the characteristic wavelengths of 245, 300, 360 and 560 nm, the variation of absorbance with turbidity was fitted with least-squares curves and analyzed. The results show that in the ultraviolet range of 240 to 380 nm, because the particles causing turbidity form compounds with the organics, the effect of turbidity on the ultraviolet spectra of water is relatively complicated; in the visible region of 380 to 780 nm, the effect of turbidity on the spectrum weakens as the wavelength increases. On this basis, the multiplicative scatter correction method was studied to calibrate water sample spectra affected by turbidity. Comparison with the spectra before treatment shows that the baseline shifts caused by turbidity were effectively corrected at the affected wavelengths, while the spectral features in the ultraviolet region were not diminished. Multiplicative scatter correction was then applied to the three selected UV-visible absorption spectra; the experimental results show that, while preserving the characteristics of the UV-visible absorption spectra of the water samples, the method improves the signal-to-noise ratio of spectroscopic COD detection and provides an effective data-conditioning scheme for establishing accurate chemometric measurement methods.
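
    Multiplicative scatter correction is a standard chemometric transform: each spectrum is regressed on a reference (typically the mean spectrum) and corrected with the fitted offset and slope. A minimal NumPy sketch of generic MSC (not necessarily the authors' exact processing):

    ```python
    # Multiplicative scatter correction (MSC) for a matrix of spectra.
    # Each row is one spectrum; the reference is the mean spectrum.
    import numpy as np

    def msc(spectra: np.ndarray) -> np.ndarray:
        ref = spectra.mean(axis=0)
        corrected = np.empty_like(spectra, dtype=float)
        for i, s in enumerate(spectra):
            b, a = np.polyfit(ref, s, deg=1)  # least squares: s ≈ a + b*ref
            corrected[i] = (s - a) / b        # remove offset and scaling
        return corrected
    ```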

  6. Site Characterization at a Tidal Energy Site in the East River, NY (usa)

    NASA Astrophysics Data System (ADS)

    Gunawan, B.; Neary, V. S.; Colby, J.

    2012-12-01

    A comprehensive tidal energy site characterization is performed using ADV measurements of instantaneous horizontal current magnitude and direction at the planned hub centerline of a tidal turbine over a two-month period, and contributes to the growing database of tidal energy site hydrodynamic conditions. The temporal variation, mean current statistics, and turbulence of the key tidal hydrodynamic parameters are examined in detail and compared to estimates from two tidal energy sites in Puget Sound. Tidal hydrodynamic conditions, including the mean annual current (at hub height), the speed of extreme gusts (instantaneous horizontal currents acting normal to the rotor plane), and the turbulence intensity (as proposed here, relative to a mean current of 2 m s-1), can vary greatly among tidal energy sites. Comparison of hydrodynamic conditions measured in the East River tidal strait in New York City with those reported for two tidal energy sites in Puget Sound indicates differences in mean annual current speeds, in the instantaneous current speeds of extreme gusts, and in turbulence intensities. Significant differences in these parameters among the tidal energy sites, and with the tidal resource assessment map, highlight the importance of conducting site resource characterization with ADV measurements at the machine scale. As the wind industry adopted an International Electrotechnical Commission (IEC) wind class standard to aid in the selection of wind turbines for a particular site, it is recommended that the tidal energy industry adopt an appropriate standard for tidal current classes. Such a standard requires a comprehensive field campaign at multiple tidal energy sites that can identify the key hydrodynamic parameters for tidal current site classification, select a list of tidal energy sites that exhibit the range of hydrodynamic conditions that will be encountered, and adopt consistent measurement practices (standards) for site classification.

  7. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features

    PubMed Central

    Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-01-01

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classifying performances were investigated. We found that support vector machine (SVM) exhibited superior performance to other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. The performances of the LibSVM, SMO and IBk classifiers were also influenced by key parameters such as kernel type, C, gamma, K, etc. SVM is a promising tool in developing automated preoperative glioma grading systems, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282

  8. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    PubMed

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classifying performances were investigated. We found that support vector machine (SVM) exhibited superior performance to other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. The performances of the LibSVM, SMO and IBk classifiers were also influenced by key parameters such as kernel type, C, gamma, K, etc. SVM is a promising tool in developing automated preoperative glioma grading systems, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.
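
    The best-performing combination reported above (SVM with RFE feature selection under leave-one-out cross-validation) maps directly onto standard scikit-learn components. A minimal sketch with synthetic stand-in data in place of the MRI feature matrix:

    ```python
    # SVM grading with RFE feature selection and LOOCV (schematic sketch).
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Stand-in for (n_patients, histogram + texture features) and grades.
    X, y = make_classification(n_samples=120, n_features=50,
                               n_informative=10, random_state=0)

    clf = make_pipeline(
        StandardScaler(),
        RFE(SVC(kernel="linear"), n_features_to_select=20),  # feature ranking
        SVC(kernel="rbf", C=1.0, gamma="scale"),             # final classifier
    )
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"LOOCV accuracy: {acc:.3f}")
    ```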

  9. Clinical trial allocation in multinational pharmaceutical companies - a qualitative study on influential factors.

    PubMed

    Dombernowsky, Tilde; Haedersdal, Merete; Lassen, Ulrik; Thomsen, Simon F

    2017-06-01

    Clinical trial allocation in multinational pharmaceutical companies includes country selection and site selection. With emphasis on site selection, the overall aim of this study was to examine which factors pharmaceutical companies value most when allocating clinical trials. The specific aims were (1) to identify key decision makers during country and site selection, respectively, (2) to evaluate by which parameters subsidiaries are primarily assessed by headquarters with regard to conducting clinical trials, and (3) to evaluate which site-related qualities companies value most when selecting trial sites. Eleven semistructured interviews were conducted among employees engaged in trial allocation at 11 pharmaceutical companies. The interviews were analyzed by deductive content analysis, which included coding of data to a categorization matrix containing categories of site-related qualities. The results suggest that headquarters and regional departments are key decision makers during country selection, whereas subsidiaries decide on site selection. Study participants argued that headquarters primarily value timely patient recruitment and quality of data when assessing subsidiaries. The site-related qualities most commonly emphasized during interviews were study population availability, timely patient recruitment, resources at the site, and site personnel's interest and commitment. Costs of running the trials were described as less important. Site personnel experience in conducting trials was described as valuable but not imperative. In conclusion, multinational pharmaceutical companies consider recruitment-related factors as crucial when allocating clinical trials. Quality of data and site personnel's interest and commitment are also essential, whereas costs seem less important. While valued, site personnel experience in conducting clinical trials is not imperative.

  10. Redox-switchable copper(I) metallogel: a metal-organic material for selective and naked-eye sensing of picric acid.

    PubMed

    Sarkar, Sougata; Dutta, Soumen; Chakrabarti, Susmita; Bairi, Partha; Pal, Tarasankar

    2014-05-14

    Thiourea (TU), a commercially available laboratory chemical, has been discovered to induce metallogelation when reacted with copper(II) chloride in aqueous medium. The chemistry involves the reduction of Cu(II) to Cu(I) with concomitant oxidation of thiourea to dithiobisformamidinium dichloride. Gel formation is triggered through metal-ligand complexation, i.e., Cu(I)-TU coordination, and extensive hydrogen-bonding interactions involving thiourea, the disulfide product, water, and chloride ions. The entangled network morphology of the gel develops selectively in water, possibly because of water's superior hydrogen-bonding ability, as accounted for by the Kamlet-Taft solvent parameters. Complete and systematic chemical analyses demonstrate the importance of both Cu(I) and chloride ions as the key ingredients in the metal-organic coordination gel framework. The gel is highly fluorescent. Moreover, the exclusive presence of Cu(I) metal centers in the gel structure makes the gel redox-responsive, and it therefore shows a reversible gel-sol phase transition. However, the reversibility does not cause any morphological change in the gel phase. The gel is multiresponsive in practice, and the influences of different potentially interfering parameters (pH, selected metal ions and anions, selected complexing agents, etc.) have therefore been studied mechanistically; the results may be promising for different applications. Finally, the gel material shows a highly selective visual response to a commonly used nitroexplosive, picric acid, among a set of 19 congeners, and the observed selectivity has been mechanistically interpreted with density functional theory-based calculations.

  11. Installation Restoration General Environmental Technology Development. Task 2. Incineration Test of Explosives Contaminated Soils at Savanna Army Depot Activity, Savanna, Illinois.

    DTIC Science & Technology

    1984-04-01

    ... 800 °F and afterburner temperatures below 1200 °F. Explosives were detected in the combustion gases leaving the primary chamber for one test burn (i.e., ... combustion chamber). (c) Temperature in the secondary combustion chamber. These key parameters were selected since they directly relate to the ... 5.4 Heat exchanger (waste heat boiler): the flue gases discharged from the secondary combustion chamber were directed, via refractory-lined duct, ...

  12. Rationally Designed Sensing Selectivity and Sensitivity of an Aerolysin Nanopore via Site-Directed Mutagenesis.

    PubMed

    Wang, Ya-Qian; Cao, Chan; Ying, Yi-Lun; Li, Shuang; Wang, Ming-Bo; Huang, Jin; Long, Yi-Tao

    2018-04-27

    Selectivity and sensitivity are two key parameters used to describe the performance of a sensor. In order to investigate the selectivity and sensitivity of the aerolysin nanosensor, we manipulated its surface charge at different locations via single site-directed mutagenesis. To study the selectivity, we replaced the positively charged R220 at the entrance of the pore with negatively charged glutamic acid, resulting in almost no current blockages when sensing negatively charged oligonucleotides. For the sensitivity, we substituted the positively charged lumen-exposed amino acid K238, located in the trans-ward third of the β-barrel stem, with glutamic acid. This leads to a surprisingly long duration time at +140 mV, about 20 times slower in translocation speed for Poly(dA)4 compared with wild-type aerolysin, indicating stronger pore-analyte interactions and enhanced sensitivity. Therefore, it is both feasible and understandable to rationally design confined biological nanosensors for single-molecule detection with high selectivity and sensitivity.

  13. Acrylamide mitigation strategies: critical appraisal of the FoodDrinkEurope toolbox.

    PubMed

    Palermo, M; Gökmen, V; De Meulenaer, B; Ciesarová, Z; Zhang, Y; Pedreschi, F; Fogliano, V

    2016-06-15

    The FoodDrinkEurope Federation recently released the latest version of the Acrylamide Toolbox to support manufacturers in acrylamide reduction activities, giving indications of possible mitigation strategies. The Toolbox is intended for small and medium-sized enterprises with limited R&D resources; however, no comments about the pros and cons of the different measures were provided to advise potential users. Experts in the field are aware that not all the strategies proposed have equal value in terms of efficacy and cost/benefit ratio. This consideration prompted us to provide a qualitative, science-based ranking of the mitigation strategies proposed in the acrylamide Toolbox, focusing on bakery and fried potato products. Five authors from different geographical areas, each with a publication record on acrylamide mitigation strategies, worked independently to rank the efficacy of the mitigation strategies, taking into account three key parameters: (i) reduction rate; (ii) side effects; and (iii) applicability and economic impact. On the basis of their own experience, and considering selected literature of the last ten years, the authors scored the mitigation strategies proposed in the Toolbox on each key parameter. As expected, all strategies selected in the Toolbox turned out to be useful, but not at the same level. The use of the enzyme asparaginase and the selection of low-sugar varieties were considered the best mitigation strategies in bakery and potato products, respectively. In the authors' opinion, most of the other mitigation strategies, although effective, either have relevant side effects on the sensory profile of the products or are not easy to implement in industrial production. The final outcome is a science-based, commented ranking which can enrich the acrylamide Toolbox, supporting individual manufacturers in taking the best actions to reduce the acrylamide content in their specific production context.

  14. [Atmospheric parameter estimation for LAMOST/GUOSHOUJING spectra].

    PubMed

    Lu, Yu; Li, Xiang-Ru; Yang, Tan

    2014-11-01

    It is a key task to estimate atmospheric parameters from observed stellar spectra in exploring the nature of stars and the universe. With the Large Sky Area Multi-Object Fiber Spectroscopy Telescope (LAMOST), which began its formal sky survey in September 2012, we are obtaining a mass of stellar spectra at an unprecedented speed. This has brought both a new opportunity and a challenge for the research of galaxies. Due to the complexity of the observing system, the noise in the spectra is relatively large. At the same time, the preprocessing procedures, such as wavelength calibration and flux calibration, are also not ideal, so the spectra are slightly distorted. These factors make it difficult to estimate the atmospheric parameters of the measured stellar spectra, which is one of the important issues for the massive stellar spectra of LAMOST. The key of this study is how to suppress noise and improve the accuracy and robustness of atmospheric parameter estimation for measured stellar spectra. We propose a regression model, SVM(lasso), for estimating the atmospheric parameters of LAMOST stellar spectra. The basic idea of this model is: first, use a Haar wavelet to filter the spectrum, suppressing the adverse effects of spectral noise while retaining the most discriminative information in the spectrum; second, use the lasso algorithm for feature selection, extracting the features strongly correlated with the atmospheric parameters; finally, input the features into a support vector regression model to estimate the parameters. Because the model has better tolerance to slight distortion and noise in the spectra, the accuracy of the measurement is improved. To evaluate the feasibility of the above scheme, we conducted extensive experiments on 33,963 pilot-survey spectra from LAMOST. The accuracies for the three atmospheric parameters are log Teff: 0.0068 dex, log g: 0.1551 dex, and [Fe/H]: 0.1040 dex.
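
    The three-stage scheme described (Haar-wavelet filtering, lasso feature selection, support vector regression) can be sketched with standard libraries. The sketch below is a schematic reconstruction under assumed settings (decomposition level, threshold, regularization), not the authors' code:

    ```python
    # Sketch of the SVM(lasso) pipeline for one parameter (e.g. log Teff):
    # Haar-wavelet denoising, lasso feature selection, then SVR.
    import numpy as np
    import pywt
    from sklearn.linear_model import Lasso
    from sklearn.svm import SVR

    def haar_denoise(flux, level=3, keep=2):
        coeffs = pywt.wavedec(flux, "haar", level=level)
        # zero the finest detail levels, keeping the coarse structure
        coeffs = coeffs[:keep] + [np.zeros_like(c) for c in coeffs[keep:]]
        return pywt.waverec(coeffs, "haar")[: len(flux)]

    def fit(spectra, params):
        X = np.array([haar_denoise(s) for s in spectra])
        sel = Lasso(alpha=0.01).fit(X, params)   # sparse feature selection
        cols = np.flatnonzero(sel.coef_)         # wavelengths lasso retained
        svr = SVR(C=10.0, epsilon=0.01).fit(X[:, cols], params)
        return cols, svr
    ```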

  15. Classification of Weed Species Using Artificial Neural Networks Based on Color Leaf Texture Feature

    NASA Astrophysics Data System (ADS)

    Li, Zhichen; An, Qiu; Ji, Changying

    The potential impact of herbicide utilization compels people to use new methods of weed control. Selective herbicide application is the optimal method to reduce herbicide usage while maintaining weed control, and its key is how to discriminate weeds accurately. The HIS color co-occurrence method (CCM) texture analysis technique was used to extract four texture parameters: angular second moment (ASM), entropy (E), inertia quadrature (IQ), and inverse difference moment, or local homogeneity (IDM). The weed species selected for study were Arthraxon hispidus, Digitaria sanguinalis, Petunia, Cyperus, Alternanthera philoxeroides and Corchoropsis psilocarpa. The software NeuroShell 2 was used to design the structure of the neural network and to train and test the data. It was found that an 8-40-1 artificial neural network provided the best classification performance and was capable of classification accuracies of 78%.
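
    The four CCM texture statistics can be computed from a grey-level co-occurrence matrix. A sketch using scikit-image on one channel of a colour-converted leaf image (the channel choice and GLCM settings are assumptions):

    ```python
    # Co-occurrence texture features for weed-leaf classification (sketch).
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def ccm_features(channel):  # channel: 2-D uint8 array (e.g. hue plane)
        glcm = graycomatrix(channel, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        p = glcm[:, :, 0, 0]
        asm = graycoprops(glcm, "ASM")[0, 0]           # angular second moment
        inertia = graycoprops(glcm, "contrast")[0, 0]  # inertia quadrature
        idm = graycoprops(glcm, "homogeneity")[0, 0]   # inverse difference moment
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return [asm, entropy, inertia, idm]
    ```

    The resulting feature vectors (e.g. from two colour channels, giving eight network inputs) would then be fed to the neural network classifier.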

  16. Magnetic Bearings for Inertial Energy Storage

    NASA Technical Reports Server (NTRS)

    Studer, P. A.

    1983-01-01

    The selection of a noncontacting, vacuum-compatible bearing technique with no wear-out phenomena, the decisive factor in choosing magnetic bearings for kinetic energy storage, was investigated. Unlimited cycle life without degradation is a primary goal. Storage efficiency is a key parameter, defined as the ratio of the energy remaining to the energy stored after a fixed time interval at no-load conditions. Magnetic bearings, although noncontacting, are not perfectly frictionless, in that magnetic losses due to eddy currents and hysteresis can occur. Practical magnetic bearings deviate from perfect symmetry and have discontinuities and asymmetric flux paths, either by design or when controlled in the presence of disturbances, which cause losses. These losses can be kept smaller in the bearings than in a high-power motor/generator; they are, however, a significant factor in selecting the magnetic bearing type.

  17. Fitness consequences of sex-specific selection.

    PubMed

    Connallon, Tim; Cox, Robert M; Calsbeek, Ryan

    2010-06-01

    Theory suggests that sex-specific selection can facilitate adaptation in sexually reproducing populations. However, sexual conflict theory and recent experiments indicate that sex-specific selection is potentially costly due to sexual antagonism: alleles harmful to one sex can accumulate within a population because they are favored in the other sex. Whether sex-specific selection provides a net fitness benefit or cost depends, in part, on the relative frequency and strength of sexually concordant versus sexually antagonistic selection throughout a species' genome. Here, we model the net fitness consequences of sex-specific selection while explicitly considering both sexually concordant and sexually antagonistic selection. The model shows that, even when sexual antagonism is rare, the fitness costs that it imposes will generally overwhelm fitness benefits of sexually concordant selection. Furthermore, the cost of sexual antagonism is, at best, only partially resolved by the evolution of sex-limited gene expression. To evaluate the key parameters of the model, we analyze an extensive dataset of sex-specific selection gradients from wild populations, along with data from the experimental evolution literature. The model and data imply that sex-specific selection may likely impose a net cost on sexually reproducing species, although additional research will be required to confirm this conclusion.

  18. Evaluation of performance of select fusion experiments and projected reactors

    NASA Technical Reports Server (NTRS)

    Miley, G. H.

    1978-01-01

    The performance of NASA Lewis fusion experiments (SUMMA and Bumpy Torus) is compared with other experiments and with that necessary for a power reactor. Key parameters cited are gain (fusion power/input power) and the time-average fusion power, both of which may be more significant for real fusion reactors than the commonly used Lawson parameter. The NASA devices are over 10 orders of magnitude below the required powerplant values in both gain and time-average power. The best experiments elsewhere are also as much as 4 to 5 orders of magnitude too low. However, the NASA experiments compare favorably with other alternate approaches that have received less funding than the mainline experiments. The steady-state character and efficiency of plasma heating are strong advantages of the NASA approach. The problem, though, is to move ahead to experiments of sufficient size to advance in gain and average power parameters.

  19. Temperature based Restricted Boltzmann Machines

    NASA Astrophysics Data System (ADS)

    Li, Guoqi; Deng, Lei; Xu, Yi; Wen, Changyun; Wang, Wei; Pei, Jing; Shi, Luping

    2016-01-01

    Restricted Boltzmann machines (RBMs), which apply graphical models to learning a probability distribution over a set of inputs, have attracted much attention recently since being proposed as building blocks of multi-layer learning systems called deep belief networks (DBNs). Note that temperature is a key factor of the Boltzmann distribution from which RBMs originate. However, none of the existing schemes has considered the impact of temperature in the graphical model of DBNs. In this work, we propose temperature-based restricted Boltzmann machines (TRBMs), which reveal that temperature is an essential parameter controlling the selectivity of the firing neurons in the hidden layers. We theoretically prove that the effect of temperature can be adjusted by setting the sharpness parameter of the logistic function in the proposed TRBMs. The performance of RBMs can be improved by adjusting the temperature parameter of TRBMs. This work provides comprehensive insight into deep belief networks and deep learning architectures from a physical point of view.
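    A minimal sketch of the idea, assuming the standard RBM activation form: the temperature T enters as a sharpness scaling of the logistic function, so lowering T makes hidden-unit firing more selective. The weights and layer sizes below are arbitrary illustrations, not the authors' code.

    ```python
    # Minimal sketch: a temperature parameter T scales the sharpness of the
    # logistic activation of an RBM's hidden units.
    import numpy as np

    def hidden_activation_prob(v, W, b, T=1.0):
        """P(h_j = 1 | v) with temperature T; small T sharpens (more selective
        firing), large T flattens toward 0.5 (indiscriminate firing)."""
        pre = v @ W + b                  # standard RBM pre-activation
        return 1.0 / (1.0 + np.exp(-pre / T))

    rng = np.random.default_rng(0)
    v = rng.integers(0, 2, size=6).astype(float)   # one visible configuration
    W = rng.normal(scale=0.5, size=(6, 4))         # visible-to-hidden weights
    b = np.zeros(4)
    for T in (0.5, 1.0, 2.0):
        print(T, hidden_activation_prob(v, W, b, T).round(3))
    ```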

  20. Printability of calcium phosphate powders for three-dimensional printing of tissue engineering scaffolds.

    PubMed

    Butscher, Andre; Bohner, Marc; Roth, Christian; Ernstberger, Annika; Heuberger, Roman; Doebelin, Nicola; von Rohr, Philipp Rudolf; Müller, Ralph

    2012-01-01

    Three-dimensional printing (3DP) is a versatile method to produce scaffolds for tissue engineering. In 3DP the solid is created by the reaction of a liquid selectively sprayed onto a powder bed. Despite the importance of the powder properties, there has to date been a relatively poor understanding of the relation between the powder properties and the printing outcome. This article aims at improving this understanding by looking at the link between key powder parameters (particle size, flowability, roughness, wettability) and printing accuracy. These powder parameters are determined as key factors with a predictive value for the final 3DP outcome. Promising results can be expected for mean particle size in the range of 20-35 μm, compaction rate in the range of 1.3-1.4, flowability in the range of 5-7 and powder bed surface roughness of 10-25 μm. Finally, possible steps and strategies in pushing the physical limits concerning improved quality in 3DP are addressed and discussed. Copyright © 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  1. Monitoring population and environmental parameters of invasive mosquito species in Europe

    PubMed Central

    2014-01-01

    To enable a better understanding of the overwhelming alterations in invasive mosquito species (IMS), methodical insight into the population and environmental factors that govern IMS and pathogen adaptations is essential. There are numerous ways of estimating mosquito populations, and usually these describe developmental and life-history parameters. The key population parameters that should be considered during the surveillance of invasive mosquito species are: (1) population size and dynamics during the season, (2) longevity, (3) biting behaviour, and (4) dispersal capacity. Knowledge of these parameters, coupled with vector competence, may help to determine the vectorial capacity of IMS and the basic disease reproduction number (R0) to support mosquito-borne disease (MBD) risk assessment. Similarly, environmental factors include the availability and type of larval breeding containers, climate change, environmental change, human population density, increased human travel and goods transport, changes in living, agricultural and farming habits (e.g. land use), and reduction of resources in the life cycle of mosquitoes by interventions (e.g. source reduction of aquatic habitats). Human population distributions, urbanisation, and human population movement are the key behavioural factors in most IMS-transmitted diseases. Anthropogenic issues are related to the global spread of MBD, such as the introduction, reintroduction and circulation of IMS and increased exposure of humans to infected mosquito bites. This review addresses the population and environmental factors underlying the growing changes in IMS populations in Europe and discusses the parameters selected by criteria of applicability. In addition, an overview of the commonly used and newly developed tools for their monitoring is provided. PMID:24739334
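    For concreteness, the classic Ross-Macdonald expression links the listed population parameters to vectorial capacity; the record mentions vectorial capacity but does not give a formula, so this is the standard textbook form, with hypothetical parameter values.

    ```python
    # Minimal sketch of the classic Ross-Macdonald vectorial capacity formula
    # (standard textbook form; not spelled out in the record itself).
    import math

    def vectorial_capacity(m, a, b, p, n):
        """m: vectors per host, a: daily human-biting rate, b: vector competence,
        p: daily survival probability, n: extrinsic incubation period (days)."""
        return (m * a**2 * b * p**n) / (-math.log(p))

    # Illustrative (hypothetical) values for an invasive Aedes population
    print(vectorial_capacity(m=10, a=0.3, b=0.5, p=0.9, n=10))
    ```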

  2. New evaluation parameter for wearable thermoelectric generators

    NASA Astrophysics Data System (ADS)

    Wijethunge, Dimuthu; Kim, Woochul

    2018-04-01

    Wearable devices constitute a key application area for thermoelectric devices. However, owing to new constraints in wearable applications, a few conventional device optimization techniques are not appropriate, and material evaluation parameters such as the figure of merit (zT) and power factor (PF) tend to be inadequate. We illustrated the incompleteness of zT and PF by performing simulations and considering different thermoelectric materials. The results indicate a weak correlation between device performance and zT and PF. In this study, we propose a new evaluation parameter, zTwearable, which is better suited for wearable applications than conventional zT. Owing to size restrictions, gap-filler-based device optimization is extremely critical in wearable devices. For cases in which gap fillers are used, expressions for power, effective thermal conductivity (keff), and optimum load electrical ratio (mopt) are derived. According to the new parameter, the thermal conductivity of the material becomes much more critical. The proposed evaluation parameter, zTwearable, is extremely useful in the selection of an appropriate thermoelectric material among various candidates prior to the commencement of the actual design process.
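    For reference, the conventional parameters that the record argues are incomplete can be computed as below; zTwearable itself is defined in the paper and is not reproduced here. The material values are illustrative only.

    ```python
    # Minimal sketch of the conventional thermoelectric evaluation parameters.
    def power_factor(seebeck, sigma):
        """PF = S^2 * sigma [W m^-1 K^-2]; seebeck in V/K, sigma in S/m."""
        return seebeck**2 * sigma

    def figure_of_merit(seebeck, sigma, kappa, T):
        """zT = S^2 * sigma * T / kappa (dimensionless)."""
        return power_factor(seebeck, sigma) * T / kappa

    # Illustrative Bi2Te3-like values near body temperature
    print(power_factor(200e-6, 1e5))                 # ~4e-3 W m^-1 K^-2
    print(figure_of_merit(200e-6, 1e5, 1.5, 300.0))  # ~0.8
    ```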

  3. A square-force cohesion model and its extraction from bulk measurements

    NASA Astrophysics Data System (ADS)

    Liu, Peiyuan; Lamarche, Casey; Kellogg, Kevin; Hrenya, Christine

    2017-11-01

    Cohesive particles remain poorly understood, with order-of-magnitude differences exhibited in prior physical predictions of agglomerate size. A major obstacle lies in the absence of robust models of particle-particle cohesion, thereby precluding accurate prediction of the behavior of cohesive particles. Rigorous cohesion models commonly contain parameters related to surface roughness, to which cohesion shows extreme sensitivity. However, both roughness measurement and its distillation into these model parameters are challenging. Accordingly, we propose a "square-force" model, in which the cohesive force remains constant until a cut-off separation. Via DEM simulations, we demonstrate the validity of the square-force model as a surrogate for more rigorous models, when its two parameters are selected to match the two key quantities governing dense and dilute granular flows, namely the maximum cohesive force and the critical cohesive energy, respectively. Perhaps more importantly, we establish a method to extract the parameters in the square-force model via defluidization, due to its ability to isolate the effects of the two parameters. Thus, instead of relying on complicated scans of individual grains, determination of particle-particle cohesion from simple bulk measurements becomes feasible. Dow Corning Corporation.
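    A minimal sketch of the square-force idea as the abstract describes it: constant cohesive force up to a cut-off separation, with the two parameters chosen to match a given maximum force and critical cohesive energy. The numbers are hypothetical.

    ```python
    # Minimal sketch (assumed form) of the "square-force" cohesion model.
    def square_force(d, f_max, d_cut):
        """Cohesive force at surface separation d: constant until cut-off, zero beyond."""
        return f_max if d < d_cut else 0.0

    def cohesive_energy(f_max, d_cut):
        """Work to separate two touching particles to beyond the cut-off."""
        return f_max * d_cut

    # Example: match a rigorous model giving F_max = 1e-7 N and E_c = 1e-15 J
    f_max = 1e-7
    d_cut = 1e-15 / f_max   # implies a 10 nm cut-off separation
    print(square_force(5e-9, f_max, d_cut), cohesive_energy(f_max, d_cut))
    ```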

  4. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero indices may be obtained for non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be noted that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
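    A minimal sketch of the dummy-parameter screening using the SALib package (an assumed tooling choice; the study itself screens a 26-parameter SWAT model): the dummy's total-order index estimates the numerical-error floor and serves as the screening threshold.

    ```python
    # Minimal sketch: Sobol' sensitivity analysis with a dummy parameter that the
    # model ignores; its index sets the screening threshold.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["x1", "x2", "dummy"],          # "dummy" is ignored by the model
        "bounds": [[0, 1], [0, 1], [0, 1]],
    }

    X = saltelli.sample(problem, 1024)
    Y = np.sin(X[:, 0]) + 0.2 * X[:, 1] ** 2     # output never uses X[:, 2]

    Si = sobol.analyze(problem, Y)
    threshold = Si["ST"][2]                      # dummy's index = numerical-error level
    for name, st in zip(problem["names"], Si["ST"]):
        label = "influential" if st > threshold else "non-influential"
        print(f"{name}: ST={st:.4f} -> {label}")
    ```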

  5. Evolution of conditional cooperation under multilevel selection.

    PubMed

    Zhang, Huanren; Perc, Matjaž

    2016-03-11

    We study the emergence of conditional cooperation in the presence of both intra-group and inter-group selection. Individuals play public goods games within their groups using conditional strategies, which are represented as piecewise linear response functions. Accordingly, groups engage in conflicts with a certain probability. In contrast to previous studies, we consider continuous contribution levels and a rich set of conditional strategies, allowing for a wide range of possible interactions between strategies. We find that the existence of conditional strategies enables the stabilization of cooperation even under strong intra-group selection. The strategy that eventually dominates in the population has two key properties: (i) It is unexploitable with strong intra-group selection; (ii) It can achieve full contribution to outperform other strategies in the inter-group selection. The success of this strategy is robust to initial conditions as well as changes to important parameters. We also investigate the influence of different factors on cooperation levels, including group conflicts, group size, and migration rate. Their effect on cooperation can be attributed to and explained by their influence on the relative strength of intra-group and inter-group selection.
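    A minimal sketch of one way to encode a conditional strategy as a piecewise linear response function, as the abstract describes; the parameterization (intercept, slope, clipping to the feasible range) is my illustration, not the authors' exact representation.

    ```python
    # Minimal sketch: a continuous contribution level given as a clipped linear
    # response to the group's average contribution in the previous round.
    import numpy as np

    def conditional_response(others_mean, intercept, slope, c_max=1.0):
        """Contribution in [0, c_max] as a piecewise linear function."""
        return float(np.clip(intercept + slope * others_mean, 0.0, c_max))

    # A conditional cooperator (matches others) vs. an unconditional defector
    print(conditional_response(0.8, intercept=0.0, slope=1.0))   # 0.8
    print(conditional_response(0.8, intercept=0.0, slope=0.0))   # 0.0
    ```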

  6. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.
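    For context, the standard numerical route to the LQR solution that the paper provides in closed form for structural systems is the algebraic Riccati equation; the sketch below solves it for a hypothetical spring-mass system with energy-like weighting matrices, and is not the paper's closed-form method.

    ```python
    # Minimal sketch: LQR gain for a single spring-mass system via the
    # continuous algebraic Riccati equation.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    m, k = 1.0, 4.0                     # mass and spring stiffness
    A = np.array([[0.0, 1.0],
                  [-k / m, 0.0]])       # states: [position, velocity]
    B = np.array([[0.0], [1.0 / m]])    # force input
    Q = np.diag([k, m])                 # energy-like state weighting
    R = np.array([[0.1]])               # control effort weighting

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)     # optimal gain, u = -K x
    print(K)
    ```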

  7. Partial Transient Liquid-Phase Bonding, Part II: A Filtering Routine for Determining All Possible Interlayer Combinations

    NASA Astrophysics Data System (ADS)

    Cook, Grant O.; Sorensen, Carl D.

    2013-12-01

    Partial transient liquid-phase (PTLP) bonding is currently an esoteric joining process with limited applications. However, it has preferable advantages compared with typical joining techniques and is the best joining technique for certain applications. Specifically, it can bond hard-to-join materials as well as dissimilar material types, and bonding is performed at comparatively low temperatures. Part of the difficulty in applying PTLP bonding is finding suitable interlayer combinations (ICs). A novel interlayer selection procedure has been developed to facilitate the identification of ICs that will create successful PTLP bonds and is explained in a companion article. An integral part of the selection procedure is a filtering routine that identifies all possible ICs for a given application. This routine utilizes a set of customizable parameters that are based on key characteristics of PTLP bonding. These parameters include important design considerations such as bonding temperature, target remelting temperature, bond solid type, and interlayer thicknesses. The output from this routine provides a detailed view of each candidate IC along with a broad view of the entire candidate set, greatly facilitating the selection of ideal ICs. This routine provides a new perspective on the PTLP bonding process. In addition, the use of this routine, by way of the accompanying selection procedure, will expand PTLP bonding as a viable joining process.

  8. Solving the Puzzle of Metastasis: The Evolution of Cell Migration in Neoplasms

    PubMed Central

    Chen, Jun; Sprouffske, Kathleen; Huang, Qihong; Maley, Carlo C.

    2011-01-01

    Background Metastasis represents one of the most clinically important transitions in neoplastic progression. The evolution of metastasis is a puzzle because a metastatic clone is at a disadvantage in competition for space and resources with non-metastatic clones in the primary tumor. Metastatic clones waste some of their reproductive potential on emigrating cells with little chance of establishing metastases. We suggest that resource heterogeneity within primary tumors selects for cell migration, and that cell emigration is a by-product of that selection. Methods and Findings We developed an agent-based model to simulate the evolution of neoplastic cell migration. We simulated the essential dynamics of neoangiogenesis and blood vessel occlusion that lead to resource heterogeneity in neoplasms. We observed the probability and speed of cell migration that evolves with changes in parameters that control the degree of spatial and temporal resource heterogeneity. Across a broad range of realistic parameter values, increasing degrees of spatial and temporal heterogeneity select for the evolution of increased cell migration and emigration. Conclusions We showed that variability in resources within a neoplasm (e.g. oxygen and nutrients provided by angiogenesis) is sufficient to select for cells with high motility. These cells are also more likely to emigrate from the tumor, which is the first step in metastasis and the key to the puzzle of metastasis. Thus, we have identified a novel potential solution to the puzzle of metastasis. PMID:21556134

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tao; Chatterjee, Sabornie; Mahurin, Shannon M.

    Amidoxime-functionalized polydimethylsiloxane (AO-PDMSPNB) membranes with various amidoxime compositions were synthesized via ring-opening metathesis polymerization followed by post-polymerization modification. Compared to other previously reported PDMS-based membranes, the amidoxime-functionalized membranes show enhanced CO2 permeability and CO2/N2 selectivity. The overall gas separation performance (CO2 permeability 6800 Barrer; CO2/N2 selectivity 19) of the highest performing membrane exceeds the Robeson upper bound line, and the excellent permeability of the copolymer itself provides great potential for real-world applications where huge volumes of gases are separated. This study details how tuning the CO2-philicity within rubbery polymer matrices influences gas transport properties. Key parameters for tuning gas transport properties are discussed, and the experimental results show good consistency with theoretical calculations. Finally, this study provides a roadmap to enhancing gas separation performance in rubbery polymers by tuning gas solubility selectivity.

  10. The change of steel surface chemistry regarding oxygen partial pressure and dew point

    NASA Astrophysics Data System (ADS)

    Norden, Martin; Blumenau, Marc; Wuttke, Thiemo; Peters, Klaus-Josef

    2013-04-01

    By investigating the surface state of a Ti-IF, a TiNb-IF and a MnCr-DP steel after several series of intercritical annealing, the impact of the annealing gas composition on the selective oxidation process is discussed. On the basis of the presented results, it can be concluded that the general oxygen partial pressure in the annealing furnace, which results from the equilibrium reaction of water and hydrogen, is not the main driving force for the selective oxidation process. It is shown that the amounts of gases adsorbed at the strip surface, and the effective oxygen partial pressure resulting from these adsorbed gases, which depends mainly on the water content of the annealing furnace, drive the selective oxidation processes occurring during intercritical annealing. Thus, it is concluded that for industrial applications the dew point must be the key parameter for process control.
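    A minimal sketch of why the dew point is the convenient control variable: it fixes the water partial pressure (Magnus approximation below), which together with the hydrogen content sets the effective oxygen partial pressure through the water-dissociation equilibrium. The equilibrium constant K(T) is deliberately left as an input, to be taken from thermodynamic tables; the dew point value is illustrative.

    ```python
    # Minimal sketch: dew point -> water partial pressure -> oxygen partial pressure.
    import math

    def p_h2o_from_dew_point(t_dew_c):
        """Magnus approximation for water vapour partial pressure [Pa] over water."""
        return 611.2 * math.exp(17.62 * t_dew_c / (243.12 + t_dew_c))

    def p_o2_from_ratio(p_h2o, p_h2, K):
        """From H2 + 1/2 O2 <-> H2O with K = p_H2O / (p_H2 * sqrt(p_O2)):
        p_O2 = (p_H2O / (p_H2 * K))**2.  K(T) from thermodynamic tables."""
        return (p_h2o / (p_h2 * K)) ** 2

    p_h2o = p_h2o_from_dew_point(-40.0)   # a typical annealing dew point
    print(f"p_H2O at -40 C dew point: {p_h2o:.1f} Pa")
    ```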

  11. Uptake and localization mechanisms of fluorescent and colored lipid probes. Part 2. QSAR models that predict localization of fluorescent probes used to identify ("specifically stain") various biomembranes and membranous organelles.

    PubMed

    Horobin, R W; Stockert, J C; Rashid-Doubell, F

    2015-05-01

    We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes that enable selective staining also is provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.

  12. Gas separation mechanism of CO 2 selective amidoxime-poly(1-trimethylsilyl-1-propyne) membranes

    DOE PAGES

    Feng, Hongbo; Hong, Tao; Mahurin, Shannon Mark; ...

    2017-05-09

    Polymeric membranes for CO2 separation have drawn significant attention in academia and industry. We prepared amidoxime-functionalized poly(1-trimethylsilyl-1-propyne) (AO-PTMSP) membranes through hydrosilylation and post-polymerization modification. Compared to neat PTMSP membranes, the AO-PTMSP membranes showed significant enhancements in CO2/N2 gas separation performance (CO2 permeability ~6000 Barrer; CO2/N2 selectivity 17). This systematic study provides clear guidelines on how to tune the CO2-philicity within PTMSP matrices and the effects on gas selectivity. Key parameters for elucidating the gas transport mechanism were discussed based on CO2 sorption measurements and fractional free volume estimates. The effect of the AO content on CO2/N2 selectivity was further examined by means of density functional theory calculations. Here, both experimental and theoretical data provide consistent results that conclusively show that CO2/N2 separation performance is enhanced by increased CO2-polymer interactions.

  13. Gas separation mechanism of CO 2 selective amidoxime-poly(1-trimethylsilyl-1-propyne) membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Hongbo; Hong, Tao; Mahurin, Shannon Mark

    Polymeric membranes for CO2 separation have drawn significant attention in academia and industry. We prepared amidoxime-functionalized poly(1-trimethylsilyl-1-propyne) (AO-PTMSP) membranes through hydrosilylation and post-polymerization modification. Compared to neat PTMSP membranes, the AO-PTMSP membranes showed significant enhancements in CO2/N2 gas separation performance (CO2 permeability ~6000 Barrer; CO2/N2 selectivity 17). This systematic study provides clear guidelines on how to tune the CO2-philicity within PTMSP matrices and the effects on gas selectivity. Key parameters for elucidating the gas transport mechanism were discussed based on CO2 sorption measurements and fractional free volume estimates. The effect of the AO content on CO2/N2 selectivity was further examined by means of density functional theory calculations. Here, both experimental and theoretical data provide consistent results that conclusively show that CO2/N2 separation performance is enhanced by increased CO2-polymer interactions.

  14. Practical automated glass selection and the design of apochromats with large field of view.

    PubMed

    Siew, Ronian

    2016-11-10

    This paper presents an automated approach to the selection of optical glasses for the design of an apochromatic lens with a large field of view, based on a design originally provided by Yang et al. [Appl. Opt. 55, 5977 (2016), doi:10.1364/AO.55.005977]. Starting from this reference's preliminary optimized structure, it is shown that the effort of glass selection is significantly reduced by using the global optimization feature in the Zemax optical design program. The glass selection process is very fast, complete within minutes, and the key lies in automating the substitution of glasses found from the global search without the need to simultaneously optimize any other lens parameter during the glass search. The result is an alternate optimized version of the lens from the above reference, possessing zero axial secondary color within the visible spectrum and a large field of view. Supplementary material is provided in the form of Zemax and text files, before and after final optimization.

  15. Advances in selective activation of muscles for non-invasive motor neuroprostheses.

    PubMed

    Koutsou, Aikaterini D; Moreno, Juan C; Del Ama, Antonio J; Rocon, Eduardo; Pons, José L

    2016-06-13

    Non-invasive neuroprosthetic (NP) technologies for movement compensation and rehabilitation still face challenges for their clinical application. Two of these major challenges are the selective activation of muscles and fatigue management. This review discusses how electrode arrays improve the efficiency and selectivity of functional electrical stimulation (FES) applied via transcutaneous electrodes. In this paper, we review the principles and achievements of the last decade in techniques for artificial motor unit recruitment to improve the selective activation of muscles. We review the key factors affecting the outcome of muscle force production via multi-pad transcutaneous electrical stimulation and discuss how stimulation parameters can be set to optimize the external activation of body segments. A detailed review of existing electrode array systems proposed by different research teams is also provided, along with a review of the targeted applications of existing electrode arrays for the control of upper and lower limb NPs. Finally, the last section demonstrates the potential of electrode arrays to overcome the major challenges of NPs for compensation and rehabilitation of patient-specific impairments.

  16. Simultaneously selecting appropriate partners for gaming and strategy adaptation to enhance network reciprocity in the prisoner's dilemma

    NASA Astrophysics Data System (ADS)

    Tanimoto, Jun

    2014-01-01

    Network reciprocity is one mechanism for adding social viscosity, which leads to cooperative equilibrium in 2 × 2 prisoner's dilemma games. Previous studies have shown that cooperation can be enhanced by using a skewed, rather than a random, selection of partners for either strategy adaptation or the gaming process. Here we show that combining both processes for selecting a gaming partner and an adaptation partner further enhances cooperation, provided that an appropriate selection rule and parameters are adopted. We also show that this combined model significantly enhances cooperation by reducing the degree of activity in the underlying network; we measure the degree of activity with a quantity called effective degree. More precisely, during the initial evolutionary stage in which the global cooperation fraction declines because initially allocated cooperators becoming defectors, the model shows that weak cooperative clusters perish and only a few strong cooperative clusters survive. This finding is the most important key to attaining significant network reciprocity.

  17. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel as, for example, the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint as the whole raw keys can be used for both parameter estimation and secret key generation, without compromising the security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in a MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.

  18. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing special-subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  19. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing special-subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  20. A Multi-Parametric Device with Innovative Solid Electrodes for Long-Term Monitoring of pH, Redox-Potential and Conductivity in a Nuclear Waste Repository

    PubMed Central

    Daoudi, Jordan; Betelu, Stephanie; Tzedakis, Theodore; Bertrand, Johan; Ignatiadis, Ioannis

    2017-01-01

    We present an innovative electrochemical probe for the monitoring of pH, redox potential and conductivity in near-field rocks of deep geological radioactive waste repositories. The probe is composed of a monocrystalline antimony electrode for pH sensing, four AgCl/Ag-based reference or Cl−-selective electrodes, one Ag2S/Ag-based reference or S2−-selective electrode, as well as four platinum electrodes, a gold electrode and a glassy-carbon electrode for redox potential measurements. Galvanostatic electrochemical impedance spectroscopy using the AgCl/Ag-based and platinum electrodes measures conductivity. The use of such a multi-parameter probe provides redundant information, based as it is on the simultaneous behaviour under identical conditions of different electrodes of the same material, as well as on that of electrodes made of different materials. This identifies the changes in physical and chemical parameters in a solution, as well as the redox reactions controlling the measured potential, both in the solution and/or at the electrode/solution interface. Understanding the electrochemical behaviour of the selected materials is thus a key point of our research, as it provides the basis for constructing the abacuses needed for developing robust and reliable field sensors. PMID:28608820

  1. A Multi-Parametric Device with Innovative Solid Electrodes for Long-Term Monitoring of pH, Redox-Potential and Conductivity in a Nuclear Waste Repository.

    PubMed

    Daoudi, Jordan; Betelu, Stephanie; Tzedakis, Theodore; Bertrand, Johan; Ignatiadis, Ioannis

    2017-06-13

    We present an innovative electrochemical probe for the monitoring of pH, redox potential and conductivity in near-field rocks of deep geological radioactive waste repositories. The probe is composed of a monocrystalline antimony electrode for pH sensing, four AgCl/Ag-based reference or Cl−-selective electrodes, one Ag₂S/Ag-based reference or S2−-selective electrode, as well as four platinum electrodes, a gold electrode and a glassy-carbon electrode for redox potential measurements. Galvanostatic electrochemical impedance spectroscopy using the AgCl/Ag-based and platinum electrodes measures conductivity. The use of such a multi-parameter probe provides redundant information, based as it is on the simultaneous behaviour under identical conditions of different electrodes of the same material, as well as on that of electrodes made of different materials. This identifies the changes in physical and chemical parameters in a solution, as well as the redox reactions controlling the measured potential, both in the solution and/or at the electrode/solution interface. Understanding the electrochemical behaviour of the selected materials is thus a key point of our research, as it provides the basis for constructing the abacuses needed for developing robust and reliable field sensors.

  2. Discrete Event Simulation Modeling and Analysis of Key Leader Engagements

    DTIC Science & Technology

    2012-06-01

    to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader... HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component... whether the local Green player has resource-critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing...

  3. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have carried out research in this area, but very few have also taken the step-over ratio, or radial depth of cut, as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut and feed rate are the other input variables taken into consideration. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. The selection of input process parameters in order to obtain the best machining outputs is the key contribution of this research work.

  4. Selected physical properties of various diesel blends

    NASA Astrophysics Data System (ADS)

    Hlaváčová, Zuzana; Božiková, Monika; Hlaváč, Peter; Regrut, Tomáš; Ardonová, Veronika

    2018-01-01

    The quality determination of biofuels requires identifying the chemical and physical parameters. The key physical parameters are rheological, thermal and electrical properties. In our study, we investigated samples of diesel blends with rape-seed methyl esters content in the range from 3 to 100%. In these, we measured basic thermophysical properties, including thermal conductivity and thermal diffusivity, using two different transient methods - the hot-wire method and the dynamic plane source. Every thermophysical parameter was measured 100 times using both methods for all samples. Dynamic viscosity was measured during the heating process under the temperature range 20-80°C. A digital rotational viscometer (Brookfield DV 2T) was used for dynamic viscosity detection. Electrical conductivity was measured using digital conductivity meter (Model 1152) in a temperature range from -5 to 30°C. The highest values of thermal parameters were reached in the diesel sample with the highest biofuel content. The dynamic viscosity of samples increased with higher concentration of bio-component rapeseed methyl esters. The electrical conductivity of blends also increased with rapeseed methyl esters content.

  5. Modelling and multi objective optimization of WEDM of commercially Monel super alloy using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu

    2016-09-01

    In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and the non-traditional particle swarm optimization technique (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters such as pulse-on time (TON), pulse-off time (TOFF), peak current (IP) and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict MRR and SR over a wide range of input parameters. The optimization of multiple responses has been performed to satisfy the priorities of multiple users by using the Taguchi-desirability function method and the particle swarm optimization technique. Analysis of variance (ANOVA) is also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement has been demonstrated.

  6. Evaluation of the biophysical limitations on photosynthesis of four varietals of Brassica rapa

    NASA Astrophysics Data System (ADS)

    Pleban, J. R.; Mackay, D. S.; Aston, T.; Ewers, B.; Weinig, C.

    2014-12-01

    Evaluating the performance of agricultural varietals can support the identification of genotypes that will increase yield and can inform management practices. The biophysical limitations of photosynthesis are among the key factors that necessitate evaluation. This study evaluated how four biophysical limitations on photosynthesis (stomatal response to vapor pressure deficit, maximum carboxylation rate by Rubisco (Ac), rate of photosynthetic electron transport (Aj), and triose phosphate use (At)) vary between four Brassica rapa genotypes. Leaf gas exchange data were used in an ecophysiological process model to conduct this evaluation. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) integrates the carbon uptake and utilization rate-limiting factors for plant growth. A Bayesian framework integrated in TREES here used net A as the target to estimate the four limiting factors for each genotype. As a first step, the Bayesian framework was used for outlier detection, with data points outside the 95% confidence interval of model estimation eliminated. Next, parameter estimation facilitated the evaluation of how the limiting factors on A differ between genotypes. Parameters evaluated included the maximum carboxylation rate (Vcmax), the quantum yield (ϕJ), the ratio between Vcmax and the electron transport rate (J), and triose phosphate utilization (TPU). Finally, as triose phosphate utilization has been shown not to play a major role in limiting A in many plants, the inclusion of At in the models was evaluated using the deviance information criterion (DIC). The outlier detection resulted in a narrowing of the estimated parameter distributions, allowing for greater differentiation of genotypes. Results show that genotypes vary in how the limitations shape assimilation. The range in Vcmax, a key parameter in Ac, was 203.2-223.9 μmol m-2 s-1, while the range in ϕJ, a key parameter in Aj, was 0.463-0.497 μmol m-2 s-1. The added complexity of the TPU limitation did not improve model performance in the genotypes assessed, based on DIC. By identifying how varietals differ in their biophysical limitations on photosynthesis, genotype selection can be informed for agricultural goals. Further work aims at applying this approach to a fifth limiting factor on photosynthesis, mesophyll conductance.

  7. Ability of various materials to detect explosive vapors by fluorescent technologies: a comparative study.

    PubMed

    Bouhadid, Myriam; Caron, Thomas; Veignal, Florian; Pasquinet, Eric; Ratsimihety, Amédée; Ganachaud, François; Montméat, Pierre

    2012-10-15

    For the development of fluorescent sensors, one of the key points is choosing the sensitive material. In this article, we aim at evaluating, under strictly identical experimental conditions, the performance of three materials for the detection of dinitrotoluene (a volatile marker of trinitrotoluene) through different parameters: response time, fluorescence intensity, sensitivity, reversibility, reaction after successive exposures and long-term stability. The results are discussed according to the nature of the sensitive materials. This first study made it possible to select a conjugated molecule as the best sensitive material for the development of a lab-made prototype. In the second part, the selectivity of this particular sensitive material was studied and its ability to detect TNT was demonstrated. Copyright © 2012. Published by Elsevier B.V.

  8. The TESS Input Catalog and Selection of Targets for the TESS Transit Search

    NASA Astrophysics Data System (ADS)

    Pepper, Joshua; Stassun, Keivan G.; Paegert, Martin; Oelkers, Ryan; De Lee, Nathan Michael; Torres, Guillermo; TESS Target Selection Working Group

    2018-01-01

    The TESS mission will photometrically survey millions of the brightest stars over almost the entire sky to detect transiting exoplanets. A key step to enable that search is the creation of the TESS Input Catalog (TIC), a compiled catalog of 700 million stars and galaxies with observed and calculated parameters. From the TIC we derive the Candidate Target List (CTL) to identify target stars for the 2-minute TESS postage stamps. The CTL is designed to identify the best stars for the detection of small planets, which includes all bright cool dwarf stars in the sky. I will describe the target selection strategy, the distribution of stars in the current CTL, and how both the TIC and CTL will expand and improve going forward.

  9. The Compositional Dependence of the Microstructure and Properties of CMSX-4 Superalloys

    NASA Astrophysics Data System (ADS)

    Yu, Hao; Xu, Wei; Van Der Zwaag, Sybrand

    2018-01-01

    The degradation of creep resistance in Ni-based single-crystal superalloys is essentially ascribed to their microstructural evolution. Yet there is a lack of work that manages to predict (even qualitatively) the effect of alloying element concentrations on the rate of microstructural degradation. In this research, a computational model is presented that connects the rafting kinetics of Ni superalloys to their chemical composition by combining thermodynamic calculations with a modified microstructural model. To simulate the evolution of key microstructural parameters during creep, the isotropic coarsening rate and the γ/γ' misfit stress are defined as composition-related parameters, and the effects of service temperature, time, and applied stress are taken into consideration. Two commercial superalloys, for which the kinetics of the rafting process have been reported, are selected as the reference alloys, and the corresponding microstructural parameters are simulated and compared with experimental observations from the literature. The results confirm that our physical model, which requires no fitting parameters, manages to predict (semiquantitatively) the microstructural parameters for different service conditions, as well as the effects of alloying element concentrations. The model can contribute to the computational design of new Ni-based superalloys.

  10. Body of Knowledge (BOK) for Leadless Quad Flat No-Lead/bottom Termination Components (QFN/BTC) Package Trends and Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially unique for radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions of multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because they show lower life than standard leaded package assembly under thermal cycling exposures. Inspection of hidden solder joints for assuring quality is challenging and is similar to ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judiciously selecting and narrowing the follow-on packages for evaluation and testing, as well as for low-risk insertion in high-reliability applications.

  11. Application of Unmanned Aircraft System Instrumentation to Study Coastal Geochemistry

    NASA Astrophysics Data System (ADS)

    Coffin, R. B.; Osburn, C. L.; Smith, J. P.

    2016-02-01

    Coastal evaluation of key geochemical cycles is in strong need of thorough spatial data to address diverse topics. In many field studies, we find that fixed-station data taken from ship operations do not provide a complete understanding of key research questions. In complicated systems, where there is a need to integrate physical, chemical and biological parameters, data taken from research vessels need to be interpreted across large spatial areas. New technology in Unmanned Aircraft System (UAS) instrumentation, coupled with shipboard data, can provide the spatial coverage needed for a thorough evaluation of coastal sciences. This presentation will provide field data related to UAS application in two diverse environments. One study focuses on the flux of carbon dioxide and methane from the Alaskan Arctic tundra and the shallow Beaufort Sea coastal region to the atmosphere; gas chemistry from samples is used to predict the relative fluxes to the atmosphere. A second study applies bio-optical analyses to differentiate between dissolved organic carbon (DOC) and lignin in the Gulf of Mexico coastal water column. This wide range of parameters in diverse ecosystems was selected to show the current capability of UAS applications and their potential for addressing large-scale questions about climate change and carbon cycling in coastal waters.

  12. Body of Knowledge (BOK) for Leadless Quad Flat No-Lead/Bottom Termination Components (QFN/BTC) Package Trends and Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially unique for radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions of multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because they show lower life than standard leaded package assembly under thermal cycling exposures. Inspection of hidden solder joints for assuring quality is challenging and is similar to ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judiciously selecting and narrowing the follow-on packages for evaluation and testing, as well as for low-risk insertion in high-reliability applications.

  13. Determination of key diffusion and partition parameters and their use in migration modelling of benzophenone from low-density polyethylene (LDPE) into different foodstuffs.

    PubMed

    Maia, Joaquim; Rodríguez-Bernaldo de Quirós, Ana; Sendón, Raquel; Cruz, José Manuel; Seiler, Annika; Franz, Roland; Simoneau, Catherine; Castle, Laurence; Driffield, Malcolm; Mercea, Peter; Oldring, Peter; Tosa, Valer; Paseiro, Perfecto

    2016-01-01

    The mass transport process (migration) of a model substance, benzophenone (BZP), from LDPE into selected foodstuffs at three temperatures was studied. A mathematical model based on Fick's Second Law of Diffusion was used to simulate the migration process and a good correlation between experimental and predicted values was found. The acquired results contribute to a better understanding of this phenomenon and the parameters so-derived were incorporated into the migration module of the recently launched FACET tool (Flavourings, Additives and Food Contact Materials Exposure Tool). The migration tests were carried out at different time-temperature conditions, and BZP was extracted from LDPE and analysed by HPLC-DAD. With all data, the parameters for migration modelling (diffusion and partition coefficients) were calculated. Results showed that the diffusion coefficients (within both the polymer and the foodstuff) are greatly affected by the temperature and food's physical state, whereas the partition coefficient was affected significantly only by food characteristics, particularly fat content.
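    A minimal sketch of Fick's-second-law migration modelling in the spirit of the study (not the FACET module itself): 1D diffusion through an LDPE film into a well-mixed food treated as a perfect sink, a simplifying assumption in place of the full partition-coefficient boundary condition. All parameter values are hypothetical.

    ```python
    # Minimal sketch: explicit finite differences for migrant diffusion out of a
    # polymer film; returns the fraction transferred to the food.
    import numpy as np

    def migrate(D=1e-13, L=50e-6, C0=1.0, t_end=3600.0, nx=50):
        """D: diffusion coefficient in polymer [m^2/s], L: film thickness [m]."""
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / D                  # stability: dt <= dx^2 / (2D)
        C = np.full(nx, C0)                   # uniform initial concentration
        m0 = C.sum() * dx                     # initial migrant mass (per area)
        for _ in range(int(t_end / dt)):
            C[0] = C[1]                       # sealed back face (zero flux)
            C[-1] = 0.0                       # food-contact face: perfect sink
            C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
        return 1.0 - C.sum() * dx / m0

    print(f"migrated fraction after 1 h: {migrate():.2%}")
    ```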

  14. Technology advances needed for photovoltaics to achieve widespread grid price parity: Widespread grid price parity for photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones-Albertus, Rebecca; Feldman, David; Fu, Ran

    2016-04-20

    To quantify the potential value of technological advances to the photovoltaics (PV) sector, this paper examines the impact of changes to key PV module and system parameters on the levelized cost of energy (LCOE). The parameters selected include module manufacturing cost, efficiency, degradation rate, and service lifetime. NREL's System Advisor Model (SAM) is used to calculate the lifecycle cost per kilowatt-hour (kWh) for residential, commercial, and utility scale PV systems within the contiguous United States, with a focus on utility scale. Different technological pathways are illustrated that may achieve the Department of Energy's SunShot goal of PV electricity that is at grid price parity with conventional electricity sources. In addition, the impacts on the 2015 baseline LCOE due to changes to each parameter are shown. These results may be used to identify research directions with the greatest potential to impact the cost of PV electricity.
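    For concreteness, a generic discounted-cash-flow LCOE over the parameters the study varies (cost, yield, degradation rate, lifetime); this is not NREL's SAM implementation, and the numbers are illustrative.

    ```python
    # Minimal sketch: LCOE = discounted lifetime costs / discounted lifetime energy.
    def lcoe(capex, opex_per_year, annual_kwh, degradation, lifetime, discount):
        """Returns $/kWh; capex and opex per kW of capacity, annual_kwh per kW."""
        costs = capex
        energy = 0.0
        for t in range(1, lifetime + 1):
            costs += opex_per_year / (1 + discount) ** t
            energy += annual_kwh * (1 - degradation) ** (t - 1) / (1 + discount) ** t
        return costs / energy

    # Illustrative utility-scale-like numbers (hypothetical)
    print(f"{lcoe(1100, 15, 1800, 0.005, 30, 0.06):.3f} $/kWh")
    ```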

  15. A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks

    NASA Astrophysics Data System (ADS)

    Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan

    2018-01-01

    Breakdown occurrence on expressways is considered to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on the breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that the models using data from 5-10 min prior to breakdown perform the best prediction, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model parameter input, the random forests (RF) model is adopted to identify the key variables. Modelling with the selected 7 parameters, the refined BN model can predict breakdown with adequate accuracy.
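    A minimal sketch of the variable-screening step as described (random forest importance ranking to pick 7 key variables); synthetic data stand in for the Shanghai expressway features, and the modelling details are my illustration.

    ```python
    # Minimal sketch: rank 23 candidate pre-breakdown features with a random
    # forest and keep the top 7 for the downstream Bayesian network.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 23))             # 23 candidate features (synthetic)
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    top7 = np.argsort(rf.feature_importances_)[::-1][:7]
    print("selected feature indices:", top7)
    ```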

  16. The genetic consequences of selection in natural populations.

    PubMed

    Thurman, Timothy J; Barrett, Rowan D H

    2016-04-01

    The selection coefficient, s, quantifies the strength of selection acting on a genetic variant. Despite this parameter's central importance to population genetic models, until recently we have known relatively little about the value of s in natural populations. With the development of molecular genetic techniques in the late 20th century and the sequencing technologies that followed, biologists are now able to identify genetic variants and directly relate them to organismal fitness. We reviewed the literature for published estimates of natural selection acting at the genetic level and found over 3000 estimates of selection coefficients from 79 studies. Selection coefficients were roughly exponentially distributed, suggesting that the impact of selection at the genetic level is generally weak but can occasionally be quite strong. We used both nonparametric statistics and formal random-effects meta-analysis to determine how selection varies across biological and methodological categories. Selection was stronger when measured over shorter timescales, with the mean magnitude of s greatest for studies that measured selection within a single generation. Our analyses found conflicting trends when considering how selection varies with the genetic scale (e.g., SNPs or haplotypes) at which it is measured, suggesting a need for further research. Besides these quantitative conclusions, we highlight key issues in the calculation, interpretation, and reporting of selection coefficients and provide recommendations for future research. © 2016 John Wiley & Sons Ltd.

  17. Using a Functional Simulation of Crisis Management to Test the C2 Agility Model Parameters on Key Performance Variables

    DTIC Science & Technology

    2013-06-01

    [DTIC report-form residue; only fragments of this record are recoverable.] 18th ICCRTS. The paper uses a functional simulation of crisis management to test the C2 Agility Model parameters on key performance variables, and notes that agility can be conceptualized at a number of different levels, for instance at the team level.

  18. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem, where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
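
    A toy version of the subset-evaluation idea (not the paper's engine model): for a linear measurement model, each candidate sensor suite is scored by the trace of the weighted least-squares estimation error covariance, and the best suite is kept. The matrix sizes and noise levels are invented.

    ```python
    # Sketch: exhaustively score sensor subsets by estimation error covariance.
    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n_health, n_sensors = 3, 6
    H = rng.normal(size=(n_sensors, n_health))      # sensor sensitivity matrix (stand-in)
    R = np.diag(rng.uniform(0.5, 2.0, n_sensors))   # sensor noise variances (stand-in)

    def score(subset):
        idx = list(subset)
        Hs, Rs = H[idx], R[np.ix_(idx, idx)]
        # error covariance of the weighted least-squares health estimate
        P = np.linalg.inv(Hs.T @ np.linalg.inv(Rs) @ Hs)
        return np.trace(P)

    best = min(itertools.combinations(range(n_sensors), 4), key=score)
    print("best 4-sensor suite:", best, "score:", round(score(best), 3))
    ```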

  19. Sensitivity study of Space Station Freedom operations cost and selected user resources

    NASA Technical Reports Server (NTRS)

    Accola, Anne; Fincannon, H. J.; Williams, Gregory J.; Meier, R. Timothy

    1990-01-01

    The results of sensitivity studies performed to estimate probable ranges for four key Space Station parameters using the Space Station Freedom's Model for Estimating Space Station Operations Cost (MESSOC) are discussed. The variables examined are grouped into five main categories: logistics, crew, design, space transportation system, and training. The modification of these variables implies programmatic decisions in areas such as orbital replacement unit (ORU) design, investment in repair capabilities, and crew operations policies. The model utilizes a wide range of algorithms and an extensive trial logistics data base to represent Space Station operations. The trial logistics data base consists largely of a collection of the ORUs that comprise the mature station, and their characteristics based on current engineering understanding of the Space Station. A nondimensional approach is used to examine the relative importance of variables on parameters.

  20. Rotary wave-ejector enhanced pulse detonation engine

    NASA Astrophysics Data System (ADS)

    Nalim, M. R.; Izzy, Z. A.; Akbari, P.

    2012-01-01

    The use of a non-steady ejector based on wave rotor technology is modeled for pulse detonation engine performance improvement and for compatibility with turbomachinery components in hybrid propulsion systems. The rotary wave ejector device integrates a pulse detonation process with an efficient momentum transfer process in specially shaped channels of a single wave-rotor component. In this paper, a quasi-one-dimensional numerical model is developed to help design the basic geometry and operating parameters of the device. The unsteady combustion and flow processes are simulated and compared with a baseline PDE without ejector enhancement. A preliminary performance assessment is presented for the wave ejector configuration, considering the effect of key geometric parameters, which are selected for high specific impulse. It is shown that the rotary wave ejector concept has significant potential for thrust augmentation relative to a basic pulse detonation engine.

  1. Application of GRA for Sustainable Material Selection and Evaluation Using LCA

    NASA Astrophysics Data System (ADS)

    Jayakrishna, Kandasamy; Vinodh, Sekar; Sakthi Sanghvi, Vijayaselvan; Deepika, Chinadurai

    2016-07-01

    Material selection is identified as a key parameter in establishing any product as sustainable, considering its end-of-life (EoL) characteristics. An accurate understanding of expected service conditions and environmental considerations is crucial, and material selection plays a vital role given demanding customer expectations and stringent laws. Therefore, this article presents an integrated approach for sustainable material selection using grey relational analysis (GRA), considering the EoL disposal strategies with respect to an automotive product. GRA, an impact evaluation model, measures the degree of similarity between the comparability (choice of material) sequence and the reference (EoL strategies) sequence based on the relational grade. The ranking result shows outranking relationships in the order ABS-REC > PP-INC > AL-REM > PP-LND > ABS-LND > ABS-INC > PU-LND > AL-REC > AL-LND > PU-INC > AL-INC. The best sustainable material selected was ABS, and recycling was selected as the best EoL strategy, with a grey relational value of 2.43856. The best material selected by this approach, ABS, was evaluated for its viability using life cycle assessment, and the estimated impacts also proved the practicability of the selected material, highlighting the focus on the dehumidification step in the manufacturing of the case product using this developed multi-criteria approach.
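
    A minimal GRA sketch, assuming a larger-is-better normalization and the common distinguishing coefficient ζ = 0.5; the alternatives, criteria values, and reference sequence are placeholders rather than the paper's data.

    ```python
    # Grey relational analysis (GRA) sketch: normalize, compute deviation
    # sequences, grey relational coefficients, and the per-alternative grade.
    import numpy as np

    def grey_relational_grade(X, ref, zeta=0.5):
        Xn = (X - X.min(0)) / (X.max(0) - X.min(0))   # column-wise [0, 1] scaling
        rn = (ref - X.min(0)) / (X.max(0) - X.min(0))
        delta = np.abs(Xn - rn)                        # deviation sequences
        coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        return coef.mean(axis=1)                       # grade = mean coefficient

    X = np.array([[7.0, 3.2, 0.8],    # alternative 1 (placeholder values)
                  [5.5, 4.1, 0.6],    # alternative 2
                  [6.1, 2.9, 0.9]])   # alternative 3
    ref = X.max(axis=0)               # ideal reference sequence
    print(grey_relational_grade(X, ref))  # higher grade = better alternative
    ```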

  2. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
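
    The selection loop might look like the following sketch: sweep parameter combinations, form estimated ground truth by majority vote over the resulting feature maps, score each map as an ROC point against it, and keep the combination closest to the ideal corner. The detector and parameter grid are stand-ins, not the paper's algorithm.

    ```python
    # Sketch of Yitzhaky-Peli style automated parameter selection.
    import itertools
    import numpy as np

    def detect(img, thresh, scale):              # stand-in feature detector
        return (img * scale) > thresh

    rng = np.random.default_rng(2)
    img = rng.random((64, 64))
    params = list(itertools.product([0.4, 0.5, 0.6], [0.8, 1.0, 1.2]))
    maps = [detect(img, t, s) for t, s in params]

    # estimated ground truth: pixels flagged by a majority of parameter sets
    gt = np.mean(maps, axis=0) > 0.5

    def roc_point(m):
        tpr = (m & gt).sum() / max(gt.sum(), 1)
        fpr = (m & ~gt).sum() / max((~gt).sum(), 1)
        return fpr, tpr

    # pick the parameter combination whose ROC point is nearest (FPR=0, TPR=1)
    best = min(zip(params, maps),
               key=lambda pm: np.hypot(roc_point(pm[1])[0], 1 - roc_point(pm[1])[1]))
    print("chosen (threshold, scale):", best[0])
    ```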

  3. Sample treatments prior to capillary electrophoresis-mass spectrometry.

    PubMed

    Hernández-Borges, Javier; Borges-Miquel, Teresa M; Rodríguez-Delgado, Miguel Angel; Cifuentes, Alejandro

    2007-06-15

    Sample preparation is a crucial part of chemical analysis and in most cases can become the bottleneck of the whole analytical process. Its adequacy is a key factor in determining the success of the analysis and, therefore, careful selection and optimization of the parameters controlling sample treatment should be carried out. This work reviews the different strategies that have been developed for sample preparation prior to capillary electrophoresis-mass spectrometry (CE-MS). Namely, the present work presents an exhaustive and critical review of the different sample treatments used together with on-line CE-MS, including works published from January 2000 to July 2006.

  4. NIR light propagation in a digital head model for traumatic brain injury (TBI)

    PubMed Central

    Francis, Robert; Khan, Bilal; Alexandrakis, George; Florence, James; MacFarlane, Duncan

    2015-01-01

    Near infrared spectroscopy (NIRS) is capable of detecting and monitoring acute changes in cerebral blood volume and oxygenation associated with traumatic brain injury (TBI). Wavelength selection, source-detector separation, optode density, and detector sensitivity are key design parameters that determine the imaging depth, chromophore separability, and, ultimately, clinical usefulness of a NIRS instrument. We present simulation results of NIR light propagation in a digital head model as it relates to the ability to detect intracranial hematomas and monitor the peri-hematomal tissue viability. These results inform NIRS instrument design specific to TBI diagnosis and monitoring. PMID:26417498

  5. Heritability in the genomics era--concepts and misconceptions.

    PubMed

    Visscher, Peter M; Hill, William G; Wray, Naomi R

    2008-04-01

    Heritability allows a comparison of the relative importance of genes and environment to the variation of traits within and across populations. The concept of heritability and its definition as an estimable, dimensionless population parameter was introduced by Sewall Wright and Ronald Fisher nearly a century ago. Despite continuous misunderstandings and controversies over its use and application, heritability remains key to the response to selection in evolutionary biology and agriculture, and to the prediction of disease risk in medicine. Recent reports of substantial heritability for gene expression and new estimation methods using marker data highlight the relevance of heritability in the genomics era.
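
    Two textbook relations underlie the abstract's framing; these are standard quantitative-genetics forms, not equations quoted from the paper:

    ```latex
    % Narrow-sense heritability: additive genetic variance over phenotypic variance
    \[ h^2 = \frac{V_A}{V_P} \]
    % Breeder's equation: response R to a selection differential S
    \[ R = h^2 S \]
    ```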

  6. Assessment of Stage 35 With APNASA

    NASA Technical Reports Server (NTRS)

    Celestina, Mark L.; Mulac, Richard

    2009-01-01

    An assessment of APNASA was conducted at NASA Glenn Research Center under the Fundamental Aeronautics Program to determine their predictive capabilities. The geometry selected for this study was Stage 35 which is a single stage transonic compressor. A speedline at 100% speed was generated and compared to experimental data at 100% speed for two turbulence models. Performance of the stage at 100% speed and profiles of several key aerodynamic parameters are compared to the survey data downstream of the stator in this report. In addition, hub leakage was modeled and compared to solutions without leakage and the available experimental data.

  7. Guidelines for Computing Longitudinal Dynamic Stability Characteristics of a Subsonic Transport

    NASA Technical Reports Server (NTRS)

    Thompson, Joseph R.; Frank, Neal T.; Murphy, Patrick C.

    2010-01-01

    A systematic study is presented to guide the selection of a numerical solution strategy for URANS computation of a subsonic transport configuration undergoing simulated forced oscillation about its pitch axis. Forced oscillation is central to the prevalent wind tunnel methodology for quantifying aircraft dynamic stability derivatives from force and moment coefficients, which is the ultimate goal for the computational simulations. Extensive computations are performed that lead to key insights into the critical numerical parameters affecting solution convergence. A preliminary linear harmonic analysis is included to demonstrate the potential of extracting dynamic stability derivatives from computational solutions.
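
    As a simplified illustration of how dynamic stability information is extracted from a forced oscillation, the sketch below least-squares fits the first harmonic of a synthetic pitching-moment signal into in-phase and out-of-phase components; the paper's actual URANS post-processing is more involved.

    ```python
    # First-harmonic fit of a forced-oscillation response (synthetic signal).
    import numpy as np

    omega, A = 2.0, np.deg2rad(1.0)                   # forcing frequency, amplitude
    t = np.linspace(0, 20, 2000)
    alpha = A * np.sin(omega * t)                     # imposed pitch oscillation
    Cm = -0.8 * alpha - 0.15 * A * np.cos(omega * t)  # synthetic moment response

    # fit Cm ~ a*sin(wt) + b*cos(wt): a -> in-phase (stiffness-like) component,
    # b -> out-of-phase (damping-like) component
    M = np.column_stack([np.sin(omega * t), np.cos(omega * t)])
    a, b = np.linalg.lstsq(M, Cm, rcond=None)[0]
    print(f"in-phase {a / A:.3f}, out-of-phase {b / A:.3f}  (per radian)")
    ```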

  8. Testing contamination risk assessment methods for toxic elements from mine waste sites

    NASA Astrophysics Data System (ADS)

    Abdaal, A.; Jordan, G.; Szilassi, P.; Kiss, J.; Detzky, G.

    2012-04-01

    Major incidents involving mine waste facilities and poor environmental management practices have left a legacy of thousands of contaminated sites, such as those in the historic mining areas of the Carpathian Basin. Associated environmental risks have triggered the development of new EU environmental legislation to prevent and minimize the effects of such incidents. The Mine Waste Directive requires a risk-based inventory of all mine waste sites in Europe by May 2012. To address the mining problems, a standard risk-based Pre-selection protocol has been developed by the EU Commission. This paper discusses heavy metal contamination in acid mine drainage (AMD) for risk assessment (RA) along the Source-Pathway-Receptor chain, using decision support methods intended to aid national and regional organizations in the inventory and assessment of potentially contaminated mine waste sites. Several recognized methods, such as the European Environmental Agency (EEA) standard PRAMS model for soil contamination and the US EPA-based AIMSS and Irish HMS-IRC models for RA of abandoned sites, are reviewed, compared and tested for the mining waste environment. In total, 145 ore mine waste sites from Hungary have been selected for scientific testing using the EU Pre-selection protocol as a case study. The proportion of uncertain to certain responses for a site, and for the total number of sites, may give insight into the specific and overall uncertainty in the data used. The Pre-selection questions are efficiently linked to a GIS system as database inquiries using digital spatial data to directly generate answers. Key parameters such as distance to the nearest surface and ground water bodies, to settlements and to protected areas are calculated and statistically evaluated using STATGRAPHICS® in order to calibrate the RA models. According to our results, of the 145 sites, 11 are the most risky, having foundation slopes >20°; 57 sites are within <500 m of the nearest surface water bodies; and 33 sites are within <680 m of the nearest settlements. Moreover, 25 sites lie directly above 'poor status' ground water bodies and 91 sites are located in protected Natura2000 sites (distance = 0). Analysis of the total score of all sites was performed, resulting in six risk classes, as follows: <21 (class I, 4 sites), 21-31 (class II, 16 sites), 31-42 (class III, 27 sites), 42-54 (class IV, 38 sites), 54-66 (class V, 40 sites) and >66 (class VI, 20 sites). The total risk scores and key parameters are provided in separate tables and GIS maps to facilitate interpretation and comparison. Results of the Pre-selection protocol are consistent with those of the screening PRAMS model. KEY WORDS: contamination risk assessment, Mine Waste Directive, Pre-selection Protocol, PRA.MS, AIMSS, abandoned mine sites, GIS
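
    A small sketch of the score-to-class binning implied by the reported boundaries; the boundary values come from the text above, while treating them as half-open intervals is an assumption.

    ```python
    # Risk-class binning using the score boundaries stated in the record above.
    def risk_class(total_score):
        bounds = [21, 31, 42, 54, 66]        # class edges from the study
        for i, b in enumerate(bounds, start=1):
            if total_score < b:
                return i                      # classes I..V
        return 6                              # class VI: score >= 66

    print(risk_class(45))  # -> 4 (class IV)
    ```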

  9. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
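
    A condensed sketch of the front end of such a pipeline using standard OpenCV building blocks (KLT tracking plus RANSAC outlier rejection); the circle matching, stereo depth, and 6-DoF Levenberg-Marquardt refinement stages of the paper are not reproduced here.

    ```python
    # KLT tracking + RANSAC outlier rejection between two grayscale frames.
    import cv2
    import numpy as np

    def track_and_filter(prev_gray, cur_gray):
        pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
        pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
        p0, p1 = pts0[status == 1], pts1[status == 1]
        # RANSAC on the fundamental matrix rejects mismatches and moving points
        _, inliers = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
        keep = inliers.ravel() == 1
        return p0[keep], p1[keep]   # inlier flow, input to motion estimation
    ```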

  10. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatching of the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508

  11. Handwriting: Feature Correlation Analysis for Biometric Hashes

    NASA Astrophysics Data System (ADS)

    Vielhauer, Claus; Steinmetz, Ralf

    2004-12-01

    In the application domain of electronic commerce, biometric authentication can provide one possible solution for the key management problem. Besides server-based approaches, methods of deriving digital keys directly from biometric measures appear to be advantageous. In this paper, we analyze one of our recently published specific algorithms of this category based on behavioral biometrics of handwriting, the biometric hash. Our interest is to investigate to which degree each of the underlying feature parameters contributes to the overall intrapersonal stability and interpersonal value space. We will briefly discuss related work in feature evaluation and introduce a new methodology based on three components: the intrapersonal scatter (deviation), the interpersonal entropy, and the correlation between both measures. Evaluation of the technique is presented based on two data sets of different size. The method presented will allow determination of effects of parameterization of the biometric system, estimation of value space boundaries, and comparison with other feature selection approaches.
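
    A rough sketch of the three per-feature measures named above, under simple stand-in definitions: mean within-user standard deviation for intrapersonal scatter, histogram entropy over the pooled values for interpersonal entropy, and the correlation between the two across features. The original paper's exact formulations may differ.

    ```python
    # Per-feature scatter, entropy, and their correlation on synthetic data.
    import numpy as np

    def evaluate_features(samples):           # samples: (users, reps, features)
        scatter = samples.std(axis=1).mean(axis=0)      # per-feature scatter
        pooled = samples.reshape(-1, samples.shape[2])
        entropy = []
        for f in pooled.T:
            p, _ = np.histogram(f, bins=16)
            p = p[p > 0] / p.sum()
            entropy.append(-(p * np.log2(p)).sum())
        entropy = np.array(entropy)
        corr = np.corrcoef(scatter, entropy)[0, 1]
        return scatter, entropy, corr

    rng = np.random.default_rng(3)
    data = rng.normal(size=(20, 10, 8))   # 20 writers, 10 samples, 8 features
    print(evaluate_features(data)[2])
    ```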

  12. Using diurnal temperature signals to infer vertical groundwater-surface water exchange

    USGS Publications Warehouse

    Irvine, Dylan J.; Briggs, Martin A.; Lautz, Laura K.; Gordon, Ryan P.; McKenzie, Jeffrey M.; Cartwright, Ian

    2017-01-01

    Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer.
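
    One widely used amplitude-ratio formulation (after Hatch et al., 2006) can be inverted numerically for the thermal front velocity, as in the hedged sketch below; the sensor spacing, diffusivity, heat-capacity ratio, and observed ratio are illustrative placeholders.

    ```python
    # Flux from the diurnal amplitude ratio, Hatch et al. (2006) style.
    import numpy as np
    from scipy.optimize import brentq

    z = 0.10          # sensor spacing [m] (placeholder)
    kappa = 1.0e-6    # effective thermal diffusivity [m^2/s] (placeholder)
    P = 86400.0       # period of the diurnal signal [s]
    gamma = 0.6       # heat-capacity ratio rho*c / (rho_w*c_w) (placeholder)

    def amplitude_ratio(v):
        alpha = np.sqrt(v**4 + (8 * np.pi * kappa / P) ** 2)
        return np.exp(z / (2 * kappa) * (v - np.sqrt((alpha + v**2) / 2)))

    Ar_obs = 0.35     # observed deep/shallow amplitude ratio (placeholder)
    v = brentq(lambda v: amplitude_ratio(v) - Ar_obs, -1e-4, 1e-4)
    q = v * gamma     # Darcy flux [m/s]; sign indicates up- vs downwelling
    print(f"thermal front velocity {v:.2e} m/s, flux {q:.2e} m/s")
    ```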

  13. Programmable Gain Amplifiers with DC Suppression and Low Output Offset for Bioelectric Sensors

    PubMed Central

    Carrera, Albano; de la Rosa, Ramón; Alonso, Alonso

    2013-01-01

    DC-offset and DC-suppression are key parameters in bioelectric amplifiers. However, specific DC analyses are not often explained. Several factors influence the DC-budget: the programmable gain, the programmable cut-off frequencies for high pass filtering and, the low cut-off values and the capacitor blocking issues involved. A new intermediate stage is proposed to address the DC problem entirely. Two implementations were tested. The stage is composed of a programmable gain amplifier (PGA) with DC-rejection and low output offset. Cut-off frequencies are selectable and values from 0.016 to 31.83 Hz were tested, and the capacitor deblocking is embedded in the design. Hence, this PGA delivers most of the required gain with constant low output offset, notwithstanding the gain or cut-off frequency selected. PMID:24084109

  14. Systematic study of high-frequency ultrasonic transducer design for laser-scanning photoacoustic ophthalmoscopy

    PubMed Central

    Ma, Teng; Zhang, Xiangyang; Chiu, Chi Tat; Chen, Ruimin; Kirk Shung, K.; Zhou, Qifa; Jiao, Shuliang

    2014-01-01

    Photoacoustic ophthalmoscopy (PAOM) is a high-resolution in vivo imaging modality that is capable of providing specific optical absorption information for the retina. A high-frequency ultrasonic transducer is one of the key components in PAOM, which is in contact with the eyelid through coupling gel during imaging. The ultrasonic transducer plays a crucial role in determining the image quality affected by parameters such as spatial resolution, signal-to-noise ratio, and field of view. In this paper, we present the results of a systematic study on a high-frequency ultrasonic transducer design for PAOM. The design includes piezoelectric material selection, frequency selection, and the fabrication process. Transducers of various designs were successfully applied for capturing images of biological samples in vivo. The performances of these designs are compared and evaluated. PMID:24441942

  15. Systematic study of high-frequency ultrasonic transducer design for laser-scanning photoacoustic ophthalmoscopy.

    PubMed

    Ma, Teng; Zhang, Xiangyang; Chiu, Chi Tat; Chen, Ruimin; Kirk Shung, K; Zhou, Qifa; Jiao, Shuliang

    2014-01-01

    Photoacoustic ophthalmoscopy (PAOM) is a high-resolution in vivo imaging modality that is capable of providing specific optical absorption information for the retina. A high-frequency ultrasonic transducer is one of the key components in PAOM, which is in contact with the eyelid through coupling gel during imaging. The ultrasonic transducer plays a crucial role in determining the image quality affected by parameters such as spatial resolution, signal-to-noise ratio, and field of view. In this paper, we present the results of a systematic study on a high-frequency ultrasonic transducer design for PAOM. The design includes piezoelectric material selection, frequency selection, and the fabrication process. Transducers of various designs were successfully applied for capturing images of biological samples in vivo. The performances of these designs are compared and evaluated.

  16. Cryogenic Etching of High Aspect Ratio 400 nm Pitch Silicon Gratings.

    PubMed

    Miao, Houxun; Chen, Lei; Mirzaeimoghri, Mona; Kasica, Richard; Wen, Han

    2016-10-01

    The cryogenic process and Bosch process are two widely used processes for reactive ion etching of high aspect ratio silicon structures. This paper focuses on the cryogenic deep etching of 400 nm pitch silicon gratings with various etching mask materials, including polymer, Cr, SiO2 and Cr-on-polymer. Undercut is found to be the key factor limiting the achievable aspect ratio for the direct hard masks of Cr and SiO2, while etch selectivity is the limiting factor for the polymer mask. The Cr-on-polymer mask provides the same high selectivity as Cr and reduces the excessive undercut introduced by direct hard masks. By optimizing the etching parameters, we etched a 400 nm pitch grating to a depth of ≈10.6 μm, corresponding to an aspect ratio of ≈53.

  17. Clinical Issues-November 2017.

    PubMed

    Johnstone, Esther M

    2017-11-01

    Heating, ventilation, and air-conditioning (HVAC) systems in the OR. Key words: airborne contaminants, HVAC system, air pressure, air quality, temperature and humidity. Air changes and positive pressure. Key words: air changes, positive pressure airflow, unidirectional airflow, outdoor air, recirculated air. Product selection. Key words: product evaluation, product selection, selection committee. Entry into practice. Key words: associate degree in nursing, bachelor of science in nursing, entry-level position, advanced education, BSN-prepared RNs. Mentoring in perioperative nursing. Key words: mentor, novice, practice improvement, nursing workforce. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  18. An effective biometric discretization approach to extract highly discriminative, informative, and privacy-protective binary representation

    NASA Astrophysics Data System (ADS)

    Lim, Meng-Hui; Teoh, Andrew Beng Jin

    2011-12-01

    Biometric discretization derives a binary string for each user based on an ordered set of biometric features. This representative string ought to be discriminative, informative, and privacy protective when it is employed as a cryptographic key in various security applications upon error correction. However, it is commonly believed that satisfying the first and the second criteria simultaneously is not feasible, and a tradeoff between them is always definite. In this article, we propose an effective fixed bit allocation-based discretization approach which involves discriminative feature extraction, discriminative feature selection, unsupervised quantization (quantization that does not utilize class information), and linearly separable subcode (LSSC)-based encoding to fulfill all the ideal properties of a binary representation extracted for cryptographic applications. In addition, we examine a number of discriminative feature-selection measures for discretization and identify the proper way of setting an important feature-selection parameter. Encouraging experimental results vindicate the feasibility of our approach.

  19. Impact of tuning CO2-philicity in polydimethylsiloxane-based membranes for carbon dioxide separation

    DOE PAGES

    Hong, Tao; Chatterjee, Sabornie; Mahurin, Shannon M.; ...

    2017-02-22

    Amidoxime-functionalized polydimethylsiloxane (AO-PDMSPNB) membranes with various amidoxime compositions were synthesized via ring-opening metathesis polymerization followed by post-polymerization modification. Compared to other previously reported PDMS-based membranes, the amidoxime-functionalized membranes show enhanced CO2 permeability and CO2/N2 selectivity. The overall gas separation performance (CO2 permeability 6800 Barrer; CO2/N2 selectivity 19) of the highest performing membrane exceeds the Robeson upper bound line, and the excellent permeability of the copolymer itself provides great potential for real world applications where huge volumes of gases are separated. This study details how tuning the CO2-philicity within rubbery polymer matrices influences gas transport properties. Key parameters for tuning gas transport properties are discussed, and the experimental results show good consistency with theoretical calculations. Finally, this study provides a roadmap to enhancing gas separation performance in rubbery polymers by tuning gas solubility selectivity.

  20. Effect of key parameters on the selective acid leach of nickel from mixed nickel-cobalt hydroxide

    NASA Astrophysics Data System (ADS)

    Byrne, Kelly; Hawker, William; Vaughan, James

    2017-01-01

    Mixed nickel-cobalt hydroxide precipitate (MHP) is a relatively recent intermediate product in primary nickel production. The material is now being produced on a large scale (approximately 60,000 t/y Ni as MHP) at facilities in Australia (Ravensthorpe, First Quantum Minerals) and Papua New Guinea (Ramu, MCC/Highlands Pacific). The University of Queensland Hydrometallurgy research group developed a new processing technology to refine MHP based on a selective acid leach. This process provides a streamlined route to a high purity nickel product compared with conventional leaching / solvent extraction processes. The selective leaching of nickel from MHP involves stabilising manganese and cobalt in the solid phase using an oxidant. This paper describes a batch reactor study investigating the effect of the timing of acid and oxidant addition on the rate and extent of nickel, cobalt and manganese leaching from industrial MHP. For the conditions studied, it is concluded that the simultaneous addition of acid and oxidant provides the best process outcomes.

  1. Artificial neural networks for modeling ammonia emissions released from sewage sludge composting

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.

    2012-09-01

    The project was designed to develop, test and validate an original neural model describing ammonia emissions generated in composting sewage sludge. The composting mix included selected structural ingredients such as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited for assessing such emissions. The sensitivity analysis of the model with respect to the input variables has shown that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon to nitrogen ratio (C:N).
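
    A stand-in reconstruction of the modelling setup (7 inputs, 1 output, 330/110/110 split), with an sklearn MLP substituting for the original neural tool and synthetic data in place of the composting measurements:

    ```python
    # Sketch: 7-input, 1-output regression with the paper's data split.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.normal(size=(550, 7))                # stand-ins for pH, C:N, etc.
    y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 550)  # synthetic emission

    X_tr, y_tr = X[:330], y[:330]                # learning file (ZU)
    X_val, y_val = X[330:440], y[330:440]        # validation file (ZW)
    X_te, y_te = X[440:], y[440:]                # test file (ZT)

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
    print("validation R^2:", round(model.score(X_val, y_val), 3))
    print("test R^2:", round(model.score(X_te, y_te), 3))
    ```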

  2. Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox

    NASA Astrophysics Data System (ADS)

    Li, R. N.; Liu, X.; Liu, S. J.

    2013-12-01

    In order to ensure the high efficiency of the whole flexible drive train of the front-end speed-adjusting wind turbine, the working principle of the main part of the drive train is analyzed. As critical parameters, the rotating speed ratios of three planetary gear trains are selected as the research subject. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on the torque balance and the energy balance, and referring to the hydraulic mechanical transmission characteristics, the transmission efficiency expression for the whole drive train is established. The fitness function and constraint functions are established based on the drive train transmission efficiency and the torque converter rotating speed ratio range, respectively, and the optimization calculation is carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for the exact matching of wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter and synchronous generator, ensure that the drive train works with high efficiency, and give a reference for the selection of the torque converter and hydraulic mechanical transmission.
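
    A minimal genetic-algorithm sketch of this kind of optimization: maximize a stand-in efficiency function over three speed ratios under box constraints. The objective and bounds are hypothetical; the paper's efficiency expression and MATLAB toolbox settings are not reproduced.

    ```python
    # Toy GA: selection, blend crossover, and Gaussian mutation over 3 ratios.
    import numpy as np

    rng = np.random.default_rng(5)
    lo, hi = np.array([0.5, 0.5, 0.5]), np.array([3.0, 3.0, 3.0])

    def efficiency(r):   # placeholder for the drive-train efficiency model
        return -np.sum((r - np.array([1.8, 1.2, 2.4])) ** 2)

    pop = rng.uniform(lo, hi, size=(40, 3))
    for _ in range(100):
        fit = np.array([efficiency(r) for r in pop])
        parents = pop[np.argsort(fit)[-20:]]                  # selection
        kids = parents[rng.integers(0, 20, 40)]               # reproduction
        alpha = rng.random((40, 1))                           # blend crossover
        kids = alpha * kids + (1 - alpha) * parents[rng.integers(0, 20, 40)]
        kids += rng.normal(0, 0.05, kids.shape)               # mutation
        pop = np.clip(kids, lo, hi)                           # box constraints

    print("best ratios:", pop[np.argmax([efficiency(r) for r in pop])])
    ```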

  3. Quantitative structure-activity relationship analysis of the pharmacology of para-substituted methcathinone analogues.

    PubMed

    Bonano, J S; Banks, M L; Kolanos, R; Sakloth, F; Barnier, M L; Glennon, R A; Cozzi, N V; Partilla, J S; Baumann, M H; Negus, S S

    2015-05-01

    Methcathinone (MCAT) is a potent monoamine releaser and parent compound to emerging drugs of abuse, including mephedrone (4-CH3 MCAT), the para-methyl analogue of MCAT. This study examined quantitative structure-activity relationships (QSAR) for MCAT and six para-substituted MCAT analogues on (a) in vitro potency to promote monoamine release via dopamine and serotonin transporters (DAT and SERT, respectively), and (b) in vivo modulation of intracranial self-stimulation (ICSS), a behavioural procedure used to evaluate abuse potential. Neurochemical and behavioural effects were correlated with steric (Es), electronic (σp) and lipophilic (πp) parameters of the para substituents. For neurochemical studies, drug effects on monoamine release through DAT and SERT were evaluated in rat brain synaptosomes. For behavioural studies, drug effects were tested in male Sprague-Dawley rats implanted with electrodes targeting the medial forebrain bundle and trained to lever-press for electrical brain stimulation. MCAT and all six para-substituted analogues increased monoamine release via DAT and SERT and dose- and time-dependently modulated ICSS. In vitro selectivity for DAT versus SERT correlated with in vivo efficacy to produce abuse-related ICSS facilitation. In addition, the Es values of the para substituents correlated with both selectivity for DAT versus SERT and magnitude of ICSS facilitation. Selectivity for DAT versus SERT in vitro is a key determinant of abuse-related ICSS facilitation by these MCAT analogues, and steric aspects of the para substituent of the MCAT scaffold (indicated by Es) are key determinants of this selectivity. © 2014 The British Pharmacological Society.

  4. Selective Fragmentation of Biorefinery Corncob Lignin into p-Hydroxycinnamic Esters with a Supported ZnMoO4 Catalyst.

    PubMed

    Wang, Shuizhong; Gao, Wa; Li, Helong; Xiao, Ling-Ping; Sun, Run-Cang; Song, Guoyong

    2018-04-16

    Lignin is the largest renewable resource of bio-aromatics, and catalytic fragmentation of lignin into phenolic monomers is increasingly recognized as an important starting point for lignin valorization. Herein, we report that zinc molybdate (ZnMoO4) supported on MCM-41 can catalyze the fragmentation of biorefinery technical lignin, enzymatic mild acidolysis lignin and native lignin derived from corncob, to give lignin oil products containing 15 to 37.8 wt% phenolic monomers, with high selectivities (up to 78%) towards methyl coumarate (1) and methyl ferulate (2). The effects of key parameters such as solvent, reaction temperature, time, H2 pressure and catalyst dosage on activity and selectivity were examined. The loss of zinc from the catalyst is identified as a primary cause of deactivation, and catalytic activity and selectivity can be preserved over at least six cycles by thermal calcination. The high selectivity to compounds 1 and 2 makes them easy to separate and purify from the lignin oil product, thus providing sustainable monomers for the preparation of functional polyetheresters and polyesters. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Perspective: Size selected clusters for catalysis and electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro

    We report that size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this Perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modelling based on density functional theory sampling of local minima and energy barriers or ab initio Molecular Dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Lastly, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  6. Perspective: Size selected clusters for catalysis and electrochemistry

    DOE PAGES

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; ...

    2018-03-15

    We report that size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this Perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modelling based on density functional theory sampling of local minima and energy barriers or ab initio Molecular Dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Lastly, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  7. Perspective: Size selected clusters for catalysis and electrochemistry

    NASA Astrophysics Data System (ADS)

    Halder, Avik; Curtiss, Larry A.; Fortunelli, Alessandro; Vajda, Stefan

    2018-03-01

    Size-selected clusters containing a handful of atoms may possess noble catalytic properties different from nano-sized or bulk catalysts. Size- and composition-selected clusters can also serve as models of the catalytic active site, where an addition or removal of a single atom can have a dramatic effect on their activity and selectivity. In this perspective, we provide an overview of studies performed under both ultra-high vacuum and realistic reaction conditions aimed at the interrogation, characterization, and understanding of the performance of supported size-selected clusters in heterogeneous and electrochemical reactions, which address the effects of cluster size, cluster composition, cluster-support interactions, and reaction conditions, the key parameters for the understanding and control of catalyst functionality. Computational modeling based on density functional theory sampling of local minima and energy barriers or ab initio molecular dynamics simulations is an integral part of this research by providing fundamental understanding of the catalytic processes at the atomic level, as well as by predicting new materials compositions which can be validated in experiments. Finally, we discuss approaches which aim at the scale up of the production of well-defined clusters for use in real world applications.

  8. Evolution of stochastic demography with life history tradeoffs in density-dependent age-structured populations.

    PubMed

    Lande, Russell; Engen, Steinar; Sæther, Bernt-Erik

    2017-10-31

    We analyze the stochastic demography and evolution of a density-dependent age- (or stage-) structured population in a fluctuating environment. A positive linear combination of age classes (e.g., weighted by body mass) is assumed to act as the single variable of population size, N, exerting density dependence on age-specific vital rates through an increasing function of population size. The environment fluctuates in a stationary distribution with no autocorrelation. We show by analysis and simulation of age structure, under assumptions often met by vertebrate populations, that the stochastic dynamics of population size can be accurately approximated by a univariate model governed by three key demographic parameters: the intrinsic rate of increase and carrying capacity in the average environment, r and K, and the environmental variance in population growth rate, σe². Allowing these parameters to be genetically variable and to evolve, but assuming that a fourth parameter, θ, measuring the nonlinearity of density dependence, remains constant, the expected evolution maximizes the expected value of N^θ. This shows that the magnitude of environmental stochasticity governs the classical trade-off between selection for higher r versus higher K. However, selection also acts to decrease σe², so the simple life-history trade-off between r- and K-selection may be obscured by additional trade-offs between them and σe². Under the classical logistic model of population growth with linear density dependence (θ = 1), life-history evolution in a fluctuating environment tends to maximize the average population size. Published under the PNAS license.
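
    A short simulation sketch of the univariate approximation described above, i.e., a theta-logistic model with environmental noise governed by r, K, θ and σe²; the parameter values are illustrative only, not taken from the paper.

    ```python
    # Theta-logistic population dynamics with environmental stochasticity.
    import numpy as np

    rng = np.random.default_rng(6)
    r, K, theta, sigma_e = 0.5, 1000.0, 1.0, 0.2   # illustrative parameters
    N = np.empty(200)
    N[0] = 100.0
    for t in range(199):
        eps = rng.normal(0.0, sigma_e)              # environmental fluctuation
        # log-scale growth with density dependence through (N/K)**theta
        N[t + 1] = N[t] * np.exp(r * (1.0 - (N[t] / K) ** theta) + eps)

    print("mean population size after transient:", round(N[100:].mean(), 1))
    ```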

  9. Ice_Sheets_CCI: Essential Climate Variables for the Greenland Ice Sheet

    NASA Astrophysics Data System (ADS)

    Forsberg, R.; Sørensen, L. S.; Khan, A.; Aas, C.; Evansberget, D.; Adalsteinsdottir, G.; Mottram, R.; Andersen, S. B.; Ahlstrøm, A.; Dall, J.; Kusk, A.; Merryman, J.; Hvidberg, C.; Khvorostovsky, K.; Nagler, T.; Rott, H.; Scharrer, M.; Shepard, A.; Ticconi, F.; Engdahl, M.

    2012-04-01

    As part of the ESA Climate Change Initiative (www.esa-cci.org) a long-term project "ice_sheets_cci" started January 1, 2012, in addition to the existing 11 projects already generating Essential Climate Variables (ECV) for the Global Climate Observing System (GCOS). The "ice_sheets_cci" goal is to generate a consistent, long-term and timely set of key climate parameters for the Greenland ice sheet, to maximize the impact of European satellite data on climate research, from missions such as ERS, Envisat and the future Sentinel satellites. The climate parameters to be provided, at first in a research context, and in the longer perspective by a routine production system, would be grids of Greenland ice sheet elevation changes from radar altimetry, ice velocity from repeat-pass SAR data, as well as time series of marine-terminating glacier calving front locations and grounding lines for floating-front glaciers. The ice_sheets_cci project will involve a broad interaction of the relevant cryosphere and climate communities, first through user consultations and specifications, and later in 2012 optional participation in "best" algorithm selection activities, where prototype climate parameter variables for selected regions and time frames will be produced and validated using an objective set of criteria ("Round-Robin intercomparison"). This comparative algorithm selection activity will be completely open, and we invite all interested scientific groups with relevant experience to participate. The results of the "Round Robin" exercise will form the algorithmic basis for the future ECV production system. First prototype results will be generated and validated by early 2014. The poster will show the planned outline of the project and some early prototype results.

  10. Evaluation of photosynthetic electrons derivation by exogenous redox mediators.

    PubMed

    Longatte, Guillaume; Fu, Han-Yi; Buriez, Olivier; Labbé, Eric; Wollman, Francis-André; Amatore, Christian; Rappaport, Fabrice; Guille-Collignon, Manon; Lemaître, Frédéric

    2015-10-01

    Oxygenic photosynthesis is the complex process that occurs in plants or algae by which the energy from the sun is converted into an electrochemical potential that drives the assimilation of carbon dioxide and the synthesis of carbohydrates. Quinones belong to a family of species commonly found in key processes of living organisms, such as photosynthesis and respiration, in which they act as electron transporters. This makes this class of molecules a popular candidate for biofuel cell and bioenergy applications, insofar as they can be used as cargo to ship electrons to an electrode immersed in the cellular suspension. Nevertheless, such electron carriers are mostly selected empirically. This is why we report on a method involving fluorescence measurements to estimate the ability of seven different quinones to accept photosynthetic electrons downstream of photosystem II, the first protein complex in the light-dependent reactions of oxygenic photosynthesis. To this aim we use a mutant of Chlamydomonas reinhardtii, a unicellular green alga, impaired in electron transfer downstream of photosystem II, and assess the ability of quinones to restore electron flow by fluorescence. In this work, we defined and extracted a "derivation parameter" D that indicates the derivation efficiency of the exogenous quinones investigated. D then identifies 2,6-dichlorobenzoquinone, 2,5-dichlorobenzoquinone and p-phenylbenzoquinone as good candidates. More particularly, our investigations suggested that other key parameters, such as the partition of quinones between different cellular compartments and their propensity to saturate these compartments, should also be taken into account in the process of selecting exogenous quinones for the purpose of deriving photoelectrons from intact algae. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Scheduling on the basis of the research of dependences among the construction process parameters

    NASA Astrophysics Data System (ADS)

    Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga

    2017-10-01

    The article investigates the dependences among construction process parameters: the average integrated qualification level of the shift, the number of workers per shift, and the average daily amount of completed work, analyzed on the basis of correlation coefficients. Basic data for the research of the dependences among the above-stated parameters were collected during the construction of two standard objects, A and B (monolithic houses), over four months of construction (October, November, December, January). The Cobb-Douglas production function yields correlation coefficients close to 1; the function is simple to use and well suited to describing the considered dependences. A development function describing the relationship among the considered construction process parameters is derived. The development function makes it possible to select the optimal quantitative and qualification structure of the brigade link for the next period of time, according to a preset amount of work. A function of optimized amounts of work, which reflects the interrelation of key parameters of the construction process, is also developed; its values should be used as the average standard for scheduling the storming periods of construction.

  12. Composite multi-parameter ranking of real and virtual compounds for design of MC4R agonists: renaissance of the Free-Wilson methodology.

    PubMed

    Nilsson, Ingemar; Polla, Magnus O

    2012-10-01

    Drug design is a multi-parameter task present in the analysis of experimental data for synthesized compounds and in the prediction of new compounds with desired properties. This article describes the implementation of a binned scoring and composite ranking scheme for 11 experimental parameters that were identified as key drivers in the MC4R project. The composite ranking scheme was implemented in an AstraZeneca tool for analysis of project data, thereby providing an immediate re-ranking as new experimental data was added. The automated ranking also highlighted compounds overlooked by the project team. The successful implementation of a composite ranking on experimental data led to the development of an equivalent virtual score, which was based on Free-Wilson models of the parameters from the experimental ranking. The individual Free-Wilson models showed good to high predictive power with a correlation coefficient between 0.45 and 0.97 based on the external test set. The virtual ranking adds value to the selection of compounds for synthesis but error propagation must be controlled. The experimental ranking approach adds significant value, is parameter independent and can be tuned and applied to any drug discovery project.
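
    A toy binned-scoring and composite-ranking scheme of the kind described; the bins, thresholds, and parameter names are invented for illustration and are not the 11 MC4R project parameters.

    ```python
    # Binned scoring: each parameter maps to a 0/1/2 score; compounds are
    # ranked on the summed (composite) score.
    import pandas as pd

    df = pd.DataFrame({"potency_nM": [12, 350, 45],
                       "solubility_uM": [80, 5, 40],
                       "clearance": [10, 55, 22]},
                      index=["cpd1", "cpd2", "cpd3"])

    bins = {
        "potency_nM":    lambda v: 2 if v < 50 else (1 if v < 500 else 0),
        "solubility_uM": lambda v: 2 if v > 50 else (1 if v > 10 else 0),
        "clearance":     lambda v: 2 if v < 15 else (1 if v < 40 else 0),
    }
    score = sum(df[c].map(f) for c, f in bins.items())
    print(score.sort_values(ascending=False))  # composite ranking
    ```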

  13. A national-scale analysis of the impacts of drought on water quality in UK rivers

    NASA Astrophysics Data System (ADS)

    Coxon, G.; Howden, N. J. K.; Freer, J. E.; Whitehead, P. G.; Bussi, G.

    2015-12-01

    Impacts of droughts on water quality are difficult to quantify but are essential to manage ecosystems and maintain public water supply. During drought, river water quality is significantly changed by increased residence times, reduced dilution and enhanced biogeochemical processes. However, the severity of the impact varies between catchments and depends on multiple factors, including the sensitivity of the river to drought conditions, anthropogenic influences in the catchment and different delivery patterns of key nutrient, contaminant and mineral sources. A key constraint is the availability of data for key water quality parameters such that the impacts of drought periods on particular determinands can be identified. We use national-scale water quality monitoring data to investigate the impacts of drought periods on water quality in the United Kingdom (UK). The UK Water Quality Sampling Harmonised Monitoring Scheme (HMS) dataset consists of >200 UK sites with weekly to monthly sampling of many water quality variables over the past 40 years. This covers several major UK droughts (1975-1976, 1983-1984, 1989-1992, 1995 and 2003), which differ in severity and in spatial and temporal extent, and hence in their temporal impact on water quality. Several key water quality parameters, including water temperature, nitrate, dissolved organic carbon, orthophosphate, chlorophyll and pesticides, are selected from the database. These were chosen based on their availability for many of the sites, high sampling resolution and importance to the drinking water function and ecological status of the river. The water quality time series were then analysed to investigate whether water quality during droughts deviated significantly from non-drought periods, and to examine how the results varied spatially, for different drought periods and for different water quality parameters. Our results show that there is no simple conclusion as to the effects of drought on water quality in UK rivers; impacts are diverse in timing, magnitude and duration. We consider several scenarios in which management interventions may alleviate water quality pressures, and discuss how the many interacting factors need to be better characterised to support detailed mechanistic models and improve our process understanding.
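
    Analyses of this type often reduce to comparing a determinand's distribution inside and outside dated drought windows; the sketch below uses a Mann-Whitney U test on a synthetic series as one plausible stand-in, not the paper's actual statistical method.

    ```python
    # Drought vs non-drought comparison for one determinand (synthetic data).
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(7)
    dates = np.arange(1000)
    conc = rng.lognormal(1.0, 0.5, 1000)      # e.g. a nitrate series (synthetic)
    drought = (dates > 300) & (dates < 400)   # flagged drought window (synthetic)

    stat, p = mannwhitneyu(conc[drought], conc[~drought])
    print(f"U={stat:.0f}, p={p:.3f}")  # small p: drought period differs
    ```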

  14. Genuine binding energy of the hydrated electron

    PubMed Central

    Luckhaus, David; Yamamoto, Yo-ichi; Suzuki, Toshinori; Signorell, Ruth

    2017-01-01

    The unknown influence of inelastic and elastic scattering of slow electrons in water has made it difficult to clarify the role of the solvated electron in radiation chemistry and biology. We combine accurate scattering simulations with experimental photoemission spectroscopy of the hydrated electron in a liquid water microjet, with the aim of resolving ambiguities regarding the influence of electron scattering on binding energy spectra, photoelectron angular distributions, and probing depths. The scattering parameters used in the simulations are retrieved from independent photoemission experiments of water droplets. For the ground-state hydrated electron, we report genuine values devoid of scattering contributions for the vertical binding energy and the anisotropy parameter of 3.7 ± 0.1 eV and 0.6 ± 0.2, respectively. Our probing depths suggest that even vacuum ultraviolet probing is not particularly surface-selective. Our work demonstrates the importance of quantitative scattering simulations for a detailed analysis of key properties of the hydrated electron. PMID:28508051

  15. Discovery of Potent, Orally Bioavailable Inhibitors of Human Cytomegalovirus

    PubMed Central

    2016-01-01

    A high-throughput screen based on a viral replication assay was used to identify inhibitors of the human cytomegalovirus. Using this approach, hit compound 1 was identified as a 4 μM inhibitor of HCMV that was specific and selective over other herpes viruses. Time of addition studies indicated compound 1 exerted its antiviral effect early in the viral life cycle. Mechanism of action studies also revealed that this series inhibited infection of MRC-5 and ARPE19 cells by free virus and via direct cell-to-cell spread from infected to uninfected cells. Preliminary structure–activity relationships demonstrated that the potency of compound 1 could be improved to a low nanomolar level, but metabolic stability was a key optimization parameter for this series. A strategy focused on minimizing metabolic hydrolysis of the N1-amide led to an alternative scaffold in this series with improved metabolic stability and good pharmacokinetic parameters in rat. PMID:27190604

  16. Safety monitoring and reactor transient interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hench, J. E.; Fukushima, T. Y.

    1983-12-20

    An apparatus which monitors a subset of control panel inputs in a nuclear reactor power plant, the subset being those indicators of plant status which are of a critical nature during an unusual event. A display (10) is provided for displaying primary information (14) as to whether the core is covered and likely to remain covered, including information as to the status of subsystems needed to cool the core and maintain core integrity. Secondary display information (18,20) is provided which can be viewed selectively for more detailed information when an abnormal condition occurs. The primary display information has messages (24) for prompting an operator as to which one of a number of pushbuttons (16) to press to bring up the appropriate secondary display (18,20). The apparatus utilizes a thermal-hydraulic analysis to more accurately determine key parameters (such as water level) from other measured parameters, such as power, pressure, and flow rate.

  17. Proof of concept of a novel SMA cage actuator

    NASA Astrophysics Data System (ADS)

    Deyer, Christopher W.; Brei, Diann E.

    2001-06-01

    Numerous industrial applications that currently utilize expensive solenoids or slow wax motors are good candidates for smart material actuation. Many of these applications require millimeter-scale displacement at low cost, thereby eliminating piezoelectric technologies. Fortunately, there is a subset of these applications that can tolerate the slower response of shape memory alloys. This paper details a proof-of-concept study of a novel SMA cage actuator intended for proportional braking in commercial appliances. The chosen actuator architecture consists of an SMA wire cage enclosing a return spring. To develop an understanding of the influences of key design parameters on the actuator response time and displacement amplitude, a half-factorial 2^5 Design of Experiments (DOE) study was conducted utilizing eight differently configured prototypes. The DOE results guided the selection of the design parameters for the final proof-of-concept actuator. This actuator was built and experimentally characterized for stroke, proportional control and response time.
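
    For readers unfamiliar with fractional factorial designs, the sketch below generates a generic half-fraction of a 2^5 factorial (16 of the 32 runs) from the defining relation I = ABCDE. The factor labels are placeholders; the paper's actual factors and run matrix are not reproduced here.

```python
from itertools import product

# Each factor takes coded levels -1 or +1. A half-fraction of the full
# 2^5 factorial keeps the 16 runs satisfying the defining relation
# I = ABCDE, i.e. the product of all five factor levels equals +1.
factors = ["A", "B", "C", "D", "E"]
full = list(product([-1, 1], repeat=5))
half = [run for run in full if run[0] * run[1] * run[2] * run[3] * run[4] == 1]

print(len(half))  # 16 runs instead of 32
for run in half:
    print(dict(zip(factors, run)))
```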

  18. Cost of ownership for inspection equipment

    NASA Astrophysics Data System (ADS)

    Dance, Daren L.; Bryson, Phil

    1993-08-01

    Cost of Ownership (CoO) models are increasingly a part of the semiconductor equipment evaluation and selection process. These models enable semiconductor manufacturers and equipment suppliers to quantify a system in terms of dollars per wafer. Because of the complex nature of the semiconductor manufacturing process, there are several key attributes that must be considered in order to accurately reflect the true 'cost of ownership'. While most CoO work to date has been applied to production equipment, the need to understand cost of ownership for inspection and metrology equipment presents unique challenges. Critical parameters such as detection sensitivity as a function of size and type of defect are not included in current CoO models yet are, without question, major factors in the technical evaluation process and life-cycle cost. This paper illustrates the relationship between these parameters, as components of the alpha and beta risk, and cost of ownership.

  19. Key Principles of Superfund Remedy Selection

    EPA Pesticide Factsheets

    Guidance on the primary considerations of remedy selection which are universally applicable at Superfund sites. Key guidance documents here include: Rules of Thumb for Superfund Remedy Selection and Role of the Baseline Risk Assessment.

  20. Capsule implosion optimization during the indirect-drive National Ignition Campaign

    NASA Astrophysics Data System (ADS)

    Landen, O. L.; Edwards, J.; Haan, S. W.; Robey, H. F.; Milovich, J.; Spears, B. K.; Weber, S. V.; Clark, D. S.; Lindl, J. D.; MacGowan, B. J.; Moses, E. I.; Atherton, J.; Amendt, P. A.; Boehly, T. R.; Bradley, D. K.; Braun, D. G.; Callahan, D. A.; Celliers, P. M.; Collins, G. W.; Dewald, E. L.; Divol, L.; Frenje, J. A.; Glenzer, S. H.; Hamza, A.; Hammel, B. A.; Hicks, D. G.; Hoffman, N.; Izumi, N.; Jones, O. S.; Kilkenny, J. D.; Kirkwood, R. K.; Kline, J. L.; Kyrala, G. A.; Marinak, M. M.; Meezan, N.; Meyerhofer, D. D.; Michel, P.; Munro, D. H.; Olson, R. E.; Nikroo, A.; Regan, S. P.; Suter, L. J.; Thomas, C. A.; Wilson, D. C.

    2011-05-01

    Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.

  1. Deconstructing three-dimensional (3D) structure of absorptive glass mat (AGM) separator to tailor pore dimensions and amplify electrolyte uptake

    NASA Astrophysics Data System (ADS)

    Rawal, Amit; Rao, P. V. Kameswara; Kumar, Vijay

    2018-04-01

    Absorptive glass mat (AGM) separator is a vital technical component in valve regulated lead acid (VRLA) batteries that can be tailored for a desired application. To selectively design and tailor the AGM separator, its intricate three-dimensional (3D) structure needs to be unraveled. Herein, a toolkit of 3D analytical models of pore size distribution and electrolyte uptake, expressed via the wicking characteristics of AGM separators under unconfined and confined states, is presented. 3D data of fiber orientation distributions obtained previously through X-ray micro-computed tomography (microCT) analysis are used as a key set of input parameters. The predictive ability of the pore size distribution model is assessed through the commonly used experimental set-up, which usually applies high levels of compressive stress. Further, the existing analytical model of the wicking characteristics of AGM separators has been extended to account for 3D characteristics and subsequently compared with experimental results. A good agreement between theory and experiment paves the way to simulating the realistic charge-discharge modes of the battery by applying cyclic loading conditions. A threshold criterion describing the invariant behavior of pore size and wicking characteristics, in terms of the maximum permissible limit of key structural parameters during the charge-discharge mode of the battery, has also been proposed.

  2. Remote health monitoring: predicting outcome success based on contextual features for cardiovascular disease.

    PubMed

    Alshurafa, Nabil; Eastwood, Jo-Ann; Pourhomayoun, Mohammad; Liu, Jason J; Sarrafzadeh, Majid

    2014-01-01

    Current studies have produced a plethora of remote health monitoring (RHM) systems designed to enhance the care of patients with chronic diseases. Many RHM systems are designed to improve patient risk factors for cardiovascular disease, including physiological parameters such as body mass index (BMI) and waist circumference, and lipid profiles such as low density lipoprotein (LDL) and high density lipoprotein (HDL). There are several patient characteristics that could be determining factors for a patient's RHM outcome success, but these characteristics have been largely unidentified. In this paper, we analyze results from an RHM system deployed in a six-month Women's Heart Health study of 90 patients, and apply advanced feature selection and machine learning algorithms to identify patients' key baseline contextual features and build effective prediction models that help determine RHM outcome success. We introduce Wanda-CVD, a smartphone-based RHM system designed to help participants with cardiovascular disease risk factors by motivating them through wireless coaching using feedback and prompts as social support. We analyze key contextual features that secure positive patient outcomes in both physiological parameters and lipid profiles. Results from the Women's Heart Health study show that health threat of heart disease, quality of life, family history, stress factors, social support, and anxiety at baseline all help predict patient RHM outcome success.

  3. Genetic potential of common bean progenies obtained by different breeding methods evaluated in various environments.

    PubMed

    Pontes Júnior, V A; Melo, P G S; Pereira, H S; Melo, L C

    2016-09-02

    Grain yield is strongly influenced by the environment, has polygenic and complex inheritance, and is a key trait in the selection and recommendation of cultivars. Breeding programs should efficiently explore the genetic variability resulting from crosses by selecting the most appropriate method for breeding in segregating populations. The goal of this study was to evaluate and compare the genetic potential of common bean progenies of carioca grain for grain yield, obtained by different breeding methods and evaluated in different environments. Progenies originating from crosses between the lines CNFC 7812 and CNFC 7829 were replanted up to the F7 generation using three breeding methods in segregating populations: population (bulk), bulk within F2 progenies, and single-seed descent (SSD). Fifteen F8 progenies per method, two controls (BRS Estilo and Perola), and the parents were evaluated in a 7 x 7 simple lattice design, with plots of two 4-m rows. The tests were conducted in 10 environments in four States of Brazil and in three growing seasons in 2009 and 2010. Genetic parameters including genetic variance, heritability, variance of interaction, and expected selection gain were estimated. Genetic variability among progenies and the effect of progeny-environment interactions were determined for the three methods. The breeding methods differed significantly due to the effects of sampling procedures on the progenies and due to natural selection, which mainly affected the bulk method. The SSD and bulk methods provided populations with better estimates of genetic parameters and more stable progenies that were less affected by interaction with the environment.

  4. Leaf Photosynthetic Parameters Related to Biomass Accumulation in a Global Rice Diversity Survey

    PubMed Central

    Zheng, Guangyong; Hamdani, Saber; Essemine, Jemaa; Song, Qingfeng; Wang, Hongru

    2017-01-01

    Mining natural variations is a major approach to identify new options to improve crop light use efficiency. So far, successes in identifying photosynthetic parameters positively related to crop biomass accumulation through this approach are scarce, possibly due to the earlier emphasis on properties related to leaf instead of canopy photosynthetic efficiency. This study aims to uncover rice (Oryza sativa) natural variations to identify leaf physiological parameters that are highly correlated with biomass accumulation, a surrogate of canopy photosynthesis. To do this, we systematically investigated 14 photosynthetic parameters and four morphological traits in a rice population, which consists of 204 U.S. Department of Agriculture-curated minicore accessions collected globally and 11 elite Chinese rice cultivars, in both Beijing and Shanghai. To identify key components responsible for the variance of biomass accumulation, we applied a stepwise feature-selection approach based on linear regression models. Although there are large variations in photosynthetic parameters measured in different environments, we observed that the photosynthetic rate under low light (A_low) was highly related to biomass accumulation and also exhibited high genomic inheritability in both environments, suggesting its great potential to be used as a target for future rice breeding programs. Large variations in A_low among modern rice cultivars further suggest the great potential of using this parameter in contemporary rice breeding for the improvement of biomass and, hence, yield potential. PMID:28739819
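
    The stepwise feature-selection idea described above can be sketched with scikit-learn's forward sequential selection over a linear regression model. The data here are random stand-ins (215 accessions x 14 parameters is only the study's scale, not its data), so this shows the mechanics rather than the study's result.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Stand-in data: 215 accessions x 14 photosynthetic parameters
# (hypothetical), with biomass accumulation as the response.
X = rng.normal(size=(215, 14))
y = 2.0 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(scale=0.3, size=215)

# Forward stepwise selection of the parameters that best explain
# biomass under a linear regression model, with cross-validation.
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward", cv=5
)
selector.fit(X, y)
print("selected parameter indices:", np.flatnonzero(selector.get_support()))
```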

  5. Q-marker based strategy for CMC research of Chinese medicine: A case study of Panax Notoginseng saponins.

    PubMed

    Zhong, Yi; Zhu, Jieqiang; Yang, Zhenzhong; Shao, Qing; Fan, Xiaohui; Cheng, Yiyu

    2018-01-31

    To ensure pharmaceutical quality, chemistry, manufacturing and control (CMC) research is essential. However, due to the inherent complexity of Chinese medicine (CM), CMC study of CM remains a great challenge for academia, industry, and regulatory agencies. Recently, the quality-marker (Q-marker) concept was proposed for establishing quality standards and quality analysis approaches for Chinese medicine, which sheds light on Chinese medicine's CMC study. Here, the manufacturing process of Panax Notoginseng Saponins (PNS) is taken as a case study, and the present work establishes a Q-marker based research strategy for CMC of Chinese medicine. The Q-markers of PNS are selected and established by integrating the chemical profile with pharmacological activities. Then, the key processes of PNS manufacturing are identified by material flow analysis. Furthermore, modeling algorithms are employed to explore the relationship between Q-markers and critical process parameters (CPPs) of the key processes. Finally, the CPPs of the key processes are optimized in order to improve process efficiency. Among the 97 identified compounds, notoginsenoside R1 and ginsenosides Rg1, Re, Rb1 and Rd are selected as the Q-markers of PNS. Our analysis of PNS manufacturing shows that the extraction process and the column chromatography process are the key processes. With the CPPs of each process as the inputs and the Q-markers' contents as the outputs, two process prediction models are built separately for the extraction and column chromatography processes of Panax notoginseng, both of which possess good prediction ability. Based on the efficiency models of the extraction and column chromatography processes we constructed, the optimal CPPs of both processes are calculated. Our results show that the Q-marker derived CMC research strategy can be applied to analyze the manufacturing processes of Chinese medicine to assure product quality and promote the efficiency of key processes simultaneously. Copyright © 2018 Elsevier GmbH. All rights reserved.

  6. Detecting seasonal variations of soil parameters via field measurements and stochastic simulations in the hillslope

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; An, Hyunuk; Kim, Sanghyun

    2015-04-01

    Soil moisture, a critical factor in hydrologic systems, plays a key role in synthesizing interactions among soil, climate, hydrological response, solute transport and ecosystem dynamics. The spatial and temporal distribution of soil moisture at a hillslope scale is essential for understanding hillslope runoff generation processes. In this study, we implement Monte Carlo simulations at the hillslope scale using a three-dimensional surface-subsurface integrated model (3D model). Numerical simulations are compared with soil moisture measurements obtained using TDR (Mini_TRASE) at 22 locations and 2 or 3 depths over a whole year at a hillslope (area: 2100 square meters) located in the Bongsunsa Watershed, South Korea. In the stochastic Monte Carlo simulations, uncertainties in the soil parameters and input forcing are considered, and model ensembles showing good performance are selected separately for several seasonal periods. The presentation will focus on the characterization of seasonal variations of model parameters based on simulations with field measurements. In addition, structural limitations of the contemporary modeling method will be discussed.

  7. Pertinent parameters in photo-generation of electrons: Comparative study of anatase-based nano-TiO2 suspensions.

    PubMed

    Martel, D; Guerra, A; Turek, P; Weiss, J; Vileno, B

    2016-04-01

    In the field of solar fuel cells, the development of efficient photo-converting semiconductors remains a major challenge. A rational analysis of experimental photocatalytic results obtained with materials in colloidal suspension is needed to access the fundamental knowledge required to improve the design and properties of new materials. In this study, a simple electron donor/nano-TiO2 system is considered and examined via spin-scavenging electron paramagnetic resonance as well as a panel of analytical techniques (composition, optical spectroscopy and dynamic light scattering) for selected types of nano-TiO2. Independent variables (pH, electron donor concentration and TiO2 amount) have been varied, and interdependent variables (aggregate size, aggregate surface vs. volume and acid/base group distribution) are discussed. This work shows that reliable understanding involves a thoughtful combination of interdependent parameters, whereas the specific surface area does not appear to be a pertinent parameter. The conclusion emphasizes the difficulty of identifying the key features of the mechanisms governing photocatalytic properties in nano-TiO2. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Synthesis of Patterned Vertically Aligned Carbon Nanotubes by PECVD Using Different Growth Techniques: A Review.

    PubMed

    Gangele, Aparna; Sharma, Chandra Shekhar; Pandey, Ashok Kumar

    2017-04-01

    Immense development has taken place over the last 25 years, not only to increase the bulk production, repeatability and yield of carbon nanotubes (CNTs), but also to clarify the basic concepts of nucleation and growth. Vertically aligned carbon nanotubes (VACNTs) are forests of CNTs standing perpendicular to a substrate. Their exceptional chemical and physical properties, along with their ordered arrangement and dense structure, make them suitable for various fields. The effects of substrate type, carbon precursor, catalyst (including its physical and chemical state), reaction conditions and many other key parameters have been thoroughly studied and analysed. The aim of this paper is to identify trends and summarize the effects of key parameters, rather than simply cataloguing all experiments reported to date. The identified trends are compared with recent observations on the growth of different types of patterned VACNTs. In this review article, we present a comprehensive analysis of different techniques to precisely determine the role of the parameters responsible for the growth of patterned vertically aligned carbon nanotubes. We cover the techniques proposed over more than two decades to fabricate different structures and configurations of carbon nanotubes on different types of substrates. Apart from a detailed discussion of each technique, along with its specific process and implementation, we also provide a critical analysis of the associated constraints, benefits and shortcomings. For easy reference, we tabulate all the techniques according to the main key factors. This review thus offers an exhaustive discussion and a handy reference for researchers who are new to the field of CNT synthesis or who want to keep abreast of techniques for controlling the growth of VACNT arrays.

  9. Polymeric membrane materials: new aspects of empirical approaches to prediction of gas permeability parameters in relation to permanent gases, linear lower hydrocarbons and some toxic gases.

    PubMed

    Malykh, O V; Golub, A Yu; Teplyakov, V V

    2011-05-11

    Membrane gas separation technologies (air separation, hydrogen recovery from dehydrogenation processes, etc.) traditionally use glassy polymer membranes with dominating permeability of "small" gas molecules. For these purposes, membranes based on low-free-volume glassy polymers (e.g., polysulfone, tetrabromopolycarbonate and polyimides) are used. On the other hand, the application of membrane methods for VOC and toxic gas recovery from air, and for separation of lower-hydrocarbon-containing mixtures (in petrochemistry and oil refining), needs membranes with preferential permeation of components with relatively larger molecular sizes. In general, this kind of permeability is characteristic of rubbers and of high-free-volume glassy polymers. The data files accumulated (covering more than 1500 polymeric materials) represent the region of parameters "inside" these "boundaries." Two main approaches to the prediction of gas permeability of polymers are considered in this paper: (1) statistical treatment of published transport parameters of polymers and (2) prediction using a "diffusion jump" model that considers the key properties of the diffusing molecule and the polymeric matrix. Within approach (1), the paper presents N-dimensional methods for estimating the gas permeability of polymers using "selectivity/permeability" correlations. It is found that the optimal accuracy of prediction is obtained at n=4. Within the solution-diffusion mechanism (2), the key properties include the effective molecular cross-section of the penetrating species, which is responsible for molecular transport in the polymeric matrix, and the well-known force constant (ε/k)eff,i of the {6-12} potential for gas-gas interaction. A set of corrected effective molecular cross-sections of penetrants, including noble gases (He, Ne, Ar, Kr, Xe), permanent gases (H2, O2, N2, CO), ballast and toxic gases (CO2, NO, NO2, SO2, H2S) and linear lower hydrocarbons (CH4, C2H6, C3H8, C4H10, C2H4, C3H6, C4H8-1, C2H2, C3H4-m (methylacetylene) and C3H4-a (allene)), is determined using the two above-mentioned approaches. All of this allows preliminary calculation of the permeability parameters of the above-mentioned gases for most known polymers based on limited experimental data. The new correlations suggested demonstrate that the available free volume of the polymeric matrix plays an important role in determining the rate and selectivity of gas diffusion for glassy polymers, whereas the rate and selectivity of gas diffusion in rubbers are affected mainly by cohesion energy density (CED), both polymer parameters being calculated by the traditional additive group contribution technique. The results of the present study are demonstrated by calculating the expected permeability parameters for lower hydrocarbons and some toxic gases for polynorbornene-based polymers, PIM and PTMSP, outlining the potential for practical application of new membrane polymers. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. The use of fibrous ion exchangers in gold hydrometallurgy

    NASA Astrophysics Data System (ADS)

    Kautzmann, R. M.; Sampaio, C. H.; Cortina, J. L.; Soldatov, V.; Shunkevich, A.

    2002-10-01

    This article examines a family of ion-exchange fibers, FIBAN, containing primary and secondary amine groups. These ion exchangers have a fiber diameter of 20-40 μm, high osmotic and mechanical stability, a high rate of adsorption and regeneration, and excellent dynamic characteristics as filtering media. In particular, this article discusses the use of FIBAN fibrous ion exchangers in the recovery of gold cyanide and base-metal cyanides (copper and mercury) from mineral-leaching solutions. The influence of polymer structure and water content on their extraction ability is described, along with key parameters of gold hydrometallurgy such as extraction efficiency, selectivity, pH dependence, gold cyanide loading, kinetics, and stripping.

  11. Development of an Opto-Acoustic Recanalization System Final Report CRADA No. 1314-96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, L. D.; Adam, H. R.

    The objective of the project was to develop an ischemic stroke treatment system that restores blood flow to the brain by removing occlusions using acoustic energy created by fiber optic delivery of laser light, a process called Opto-Acoustic Recanalization (OAR). The key tasks of the project were to select a laser system; quantify temperature, pressure and particle size distribution; and develop a prototype device incorporating a feedback mechanism. System parameters were developed to cause emulsification while attempting to minimize particle size and collateral damage. The prototype system was tested in animal models and resulted in no visible collateral damage.

  12. Site-Selective Orientated Immobilization of Antibodies and Conjugates for Immunodiagnostics Development

    PubMed Central

    Rusling, James

    2016-01-01

    Immobilized antibody systems are key to developing efficient diagnostics and separation tools. In the last decade, developments in the fields of biomolecular engineering and crosslinker chemistry have greatly influenced the development of this field. With these new approaches at our disposal, several new immobilization methods have been created to address the main challenges associated with immobilized antibodies. The challenges discussed in this review are mainly associated with site-specific immobilization, appropriate orientation, and activity retention. We discuss the effect of antibody immobilization approaches on these parameters and on the performance of an immunoassay. PMID:27876681

  13. Optimization of Designs for Nanotube-based Scanning Probes

    NASA Technical Reports Server (NTRS)

    Harik, V. M.; Gates, T. S.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Optimization of designs for nanotube-based scanning probes, which may be used for high-resolution characterization of nanostructured materials, is examined. Continuum models to analyze the nanotube deformations are proposed to help guide selection of the optimum probe. The limitations on the use of these models that must be accounted for before applying to any design problem are presented. These limitations stem from the underlying assumptions and the expected range of nanotube loading, end conditions, and geometry. Once the limitations are accounted for, the key model parameters along with the appropriate classification of nanotube structures may serve as a basis for the design optimization of nanotube-based probe tips.

  14. Physicians lead the way at America's top hospitals.

    PubMed

    Weber, D O

    2001-01-01

    The 100 Top hospitals are selected annually based on seven critical parameters for each of the 6,200-plus U.S. hospitals with 25 or more beds. They include the previous year's risk-adjusted patient mortality and complication rates, severity-adjusted average patient lengths of stay, expenses, profitability, proportional outpatient revenue, and asset turnover ratio (a measure of facility and technological pace-keeping ability). The winners are selected from five comparable size groupings--small, medium, large community, teaching, and large academic hospitals. Conspicuous among the winners at every level are physician-led organizations. Even in the majority of hospitals headed by non-physician administrators, however, the managerial capabilities of medical directors are the key to success. The most common characteristic of these award-winning hospitals is that the leadership is working together and communicating the institution's goals effectively to all levels of the organization.

  15. Materials selection guidelines for geothermal energy utilization systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, P.F. II; Conover, M.F.

    1981-01-01

    This manual includes geothermal fluid chemistry, corrosion test data, and materials operating experience. Systems using geothermal energy in El Salvador, Iceland, Italy, Japan, Mexico, New Zealand, and the United States are described. The manual provides materials selection guidelines for surface equipment of future geothermal energy systems. The key chemical species that are significant in determining corrosiveness of geothermal fluids are identified. The utilization modes of geothermal energy are defined as well as the various physical fluid parameters that affect corrosiveness. Both detailed and summarized results of materials performance tests and applicable operating experiences from forty sites throughout the world are presented. The applications of various non-metal materials in geothermal environments are discussed. Included in appendices are: corrosion behavior of specific alloy classes in geothermal fluids, corrosion in seawater desalination plants, worldwide geothermal power production, DOE-sponsored utilization projects, plant availability, relative costs of alloys, and composition of alloys. (MHR)

  16. Asymmetric organic-inorganic hybrid membrane formation via block copolymer-nanoparticle co-assembly.

    PubMed

    Gu, Yibei; Dorin, Rachel M; Wiesner, Ulrich

    2013-01-01

    A facile method for forming asymmetric organic-inorganic hybrid membranes for selective separation applications is developed. This approach combines co-assembly of block copolymer (BCP) and inorganic nanoparticles (NPs) with non-solvent induced phase separation. The method is successfully applied to two distinct molar mass BCPs with different fractions of titanium dioxide (TiO2) NPs. The resulting hybrid membranes exhibit structural asymmetry with a thin nanoporous surface layer on top of a macroporous fingerlike support layer. Key parameters that dictate membrane surface morphology include the fraction of inorganics used and the length of time allowed for surface layer development. The resulting membranes exhibit both good selectivity and high permeability (3200 ± 500 L m⁻² h⁻¹ bar⁻¹). This fast and straightforward synthesis method for asymmetric hybrid membranes provides a new self-assembly platform upon which multifunctional and high-performance organic-inorganic hybrid membranes can be formed.

  17. The analysis of the accuracy of spatial models using photogrammetric software: Agisoft Photoscan and Pix4D

    NASA Astrophysics Data System (ADS)

    Barbasiewicz, Adrianna; Widerski, Tadeusz; Daliga, Karol

    2018-01-01

    This article was created as a result of research conducted within a master's thesis. The purpose of the measurements was to analyze the accuracy of point positioning by computer programs. The selected software was specialized software dedicated to photogrammetric work; for comparative purposes, tools with similar functionality were chosen. The resolution of the photos on which the key points were searched was selected as the basic parameter affecting the results. To determine the location of the computed points, the photogrammetric resection rule was followed. To automate the measurement, measurement session planning was omitted. The coordinates of points collected by tachymetric measurement were used as the reference system. The resulting deviations and linear displacements are on the order of millimeters. The visual aspects of the point clouds have also been briefly analyzed.

  18. Effects of sampling techniques on physical parameters and concentrations of selected persistent organic pollutants in suspended matter.

    PubMed

    Pohlert, Thorsten; Hillebrand, Gudrun; Breitung, Vera

    2011-06-01

    This study focusses on the effect of sampling techniques for suspended matter in stream water on subsequent particle-size distribution and concentrations of total organic carbon and selected persistent organic pollutants. The key questions are whether differences between the sampling techniques are due to the separation principle of the devices or due to the difference between time-proportional versus integral sampling. Several multivariate homogeneity tests were conducted on an extensive set of field-data that covers the period from 2002 to 2007, when up to three different sampling techniques were deployed in parallel at four monitoring stations of the River Rhine. The results indicate homogeneity for polychlorinated biphenyls, but significant effects due to the sampling techniques on particle-size, organic carbon and hexachlorobenzene. The effects can be amplified depending on the site characteristics of the monitoring stations.

  19. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  20. Word-addressable holographic memory using symbolic substitution and SLRs

    NASA Astrophysics Data System (ADS)

    McAulay, Alastair D.; Wang, Junqing

    1990-12-01

    A heteroassociative memory is proposed that allows a key word in a dictionary of key words to be used to recall an associated holographic image in a database of images. Symbolic substitution search finds the word sought in a dictionary of key words and generates a beam that selects the corresponding holographic image from a directory of images. In this case, symbolic substitution is used to orthogonalize the key words. Spatial light rebroadcasters are proposed for the key word database. Experimental results demonstrate that symbolic substitution will enable a holographic image to be selected and reconstructed. In the case considered, a holographic image of over 40,000 bits is selected out of eight by using a key word from a dictionary of eight words.

  1. Nano/microvehicles for efficient delivery and (bio)sensing at the cellular level

    PubMed Central

    Esteban-Fernández de Ávila, B.; Yáñez-Sedeño, P.

    2017-01-01

    A perspective review of recent strategies involving the use of nano/microvehicles to address the key challenges associated with delivery and (bio)sensing at the cellular level is presented. The main types and characteristics of the different nano/microvehicles used for these cellular applications are discussed, including fabrication pathways, propulsion (catalytic, magnetic, acoustic or biological) and navigation strategies, and relevant parameters affecting their propulsion performance and sensing and delivery capabilities. Thereafter, selected applications are critically discussed. An emphasis is made on enhancing the extra- and intra-cellular biosensing capabilities, fast cell internalization, rapid inter- or intra-cellular movement, efficient payload delivery and targeted on-demand controlled release in order to greatly improve the monitoring and modulation of cellular processes. A critical discussion of selected breakthrough applications illustrates how these smart multifunctional nano/microdevices operate as nano/microcarriers and sensors at the intra- and extra-cellular levels. These advances allow both the real-time biosensing of relevant targets and processes even at a single cell level, and the delivery of different cargoes (drugs, functional proteins, oligonucleotides and cells) for therapeutics, gene silencing/transfection and assisted fertilization, while overcoming challenges faced by current affinity biosensors and delivery vehicles. Key challenges for the future and the envisioned opportunities and future perspectives of this remarkably exciting field are discussed. PMID:29147499

  2. Mining-related metals in terrestrial food webs of the upper Clark Fork River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastorok, R.A.; LaTier, A.J.; Butcher, M.K.

    1994-12-31

    Fluvial deposits of tailings and other mining-related waste in selected riparian habitats of the Upper Clark Fork River basin (Montana) have resulted in metals-enriched soils. The significance of metals exposure to selected wildlife species was evaluated by measuring tissue residues of metals (arsenic, cadmium, copper, lead, zinc) in key dietary species, including dominant grasses (tufted hair grass and redtop), willows, alfalfa, barley, invertebrates (grasshoppers, spiders, and beetles), and deer mice. Average metals concentrations in grasses, invertebrates, and deer mice collected from tailings-affected sites were elevated relative to reference levels. Soil-tissue bioconcentration factors for grasses and invertebrates were generally lower than expected based on the range of values in the literature, indicating the reduced bioavailability of metals from mining waste. In general, metals concentrations in willows, alfalfa, and barley were not elevated above reference levels. Using these data and plausible assumptions for other exposure parameters for white-tailed deer, red fox, and American kestrel, metals intake was estimated for soil and diet ingestion pathways. Comparisons of exposure estimates with toxicity reference values indicated that the elevated concentrations of metals in key food web species do not pose a significant risk to wildlife.

  3. Selection of solubility parameters for characterization of pharmaceutical excipients.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam; Héberger, Károly

    2007-11-09

    The solubility parameter (δ2), corrected solubility parameter (δT) and its components (δd, δp, δh) were determined for a series of pharmaceutical excipients by using inverse gas chromatography (IGC). Principal component analysis (PCA) was applied for the selection of the solubility parameters which assure the complete characterization of the examined materials. Application of PCA suggests that a complete description of the examined materials is achieved with four solubility parameters, i.e. δ2 and the Hansen solubility parameters (δd, δp, δh). Selection of excipients through PCA of their solubility parameter data can be used to predict their behavior in a multi-component system, e.g. to select the best materials to form stable pharmaceutical liquid mixtures or a stable coating formulation.
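
    As a hedged illustration of how PCA ranks the variance carried by a set of solubility parameters, the sketch below runs PCA on a small fabricated matrix of excipient parameters; the numbers are invented for illustration and are not the paper's IGC measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in matrix: rows are excipients, columns are four solubility
# parameters (delta2, delta_d, delta_p, delta_h); values are fabricated.
params = np.array([
    [22.1, 17.0, 8.2, 10.5],
    [20.3, 16.1, 6.8, 9.1],
    [24.8, 18.2, 9.9, 12.3],
    [19.7, 15.8, 5.9, 8.4],
    [23.4, 17.6, 9.0, 11.2],
])

# Standardize, then inspect how much variance each component explains;
# this is the basis for deciding how many parameters are needed.
pca = PCA()
scores = pca.fit_transform(StandardScaler().fit_transform(params))
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```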

  4. A simple physiologically based pharmacokinetic model evaluating the effect of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saylor, Kyle, E-mail: saylor@vt.edu; Zhang, Chenmi

    Physiologically based pharmacokinetic (PBPK) modeling was applied to investigate the effects of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans. Successful construction of both rat and human models was achieved by fitting model outputs to published nicotine concentration time course data in the blood and in the brain. Key parameters presumed to have the most effect on the ability of these antibodies to prevent nicotine from entering the brain were selected for investigation using the human model. These parameters, which included antibody affinity for nicotine, antibody cross-reactivity with cotinine, and antibody concentration, were broken down into different, clinically-derived in silico treatment levels and fed into the human PBPK model. Model predictions suggested that all three parameters, in addition to smoking status, have a sizable impact on anti-nicotine antibodies' ability to prevent nicotine from entering the brain and that the antibodies elicited by current human vaccines do not have sufficient binding characteristics to reduce brain nicotine concentrations. If the antibody binding characteristics achieved in animal studies can similarly be achieved in human studies, however, nicotine vaccine efficacy in terms of brain nicotine concentration reduction is predicted to meet threshold values for alleviating nicotine dependence. - Highlights: • Modelling of nicotine disposition in the presence of anti-nicotine antibodies • Key vaccine efficacy factors are evaluated in silico in rats and in humans. • Model predicts insufficient antibody binding in past human nicotine vaccines. • Improving immunogenicity and antibody specificity may lead to vaccine success.

  5. A Modular GIS-Based Software Architecture for Model Parameter Estimation using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio-Murillo, C.; Over, M. W.; Rubin, Y.

    2012-12-01

    The Method of Anchored Distributions (MAD) is an inverse modeling technique that is well-suited for estimation of spatially varying parameter fields using limited observations and Bayesian methods. This presentation will discuss the design, development, and testing of a free software implementation of the MAD technique using the open source DotSpatial geographic information system (GIS) framework, R statistical software, and the MODFLOW groundwater model. This new tool, dubbed MAD-GIS, is built using a modular architecture that supports the integration of external analytical tools and models for key computational processes including a forward model (e.g. MODFLOW, HYDRUS) and geostatistical analysis (e.g. R, GSLIB). The GIS-based graphical user interface provides a relatively simple way for new users of the technique to prepare the spatial domain, to identify observation and anchor points, to perform the MAD analysis using a selected forward model, and to view results. MAD-GIS uses the Managed Extensibility Framework (MEF) provided by the Microsoft .NET programming platform to support integration of different modeling and analytical tools at run-time through a custom "driver." Each driver establishes a connection with external programs through a programming interface, which provides the elements for communicating with the core MAD software. This presentation gives an example of adapting MODFLOW to serve as the external forward model in MAD-GIS for inferring the distribution functions of key MODFLOW parameters. Additional drivers for other models are being developed and it is expected that the open source nature of the project will engender the development of additional model drivers by 3rd party scientists.
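
    The driver pattern described above is implemented in .NET via MEF; purely as a language-agnostic illustration of the idea, the sketch below shows what a forward-model driver contract might look like in Python. All class and method names are hypothetical, not MAD-GIS's actual API.

```python
from abc import ABC, abstractmethod


class ForwardModelDriver(ABC):
    """Hypothetical contract a driver exposes to the core MAD software."""

    @abstractmethod
    def prepare(self, domain, parameter_field):
        """Write model inputs for one realization of the parameter field."""

    @abstractmethod
    def run(self):
        """Invoke the external forward model executable."""

    @abstractmethod
    def read_outputs(self, observation_points):
        """Return simulated values at the observation/anchor locations."""


class ModflowDriver(ForwardModelDriver):
    # Sketch only: a real driver would template MODFLOW input files,
    # call the executable, and parse simulated heads from its output.
    def prepare(self, domain, parameter_field):
        self.field = parameter_field

    def run(self):
        pass  # e.g., subprocess.run(["mf2005", "model.nam"]) in a real driver

    def read_outputs(self, observation_points):
        return [0.0 for _ in observation_points]


driver = ModflowDriver()  # the core software only sees the abstract contract
```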

  6. Calculations of key magnetospheric parameters using the isotropic and anisotropic SPSU global MHD code

    NASA Astrophysics Data System (ADS)

    Samsonov, Andrey; Gordeev, Evgeny; Sergeev, Victor

    2017-04-01

    As recently suggested (e.g., Gordeev et al., 2015), the global magnetospheric configuration can be characterized by a set of key parameters, such as the magnetopause distance at the subsolar point and on the terminator plane, the magnetic field in the magnetotail lobe, the plasma sheet thermal pressure, the cross polar cap electric potential drop and the total field-aligned current. For given solar wind conditions, the values of these parameters can be obtained from both empirical models and global MHD simulations. We validate the recently developed global MHD code SPSU-16 using the key magnetospheric parameters mentioned above. The code SPSU-16 can solve both the isotropic and the anisotropic MHD equations. In the anisotropic version, we use modified double-adiabatic equations in which T⊥/T∥ (the ratio of perpendicular to parallel thermal pressures) is bounded from above by the mirror and ion-cyclotron thresholds and from below by the firehose threshold. The results of validation of the SPSU-16 code agree well with previously published results of other global codes. Some key parameters coincide between the isotropic and anisotropic MHD simulations, while others differ.
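
    To make the bounding of T⊥/T∥ concrete, the sketch below clamps the anisotropy between standard marginal-stability expressions for the firehose (lower) and mirror (upper) thresholds, written in terms of the parallel and perpendicular plasma betas. The abstract does not give the exact threshold forms used in SPSU-16 (and the ion-cyclotron bound is omitted here), so these are textbook stand-ins, not the code's actual limiters.

```python
def clamp_anisotropy(a, beta_par, beta_perp):
    """Clamp the temperature anisotropy a = T_perp / T_par between
    commonly used firehose (lower) and mirror (upper) marginal-stability
    thresholds. Textbook forms only; SPSU-16's thresholds may differ.
    """
    lower = 1.0 - 2.0 / beta_par   # firehose marginal condition
    upper = 1.0 + 1.0 / beta_perp  # mirror marginal condition
    return min(max(a, lower), upper)


# Example: a strongly perpendicular-heated plasma gets mirror-limited.
print(clamp_anisotropy(3.0, beta_par=1.0, beta_perp=1.0))  # -> 2.0
```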

  7. Using Diurnal Temperature Signals to Infer Vertical Groundwater-Surface Water Exchange.

    PubMed

    Irvine, Dylan J; Briggs, Martin A; Lautz, Laura K; Gordon, Ryan P; McKenzie, Jeffrey M; Cartwright, Ian

    2017-01-01

    Heat is a powerful tracer to quantify fluid exchange between surface water and groundwater. Temperature time series can be used to estimate pore water fluid flux, and techniques can be employed to extend these estimates to produce detailed plan-view flux maps. Key advantages of heat tracing include cost-effective sensors and ease of data collection and interpretation, without the need for expensive and time-consuming laboratory analyses or induced tracers. While the collection of temperature data in saturated sediments is relatively straightforward, several factors influence the reliability of flux estimates that are based on time series analysis (diurnal signals) of recorded temperatures. Sensor resolution and deployment are particularly important in obtaining robust flux estimates in upwelling conditions. Also, processing temperature time series data involves a sequence of complex steps, including filtering temperature signals, selection of appropriate thermal parameters, and selection of the optimal analytical solution for modeling. This review provides a synthesis of heat tracing using diurnal temperature oscillations, including details on optimal sensor selection and deployment, data processing, model parameterization, and an overview of computing tools available. Recent advances in diurnal temperature methods also provide the opportunity to determine local saturated thermal diffusivity, which can improve the accuracy of fluid flux modeling and sensor spacing, which is related to streambed scour and deposition. These parameters can also be used to determine the reliability of flux estimates from the use of heat as a tracer. © 2016, National Ground Water Association.
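
    As a hedged sketch of the amplitude-ratio branch of these methods, the code below inverts a Hatch et al. (2006)-style forward model for the thermal front velocity from the damping of the diurnal signal between two buried sensors. The field values are hypothetical stand-ins, and converting the thermal front velocity to a Darcy flux additionally requires the ratio of water to bulk volumetric heat capacity.

```python
import numpy as np
from scipy.optimize import brentq

def amplitude_ratio(v, dz, kappa_e, period):
    # Analytical amplitude ratio (deep/shallow) between two sensors a
    # vertical distance dz apart, for thermal front velocity v (m/s,
    # positive downward), effective thermal diffusivity kappa_e (m^2/s)
    # and signal period (s); Hatch et al. (2006)-style expression.
    alpha = np.sqrt(v**4 + (8.0 * np.pi * kappa_e / period) ** 2)
    return np.exp((dz / (2.0 * kappa_e)) * (v - np.sqrt((alpha + v**2) / 2.0)))

# Hypothetical field values: 10 cm sensor spacing, diurnal period,
# a typical saturated-sediment diffusivity, and an observed ratio.
dz, period, kappa_e, Ar_obs = 0.10, 86400.0, 5e-7, 0.35

# Invert the forward model for the thermal front velocity by root finding.
v = brentq(lambda v: amplitude_ratio(v, dz, kappa_e, period) - Ar_obs,
           -1e-4, 1e-4)
print(f"thermal front velocity: {v:.2e} m/s")
```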

  8. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.

  9. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
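
    The loop described above can be sketched as follows: train a cheap surrogate on the parameter combinations already simulated, then spend the expensive simulation budget only on the combinations the surrogate predicts to be promising. The toy misfit function and all problem sizes below are stand-ins, not the UWBCS calibration itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Stand-in calibration problem: 10,000 candidate parameter combinations;
# the costly simulator returns a misfit against observed outcomes.
candidates = rng.uniform(size=(10_000, 4))

def simulate_misfit(x):
    # Placeholder for running the full simulation model and comparing
    # its outputs (e.g., incidence, mortality) against observations.
    return float(np.linalg.norm(x - 0.5))

# Seed the surrogate with a small batch of actually-evaluated runs.
evaluated = {int(i): simulate_misfit(candidates[i])
             for i in rng.choice(len(candidates), size=200, replace=False)}

for _ in range(5):  # active-learning iterations
    X = candidates[list(evaluated)]
    y = np.array(list(evaluated.values()))
    surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000).fit(X, y)
    remaining = np.array([i for i in range(len(candidates)) if i not in evaluated])
    # Simulate only the combinations the surrogate predicts to be promising.
    pred = surrogate.predict(candidates[remaining])
    for i in remaining[np.argsort(pred)[:100]]:
        evaluated[int(i)] = simulate_misfit(candidates[i])

best = min(evaluated, key=evaluated.get)
print(f"evaluated {len(evaluated)} runs; best misfit {evaluated[best]:.3f}")
```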

  10. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  11. Environmentally Responsible Use of Nanomaterials for the Photocatalytic Reduction of Nitrate in Water

    NASA Astrophysics Data System (ADS)

    Doudrick, Kyle

    Nitrate is the most prevalent water pollutant limiting the use of groundwater as a potable water source. The overarching goal of this dissertation was to leverage advances in nanotechnology to improve nitrate photocatalysis and transition treatment to full scale. The research objectives were to (1) examine commercial and synthesized photocatalysts, (2) determine the effect of water quality parameters (e.g., pH), (3) conduct responsible engineering by ensuring detection methods were in place for novel materials, and (4) develop a conceptual framework for designing nitrate-specific photocatalysts. The key issues for implementing photocatalysis for nitrate drinking water treatment were efficient nitrate removal at neutral pH and by-product selectivity toward nitrogen gases, rather than by-products that pose a human health concern (e.g., nitrite). Photocatalytic nitrate reduction was found to follow a series of proton-coupled electron transfers. The nitrate reduction rate was limited by the electron-hole recombination rate, and the addition of an electron donor (e.g., formate) was necessary to reduce the recombination rate and achieve efficient nitrate removal. Nano-sized photocatalysts with high surface areas mitigated the negative effects of competing aqueous anions. The key water quality parameter impacting by-product selectivity was pH. For pH < 4, the by-product selectivity was mostly N-gas with some NH4+, but this shifted to NO2- above pH = 4, which suggests the need for proton localization to move beyond NO2-. Co-catalysts that form a Schottky barrier, allowing for localization of electrons, were best for nitrate reduction. Silver was optimal in heterogeneous systems because of its ability to improve nitrate reduction activity and N-gas by-product selectivity, and graphene was optimal in two-electrode systems because of its ability to shuttle electrons to the working electrode. "Environmentally responsible use of nanomaterials" is to ensure that detection methods are in place for the nanomaterials tested. While methods exist for the metals and metal oxides examined, there are currently none for carbon nanotubes (CNTs) and graphene. Acknowledging that risk assessment encompasses dose-response and exposure, new analytical methods were developed for extracting and detecting CNTs and graphene in complex organic environmental (e.g., urban air) and biological matrices (e.g. rat lungs).

  12. Aerobiological Stabilities of Different Species of Gram-Negative Bacteria, Including Well-Known Biothreat Simulants, in Single-Cell Particles and Cell Clusters of Different Compositions

    PubMed Central

    Skogan, Gunnar

    2017-01-01

    ABSTRACT The ability to perform controlled experiments with bioaerosols is a fundamental enabler of many bioaerosol research disciplines. A practical alternative to using hazardous biothreat agents, e.g., for detection equipment development and testing, involves using appropriate model organisms (simulants). Several species of Gram-negative bacteria have been used or proposed as biothreat simulants. However, the appropriateness of different bacterial genera, species, and strains as simulants is still debated. Here, we report aerobiological stability characteristics of four species of Gram-negative bacteria (Pantoea agglomerans, Serratia marcescens, Escherichia coli, and Xanthomonas arboricola) in single-cell particles and cell clusters produced using four spray liquids (H2O, phosphate-buffered saline [PBS], spent culture medium [SCM], and an SCM-PBS mixture). E. coli showed higher stability in cell clusters from all spray liquids than the other species, but it showed similar or lower stability in single-cell particles. The overall stability was higher in cell clusters than in single-cell particles. The highest overall stability was observed for bioaerosols produced using SCM-containing spray liquids. A key finding was the observation that stability differences caused by particle size or compositional changes frequently followed species-specific patterns. The results highlight how even moderate changes to one experimental parameter, e.g., bacterial species, spray liquid, or particle size, can strongly affect the aerobiological stability of Gram-negative bacteria. Taken together, the results highlight the importance of careful and informed selection of Gram-negative bacterial biothreat simulants and also the accompanying particle size and composition. The outcome of this work contributes to improved selection of simulants, spray liquids, and particle size for use in bioaerosol research. IMPORTANCE The outcome of this work contributes to improved selection of simulants, spray liquids, and particle size for use in bioaerosol research. Taken together, the results highlight the importance of careful and informed selection of Gram-negative bacterial biothreat simulants and also the accompanying particle size and composition. The results highlight how even moderate changes to one experimental parameter, e.g., bacterial species, spray liquid, or particle size, can strongly affect the aerobiological stability of Gram-negative bacteria. A key finding was the observation that stability differences caused by particle size or compositional changes frequently followed species-specific patterns. PMID:28687646

  13. An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves

    PubMed Central

    Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing

    2014-01-01

    Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets’ parameters to generate a random sequence as the initial keys and obtains the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and a diffusion operation. The method needs only a few parameters for key generation, which greatly reduces the required storage space. Moreover, because of the Julia sets’ properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has a large key space, good statistical properties, high key sensitivity, and effective resistance to the chosen-plaintext attack. PMID:24404181
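
    The two building blocks of such a scheme can be sketched in a few lines: a key stream produced by iterating the Julia map z -> z^2 + c, and a Hilbert-curve traversal used to scramble that stream before it is combined with the image. The sketch below is a minimal illustration of these ingredients, not the authors' exact algorithm; the parameter c, the seed z0, and the image are illustrative assumptions.

        import numpy as np

        def julia_keystream(c, z0, n):
            # Iterate z -> z^2 + c and quantize the chaotic orbit to bytes.
            z, out = z0, np.empty(n, dtype=np.uint8)
            for i in range(n):
                z = z * z + c
                if abs(z) > 2:        # keep the orbit bounded so it never overflows
                    z = 1.0 / z
                out[i] = int(abs(z.real) * 1e6) % 256
            return out

        def hilbert_d2xy(order, d):
            # Map a distance d along a Hilbert curve of the given order to (x, y).
            x = y = 0
            s, t = 1, d
            while s < 2 ** order:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x, y = x + s * rx, y + s * ry
                t //= 4
                s *= 2
            return x, y

        order = 8                                  # 256 x 256 image (assumed size)
        n = (2 ** order) ** 2
        keys = julia_keystream(c=-0.8 + 0.156j, z0=0.3 + 0.1j, n=n)
        img = np.random.randint(0, 256, (2 ** order, 2 ** order), dtype=np.uint8)  # stand-in image

        # Scramble the initial keys along the Hilbert curve, then encrypt by modular addition.
        scrambled = np.empty_like(img)
        for d in range(n):
            x, y = hilbert_d2xy(order, d)
            scrambled[x, y] = keys[d]
        cipher = (img.astype(np.uint16) + scrambled) % 256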

  14. A no-key-exchange secure image sharing scheme based on Shamir's three-pass cryptography protocol and the multiple-parameter fractional Fourier transform.

    PubMed

    Lang, Jun

    2012-01-30

    In this paper, we propose a novel secure image sharing scheme based on Shamir's three-pass protocol and the multiple-parameter fractional Fourier transform (MPFRFT), which can safely exchange information with no advance distribution of either secret keys or public keys between users. The image is encrypted directly by the MPFRFT spectrum without the use of phase keys, and information can be shared by transmitting the encrypted image (or message) three times between users. Numerical simulation results are given to verify the performance of the proposed algorithm.
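
    The three-pass pattern the scheme builds on is easiest to see in Shamir's original commutative-exponentiation form, where the secret crosses the channel three times without any key ever being distributed; the MPFRFT scheme replaces the exponentiation with transform-domain encryption. A minimal sketch of the exchange pattern (the prime and the message are illustrative):

        from math import gcd
        import random

        p = 2 ** 127 - 1                       # illustrative public prime (Mersenne)

        def keypair(p):
            # Pick e coprime to p-1 and its inverse d, so ((m^e)^d) mod p == m.
            while True:
                e = random.randrange(3, p - 1)
                if gcd(e, p - 1) == 1:
                    return e, pow(e, -1, p - 1)

        ea, da = keypair(p)                    # Alice's private lock/unlock pair
        eb, db = keypair(p)                    # Bob's private pair

        m = 123456789                          # the secret (an image would go blockwise)
        pass1 = pow(m, ea, p)                  # Alice -> Bob: locked by Alice
        pass2 = pow(pass1, eb, p)              # Bob -> Alice: locked by both
        pass3 = pow(pass2, da, p)              # Alice -> Bob: Alice's lock removed
        assert pow(pass3, db, p) == m          # Bob unlocks and recovers the secret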

  15. Decolorization of Acid Orange 7 by an electric field-assisted modified orifice plate hydrodynamic cavitation system: Optimization of operational parameters.

    PubMed

    Jung, Kyung-Won; Park, Dae-Seon; Hwang, Min-Jin; Ahn, Kyu-Hong

    2015-09-01

    In this study, intensified decolorization of Acid Orange 7 (AO-7) was achieved using hydrodynamic cavitation (HC) combined with an electric field (graphite electrodes). As a preliminary step, various HC systems were compared in terms of decolorization and, among them, the electric field-assisted modified orifice plate HC (EFM-HC) system exhibited complete decolorization within 40 min of reaction time. Interestingly, when H2O2 was injected into the EFM-HC system as an additional oxidant, the reactor performance gradually decreased as the dosing ratio increased; thus, the remaining experiments were performed without H2O2. Subsequently, an optimization process was conducted using response surface methodology with a Box-Behnken design. The inlet pressure, initial pH, applied voltage, and reaction time were chosen as key operational factors, while decolorization was selected as the response variable. The overall performance revealed that the selected parameters were either slightly interdependent or had significant interactive effects on the decolorization. In the verification test, complete decolorization was observed under statistically optimized conditions. This study suggests that EFM-HC is a useful method for pretreatment of dye wastewater with positive economic and commercial benefits. Copyright © 2015 Elsevier B.V. All rights reserved.
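
    The response-surface step amounts to fitting a second-order polynomial in the coded factor levels of the Box-Behnken runs. Below is a minimal numpy sketch for three of the factors; the design matrix is a standard 15-run Box-Behnken layout, and the response column is placeholder data, not the study's measurements.

        import numpy as np

        # Coded levels (-1, 0, +1) for three factors (e.g., inlet pressure, initial pH,
        # applied voltage). The decolorization column is placeholder data only.
        X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
                      [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
                      [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
                      [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
        y = np.array([62, 71, 58, 80, 55, 69, 66, 83, 60, 64, 72, 78, 90, 88, 91], dtype=float)

        def quadratic_design_matrix(X):
            # Full second-order model: intercept, linear, two-way interaction, square terms.
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1 * x2, x1 * x3, x2 * x3,
                                    x1 ** 2, x2 ** 2, x3 ** 2])

        beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
        print(beta)   # fitted coefficients; large interaction terms signal interdependence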

  16. Quality parameters and RAPD-PCR differentiation of commercial baker's yeast and hybrid strains.

    PubMed

    El-Fiky, Zaki A; Hassan, Gamal M; Emam, Ahmed M

    2012-06-01

    Baker's yeast, Saccharomyces cerevisiae, is a key component in bread baking. A total of 12 commercial baker's yeast strains and 2 hybrid strains were compared using traditional quality parameters. Five strains with high leavening power, together with the 2 hybrid strains, were selected, evaluated for their alpha-amylase, maltase, and glucoamylase activities, and compared using random amplified polymorphic DNA (RAPD). The results revealed that all selected yeast strains had a low level of alpha-amylase and high levels of maltase and glucoamylase. The Egyptian yeast strain (EY) had the highest content of alpha-amylase and maltase, followed by the hybrid YH strain. The EY and YH strains had the highest, and nearly equal, glucoamylase content. The RAPD banding patterns showed wide variation among the commercial yeast and hybrid strains. The closely related Egyptian yeast strains (EY and AL) demonstrated close similarity in their genotypes. The 2 hybrid strains clustered with the Turkish and European strains in one group. The authors conclude that the identification of strains and hybrids using the RAPD technique was useful in determining their genetic relationships. These results can be useful not only for basic research, but also for quality control in baking factories. © 2012 Institute of Food Technologists®

  17. The utility and impact of information communication technology (ICT) for pre-registration nurse education: A narrative synthesis systematic review.

    PubMed

    Webb, Lucy; Clough, Jonathan; O'Reilly, Declan; Wilmott, Danita; Witham, Gary

    2017-01-01

    To evaluate and summarise the utility and impact of information communication technology (ICT) in enhancing student performance and the learning environment in pre-registration nursing. A systematic review of empirical research across a range of themes in ICT health-related education. Science Direct, Cinahl, AMED, MEDLINE, PubMed, ASSIA, OVID and OVID SP (2008-2014). Further date parameters were imposed by theme. Evidence was reviewed by narrative synthesis, adopting Caldwell's appraisal framework and CASP for qualitative methods. Selection and inclusion were grounded in the PICOS structure, with language requirements (English), and further parameters were guided by theme appropriateness. Fifty studies were selected for review across six domains: reusable learning objects, media, audience response systems, e-portfolios, computer-based assessment and faculty adoption of e-learning. Educational ICT was found to be non-inferior to traditional teaching, while offering benefits in teaching and learning efficiency. Where support is in place, ICT improves the learning environment for staff and students, but human and environmental barriers need to be addressed. This review illuminates more advantages for ICT in nurse training than previously reported. The key advantage of flexibility is supported, though with little evidence for an effect on depth of learning. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  18. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    PubMed Central

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-01-01

    Evolutionary games (EG) model a common type of interactions in various complex, networked, natural and social systems. Given such a system with only profit sequences being available, reconstructing the interacting structure of EG networks is fundamental to understand and control its collective dynamics. Existing approaches used to handle this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and measurement error (the difference between observed data and simulated data). However, a shortcoming of these approaches is that it is not easy to determine these key parameters which can maximize the performance. In contrast to these approaches, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP), and then develop a framework which involves multiobjective evolutionary algorithm (MOEA), followed by solution selection based on knee regions, termed as MOEANet, to solve this MOP. We also design an effective initialization operator based on the lasso for MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach is effective to avoid the above parameter selecting problem and can reconstruct EG networks with high accuracy. PMID:27886244
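
    The tradeoff that the paper removes from the user's hands can be visualized with a toy reconstruction: sweep the lasso penalty to trace an error-versus-sparsity front, then pick the knee point automatically. In the hedged sketch below, sklearn's Lasso stands in for the MOEA and the data are synthetic; it illustrates knee-based selection, not MOEANet itself.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        A = rng.normal(size=(60, 100))                 # stand-in regressors built from profit sequences
        x_true = np.zeros(100)
        x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
        y = A @ x_true + 0.01 * rng.normal(size=60)    # observed profits of one node

        # Trace the error-vs-sparsity front by sweeping the lasso penalty.
        front = []
        for alpha in np.logspace(-4, 0, 30):
            x = Lasso(alpha=alpha, max_iter=50000).fit(A, y).coef_
            front.append((np.linalg.norm(A @ x - y), np.abs(x).sum(), alpha))
        front = np.array(front)

        # Knee selection: normalize both objectives, then take the point farthest
        # from the straight line joining the two extreme solutions.
        f = front[:, :2]
        f = (f - f.min(axis=0)) / (np.ptp(f, axis=0) + 1e-12)
        chord = f[-1] - f[0]
        rel = f - f[0]
        dist = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / np.linalg.norm(chord)
        print("knee alpha:", front[dist.argmax(), 2])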

  19. ToTem: a tool for variant calling pipeline optimization.

    PubMed

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process, with the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are presented as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.
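
    The core loop ToTem automates can be reduced to a grid search whose scores are always cross-validated, so a parameter setting that merely memorizes one dataset is penalized. The stand-ins below (a quality-threshold "caller" on synthetic sites) are purely illustrative and are not ToTem's code.

        from itertools import product
        import random

        random.seed(1)

        # Toy stand-in for a variant-calling benchmark: each "site" has a noisy
        # quality score; true variants score higher on average. A "pipeline" here
        # is just a quality threshold; in ToTem it is a full caller configuration.
        def make_fold(n=400):
            sites = []
            for _ in range(n):
                is_variant = random.random() < 0.3
                score = random.gauss(30 if is_variant else 20, 6)
                sites.append((score, is_variant))
            return sites

        def run_pipeline(sites, min_quality):
            return [(score >= min_quality, truth) for score, truth in sites]

        def f_measure(calls):
            tp = sum(1 for called, truth in calls if called and truth)
            fp = sum(1 for called, truth in calls if called and not truth)
            fn = sum(1 for called, truth in calls if not called and truth)
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

        folds = [make_fold() for _ in range(5)]
        grid = {"min_quality": [15, 20, 25, 30, 35]}
        combos = [dict(zip(grid, v)) for v in product(*grid.values())]
        # Cross-validation: average the score over every fold, penalizing settings
        # that only look good on a single dataset.
        best = max(combos, key=lambda p: sum(f_measure(run_pipeline(f, **p)) for f in folds) / len(folds))
        print(best)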

  20. Reconstructing Networks from Profit Sequences in Evolutionary Games via a Multiobjective Optimization Approach with Lasso Initialization

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Liu, Jing; Wang, Shuai

    2016-11-01

    Evolutionary games (EG) model a common type of interaction in various complex, networked, natural and social systems. Given such a system with only profit sequences available, reconstructing the interacting structure of EG networks is fundamental to understanding and controlling its collective dynamics. Existing approaches to this problem, such as the lasso, a convex optimization method, need a user-defined constant to control the tradeoff between the natural sparsity of networks and measurement error (the difference between observed and simulated data). However, a shortcoming of these approaches is that it is not easy to determine the key parameters that maximize performance. In contrast, we first model the EG network reconstruction problem as a multiobjective optimization problem (MOP), and then develop a framework, termed MOEANet, which involves a multiobjective evolutionary algorithm (MOEA) followed by solution selection based on knee regions, to solve this MOP. We also design an effective initialization operator based on the lasso for the MOEA. We apply the proposed method to reconstruct various types of synthetic and real-world networks, and the results show that our approach avoids the above parameter-selection problem and can reconstruct EG networks with high accuracy.

  1. Contact Selectivity Engineering in a 2 μm Thick Ultrathin c-Si Solar Cell Using Transition-Metal Oxides Achieving an Efficiency of 10.8%.

    PubMed

    Xue, Muyu; Islam, Raisul; Meng, Andrew C; Lyu, Zheng; Lu, Ching-Ying; Tae, Christian; Braun, Michael R; Zang, Kai; McIntyre, Paul C; Kamins, Theodore I; Saraswat, Krishna C; Harris, James S

    2017-12-06

    In this paper, the integration of metal oxides as carrier-selective contacts for ultrathin crystalline silicon (c-Si) solar cells is demonstrated, which results in an ∼13% relative improvement in efficiency. The improvement in efficiency originates from the suppression of the contact recombination current due to the band offset asymmetry of these oxides with Si. First, an ultrathin c-Si solar cell having a total thickness of 2 μm is shown to have >10% efficiency without any light-trapping scheme. This is achieved by the integration of nickel oxide (NiOx) as a hole-selective contact interlayer material, which has a low valence band offset and high conduction band offset with Si. Second, we show a champion cell efficiency of 10.8% with the additional integration of titanium oxide (TiOx), a well-known material for an electron-selective contact interlayer. Key parameters including Voc and Jsc also show different degrees of enhancement if single (NiOx only) or double (both NiOx and TiOx) carrier-selective contacts are integrated. The fabrication process for TiOx and NiOx layer integration is scalable and shows good compatibility with the device.

  2. Winter wheat yield estimation of remote sensing research based on WOFOST crop model and leaf area index assimilation

    NASA Astrophysics Data System (ADS)

    Chen, Yanling; Gong, Adu; Li, Jing; Wang, Jingmei

    2017-04-01

    Accurate crop growth monitoring and yield prediction are significant for improving the sustainable development of agriculture and ensuring national food security. Remote sensing observation and crop growth simulation models are two technologies with high application potential for crop growth monitoring and yield forecasting. However, both have limitations, in mechanism or in regional application. Remote sensing information cannot reveal crop growth and development, the inner mechanism of yield formation, or the effect of environmental and meteorological conditions. Crop growth simulation models have difficulties in obtaining data and in parameterization when moving from single-point to regional application. To exploit the advantages of both technologies, the coupling of remote sensing information and crop growth simulation models has been studied. Filtering and optimizing model parameters are key to yield estimation by remote sensing and crop models based on regional crop assimilation. Winter wheat of GaoCheng was selected as the experimental object in this paper, and the essential data were collected, including biochemical data, farmland environmental data and meteorological data for several critical growing periods. Meanwhile, imagery from the environmental mitigation small satellite HJ-CCD was obtained. The research work and major conclusions are as follows. (1) Seven vegetation indexes were selected to retrieve LAI, and a linear regression model was built between each of these indexes and the measured LAI. The results show that the accuracy of the EVI model was the highest (R2=0.964 at the anthesis stage and R2=0.920 at the filling stage); thus, EVI was chosen as the optimal vegetation index to predict LAI in this paper. (2) The EFAST method was adopted to conduct a sensitivity analysis of the 26 initial parameters of the WOFOST model, and a sensitivity index was constructed to evaluate the influence of each parameter on winter wheat yield formation. Six parameters with a sensitivity index above 0.1 were chosen as sensitive factors: TSUM1, SLATB1, SLATB2, SPAN, EFFTB3 and TMPF4. The other parameters were confirmed via practical measurement and calculation, available literature or WOFOST defaults. Eventually, we completed the regulation of the WOFOST parameters. (3) A look-up table algorithm was used to realize single-point yield estimation through the assimilation of the WOFOST model and the retrieved LAI. This simulation achieved a high accuracy, which meets the purpose of assimilation (R2=0.941 and RMSE=194.58 kg/hm2). In this paper, the optimum values of the sensitive parameters were confirmed and the estimation of single-point yield was completed. Key words: yield estimation of winter wheat, LAI, WOFOST crop growth model, assimilation
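
    The look-up-table assimilation in step (3) can be pictured as a brute-force search: simulate an LAI trajectory for every candidate combination of the sensitive parameters and keep the combination whose trajectory is closest to the remotely sensed LAI. In the sketch below, a toy logistic model merely stands in for WOFOST and all numbers are illustrative.

        import numpy as np
        from itertools import product

        days = np.arange(0, 120)

        def toy_lai(tsum1, span):
            # Illustrative stand-in for WOFOST: logistic green-up controlled by a
            # thermal-time-like parameter, then senescence controlled by leaf span.
            growth = 6.0 / (1.0 + np.exp(-(days - tsum1 / 10.0) / 8.0))
            decay = np.clip(1.0 - np.maximum(days - span, 0) / 40.0, 0.0, 1.0)
            return growth * decay

        # "Observed" LAI retrieved from EVI (synthetic here), on a few dates.
        obs_days = np.array([20, 40, 60, 80, 100])
        lai_obs = toy_lai(tsum1=450, span=85)[obs_days] + np.random.default_rng(0).normal(0, 0.1, 5)

        # Look-up table: simulate every parameter combination, rank by RMSE at the obs dates.
        table = []
        for tsum1, span in product(np.arange(300, 601, 25), np.arange(60, 111, 5)):
            rmse = np.sqrt(np.mean((toy_lai(tsum1, span)[obs_days] - lai_obs) ** 2))
            table.append((rmse, tsum1, span))
        print(min(table))   # best-fitting (RMSE, TSUM1-like, SPAN-like) entry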

  3. BOK-Printed Electronics

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2013-01-01

    The use of printed electronics technologies (PETs), 2D or 3D printing approaches either by conventional electronic fabrication or by rapid graphic printing of organic or nonorganic electronic devices on various small or large rigid or flexible substrates, is projected to grow exponentially in commercial industry. This has provided an opportunity to determine whether or not PETs could be applicable to low-volume, high-reliability applications. This report presents a summary of the literature surveyed and provides a body of knowledge (BOK) on the current status of organic and printed electronics technologies. It reviews three key industry roadmaps on this subject - OE-A, ITRS, and iNEMI - each with a different name for this emerging technology. This is followed by a brief review of the status of industry standard development for this technology, including IEEE and IPC specifications. The report concludes with key technologies and applications and provides a technology hierarchy similar to those of conventional microelectronics for electronics packaging. Understanding key technology roadmaps, parameters, and applications is important when judiciously selecting and narrowing the follow-up of new and emerging applicable technologies for evaluation, as well as for the low-risk insertion of organic, large-area, and printed electronics.

  4. The evolution of concepts for soil erosion modelling

    NASA Astrophysics Data System (ADS)

    Kirkby, Mike

    2013-04-01

    From the earliest models for soil erosion, based on power laws relating sediment discharge or yield to slope length and gradient, the development of the Universal Soil Loss Equation was a natural step, although one that has long continued to hinder the development of better perceptual models for erosion processes. Key stumbling blocks have been: (1) the failure to go through runoff generation as a key intermediary; (2) the failure to separate hydrological and strength parameters of the soil; (3) the failure to treat sediment transport along a slope as a routing problem; and (4) the failure to analyse the nature of the dependence on vegetation. Key advances have been in these directions (among others): (1) improved understanding of the hydrological processes (e.g. infiltration and runoff, sediment entrainment), leading to KINEROS, LISEM, WEPP and PESERA; (2) recognition of selective sediment transport (e.g. transport- or supply-limited removal, grain travel distances), leading e.g. to MAHLERAN; and (3) development of models adapted to particular time/space scales. Some major remaining problems: (1) failure to integrate geomorphological and agronomic approaches; (2) tillage erosion - is erosion loss of sediment or lowering of the centre of mass?; and (3) dynamic change during an event, as rills etc. form.

  5. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  6. A novel integrated assessment methodology of urban water reuse.

    PubMed

    Listowski, A; Ngo, H H; Guo, W S; Vigneswaran, S

    2011-01-01

    Wastewater is no longer considered a waste product, and water reuse needs to play a stronger part in securing urban water supply. Although treatment technologies for water reclamation have significantly improved, the question that deserves further analysis is how the selection of a particular wastewater treatment technology relates to performance and sustainability. The proposed assessment model integrates: (i) technology, characterised by selected quantity and quality performance parameters; (ii) productivity, efficiency and reliability criteria; (iii) quantitative performance indicators; (iv) development of an evaluation model. The challenges related to the hierarchy and selection of performance indicators have been resolved through the case study analysis. The goal of this study is to validate a new assessment methodology in relation to the performance of microfiltration (MF) technology, a key element of the treatment process. Specific performance data and measurements were obtained at Control and Data Acquisition Points (CP) to satisfy the input-output inventory in relation to water resources, products, material flows, energy requirements, chemicals use, etc. The performance assessment process contains analysis and the necessary linking across important parametric functions, leading to reliable outcomes and results.

  7. A reduced-order adaptive neuro-fuzzy inference system model as a software sensor for rapid estimation of five-day biochemical oxygen demand

    NASA Astrophysics Data System (ADS)

    Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin

    2013-07-01

    The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, a reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of the adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models, based on both the developed discrepancy ratio and threshold statistics, revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best-fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. The uncertainty analysis also showed that the predictions bracketed by the 95% confidence bound and the d-factor in the testing steps for the selected ROANFIS model were 94% and 0.83, respectively.
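
    The two uncertainty summaries quoted at the end are straightforward to compute from a Monte-Carlo prediction ensemble, under their common definitions: the fraction of observations bracketed by the 95% band, and the d-factor as the average band width scaled by the standard deviation of the observations. A hedged sketch with stand-in data:

        import numpy as np

        def uncertainty_summaries(ensemble, observed):
            # ensemble: (n_draws, n_samples) Monte-Carlo predictions of BOD5.
            lower = np.percentile(ensemble, 2.5, axis=0)
            upper = np.percentile(ensemble, 97.5, axis=0)
            bracketed = np.mean((observed >= lower) & (observed <= upper))  # coverage of 95% band
            d_factor = np.mean(upper - lower) / observed.std()              # relative band width
            return bracketed, d_factor

        rng = np.random.default_rng(2)
        obs = rng.normal(10, 2, 40)                    # stand-in observations
        ens = obs + rng.normal(0, 1.5, (500, 40))      # stand-in prediction ensemble
        print(uncertainty_summaries(ens, obs))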

  8. Do chimpanzees use weight to select hammer tools?

    PubMed

    Schrauf, Cornelia; Call, Josep; Fuwa, Koki; Hirata, Satoshi

    2012-01-01

    The extent to which tool-using animals take into account relevant task parameters is poorly understood. Nut cracking is one of the most complex forms of tool use, the choice of an adequate hammer being a critical aspect in success. Several properties make a hammer suitable for nut cracking, with weight being a key factor in determining the impact of a strike; in general, the greater the weight the fewer strikes required. This study experimentally investigated whether chimpanzees are able to encode the relevance of weight as a property of hammers to crack open nuts. By presenting chimpanzees with three hammers that differed solely in weight, we assessed their ability to relate the weight of the different tools with their effectiveness and thus select the most effective one(s). Our results show that chimpanzees use weight alone in selecting tools to crack open nuts and that experience clearly affects the subjects' attentiveness to the tool properties that are relevant for the task at hand. Chimpanzees can encode the requirements that a nut-cracking tool should meet (in terms of weight) to be effective.

  9. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    PubMed

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse Gas (GHG) performances have been plotted as a function of these key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them a robust but simple support tool for assessing the environmental performance of energy systems.
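
    The shape of such simplified GHG curves follows from a simple identity: the life-cycle burden of a turbine is roughly fixed once built, while the electricity delivered scales with load factor and lifetime. A sketch under that assumption (all numbers are illustrative, not the paper's LCA results):

        def ghg_per_kwh(embodied_kg_co2e, rated_kw, load_factor, lifetime_years):
            # Fixed life-cycle burden spread over lifetime production:
            # g CO2e per kWh = 1000 * kg CO2e / (kW * 8760 h/yr * CF * years).
            kwh = rated_kw * 8760 * load_factor * lifetime_years
            return 1000.0 * embodied_kg_co2e / kwh

        # Illustrative medium turbine: 2 MW, 2,000 t CO2e embodied (assumed figure).
        for cf in (0.20, 0.25, 0.30, 0.35):
            for life in (15, 20, 25):
                print(cf, life, round(ghg_per_kwh(2_000_000, 2000, cf, life), 1))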

  10. Demand theory of gene regulation. II. Quantitative application to the lactose and maltose operons of Escherichia coli.

    PubMed Central

    Savageau, M A

    1998-01-01

    Induction of gene expression can be accomplished either by removing a restraining element (negative mode of control) or by providing a stimulatory element (positive mode of control). According to the demand theory of gene regulation, which was first presented in qualitative form in the 1970s, the negative mode will be selected for the control of a gene whose function is in low demand in the organism's natural environment, whereas the positive mode will be selected for the control of a gene whose function is in high demand. This theory has now been further developed in a quantitative form that reveals the importance of two key parameters: cycle time C, which is the average time for a gene to complete an ON/OFF cycle, and demand D, which is the fraction of the cycle time that the gene is ON. Here we estimate nominal values for the relevant mutation rates and growth rates and apply the quantitative demand theory to the lactose and maltose operons of Escherichia coli. The results define regions of the C vs. D plot within which selection for the wild-type regulatory mechanisms is realizable, and these in turn provide the first estimates for the minimum and maximum values of demand that are required for selection of the positive and negative modes of gene control found in these systems. The ratio of mutation rate to selection coefficient is the most relevant determinant of the realizable region for selection, and the most influential parameter is the selection coefficient that reflects the reduction in growth rate when there is superfluous expression of a gene. The quantitative theory predicts the rate and extent of selection for each mode of control. It also predicts three critical values for the cycle time. The predicted maximum value for the cycle time C is consistent with the lifetime of the host. The predicted minimum value for C is consistent with the time for transit through the intestinal tract without colonization. Finally, the theory predicts an optimum value of C that is in agreement with the observed frequency for E. coli colonizing the human intestinal tract. PMID:9691028

  11. Lean Information Management: Criteria For Selecting Key Performance Indicators At Shop Floor

    NASA Astrophysics Data System (ADS)

    Iuga, Maria Virginia; Kifor, Claudiu Vasile; Rosca, Liviu-Ion

    2015-07-01

    Most successful organizations worldwide use key performance indicators as an important part of their corporate strategy in order to forecast, measure and plan their businesses. Performance metrics vary in their purpose, definition and content. Therefore, the way organizations select what they think are the optimal indicators for their businesses varies from company to company, sometimes even from department to department. This study aims to answer the question of what is the most suitable way to define and select key performance indicators. More than that, it identifies the right criteria for selecting key performance indicators at the shop floor level. This paper contributes to prior research by analysing and comparing previously researched selection criteria and proposes an original six-criteria model that caters to choosing the most adequate KPIs. Furthermore, the authors take the research a step further by proposing steps to close research gaps within this field of study.

  12. Selection, calibration, and validation of models of tumor growth.

    PubMed

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.

  13. Sensor selection and chemo-sensory optimization: toward an adaptable chemo-sensory system.

    PubMed

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to "adapt" in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve.

  14. Sensor Selection and Chemo-Sensory Optimization: Toward an Adaptable Chemo-Sensory System

    PubMed Central

    Vergara, Alexander; Llobet, Eduard

    2011-01-01

    Over the past two decades, despite the tremendous research on chemical sensors and machine olfaction to develop micro-sensory systems that will accomplish the growing existent needs in personal health (implantable sensors), environment monitoring (widely distributed sensor networks), and security/threat detection (chemo/bio warfare agents), simple, low-cost molecular sensing platforms capable of long-term autonomous operation remain beyond the current state-of-the-art of chemical sensing. A fundamental issue within this context is that most of the chemical sensors depend on interactions between the targeted species and the surfaces functionalized with receptors that bind the target species selectively, and that these binding events are coupled with transduction processes that begin to change when they are exposed to the messy world of real samples. With the advent of fundamental breakthroughs at the intersection of materials science, micro- and nano-technology, and signal processing, hybrid chemo-sensory systems have incorporated tunable, optimizable operating parameters, through which changes in the response characteristics can be modeled and compensated as the environmental conditions or application needs change. The objective of this article, in this context, is to bring together the key advances at the device, data processing, and system levels that enable chemo-sensory systems to “adapt” in response to their environments. Accordingly, in this review we will feature the research effort made by selected experts on chemical sensing and information theory, whose work has been devoted to develop strategies that provide tunability and adaptability to single sensor devices or sensory array systems. Particularly, we consider sensor-array selection, modulation of internal sensing parameters, and active sensing. The article ends with some conclusions drawn from the results presented and a visionary look toward the future in terms of how the field may evolve. PMID:22319492

  15. An R-Shiny Based Phenology Analysis System and Case Study Using a Digital Camera Dataset

    NASA Astrophysics Data System (ADS)

    Zhou, Y. K.

    2018-05-01

    Accurate extraction of vegetation phenology information plays an important role in exploring the effects of climate change on vegetation. Repeated photos from digital cameras are a useful and huge data source for phenological analysis, yet data processing and mining of phenological data remain a big challenge: there is no single tool or universal solution for big data processing and visualization in the field of phenology extraction. In this paper, we propose an R-Shiny based web application for vegetation phenological parameter extraction and analysis. Its main functions include phenological site distribution visualization, ROI (Region of Interest) selection, vegetation index calculation and visualization, data filtering, growth trajectory fitting, and phenology parameter extraction. As an example, the long-term observational photography data from the Freemanwood site in 2013 were processed by this system. The results show that: (1) the system is capable of analyzing large data volumes using a distributed framework; (2) the combination of multiple parameter extraction and growth curve fitting methods can effectively extract the key phenology parameters, although there are discrepancies between different method combinations in particular study areas. Vegetation with a single growth peak is suitable for fitting the growth trajectory with the double logistic module, while vegetation with multiple growth peaks is better fitted with the spline method.
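
    The double-logistic fit mentioned in conclusion (2) is the standard workhorse for single-peak series. A compact scipy sketch on a synthetic green-chromatic-coordinate (GCC) series follows; the parameter names (start/end of season and their slopes) are the usual conventions, assumed rather than taken from the system itself.

        import numpy as np
        from scipy.optimize import curve_fit

        def double_logistic(t, base, amp, sos, rsp, eos, rau):
            # Rising sigmoid at start of season (sos), falling sigmoid at end (eos).
            return base + amp * (1 / (1 + np.exp(-rsp * (t - sos)))
                                 - 1 / (1 + np.exp(-rau * (t - eos))))

        doy = np.arange(1, 366, 3)
        rng = np.random.default_rng(3)
        gcc = double_logistic(doy, 0.33, 0.10, 120, 0.08, 280, 0.06) + rng.normal(0, 0.004, doy.size)

        p0 = [gcc.min(), np.ptp(gcc), 100, 0.1, 260, 0.1]        # rough initial guesses
        popt, _ = curve_fit(double_logistic, doy, gcc, p0=p0, maxfev=20000)
        print(dict(zip(["base", "amp", "sos", "rsp", "eos", "rau"], np.round(popt, 3))))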

  16. Compressed Sensing for Metrics Development

    NASA Astrophysics Data System (ADS)

    McGraw, R. L.; Giangrande, S. E.; Liu, Y.

    2012-12-01

    Models by their very nature tend to be sparse in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see for example Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach will be illustrated for a time series generated using a few-parameter (i.e. sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g. evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
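
    The recovery experiment described here is easy to reproduce in miniature: build a time series that is sparse in the DCT basis, keep only a few random time samples, and recover the hidden coefficients by L1 minimization (lasso standing in for basis pursuit). Sizes and the penalty below are illustrative assumptions.

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(4)
        n, k, m = 512, 8, 80                   # signal length, sparsity, number of samples

        coeffs = np.zeros(n)
        coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
        signal = idct(coeffs, norm="ortho")    # the "few-parameter" (sparse-in-DCT) time series

        keep = np.sort(rng.choice(n, m, replace=False))      # a few random sampling times
        Phi = idct(np.eye(n), axis=0, norm="ortho")[keep]    # measurement matrix: sampled DCT atoms

        fit = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000).fit(Phi, signal[keep])
        recovered = idct(fit.coef_, norm="ortho")
        print("max abs reconstruction error:", np.abs(recovered - signal).max())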

  17. Recent Prospects in the Inline Monitoring of Nanocomposites and Nanocoatings by Optical Technologies.

    PubMed

    Bugnicourt, Elodie; Kehoe, Timothy; Latorre, Marcos; Serrano, Cristina; Philippe, Séverine; Schmid, Markus

    2016-08-19

    Nanostructured materials have emerged as a key research field in order to confer materials with unique or enhanced properties. The performance of nanocomposites depends on a number of parameters, but the suitable dispersion of nanoparticles remains the key in order to obtain the full nanocomposites' potential in terms of, e.g., flame retardance, mechanical, barrier, thermal properties, etc. Likewise, the performance of nanocoatings to obtain, for example, tailored surface affinity with selected liquids (e.g., for self-cleaning ability or anti-fog properties), protective effects against flame propagation, ultra violet (UV) radiation or gas permeation, is highly dependent on the nanocoating's thickness and homogeneity. In terms of recent advances in the monitoring of nanocomposites and nanocoatings, this review discusses commonly-used offline characterization approaches, as well as promising inline systems. All in all, having good control over both the dispersion and thickness of these materials would help with reaching optimal and consistent properties to allow nanocomposites to extend their use.

  18. Recent Prospects in the Inline Monitoring of Nanocomposites and Nanocoatings by Optical Technologies

    PubMed Central

    Bugnicourt, Elodie; Kehoe, Timothy; Latorre, Marcos; Serrano, Cristina; Philippe, Séverine; Schmid, Markus

    2016-01-01

    Nanostructured materials have emerged as a key research field in order to confer materials with unique or enhanced properties. The performance of nanocomposites depends on a number of parameters, but the suitable dispersion of nanoparticles remains the key in order to obtain the full nanocomposites’ potential in terms of, e.g., flame retardance, mechanical, barrier, thermal properties, etc. Likewise, the performance of nanocoatings to obtain, for example, tailored surface affinity with selected liquids (e.g., for self-cleaning ability or anti-fog properties), protective effects against flame propagation, ultra violet (UV) radiation or gas permeation, is highly dependent on the nanocoating’s thickness and homogeneity. In terms of recent advances in the monitoring of nanocomposites and nanocoatings, this review discusses commonly-used offline characterization approaches, as well as promising inline systems. All in all, having good control over both the dispersion and thickness of these materials would help with reaching optimal and consistent properties to allow nanocomposites to extend their use. PMID:28335278

  19. CO{sub 2} Laser Ablation Propulsion Area Scaling With Polyoxymethylene Propellant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinko, John E.; Ichihashi, Katsuhiro; Ogita, Naoya

    The topic of area scaling is of great importance in the laser propulsion field, including applications to the removal of space debris and to the selection of size ranges for laser propulsion craft in air or vacuum conditions. To address this issue experimentally, a CO{sub 2} laser operating at up to 10 J was used to irradiate targets. Experiments were conducted in air and vacuum conditions over a range of areas from about 0.05-5 cm{sup 2} to ablate flat polyoxymethylene targets at several fluences. Theoretical effects affecting area scaling, such as rarefaction waves, thermal diffusion, and diffraction, are discussed in terms of the experimental results. Surface profilometry was used to characterize the ablation samples. A CFD model is used to facilitate analysis, and key results are compared between experimental and model considerations. The dependences of key laser propulsion parameters, including the momentum coupling coefficient and specific impulse, are calculated based on experimental data, and the results are compared to existing literature data.

  20. Key parameters design of an aerial target detection system on a space-based platform

    NASA Astrophysics Data System (ADS)

    Zhu, Hanlu; Li, Yejin; Hu, Tingliang; Rao, Peng

    2018-02-01

    To ensure the flight safety of aerial aircraft and avoid a recurrence of aircraft collisions, a method of multi-information fusion is proposed to design the key parameters of an aircraft target detection system on a space-based platform. The key parameters of the detection wave band and spatial resolution were determined using the target-background absolute contrast, the target-background relative contrast, and the signal-to-clutter ratio. This study also presents the signal-to-interference ratio for analyzing system performance. Key parameters are obtained through the simulation of a specific aircraft. The simulation results show that the boundary ground sampling distance is 30 and 35 m in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for most aircraft detection, and the most reasonable detection wavebands are 3.4 to 4.2 μm and 4.35 to 4.5 μm in the MWIR band, and 9.2 to 9.8 μm in the LWIR band. We also found that the direction of detection has a great impact on detection efficiency, especially in the MWIR bands.

  1. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  2. IMBLMS phase B4, additional tasks 5.0. Microbial identification system

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A laboratory study was undertaken to provide simplified procedures leading to the presumptive identification (I/D) of defined microorganisms on board an orbiting spacecraft. Identifications were to be initiated by nonprofessional bacteriologists (crew members) on a contingency basis only. Key objectives/constraints for this investigation were as follows: (1) I/D procedures based on limited, defined diagnostic tests; (2) testing oriented about ten selected microorganisms; (3) provision of a definitive I/D key and procedures per selected organism; (4) definition of possible occurrences of false positives for the resulting I/D key by search of the appropriate literature; and (5) evaluation of the I/D key and procedure through a limited field trial on randomly selected subjects using the I/D key.

  3. Key comparison SIM.EM.RF-K5b.CL: scattering coefficients by broad-band methods, 2 GHz-18 GHz — type N connector

    NASA Astrophysics Data System (ADS)

    Silva, H.; Monasterios, G.

    2016-01-01

    The first key comparison in microwave frequencies within the SIM (Sistema Interamericano de Metrología) region has been carried out. The measurands were the S-parameters of 50 ohm coaxial devices with Type-N connectors, measured at 2 GHz, 9 GHz and 18 GHz. SIM.EM.RF-K5b.CL was the identification assigned, and it was based on a parent CCEM key comparison named CCEM.RF-K5b.CL. For this reason, the measurement standards and their nominal values were selected accordingly, i.e. two one-port devices (a matched and a mismatched load) to cover low and high reflection coefficients and two attenuators (3 dB and 20 dB) to cover low and high transmission coefficients. This key comparison has met the need for ensuring traceability in high-frequency measurements across America by linking SIM's results to the CCEM. Six NMIs participated in this comparison, which was piloted by the Instituto Nacional de Tecnología Industrial (Argentina). A linking method of multivariate values was proposed and implemented in order to allow the linking of 2-dimensional results.

  4. Development of an Agent-Based Model (ABM) to Simulate the Immune System and Integration of a Regression Method to Estimate the Key ABM Parameters by Fitting the Experimental Data

    PubMed Central

    Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le

    2015-01-01

    Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for an ABM to estimate key parameters of the model by incorporating experimental data, whereas a differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing an ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of the ABM to build up a Loess regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters like a DE model. Therefore, this study develops a complex system modeling mechanism that can simulate the complicated immune system in detail, like an ABM, and validate the reliability and efficiency of the model by fitting experimental data, like a DE model. PMID:26535589
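
    The regression half of IABMR can be sketched as follows: sweep the unknown parameter, run the stochastic model at each value, smooth the noisy input-output pairs with a Loess fit, and read off the parameter whose smoothed output matches the experimental observation. In the sketch below, a birth-death toy merely stands in for the immune ABM, and the observed count is hypothetical.

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(5)

        def toy_abm(rate, steps=200):
            # Minimal stochastic stand-in for an agent-based run: birth-death counts.
            cells = 50
            for _ in range(steps):
                births = rng.poisson(rate * cells * 0.01)
                deaths = rng.poisson(0.3 * cells * 0.01)
                cells = max(cells + births - deaths, 0)
            return cells

        # Sweep the unknown parameter and record noisy ABM outputs.
        rates = np.repeat(np.linspace(0.1, 1.0, 30), 5)
        outputs = np.array([toy_abm(r) for r in rates])

        # Loess regression of output on parameter smooths out run-to-run stochasticity.
        smooth = lowess(outputs, rates, frac=0.3)      # sorted (rate, fitted) pairs

        experimental_count = 120                       # hypothetical observed cell count
        best_rate = smooth[np.argmin(np.abs(smooth[:, 1] - experimental_count)), 0]
        print("estimated rate:", best_rate)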

  5. Development of Processing Parameters for Organic Binders Using Selective Laser Sintering

    NASA Technical Reports Server (NTRS)

    Mobasher, Amir A.

    2003-01-01

    This document describes rapid prototyping, its relation to Computer Aided Design (CAD), and the application of these techniques to choosing parameters for Selective Laser Sintering (SLS). The document reviews the parameters selected by its author for his project, the SLS machine used, and its software.

  6. 40 CFR 86.001-22 - Approval of application for certification; test fleet selections; determinations of parameters...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification; test fleet selections; determinations of parameters subject to adjustment for certification and..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas...; test fleet selections; determinations of parameters subject to adjustment for certification and...

  7. 40 CFR 86.001-22 - Approval of application for certification; test fleet selections; determinations of parameters...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... certification; test fleet selections; determinations of parameters subject to adjustment for certification and..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas...; test fleet selections; determinations of parameters subject to adjustment for certification and...

  8. Inference of directional selection and mutation parameters assuming equilibrium.

    PubMed

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
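
    A generic sketch of the inference idea, using the standard form of Wright's stationary density, proportional to exp(gamma*x) * x^(theta*beta - 1) * (1-x)^(theta*(1-beta) - 1) with scaled mutation rate theta, mutation bias beta and scaled selection gamma: the expected site frequency spectrum follows by binomial sampling, and the parameters are fit by maximizing the multinomial likelihood. This is a textbook construction for the full model, not the authors' boundary-mutation estimators, and the data are simulated.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.integrate import quad
        from scipy.special import comb

        def expected_sfs(theta, beta, gamma, n):
            # Wright's equilibrium density (up to normalization) for allele frequency x.
            dens = lambda x: np.exp(gamma * x) * x ** (theta * beta - 1) * (1 - x) ** (theta * (1 - beta) - 1)
            probs = np.array([quad(lambda x, i=i: comb(n, i) * x ** i * (1 - x) ** (n - i) * dens(x),
                                   0, 1, limit=200)[0] for i in range(n + 1)])
            return probs / probs.sum()

        def neg_log_lik(params, counts):
            theta, beta, gamma = params
            if not (0 < theta < 1 and 0 < beta < 1):
                return np.inf
            p = expected_sfs(theta, beta, gamma, len(counts) - 1)
            return -np.sum(counts * np.log(p + 1e-300))

        rng = np.random.default_rng(6)
        true_p = expected_sfs(theta=0.02, beta=0.4, gamma=1.5, n=10)
        counts = rng.multinomial(200000, true_p)           # simulated SFS for n=10 chromosomes

        fit = minimize(neg_log_lik, x0=[0.05, 0.5, 0.0], args=(counts,),
                       method="Nelder-Mead", options={"maxiter": 300})
        print("estimated (theta, beta, gamma):", fit.x)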

  9. Improvement of non-key traits in radiata pine breeding programme when long-term economic importance is uncertain.

    PubMed

    Li, Yongjun; Dungey, Heidi; Yanchuk, Alvin; Apiolaza, Luis A

    2017-01-01

    Diameter at breast height (DBH), wood density (DEN) and predicted modulus of elasticity (PME) are considered 'key traits' (KTs) in radiata pine breeding programmes in New Zealand. Other traits of interest to radiata pine breeders and forest growers are called 'non-key traits' (NKTs). External resin bleeding (ERB), internal checking (IC) and number of heartwood rings (NHR) are three such non-key traits, which affect the wood quality of radiata pine timber. The economic importance of the KTs and NKTs is hard to define in radiata pine breeding programmes due to the long rotation period. Desired-gain indices (DGIs) and robust selection were proposed to incorporate NKTs into the radiata pine breeding programme in order to deal with this uncertainty of economic importance. Four desired-gain indices, A-D, were proposed in this study. Desired-gain index A (DGI-A) emphasized growth and led to a small decrease in ERB and small increases in IC and NHR. The expected genetic gains of all traits in desired-gain index B (DGI-B) were in the favourable directions (positive genetic gains in the key traits and negative genetic gains in the non-key traits). Desired-gain index C (DGI-C) placed emphasis on wood density, leading to favourable genetic gains in the NKTs but reduced genetic gains for DBH and PME. Desired-gain index D (DGI-D) placed somewhat more emphasis on the non-key traits, leading to large favourable reductions in the non-key traits and smaller increases in the key traits compared with the other DGIs. When selecting on both the key traits and the non-key traits, the average EBVs of the six traits were all in the same directions as the expected genetic gains, except for DBH in DGI-D. When only the key traits were measured and selected, internal checking always had a negative (favourable) genetic gain, but ERB and NHR had unfavourable genetic gains most of the time. After removing some individuals with high sensitivity to changes in economic weights, robust desired-gain selection moved the genetic gains of all the key and non-key traits slightly toward unfavourable directions in the four indices. It is concluded that a desired-gain index combined with the robust selection concept is an efficient way of selecting for the key and non-key traits in radiata pine breeding programmes.
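
    In one standard formulation of the desired-gains index (Itoh and Yamada), the index weights need only the phenotypic and genetic covariance matrices and the vector of desired gains, b = P^-1 G (G' P^-1 G)^-1 d, which is what lets the economic weights stay unspecified. A numpy sketch with illustrative two-trait covariances (not the study's radiata pine estimates):

        import numpy as np

        def desired_gain_weights(P, G, d):
            # b = P^-1 G (G' P^-1 G)^-1 d : genetic gains proportional to d
            # without economic weights (standard desired-gains construction).
            PinvG = np.linalg.solve(P, G)
            return PinvG @ np.linalg.solve(G.T @ PinvG, d)

        # Illustrative covariances for two traits (a KT and an NKT); NOT the study's estimates.
        P = np.array([[4.0, -0.6], [-0.6, 1.0]])      # phenotypic covariance
        G = np.array([[1.6, -0.4], [-0.4, 0.5]])      # genetic (co)variance
        d = np.array([1.0, -0.2])                     # desired gains: raise trait 1, lower trait 2

        b = desired_gain_weights(P, G, d)
        print("index weights:", b)
        print("gain direction G'b:", G.T @ b)         # proportional to d by construction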

  10. Bayesian Parameter Inference and Model Selection by Population Annealing in Systems Biology

    PubMed Central

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology, and Bayesian statistics can be used to conduct both. In particular, the framework named approximate Bayesian computation (ABC) is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific value of a parameter with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the ABC framework. To deal with the unidentifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the ABC framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the ABC framework and to conduct model selection based on the Bayes factor. PMID:25089832
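
    A schematic of the annealing idea on a toy one-parameter decay model: the ABC tolerance acts like an inverse temperature, and the particle population is reweighted, resampled, and perturbed as the tolerance tightens; the surviving ensemble is the "posterior parameter ensemble" used for simulation. This is a generic sketch, not the paper's implementation.

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate(k):
            # Toy systems-biology model: noisy exponential decay at fixed times.
            t = np.array([1.0, 2.0, 4.0, 8.0])
            return 10.0 * np.exp(-k * t) + rng.normal(0, 0.2, t.size)

        observed = simulate(0.35)                      # synthetic "experimental" data

        n = 2000
        particles = rng.uniform(0.01, 1.0, n)          # sample from the prior
        epsilons = [8.0, 4.0, 2.0, 1.0, 0.6]           # annealing schedule of tolerances

        for eps in epsilons:
            dist = np.array([np.linalg.norm(simulate(k) - observed) for k in particles])
            weights = np.exp(-0.5 * (dist / eps) ** 2)  # soft ABC kernel, annealed by eps
            weights /= weights.sum()
            # Resample proportionally to weight, then perturb to restore diversity.
            particles = rng.choice(particles, n, p=weights)
            particles = np.clip(particles + rng.normal(0, 0.02, n), 0.01, 1.0)

        # Posterior parameter ensemble: use its draws for prediction, not a single value.
        print("posterior mean and 90% interval:",
              particles.mean(), np.percentile(particles, [5, 95]))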

  11. Capsule implosion optimization during the indirect-drive National Ignition Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landen, O. L.; Edwards, J.; Haan, S. W.

    2011-05-15

    Capsule performance optimization campaigns will be conducted at the National Ignition Facility [G. H. Miller, E. I. Moses, and C. R. Wuest, Nucl. Fusion 44, 228 (2004)] to substantially increase the probability of ignition. The campaigns will experimentally correct for residual uncertainties in the implosion and hohlraum physics used in our radiation-hydrodynamic computational models using a variety of ignition capsule surrogates before proceeding to cryogenic-layered implosions and ignition experiments. The quantitative goals and technique options and down selections for the tuning campaigns are first explained. The computationally derived sensitivities to key laser and target parameters are compared to simple analytic models to gain further insight into the physics of the tuning techniques. The results of the validation of the tuning techniques at the OMEGA facility [J. M. Soures et al., Phys. Plasmas 3, 2108 (1996)] under scaled hohlraum and capsule conditions relevant to the ignition design are shown to meet the required sensitivity and accuracy. A roll-up of all expected random and systematic uncertainties in setting the key ignition laser and target parameters due to residual measurement, calibration, cross-coupling, surrogacy, and scale-up errors has been derived that meets the required budget. Finally, we show how the tuning precision will be improved after a number of shots and iterations to meet an acceptable level of residual uncertainty.

  12. Genomic data assimilation for estimating hybrid functional Petri net from time-course gene expression data.

    PubMed

    Nagasaki, Masao; Yamaguchi, Rui; Yoshida, Ryo; Imoto, Seiya; Doi, Atsushi; Tamada, Yoshinori; Matsuno, Hiroshi; Miyano, Satoru; Higuchi, Tomoyuki

    2006-01-01

    We propose an automatic construction method for the hybrid functional Petri net as a simulation model of biological pathways. The problems we consider are how to choose the values of parameters and how to set the network structure. Usually, these unknown factors are tuned empirically so that the simulation results are consistent with biological knowledge. Obviously, this approach is limited by the size of the network of interest. To extend the capability of the simulation model, we propose the use of a data assimilation approach that was originally established in the field of geophysical simulation science. We provide a genomic data assimilation framework that establishes a link between our simulation model and observed data, such as microarray gene expression data, by using a nonlinear state space model. A key idea of our genomic data assimilation is that the unknown parameters of the simulation model are recast as parameters of the state space model, and their estimates are obtained as maximum a posteriori estimators. In the parameter estimation process, the simulation model is used to generate the system model of the state space model. Such a formulation enables us to handle both model construction and parameter tuning within a framework of Bayesian statistical inference. In particular, the Bayesian approach provides a way of controlling overfitting during parameter estimation, which is essential for constructing a reliable biological pathway. We demonstrate the effectiveness of our approach using synthetic data. As a result, parameter estimation using genomic data assimilation works very well and the network structure is suitably selected.

  13. Determination of remodeling parameters for a strain-adaptive finite element model of the distal ulna.

    PubMed

    Neuert, Mark A C; Dunning, Cynthia E

    2013-09-01

    Strain energy-based adaptive material models are used to predict bone resorption resulting from the stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has previously been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on the resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution was compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters selected were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implantation of commercial orthopedic implants.
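
    The class of adaptive rule described above can be sketched compactly. The update below follows a common Huiskes-style form, in which the strain-energy stimulus per unit mass is driven toward the homeostatic state K unless it lies within the "lazy zone" of half-width s. All numerical values are assumptions for illustration, not the refined parameters reported in the study, and in a real analysis the strain energy would be recomputed by a finite element solve at every iteration.

    ```python
    import numpy as np

    def remodel_step(rho, sed, K=0.004, s=0.5, B=1.0, dt=1.0,
                     rho_min=0.01, rho_max=1.73):
        """One explicit update of a Huiskes-style strain-adaptive rule (sketch).

        rho : element densities; sed : element strain energy densities.
        Deposition when the stimulus exceeds (1+s)K, resorption below (1-s)K,
        no change inside the lazy zone.
        """
        stimulus = sed / rho                      # strain energy per unit mass
        drho = np.zeros_like(rho)
        hi = stimulus > (1 + s) * K
        lo = stimulus < (1 - s) * K
        drho[hi] = B * (stimulus[hi] - (1 + s) * K)
        drho[lo] = B * (stimulus[lo] - (1 - s) * K)
        return np.clip(rho + dt * drho, rho_min, rho_max)

    # Iterate from a homogeneous start, as in the paper's initialization.
    rng = np.random.default_rng(1)
    rho = np.full(100, 0.8)
    sed = np.abs(rng.normal(0.004, 0.002, 100)) * rho
    for _ in range(200):
        rho = remodel_step(rho, sed)   # sed held fixed here for brevity
    ```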

  14. Performance of Digital Communications over Selective Fading Channels.

    DTIC Science & Technology

    1983-09-01

    [Abstract garbled by OCR extraction; only fragments are recoverable. The text assumes the receiver has compensated for the mean path delay t_d, setting it to zero to establish a time reference, and identifies a derived quantity as the key parameter determining the signal-to-noise ratio for single-pulse reception over the selective fading channel.]

  15. Speech emotion recognition methods: A literature review

    NASA Astrophysics Data System (ADS)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, research attention to emotional speech signals in human-machine interfaces has increased due to the availability of high computational capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selecting suitable feature sets, designing proper classification methods and preparing an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, this paper also evaluates the performance and limitations of the available methods. Furthermore, it highlights the current promising directions for improvement of speech emotion recognition systems.

  16. Mercury orbiter transport study

    NASA Technical Reports Server (NTRS)

    Friedlander, A. L.; Feingold, H.

    1977-01-01

    A data base and comparative performance analyses of alternative flight mode options for delivering a range of payload masses to Mercury orbit are provided. Launch opportunities over the period 1980-2000 are considered. Extensive data trades are developed for the ballistic flight mode option utilizing one or more swingbys of Venus. Advanced transport options studied include solar electric propulsion and solar sailing. Results show the significant performance tradeoffs among such key parameters as trip time, payload mass, propulsion system mass, orbit size, launch year sensitivity and relative cost-effectiveness. Handbook-type presentation formats, particularly in the case of ballistic mode data, provide planetary program planners with an easily used source of reference information essential in the preliminary steps of mission selection and planning.

  17. Review of Random Phase Encoding in Volume Holographic Storage

    PubMed Central

    Su, Wei-Chia; Sun, Ching-Cherng

    2012-01-01

    Random phase encoding is a unique technique for volume holograms that can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss the diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to the multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of the phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.

  18. Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK) on the Vision II turbine rotorcraft UAV over the Florida Keys

    NASA Astrophysics Data System (ADS)

    Holasek, R. E.; Nakanishi, K.; Swartz, B.; Zacaroli, R.; Hill, B.; Naungayan, J.; Herwitz, S.; Kavros, P.; English, D. C.

    2013-12-01

    As part of the NASA ROSES program, the NovaSol Selectable Hyperspectral Airborne Remote-sensing Kit (SHARK) was flown as the payload on the unmanned Vision II helicopter. The goal of the May 2013 data collection was to obtain high-resolution visible and near-infrared (visNIR) hyperspectral data of seagrasses and coral reefs in the Florida Keys. The specifications of the SHARK hyperspectral system and the Vision II turbine rotorcraft will be described, along with the process of integrating the payload with the vehicle platform. The minimal size, weight, and power (SWaP) specifications of the SHARK system are an ideal match to the Vision II helicopter and its flight parameters. One advantage of the helicopter over fixed-wing platforms is its inherent ability to take off and land in a limited area without a runway, enabling the UAV to be located in close proximity to the experiment areas and the science team. Decisions regarding integration times, waypoint selection, mission duration, and mission frequency can be based on the local environmental conditions and modified just prior to takeoff. The operational procedures and coordination between the UAV pilot, payload operator, and scientist will be described. The SHARK system includes an inertial navigation system and digital elevation model (DEM), which allow image coordinates to be calculated onboard the aircraft in real time. Examples of the geo-registered images from the data collection will be shown. [Figure captions: SHARK mounted below the VTUAV; SHARK deployed on the VTUAV over water.]

  19. A Parameter Subset Selection Algorithm for Mixed-Effects Models

    DOE PAGES

    Schmidt, Kathleen L.; Smith, Ralph C.

    2016-01-01

    Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.

  20. Focal-Plane Alignment Sensing

    DTIC Science & Technology

    1993-02-01

    [Abstract garbled by extraction; only fragments are recoverable. They concern noise amplification induced by the inverse filter, a problem that often arises in conventional image deblurring; noise sensitivity; strategies for selecting a regularization parameter, including CLS and Wiener parameter selection; and the probability of convergence to within a prescribed bound.]

  1. Selection of noisy measurement locations for error reduction in static parameter identification

    NASA Astrophysics Data System (ADS)

    Sanayei, Masoud; Onipede, Oladipo; Babu, Suresh R.

    1992-09-01

    An incomplete set of noisy static force and displacement measurements is used for parameter identification of structures at the element level. Measurement location and the level of accuracy in the measured data can drastically affect the accuracy of the identified parameters. A heuristic method is presented to select a limited number of degrees of freedom (DOF) to perform a successful parameter identification and to reduce the impact of measurement errors on the identified parameters. This pretest simulation uses an error sensitivity analysis to determine the effect of measurement errors on the parameter estimates. The selected DOF can be used for nondestructive testing and health monitoring of structures. Two numerical examples, one for a truss and one for a frame, are presented to demonstrate that using the measurements at the selected subset of DOF can limit the error in the parameter estimates.

  2. Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties

    USDA-ARS?s Scientific Manuscript database

    Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as alter the...

  3. Microwave moisture sensing of seedcotton: Part 1: Seedcotton microwave material properties

    USDA-ARS?s Scientific Manuscript database

    Moisture content at harvest is a key parameter that impacts quality and how well the cotton crop can be stored without degrading before processing. It is also a key parameter of interest for harvest time field trials as it can directly influence the quality of the harvested crop as well as skew the...

  4. A Nomographic Methodology for Use in Performance Trade-Off Studies of Parabolic Dish Solar Power Modules

    NASA Technical Reports Server (NTRS)

    Selcuk, M. K.; Fujita, T.

    1984-01-01

    A simple graphical method was developed to undertake technical design trade-off studies for individual parabolic dish modules, each comprising a two-axis tracking parabolic dish with a cavity receiver and power conversion assembly at the focal point. The results of these technical studies are then used in performing the techno-economic analyses required for determining appropriate subsystem sizing. Selected graphs that characterize the performance of subsystems within the module were arranged in the form of a nomogram that enables an investigator to carry out several design trade-off studies. Key performance parameters encompassed in the nomogram include receiver losses, intercept factor, engine rating, and engine efficiency. Design and operation parameters, such as concentrator size, receiver type (open or windowed aperture), receiver aperture size, operating temperature of the receiver and engine, engine partial-load characteristics, concentrator slope error, and the type of reflector surface, are also included in the graphical solution. Cost considerations are not included.

  5. Shocks in the relativistic transonic accretion with low angular momentum

    NASA Astrophysics Data System (ADS)

    Suková, P.; Charzyński, S.; Janiuk, A.

    2017-12-01

    We perform 1D/2D/3D relativistic hydrodynamical simulations of accretion flows with low angular momentum, filling the gap between spherically symmetric Bondi accretion and disc-like accretion flows. Scenarios with different directional distributions of the angular momentum of falling matter and varying values of key parameters, such as the spin of the central black hole and the energy and angular momentum of the matter, are considered. In some of the scenarios a shock front is formed. We identify ranges of parameters for which the shock, after formation, moves toward or away from the central black hole, or for which a long-lasting oscillating shock is observed. The frequencies of oscillation of the shock position, which can cause flaring in the mass accretion rate, are extracted. The results are scalable with the mass of the central black hole and can be compared to the quasi-periodic oscillations of selected microquasars (such as GRS 1915+105, XTE J1550-564 or IGR J17091-3624), as well as to the supermassive black holes in the centres of weakly active galaxies, such as Sgr A*.

  6. Soft sensor development for Mooney viscosity prediction in rubber mixing process based on GMMDJITGPR algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping

    2017-01-01

    In the rubber mixing process, the key parameter (Mooney viscosity), which is used to evaluate the property of the product, can only be obtained offline with a 4-6 h delay. It would be quite helpful for the industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not function well due to the multi-phase and nonlinear properties of the process. The purpose of this paper is to develop an efficient soft sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, phase information is extracted during local modeling. Using the Gaussian local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors, using samples from a real industrial rubber mixing process.
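
    A minimal sketch of the just-in-time local modeling idea follows, assuming scikit-learn is available. Plain Euclidean similarity stands in for the paper's GMMD phase-aware selection criterion, and the historical database is synthetic; both are assumptions of this sketch.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Historical database: X = process variables, y = lab-measured Mooney viscosity.
    X_hist = rng.normal(size=(500, 5))
    y_hist = X_hist @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) + 0.1 * rng.normal(size=500)

    def jit_predict(x_query, n_local=50):
        """Just-in-time soft sensor: fit a local GP on the samples most
        similar to the query, then predict. A new local model per query."""
        d = np.linalg.norm(X_hist - x_query, axis=1)   # stand-in similarity
        idx = np.argsort(d)[:n_local]                  # local modeling set
        gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(X_hist[idx], y_hist[idx])
        return float(gp.predict(x_query.reshape(1, -1))[0])

    print(jit_predict(rng.normal(size=5)))
    ```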

  7. Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research

    PubMed Central

    Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi

    2016-01-01

    Traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods; however, all of them have shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probability of the roads upstream of the target road and then predicts the traffic flow at the next time step using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to have good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, and it can be used for real-time traffic flow prediction. PMID:27872637
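
    The core prediction step can be written in a few lines. The link flows and transfer probabilities below are invented for illustration; in the paper, the transfer probabilities are estimated from historical data, with the Newton interior-point step finding their optimal values.

    ```python
    import numpy as np

    # Flows observed on three upstream links at time t (vehicles per interval).
    upstream_flow = np.array([120.0, 80.0, 45.0])

    # Fraction of each upstream link's traffic that transfers onto the target
    # road (illustrative values, not estimated ones).
    p_transfer = np.array([0.35, 0.60, 0.20])

    # Flow-conservation prediction: flow(t+1) = sum_i p_i * upstream_flow_i.
    flow_next = float(p_transfer @ upstream_flow)
    print(f"predicted flow at t+1: {flow_next:.0f} vehicles per interval")
    ```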

  8. PEGASO: A Personalised and Motivational ICT System to Empower Adolescents Towards Healthy Lifestyles.

    PubMed

    Carrino, Stefano; Caon, Maurizio; Angelini, Leonardo; Mugellini, Elena; Abou Khaled, Omar; Orte, Silvia; Vargiu, Eloisa; Coulson, Neil; Serrano, José C E; Tabozzi, Sarah; Lafortuna, Claudio; Rizzo, Giovanna

    2014-01-01

    Unhealthy dietary behaviours and physical inactivity habits are key risk factors for major non-communicable diseases. Several studies demonstrate that juvenile obesity can lead to serious medical conditions and pathologies and have important psycho-social consequences. PEGASO is a multidisciplinary project aimed at promoting healthy lifestyles among teenagers through assistive technology. The core of this project is the ICT system, which provides tailored interventions to users through their smartphones in order to motivate them. The novelty of this approach consists of developing a Virtual Individual Model (VIM) for user characterization, based on physical, functional and behavioural parameters appropriately selected by experts. These parameters are digitised and updated through smartphone-based user monitoring; data mining algorithms are applied to detect activity and nutrition habits, and this information is used to provide personalised feedback. The user interface will be developed using gamified approaches and integrating serious games to effectively promote health literacy and facilitate behaviour change.

  9. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to their optimization and space-searching capabilities, genetic algorithms were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
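
    A minimal sketch of GA-driven input selection is given below. To keep it fast, a ridge regressor stands in for the neural network function approximator, and the data, population size and mutation rate are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for engine sensor data: 20 candidate inputs, few relevant.
    X = rng.normal(size=(300, 20))
    y = 2 * X[:, 3] - 1.5 * X[:, 7] + 0.5 * X[:, 12] + 0.1 * rng.normal(size=300)

    def fitness(mask):
        """Cross-validated score of the input subset encoded by a 0/1 mask."""
        if mask.sum() == 0:
            return -np.inf
        return cross_val_score(Ridge(), X[:, mask.astype(bool)], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(30, 20))              # random bitmask population
    for _ in range(40):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]          # truncation selection
        children = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(10, size=2)]
            cut = rng.integers(1, 19)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(20) < 0.05] ^= 1            # bit-flip mutation
            children.append(child)
        pop = np.array(children)

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected inputs:", np.flatnonzero(best))
    ```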

  10. Synergy of the SimSphere land surface process model with ASTER imagery for the retrieval of spatially distributed estimates of surface turbulent heat fluxes and soil moisture content

    NASA Astrophysics Data System (ADS)

    Petropoulos, George; Wooster, Martin J.; Carlson, Toby N.; Drake, Nick

    2010-05-01

    Accurate, spatially explicit estimates of key land-atmosphere fluxes and related land surface parameters are of key importance in a range of disciplines, including hydrology, meteorology, agriculture and ecology. Estimation of those parameters from remote sensing frequently employs the integration of such data with mathematical representations of the transfers of energy, mass and radiation in the soil-vegetation-atmosphere continuum, known as Soil Vegetation Atmosphere Transfer (SVAT) models. The ability of one such inversion modelling scheme to resolve key surface energy fluxes and soil surface moisture content is examined here using data from a multispectral high-spatial-resolution imaging instrument, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and the SimSphere one-dimensional SVAT model. The accuracy of the investigated methodology, the so-called "triangle" method, is verified using validated ground observations from selected days at nine CARBOEUROPE IP sites representing a variety of climatic, topographic and environmental conditions. Subsequently, a new framework is suggested for the retrieval of two additional parameters by the investigated method, namely the Evaporative Fraction (EF) and the Non-Evaporative Fraction (NEF). Results indicated close agreement of the inverted surface flux and surface moisture availability maps, as well as of the EF and NEF parameters, with the observations, both spatially and temporally, with accuracies comparable to those obtained in similar experiments with high spatial resolution data. Regional inspection of the inverted surface flux maps showed an explainable distribution in the range of the inverted parameters in relation to the surface heterogeneity. The overall performance of the "triangle" inversion methodology was found to be affected predominantly by a correct SVAT model initialisation representative of the test site environment, most importantly the atmospheric conditions required as initial conditions. This study represents the first comprehensive evaluation of the performance of this particular methodological implementation in a European setting using the SimSphere SVAT model with ASTER data. The present work is also timely in that a variation of this specific inversion methodology has been proposed for the operational retrieval of soil surface moisture content by the National Polar-orbiting Operational Environmental Satellite System (NPOESS), on a series of satellite platforms due to be launched over the next 12 years starting from 2012. KEYWORDS: micrometeorology, surface heat fluxes, soil moisture content, ASTER, triangle method, SimSphere, CarboEurope IP
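
    For reference, the two added parameters are commonly defined from the surface energy balance; a standard formulation (a summary, not quoted from the paper) is

    ```latex
    \mathrm{EF} \;=\; \frac{LE}{R_n - G} \;=\; \frac{LE}{H + LE},
    \qquad
    \mathrm{NEF} \;=\; \frac{H}{H + LE} \;=\; 1 - \mathrm{EF},
    ```

    where $LE$ is the latent heat flux, $H$ the sensible heat flux, $R_n$ the net radiation and $G$ the soil heat flux, assuming energy balance closure $R_n - G = H + LE$.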

  11. Population genetics of polymorphism and divergence for diploid selection models with arbitrary dominance.

    PubMed

    Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D

    2004-09-01

    We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.
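
    For context, in the genic-selection limit of the Poisson random-field framework (the special case against which the paper's likelihood-ratio test compares), the expected density of segregating sites at population frequency $q$ takes the standard Sawyer-Hartl form; conventions for the scaled parameters vary, so treat this as a sketch:

    ```latex
    f(q)\,dq \;=\; \theta\,
    \frac{1 - e^{-2\gamma(1-q)}}{\bigl(1 - e^{-2\gamma}\bigr)\, q\,(1-q)}\,dq ,
    ```

    where $\theta$ is the scaled mutation rate and $\gamma$ the scaled selection coefficient; the paper's diploid model generalizes this by introducing a dominance parameter into the underlying diffusion.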

  12. Process development for robust removal of aggregates using cation exchange chromatography in monoclonal antibody purification with implementation of quality by design.

    PubMed

    Xu, Zhihao; Li, Jason; Zhou, Joe X

    2012-01-01

    Aggregate removal is one of the most important aspects in monoclonal antibody (mAb) purification. Cation-exchange chromatography (CEX), a widely used polishing step in mAb purification, is able to clear both process-related impurities and product-related impurities. In this study, with the implementation of quality by design (QbD), a process development approach for robust removal of aggregates using CEX is described. First, resin screening studies were performed and a suitable CEX resin was chosen because of its relatively better selectivity and higher dynamic binding capacity. Second, a pH-conductivity hybrid gradient elution method for the CEX was established, and the risk assessment for the process was carried out. Third, a process characterization study was used to evaluate the impact of the potentially important process parameters on the process performance with respect to aggregate removal. Accordingly, a process design space was established. Aggregate level in load is the critical parameter. Its operating range is set at 0-3% and the acceptable range is set at 0-5%. Equilibration buffer is the key parameter. Its operating range is set at 40 ± 5 mM acetate, pH 5.0 ± 0.1, and acceptable range is set at 40 ± 10 mM acetate, pH 5.0 ± 0.2. Elution buffer, load mass, and gradient elution volume are non-key parameters; their operating ranges and acceptable ranges are equally set at 250 ± 10 mM acetate, pH 6.0 ± 0.2, 45 ± 10 g/L resin, and 10 ± 20% CV respectively. Finally, the process was scaled up 80 times and the impurities removal profiles were revealed. Three scaled-up runs showed that the size-exclusion chromatography (SEC) purity of the CEX pool was 99.8% or above and the step yield was above 92%, thereby proving that the process is both consistent and robust.

  13. Picking ChIP-seq peak detectors for analyzing chromatin modification experiments

    PubMed Central

    Micsinai, Mariann; Parisi, Fabio; Strino, Francesco; Asp, Patrik; Dynlacht, Brian D.; Kluger, Yuval

    2012-01-01

    Numerous algorithms have been developed to analyze ChIP-Seq data. However, the complexity of analyzing diverse patterns of ChIP-Seq signals, especially for epigenetic marks, still calls for the development of new algorithms and objective comparisons of existing methods. We developed Qeseq, an algorithm to detect regions of increased ChIP read density relative to background. Qeseq employs critical novel elements, such as iterative recalibration and neighbor joining of reads, to identify enriched regions of any length. To objectively assess its performance relative to 14 other ChIP-Seq peak finders, we designed a novel protocol based on Validation Discriminant Analysis (VDA) to optimally select validation sites and generated two validation datasets, which are the most comprehensive to date for algorithmic benchmarking of key epigenetic marks. In addition, we systematically explored a total of 315 diverse parameter configurations from these algorithms and found that parameters that are optimal in one dataset typically do not generalize to other datasets. Nevertheless, default parameters show the most stable performance, suggesting that they should be used. This study also provides a reproducible and generalizable methodology for unbiased comparative analysis of high-throughput sequencing tools that can facilitate future algorithmic development. PMID:22307239

  14. Modeling seasonal variability of carbonate system parameters at the sediment-water interface in the Baltic Sea (Gdansk Deep)

    NASA Astrophysics Data System (ADS)

    Protsenko, Elizaveta; Yakubov, Shamil; Lessin, Gennady; Yakushev, Evgeniy; Sokołowski, Adam

    2017-04-01

    A one-dimensional, fully coupled benthic-pelagic biogeochemical model, BROM (Bottom RedOx Model), was used to simulate the seasonal variability of biogeochemical parameters in the upper sediment, the bottom boundary layer and the water column of the Gdansk Deep in the Baltic Sea. The model represents key biogeochemical processes of transformation of C, N, P, Si, O, S, Mn and Fe, and the processes of vertical transport in the water column and the sediments. The hydrophysical block of BROM was forced by output calculated with the GETM model (General Estuarine Transport Model). In this study we focused on carbonate system parameters in the Baltic Sea, mainly on their distributions near the sediment-water interface. To validate BROM we used field data (concentrations of the main nutrients in the water column and in the porewater of the upper sediment) from the Gulf of Gdansk. The model allowed us to simulate the baseline ranges of seasonal variability of pH, alkalinity, TIC and calcite/aragonite saturation, as well as vertical carbon fluxes, in a region potentially selected for CCS storage. This work was supported by the projects EEA CO2MARINE and STEMM-CCS.

  15. A comprehensive method for GNSS data quality determination to improve ionospheric data analysis.

    PubMed

    Kim, Minchan; Seo, Jiwon; Lee, Jiyun

    2014-08-14

    Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies, providing global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand to improve the observability of the ionosphere, the amount of poor-quality GNSS observation data is also increasing, and the use of poor-quality GNSS data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to determine the quality of GNSS observations for the purpose of ionospheric studies. The algorithms are designed specifically to compute key GNSS data quality parameters that affect the quality of ionospheric products. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely depending on the station, and the data quality of individual stations persists for an extended time period. When compared to conventional methods, the quality parameters obtained from the proposed method have a stronger correlation with the quality of ionospheric data. The results suggest that a set of data quality parameters, when used in combination, can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis.

  16. Picking ChIP-seq peak detectors for analyzing chromatin modification experiments.

    PubMed

    Micsinai, Mariann; Parisi, Fabio; Strino, Francesco; Asp, Patrik; Dynlacht, Brian D; Kluger, Yuval

    2012-05-01

    Numerous algorithms have been developed to analyze ChIP-Seq data. However, the complexity of analyzing diverse patterns of ChIP-Seq signals, especially for epigenetic marks, still calls for the development of new algorithms and objective comparisons of existing methods. We developed Qeseq, an algorithm to detect regions of increased ChIP read density relative to background. Qeseq employs critical novel elements, such as iterative recalibration and neighbor joining of reads, to identify enriched regions of any length. To objectively assess its performance relative to 14 other ChIP-Seq peak finders, we designed a novel protocol based on Validation Discriminant Analysis (VDA) to optimally select validation sites and generated two validation datasets, which are the most comprehensive to date for algorithmic benchmarking of key epigenetic marks. In addition, we systematically explored a total of 315 diverse parameter configurations from these algorithms and found that parameters that are optimal in one dataset typically do not generalize to other datasets. Nevertheless, default parameters show the most stable performance, suggesting that they should be used. This study also provides a reproducible and generalizable methodology for unbiased comparative analysis of high-throughput sequencing tools that can facilitate future algorithmic development.

  17. A Comprehensive Method for GNSS Data Quality Determination to Improve Ionospheric Data Analysis

    PubMed Central

    Kim, Minchan; Seo, Jiwon; Lee, Jiyun

    2014-01-01

    Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies, providing global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand to improve the observability of the ionosphere, the amount of poor-quality GNSS observation data is also increasing, and the use of poor-quality GNSS data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to determine the quality of GNSS observations for the purpose of ionospheric studies. The algorithms are designed specifically to compute key GNSS data quality parameters that affect the quality of ionospheric products. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely depending on the station, and the data quality of individual stations persists for an extended time period. When compared to conventional methods, the quality parameters obtained from the proposed method have a stronger correlation with the quality of ionospheric data. The results suggest that a set of data quality parameters, when used in combination, can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis. PMID:25196005

  18. Synthesis and analysis of separation networks for the recovery of intracellular chemicals generated from microbial-based conversions

    DOE PAGES

    Yenkie, Kirti M.; Wu, Wenzhao; Maravelias, Christos T.

    2017-05-08

    Background. Bioseparations can contribute to more than 70% of the total production cost of a bio-based chemical, and if the desired chemical is localized intracellularly, there can be additional challenges associated with its recovery. Based on the properties of the desired chemical and other components in the stream, there can be multiple feasible options for product recovery. These options are composed of several alternative technologies, performing similar tasks. The suitability of a technology for a particular chemical depends on (1) its performance parameters, such as separation efficiency; (2) cost or amount of added separating agent; (3) properties of the bioreactor effluent (e.g., biomass titer, product content); and (4) final product specifications. Our goal is to first synthesize alternative separation options and then analyze how technology selection affects the overall process economics. To achieve this, we propose an optimization-based framework that helps in identifying the critical technologies and parameters. Results. We study the separation networks for two representative classes of chemicals based on their properties. The separation network is divided into three stages: cell and product isolation (stage I), product concentration (II), and product purification and refining (III). Each stage exploits differences in specific product properties for achieving the desired product quality. The cost contribution analysis for the two cases (intracellular insoluble and intracellular soluble) reveals that stage I is the key cost contributor (>70% of the overall cost). Further analysis suggests that changes in input conditions and technology performance parameters lead to new designs primarily in stage I. Conclusions. The proposed framework provides significant insights for technology selection and assists in making informed decisions regarding technologies that should be used in combination for a given set of stream/product properties and final output specifications. Additionally, the parametric sensitivity provides an opportunity to make crucial design and selection decisions in a comprehensive and rational manner. This will prove valuable in the selection of chemicals to be produced using bioconversions (bioproducts) as well as in creating better bioseparation flow sheets for detailed economic assessment and process implementation on the commercial scale.

  19. Synthesis and analysis of separation networks for the recovery of intracellular chemicals generated from microbial-based conversions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yenkie, Kirti M.; Wu, Wenzhao; Maravelias, Christos T.

    Background. Bioseparations can contribute to more than 70% of the total production cost of a bio-based chemical, and if the desired chemical is localized intracellularly, there can be additional challenges associated with its recovery. Based on the properties of the desired chemical and other components in the stream, there can be multiple feasible options for product recovery. These options are composed of several alternative technologies, performing similar tasks. The suitability of a technology for a particular chemical depends on (1) its performance parameters, such as separation efficiency; (2) cost or amount of added separating agent; (3) properties of the bioreactor effluent (e.g., biomass titer, product content); and (4) final product specifications. Our goal is to first synthesize alternative separation options and then analyze how technology selection affects the overall process economics. To achieve this, we propose an optimization-based framework that helps in identifying the critical technologies and parameters. Results. We study the separation networks for two representative classes of chemicals based on their properties. The separation network is divided into three stages: cell and product isolation (stage I), product concentration (II), and product purification and refining (III). Each stage exploits differences in specific product properties for achieving the desired product quality. The cost contribution analysis for the two cases (intracellular insoluble and intracellular soluble) reveals that stage I is the key cost contributor (>70% of the overall cost). Further analysis suggests that changes in input conditions and technology performance parameters lead to new designs primarily in stage I. Conclusions. The proposed framework provides significant insights for technology selection and assists in making informed decisions regarding technologies that should be used in combination for a given set of stream/product properties and final output specifications. Additionally, the parametric sensitivity provides an opportunity to make crucial design and selection decisions in a comprehensive and rational manner. This will prove valuable in the selection of chemicals to be produced using bioconversions (bioproducts) as well as in creating better bioseparation flow sheets for detailed economic assessment and process implementation on the commercial scale.

  20. Synthesis and analysis of separation networks for the recovery of intracellular chemicals generated from microbial-based conversions.

    PubMed

    Yenkie, Kirti M; Wu, Wenzhao; Maravelias, Christos T

    2017-01-01

    Bioseparations can contribute to more than 70% in the total production cost of a bio-based chemical, and if the desired chemical is localized intracellularly, there can be additional challenges associated with its recovery. Based on the properties of the desired chemical and other components in the stream, there can be multiple feasible options for product recovery. These options are composed of several alternative technologies, performing similar tasks. The suitability of a technology for a particular chemical depends on (1) its performance parameters, such as separation efficiency; (2) cost or amount of added separating agent; (3) properties of the bioreactor effluent (e.g., biomass titer, product content); and (4) final product specifications. Our goal is to first synthesize alternative separation options and then analyze how technology selection affects the overall process economics. To achieve this, we propose an optimization-based framework that helps in identifying the critical technologies and parameters. We study the separation networks for two representative classes of chemicals based on their properties. The separation network is divided into three stages: cell and product isolation (stage I), product concentration (II), and product purification and refining (III). Each stage exploits differences in specific product properties for achieving the desired product quality. The cost contribution analysis for the two cases (intracellular insoluble and intracellular soluble) reveals that stage I is the key cost contributor (>70% of the overall cost). Further analysis suggests that changes in input conditions and technology performance parameters lead to new designs primarily in stage I. The proposed framework provides significant insights for technology selection and assists in making informed decisions regarding technologies that should be used in combination for a given set of stream/product properties and final output specifications. Additionally, the parametric sensitivity provides an opportunity to make crucial design and selection decisions in a comprehensive and rational manner. This will prove valuable in the selection of chemicals to be produced using bioconversions (bioproducts) as well as in creating better bioseparation flow sheets for detailed economic assessment and process implementation on the commercial scale.

  1. Fire regime: history and definition of a key concept in disturbance ecology.

    PubMed

    Krebs, Patrik; Pezzatti, Gianni B; Mazzoleni, Stefano; Talbot, Lee M; Conedera, Marco

    2010-06-01

    "Fire regime" has become, in recent decades, a key concept in many scientific domains. In spite of its wide spread use, the concept still lacks a clear and wide established definition. Many believe that it was first discussed in a famous report on national park management in the United States, and that it may be simply defined as a selection of a few measurable parameters that summarize the fire occurrence patterns in an area. This view has been uncritically perpetuated in the scientific community in the last decades. In this paper we attempt a historical reconstruction of the origin, the evolution and the current meaning of "fire regime" as a concept. Its roots go back to the 19th century in France and to the first half of the 20th century in French African colonies. The "fire regime" concept took time to evolve and pass from French into English usage and thus to the whole scientific community. This coincided with a paradigm shift in the early 1960s in the United States, where a favourable cultural, social and scientific climate led to the natural role of fires as a major disturbance in ecosystem dynamics becoming fully acknowledged. Today the concept of "fire regime" refers to a collection of several fire-related parameters that may be organized, assembled and used in different ways according to the needs of the users. A structure for the most relevant categories of parameters is proposed, aiming to contribute to a unified concept of "fire regime" that can reconcile the physical nature of fire with the socio-ecological context within which it occurs.

  2. Elisa technology consolidation study overview

    NASA Astrophysics Data System (ADS)

    Fitzsimons, E. D.; Brandt, N.; Johann, U.; Kemble, S.; Schulte, H.-R.; Weise, D.; Ziegler, T.

    2017-11-01

    The eLISA (evolved Laser Interferometer Space Antenna) mission is an ESA L3 concept mission intended to detect and characterise gravitational radiation emitted from astrophysical sources [1]. Current designs for eLISA [2] are based on the ESA study conducted in 2011 to reformulate the original ESA/NASA LISA concept [3] into an ESA-only L1 candidate named NGO (New Gravitational Observatory) [4]. During this brief reformulation period, a number of significant changes were made to the baseline LISA design in order to create a more cost-effective mission. Some of the key changes implemented during this reformulation were:
    • A reduction in the inter-satellite distance (the arm length) from 5 Gm to 1 Gm.
    • A reduction in the diameter of the telescope from 40 cm to 20 cm.
    • A reduction in the required laser power by approximately 40%.
    • Implementation of only 2 laser arms instead of 3.
    Many further simplifications were then enabled by these main design changes, including the elimination of payload items in the two spacecraft (S/C) with no laser link between them (the daughter S/C), a reduction in the size and complexity of the optical bench, and the elimination of the Point Ahead Angle Mechanism (PAAM), which corrects for variations in the pointing direction to the far S/C caused by orbital dynamics [4] [5]. In the run-up to an L3 mission definition phase later in the decade, it is desirable to review these design choices and analyse the inter-dependencies and scaling between the key mission parameters, with the goal of better understanding the parameter space and ensuring that the final selection of the eLISA mission parameters achieves the optimal balance between cost, complexity and science return.

  3. Extending the Peak Bandwidth of Parameters for Softmax Selection in Reinforcement Learning.

    PubMed

    Iwata, Kazunori

    2016-05-11

    Softmax selection is one of the most popular methods for action selection in reinforcement learning. Although various recently proposed methods may be more effective with full parameter tuning, implementing a complicated method that requires the tuning of many parameters can be difficult. Thus, softmax selection is still worth revisiting, considering the cost savings of its implementation and tuning. In fact, this method works adequately in practice with only one parameter appropriately set for the environment. The aim of this paper is to improve the variable setting of this method to extend the bandwidth of good parameters, thereby reducing the cost of implementation and parameter tuning. To achieve this, we take advantage of the asymptotic equipartition property in a Markov decision process to extend the peak bandwidth of softmax selection. Using a variety of episodic tasks, we show that our setting is effective in extending the bandwidth and that it yields a better policy in terms of stability. The bandwidth is quantitatively assessed in a series of statistical tests.
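
    A minimal sketch of softmax (Boltzmann) action selection, the method being revisited; the action values and temperature below are illustrative.

    ```python
    import numpy as np

    def softmax_action(q_values, tau, rng=np.random.default_rng()):
        """Sample an action with probability proportional to exp(Q/tau).

        tau is the single tunable parameter: high tau gives near-uniform
        exploration, low tau approaches greedy selection."""
        prefs = np.asarray(q_values) / tau
        prefs -= prefs.max()               # stabilize the exponentials
        probs = np.exp(prefs)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Empirical selection frequencies for three actions at tau = 0.5.
    picks = [softmax_action([1.0, 2.0, 1.5], tau=0.5) for _ in range(10_000)]
    print(np.bincount(picks, minlength=3) / len(picks))   # ~[0.09, 0.66, 0.25]
    ```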

  4. Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix

    NASA Astrophysics Data System (ADS)

    Hagen, V. S.; Arntsen, B.; Raknes, E. B.

    2017-12-01

    Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which serves as the foundation for quantifying the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signals. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach, the resolution and cross-coupling of the estimated parameter update can be found by computing the full Hessian matrix. The resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique, with an expansion to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian and is essential when evaluating the quality of estimated parameters, due to the strong influence of source-receiver geometry and frequency content. The investigation is conducted on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
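
    As a brief sketch of the object being computed (standard FWI notation, not taken verbatim from the abstract): with a least-squares misfit, the full Hessian splits into a Gauss-Newton term and a second-order term,

    ```latex
    \chi(\mathbf{m}) \;=\; \tfrac{1}{2}\,\bigl\lVert \mathbf{F}(\mathbf{m}) - \mathbf{d} \bigr\rVert^{2},
    \qquad
    \mathbf{H} \;=\; \mathbf{J}^{\top}\mathbf{J}
    \;+\; \frac{\partial \mathbf{J}^{\top}}{\partial \mathbf{m}}
    \bigl(\mathbf{F}(\mathbf{m}) - \mathbf{d}\bigr),
    ```

    where $\mathbf{F}$ is the forward modelling operator, $\mathbf{d}$ the observed data and $\mathbf{J} = \partial\mathbf{F}/\partial\mathbf{m}$ the Jacobian. Diagonal blocks of $\mathbf{H}$ indicate how well each parameter class is resolved; off-diagonal blocks quantify cross-coupling between classes.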

  5. Study for the selection of optimal site in northeastern, Mexico for wind power generation using genetic algorithms.

    NASA Astrophysics Data System (ADS)

    Gonzalez, T.; Ruvalcaba, A.; Oliver, L.

    2016-12-01

    Electricity generation from renewable resources has acquired a leading role. Mexico, in particular, has great interest in renewable natural resources for power generation, especially wind energy; the country is therefore rapidly entering the development of wind power generation sites. The development of a wind site as an energy project does not have a standardized methodology: techniques for selecting the best place to install a wind turbine system vary by developer. Generally, developers consider three key factors: 1) the characteristics of the wind, 2) the potential distribution of electricity and 3) transport access to the site. This paper presents a study with a different methodology, carried out in two stages. The first, at regional scale, uses "spatial" and "natural" criteria to select a region based on cartographic features such as political and physiographic divisions, the location of protected natural areas, water bodies and urban criteria, together with natural criteria such as the amount and direction of wind, land type and use, vegetation, topography and the biodiversity of the site. Applying these criteria yields a first optimal selection area. The second stage includes criteria and variables at a detailed scale; the analysis of all collected data will provide new parameters (decision variables) for the site. The overall analysis of the information, based on these criteria, indicates that the best location for the wind farm would be southern Coahuila and the central part of Nuevo Leon. The wind power site will contribute to the economic growth of important cities including Monterrey. Finally, a genetic algorithm computational model will be used as a tool to determine the best site selection depending on the parameters considered.

  6. Fish oil LC-PUFAs do not affect blood coagulation parameters and bleeding manifestations: Analysis of 8 clinical studies with selected patient groups on omega-3-enriched medical nutrition.

    PubMed

    Jeansen, Stephanie; Witkamp, Renger F; Garthoff, Jossie A; van Helvoort, Ardy; Calder, Philip C

    2018-06-01

    The increased consumption of fish oil enriched products exposes a wide diversity of people, including the elderly and those with impaired health, to relatively high amounts of n-3 long-chain polyunsaturated fatty acids (n-3 LC-PUFAs). There is an ongoing debate around the possible adverse effects of n-3 LC-PUFAs on bleeding risk, particularly relevant in people with a medical history of cardiovascular events or using antithrombotic drugs. This analysis of 8 clinical intervention studies conducted with enteral medical nutrition products containing fish oil as a source of n-3 LC-PUFAs addresses the occurrence of bleeding-related adverse events and effects on key coagulation parameters (prothrombin time [PT] and (activated) partial thromboplastin time [(a)PTT]). In all the patients considered (over 600 subjects treated with the active product in total), with moderate to severe disease, with or without concomitant use of antithrombotic agents, at home or in an intensive care unit (ICU), no evidence of increased risk of bleeding with use of n-3 LC-PUFAs was observed. Furthermore, there were no statistically significant changes from baseline in the measured coagulation parameters. These findings further support the safe consumption of n-3 LC-PUFAs, even at short-term doses up to 10 g/day of eicosapentaenoic acid + docosahexaenoic acid (EPA + DHA), or above 1.5 g/day consumed for up to 52 weeks, in selected vulnerable and sensitive populations such as subjects with gastrointestinal cancer or patients in an ICU. We found no evidence to support any concern regarding the application of n-3 LC-PUFAs and a potentially increased risk of adverse bleeding manifestations in these selected patient populations consuming fish oil enriched medical nutrition.

  7. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected, and the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets, which poses a challenge for using machine learning in the clinical big data era. To address this challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method and show that, compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.
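
    The progressive sampling idea can be sketched as follows; for brevity, a keep-the-top-half rule over growing data samples stands in for the paper's Bayesian-optimization search, and the algorithm space is reduced to a single classifier family (both are assumptions of this sketch).

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

    # Candidate hyper-parameter configurations (illustrative grid).
    candidates = [{"n_estimators": n, "max_depth": d}
                  for n in (50, 100, 200) for d in (4, 8, None)]

    def score(cfg, n_samples):
        """Cross-validated accuracy of one configuration on a data subsample."""
        idx = rng.choice(len(X), n_samples, replace=False)
        clf = RandomForestClassifier(**cfg, random_state=0)
        return cross_val_score(clf, X[idx], y[idx], cv=3).mean()

    # Progressive sampling: evaluate all candidates cheaply on a small sample,
    # then re-evaluate only the survivors on progressively larger samples.
    survivors = candidates
    for n_samples in (500, 2_000, 8_000):
        survivors.sort(key=lambda c: score(c, n_samples), reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]

    print("selected configuration:", survivors[0])
    ```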

  8. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
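
    The following is a minimal sketch (not the authors' code) of the FIM-based ranking step: finite-difference output sensitivities are assembled into a Fisher information matrix, and Cramér-Rao bounds from its inverse rank parameter identifiability. The three-parameter model is an illustrative stand-in for the biomechanical model.

    ```python
    # FIM-based parameter ranking from local output sensitivities.
    import numpy as np

    def model(theta, t):
        # illustrative three-parameter model: a damped oscillation
        a, b, c = theta
        return a * np.exp(-b * t) * np.cos(c * t)

    def sensitivity_matrix(theta, t, h=1e-6):
        # S[i, j] = d y(t_i) / d theta_j, by central differences
        S = np.zeros((len(t), len(theta)))
        for j in range(len(theta)):
            dp = np.zeros(len(theta))
            dp[j] = h * max(1.0, abs(theta[j]))
            S[:, j] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * dp[j])
        return S

    t = np.linspace(0, 10, 200)
    theta0 = np.array([1.0, 0.3, 2.0])            # preliminary estimates
    S = sensitivity_matrix(theta0, t)
    FIM = S.T @ S                                 # assumes unit noise variance
    crlb = np.sqrt(np.diag(np.linalg.inv(FIM)))   # Cramer-Rao lower bounds
    print("parameters ranked most-to-least identifiable:", np.argsort(crlb))
    ```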

  9. An evaluation of microbial profile in halitosis with tongue coating using PCR (polymerase chain reaction)- a clinical and microbiological study.

    PubMed

    Kamaraj R, Dinesh; Bhushan, Kala S; K L, Vandana

    2014-01-01

    A Medline search using the key words halitosis, tongue coating, polymerase chain reaction and microbial profile did not reveal any study. Hence, the purpose of the present investigation was to assess malodor using the organoleptic method and a Tanita device, and to quantify odoriferous microorganisms in chronic periodontitis patients using the polymerase chain reaction (PCR) technique. The study included 30 chronic periodontitis patients. Halitosis was detected using organoleptic assessment and the Tanita breath alert. Microbial analysis of Pg, Tf and Fn was done using PCR. Plaque index (PI), gingival index (GI) and gingival bleeding index (GBI) were recorded. The maximum tongue-coating score of 3 was found in 60% of selected subjects. The Tanita breath alert measured a VSC level of score 2 in 60% of selected subjects, while an organoleptic score of 4 was found in 50% of subjects. The maximum mean value of 31.1±36.5 was found for F. nucleatum (Fn), followed by P. gingivalis (Pg) (13±13.3) and T. forsythia (Tf) (7.16±8.68) in tongue samples of selected patients. A weak positive correlation was found between VSC levels (Tanita score and organoleptic score) and clinical parameters. Halitosis assessment by measuring VSC levels using the organoleptic method and the Tanita breath alert is clinically feasible. Maximum tongue coating was found in 60% of patients. Fn was found comparatively more often than Pg and Tf. A weak positive correlation was found between VSC levels and clinical parameters such as PI, GI and GBI. Thus, the dentist/periodontist should emphasise tongue-cleaning measures to reduce the odoriferous microbial load.

  10. Optimal design and experimental validation of a simulated moving bed chromatography for continuous recovery of formic acid in a model mixture of three organic acids from Actinobacillus bacteria fermentation.

    PubMed

    Park, Chanhun; Nam, Hee-Geun; Lee, Ki Bong; Mun, Sungyong

    2014-10-24

    The economically-efficient separation of formic acid from acetic acid and succinic acid has been a key issue in the production of formic acid by Actinobacillus bacteria fermentation. To address this issue, an optimal three-zone simulated moving bed (SMB) chromatography for continuous separation of formic acid from acetic acid and succinic acid was developed in this study. As a first step, the adsorption isotherm and mass-transfer parameters of each organic acid on the qualified adsorbent (Amberchrom-CG300C) were determined through a series of multiple frontal experiments. The determined parameters were then used in optimizing the SMB process for the considered separation. During this optimization, an additional investigation for selecting a proper SMB port configuration, which could be more advantageous for attaining better process performance, was carried out between two possible configurations. It was found that if the properly selected port configuration was adopted in the SMB of interest, the throughput and the formic-acid product concentration could be increased by 82% and 181%, respectively. Finally, the optimized SMB process based on the properly selected port configuration was tested experimentally using a self-assembled SMB unit with three zones. The SMB experimental results and the relevant computer simulation verified that the developed process was successful in the continuous recovery of formic acid from the ternary organic-acid mixture of interest with high throughput, high purity, high yield, and high product concentration. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
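
    A minimal sketch of the two-step recipe above follows: rank parameters with total-order Sobol indices of the calibration loss (here via SALib), then pick the subset size by AIC while holding non-selected parameters at nominal values. The toy model, bounds and data are stand-ins for the FSPM, and SALib's saltelli/sobol API is assumed.

    ```python
    # Sensitivity-ranked recalibration: Sobol indices of the loss, then AIC.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol
    from scipy.optimize import least_squares

    def model(p, t):
        return p[0] * np.exp(-p[1] * t) + p[2] * t + p[3]

    t = np.linspace(0, 5, 50)
    obs = model([2.0, 0.8, 0.5, 1.0], t) \
        + 0.05 * np.random.default_rng(1).normal(size=t.size)

    problem = {"num_vars": 4, "names": ["p0", "p1", "p2", "p3"],
               "bounds": [[0.1, 4.0]] * 4}

    # Step 1: total-order Sobol indices of the loss function rank the parameters.
    X = saltelli.sample(problem, 256)
    Y = np.array([np.sum((model(x, t) - obs) ** 2) for x in X])
    order = np.argsort(sobol.analyze(problem, Y)["ST"])[::-1]

    # Step 2: AIC over nested subsets; the rest stay at nominal values.
    nominal = np.ones(4)
    def aic_for(k):
        free = order[:k]
        def resid(q):
            p = nominal.copy()
            p[free] = q
            return model(p, t) - obs
        rss = np.sum(least_squares(resid, nominal[free]).fun ** 2)
        return t.size * np.log(rss / t.size) + 2 * k

    best_k = min(range(1, 5), key=aic_for)
    print("re-estimate parameters:", order[:best_k])
    ```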

  12. Numerical Simulation Of Cratering Effects In Adobe

    DTIC Science & Technology

    2013-07-01

    Report sections: Development of Material Parameters; Problem Setup; Parameter Adjustments; Glossary. Recoverable abstract fragment: "…dependent yield surface with the Geological Yield Surface (GEO) modeled in CTH using well characterized adobe. By identifying key parameters that…"

  13. On Proper Selection of Multihop Relays for Future Enhancement of AeroMACS Networks

    NASA Technical Reports Server (NTRS)

    Kamali, Behnam; Kerczewski, Robert J.; Apaza, Rafael D.

    2015-01-01

    As the Aeronautical Mobile Airport Communications System (AeroMACS) has evolved from a technology concept to a deployed communications network over major US airports, it is now time to contemplate whether the existing capacity of AeroMACS is sufficient to meet the demands set forth by all fixed and mobile applications over the airport surface, given the AeroMACS constraints regarding bandwidth and transmit power. The underlying idea in this article is to present IEEE 802.16j-based WiMAX as a technology that can address future capacity enhancements and therefore is most feasible for AeroMACS applications. The principal argument in favor of IEEE 802.16j technology is the flexible and cost-effective extension of radio coverage afforded by relay-fortified networks, with virtually no increase in power requirements and virtually no rise in interference levels to co-located applications. The IEEE 802.16j-based multihop relay systems are briefly described, with a focus on the key features of this technology, its frame structure, and its architecture. Next, AeroMACS is described as a WiMAX-based wireless network. The two major relay modes supported by the IEEE 802.16j amendment, i.e., transparent and non-transparent, are described, and the benefits of employing multihop relays are listed. Some key challenges related to incorporating relays into AeroMACS networks are discussed. The selection of relay type in a broadband wireless network affects a number of network parameters such as latency, signaling overhead, and PHY (physical layer) and MAC (medium access control) protocols; consequently, it can alter the key network quantities of throughput and QoS (Quality of Service).

  14. Key Considerations of Community, Scalability, Supportability, Security, and Functionality in Selecting Open-Source Software in California Universities as Perceived by Technology Leaders

    ERIC Educational Resources Information Center

    Britton, Todd Alan

    2014-01-01

    Purpose: The purpose of this study was to examine the key considerations of community, scalability, supportability, security, and functionality for selecting open-source software in California universities as perceived by technology leaders. Methods: After a review of the cogent literature, the key conceptual framework categories were identified…

  15. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then apply this model to simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  16. Space Debris Removal Using Multi-Mission Modular Spacecraft

    NASA Astrophysics Data System (ADS)

    Savioli, L.; Francesconi, A.; Maggi, F.; Olivieri, L.; Lorenzini, E.; Pardini, C.

    2013-08-01

    The study and development of ADR missions in LEO have become an issue of topical interest for the space community, since future space flight activities could be threatened by collisional cascade events. This paper presents the analysis of an ADR mission scenario in which modular remover kits are employed to de-orbit selected debris in SSO, while a distinct space tug performs the orbital transfers and rendezvous manoeuvres and installs the remover kits on the client debris. Electro-dynamic tethers and electric propulsion are considered as de-orbiting alternatives, while chemical propulsion is employed for the space tug. The total remover mass and de-orbiting time are identified as key parameters for comparing the performance of the two de-orbiting options, while an optimization of the ΔV required to move between five selected objects is performed for a preliminary system-level design of the space tug. A final controlled re-entry is also considered and performed by means of a hybrid engine.

  17. Life support system definition for a low cost shuttle launched space station.

    NASA Technical Reports Server (NTRS)

    Nelson, W. G.; Cody, J.

    1972-01-01

    Discussion of the tradeoffs and EC/LS definition for a low cost shuttle launched space station to be launched in the late 1970s for use as a long-term manned scientific laboratory. The space station consists of 14-ft-diam modules, clustered together to support a six-man crew at the initial space station (ISS) level and a 12-man crew at the growth space station (GSS) level. Key design guidelines specify low initial cost and low total program cost and require two separate pressurized habitable compartments with independent life support capability. The methodology used to select the EC/LS design consisted of systematically reducing quantitative parameters to a common denominator of cost. This approach eliminates many of the inconsistencies that can occur in such decision making. The EC/LS system selected is a partially closed system which recovers urine, condensate, and wash water and concentrates crew-expired CO2 for use in a low-thrust resistojet propulsion system.

  18. Insight into spin transport in oxide heterostructures from interface-resolved magnetic mapping

    DOE PAGES

    Bruno, F. Y.; Grisolia, M. N.; Visani, C.; ...

    2015-02-17

    At interfaces between complex oxides, electronic, orbital and magnetic reconstructions may produce states of matter absent from the materials involved, offering novel possibilities for electronic and spintronic devices. Here we show that magnetic reconstruction has a strong influence on the interfacial spin selectivity, a key parameter controlling spin transport in magnetic tunnel junctions. In epitaxial heterostructures combining layers of antiferromagnetic LaFeO3 (LFO) and ferromagnetic La0.7Sr0.3MnO3 (LSMO), we find that a net magnetic moment is induced in the first few unit planes of LFO near the interface with LSMO. Using X-ray photoemission electron microscopy, we show that the ferromagnetic domain structure of the manganite electrodes is imprinted into the antiferromagnetic tunnel barrier, endowing it with spin selectivity. Finally, we find that the spin arrangement resulting from coexisting ferromagnetic and antiferromagnetic interactions strongly influences the tunnel magnetoresistance of LSMO/LFO/LSMO junctions through competing spin-polarization and spin-filtering effects.

  19. Engineering trade studies for a quantum key distribution system over a 30  km free-space maritime channel.

    PubMed

    Gariano, John; Neifeld, Mark; Djordjevic, Ivan

    2017-01-20

    Here, we present engineering trade studies of a free-space optical communication system operating over a 30 km maritime channel for the months of January and July. The system under study follows the BB84 protocol with the following assumptions: a weak coherent source is used; Eve performs the intercept-resend and photon-number-splitting attacks; Eve's location is known a priori; and Eve is allowed to know a small percentage of the final key. In this system, we examine the effect of changing several parameters in the following areas: the implementation of the BB84 protocol over the public channel, the technology in the receiver, and our assumptions about Eve. For each parameter, we examine how different values impact the secure key rate for a constant brightness. Additionally, we optimize the brightness of the source for each parameter to study the improvement in the secure key rate.

  20. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is mostly performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to visualise, along with an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each combination. Finally, a slider is available to change the color mapping of the points. In effect, the slider provides a threshold to exclude non-behavioural parameter sets, and the color scale is only attributed to the remaining parameter sets. By interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
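
    A minimal sketch of such an interactive response-surface explorer in a Jupyter/IPython notebook with ipywidgets is shown below; the synthetic simulations, parameter sets and RMSE objective are illustrative assumptions, not the tool described in the abstract.

    ```python
    # Interactive 2-D parameter response surface with a behavioural threshold slider.
    import numpy as np
    import matplotlib.pyplot as plt
    from ipywidgets import interact, FloatSlider

    rng = np.random.default_rng(0)
    params = rng.uniform([0.1, 1.0], [1.0, 10.0], size=(500, 2))  # loaded parameter sets
    tgrid = np.linspace(0, 6, 100)
    obs = np.sin(tgrid)                                           # observed series
    sims = np.array([p[0] * np.sin(tgrid * p[1] / 5) for p in params])

    scores = np.sqrt(((sims - obs) ** 2).mean(axis=1))            # RMSE per simulation

    @interact(threshold=FloatSlider(min=float(scores.min()), max=float(scores.max()),
                                    step=0.01, value=float(scores.max())))
    def response_surface(threshold):
        keep = scores <= threshold                                # behavioural sets only
        plt.scatter(params[keep, 0], params[keep, 1], c=scores[keep], cmap="viridis")
        plt.colorbar(label="RMSE")
        plt.xlabel("parameter 1")
        plt.ylabel("parameter 2")
        plt.title(f"{int(keep.sum())} behavioural parameter sets")
    ```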

  1. Ion selection of charge-modified large nanopores in a graphene sheet

    NASA Astrophysics Data System (ADS)

    Zhao, Shijun; Xue, Jianming; Kang, Wei

    2013-09-01

    Water desalination is becoming an increasingly important approach for clean water supply to meet the rapidly growing demand from population growth, industrialization, and urbanization. The main challenge in current desalination technologies lies in the reduction of energy consumption and economic costs. Here, we propose to use charged nanopores drilled in a graphene sheet as ion exchange membranes to improve the efficiency and capacity of desalination systems. Using molecular dynamics simulations, we investigate the selective ion transport behavior of an electric-field-driven KCl electrolyte solution through charge-modified graphene nanopores. Our results reveal that the presence of negative charges at the edge of a graphene nanopore can remarkably impede the passage of Cl- while enhancing the transport of K+, which is an indication of ion selectivity for electrolytes. We further demonstrate that this selectivity depends on the pore size and the total charge number assigned at the nanopore edge. By adjusting the nanopore diameter and the electric charge on the graphene nanopore, a nearly complete rejection of Cl- can be realized. The electrical resistance of nanoporous graphene, which is a key parameter for evaluating the performance of ion exchange membranes, is found to be two orders of magnitude lower than that of commercially used membranes. Our results thus suggest that graphene nanopores are promising candidates for use in electrodialysis technology for water desalination with high permselectivity.

  2. Model of the best-of-N nest-site selection process in honeybees.

    PubMed

    Reina, Andreagiovanni; Marshall, James A R; Trianni, Vito; Bose, Thomas

    2017-05-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.

  3. Model of the best-of-N nest-site selection process in honeybees

    NASA Astrophysics Data System (ADS)

    Reina, Andreagiovanni; Marshall, James A. R.; Trianni, Vito; Bose, Thomas

    2017-05-01

    The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.

  4. Effect of medium pH on chemical selectivity of oxalic acid biosynthesis by Aspergillus niger W78C in submerged batch cultures with sucrose as a carbon source.

    PubMed

    Walaszczyk, Ewa; Podgórski, Waldemar; Janczar-Smuga, Małgorzata; Dymarska, Ewelina

    2018-01-01

    The pH of the medium is the key environmental parameter governing the chemical selectivity of oxalic acid biosynthesis by Aspergillus niger. The activity of the enzyme oxaloacetate hydrolase, which is responsible for decomposition of oxaloacetate to oxalate and acetate inside the fungal cell, is highest at pH 6. In the present study, the influence of pH in the range of 3-7 on oxalic acid secretion by A. niger W78C from sucrose was investigated. The highest oxalic acid concentration, 64.3 g dm-3, was reached in the medium with pH 6. The chemical selectivity of the process was 58.6% because of the presence of citric and gluconic acids in the cultivation broth in the amounts of 15.3 and 30.2 g dm-3, respectively. Both an increase and a decrease of medium pH caused a decrease of oxalic acid concentration. The obtained results confirm that pH 6 of the carbohydrate medium is appropriate for oxalic acid synthesis by A. niger, and the chemical selectivity of the process described in this paper was high in comparison to values reported previously in the literature.

  5. Research on filter’s parameter selection based on PROMETHEE method

    NASA Astrophysics Data System (ADS)

    Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan

    2018-03-01

    The selection of filter parameters in target recognition was studied in this paper. The PROMETHEE method was applied to the decision problem of optimizing Gabor filter parameters, and a correspondence model relating the elements of the two methods was established. Taking the identification of a military target as an example, the filter parameter decision problem was simulated and solved with PROMETHEE. The results showed that using the PROMETHEE method to select filter parameters is more systematic, avoiding the subjective bias introduced by expert-judgment and empirical methods. The method can serve as a reference for deciding on a filter's parameter configuration scheme.
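
    A minimal PROMETHEE II sketch for ranking candidate filter-parameter configurations follows; the decision matrix, weights, and linear preference thresholds are illustrative assumptions, not values from the paper.

    ```python
    # PROMETHEE II: pairwise preference degrees, outranking flows, net-flow ranking.
    import numpy as np

    A = np.array([[0.82, 0.60, 0.70],       # rows: candidate Gabor configurations
                  [0.75, 0.90, 0.55],       # cols: criteria to maximize
                  [0.88, 0.45, 0.80]])
    w = np.array([0.5, 0.3, 0.2])           # criterion weights (sum to 1)
    q = np.array([0.2, 0.3, 0.2])           # linear preference thresholds

    n = len(A)
    pi = np.zeros((n, n))                   # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            d = A[a] - A[b]                 # criterion-wise advantage of a over b
            pi[a, b] = w @ np.clip(d / q, 0.0, 1.0)  # linear preference function

    phi_plus = pi.sum(axis=1) / (n - 1)     # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)    # negative (entering) flow
    phi = phi_plus - phi_minus              # net flow: higher is better
    print("ranking, best first:", np.argsort(phi)[::-1])
    ```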

  6. Charting the parameter space of the global 21-cm signal

    NASA Astrophysics Data System (ADS)

    Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan

    2017-12-01

    The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.

  7. Impact of Applying Sex Sorted Semen on the Selection Proportion of the Sire of Dams Selection Pathway in a Nucleus Program.

    PubMed

    Joezy-Shekalgorabi, Sahereh

    2017-11-03

    In a nucleus breeding scheme, the sire of dams pathway plays an important role in producing genetic improvement. The selection proportion is the key parameter for predicting selection intensity through truncation of the normal distribution. Semen sexing using flow cytometry reduces the number of vials of sperm that can be obtained from a proven bull. In addition, a lower fertility of this kind of sperm is expected because of the lower sperm dosage in sex-sorted semen. Both of these factors could affect the selection proportion in the sire of dams pathway (pSD). In the current study, through a deterministic simulation, the effect of utilizing sex-sorted semen on pSD was investigated under three different strategies: (1) continuous use of sex-sorted semen in heifers (CS); (2) use of sex-sorted semen for the first two inseminations (S2); and (3) use for the first insemination only (S1), followed by conventional semen. Results indicated that the use of sex-sorted semen has a negative impact on the sire of dams (SD) pathway due to an increase in selection proportion. Consequently, selection intensity was decreased by 10.24 to 20.57, 6.38 to 8.87 and 3.76 to 6.25 percent in the CS, S2 and S1 strategies, respectively. Considering the small effect of sexed semen on genetic improvement in dam pathways, it is necessary to consider the joint effect of using sex-sorted semen on the sire and dam pathways to estimate the real effect of sexed semen on genetic improvement in a nucleus breeding scheme.
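
    A small illustrative calculation (the standard quantitative-genetics relation, not code from the paper) shows how selection intensity i follows from selection proportion p under normal-distribution truncation: i = phi(z) / p with z = Phi^-1(1 - p).

    ```python
    # Selection intensity from selection proportion via truncated normal.
    from scipy.stats import norm

    def selection_intensity(p):
        z = norm.ppf(1.0 - p)        # truncation point keeping the top fraction p
        return norm.pdf(z) / p       # mean of the selected tail, in SD units

    for p in (0.01, 0.05, 0.10, 0.20):
        print(f"selected proportion {p:.2f} -> intensity {selection_intensity(p):.2f}")
    ```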

  8. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate these uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model; the energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affects the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters, whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ the active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
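
    A minimal random-walk Metropolis sketch with adaptive proposal covariance, in the spirit of the DRAM family discussed above (but without the delayed-rejection stage), is shown below; the Gaussian log-posterior is an illustrative stand-in for the HIV or heat-model posterior.

    ```python
    # Adaptive Metropolis: random-walk MCMC whose proposal covariance is
    # periodically re-estimated from the chain history (Haario-style scaling).
    import numpy as np

    rng = np.random.default_rng(0)

    def log_post(theta):                          # stand-in log-posterior
        return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2)

    n, d = 20000, 2
    chain = np.zeros((n, d))
    cov = 0.1 * np.eye(d)                         # initial proposal covariance
    lp = log_post(chain[0])
    for k in range(1, n):
        prop = rng.multivariate_normal(chain[k - 1], cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            chain[k], lp = prop, lp_prop
        else:
            chain[k] = chain[k - 1]
        if k > 1000 and k % 500 == 0:             # adapt from the chain history
            cov = (2.38 ** 2 / d) * np.cov(chain[:k].T) + 1e-8 * np.eye(d)

    print("posterior mean estimate:", chain[n // 2:].mean(axis=0))
    ```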

  9. Global parameter estimation for thermodynamic models of transcriptional regulation.

    PubMed

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription for regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
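
    A minimal sketch of the local-versus-global comparison above on a multimodal test function follows: scipy's Nelder-Mead simplex versus CMA-ES from the third-party `cma` package (pip install cma, assumed installed); the Rastrigin objective is a stand-in for the model-fit landscape.

    ```python
    # Local simplex search vs. global CMA-ES on a multimodal objective.
    import numpy as np
    from scipy.optimize import minimize
    import cma

    def rastrigin(x):
        x = np.asarray(x)
        return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

    x0 = np.full(4, 3.0)

    local = minimize(rastrigin, x0, method="Nelder-Mead")       # local simplex
    xbest, es = cma.fmin2(rastrigin, x0, 2.0, {"verbose": -9})  # global CMA-ES

    print("Nelder-Mead:", local.fun, " CMA-ES:", rastrigin(xbest))
    ```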

  10. Selective laser melting of Ni-rich NiTi: selection of process parameters and the superelastic response

    NASA Astrophysics Data System (ADS)

    Shayesteh Moghaddam, Narges; Saedi, Soheil; Amerinatanzi, Amirhesam; Saghaian, Ehsan; Jahadakbar, Ahmadreza; Karaca, Haluk; Elahinia, Mohammad

    2018-03-01

    Material and mechanical properties of NiTi shape memory alloys strongly depend on the fabrication process parameters and the resulting microstructure. In selective laser melting, the combination of parameters such as laser power, scanning speed, and hatch spacing determines the microstructural defects, grain size and texture. Therefore, processing parameters can be adjusted to tailor the microstructure and mechanical response of the alloy. In this work, NiTi samples were fabricated from Ni50.8Ti (at.%) powder via SLM (PXM by Phenix/3D Systems), and the effects of processing parameters were systematically studied. The relationship between the processing parameters and the superelastic properties was investigated thoroughly. It is shown that energy density is not the only parameter that governs the material response; hatch spacing is the dominant factor for tailoring the superelastic response. With the selection of the right process parameters, perfect superelasticity with recoverable strains of up to 5.6% can be observed in the as-fabricated condition.
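
    A small illustrative calculation of the standard SLM volumetric energy density relation E = P / (v * h * t) follows. The first two parameter sets below give the same E with different power and speed, which is one way to see why E alone cannot govern the response; all values are assumptions, not the paper's settings.

    ```python
    # Volumetric energy density for selective laser melting parameter sets.
    def energy_density(P, v, h, t):
        """P: laser power [W]; v: scan speed [mm/s]; h: hatch spacing [mm];
        t: layer thickness [mm]; returns E in J/mm^3."""
        return P / (v * h * t)

    t = 0.03                                        # 30 um layer thickness
    for P, v, h in [(250, 1250, 0.08), (100, 500, 0.08), (250, 1250, 0.12)]:
        E = energy_density(P, v, h, t)
        print(f"P={P} W, v={v} mm/s, h={h} mm -> E={E:.1f} J/mm^3")
    ```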

  11. BATMAN: a DMD-based MOS demonstrator on Galileo Telescope

    NASA Astrophysics Data System (ADS)

    Zamkotsian, Frédéric; Spanò, Paolo; Bon, William; Riva, Marco; Lanzoni, Patrick; Nicastro, Luciano; Molinari, Emilio; Cosentino, Rosario; Ghedina, Adriano; Gonzalez, Manuel; Di Marcantonio, Paolo; Coretti, Igor; Cirami, Roberto; Manetta, Marco; Zerbi, Filippo; Tresoldi, Daniela; Valenziano, Luca

    2012-09-01

    Multi-Object Spectrographs (MOS) are the major instruments for studying primary galaxies and remote and faint objects. Current object selection systems are limited and/or difficult to implement in next-generation MOS for space and ground-based telescopes. A promising solution is the use of MOEMS devices such as micromirror arrays, which allow remote control of the multi-slit configuration in real time. We are developing a Digital Micromirror Device (DMD)-based spectrograph demonstrator called BATMAN. We want to access the largest FOV with the highest contrast. The selected component is a DMD chip from Texas Instruments in a 2048 x 1080 mirror format, with a pitch of 13.68 μm. Our optical design is an all-reflective spectrograph design with F/4 on the DMD component. This demonstrator permits the study of key parameters such as throughput, contrast, the ability to remove unwanted sources in the FOV (background, spoiler sources), PSF effects, and new observational modes. This study will be conducted in the visible, with a possible extension into the IR. A breadboard on an optical bench, ROBIN, has been developed for a preliminary determination of these parameters. The demonstrator on the sky is then of prime importance for characterizing the actual performance of this new family of instruments, as well as for investigating operational procedures on astronomical objects. BATMAN will be placed at the Nasmyth focus of the Telescopio Nazionale Galileo (TNG) next year.

  12. Key factors regulating protein carbonylation by α,β unsaturated carbonyls: A structural study based on a retrospective meta-analysis.

    PubMed

    Vistoli, Giulio; Mantovani, Chiara; Gervasoni, Silvia; Pedretti, Alessandro; Aldini, Giancarlo

    2017-11-01

    Protein carbonylation represents one of the most important oxidative modifications involving nucleophilic amino acids and affecting protein folding and function. Protein carbonylation is induced by electrophilic carbonyl species and is a highly selective process, since few nucleophilic residues are carbonylated within each protein. Despite the great interest in protein carbonylation, few studies have investigated the factors which render a nucleophilic residue susceptible to carbonylation. Hence, the present study aims to delve into the factors which modulate the reactivity of cysteine, histidine and lysine residues towards α,β-unsaturated carbonyls by a retrospective analysis of the available studies which identified the adducted residues for proteins whose structure was resolved. The analysis involved different parameters including exposure, nucleophilicity, surrounding residues and capacity to attract carbonyl species (as derived by docking simulations). The obtained results allowed a meaningful clustering of the analyzed proteins, suggesting that on average carbonylation selectivity increases with protein size. The comparison between adducted and unreactive residues revealed differences in all monitored parameters which are markedly more pronounced for cysteines compared to lysines and histidines. Overall, these results suggest that cysteine carbonylation is a finely (and reasonably purposely) modulated process, while the carbonylation of lysines and histidines seems to be a fairly random event in which limited differences influence their reactivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Application of target costing in machining

    NASA Astrophysics Data System (ADS)

    Gopalakrishnan, Bhaskaran; Kokatnur, Ameet; Gupta, Deepak P.

    2004-11-01

    In today's intensely competitive and highly volatile business environment, consistent development of low-cost, high-quality products that meet functionality requirements is key to a company's survival. Companies continuously strive to reduce costs while still producing quality products to stay ahead of the competition. Many companies have turned to target costing to achieve this objective. Target costing is a structured approach to determine the cost at which a proposed product, meeting the quality and functionality requirements, must be produced in order to generate the desired profits. It subtracts the desired profit margin from the company's selling price to establish the manufacturing cost of the product. An extensive literature review revealed that companies in the automotive, electronics and process industries have reaped the benefits of target costing. However, the target costing approach has not been applied in the machining industry; instead, techniques based on Geometric Programming, Goal Programming, and Lagrange Multipliers have been proposed for this industry. These models follow a forward approach, first selecting a set of machining parameters and then determining the machining cost. Hence, in this study we developed an algorithm to apply the concepts of target costing, which is a backward approach that selects the machining parameters based on the required machining cost and is therefore more suitable for practical applications in process improvement and cost reduction. A target costing model was developed for the turning operation and was successfully validated using practical data.
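
    A minimal sketch of the backward, target-costing direction described above: derive the allowable machining cost from price and margin, then search candidate cutting parameters for combinations that meet it. The cost model and parameter grid are illustrative assumptions, not the paper's model.

    ```python
    # Backward target costing: allowable cost first, then feasible parameters.
    import itertools

    price, margin = 40.0, 0.25
    target_cost = price * (1 - margin)          # allowable unit machining cost

    def machining_cost(speed, feed):
        # hypothetical cost model: machine-time cost plus tool-wear cost
        time_cost = 600.0 / (speed * feed)      # faster cutting -> less machine time
        tool_cost = 2e-6 * speed ** 2.5 * feed  # faster cutting -> more tool wear
        return time_cost + tool_cost

    grid = itertools.product(range(100, 401, 25),             # speed [m/min]
                             [0.10, 0.15, 0.20, 0.25, 0.30])  # feed [mm/rev]
    feasible = [(s, f) for s, f in grid if machining_cost(s, f) <= target_cost]
    print(f"target cost {target_cost:.2f}: {len(feasible)} feasible parameter sets")
    ```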

  14. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE PAGES

    Ilas, Germina; Liljenfeldt, Henrik

    2017-05-19

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  15. Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ilas, Germina; Liljenfeldt, Henrik

    Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.

  16. Field spectrometer (S191H) preprocessor tape quality test program design document

    NASA Technical Reports Server (NTRS)

    Campbell, H. M.

    1976-01-01

    Program QA191H performs quality assurance tests on field spectrometer data recorded on 9-track magnetic tape. The quality testing involves the comparison of key housekeeping and data parameters with historic and predetermined tolerance limits. Samples of key parameters are processed during the calibration period and wavelength cal period, and the results are printed out and recorded on an historical file tape.
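
    A minimal sketch of this kind of tolerance-limit screening follows: sampled housekeeping parameters are compared against predetermined limits. The parameter names, limits and samples are illustrative assumptions, not QA191H's actual checks.

    ```python
    # Tolerance-limit quality check over sampled housekeeping parameters.
    import numpy as np

    limits = {"detector_temp_K": (70.0, 90.0),       # predetermined tolerance limits
              "lamp_current_mA": (180.0, 220.0)}
    samples = {"detector_temp_K": np.array([78.2, 79.1, 91.3]),
               "lamp_current_mA": np.array([201.0, 199.5, 200.2])}

    for name, (lo, hi) in limits.items():
        bad = (samples[name] < lo) | (samples[name] > hi)
        status = "FAIL" if bad.any() else "PASS"
        print(f"{name}: {status} ({int(bad.sum())} of {bad.size} out of tolerance)")
    ```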

  17. On the use of published radiobiological parameters and the evaluation of NTCP models regarding lung pneumonitis in clinical breast radiotherapy.

    PubMed

    Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki

    2011-04-01

    In this study we sought to evaluate and underscore the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the sets published in the literature, and a theoretical mean parameter set was computed. In order to investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models over their ranges, a large area of coincidence was revealed. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence might be enlarged, further constraining their predictive ability. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definitive but should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provide a reasonably accurate fit of the NTCP models to the considered endpoint.
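
    A minimal sketch of the LKB model named above follows: gEUD is computed from a differential dose-volume histogram, then NTCP via the probit formula. The DVH and the (n, m, TD50) values are illustrative; published lung-pneumonitis parameter sets vary, which is exactly the issue the study examines.

    ```python
    # LKB NTCP: generalized EUD from a DVH, then the probit dose-response.
    import numpy as np
    from scipy.stats import norm

    def gEUD(doses, volumes, n):
        """Generalized equivalent uniform dose from a differential DVH."""
        v = np.asarray(volumes, float) / np.sum(volumes)
        return np.sum(v * np.asarray(doses, float) ** (1.0 / n)) ** n

    def ntcp_lkb(doses, volumes, n, m, TD50):
        t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
        return norm.cdf(t)

    doses = [5, 10, 15, 20, 25]           # Gy, bin centers of a toy lung DVH
    volumes = [30, 25, 20, 15, 10]        # relative volume per dose bin
    print(f"NTCP = {ntcp_lkb(doses, volumes, n=1.0, m=0.35, TD50=30.0):.3f}")
    ```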

  18. Identification of atypical flight patterns

    NASA Technical Reports Server (NTRS)

    Statler, Irving C. (Inventor); Ferryman, Thomas A. (Inventor); Amidan, Brett G. (Inventor); Whitney, Paul D. (Inventor); White, Amanda M. (Inventor); Willse, Alan R. (Inventor); Cooley, Scott K. (Inventor); Jay, Joseph Griffith (Inventor); Lawrence, Robert E. (Inventor); Mosbrucker, Chris (Inventor)

    2005-01-01

    Method and system for analyzing aircraft data, including multiple selected flight parameters for a selected phase of a selected flight, and for determining when the selected phase of the selected flight is atypical, when compared with corresponding data for the same phase for other similar flights. A flight signature is computed using continuous-valued and discrete-valued flight parameters for the selected flight parameters and is optionally compared with a statistical distribution of other observed flight signatures, yielding atypicality scores for the same phase for other similar flights. A cluster analysis is optionally applied to the flight signatures to define an optimal collection of clusters. A level of atypicality for a selected flight is estimated, based upon an index associated with the cluster analysis.
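
    As a hedged illustration of signature-based atypicality scoring (not the patented signature construction), the sketch below scores one flight signature against a population of signatures from similar flights using Mahalanobis distance; the feature vectors are illustrative assumptions.

    ```python
    # Mahalanobis-distance atypicality score against a population of signatures.
    import numpy as np

    rng = np.random.default_rng(0)
    signatures = rng.normal(size=(400, 6))        # one row per comparable flight
    mu = signatures.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(signatures.T))

    def atypicality(sig):
        d = sig - mu
        return float(np.sqrt(d @ cov_inv @ d))    # Mahalanobis distance

    flight = rng.normal(size=6) + np.array([0.0, 0.0, 3.0, 0.0, 0.0, 0.0])
    scores = np.array([atypicality(s) for s in signatures])
    print("flight score:", round(atypicality(flight), 2),
          "| population 99th percentile:", round(float(np.percentile(scores, 99)), 2))
    ```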

  19. Using Predictive Uncertainty Analysis to Assess Hydrologic Model Performance for a Watershed in Oregon

    NASA Astrophysics Data System (ADS)

    Brannan, K. M.; Somor, A.

    2016-12-01

    A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. In order to reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis: we set aside flow data that occurred on days when bacteria samples were collected and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
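
    A minimal sketch of the ensemble step described above follows: draw many parameter sets from a multivariate normal that respects the estimated parameter covariance, run the model for each, and report percent uncertainty per observation. The simple decay "model" is an illustrative stand-in for HSPF/PEST.

    ```python
    # Monte Carlo predictive uncertainty from a parameter covariance estimate.
    import numpy as np

    rng = np.random.default_rng(0)
    theta_hat = np.array([1.2, 0.4])                     # calibrated parameters
    C = np.array([[0.04, 0.01], [0.01, 0.02]])           # parameter covariance
    thetas = rng.multivariate_normal(theta_hat, C, size=1000)

    t = np.linspace(0, 10, 60)
    def model(th):                                       # stand-in flow model
        return th[0] * np.exp(-th[1] * t) + 0.2

    ens = np.array([model(th) for th in thetas])         # 1,000 simulated series
    lo, hi = np.percentile(ens, [2.5, 97.5], axis=0)     # 95% predictive band
    pct_unc = 100 * (hi - lo) / ens.mean(axis=0)         # percent uncertainty per obs
    print("median percent uncertainty:", round(float(np.median(pct_unc)), 1))
    ```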

  20. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm.

    PubMed

    Mao, Yong; Zhou, Xiao-Bo; Pi, Dao-Ying; Sun, You-Xian; Wong, Stephen T C

    2005-10-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and small number of samples, as well as the non-linearity of the problem. It is difficult to obtain satisfying results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method to select the parameters of this algorithm implemented with Gaussian-kernel SVMs: a genetic algorithm is used to search for an optimal parameter pair, as a better alternative to the common practice of selecting the apparently best parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative hereditary breast cancer and acute leukaemia datasets. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracies with these genes.
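
    A minimal mutation-only genetic search over log10(C) and log10(gamma) for a Gaussian-kernel SVM, scored by cross-validated accuracy, is sketched below; this is an illustrative simplification of the paper's genetic algorithm, and the dataset and GA settings are assumptions.

    ```python
    # Evolutionary search for Gaussian-kernel SVM hyper-parameters (C, gamma).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = load_breast_cancer(return_X_y=True)
    LOW, HIGH = np.array([-2.0, -6.0]), np.array([4.0, 0.0])  # log10 bounds

    def fitness(g):                            # g = (log10 C, log10 gamma)
        clf = SVC(C=10 ** g[0], gamma=10 ** g[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    pop = rng.uniform(LOW, HIGH, size=(20, 2))
    for gen in range(10):
        fit = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(fit)[-10:]]                      # truncation selection
        children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.3, (10, 2))
        pop = np.vstack([parents, np.clip(children, LOW, HIGH)])  # Gaussian mutation

    best = pop[np.argmax([fitness(g) for g in pop])]
    print(f"best C = {10 ** best[0]:.3g}, gamma = {10 ** best[1]:.3g}")
    ```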

  1. Sequential weighted Wiener estimation for extraction of key tissue parameters in color imaging: a phantom study

    NASA Astrophysics Data System (ADS)

    Chen, Shuo; Lin, Xiaoqian; Zhu, Caigang; Liu, Quan

    2014-12-01

    Key tissue parameters, e.g., total hemoglobin concentration and tissue oxygenation, are important biomarkers in clinical diagnosis of various diseases. Although point measurement techniques based on diffuse reflectance spectroscopy can accurately recover these tissue parameters, they are not suitable for examining a large tissue region due to slow data acquisition. Previous imaging studies have shown that hemoglobin concentration and oxygenation can be estimated from color measurements under the assumption of known scattering properties, which is impractical in clinical applications. To overcome this limitation and speed up image processing, we propose a method of sequential weighted Wiener estimation (WE) to quickly extract key tissue parameters, including total hemoglobin concentration (CtHb), hemoglobin oxygenation (StO2), scatterer density (α), and scattering power (β), from wide-band color measurements. This method takes advantage of the fact that each parameter is sensitive to the color measurements in a different way and attempts to maximize the contribution of those color measurements likely to generate correct results in WE. The method was evaluated on skin phantoms with varying CtHb, StO2, and scattering properties. The results demonstrate excellent agreement between the estimated tissue parameters and the corresponding reference values. Compared with traditional WE, sequential weighted WE shows significant improvement in estimation accuracy. This method could be used to monitor tissue parameters in an imaging setup in real time.
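
    A minimal linear Wiener-estimation sketch follows: learn W = C_xy C_yy^{-1} from training pairs of tissue parameters x and color measurements y, then estimate x for a new measurement. The linear forward model is an illustrative stand-in for the phantom optics, and the sequential weighting step of the paper is omitted.

    ```python
    # Plain (unweighted) Wiener estimation of parameters from color measurements.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train = 2000
    x = rng.uniform([0.0, 0.4], [1.0, 1.0], size=(n_train, 2))  # e.g. scaled CtHb, StO2
    A = rng.normal(size=(2, 6))                                 # toy forward model
    y = x @ A + 0.01 * rng.normal(size=(n_train, 6))            # 6-band "color" readings

    xm, ym = x.mean(axis=0), y.mean(axis=0)
    Cxy = (x - xm).T @ (y - ym) / n_train
    Cyy = (y - ym).T @ (y - ym) / n_train
    W = Cxy @ np.linalg.inv(Cyy)                                # Wiener matrix

    x_new = np.array([0.6, 0.8])
    print("estimate:", xm + W @ (x_new @ A - ym), "truth:", x_new)
    ```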

  2. Rules of thumb for superfund remedy selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-08-01

    The guidance document describes key principles and expectations, interspersed with "best practices" based on program experience, that should be consulted during the Superfund remedy selection process. These remedy selection "Rules of Thumb" are organized into three major policy areas: (1) risk assessment and risk management, (2) developing remedial alternatives, and (3) ground-water response actions. The purpose of this guide is to briefly summarize key elements of various remedy selection guidance documents and policies in one publication.

  3. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2016-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.
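
    For reference, the Breguet range equation underlying the break-even analysis has the standard textbook form (not reproduced from the paper), where V is cruise speed, TSFC the thrust-specific fuel consumption, L/D the lift-to-drag ratio, and W_i, W_f the initial and final weights:

    ```latex
    R = \frac{V}{\mathrm{TSFC}} \, \frac{L}{D} \, \ln\frac{W_i}{W_f}
    ```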

  4. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this presentation is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  5. Turboelectric Aircraft Drive Key Performance Parameters and Functional Requirements

    NASA Technical Reports Server (NTRS)

    Jansen, Ralph H.; Brown, Gerald V.; Felder, James L.; Duffy, Kirsten P.

    2015-01-01

    The purpose of this paper is to propose specific power and efficiency as the key performance parameters for a turboelectric aircraft power system and investigate their impact on the overall aircraft. Key functional requirements are identified that impact the power system design. Breguet range equations for a base aircraft and a turboelectric aircraft are found. The benefits and costs that may result from the turboelectric system are enumerated. A break-even analysis is conducted to find the minimum allowable electric drive specific power and efficiency that can preserve the range, initial weight, operating empty weight, and payload weight of the base aircraft.

  6. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides a goodness-of-fit, metric-based interactive framework for identification of a small subset (typically fewer than 10) of meaningful and diverse calibration alternatives from the numerous alternatives obtained in Stage 1. Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to calibration of the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
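
    A minimal sketch of the non-domination idea behind Stage 1's set of calibration alternatives, assuming all objectives are to be minimized; GOMORS itself is a surrogate-assisted evolutionary algorithm and is not reproduced here.

    ```python
    # Toy non-domination filter: identify Pareto-optimal calibrations from a
    # matrix of objective values (rows = candidate parameter sets, columns =
    # calibration criteria to minimize).
    import numpy as np

    def pareto_front(F):
        keep = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            if not keep[i]:
                continue
            # j dominates i if j is <= in every objective and < in at least one
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if dominated.any():
                keep[i] = False
        return np.where(keep)[0]

    F = np.random.default_rng(1).random((50, 2))   # 50 candidates, 2 criteria
    print(pareto_front(F))                         # indices of non-dominated sets
    ```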

  7. Active systems based on silver-montmorillonite nanoparticles embedded into bio-based polymer matrices for packaging applications.

    PubMed

    Incoronato, A L; Buonocore, G G; Conte, A; Lavorgna, M; Nobile, M A Del

    2010-12-01

    Silver-montmorillonite (Ag-MMT) antimicrobial nanoparticles were obtained by allowing silver ions from nitrate solutions to replace the Na(+) of natural montmorillonite and to be reduced by thermal treatment. The Ag-MMT nanoparticles were embedded in agar, zein, and poly(ε-caprolactone) polymer matrices. These nanocomposites were tested in vitro with a three-strain cocktail of Pseudomonas spp. to assess antimicrobial effectiveness. The results indicate that Ag-MMT nanoparticles embedded into agar may have antimicrobial activity against selected spoilage microorganisms. No antimicrobial effects were recorded with active zein and poly(ε-caprolactone). The water content of the polymeric matrix was the key parameter associated with antimicrobial effectiveness of this active system intended for food packaging applications.

  8. Parametric Investigation of Liquid Jets in Low Gravity

    NASA Technical Reports Server (NTRS)

    Chato, David J.

    2005-01-01

    An axisymmetric phase field model is developed and used to model surface tension forces on liquid jets in microgravity. Previous work in this area is reviewed and a baseline drop tower experiment is selected for model comparison. This paper uses the model to parametrically investigate the influence of key parameters on the geysers formed by jets in microgravity. Investigation of the contact angle showed the expected trend that increasing contact angle increases geyser height. Investigation of the tank radius showed some interesting effects and demonstrated that the zone of free surface deformation is quite large. Variation of the surface tension with a laminar jet clearly showed the evolution of free surface shape with Weber number, and predicted a breakthrough Weber number of 1.
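
    The Weber number referenced here is the standard ratio of inertial to surface-tension forces; taking the jet radius r as the characteristic length is an assumption of this note, not stated in the abstract:

    ```latex
    \mathrm{We} = \frac{\rho v^2 r}{\sigma}
    ```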

  9. 75 FR 28805 - Science Advisory Board Staff Office; Notification of a Public Teleconference and Public Meeting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ... selection of key data sets for analysis; and (3) transparency, thoroughness, and clarity in quantitative... being asked to evaluate: The transparency and clarity in the selection of key data sets for dose...

  10. Pattern-oriented modelling: a ‘multi-scope’ for predictive systems ecology

    PubMed Central

    Grimm, Volker; Railsback, Steven F.

    2012-01-01

    Modern ecology recognizes that modelling systems across scales and at multiple levels—especially to link population and ecosystem dynamics to individual adaptive behaviour—is essential for making the science predictive. ‘Pattern-oriented modelling’ (POM) is a strategy for doing just this. POM is the multi-criteria design, selection and calibration of models of complex systems. POM starts with identifying a set of patterns observed at multiple scales and levels that characterize a system with respect to the particular problem being modelled; a model from which the patterns emerge should contain the right mechanisms to address the problem. These patterns are then used to (i) determine what scales, entities, variables and processes the model needs, (ii) test and select submodels to represent key low-level processes such as adaptive behaviour, and (iii) find useful parameter values during calibration. Patterns are already often used in these ways, but a mini-review of applications of POM confirms that making the selection and use of patterns more explicit and rigorous can facilitate the development of models with the right level of complexity to understand ecological systems and predict their response to novel conditions. PMID:22144392

  11. Random Time Identity Based Firewall In Mobile Ad hoc Networks

    NASA Astrophysics Data System (ADS)

    Suman, Patel, R. B.; Singh, Parvinder

    2010-11-01

    A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links. MANETs are highly flexible and adaptable but at the same time are highly prone to security risks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized control. A firewall is an effective means of protecting a local network from network-based security threats and forms a key component of MANET security architecture. This paper presents a review of firewall implementation techniques in MANETs and their relative merits and demerits. A new approach is proposed to select MANET nodes at random for firewall implementation. This approach randomly selects a new node as the firewall after a fixed time, based on critical values of certain parameters such as power backup. It effectively balances power and resource utilization across the entire MANET because the responsibility of implementing the firewall is shared equally among all nodes. At the same time it ensures improved security for MANETs against outside attacks, as an intruder will not be able to find the entry point into the MANET due to the random selection of nodes for firewall implementation.
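
    A hypothetical sketch of the rotation step: after each fixed interval, pick a new firewall node at random from nodes whose power backup exceeds a critical threshold. The names and the threshold value are illustrative assumptions, not the paper's protocol.

    ```python
    # Random firewall-node selection gated by a critical power-backup value.
    import random

    CRITICAL_POWER = 0.3   # assumed minimum battery fraction (illustrative)

    def pick_firewall_node(nodes):
        """nodes: dict of node_id -> remaining battery fraction (0..1)."""
        eligible = [n for n, power in nodes.items() if power > CRITICAL_POWER]
        return random.choice(eligible) if eligible else None

    nodes = {"A": 0.9, "B": 0.2, "C": 0.6, "D": 0.75}
    print(pick_firewall_node(nodes))  # responsibility rotates at random
    ```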

  12. Gestation-Specific Changes in the Anatomy and Physiology of Healthy Pregnant Women: An Extended Repository of Model Parameters for Physiologically Based Pharmacokinetic Modeling in Pregnancy.

    PubMed

    Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg

    2017-11-01

    In the past years, several repositories of anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While providing a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters or lacked key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best-performing functions were selected based on numerical and visual diagnostics as well as on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data points. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models. Such models can ultimately provide a valuable tool to investigate pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.

  13. The nearly neutral and selection theories of molecular evolution under the fisher geometrical framework: substitution rate, population size, and complexity.

    PubMed

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A

    2012-06-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models.

  14. The Nearly Neutral and Selection Theories of Molecular Evolution Under the Fisher Geometrical Framework: Substitution Rate, Population Size, and Complexity

    PubMed Central

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A.

    2012-01-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population’s phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models. PMID:22426879

  15. Channel-parameter estimation for satellite-to-submarine continuous-variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Xie, Cailang; Huang, Peng; Li, Jiawei; Zhang, Ling; Huang, Duan; Zeng, Guihua

    2018-05-01

    This paper deals with channel-parameter estimation for continuous-variable quantum key distribution (CV-QKD) over a satellite-to-submarine link. In particular, we focus on the channel transmittances and the excess noise, which are affected by atmospheric turbulence, surface roughness, zenith angle of the satellite, wind speed, submarine depth, etc. The estimation method is based on the proposed algorithms and is applied to low-Earth orbits using the Monte Carlo approach. For light at 550 nm with a repetition frequency of 1 MHz, the effects of the estimated parameters on the performance of the CV-QKD system are assessed in simulation by comparing the secret key bit rate in the daytime and at night. Our results show the feasibility of satellite-to-submarine CV-QKD, providing an unconditionally secure approach to achieving global networks for underwater communications.
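
    A minimal Monte Carlo sketch of transmittance estimation under channel fading, in the spirit of the paper's approach; the log-normal fading model and all numerical values are assumptions of this sketch, not the paper's channel model.

    ```python
    # Monte Carlo estimate of the mean and variance of a fluctuating channel
    # transmittance T, modeled here (as an assumption) as log-normal fading
    # around an assumed mean path transmittance T0.
    import numpy as np

    rng = np.random.default_rng(42)
    sigma = 0.3    # assumed turbulence-induced log-amplitude standard deviation
    T0 = 0.05      # assumed mean path transmittance

    # The -sigma^2/2 offset makes the fading factor have unit mean.
    samples = T0 * rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=100_000)
    print("mean T:", samples.mean(), " var T:", samples.var())
    ```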

  16. Automatic rocks detection and classification on high resolution images of planetary surfaces

    NASA Astrophysics Data System (ADS)

    Aboudan, A.; Pacifici, A.; Murana, A.; Cannarsa, F.; Ori, G. G.; Dell'Arciprete, I.; Allemand, P.; Grandjean, P.; Portigliotti, S.; Marcer, A.; Lorenzoni, L.

    2013-12-01

    High-resolution images can be used to obtain rock locations and sizes on planetary surfaces. In particular, the rock size-frequency distribution is a key parameter for evaluating surface roughness, investigating the geologic processes that formed the surface, and assessing the hazards related to spacecraft landing. The manual search for rocks in high-resolution images (even for small areas) can be very intensive work. An automatic or semi-automatic algorithm to identify rocks is mandatory to enable further processing, such as determining rock presence, size, height (by means of shadows) and spatial distribution over an area of interest. Accurate localization of rock and shadow contours is the key step for rock detection. An approach to contour detection based on morphological operators and statistical thresholding is presented in this work. The identified contours are then fitted using a proper geometric model of the rocks or shadows and used to estimate salient rock parameters (position, size, area, height). The performance of this approach has been evaluated both on images of a Martian analogue area of the Morocco desert and on HiRISE images. Results have been compared with ground truth obtained by means of manual rock mapping and prove the effectiveness of the algorithm. The rock abundance and rock size-frequency distributions derived from selected HiRISE images have been compared with the results of similar analyses performed for the landing site certification of Mars landers (Viking, Pathfinder, MER, MSL) and with the available thermal data from IRTM and TES.
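
    A generic sketch of the threshold-plus-morphology pipeline, using scikit-image; this is a standard pipeline for illustration, not the authors' exact algorithm.

    ```python
    # Statistical thresholding plus morphological clean-up, then per-region
    # geometry for each detected blob ("rock"). Illustrative pipeline only.
    import numpy as np
    from skimage import filters, measure, morphology

    def detect_rocks(image):
        mask = image > filters.threshold_otsu(image)         # statistical threshold
        mask = morphology.opening(mask, morphology.disk(2))  # remove speckle
        rocks = []
        for region in measure.regionprops(measure.label(mask)):
            if region.area < 10:                             # ignore tiny blobs
                continue
            rocks.append({"centroid": region.centroid,
                          "area": region.area,
                          "diameter": 2 * np.sqrt(region.area / np.pi)})
        return rocks

    image = np.random.default_rng(0).random((128, 128))      # toy input image
    print(len(detect_rocks(image)))
    ```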

  17. Lead-acid batteries in micro-hybrid applications. Part I. Selected key parameters

    NASA Astrophysics Data System (ADS)

    Schaeck, S.; Stoermer, A. O.; Kaiser, F.; Koehler, L.; Albers, J.; Kabza, H.

    Micro-hybrid electric vehicles were launched by BMW in March 2007. These are equipped with brake energy regeneration (BER) and the automatic start and stop function (ASSF) of the internal combustion engine. These functions are based on common 14 V series components and lead-acid (LA) batteries. The novelty is given by the intelligent onboard energy management, which upgrades the conventional electric system to the micro-hybrid power system (MHPS). In part I of this publication the key factors for the operation of LA batteries in the MHPS are discussed. For BER in particular, one key factor is high dynamic charge acceptance (DCA) for effective boost charging. Vehicle rest time is identified as a particularly negative parameter for DCA. It can be refreshed by regular full charging at elevated charge voltage. Thus, the batteries have to be outstandingly robust against overcharge and water loss. This can be accomplished for valve-regulated lead-acid (VRLA) batteries, at least if they are mounted in the trunk. ASSF goes along with frequent high-rate loads for warm cranking. The internal resistance determines the drop of the power net voltage during cranking and is preferably low for reasons of power net stability, even after years of operation. Investigations were done with aged 90 Ah VRLA-absorbent glass mat (AGM) batteries. Battery operation at partial state-of-charge gives a higher risk of deep discharging (overdischarging). Subsequent re-charging is then likely to lead to the formation of micro-short circuits in the absorbent glass mat separator.

  18. Nuclear Power System Architecture and Safety Study- Feasibility of Launch Pad Explosion Simulation using Radios

    NASA Astrophysics Data System (ADS)

    Destefanis, Stefano; Tracino, Emanuele; Giraudo, Martina

    2014-06-01

    During a mission involving a spacecraft using nuclear power sources (NPS), the consequences to the population induced by an accident have to be taken into account carefully. Part of the study (led by AREVA, with TAS-I as one of the involved parties) was devoted to "Worst Case Scenario Consolidation". In particular, one of the activities carried out by TAS-I had the aim of characterizing the accidental environment (explosion on the launch pad or during launch) and consolidating the requirements given as input to the study. The resulting requirements became inputs for the Nuclear Power Source container design. To do so, TAS-I first carried out an overview of the available technical literature (mostly developed in the frame of the NASA Mercury/Apollo programs) to identify the key parameters to be used for analytical assessment (blast pressure wave; fragment size, speed and distribution; TNT equivalent of liquid propellant). Then, a simplified Radioss model was set up to verify both the cards needed for blast/fragment impact analysis and the consistency between preliminary results and the available technical literature (Radioss is commonly used to design mine-resistant vehicles by simulating the effect of blasts on structural elements, and it is used in TAS-I for several types of analysis, including land impact, water impact and fluid-structure interaction). The obtained results (albeit produced by a very simplified model) are encouraging, showing that the analytical tool and the selected key parameters represent a step in the right direction.

  19. The art and science of missile defense sensor design

    NASA Astrophysics Data System (ADS)

    McComas, Brian K.

    2014-06-01

    A Missile Defense Sensor is a complex optical system, which sits idle for long periods of time, must work with little or no on-board calibration, and is used to find and discriminate targets and guide the kinetic warhead to the target within minutes of launch. A short overview of the Missile Defense problem will be discussed here, as well as the top-level performance drivers, like Noise Equivalent Irradiance (NEI), Acquisition Range, and Dynamic Range. These top-level parameters influence the choice of optical system, mechanical system, focal plane array (FPA), Read Out Integrated Circuit (ROIC), and cryogenic system. This paper will not only discuss the physics behind the performance of the sensor, but also the "art" of optimizing the performance of the sensor given the top-level performance parameters. Balancing the sensor sub-systems is key to the sensor's performance in these highly stressful missions. Top-level performance requirements impact the choice of lower-level hardware and requirements. The flow down of requirements to the lower-level hardware will be discussed. This flow down directly impacts the FPA, where careful selection of the detector is required. The flow down also influences the ROIC and cooling requirements. The key physics behind the detector and cryogenic system interactions will be discussed, along with the balancing of subsystem performance. Finally, the overall system balance and optimization will be discussed in the context of missile defense sensors and the expected performance of the overall kinetic warhead.

  20. Voluntary wheel running in dystrophin-deficient (mdx) mice: Relationships between exercise parameters and exacerbation of the dystrophic phenotype.

    PubMed

    Smythe, Gayle M; White, Jason D

    2011-12-18

    Voluntary wheel running can potentially be used to exacerbate the disease phenotype in dystrophin-deficient mdx mice. While it has been established that voluntary wheel running is highly variable between individuals, the key parameters of wheel running that impact the most on muscle pathology have not been examined in detail. We conducted a 2-week test of voluntary wheel running by mdx mice and the impact of wheel running on disease pathology. There was significant individual variation in the average daily distance (ranging from 0.003 ± 0.005 km to 4.48 ± 0.96 km), culminating in a wide range (0.040 km to 67.24 km) of total cumulative distances run by individuals. There was also variation in the number and length of run/rest cycles per night, and the average running rate. Correlation analyses demonstrated that in the quadriceps muscle, a low number of high distance run/rest cycles was the most consistent indicator for increased tissue damage. The amount of rest time between running bouts was a key factor associated with gastrocnemius damage. These data emphasize the need for detailed analysis of individual running performance, consideration of the length of wheel exposure time, and the selection of appropriate muscle groups for analysis, when applying the use of voluntary wheel running to disease exacerbation and/or pre-clinical testing of the efficacy of therapeutic agents in the mdx mouse.

  1. Selection of operating parameters on the basis of hydrodynamics in centrifugal partition chromatography for the purification of nybomycin derivatives.

    PubMed

    Adelmann, S; Baldhoff, T; Koepcke, B; Schembecker, G

    2013-01-25

    The selection of solvent systems in centrifugal partition chromatography (CPC) is the most critical point in setting up a separation, and much research has therefore been devoted to this topic in recent decades. However, the selection of suitable operating parameters (mobile phase flow rate, rotational speed and mode of operation) with respect to hydrodynamics and the pressure drop limit in CPC is still mainly driven by the experience of the chromatographer. In this work we used hydrodynamic analysis for the prediction of the most suitable operating parameters. After selection of different solvent systems with respect to partition coefficients for the target compound, the hydrodynamics were visualized. Based on flow pattern and retention, the operating parameters were selected for the purification runs of nybomycin derivatives, which were carried out with a 200 ml FCPC® rotor. The results have proven that the selection of optimized operating parameters by analysis of hydrodynamics alone is possible. As the hydrodynamics are predictable from the physical properties of the solvent system, the optimized operating parameters can be estimated, too. Additionally, we found that dispersion and especially retention are improved if the less viscous phase is mobile. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.

  2. Interrogating selectivity in catalysis using molecular vibrations

    NASA Astrophysics Data System (ADS)

    Milo, Anat; Bess, Elizabeth N.; Sigman, Matthew S.

    2014-03-01

    The delineation of molecular properties that underlie reactivity and selectivity is at the core of physical organic chemistry, and this knowledge can be used to inform the design of improved synthetic methods or identify new chemical transformations. For this reason, the mathematical representation of properties affecting reactivity and selectivity trends, that is, molecular parameters, is paramount. Correlations produced by equating these molecular parameters with experimental outcomes are often defined as free-energy relationships and can be used to evaluate the origin of selectivity and to generate new, experimentally testable hypotheses. The premise behind successful correlations of this type is that a systematically perturbed molecular property affects a transition-state interaction between the catalyst, substrate and any reaction components involved in the determination of selectivity. Classic physical organic molecular descriptors, such as Hammett, Taft or Charton parameters, seek to independently probe isolated electronic or steric effects. However, these parameters cannot address simultaneous, non-additive variations to more than one molecular property, which limits their utility. Here we report a parameter system based on the vibrational response of a molecule to infrared radiation that can be used to mathematically model and predict selectivity trends for reactions with interlinked steric and electronic effects at positions of interest. The disclosed parameter system is mechanistically derived and should find broad use in the study of chemical and biological systems.

  3. Fast Simulation of the Impact Parameter Calculation of Electrons through Pair Production

    NASA Astrophysics Data System (ADS)

    Bang, Hyesun; Kweon, MinJung; Huh, Kyoung Bum; Pachmayer, Yvonne

    2018-05-01

    A fast simulation method is introduced that tremendously reduces the time required for the impact parameter calculation, a key observable in physics analyses at high-energy physics experiments and in detector optimisation studies. The impact parameter of electrons produced through pair production was calculated considering the key related processes, using the Bethe-Heitler formula, the Tsai formula and a simple geometric model. The calculations were performed under various conditions and the results were compared with those from full GEANT4 simulations. The computation time of this fast simulation method is 10⁴ times shorter than that of the full GEANT4 simulation.

  4. Effect of Thermal Budget on the Electrical Characterization of Atomic Layer Deposited HfSiO/TiN Gate Stack MOSCAP Structure

    PubMed Central

    Khan, Z. N.; Ahmed, S.; Ali, M.

    2016-01-01

    Metal Oxide Semiconductor (MOS) capacitors (MOSCAP) have been instrumental in making CMOS nano-electronics realized for back-to-back technology nodes. High-k gate stacks, including the desirable metal gate processing and its integration into CMOS technology, remain an active research area projecting the solution to address the requirements of technology roadmaps. Screening, selection and deposition of high-k gate dielectrics, post-deposition thermal processing, choice of metal gate structure and its post-metal deposition annealing are important parameters to optimize the process and possibly address the energy efficiency of CMOS electronics at nano scales. The atomic layer deposition technique is used throughout this work because of its known deposition kinetics, resulting in excellent electrical properties and a conformal structure of the device. The dynamics of annealing greatly influence the electrical properties of the gate stack and consequently the reliability of the process as well as of the manufacturable device. Again, the choice of the annealing technique (migration of thermal flux into the layer), time-temperature cycle and sequence are key parameters influencing the device's output characteristics. This work presents a careful selection of annealing process parameters to provide sufficient thermal budget to a Si MOSCAP with atomic-layer-deposited HfSiO high-k gate dielectric and TiN gate metal. Post-process annealing temperatures in the range of 600-1000°C with rapid dwell time provide a better trade-off between the desirable performance of capacitance-voltage hysteresis and the leakage current. The defect dynamics is thought to be responsible for the evolution of electrical characteristics in this Si MOSCAP structure specifically designed to tune the trade-off at low frequency for device application. PMID:27571412

  5. Satellite Galaxies in the Illustris-1 Simulation: Poor Tracers of the Underlying Mass Distribution

    NASA Astrophysics Data System (ADS)

    Brainerd, Tereasa G.

    2018-06-01

    The 3-d spatial distribution of luminous satellite galaxies in the z=0 snapshot of the Illustris-1 simulation is compared to the 3-d spatial distribution of the mass surrounding the primary galaxies about which the satellites orbit. The primary-satellite sample is selected in such a way that it matches the selection criteria used in a previous study of luminous satellite galaxies in the Millennium Run simulation. A key difference between the two simulations is that luminous galaxies in the Millennium Run are the result of a semi-analytic galaxy formation model, while in Illustris-1 the luminous galaxies are the result of numerical hydrodynamics, star formation and feedback models. The sample consists of 1,025 primary galaxies with absolute magnitudes Mr < -20.5, and there are a total of 4,546 satellites with absolute magnitudes Mr < -14.5 within the virial radii of the primary galaxies. The mass distribution surrounding the primary galaxies is well fitted by an NFW profile with a concentration parameter c = 11.9. Contrary to a previous study using satellite galaxies in the Millennium Run, the number density profile of the full satellite sample from Illustris-1 is not at all well-fitted by an NFW profile. In the case of the faintest satellites (Mr > -17), the satellite number density profile is well-fitted by an NFW profile, but the concentration parameter is exceptionally low (c = 1.8) compared to the concentration parameter of the mass surrounding the primary galaxies. The conclusion from this work is that luminous satellite galaxies in Illustris-1 are poor tracers of the mass distribution surrounding their primary galaxies.
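
    For context, the NFW profile being fitted has the standard form, with characteristic density ρ_s, scale radius r_s, and the concentration defined relative to the virial radius (textbook definitions):

    ```latex
    \rho(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2},
    \qquad c \equiv \frac{r_{\mathrm{vir}}}{r_s}
    ```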

  6. Controlling Ethylene for Extended Preservation of Fresh Fruits and Vegetables

    DTIC Science & Technology

    2008-12-01

    ...into a process simulation to determine the effects of key design parameters on the overall performance of the system. Integrating process simulation... [fragment of a commodity ethylene-rating table; recoverable rows: Asian Pears - High/High/Decay; Avocados - High/High/Decay; Bananas - Moderate/High/Decay; Cantaloupe - High/Moderate/Decay; Cherimoya - Very High/High] ...ozonolysis. Process simulation was subsequently used to understand the effect of key system parameters on EEU performance. Using this modeling work...

  7. Resilience of Key Biological Parameters of the Senegalese Flat Sardinella to Overfishing and Climate Change.

    PubMed

    Ba, Kamarel; Thiaw, Modou; Lazar, Najih; Sarr, Alassane; Brochier, Timothée; Ndiaye, Ismaïla; Faye, Alioune; Sadio, Oumar; Panfili, Jacques; Thiaw, Omar Thiom; Brehmer, Patrice

    2016-01-01

    The stock of the Senegalese flat sardinella, Sardinella maderensis, is highly exploited in Senegal, West Africa. Its growth and reproduction parameters are key biological indicators for improving fisheries management. This study reviewed these parameters using landing data from small-scale fisheries in Senegal and literature information dating back more than 25 years. Age was estimated using length-frequency data to calculate growth parameters and assess the growth performance index. With global climate change there has been an increase in the average sea surface temperature along the Senegalese coast, but the length-weight parameters, sex ratio, size at first sexual maturity, period of reproduction and condition factor of S. maderensis have not changed significantly. These parameters of S. maderensis have hardly changed despite high exploitation and fluctuations in the environmental conditions that affect the early development phases of small pelagic fish in West Africa. This lack of plasticity of the species regarding the biological parameters studied should be considered when drawing up relevant fishery management plans.

  8. The Research and Implementation of Vehicle Bluetooth Hands-free Devices Key Parameters Downloading Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-bo; Wang, Zhi-xue; Li, Jian-xin; Ma, Jian-hui; Li, Yang; Li, Yan-qiang

    To realize the Bluetooth functions and to allow information to be tracked effectively during production, vehicle Bluetooth hands-free devices need to download key parameters such as the Bluetooth address, CVC license and base plate number. The aim is therefore to find simple and effective methods for downloading these parameters to each vehicle Bluetooth hands-free device, and for controlling and recording their use. In this paper, a Bluetooth Serial Peripheral Interface programmer device is used to switch the parallel port to SPI. The first step in downloading parameters is to simulate SPI with the parallel port: the parallel port is operated in accordance with the SPI timing to perform the SPI function. The next step is to implement the SPI data transceiver functions according to the selected programming parameters. With this method, downloading parameters is fast and accurate, fully meeting the production requirements of vehicle Bluetooth hands-free devices, and it has played a large role on the production line.
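
    A sketch of the bit-banging idea ("operating the parallel port in accordance with the SPI timing"), assuming SPI mode 0 and hypothetical set_pin/read_pin helpers standing in for parallel-port I/O.

    ```python
    # Bit-banged SPI (mode 0, MSB first) over generic output/input lines.
    # set_pin/read_pin are hypothetical stand-ins for parallel-port access.
    def set_pin(name, value):
        pass          # hypothetical: drive one parallel-port output line

    def read_pin(name):
        return 0      # hypothetical: sample one parallel-port input line

    def spi_transfer_byte(tx_byte):
        rx_byte = 0
        for bit in range(7, -1, -1):
            set_pin("MOSI", (tx_byte >> bit) & 1)   # present the data bit
            set_pin("SCLK", 1)                      # slave samples on rising edge
            rx_byte = (rx_byte << 1) | read_pin("MISO")
            set_pin("SCLK", 0)                      # complete the clock cycle
        return rx_byte

    print(spi_transfer_byte(0xA5))
    ```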

  9. HSCT materials and structures: An MDC perspective

    NASA Technical Reports Server (NTRS)

    Sutton, Jay O.

    1992-01-01

    The key High Speed Civil Transport (HSCT) features which control the materials selection are discussed. Materials are selected based on weight and production economics. The top-down and bottom-up approaches to material selection are compared for the Mach 2.4 study baseline aircraft. The key materials- and structures-related tasks which remain to be accomplished prior to proceeding with the building of the HSCT aircraft are examined.

  10. Surface Nuclear Power for Human Mars Missions

    NASA Technical Reports Server (NTRS)

    Mason, Lee S.

    1999-01-01

    The Design Reference Mission for NASA's human mission to Mars indicates the desire for in-situ propellant production and bio-regenerative life support systems to ease Earth launch requirements. These operations, combined with crew habitation and science, result in surface power requirements approaching 160 kilowatts. The power system, delivered on an early cargo mission, must be deployed and operational prior to crew departure from Earth. The most mass-efficient means of satisfying these requirements is through the use of nuclear power. Studies have been performed to identify a potential system concept using a mobile cart to transport the power system away from the Mars lander and provide adequate separation between the reactor and crew. The studies included an assessment of reactor and power conversion technology options, selection of system and component redundancy, determination of optimum separation distance, and system performance sensitivity to some key operating parameters. The resulting system satisfies the key mission requirements, including autonomous deployment, high reliability, and cost effectiveness, at an overall system mass of 12 tonnes and a stowed volume of about 63 cu m.

  11. CATE: A Case Study of an Interdisciplinary Student-Led Microgravity Experiment

    NASA Astrophysics Data System (ADS)

    Colwell, J. E.; Dove, A.; Lane, S. S.; Tiller, C.; Whitaker, A.; Lai, K.; Hoover, B.; Benjamin, S.

    2015-12-01

    The Collisional Accretion Experiment (CATE) was designed, built, and flown on NASA's C-9 parabolic flight airplane in less than a year by an interdisciplinary team of 6 undergraduate students under the supervision of two faculty. CATE was selected in the initial NASA Undergraduate Student Instrument Project (USIP) solicitation in the Fall of 2013, and the experiment flight campaign was in July 2014. The experiment studied collisions between different particle populations at low velocities (sub-m/s) in a vacuum and microgravity to gain insight into processes in the protoplanetary disk and planetary ring systems. Faculty provided the experiment concept and key experiment design parameters, and the student team developed the detailed hardware design for all components, manufactured and tested hardware, operated the experiment in flight, and analyzed data post-flight. Students also developed and led an active social media campaign and education and public outreach campaign to engage local high school students in the project. The ability to follow an experiment through from conception to flight was a key benefit for undergraduate students whose available time for projects such as this is frequently limited to their junior and senior years. Key factors for success of the program included having an existing laboratory infrastructure and experience in developing flight payloads and an intrinsically simple experiment concept. Students were highly motivated, in part, by their sense of technical and scientific ownership of the project, and this engagement was key to the project's success.

  12. Natural Selection and Functional Potentials of Human Noncoding Elements Revealed by Analysis of Next Generation Sequencing Data

    PubMed Central

    Xu, Shuhua

    2015-01-01

    Noncoding DNA sequences (NCS) have attracted much attention recently due to their functional potentials. Here we attempted to reveal the functional roles of noncoding sequences from the point of view of natural selection, which typically indicates the functional potential of certain genomic elements. We analyzed nearly 37 million single nucleotide polymorphisms (SNPs) from the Phase I data of the 1000 Genomes Project. We estimated a series of key parameters of population genetics and molecular evolution to characterize sequence variations of the noncoding genome within and between populations, and identified natural selection footprints in NCS in worldwide human populations. Our results showed that purifying selection is prevalent and there is substantial constraint on variation in NCS, while positive selection is more likely to be specific to particular genomic regions and regional populations. Intriguingly, we observed a larger fraction of non-conserved NCS variants with lower derived allele frequency in the genome, indicating possible functional gain of non-conserved NCS. Notably, NCS elements are enriched for potentially functional markers such as eQTLs, TF motifs, and DNase I footprints in the genome. More interestingly, some NCS variants associated with diseases such as Alzheimer's disease, Type 1 diabetes, and immune-related bowel disorder (IBD) showed signatures of positive selection, although the majority of NCS variants reported as risk alleles by genome-wide association studies showed signatures of negative selection. Our analyses provided compelling evidence of natural selection forces on noncoding sequences in the human genome and advanced our understanding of their functional potentials, which play important roles in disease etiology and human evolution. PMID:26053627

  13. An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography

    NASA Technical Reports Server (NTRS)

    Ivancic, Will (Technical Monitor); Eddy, Wesley M.

    2005-01-01

    Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
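
    A minimal sketch of the kind of doubling edge case at issue, using generic textbook affine formulas over GF(p) (not any specific published specification): doubling a point with y = 0 (a point of order 2) has a vertical tangent and must yield the point at infinity, and a formula that divides by 2y without this check fails. The toy curve below is an illustrative assumption.

    ```python
    # Affine point doubling on y^2 = x^3 + a*x + b over GF(p), with the
    # vertical-tangent check that naive doubling formulas can omit.
    INFINITY = None

    def ec_double(P, a, p):
        if P is INFINITY:
            return INFINITY
        x, y = P
        if y % p == 0:                        # 2y == 0: tangent is vertical
            return INFINITY
        lam = (3 * x * x + a) * pow(2 * y, -1, p) % p   # slope of the tangent
        x3 = (lam * lam - 2 * x) % p
        y3 = (lam * (x - x3) - y) % p
        return (x3, y3)

    # Toy curve y^2 = x^3 + 2x + 3 over GF(97); (3, 6) lies on it.
    print(ec_double((3, 6), 2, 97))           # -> (80, 10)
    ```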

  14. Parameters of Technological Growth

    ERIC Educational Resources Information Center

    Starr, Chauncey; Rudman, Richard

    1973-01-01

    Examines the factors involved in technological growth and identifies the key parameters as societal resources and societal expectations. Concludes that quality of life can only be maintained by reducing population growth, since this parameter is the product of material levels, overcrowding, food, and pollution. (JR)

  15. Maximizing the information learned from finite data selects a simple model

    NASA Astrophysics Data System (ADS)

    Mattingly, Henry H.; Transtrum, Mark K.; Abbott, Michael C.; Machta, Benjamin B.

    2018-02-01

    We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.
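
    For reference, the Jeffreys prior that this scheme reduces to in the data-rich limit has the standard form, with I(θ) the Fisher information matrix (textbook definition):

    ```latex
    \pi(\theta) \propto \sqrt{\det I(\theta)}
    ```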

  16. Assessing the effect of selection with deltamethrin on biological parameters and detoxifying enzymes in Aedes aegypti (L.).

    PubMed

    Alvarez-Gonzalez, Leslie C; Briceño, Arelis; Ponce-Garcia, Gustavo; Villanueva-Segura, O Karina; Davila-Barboza, Jesus A; Lopez-Monroy, Beatriz; Gutierrez-Rodriguez, Selene M; Contreras-Perera, Yamili; Rodriguez-Sanchez, Iram P; Flores, Adriana E

    2017-11-01

    Resistance to insecticides through one or several mechanisms has a cost for an insect in various parameters of its biological cycle. The present study evaluated the effect of deltamethrin on detoxifying enzymes and biological parameters in a population of Aedes aegypti selected for 15 generations. The enzyme activities of alpha- and beta-esterases, mixed-function oxidases and glutathione-S-transferases were determined during selection, along with biological parameters. Overexpression of mixed-function oxidases as a mechanism of metabolic resistance to deltamethrin was found. There were decreases in the percentages of eggs hatching, pupation and age-specific survival, and in total survival at the end of the selection (F16). Although age-specific fecundity was not affected by selection with deltamethrin, total fertility, together with lower survival, significantly affected the gross reproduction rate, which gradually decreased due to deltamethrin selection. Similarly, the net reproductive rate and intrinsic growth rate were affected by selection. Alterations in life parameters could be due to the accumulation of noxious effects or deleterious genes related to detoxifying enzymes, specifically those coding for mixed-function oxidases, along with the presence of recessive alleles of the V1016I and F1534C mutations, associating deltamethrin resistance with fitness cost in Ae. aegypti. © 2017 Society of Chemical Industry.

  17. Interrogating Key Positions of Size-Reduced TALE Repeats Reveals a Programmable Sensor of 5-Carboxylcytosine.

    PubMed

    Maurer, Sara; Giess, Mario; Koch, Oliver; Summerer, Daniel

    2016-12-16

    Transcription-activator-like effector (TALE) proteins consist of concatenated repeats that recognize consecutive canonical nucleobases of DNA via the major groove in a programmable fashion. Since this groove displays unique chemical information for the four human epigenetic cytosine nucleobases, TALE repeats with epigenetic selectivity can be engineered, with potential to establish receptors for the programmable decoding of all human nucleobases. TALE repeats recognize nucleobases via key amino acids in a structurally conserved loop whose backbone is positioned very close to the cytosine 5-carbon. This complicates the engineering of selectivities for large 5-substituents. To interrogate a more promising structural space, we engineered size-reduced repeat loops, performed saturation mutagenesis of key positions, and screened a total of 200 repeat-nucleobase interactions for new selectivities. This provided insight into the structural requirements of TALE repeats for affinity and selectivity, revealed repeats with improved or relaxed selectivity, and resulted in the first selective sensor of 5-carboxylcytosine.

  18. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    NASA Astrophysics Data System (ADS)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

    There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
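
    A minimal sketch of the quantity these algorithms penalize, the (isotropic) total variation of a 2-D image; the epsilon smoothing is a common numerical convenience assumed here, not taken from the paper.

    ```python
    # Isotropic total variation of an image: sum of gradient magnitudes over
    # pixels, computed from finite differences.
    import numpy as np

    def total_variation(img, eps=1e-8):
        dx = np.diff(img, axis=1)[:-1, :]    # horizontal differences
        dy = np.diff(img, axis=0)[:, :-1]    # vertical differences
        return np.sum(np.sqrt(dx**2 + dy**2 + eps))

    img = np.random.default_rng(0).random((64, 64))
    print(total_variation(img))
    ```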

  19. An Evaluation of Microbial Profile in Halitosis with Tongue Coating Using PCR (Polymerase Chain Reaction)- A Clinical and Microbiological Study

    PubMed Central

    Kamaraj R., Dinesh; Bhushan, Kala S.; K.L., Vandana

    2014-01-01

    Background: A Medline search using the key words halitosis, tongue coating, polymerase chain reaction, and microbial profile did not reveal any study. Hence, the purpose of the present investigation was to assess malodor using the organoleptic method and the tanita device, and to quantify odoriferous microorganisms using the polymerase chain reaction technique in chronic periodontitis patients. Materials and Methods: The study included 30 chronic periodontitis patients. Halitosis was detected using organoleptic assessment & the tanita breath alert. Microbial analysis of Pg, Tf & Fn was done using PCR. Plaque index (PI), gingival index (GI), and gingival bleeding index (GBI) were recorded. Result: The maximum score of 3 for tongue coating was found in 60% of selected subjects. The tanita breath alert measured a VSC level of score 2 in 60% of selected subjects, while an organoleptic score of 4 was found in 50% of subjects. The maximum mean value of 31.1±36.5 was found for F. nucleatum (Fn), followed by P. gingivalis (Pg) (13±13.3) & T. forsythia (Tf) (7.16±8.68) in tongue samples of selected patients. A weak positive correlation was found between VSC levels (tanita score & organoleptic score) and clinical parameters. Conclusion: The halitosis assessment by measuring VSC levels using the organoleptic method and the tanita breath alert is clinically feasible. Maximum tongue coating was found in 60% of patients. Fn was found comparatively more than Pg & Tf. A weak positive correlation was found between VSC levels and clinical parameters such as PI, GI & GBI. Thus, the dentist/periodontist should emphasize tongue-cleaning measures that would reduce the odoriferous microbial load. PMID:24596791

  20. Performance mapping of a 30 cm engineering model thruster

    NASA Technical Reports Server (NTRS)

    Poeschel, R. L.; Vahrenkamp, R. P.

    1975-01-01

    A 30 cm thruster representative of the engineering model design has been tested over a wide range of operating parameters to document performance characteristics such as electrical and propellant efficiencies, double ion and beam divergence thrust loss, component equilibrium temperatures, operational stability, etc. Data obtained show that optimum power throttling, in terms of maximum thruster efficiency, is not highly sensitive to parameter selection. Consequently, considerations of stability, discharge chamber erosion, thrust losses, etc. can be made the determining factors for parameter selection in power throttling operations. Options in parameter selection based on these considerations are discussed.

  1. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology.

    PubMed

    Faltermeier, Rupert; Proescholdt, Martin A; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters improves the sensitivity of the method by a factor greater than four in comparison to our first analyses.
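
    A minimal sketch of the windowed-coherence ingredient, using scipy.signal.coherence on synthetic ABP/ICP-like traces; the sampling rate and window length are assumed values, not the method's tuned parameters.

    ```python
    # Magnitude-squared coherence between ABP- and ICP-like signals over
    # Fourier windows, the building block behind selected correlation analysis.
    import numpy as np
    from scipy.signal import coherence

    fs = 1.0                                   # assumed 1 Hz monitoring data
    rng = np.random.default_rng(0)
    t = np.arange(3600)                        # one hour of samples
    abp = np.sin(2 * np.pi * 0.01 * t) + rng.standard_normal(t.size)
    icp = 0.5 * np.sin(2 * np.pi * 0.01 * t) + rng.standard_normal(t.size)

    f, Cxy = coherence(abp, icp, fs=fs, nperseg=256)   # windowed coherence
    print("peak coherence:", Cxy.max(), "at", f[Cxy.argmax()], "Hz")
    ```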

  2. Parameter Optimization for Selected Correlation Analysis of Intracranial Pathophysiology

    PubMed Central

    Faltermeier, Rupert; Proescholdt, Martin A.; Bele, Sylvia; Brawanski, Alexander

    2015-01-01

    Recently we proposed a mathematical tool set, called selected correlation analysis, that reliably detects positive and negative correlations between arterial blood pressure (ABP) and intracranial pressure (ICP). Such correlations are associated with severe impairment of cerebral autoregulation and intracranial compliance, as predicted by a mathematical model. The time-resolved selected correlation analysis is based on a windowing technique combined with Fourier-based coherence calculations and therefore depends on several parameters. For real-time application of this method in an ICU, it is essential to adjust this mathematical tool for high sensitivity and distinct reliability. In this study, we introduce a method to optimize the parameters of the selected correlation analysis by correlating an index, called selected correlation positive (SCP), with the outcome of the patients as represented by the Glasgow Outcome Scale (GOS). For that purpose, the data of twenty-five patients were used to calculate the SCP value for each patient and a multitude of feasible parameter sets of the selected correlation analysis. It could be shown that an optimized set of parameters is able to improve the sensitivity of the method by a factor greater than four in comparison to our first analyses. PMID:26693250

  3. Continuous Variable Quantum Key Distribution Using Polarized Coherent States

    NASA Astrophysics Data System (ADS)

    Vidiella-Barranco, A.; Borelli, L. F. M.

    We discuss a continuous-variables method of quantum key distribution employing strongly polarized coherent states of light. The key encoding is performed using the variables known as Stokes parameters, rather than the field quadratures. Their quantum counterparts, the Stokes operators Ŝi (i=1,2,3), constitute a set of non-commuting operators, the precision of simultaneous measurements of a pair of them being limited by an uncertainty-like relation. Alice transmits a conveniently modulated two-mode coherent state, and Bob randomly measures one of the Stokes parameters of the incoming beam. After performing reconciliation and privacy amplification procedures, it is possible to distill a secret common key. We also consider a non-ideal situation, in which coherent states with thermal noise, instead of pure coherent states, are used for encoding.
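
    For reference, the uncertainty-like relation invoked above follows from the standard commutation relations of the Stokes operators (a textbook identity, not a result specific to this paper):

        [\hat{S}_1, \hat{S}_2] = 2i\hat{S}_3 \quad \text{(and cyclic permutations)},

    so that simultaneous measurements of any pair obey \Delta\hat{S}_1 \, \Delta\hat{S}_2 \ge |\langle\hat{S}_3\rangle|.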

  4. Effect of Porosity Parameters and Surface Chemistry on Carbon Dioxide Adsorption in Sulfur-Doped Porous Carbons.

    PubMed

    Wang, En-Jie; Sui, Zhu-Yin; Sun, Ya-Nan; Ma, Zhuang; Han, Bao-Hang

    2018-05-22

    In this work, a series of highly porous sulfur-doped carbons are prepared through physical activation methods by using polythiophene as a precursor. The morphology, structure, and physicochemical properties are revealed by a variety of characterization methods, such as scanning electron microscopy, Raman spectroscopy, X-ray photoelectron spectroscopy, and nitrogen sorption measurement. Their porosity parameters and chemical compositions can be well-tuned by changing the activating agents (steam and carbon dioxide) and reaction temperature. These sulfur-doped porous carbons possess specific surface areas of 670-2210 m² g⁻¹, total pore volumes of 0.31-1.26 cm³ g⁻¹, and sulfur contents of 0.6-4.9 atom %. The effect of porosity parameters and surface chemistry on carbon dioxide adsorption in sulfur-doped porous carbons is studied in detail. A careful analysis of carbon dioxide uptake at different temperatures (273 and 293 K) shows that pore volumes from small pores (less than 1 nm) play an important role in carbon dioxide adsorption at 273 K, whereas surface chemistry is the key factor at a higher adsorption temperature or lower relative pressure. Furthermore, sulfur-doped porous carbons also possess good gas adsorption selectivity and excellent recyclability for regeneration.

  5. Validation of systems biology derived molecular markers of renal donor organ status associated with long term allograft function.

    PubMed

    Perco, Paul; Heinzel, Andreas; Leierer, Johannes; Schneeberger, Stefan; Bösmüller, Claudia; Oberhuber, Rupert; Wagner, Silvia; Engler, Franziska; Mayer, Gert

    2018-05-03

    Donor organ quality affects long term outcome after renal transplantation. A variety of prognostic molecular markers is available, yet their validity often remains undetermined. A network-based molecular model reflecting donor kidney status based on transcriptomics data and molecular features reported in scientific literature to be associated with chronic allograft nephropathy was created. Significantly enriched biological processes were identified and representative markers were selected. An independent kidney pre-implantation transcriptomics dataset of 76 organs was used to predict estimated glomerular filtration rate (eGFR) values twelve months after transplantation using available clinical data and marker expression values. The best-performing regression model solely based on the clinical parameters donor age, donor gender, and recipient gender explained 17% of variance in post-transplant eGFR values. The five molecular markers EGF, CD2BP2, RALBP1, SF3B1, and DDX19B representing key molecular processes of the constructed renal donor organ status molecular model in addition to the clinical parameters significantly improved model performance (p-value = 0.0007) explaining around 33% of the variability of eGFR values twelve months after transplantation. Collectively, molecular markers reflecting donor organ status significantly add to prediction of post-transplant renal function when added to the clinical parameters donor age and gender.
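
    A sketch of the comparison described above: a linear regression on clinical covariates alone versus clinical covariates plus the five marker expression values. The data, coefficients, and noise level below are synthetic; only the comparison pattern follows the abstract.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(1)
        n = 76
        clinical = rng.standard_normal((n, 3))   # donor age, donor gender, recipient gender
        markers = rng.standard_normal((n, 5))    # EGF, CD2BP2, RALBP1, SF3B1, DDX19B
        egfr = clinical @ [5, 2, 1] + markers @ [3, 2, 2, 1, 1] + rng.normal(0, 10, n)

        both = np.hstack([clinical, markers])
        base = LinearRegression().fit(clinical, egfr)
        full = LinearRegression().fit(both, egfr)
        print("clinical only R^2:", r2_score(egfr, base.predict(clinical)))
        print("clinical + markers R^2:", r2_score(egfr, full.predict(both)))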

  6. A simple physiologically based pharmacokinetic model evaluating the effect of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans

    PubMed Central

    Saylor, Kyle; Zhang, Chenming

    2017-01-01

    Physiologically based pharmacokinetic (PBPK) modeling was applied to investigate the effects of anti-nicotine antibodies on nicotine disposition in the brains of rats and humans. Successful construction of both rat and human models was achieved by fitting model outputs to published nicotine concentration time course data in the blood and in the brain. Key parameters presumed to have the most effect on the ability of these antibodies to prevent nicotine from entering the brain were selected for investigation using the human model. These parameters, which included antibody affinity for nicotine, antibody cross-reactivity with cotinine, and antibody concentration, were broken down into different, clinically-derived in silico treatment levels and fed into the human PBPK model. Model predictions suggested that all three parameters, in addition to smoking status, have a sizable impact on anti-nicotine antibodies’ ability to prevent nicotine from entering the brain and that the antibodies elicited by current human vaccines do not have sufficient binding characteristics to reduce brain nicotine concentrations. If the antibody binding characteristics achieved in animal studies can similarly be achieved in human studies, however, nicotine vaccine efficacy in terms of brain nicotine concentration reduction is predicted to meet threshold values for alleviating nicotine dependence. PMID:27473014
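
    A toy sketch in the spirit of the model above: two compartments (blood and brain) with reversible antibody binding in the blood, so that increasing the binding capacity or affinity reduces the free nicotine available to cross into the brain. All rate constants and names are hypothetical, not the paper's fitted values.

        from scipy.integrate import solve_ivp

        k_in, k_out = 0.5, 0.3      # blood <-> brain transfer rates (1/min), assumed
        k_el = 0.1                  # elimination from blood (1/min), assumed
        k_on, k_off = 2.0, 0.02     # antibody binding/unbinding rates, assumed
        ab_total = 5.0              # antibody binding capacity (arbitrary units)

        def rhs(t, y):
            free, bound, brain = y
            bind = k_on * free * (ab_total - bound) - k_off * bound
            return [-k_el * free - k_in * free + k_out * brain - bind,  # free in blood
                    bind,                                               # antibody-bound
                    k_in * free - k_out * brain]                        # brain

        sol = solve_ivp(rhs, (0, 60), [1.0, 0.0, 0.0])
        print("relative brain nicotine at t = 60 min:", sol.y[2, -1])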

  7. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing-data problems, particularly for non-ignorable missingness, where a full-likelihood method cannot be adopted. It analyses how sensitive the conclusions (output) are to assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism. We call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing-data mechanism model assumption by comparing the datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
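
    A minimal sketch of the K-nearest-neighbour comparison step: each candidate sensitivity parameter indexes a simulated dataset, which is scored by the mean distance from observed points to their k nearest simulated neighbours. The Gaussian stand-ins and the parameter grid are illustrative assumptions, not the paper's construction.

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def knn_discrepancy(observed, simulated, k=5):
            nn = NearestNeighbors(n_neighbors=k).fit(simulated)
            dist, _ = nn.kneighbors(observed)
            return dist.mean()

        rng = np.random.default_rng(2)
        observed = rng.normal(0.0, 1.0, size=(200, 1))
        scores = {}
        for delta in [-1.0, -0.5, 0.0, 0.5, 1.0]:      # candidate sensitivity parameters
            simulated = rng.normal(delta, 1.0, size=(200, 1))  # stand-in MNAR simulation
            scores[delta] = knn_discrepancy(observed, simulated)
        print("most plausible candidate:", min(scores, key=scores.get))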

  8. From Mouth-level to Tooth-level DMFS: Conceptualizing a Theoretical Framework

    PubMed Central

    Bandyopadhyay, Dipankar

    2015-01-01

    Objective There is no dearth of correlated count data in any biological or clinical setting, and the ability to accurately analyze and interpret such data remains an exciting area of research. In oral health epidemiology, the Decayed, Missing, Filled (DMF) index has been continuously used for over 70 years as the key measure to quantify caries experience. The DMF index projects a subject's caries status using either the DMF(T), the total number of DMF teeth, or the DMF(S), counting the total DMF tooth surfaces, for that subject. However, surfaces within a particular tooth or a subject constitute clustered data, and the DMFS mostly overlooks this clustering effect to attain an over-simplified summary index, ignoring the true tooth-level caries status. Moreover, the DMFT/DMFS might exhibit an excess of some specific counts (say, zeroes representing a relatively disease-free carious state), or can exhibit overdispersion, and accounting for the excess responses or overdispersion remains a key component in selecting the appropriate modeling strategy. Methods & Results This concept paper presents the rationale and the theoretical framework that a dental researcher might consider at the onset in order to choose a plausible statistical model for tooth-level DMFS. Various nuances related to model fitting, selection, and parameter interpretation are also explained. Conclusion The author recommends that conceptualizing the correct stochastic framework serve as the guiding force in the dental researcher's never-ending goal of assessing complex covariate-response relationships efficiently. PMID:26618183
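
    Sketched below is the kind of comparison the framework motivates: a plain Poisson fit against a zero-inflated Poisson fit on counts with injected excess zeros, compared by AIC. The data are synthetic and the intercept-only design is purely illustrative.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(3)
        n = 500
        counts = rng.poisson(2.0, n)
        counts[rng.random(n) < 0.4] = 0       # inject excess zeros
        exog = np.ones((n, 1))                # intercept-only design

        poisson_fit = sm.Poisson(counts, exog).fit(disp=False)
        zip_fit = ZeroInflatedPoisson(counts, exog).fit(disp=False)
        print("Poisson AIC:", poisson_fit.aic, " ZIP AIC:", zip_fit.aic)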

  9. PCS optical fibers for an automobile data bus

    NASA Astrophysics Data System (ADS)

    Clarkin, James P.; Timmerman, Richard J.; Stolte, Gary W.; Klein, Karl-Friedrich

    2005-02-01

    Optical fibers have been used for data communications in automobiles for several years. The fiber of choice thus far has been a plastic core/plastic clad optical fiber (POF) consisting of the plastic polymethylmethacrylate (PMMA). The POF fiber provides a low-cost fiber with relatively easy termination. However, increasing demands regarding temperature performance, transmission losses and bandwidth have pushed the current limits of the POF fiber, and the automotive industry is now moving towards an optical fiber with a silica glass core/plastic clad (PCS). PCS optical fibers have been used successfully in industrial, medical, sensor, military and data communications systems for over two decades. The PCS fiber is now being adapted specifically for automotive use. In the following, the design criteria and design alternatives for the PCS as well as optical, thermal, and mechanical testing results for key automotive parameters are described. The fiber design tested was a 200 μm synthetic silica core/230 μm fluoropolymer cladding/1510 μm nylon buffer. Key attributes such as 700-900 nm spectral attenuation, 125°C thermal soak, -40 to 125°C thermal cycling, bending losses, mechanical strength, termination capability, and cost are discussed and compared. Overall, a specifically designed PCS fiber is expected to be acceptable for use in an automotive data bus, and will show improvement in optical transmission, temperature range and bandwidth. However, the final selection of buffer and jacket materials and properties will be most dependent on the selection of a reliable and economical termination method.

  10. Evolution and ecology meet molecular genetics: adaptive phenotypic plasticity in two isolated Negev desert populations of Acacia raddiana at either end of a rainfall gradient

    PubMed Central

    Ward, David; Shrestha, Madan K.; Golan-Goldhirsh, Avi

    2012-01-01

    Background and Aims The ecological, evolutionary and genetic bases of population differentiation in a variable environment are often related to the selection pressures that plants experience. We compared differences in several growth- and defence-related traits in two isolated populations of Acacia raddiana trees from sites at either end of an extreme environmental gradient in the Negev desert. Methods We used random amplified polymorphic DNA (RAPD) to determine the molecular differences between populations. We grew plants under two levels of water, three levels of nutrients and three levels of herbivory to test for phenotypic plasticity and adaptive phenotypic plasticity. Key Results The RAPD analyses showed that these populations are highly genetically differentiated. Phenotypic plasticity in various morphological traits in A. raddiana was related to patterns of population genetic differentiation between the two study sites. Although we did not test for maternal effects in these long-lived trees, significant genotype × environment (G × E) interactions in some of these traits indicated that such plasticity may be adaptive. Conclusions The main selection pressure in this desert environment, perhaps unsurprisingly, is water. Increased water availability resulted in greater growth in the southern population, which normally receives far less rain than the northern population. Even under the conditions that we defined as low water and/or nutrients, the performance of the seedlings from the southern population was significantly better, perhaps reflecting selection for these traits. Consistent with previous studies of this genus, there was no evidence of trade-offs between physical and chemical defences and plant growth parameters in this study. Rather, there appeared to be positive correlations between plant size and defence parameters. The great variation in several traits in both populations may result in a diverse potential for responding to selection pressures in different environments. PMID:22039007

  11. Electropolymerized fluorinated aniline-based fiber for headspace solid-phase microextraction and gas chromatographic determination of benzaldehyde in injectable pharmaceutical formulations.

    PubMed

    Mohammadi, Ali; Mohammadi, Somayeh; Bayandori Moghaddam, Abdolmajid; Masoumi, Vahideh; Walker, Roderick B

    2014-10-01

    In this study, a simple method was developed and validated to detect trace levels of benzaldehyde in injectable pharmaceutical formulations by solid-phase microextraction coupled with gas chromatography with flame ionization detection. Polyaniline was electrodeposited on a platinum wire in trifluoroacetic acid solvent by the cyclic voltammetry technique. This fiber shows high thermal and mechanical stability and high performance in the extraction of benzaldehyde. Extraction and desorption time and temperature, the salt effect, and gas chromatography parameters were optimized as key parameters. Under the optimum conditions, the fiber shows good linearity between the peak area ratio of benzaldehyde/3-chlorobenzaldehyde and benzaldehyde concentration in the range of 50-800 ng/mL, with percent relative standard deviation values ranging from 0.75 to 8.64% (n = 3). The limits of quantitation and detection were 50 and 16 ng/mL, respectively. The method has the requisite selectivity, sensitivity, accuracy and precision to assay benzaldehyde in injectable pharmaceutical dosage forms. © The Author [2013]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Self-Optimized Biological Channels in Facilitating the Transmembrane Movement of Charged Molecules

    PubMed Central

    Huyen, V. T. N.; Lap, Vu Cong; Nguyen, V. Lien

    2016-01-01

    We consider anisotropic two-dimensional diffusion of a charged molecule (particle) through a large biological channel under an external voltage. The channel is modeled as a cylinder with three structure parameters: radius, length, and surface density of negative charges located on the channel interior lining. These charges induce inside the channel a potential that plays a key role in controlling the particle current through the channel. It is shown that, to facilitate transmembrane particle movement, the channel should be reasonably self-optimized so that its potential coincides with the resonant one, resulting in a large particle current across the channel. The observed facilitation appears to be an intrinsic property of biological channels, regardless of the external voltage or the particle concentration gradient. This facilitation is very selective in the sense that a channel of definite structure parameters can facilitate the transmembrane movement of only particles of the proper valence at corresponding temperatures. Calculations also show that the modeled channel is non-ohmic, with an ion conductance that exhibits a resonance at the same channel potential as that identified in the current. PMID:27022394

  13. An Investigation of Candidate Sensor-Observable Wake Vortex Strength Parameters for the NASA Aircraft Vortex Spacing System (AVOSS)

    NASA Technical Reports Server (NTRS)

    Tatnall, Christopher R.

    1998-01-01

    The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse/decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor, to ensure safety by making wake observations to verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.

  14. CBM Resources/reserves classification and evaluation based on PRMS rules

    NASA Astrophysics Data System (ADS)

    Fa, Guifang; Yuan, Ruie; Wang, Zuoqian; Lan, Jun; Zhao, Jian; Xia, Mingjun; Cai, Dechao; Yi, Yanjing

    2018-02-01

    This paper introduces a set of definitions and classification requirements for coalbed methane (CBM) resources/reserves, based on the Petroleum Resources Management System (PRMS). The basic CBM classification criteria for 1P, 2P, 3P and contingent resources are put forward from the following aspects: ownership, project maturity, drilling requirements, testing requirements, economic requirements, infrastructure and market, timing of production and development, and so on. The volumetric method is used to evaluate the OGIP, with a focus on analyses of key parameters and principles of parameter selection, such as net thickness, ash and water content, coal rank and composition, coal density, cleat volume and saturation, and adsorbed gas content. A dynamic method is used to assess the reserves and recovery efficiency. Since differences in rock and fluid properties, displacement mechanism, completion and operating practices, and wellbore type result in different production curve characteristics, the factors affecting production behavior, the dewatering period, pressure build-up, and interference effects were analyzed. The conclusions and results of the paper can be used as important references for the reasonable assessment of CBM resources/reserves.
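
    A minimal sketch of the volumetric method referenced above, in the generic form OGIP = area x net thickness x coal density x gas content with an ash-and-moisture correction; the function, units, and example figures are illustrative assumptions, not values from the paper.

        def ogip(area_m2, net_thickness_m, coal_density_t_per_m3,
                 gas_content_m3_per_t, ash_fraction=0.0, moisture_fraction=0.0):
            """Original gas in place (m^3), corrected for ash and moisture."""
            clean_fraction = 1.0 - ash_fraction - moisture_fraction
            return (area_m2 * net_thickness_m * coal_density_t_per_m3
                    * gas_content_m3_per_t * clean_fraction)

        # Example: 10 km^2 block, 8 m net coal, 1.4 t/m^3, 12 m^3/t gas content
        print(ogip(10e6, 8.0, 1.4, 12.0, ash_fraction=0.15, moisture_fraction=0.05))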

  15. A Software Toolkit to Study Systematic Uncertainties of the Physics Models of the Geant4 Simulation Package

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael

    The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. flexible run-time configurable workflow, comprehensive bookkeeping, and an easy-to-expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.

  16. Effect of liquid crystal birefringence on the opacity and off-axis haze of PDLC films

    NASA Astrophysics Data System (ADS)

    Pane, S.; Caporusso, M.

    1998-02-01

    PDLC systems are thin films consisting of a dispersion of liquid crystal micro-droplets in a continuous solid phase of polymer matrix. Application of an electric field to a thin layer of PDLC sandwiched between two transparent electrodes switches the film from a light-scattering off-state to a transparent on-state. This effect makes them useful for a wide variety of applications; among them, smart windows for architectural applications are the most popular subject in the literature. For this application, the key performance parameters are the haze and the opacity. There are essentially two technologies used to prepare PDLC films, namely micro-encapsulation and phase separation. In the present work we show the correlation between the opacity and the off-axis haze in PDLC films prepared with a phase-separation technology. We give the general rule for selecting the liquid crystal properties that allow the preparation of high-opacity and low-haze PDLC films. Further study of the control of the parameters which influence the performance of PDLC films prepared with the phase-separation technology, and of the difference with the NCAP approach, is in progress at our laboratory.

  17. Optimization of operating parameters for gas-phase photocatalytic splitting of H2S by novel vermiculate packed tubular reactor.

    PubMed

    Preethi, V; Kanmani, S

    2016-10-01

    Hydrogen production by gas-phase photocatalytic splitting of hydrogen sulphide (H2S) was investigated with four semiconductor photocatalysts: CuGa1.6Fe0.4O2, ZnFe2O3, (CdS + ZnS)/Fe2O3 and Ce/TiO2. The CdS- and ZnS-coated core-shell particles (CdS + ZnS)/Fe2O3 show the highest rate of hydrogen (H2) production under optimized conditions. A packed-bed tubular reactor was used to study the performance of the prepared photocatalysts. Selection of the best packing material is key for maximum removal efficiency. Cheap, lightweight and easily adsorbing vermiculate materials were used as a novel packing material and were found to be effective in splitting H2S. The effects of various operating parameters, such as flow rate, sulphide concentration, catalyst dosage and light irradiation, were tested and optimized for a maximum H2 conversion of 92% from industrial waste H2S. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Experimental statistics for biological sciences.

    PubMed

    Bang, Heejung; Davidian, Marie

    2010-01-01

    In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression" - which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students, whether or not they major in the statistical sciences. We hope that from this chapter readers will understand the key statistical concepts and terminologies, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameter(s) and make inferences), and how to interpret the results. This text will be most useful as supplemental material while readers take their own statistics courses, or as a self-teaching guide and reference to accompany the manual of any statistical software.

  19. Practical procedure for discriminating monofloral honey with a broad pollen profile variability using an electronic tongue.

    PubMed

    Sousa, Mara E B C; Dias, Luís G; Veloso, Ana C A; Estevinho, Letícia; Peres, António M; Machado, Adélio A S C

    2014-10-01

    Colour and floral origin are key parameters that may influence the honey market. Monofloral light honey is in greater demand among consumers, mainly due to its flavour, and is more valuable for producers due to its higher price when compared to darker honey. The latter usually has a higher anti-oxidant content that increases its healthy potential. This work showed that it is possible to correctly classify monofloral honey with high variability in floral origin with a potentiometric electronic tongue, after making a preliminary selection of honey according to colour: white, amber and dark honey. The results showed that the device had a very satisfactory sensitivity towards floral origin (Castanea sp., Echium sp., Erica sp., Lavandula sp., Prunus sp. and Rubus sp.), allowing a leave-one-out cross-validation correct classification of 100%. Therefore, the E-tongue shows potential to be used at the analytical laboratory level for classifying honey samples according to market and quality parameters, as a practical tool for ensuring monofloral honey authenticity. Copyright © 2014 Elsevier B.V. All rights reserved.
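
    A sketch of the leave-one-out cross-validation scheme mentioned above, with linear discriminant analysis standing in for the paper's classifier; the data shapes (60 samples, 20 sensor signals, six floral origins) are illustrative assumptions.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(4)
        X = rng.standard_normal((60, 20))     # e-tongue sensor signals (stand-in)
        y = rng.integers(0, 6, 60)            # six floral origins (stand-in labels)
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                              cv=LeaveOneOut()).mean()
        print("LOO-CV correct classification rate:", acc)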

  20. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal hydraulics block, is shown. Such a method should involve a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing the transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  1. Improving hot region prediction by parameter optimization of density clustering in PPI.

    PubMed

    Hu, Jing; Zhang, Xiaolong

    2016-11-01

    This paper proposes an optimized algorithm which combines parameter-optimized density clustering with feature-based classification for hot region prediction. First, all the residues are classified by SVM to remove non-hot-spot residues; then density clustering with parameter selection is used to find hot regions. For the density clustering, this paper studies how to select the input parameters. There are two parameters, radius and density, in density-based incremental clustering. We first fix the density and enumerate the radius to find a pair of parameters which leads to the maximum number of clusters, and then we fix the radius and enumerate the density to find another pair of parameters which leads to the maximum number of clusters. Experimental results show that the proposed method using both pairs of parameters provides better prediction performance than the alternative; comparing the two, fixing the radius and enumerating the density gives slightly higher prediction accuracy than fixing the density and enumerating the radius. Copyright © 2016. Published by Elsevier Inc.
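
    A rough analogue of the two enumeration passes described above, using scikit-learn's DBSCAN, whose eps (radius) and min_samples (density) parameters stand in for the paper's radius and density; the coordinates and parameter grids are synthetic.

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(5)
        coords = rng.random((300, 3))         # stand-in for hot-spot residue coordinates

        def n_clusters(eps, min_samples):
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
            return len(set(labels)) - (1 if -1 in labels else 0)

        # Pass 1: fix the density, enumerate the radius for the most clusters.
        best_eps = max(np.arange(0.05, 0.5, 0.05), key=lambda e: n_clusters(e, 5))
        # Pass 2: fix a radius, enumerate the density the same way.
        best_min_samples = max(range(2, 15), key=lambda m: n_clusters(best_eps, m))
        print(best_eps, best_min_samples)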

  2. The DREO Elint Browser Utility (DEBU) reference manual

    NASA Astrophysics Data System (ADS)

    Ford, Barbara; Jones, David

    1992-04-01

    An electronic intelligence (ELINT) database browsing tool called DEBU has been developed that allows databases such as ELP, Kilting, EWIR, and AFEWC to be reviewed and analyzed from a user-friendly environment on a personal computer. DEBU's basic function is to allow users to examine the contents of user-selected subfiles of user-selected emitters of user-selected databases. DEBU augments this functionality with support for selecting (filtering) and combining subsets of emitters by user-selected attributes such as name, parameter type, or parameter value. DEBU provides facilities for examining histograms and x-y plots of selected parameters, for doing ambiguity analysis and mode-level analysis, and for generating and printing a variety of reports. A manual is provided for users of DEBU, including descriptions and illustrations of menus and windows.

  3. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
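
    Silverman's rule of thumb, which the proposed method builds on, has a standard closed form that can be sketched directly; whether the article uses exactly this variant (the 0.9 factor with the interquartile-range guard) is an assumption here.

        import numpy as np

        def silverman_bandwidth(x):
            """h = 0.9 * min(std, IQR/1.34) * n**(-1/5)."""
            x = np.asarray(x, dtype=float)
            sigma = x.std(ddof=1)
            iqr = np.subtract(*np.percentile(x, [75, 25]))
            return 0.9 * min(sigma, iqr / 1.34) * x.size ** (-1 / 5)

        scores = np.random.default_rng(6).normal(50, 10, 2000)  # stand-in test scores
        print(silverman_bandwidth(scores))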

  4. Universal statistics of selected values

    NASA Astrophysics Data System (ADS)

    Smerlak, Matteo; Youssef, Ahmed

    2017-03-01

    Selection, the tendency of some traits to become more frequent than others under the influence of some (natural or artificial) agency, is a key component of Darwinian evolution and countless other natural and social phenomena. Yet a general theory of selection, analogous to the Fisher-Tippett-Gnedenko theory of extreme events, is lacking. Here we introduce a probabilistic definition of selection and show that selected values are attracted to a universal family of limiting distributions which generalize the log-normal distribution. The universality classes and scaling exponents are determined by the tail thickness of the random variable under selection. Our results provide a possible explanation for skewed distributions observed in diverse contexts where selection plays a key role, from molecular biology to agriculture and sport.

  5. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection by employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
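
    A minimal sketch of the discrepancy-principle strategy on a synthetic sparse-recovery problem: scan the regularization parameter from strong to weak and keep the first (largest) value whose residual variance falls to the known noise variance. The sensitivity matrix, noise level, and grid are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(7)
        n, p = 100, 40
        A = rng.standard_normal((n, p))          # stand-in sensitivity matrix
        x_true = np.zeros(p)
        x_true[[3, 17]] = 1.0                    # sparse damage vector
        noise_var = 0.01
        b = A @ x_true + rng.normal(0, noise_var ** 0.5, n)

        for lam in np.logspace(0, -4, 50):       # strong -> weak regularization
            x = Lasso(alpha=lam, max_iter=10000).fit(A, b).coef_
            if np.var(b - A @ x) <= noise_var:   # discrepancy criterion met
                print("selected parameter:", lam, "support:", np.nonzero(x)[0])
                break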

  6. Simulating the role of visual selective attention during the development of perceptual completion

    PubMed Central

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P.

    2014-01-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds’ performance on a second measure, the perceptual unity task. Two parameters in the model – corresponding to areas in the occipital and parietal cortices – were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. PMID:23106728

  7. Simulating the role of visual selective attention during the development of perceptual completion.

    PubMed

    Schlesinger, Matthew; Amso, Dima; Johnson, Scott P

    2012-11-01

    We recently proposed a multi-channel, image-filtering model for simulating the development of visual selective attention in young infants (Schlesinger, Amso & Johnson, 2007). The model not only captures the performance of 3-month-olds on a visual search task, but also implicates two cortical regions that may play a role in the development of visual selective attention. In the current simulation study, we used the same model to simulate 3-month-olds' performance on a second measure, the perceptual unity task. Two parameters in the model - corresponding to areas in the occipital and parietal cortices - were systematically varied while the gaze patterns produced by the model were recorded and subsequently analyzed. Three key findings emerged from the simulation study. First, the model successfully replicated the performance of 3-month-olds on the unity perception task. Second, the model also helps to explain the improved performance of 2-month-olds when the size of the occluder in the unity perception task is reduced. Third, in contrast to our previous simulation results, variation in only one of the two cortical regions simulated (i.e. recurrent activity in posterior parietal cortex) resulted in a performance pattern that matched 3-month-olds. These findings provide additional support for our hypothesis that the development of perceptual completion in early infancy is promoted by progressive improvements in visual selective attention and oculomotor skill. © 2012 Blackwell Publishing Ltd.

  8. Selective photoinactivation of Histoplasma capsulatum by water-soluble derivatives chalcones.

    PubMed

    Melo, Wanessa C M A; Santos, Mariana Bastos Dos; Marques, Beatriz de Carvalho; Regasini, Luis Octávio; Giannini, Maria José Soares Mendes; Almeida, Ana Marisa Fusco

    2017-06-01

    Histoplasmosis is a respiratory and systemic disease caused by the dimorphic fungus Histoplasma capsulatum. The clinical features may vary from asymptomatic infections to a disseminated severe form, depending on patient immunity. The treatment of histoplasmosis can be performed with itraconazole and fluconazole, and in the disseminated forms amphotericin B is used. However, the critical side effects of amphotericin B, the cases of itraconazole therapy failure, and the appearance of fluconazole-resistant strains make it necessary to search for new strategies to treat this disease. Antimicrobial photodynamic therapy (aPDT) seems to be a potential candidate, since it has shown efficacy in inhibiting other dimorphic fungi. Although the photosensitizer (PS) chalcone aggregates in biological medium, it has antifungal activity and shows a high quantum yield of ROS formation. The aim of this study was therefore to obtain the experimental parameters to achieve an acceptably selective photoinactivation of H. capsulatum by chalcone water-soluble derivatives, in comparison with fibroblast and keratinocyte cells, which are constituents of some potential host tissues. Yeast and cells were incubated with the same chalcone concentrations and a short incubation time, followed by irradiation with an equal dose of light. The best conditions to kill H. capsulatum selectively were a very low photosensitizer concentration (1.95 μg mL⁻¹) incubated for 15 min and irradiated with a 450 nm LED at 24 J cm⁻². Key words: chalcone, Histoplasma capsulatum, aPDT, selectivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Spectral ageing in the era of big data: integrated versus resolved models

    NASA Astrophysics Data System (ADS)

    Harwood, Jeremy J.

    2017-04-01

    Continuous injection models of spectral ageing have long been used to determine the age of radio galaxies from their integrated spectrum; however, many questions about their reliability remain unanswered. With various large-area surveys imminent (e.g. LOw Frequency ARray, MeerKAT, Murchison Widefield Array) and planning for the next generation of radio interferometers well underway (e.g. next-generation VLA, Square Kilometre Array), investigations of radio galaxy physics are set to shift away from studies of individual sources to the population as a whole. Determining if and how integrated models of spectral ageing can be applied in the era of big data is therefore crucial. In this paper, I compare classical integrated models of spectral ageing to recent well-resolved studies that use modern analysis techniques on small spatial scales to determine their robustness and validity as a source selection method. I find that integrated models are unable to recover key parameters and, even when these are known a priori, provide a poor, frequency-dependent description of a source's spectrum. I show a disparity of up to a factor of 6 in age between the integrated and resolved methods but suggest that, even with these inconsistencies, such models still provide a potential method of candidate selection in the search for remnant radio galaxies and in providing a cleaner selection of high-redshift radio galaxies in z - α selected samples.

  10. Exploring Several Methods of Groundwater Model Selection

    NASA Astrophysics Data System (ADS)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) rank the models using the root mean square error (RMSE) obtained after UCODE-based model calibration; (2) calculate model probability using the GLUE method; (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluate model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and a fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also the uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
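
    For concreteness, the likelihood-based criteria in approach (3) can be computed from a least-squares calibration under Gaussian errors as sketched below; KIC additionally needs a parameter-covariance (Fisher information) term, so only AIC, AICc, and BIC are shown, and all numbers are hypothetical.

        import numpy as np

        def info_criteria(sse, n, k):
            """n observations, k estimated parameters, sse = sum of squared errors."""
            log_lik = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)
            aic = -2 * log_lik + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = -2 * log_lik + k * np.log(n)
            return aic, aicc, bic

        for k, sse in [(6, 52.1), (10, 40.3), (15, 39.8)]:   # hypothetical fits
            print(k, [round(v, 1) for v in info_criteria(sse, n=120, k=k)])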

  11. Estimation of genetic parameters and selection of high-yielding, upright common bean lines with slow seed-coat darkening.

    PubMed

    Alvares, R C; Silva, F C; Melo, L C; Melo, P G S; Pereira, H S

    2016-11-21

    Slow seed-coat darkening is desirable in common bean cultivars, and genetic parameters are important for defining breeding strategies. The aims of this study were to estimate genetic parameters for plant architecture, grain yield, grain size, and seed-coat darkening in common bean; to identify any genetic association among these traits; and to select lines that combine desirable phenotypes for these traits. Three experiments were set up in the winter 2012 growing season, in Santo Antônio de Goiás and Brasília, Brazil, including 220 lines obtained from four segregating populations and five parents. A triple lattice 15 x 15 experimental design was used. The traits evaluated were plant architecture, grain yield, grain size, and seed-coat darkening. Analyses of variance were carried out, and genetic parameters such as heritability, expected gain from selection, and correlations were estimated. For selection of superior lines, a "weight-free and parameter-free" index was used. The estimates of genetic variance, heritability, and expected gain from selection were high, indicating a good possibility of success in selection for the four traits. The genotype x environment interaction was proportionally more important for yield than for the other traits. There was no strong genetic correlation observed among the four traits, which indicates the possibility of selecting superior lines for many traits. Considering simultaneous selection, it was not possible to combine high genetic gains for all four traits. Forty-four lines that combined high yield, more upright plant architecture, slow-darkening grains, and commercial grain size were selected.
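
    A worked example of two of the genetic parameters named above, under assumed variance components: broad-sense heritability and the expected gain from selection (the breeder's equation form G = i * h^2 * sigma_p). The numbers are illustrative, not estimates from the study.

        genetic_var = 1.8           # sigma_g^2, assumed
        phenotypic_var = 3.0        # sigma_p^2 = sigma_g^2 + sigma_e^2, assumed
        selection_intensity = 1.76  # standardized intensity, ~top 10% selected

        h2 = genetic_var / phenotypic_var
        gain = selection_intensity * h2 * phenotypic_var ** 0.5
        print(f"heritability h2 = {h2:.2f}, expected gain = {gain:.2f} trait units")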

  12. MAGENCO: A map generalization controller for Arc/Info

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.; Cashwell, J.W.

    The Arc/Info GENERALIZE command implements the Douglas-Peucker algorithm, a well-regarded approach that preserves line "character" while reducing the number of points according to a tolerance parameter supplied by the user. The authors have developed an Arc Macro Language (AML) interface called MAGENCO that allows the user to browse workspaces, select a coverage, extract a sample from this coverage, then apply various tolerances to the sample. The results are shown in multiple display windows that are arranged around the original sample for quick visual comparison. The user may then return to the whole coverage and apply the chosen tolerance. They analyze the ergonomics of line simplification, explain the design (which includes an animated demonstration of the Douglas-Peucker algorithm), and discuss key points of the MAGENCO implementation.

  13. Can Nucleoli Be Markers of Developmental Potential in Human Zygotes?

    PubMed

    Fulka, Helena; Kyogoku, Hirohisa; Zatsepina, Olga; Langerova, Alena; Fulka, Josef

    2015-11-01

    In 1999, Tesarik and Greco reported that they could predict the developmental potential of human zygotes from a single static evaluation of their pronuclei. This was based on the distribution and number of specific nuclear organelles - the nucleoli. Recent studies in mice show that nucleoli play a key role in parental genome restructuring after fertilization, and that interfering with this process may lead to developmental failure. These studies thus support the Tesarik-Greco evaluation as a potentially useful method for selecting high-quality embryos in human assisted reproductive technologies. In this opinion article we discuss recent evidence linking nucleoli to parental genome reprogramming, and ask whether nucleoli can mirror or be used as representative markers of embryonic parameters such as chromosome content or DNA fragmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Elastic Response and Failure Studies of Multi-Wall Carbon Nanotube Twisted Yarns

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Jefferson, Gail D.; Frankland, Sarah-Jane V.

    2007-01-01

    Experimental data on the stress-strain behavior of a polymer multiwall carbon nanotube (MWCNT) yarn composite are used to motivate an initial study in multi-scale modeling of strength and stiffness. Atomistic and continuum length-scale modeling methods are outlined to illustrate the range of parameters required to accurately model behavior. The carbon nanotube yarns are four-ply, twisted, and combined with an elastomer to form a single-layer, unidirectional composite. Due to this textile structure, the yarn is a complicated system of unique geometric relationships subjected to combined loads. Experimental data illustrate the local failure modes induced by static tensile tests. Key structure-property relationships are highlighted at each length scale, indicating opportunities for parametric studies to assist the selection of advantageous material development and manufacturing methods.

  15. Meta-analysis using Dirichlet process.

    PubMed

    Muthukumarana, Saman; Tiwari, Ram C

    2016-02-01

    This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity, or variation in the underlying effects across studies, while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study effect parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset from the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
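
    A minimal stick-breaking sketch of a draw from a Dirichlet process prior, illustrating the discrete support and the clustering of study effects described above; the concentration parameter, base measure, and truncation level are illustrative assumptions.

        import numpy as np

        def dp_draw(alpha, base_sampler, n_atoms=100, rng=None):
            """Truncated stick-breaking draw: atoms and their weights."""
            rng = rng or np.random.default_rng()
            betas = rng.beta(1.0, alpha, n_atoms)
            weights = betas * np.concatenate([[1.0], np.cumprod(1 - betas[:-1])])
            return base_sampler(n_atoms), weights

        rng = np.random.default_rng(8)
        atoms, w = dp_draw(2.0, lambda k: rng.normal(0, 1, k), rng=rng)
        effects = rng.choice(atoms, size=30, p=w / w.sum())   # ties => clustering
        print(np.unique(effects).size, "distinct effects among 30 studies")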

  16. Study program to improve the open-circuit voltage of low resistivity single crystal silicon solar cells

    NASA Technical Reports Server (NTRS)

    Minnucci, J. A.; Matthei, K. W.

    1980-01-01

    The results of a 14 month program to improve the open circuit voltage of low resistivity silicon solar cells are described. The approach was based on ion implantation in 0.1- to 10.0-ohm-cm float-zone silicon. As a result of the contract effort, open circuit voltages as high as 645 mV (AMO 25 C) were attained by high dose phosphorus implantation followed by furnace annealing and simultaneous SiO2 growth. One key element was to investigate the effects of bandgap narrowing caused by high doping concentrations in the junction layer. Considerable effort was applied to optimization of implant parameters, selection of furnace annealing techniques, and utilization of pulsed electron beam annealing to minimize thermal process-induced defects in the completed solar cells.

  17. Overall uncertainty study of the hydrological impacts of climate change for a Canadian watershed

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, FrançOis P.; Poulin, Annie; Leconte, Robert

    2011-12-01

    General circulation models (GCMs) and greenhouse gas emissions scenarios (GGES) are generally considered to be the two major sources of uncertainty in quantifying the climate change impacts on hydrology. Other sources of uncertainty have been given less attention. This study considers overall uncertainty by combining results from an ensemble of two GGES, six GCMs, five GCM initial conditions, four downscaling techniques, three hydrological model structures, and 10 sets of hydrological model parameters. Each climate projection is equally weighted to predict the hydrology on a Canadian watershed for the 2081-2100 horizon. The results show that the choice of GCM is consistently a major contributor to uncertainty. However, other sources of uncertainty, such as the choice of a downscaling method and the GCM initial conditions, also have a comparable or even larger uncertainty for some hydrological variables. Uncertainties linked to GGES and the hydrological model structure are somewhat less than those related to GCMs and downscaling techniques. Uncertainty due to the hydrological model parameter selection has the least important contribution among all the variables considered. Overall, this research underlines the importance of adequately covering all sources of uncertainty. A failure to do so may result in moderately to severely biased climate change impact studies. Results further indicate that the major contributors to uncertainty vary depending on the hydrological variables selected, and that the methodology presented in this paper is successful at identifying the key sources of uncertainty to consider for a climate change impact study.

  18. System Architecture Modeling for Technology Portfolio Management using ATLAS

    NASA Technical Reports Server (NTRS)

    Thompson, Robert W.; O'Neil, Daniel A.

    2006-01-01

    Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as the Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in their ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies the technologies of interest whose impact on the SEA is to be evaluated. Third, the system modeling team creates models of architecture elements (e.g. launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.

  19. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, and more particularly on the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations, and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation cost when compared to the weights determined for the case minimizing Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case when compared to the cost determined for the case minimizing Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that, for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that gives maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to the 3000 pounds per square inch of the Space Shuttle Main Engine.

  20. Additive Manufacturing of Fuel Injectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadek Tadros, Dr. Alber Alphonse; Ritter, Dr. George W.; Drews, Charles Donald

    Additive manufacturing (AM), also known as 3D-printing, has been shifting from a novelty prototyping paradigm to a legitimate manufacturing tool capable of creating components for highly complex engineered products. An emerging AM technology for producing metal parts is the laser powder bed fusion (L-PBF) process; however, industry manufacturing specifications and component design practices for L-PBF have not yet been established. Solar Turbines Incorporated (Solar), an industrial gas turbine manufacturer, has been evaluating AM technology for development and production applications with the desire to enable accelerated product development cycle times, overall turbine efficiency improvements, and supply chain flexibility relative to conventional manufacturing processes (casting, brazing, welding). Accordingly, Solar teamed with EWI on a joint two-and-a-half-year project with the goal of developing a production L-PBF AM process capable of consistently producing high-nickel alloy material suitable for high temperature gas turbine engine fuel injector components. The project plan tasks were designed to understand the interaction of the process variables and their combined impact on the resultant AM material quality. The composition of the high-nickel alloy powders selected for this program met the conventional cast Hastelloy X compositional limits and were commercially available in different particle size distributions (PSD) from two suppliers. Solar produced all the test articles and both EWI and Solar shared responsibility for analyzing them. The effects of powder metal input stock, laser parameters, heat treatments, and post-finishing methods were evaluated. This process knowledge was then used to generate tensile, fatigue, and creep material properties data curves suitable for component design activities. The key process controls for ensuring consistent material properties were documented in AM powder and process specifications. The basic components of the project were:
    • Powder metal input stock: powder characterization, dimensional accuracy, metallurgical characterization, and mechanical properties evaluation.
    • Process parameters: laser parameter effects, post-printing heat-treatment development, mechanical properties evaluation, and post-finishing technique.
    • Material design curves: room- and elevated-temperature tensile, low cycle fatigue, and creep rupture properties curves generated.
    • AM specifications: key metal powder characteristics, laser parameters, and heat-treatment controls identified.

  1. Bayesian network representing system dynamics in risk analysis of nuclear systems

    NASA Astrophysics Data System (ADS)

    Varuttamaseni, Athi

    2011-12-01

    A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameters to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. They allow important safety parameters, such as the fuel clad temperature, to be expressed as functions of key reactor parameters, such as the coolant temperature and pressure, together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagating the information up through the Bayesian network to give the clad temperature. With knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, the core damage probability is calculated as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using standard techniques.
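
    The sampling-and-propagation step can be sketched as follows. The surrogate coefficients, input distributions, and damage ramp below are invented stand-ins; the paper derives its surrogates from RELAP5 runs via the ACE algorithm.

      # Invented surrogate and distributions; the paper's surrogates come from
      # RELAP5 runs processed with the ACE regression algorithm.
      import random

      def clad_temp_surrogate(scram_delay_s, coolant_temp_k, pressure_mpa):
          """Stand-in for an ACE-derived surrogate of peak clad temperature (K)."""
          return (900.0 + 25.0 * scram_delay_s
                  + 0.6 * (coolant_temp_k - 560.0) - 12.0 * (pressure_mpa - 15.0))

      def core_damage_prob(t_clad_k, t_onset=1200.0, t_fail=1478.0):
          """Assumed linear ramp between damage onset and certain failure."""
          return min(1.0, max(0.0, (t_clad_k - t_onset) / (t_fail - t_onset)))

      random.seed(0)
      n = 100_000
      total = 0.0
      for _ in range(n):
          scram = random.lognormvariate(2.0, 0.7)   # scram delay, s (assumed)
          t_cool = random.gauss(560.0, 8.0)         # coolant temperature, K (assumed)
          p = random.gauss(15.0, 0.4)               # pressure, MPa (assumed)
          total += core_damage_prob(clad_temp_surrogate(scram, t_cool, p))
      print(f"mean core damage probability: {total / n:.3f}")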

  2. The PROCARE consortium: toward an improved allocation strategy for kidney allografts.

    PubMed

    Otten, H G; Joosten, I; Allebes, W A; van der Meer, A; Hilbrands, L B; Baas, M; Spierings, E; Hack, C E; van Reekum, F; van Zuilen, A D; Verhaar, M C; Bots, M L; Seelen, M A J; Sanders, J S F; Hepkema, B G; Lambeck, A J; Bungener, L B; Roozendaal, C; Tilanus, M G J; Vanderlocht, J; Voorter, C E; Wieten, L; van Duijnhoven, E; Gelens, M; Christiaans, M; van Ittersum, F; Nurmohamed, A; Lardy, N M; Swelsen, W T; van Donselaar-van der Pant, K A M I; van der Weerd, N C; Ten Berge, I J M; Bemelman, F J; Hoitsma, A J; de Fijter, J W; Betjes, M G H; Roelen, D L; Claas, F H J

    2014-10-01

    Kidney transplantation is the best treatment option for patients with end-stage renal failure. At present, approximately 800 Dutch patients are registered on the active waiting list of Eurotransplant. The waiting time in the Netherlands for a kidney from a deceased donor is on average between 3 and 4 years. During this period, patients are fully dependent on dialysis, which only partly replaces renal function and limits quality of life. Mortality among patients on the waiting list is high. In order to increase the number of kidney donors, several initiatives have been undertaken by the Dutch Kidney Foundation, including national calls for donor registration and the provision of information on organ donation and kidney transplantation. The aim of the national PROCARE consortium is to develop improved matching algorithms that will lead to prolonged survival of transplanted donor kidneys and reduced HLA immunization. The latter will positively affect the waiting time for retransplantation. The present allocation algorithm is based, among other factors, on matching for HLA antigens, which were originally defined by antibodies using serological typing techniques. However, several studies suggest that this algorithm needs adaptation and that other immune parameters which are currently not included may assist in improving graft survival rates. We will employ a multicenter evaluation of 5429 patients transplanted between 1995 and 2005 in the Netherlands. The association between key clinical endpoints and selected laboratory-defined parameters will be examined, including Luminex-defined HLA antibody specificities, T and B cell epitopes recognized on the mismatched HLA antigens, non-HLA antibodies, and polymorphisms in complement and Fc receptors functionally associated with effector functions of anti-graft antibodies. From these data, key parameters determining the success of kidney transplantation will be identified, leading to additional parameters to be included in future matching algorithms aiming to extend the survival of transplanted kidneys and to diminish HLA immunization. Computer simulation studies will reveal the number of patients with a direct benefit from improved matching, the effect on shortening of the waiting list, and the decrease in waiting time. Copyright © 2014. Published by Elsevier B.V.

  3. Impact of signal scattering and parametric uncertainties on receiver operating characteristics

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Breton, Daniel J.; Hart, Carl R.; Pettit, Chris L.

    2017-05-01

    The receiver operating characteristic (ROC) curve, a plot of the probability of detection as a function of the probability of false alarm, plays a key role in the classical analysis of detector performance. However, meaningful characterization of the ROC curve is challenging when practically important complications such as variations in source emissions, environmental impacts on signal propagation, uncertainties in the sensor response, and multiple sources of interference are considered. In this paper, a relatively simple but realistic model for scattered signals is employed to explore how parametric uncertainties impact the ROC curve. In particular, we show that parametric uncertainties in the mean signal and noise power substantially raise the tails of the distributions; since receiver operation with a very low probability of false alarm and a high probability of detection is normally desired, these tails lead to severely degraded performance. Because full a priori knowledge of such parametric uncertainties is rarely available in practice, analyses must typically be based on a finite sample of environmental states, which only partially characterizes the range of parameter variations. We show how this effect can lead to misleading assessments of system performance. For the cases considered, approximately 64 or more statistically independent samples of the uncertain parameters are needed to accurately predict the probabilities of detection and false alarm. A connection is also described between the selection of suitable distributions for the uncertain parameters and Bayesian adaptive methods for inferring the parameters.
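
    The tail-raising effect can be reproduced with a small simulation: an energy detector in Gaussian noise, with the mean signal and noise powers either fixed or fluctuating log-normally between trials. The detection model and numbers are illustrative assumptions, not the paper's scattering model.

      # Energy detector with fixed vs. fluctuating mean powers; all values assumed.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 200_000
      SNR_DB = 10.0       # mean signal-to-noise ratio
      SPREAD_DB = 3.0     # per-trial spread of mean powers in the uncertain case

      def statistics_under(signal_present, uncertain):
          spread = SPREAD_DB if uncertain else 0.0
          noise_pow = 10 ** (rng.normal(0.0, spread, N) / 10)
          sig_pow = 10 ** (rng.normal(SNR_DB, spread, N) / 10) if signal_present else 0.0
          # energy of one complex sample: exponential with mean (noise + signal) power
          return rng.exponential(noise_pow + sig_pow)

      for uncertain in (False, True):
          h0 = statistics_under(False, uncertain)
          h1 = statistics_under(True, uncertain)
          thr = np.quantile(h0, 1 - 1e-3)   # threshold for empirical Pfa = 1e-3
          print(f"uncertain={uncertain}:  Pd at Pfa=1e-3 -> {np.mean(h1 > thr):.3f}")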

  4. Multi-frequency and polarimetric radar backscatter signatures for discrimination between agricultural crops at the Flevoland experimental test site

    NASA Technical Reports Server (NTRS)

    Freeman, A.; Villasenor, J.; Klein, J. D.

    1991-01-01

    We describe the calibration and analysis of multi-frequency, multi-polarization radar backscatter signatures over an agricultural test site in the Netherlands. The calibration procedure involved two stages: in the first stage, polarimetric and radiometric calibrations (ignoring noise) were carried out using square-base trihedral corner reflector signatures and some properties of the clutter background. In the second stage, a novel algorithm was used to estimate the noise level in the polarimetric data channels by using the measured signature of an idealized rough surface with Bragg scattering (the ocean in this case). This estimated noise level was then used to correct the measured backscatter signatures from the agricultural fields. We examine the significance of several key parameters extracted from the calibrated and noise-corrected backscatter signatures. The significance is assessed in terms of the ability to uniquely separate 13 different backscatter classes selected from the test site data, including eleven different crops, one forest, and one ocean area. Using the parameters with the highest separation for a given class, we use a hierarchical algorithm to classify the entire image. We find that many classes, including ocean, forest, potato, and beet, can be identified with high reliability, while the classes for which no single parameter exhibits sufficient separation have higher rates of misclassification. We expect that modified decision criteria involving simultaneous consideration of several parameters would increase performance for these classes.

  5. Review of collagen I hydrogels for bioengineered tissue microenvironments: characterization of mechanics, structure, and transport.

    PubMed

    Antoine, Elizabeth E; Vlachos, Pavlos P; Rylander, Marissa Nichole

    2014-12-01

    Type I collagen hydrogels have been used successfully as three-dimensional substrates for cell culture and have shown promise as scaffolds for engineered tissues and tumors. A critical step in the development of collagen hydrogels as viable tissue mimics is quantitative characterization of hydrogel properties and their correlation with fabrication parameters, which enables hydrogels to be tuned to match specific tissues or fulfill engineering requirements. A significant body of work has been devoted to characterization of collagen I hydrogels; however, due to the breadth of materials and techniques used for characterization, published data are often disjoint and hence their utility to the community is reduced. This review aims to determine the parameter space covered by existing data and identify key gaps in the literature so that future characterization and use of collagen I hydrogels for research can be most efficiently conducted. This review is divided into three sections: (1) relevant fabrication parameters are introduced and several of the most popular methods of controlling and regulating them are described, (2) hydrogel properties most relevant for tissue engineering are presented and discussed along with their characterization techniques, (3) the state of collagen I hydrogel characterization is recapitulated and future directions are proposed. Ultimately, this review can serve as a resource for selection of fabrication parameters and material characterization methodologies in order to increase the usefulness of future collagen-hydrogel-based characterization studies and tissue engineering experiments.

  7. 8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A; Cole, Wesley J; Sun, Yinong

    Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve demand over the evolution of many years or decades. Various CEM formulations are used to evaluate systems ranging in scale from states or utility service territories to national or multi-national systems. CEMs can be computationally complex, and to achieve acceptable solve times, key parameters are often estimated using simplified methods. In this paper, we focus on two of these key parameters associated with the integration of variable generation (VG) resources: capacity value and curtailment. We first discuss common modeling simplifications used in CEMs to estimate capacity value and curtailment, many of which are based on a representative subset of hours that can miss important tail events or which require assumptions about the load and resource distributions that may not match actual distributions. We then present an alternate approach that captures key elements of chronological operation over all hours of the year without the computationally intensive economic dispatch optimization typically employed within more detailed operational models. The updated methodology characterizes (1) the contribution of VG to system capacity during high-load and net-load hours, (2) the curtailment level of VG, and (3) the potential reductions in curtailment enabled through deployment of storage and more flexible operation of select thermal generators. We apply this alternate methodology to an existing CEM, the Regional Energy Deployment System (ReEDS). Results demonstrate that this alternate approach provides more accurate estimates of capacity value and curtailment by explicitly capturing system interactions across all hours of the year. This approach could be applied more broadly to CEMs at many different scales where hourly resource and load data are available, greatly improving the representation of challenges associated with the integration of variable generation resources.
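
    The core of the 8760-based calculation can be sketched in a few lines: credit variable generation by its average output during the top net-load hours, and estimate curtailment from hourly surpluses. The synthetic profiles and must-run floor below are assumptions for illustration, not ReEDS inputs.

      # Synthetic hourly profiles; nameplate and must-run floor are assumptions.
      import numpy as np

      rng = np.random.default_rng(42)
      hours = np.arange(8760)
      load = (30_000 + 8_000 * np.sin(2 * np.pi * (hours % 24) / 24 - 2.0)
              + 3_000 * rng.standard_normal(8760))           # MW
      NAMEPLATE = 5_000
      wind = NAMEPLATE * rng.beta(2.0, 3.0, 8760)            # MW of VG output

      net_load = load - wind
      top = np.argsort(net_load)[-100:]       # 100 highest net-load hours
      capacity_value = wind[top].mean() / NAMEPLATE
      print(f"VG capacity value over top-100 net-load hours: {capacity_value:.2%}")

      MUST_RUN = 24_000                       # MW of inflexible generation (assumed)
      absorbable = np.maximum(load - MUST_RUN, 0.0)
      surplus = np.minimum(np.maximum(wind - absorbable, 0.0), wind)
      print(f"curtailed share of VG energy: {surplus.sum() / wind.sum():.2%}")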

  8. Exploring the effects of spatial autocorrelation when identifying key drivers of wildlife crop-raiding.

    PubMed

    Songhurst, Anna; Coulson, Tim

    2014-03-01

    Few universal trends in spatial patterns of wildlife crop-raiding have been found. Variations in wildlife ecology and movements, and in human spatial use, have been identified as causes of this apparent unpredictability. However, varying patterns of spatial autocorrelation (SA) in human-wildlife conflict (HWC) data could also contribute. We explicitly explore the effects of SA on wildlife crop-raiding data in order to facilitate the design of future HWC studies. We conducted a comparative survey of raided and nonraided fields to determine key drivers of crop-raiding. Data were subsampled at different spatial scales to select independent raiding data points. The model derived from all data was fitted to the subsampled data sets, and the model parameters were compared to determine the effect of SA. Most methods used to account for SA in data attempt to correct for the change in P-values; yet, by subsampling data at broader spatial scales, we identified changes in the regression estimates themselves. We consequently advocate reporting both P-values and parameter estimates across a range of spatial scales to aid biological interpretation. Patterns of SA vary spatially in our crop-raiding data. The spatial distribution of fields should therefore be considered when choosing the spatial scale for analyses in HWC studies. Robust key drivers of elephant crop-raiding included the raiding history of a field and the distance of the field to a main elephant pathway. Understanding spatial patterns and determining reliable socio-ecological drivers of wildlife crop-raiding is paramount for designing mitigation and land-use planning strategies to reduce HWC. Spatial patterns of HWC are complex and determined by multiple factors acting at more than one scale; therefore, studies need to be designed with an understanding of the effects of SA. Our methods are accessible to a variety of practitioners to assess the effects of SA, thereby improving the reliability of conservation management actions.

  9. Quantum key distribution with passive decoy state selection

    NASA Astrophysics Data System (ADS)

    Mauerer, Wolfgang; Silberhorn, Christine

    2007-05-01

    We propose a quantum key distribution scheme which closely matches the performance of a perfect single-photon source. It nearly attains the physical upper bound in terms of key generation rate and maximally achievable distance. Our scheme relies on a practical setup based on a parametric downconversion source and present-day, nonideal photon-number detection. Arbitrary experimental imperfections which lead to bit errors are included. We select decoy states by classical postprocessing. This allows one to improve the effective signal statistics and achievable distance.

  10. Modelling the water balance of irrigated fields in tropical floodplain soils using Hydrus-1D

    NASA Astrophysics Data System (ADS)

    Beyene, Abebech; Frankl, Amaury; Verhoest, Niko E. C.; Tilahun, Seifu; Alamirew, Tena; Adgo, Enyew; Nyssen, Jan

    2017-04-01

    Accurate estimation of evaporation, transpiration, and deep percolation is crucial in irrigated agriculture and the sustainable management of water resources. Here, the Hydrus-1D process-based numerical model was used to estimate the actual transpiration, soil evaporation, and deep percolation from irrigated fields on floodplain soils. Field experiments were conducted from December 2015 to May 2016 in a small irrigation scheme (50 ha) called 'Shina', located in the Lake Tana floodplains of Ethiopia. Six experimental plots (three for onion and three for maize) were selected along a topographic transect to account for soil and groundwater variability. The irrigation amount (400 to 550 mm during the growing period) was measured using V-notches installed at each plot boundary, and daily groundwater levels were measured manually from piezometers. No surface runoff was observed in the growing period, and rainfall was measured using a manual rain gauge. All daily weather data required for the evapotranspiration calculation using the Penman-Monteith equation were collected from a nearby meteorological station. The soil profiles were described for each field to include the vertical soil heterogeneity in the soil water balance simulations. The soil texture, organic matter, bulk density, field capacity, wilting point, and saturated moisture content were measured for all soil horizons. Soil moisture was monitored at 30 and 60 cm depths. The soil hydraulic parameters for each horizon were estimated using KNN pedotransfer functions for tropical soils and were effectively fitted using the RETC program (R2 = 0.98±0.011) for the initial prediction. A local sensitivity analysis was performed to select and optimize the most important hydraulic parameters for soil water flow in the unsaturated zone. The most sensitive parameters were the saturated hydraulic conductivity (Ks), the saturated moisture content (θs), and the pore size distribution parameter (n). Inverse modelling using Hydrus-1D further optimized these parameters (R2 = 0.74±0.13). Using the optimized hydraulic parameters, the soil water dynamics were simulated using Hydrus-1D. The atmospheric boundary condition with surface runoff was used as the upper boundary condition, with measured rainfall and irrigation as input data. A variable pressure head was selected as the lower boundary condition, with daily records of the groundwater level as time-variable input data. The Hydrus-1D model was successfully applied and calibrated in the study area. The average seasonal actual transpiration values are 310±13 mm for onion and 429±24.7 mm for maize fields. The seasonal average soil evaporation ranges from 12±2.05 mm for maize fields to 38±2.85 mm for onion fields. The seasonal deep percolation from irrigation appeared to be 12 to 40% of the applied irrigation. The Hydrus-1D model was able to simulate the temporal and spatial variations of soil water dynamics in the unsaturated zone of tropical floodplain soils. Key words: floodplains, hydraulic parameters, parameter optimization, small-scale irrigation
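
    The initial parameterization step, fitting retention data as the RETC program does, can be sketched with a standard van Genuchten fit. The data points and bounds below are invented for illustration.

      # Invented retention data; fit of the van Genuchten model as in RETC.
      import numpy as np
      from scipy.optimize import curve_fit

      def van_genuchten(h, theta_r, theta_s, alpha, n):
          """Water content theta(h) for suction head h (cm)."""
          m = 1.0 - 1.0 / n
          return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

      h_obs = np.array([1.0, 10.0, 31.0, 100.0, 310.0, 1000.0, 15000.0])  # cm
      theta_obs = np.array([0.48, 0.46, 0.42, 0.35, 0.28, 0.22, 0.12])    # cm3/cm3

      p0 = [0.08, 0.48, 0.02, 1.4]    # theta_r, theta_s, alpha (1/cm), n
      bounds = ([0.0, 0.3, 1e-4, 1.05], [0.25, 0.6, 1.0, 3.0])
      (theta_r, theta_s, alpha, n), _ = curve_fit(van_genuchten, h_obs, theta_obs,
                                                  p0=p0, bounds=bounds)
      print(f"theta_r={theta_r:.3f}  theta_s={theta_s:.3f}  "
            f"alpha={alpha:.4f} 1/cm  n={n:.2f}")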

  11. Adaptive real time selection for quantum key distribution in lossy and turbulent free-space channels

    NASA Astrophysics Data System (ADS)

    Vallone, Giuseppe; Marangon, Davide G.; Canale, Matteo; Savorgnan, Ilaria; Bacco, Davide; Barbieri, Mauro; Calimani, Simon; Barbieri, Cesare; Laurenti, Nicola; Villoresi, Paolo

    2015-04-01

    The unconditional security of cryptographic keys created by quantum key distribution (QKD) protocols will induce a quantum leap in free-space communication privacy, much as is already being realized over secure optical fiber connections. However, free-space channels, in particular those with long links and atmospheric turbulence, are affected by losses, fluctuating transmissivity, and background light that impair the conditions for secure QKD. Here we introduce a method to counteract atmospheric turbulence in QKD experiments. Our adaptive real time selection (ARTS) technique at the receiver is based on selecting the intervals with higher channel transmissivity. We demonstrate, using data from the Canary Island 143-km free-space link, that conditions with an unacceptable average quantum bit error rate, which would prevent the generation of a secure key, can be exploited once the data are parsed according to the instantaneous scintillation using the ARTS technique.
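
    The essence of ARTS can be illustrated with a toy simulation: model scintillation as log-normal fading, keep only the high-transmissivity slots, and compare the quantum bit error rate (QBER) before and after selection. The fading statistics, detection probabilities, and background rate are assumed values, not the experiment's data.

      # Toy scintillation and detection model; all rates are assumed values.
      import numpy as np

      rng = np.random.default_rng(7)
      SLOTS = 1_000_000
      # log-normal fading is a common model for turbulence-induced scintillation
      transmissivity = np.minimum(rng.lognormal(-2.5, 0.8, SLOTS), 1.0)

      E_DET = 0.01      # intrinsic optical error rate (assumed)
      P_DARK = 1e-4     # background/dark-count probability per slot (assumed)

      p_signal = 0.1 * transmissivity           # signal detection prob. per slot
      p_click = p_signal + P_DARK
      # intrinsic errors on signal clicks; background clicks are random (50% wrong)
      qber = (E_DET * p_signal + 0.5 * P_DARK) / p_click

      def average_qber(mask):
          return np.sum(qber[mask] * p_click[mask]) / np.sum(p_click[mask])

      all_slots = np.ones(SLOTS, dtype=bool)
      keep = transmissivity > np.quantile(transmissivity, 0.8)  # best 20% of slots
      print(f"QBER, all slots          : {average_qber(all_slots):.3%}")
      print(f"QBER, ARTS-selected slots: {average_qber(keep):.3%}")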

  12. 40 CFR 86.094-22 - Approval of application for certification; test fleet selections; determinations of parameters...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Audit and Production Compliance Audit testing, the adequacy of the limits, stops, seals, or other means... (Selective Enforcement Audit and Production Compliance Audit) only the actual settings to which the parameter... Selective Enforcement Audit, adequacy of limits, and physically adjustable ranges. 86.094-22 Section 86.094...

  13. Chapter 2. Selecting Key Habitat Attributes for Monitoring

    Treesearch

    Gregory D. Hayward; Lowell H. Suring

    2013-01-01

    The success of habitat monitoring programs depends, to a large extent, on carefully selecting key habitat attributes to monitor. The challenge of choosing a limited but sufficient set of attributes will differ depending on the objectives of the monitoring program. In some circumstances, such as managing National Forest System lands for threatened and endangered species...

  14. An Attempt of Formalizing the Selection Parameters for Settlements Generalization in Small-Scales

    NASA Astrophysics Data System (ADS)

    Karsznia, Izabela

    2014-12-01

    The paper addresses one of the most important problems in context-sensitive settlement selection for small-scale maps. So far, no formal parameters for small-scale settlement generalization have been specified, so the problem is an important and innovative challenge. It is also crucial from a practical point of view, as appropriate generalization algorithms are needed for the generalization of the General Geographic Objects Database, an essential Spatial Data Infrastructure component in Poland. The author proposes and verifies quantitative generalization parameters for the settlement selection process in small-scale maps. The selection of settlements was carried out in two research areas, Lower Silesia and Łódź Province. Based on the conducted analysis, appropriate context-sensitive settlement selection parameters have been defined. Particular effort has been made to develop a methodology of quantitative settlement selection that would be useful in automated processing and would preserve the specific character of the generalized objects.

  15. Evaluation of hydrogen embrittlement and temper embrittlement by key curve method in instrumented Charpy test

    NASA Astrophysics Data System (ADS)

    Ohtsuka, N.; Shindo, Y.; Makita, A.

    2010-06-01

    An instrumented Charpy test was conducted on small-sized specimens of 2¼Cr-1Mo steel. In the test, the single-specimen key curve method was applied to determine the fracture toughness for the initiation of crack extension in the hydrogen-free condition, KIC, and for hydrogen embrittlement cracking, KIH. The tearing modulus, as a parameter for resistance to crack extension, was also determined. The role of these parameters is discussed at an upper-shelf temperature and at a transition temperature. The key curve method combined with the instrumented Charpy test was thus shown to be applicable to the evaluation of not only temper embrittlement but also hydrogen embrittlement.

  16. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor is presented for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values and is sustained for a selected discrete time interval. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times, producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal sustained beyond the selected time interval.
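
    A behavioral sketch of the claimed logic follows: an input crossing the threshold in the selected direction must persist for the selected suppression time before the output drive signal is asserted. The class and its sampling interface are illustrative, not the patented circuit.

      # Behavioral sketch only; names and sampling interface are illustrative.
      class TransientSignalSuppressor:
          def __init__(self, threshold, rising=True, suppression_time_s=0.5):
              self.threshold = threshold
              self.rising = rising                 # direction of crossing to detect
              self.suppression_time_s = suppression_time_s
              self._crossed_since = None           # time the current crossing began

          def _crossed(self, value):
              return value > self.threshold if self.rising else value < self.threshold

          def sample(self, t_s, value):
              """Feed one (time, measurand) sample; True energizes the relay."""
              if not self._crossed(value):
                  self._crossed_since = None       # transient ended: reset the timer
                  return False
              if self._crossed_since is None:
                  self._crossed_since = t_s        # crossing just started
              return (t_s - self._crossed_since) >= self.suppression_time_s

      # a 0.2 s spike is suppressed; a sustained crossing drives the output
      sup = TransientSignalSuppressor(threshold=100.0, suppression_time_s=0.5)
      for t, v in [(0.0, 90), (0.1, 120), (0.3, 95), (1.0, 130), (1.4, 130), (1.6, 130)]:
          print(f"t={t:.1f}s value={v}: drive={sup.sample(t, v)}")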

  17. Device-independent secret-key-rate analysis for quantum repeaters

    NASA Astrophysics Data System (ADS)

    Holz, Timo; Kampermann, Hermann; Bruß, Dagmar

    2018-01-01

    The device-independent approach to quantum key distribution (QKD) aims to establish a secret key between two or more parties with untrusted devices, potentially under full control of a quantum adversary. The performance of a QKD protocol can be quantified by the secret key rate, which can be lower bounded via the violation of an appropriate Bell inequality in a setup with untrusted devices. We study secret key rates in the device-independent scenario for different quantum repeater setups and compare them to their device-dependent analogs. The quantum repeater setups under consideration are the original protocol by Briegel et al. [Phys. Rev. Lett. 81, 5932 (1998), 10.1103/PhysRevLett.81.5932] and the hybrid quantum repeater protocol by van Loock et al. [Phys. Rev. Lett. 96, 240501 (2006), 10.1103/PhysRevLett.96.240501]. For a given repeater scheme and a given QKD protocol, the secret key rate depends on a variety of parameters, such as the gate quality or the detector efficiency. We systematically analyze the impact of these parameters and suggest optimized strategies.

  18. A practical guide to assessing clinical decision-making skills using the key features approach.

    PubMed

    Farmer, Elizabeth A; Page, Gordon

    2005-12-01

    This paper in the series on professional assessment provides a practical guide to writing key features problems (KFPs). Key features problems test clinical decision-making skills in written or computer-based formats. They are based on the concept of critical steps or 'key features' in decision making and represent an advance on the older, less reliable patient management problem (PMP) formats. The practical steps in writing these problems are discussed and illustrated by examples. Steps include assembling problem-writing groups, selecting a suitable clinical scenario or problem and defining its key features, writing the questions, selecting question response formats, preparing scoring keys, reviewing item quality and item banking. The KFP format provides educators with a flexible approach to testing clinical decision-making skills with demonstrated validity and reliability when constructed according to the guidelines provided.

  19. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
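
    The approach can be sketched with SciPy's differential evolution optimizer and a generic kernel two-sample (MMD-style) discrimination score standing in for the paper's KFKT-based objective, which it only approximates:

      # Synthetic two-class data; MMD-style score stands in for the KFKT objective.
      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(3)
      X0 = rng.normal(0.0, 1.0, (60, 5))    # class 0
      X1 = rng.normal(0.8, 1.2, (60, 5))    # class 1

      def rbf_gram(A, B, gamma):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def neg_discrimination(params):
          gamma = params[0]
          # kernel two-sample (MMD^2-like) score: high within-class similarity,
          # low between-class similarity
          score = (rbf_gram(X0, X0, gamma).mean() + rbf_gram(X1, X1, gamma).mean()
                   - 2.0 * rbf_gram(X0, X1, gamma).mean())
          return -score

      result = differential_evolution(neg_discrimination, bounds=[(1e-3, 10.0)],
                                      seed=0, tol=1e-7)
      print(f"selected gamma = {result.x[0]:.4f}, score = {-result.fun:.4f}")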

  20. A parameter for the assessment of the segmentation of TEM tomography reconstructed volumes based on mutual information.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-12-01

    A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms. The selection criterion is the accuracy of the segmentation. To make this selection, a parameter for comparing the accuracies of the different segmentations has been defined: the mutual information between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, it is shown that this new mutual information parameter and the Jaccard coefficient between the segmented volume and the ideal one are correlated. In addition, the results of the new parameter are compared to those obtained from another validated method for selecting the optimum segmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
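
    The core computation of the proposed criterion, the mutual information between two images estimated from their joint histogram, can be sketched as follows; the synthetic arrays stand in for a TEM tilt image and the reprojection of a segmented volume.

      # Joint-histogram estimate of mutual information; arrays are synthetic.
      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0                      # skip empty cells to avoid log(0)
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(1)
      tem = rng.normal(size=(256, 256))                   # stand-in tilt image
      good_seg = tem + 0.3 * rng.normal(size=tem.shape)   # faithful reprojection
      bad_seg = rng.normal(size=tem.shape)                # unrelated reprojection

      print(f"MI, faithful segmentation : {mutual_information(tem, good_seg):.3f}")
      print(f"MI, unrelated segmentation: {mutual_information(tem, bad_seg):.3f}")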

  1. Centrifugal slurry pump wear and hydraulic studies. Phase II report. Experimental studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mistry, D.; Cooper, P.; Biswas, C.

    1983-01-01

    This report describes the work performed by Ingersoll-Rand Research, Inc., under Phase II, Experimental Studies, for the contract entitled Centrifugal Slurry Pump Wear and Hydraulic Studies. This work was carried out for the US Department of Energy under Contract No. DE-AC-82PC50035. The basic development approach pursued in this phase is presented, followed by a discussion of wear relationships. The analysis, which resulted in the development of a mathematical wear model relating pump life to some of the key design and operating parameters, is presented. The results, observations, and conclusions of the experimental investigation on small-scale pumps that led to the selected design features for the prototype pump are discussed. The material investigation was performed at IRRI, ORNL, and Battelle. The rationale for selecting the materials for testing, the test methods and apparatus used, and the results obtained are presented, followed by a discussion of materials for a prototype pump. In addition, the prototype pump test facility description, as well as the related design and equipment details, are presented. 20 references, 53 figures, 13 tables.

  2. Booster Main Engine Selection Criteria for the Liquid Fly-Back Booster

    NASA Technical Reports Server (NTRS)

    Ryan, Richard M.; Rothschild, William J.; Christensen, David L.

    1998-01-01

    The Liquid Fly-Back Booster (LFBB) Program seeks to enhance the Space Shuttle system safety, performance, and economy of operations through the use of an advanced, liquid-propellant Booster Main Engine (BME). There are several viable BME candidates that could be suitable for this application. The objective of this study was to identify the key criteria to be applied in selecting among these BME candidates. This study involved an assessment of influences on overall LFBB utility due to variations in the candidate rocket engines' characteristics. This includes BME impacts on vehicle system weight, performance, design approaches, abort modes, margins of safety, engine-out operations, and maintenance and support concepts. Systems engineering analyses and trade studies were performed to identify the LFBB system-level sensitivities to a wide variety of BME-related parameters. This presentation summarizes these trade studies and the resulting findings of the LFBB design teams regarding the BME characteristics that most significantly affect the LFBB system. The resulting BME choice should offer the best combination of reliability, performance, reusability, robustness, cost, and risk for the LFBB program.

  4. Rule of five in 2015 and beyond: Target and ligand structural limitations, ligand chemistry structure and drug discovery project decisions.

    PubMed

    Lipinski, Christopher A

    2016-06-01

    The rule of five (Ro5), based on the physicochemical profiles of phase II drugs, is consistent with structural limitations in protein targets and in the drug target ligands. Three of the four parameters in the Ro5 are fundamental to the structure of both the target and the drug binding site. The chemical structure of the drug ligand depends on the ligand chemistry and design philosophy. Two extremes of chemical structure and design philosophy exist: ligands constructed in the medicinal chemistry synthesis laboratory without input from natural selection, and natural product (NP) metabolites biosynthesized under evolutionary selection. Exceptions to the Ro5 are found mostly among NPs. Chameleon-like chemical behavior of some NPs, arising from intramolecular hydrogen bonding as exemplified by cyclosporine A, is a strong contributor to NP Ro5 outliers. The fragment-derived drug Navitoclax is an example of the extensive expertise, resources, time, and key decisions required for the rare discovery of a non-NP Ro5 outlier. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Hot-spot heating susceptibility due to reverse bias operating conditions

    NASA Technical Reports Server (NTRS)

    Gonzalez, C. C.

    1985-01-01

    Because field experience indicated that cell and module degradation could occur as a result of hot-spot heating, a laboratory test was developed at JPL to determine the hot-spot susceptibility of modules. The initial hot-spot testing work at JPL formed a foundation for the test development. Test parameters are selected as follows. For high-shunt-resistance cells, the applied back-bias test current is set equal to the test cell current at maximum power. For low-shunt-resistance cells, the test current is set equal to the cell short-circuit current. The shadow level is selected as the one that leads to the maximum back-bias voltage at the appropriate test current level. The test voltage is determined by the bypass diode frequency. The test conditions are meant to simulate the thermal boundary conditions of a 100 mW/sq cm, 40 C ambient environment. The test lasts 100 hours. A key assumption made during the development of the test is that no current imbalance results from connecting multiple parallel cell strings. Therefore, the test as originally developed is applicable to the single-string case only.

  6. Effect of Precursor Selection on the Photocatalytic Performance of Indium Oxide Nanomaterials for Gas-Phase CO2 Reduction

    DOE PAGES

    Hoch, Laura B.; He, Le; Qiao, Qiao; ...

    2016-06-01

    Nonstoichiometric indium oxide nanoparticles, In2O3-x(OH)y, have been shown to function as active photocatalysts for gas-phase CO2 reduction under simulated solar irradiation. We demonstrate that the choice of starting material has a strong effect on the photocatalytic activity of indium oxide nanoparticles. We also examine three indium oxide materials prepared via the thermal decomposition of either indium(III) hydroxide or indium(III) nitrate and correlate their stability and photocatalytic activity to the number and type of defect present in the material. Furthermore, we use 13CO2 isotope-tracing experiments to clearly identify the origins of the observed carbon-containing products. Significantly, we find that the oxidizing nature of the precursor anion has a substantial impact on the defect formation within the sample. Our study demonstrates the importance of surface defects in designing an active heterogeneous photocatalyst and provides valuable insight into key parameters for the precursor design, selection, and performance optimization of materials for gas-phase CO2 reduction.

  7. The conversion of biomass to light olefins on Fe-modified ZSM-5 catalyst: Effect of pyrolysis parameters.

    PubMed

    Zhang, Shihong; Yang, Mingfa; Shao, Jingai; Yang, Haiping; Zeng, Kuo; Chen, Yingquan; Luo, Jun; Agblevor, Foster A; Chen, Hanping

    2018-07-01

    Light olefins are key building blocks for the petrochemical industry. In this study, the effects of in-situ versus ex-situ processing, temperature, Fe loading, catalyst-to-feed ratio, and gas flow rate on the olefin carbon yield and selectivity were explored. The results showed that the Fe-modified ZSM-5 catalyst increased the olefin yield significantly, and the ex-situ process was much better than the in-situ one. With increasing temperature, Fe-loading amount, catalyst-to-feed ratio, and gas flow rate, the carbon yields of light olefins first increased and then decreased. The maximum carbon yield of light olefins (6.98% C-mol) was obtained at a pyrolysis temperature of 600°C, a catalyst-to-feed ratio of 2, a gas flow rate of 100 ml/min, and 3 wt% Fe/ZSM-5 for cellulose. The selectivity of C2H4 was more than 60% for all feedstocks, and the total light olefins followed the decreasing order cellulose, corn stalk, hemicelluloses, lignin. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Effect of migration in a diffusion model for template coexistence in protocells.

    PubMed

    Fontanari, José F; Serva, Maurizio

    2014-03-01

    The compartmentalization of distinct templates in protocells and the exchange of templates between them (migration) are key elements of a modern scenario for prebiotic evolution. Here we use the diffusion approximation of population genetics to study analytically the steady-state properties of such a prebiotic scenario. The coexistence of distinct template types inside a protocell is achieved by a selective pressure at the protocell level (group selection) favoring protocells with a mixed template composition. In the degenerate case, where the templates have the same replication rate, we find that a vanishingly small migration rate suffices to eliminate the segregation effect of random drift and so to promote coexistence. In the nondegenerate case, a small migration rate greatly boosts coexistence as compared with the situation where there is no migration. However, increase of the migration rate beyond a critical value leads to the complete dominance of the more efficient template type (homogeneous regime). In this case, we find a continuous phase transition separating the homogeneous and the coexistence regimes, with the order parameter vanishing linearly with the distance to the transition point.

  9. A Comparison of the One-and Three-Parameter Logistic Models on Measures of Test Efficiency.

    ERIC Educational Resources Information Center

    Benson, Jeri

    Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…

  10. Growth curves of carcass traits obtained by ultrasonography in three lines of Nellore cattle selected for body weight.

    PubMed

    Coutinho, C C; Mercadante, M E Z; Jorge, A M; Paz, C C P; El Faro, L; Monteiro, F M

    2015-10-30

    The effect of selection for postweaning weight on growth curve parameters was evaluated for both growth and carcass traits. Records of 2404 Nellore animals from three selection lines were analyzed: two lines selected for high postweaning weight, selection (NeS) and traditional (NeT), and a control line (NeC) in which animals were selected for postweaning weight close to the average. Body weight (BW), hip height (HH), rib eye area (REA), back fat thickness (BFT), and rump fat thickness (RFT) were measured, with records collected from animals 8 to 20 (males) and 11 to 26 (females) months of age. The parameters A (asymptotic value) and k (growth rate) were estimated using the nonlinear model procedure of the Statistical Analysis System program, with the fixed effect of line (NeS, NeT, and NeC) included in the model to evaluate differences in the estimated parameters between lines. Selected animals (NeS and NeT) showed higher growth rates than control-line animals (NeC) for all traits. The line effect on curve parameters was significant (P < 0.001) for BW, HH, and REA in males, and for BFT and RFT in females. Selection for postweaning weight was effective in altering growth curves, resulting in animals with higher growth potential.

  11. Security of Color Image Data Designed by Public-Key Cryptosystem Associated with 2D-DWT

    NASA Astrophysics Data System (ADS)

    Mishra, D. C.; Sharma, R. K.; Kumar, Manish; Kumar, Kuldeep

    2014-08-01

    The security of image data is a major contemporary issue. We therefore propose a novel technique for securing color image data with a public-key (asymmetric) cryptosystem. In this technique, color image data are secured using the RSA (Rivest-Shamir-Adleman) cryptosystem combined with the two-dimensional discrete wavelet transform (2D-DWT). Earlier schemes for the security of color images were designed on the basis of keys alone, whereas this approach secures color images through both the keys and the correct arrangement of the RSA parameters. If an attacker knows the exact keys but has no information about the exact arrangement of the RSA parameters, the original information cannot be recovered from the encrypted data. Computer simulations based on a standard example critically examine the behavior of the proposed technique. A security analysis and a detailed comparison between earlier schemes for the security of color images and the proposed technique are also provided to demonstrate the robustness of the cryptosystem.
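
    A toy sketch of the scheme's two ingredients follows: a one-level 2D Haar DWT and textbook RSA applied to the quantized approximation coefficients. The tiny primes and the handling of the coefficients are for illustration only and are cryptographically meaningless; the paper's actual parameter arrangement differs.

      # Toy only: tiny textbook RSA primes, single Haar level, one channel.
      import numpy as np

      def haar_dwt2(img):
          """One-level 2D Haar transform; returns (cA, cH, cV, cD)."""
          a = (img[0::2, :] + img[1::2, :]) / 2.0    # row averages
          d = (img[0::2, :] - img[1::2, :]) / 2.0    # row details
          cA = (a[:, 0::2] + a[:, 1::2]) / 2.0
          cV = (a[:, 0::2] - a[:, 1::2]) / 2.0
          cH = (d[:, 0::2] + d[:, 1::2]) / 2.0
          cD = (d[:, 0::2] - d[:, 1::2]) / 2.0
          return cA, cH, cV, cD

      # textbook RSA toy: p=61, q=53 -> n=3233, e=17, d=2753 (not secure!)
      N, E, D = 3233, 17, 2753

      rng = np.random.default_rng(0)
      channel = rng.integers(0, 256, size=(8, 8)).astype(float)  # one color channel
      cA, cH, cV, cD = haar_dwt2(channel)

      plain = np.round(cA).astype(int) % N                # coefficients into Z_N
      encrypt = np.vectorize(lambda m: pow(int(m), E, N))
      decrypt = np.vectorize(lambda c: pow(int(c), D, N))

      cipher = encrypt(plain)
      assert np.array_equal(decrypt(cipher), plain)       # round trip succeeds
      print("encrypted approximation band:")
      print(cipher)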

  12. Emotional Bookkeeping and High Partner Selectivity Are Necessary for the Emergence of Partner-Specific Reciprocal Affiliation in an Agent-Based Model of Primate Groups

    PubMed Central

    Evers, Ellen; de Vries, Han; Spruijt, Berry M.; Sterck, Elisabeth H. M.

    2015-01-01

    Primate affiliative relationships are differentiated, individual-specific and often reciprocal. However, the required cognitive abilities are still under debate. Recently, we introduced the EMO-model, in which two emotional dimensions regulate social behaviour: anxiety-FEAR and satisfaction-LIKE. Emotional bookkeeping is modelled by providing each individual with partner-specific LIKE attitudes in which the emotional experiences of earlier affiliations with others are accumulated. Individuals also possess fixed partner-specific FEAR attitudes, reflecting the stable dominance hierarchy. In this paper, we focus on one key parameter of the model, namely the degree of partner selectivity, i.e. the extent to which individuals rely on their LIKE attitudes when choosing affiliation partners. Studying the effect of partner selectivity on the emergent affiliative relationships, we found that at high selectivity, individuals restricted their affiliative behaviours more to similar-ranking individuals and that reciprocity of affiliation was enhanced. We compared the emotional bookkeeping model with a control model, in which individuals had fixed LIKE attitudes simply based on the (fixed) rank-distance, instead of dynamic LIKE attitudes based on earlier events. Results from the control model were very similar to the emotional bookkeeping model: high selectivity resulted in preference of similar-ranking partners and enhanced reciprocity. However, only in the emotional bookkeeping model did high selectivity result in the emergence of reciprocal affiliative relationships that were highly partner-specific. Moreover, in the emotional bookkeeping model, LIKE attitude predicted affiliative behaviour better than rank-distance, especially at high selectivity. Our model suggests that emotional bookkeeping is a likely candidate mechanism to underlie partner-specific reciprocal affiliation. PMID:25785601

  13. Heavy doping effects in high efficiency silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.

    1984-01-01

    Several of the key parameters describing the heavily doped regions of silicon solar cells are examined. The experimentally determined energy gap narrowing and minority carrier diffusivity and mobility are key factors in the investigation.

  14. Robotic vision techniques for space operations

    NASA Technical Reports Server (NTRS)

    Krishen, Kumar

    1994-01-01

    Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) high dynamic range (10(exp 8)) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting appropriate wavelength, polarization, look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.

  15. Nuclear thermal propulsion engine system design analysis code development

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.; Ivanenok, Joseph F.

    1992-01-01

    A Nuclear Thermal Propulsion (NTP) Engine System Design Analysis Code has recently been developed to characterize key NTP engine system design features. Such a versatile, standalone NTP system performance and engine design code is required to support ongoing and future engine system and vehicle design efforts associated with proposed Space Exploration Initiative (SEI) missions of interest. Key areas of interest in the engine system modeling effort were the reactor, shielding, and the inclusion of a multi-redundant engine propellant pump feed system design option. A solid-core nuclear thermal reactor and internal shielding code module was developed to estimate the reactor's thermal-hydraulic and physical parameters from a prescribed thermal output; this module was integrated into a state-of-the-art engine system design model. The reactor code module can model graphite, composite, or carbide fuels. Key output from the model consists of reactor parameters such as thermal power, pressure drop, thermal profile, and heat generation in cooled structures (reflector, shield, and core supports), as well as engine system parameters such as weight, dimensions, pressures, temperatures, mass flows, and performance. The model's overall analysis methodology and its key assumptions and capabilities are summarized in this paper.

  16. Enzyme activities by indicator of quality in organic soil

    NASA Astrophysics Data System (ADS)

    Raigon Jiménez, Mo; Fita, Ana Delores; Rodriguez Burruezo, Adrián

    2016-04-01

    The analytical determination of biochemical parameters, such as soil enzyme activities and those related to microbial biomass, is of growing importance as a biological indicator in soil science studies. Metabolic activity in soil is responsible for important processes such as the mineralization and humification of organic matter. These biological reactions affect other key processes involving elements such as carbon, nitrogen, and phosphorus, and all related transformations in the soil microbial biomass. The determination of biochemical parameters is useful in studies carried out on organic soils, where the microbial processes that are key to their conservation can be analyzed through parameters of metabolic activity. The main objective of this work is to apply analytical methodologies for enzyme activities to soil collections of different physicochemical characteristics. Selective sampling was carried out on natural soils, organically farmed soils, conventionally farmed soils, and urban soils. The soils were properly identified and conserved at 4 °C until analysis. The enzyme activities determined were catalase, urease, cellulase, dehydrogenase, and alkaline phosphatase, which together represent a group of biological transformations that occur in the soil environment. The results indicate that, for the natural and agronomic soil collections, the values of the enzymatic activities are within the ranges established for forestry and agricultural soils. Organic soils generally show higher levels of enzymatic activity, regardless of the enzyme involved. In soils near urban areas, activity levels were significantly reduced. Vegetation cover applied to organic soils results in greater enzymatic activity, so the quality of these soils, defined as the ability to maintain their biological productivity, is increased by the use of cover crops, whether sown or spontaneous species. Legume-based cover could therefore be an ideal choice for the recovery of degraded soils, because soils under such cover showed the highest levels of enzymatic activity.

  17. Quantifying Selection with Pool-Seq Time Series Data.

    PubMed

    Taus, Thomas; Futschik, Andreas; Schlötterer, Christian

    2017-11-01

    Allele frequency time series data constitute a powerful resource for unraveling mechanisms of adaptation, because the temporal dimension captures important information about evolutionary forces. In particular, Evolve and Resequence (E&R), the whole-genome sequencing of replicated experimentally evolving populations, is becoming increasingly popular. Based on computer simulations, several studies have proposed experimental parameters to optimize the identification of selection targets, but no such recommendations are available for the underlying parameters, selection strength and dominance. Here, we introduce a highly accurate method to estimate selection parameters from replicated time series data, which is fast enough to be applied on a genome scale. Using this new method, we evaluate how experimental parameters can be optimized to obtain the most reliable estimates of selection parameters. We show that the effective population size (Ne) and the number of replicates have the largest impact. Because the number of time points and sequencing coverage had only a minor effect, we suggest that time series analysis is feasible without a major increase in sequencing costs. We anticipate that time series analysis will become routine in E&R studies. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
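
    As a rough illustration of the estimation problem (not the authors' method, which is described in the paper), the sketch below simulates replicated Wright-Fisher allele-frequency trajectories and recovers the selection coefficient s from the slope of the logit-transformed frequencies, which grows by roughly s per generation under additive selection; the population size, replicate count and s used here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wf(p0, s, ne, gens):
    """Wright-Fisher trajectory of a beneficial allele under additive selection."""
    p, traj = p0, [p0]
    for _ in range(gens):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # deterministic selection step
        p = rng.binomial(ne, p_sel) / ne               # binomial drift step
        traj.append(p)
    return np.array(traj)

def estimate_s(traj, times):
    """Least-squares slope of logit(p) over time approximates s."""
    traj = np.asarray(traj)
    mask = (traj > 0) & (traj < 1)                     # drop lost/fixed time points
    logit = np.log(traj[mask] / (1 - traj[mask]))
    slope, _ = np.polyfit(np.asarray(times)[mask], logit, 1)
    return slope

gens = 60
times = np.arange(gens + 1)
reps = [simulate_wf(0.05, s=0.08, ne=300, gens=gens) for _ in range(5)]
est = np.mean([estimate_s(t, times) for t in reps])
print(f"true s = 0.08, mean estimate over 5 replicates = {est:.3f}")
```
    Averaging over replicates damps drift noise, echoing the abstract's finding that Ne and the number of replicates have the largest impact on estimate reliability.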

  18. Finite-key analysis for quantum key distribution with weak coherent pulses based on Bernoulli sampling

    NASA Astrophysics Data System (ADS)

    Kawakami, Shun; Sasaki, Toshihiko; Koashi, Masato

    2017-07-01

    An essential step in quantum key distribution is the estimation of parameters related to the leaked amount of information, which is usually done by sampling of the communication data. When the data size is finite, the final key rate depends on how the estimation process handles statistical fluctuations. Many of the present security analyses are based on the method with simple random sampling, where hypergeometric distribution or its known bounds are used for the estimation. Here we propose a concise method based on Bernoulli sampling, which is related to binomial distribution. Our method is suitable for the Bennett-Brassard 1984 (BB84) protocol with weak coherent pulses [C. H. Bennett and G. Brassard, Proceedings of the IEEE Conference on Computers, Systems and Signal Processing (IEEE, New York, 1984), Vol. 175], reducing the number of estimated parameters to achieve a higher key generation rate compared to the method with simple random sampling. We also apply the method to prove the security of the differential-quadrature-phase-shift (DQPS) protocol in the finite-key regime. The result indicates that the advantage of the DQPS protocol over the phase-encoding BB84 protocol in terms of the key rate, which was previously confirmed in the asymptotic regime, persists in the finite-key regime.
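
    The finite-size penalty the abstract refers to can be illustrated with a generic tail bound for binomially distributed sample counts (a sketch only; the paper's Bernoulli-sampling analysis is tighter and protocol-specific). With failure probability eps, Hoeffding's inequality inflates the observed error rate by sqrt(ln(1/eps)/(2n)), a penalty that shrinks as the sample grows:

```python
import math

def hoeffding_upper(k, n, eps):
    """Upper confidence bound on a binomial parameter from k errors in n trials.

    With probability at least 1 - eps, the true error rate is below
    k/n + sqrt(ln(1/eps) / (2n))  (Hoeffding's inequality).
    """
    return k / n + math.sqrt(math.log(1 / eps) / (2 * n))

# Illustrative finite-key effect: the statistical penalty on an observed
# 2% error rate shrinks as more rounds are sampled.
for n in (10**3, 10**5, 10**7):
    k = int(0.02 * n)
    print(f"n={n:>8}: observed 0.020, upper bound {hoeffding_upper(k, n, 1e-10):.4f}")
```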

  19. Key interventions and quality indicators for quality improvement of STEMI care: a RAND Delphi survey.

    PubMed

    Aeyels, Daan; Sinnaeve, Peter R; Claeys, Marc J; Gevaert, Sofie; Schoors, Danny; Sermeus, Walter; Panella, Massimiliano; Coeckelberghs, Ellen; Bruyneel, Luk; Vanhaecht, Kris

    2017-12-13

    Identification, selection and validation of key interventions and quality indicators for improving in-hospital quality of care for ST-elevation myocardial infarction (STEMI) patients. A structured literature review was followed by a RAND Delphi survey. A purposively selected multidisciplinary expert panel of cardiologists, nurse managers and quality managers selected and validated key interventions and quality indicators with priority for quality improvement in STEMI care. First, 34 experts (76% response rate) individually assessed the appropriateness of items for quality improvement on a nine-point Likert scale. Twenty-seven key interventions, 16 quality indicators at patient level and 27 quality indicators at STEMI care programme level were selected. Eighteen additional items were suggested. Experts received personal feedback benchmarking their scores against the group results (response rate, mean, median and content validity index). Subsequently, 32 experts (71% response rate) openly discussed items with an item-content validity index above 75%. By consensus, the expert panel validated a final set of 25 key interventions, 13 quality indicators at patient level and 20 quality indicators at care programme level with priority for improvement of in-hospital care for STEMI. A structured literature review and multidisciplinary expertise were combined to validate a set of key interventions and quality indicators prioritized for improvement of STEMI care. The results allow researchers and hospital staff to evaluate and support quality improvement interventions in a large cohort within the context of a health care system.

  20. Farm Deployable Microbial Bioreactor for Fuel Ethanol Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okeke, Benedict

    Research was conducted to develop a farm- and field-deployable microbial bioreactor for bioethanol production from biomass. Experiments were conducted to select the most efficient microorganisms for conversion of plant fiber to sugars for fermentation to ethanol. Mixtures of biomass and surface soil samples were collected from selected sites in Alabama black belt counties (Macon, Sumter, Choctaw, Dallas, Montgomery, Lowndes) and other areas within the state of Alabama. Experiments were conducted to determine the effects of culture parameters on key biomass-saccharifying enzymes (cellulase, beta-glucosidase, xylanase and beta-xylosidase). A wide-scale sampling of locally grown fruits in Central Alabama was undertaken to isolate potential xylose-fermenting microorganisms. Yeast isolates were evaluated for xylose fermentation. Selected microorganisms were characterized by DNA-based methods. Factors affecting enzyme production and biomass saccharification were examined and optimized in the laboratory. Methods of biomass pretreatment were compared. Co-production of amylolytic enzymes with cellulolytic-xylanolytic enzymes was evaluated, and co-saccharification of a combination of biomass and starch-rich materials was examined. Simultaneous saccharification and fermentation with and without pre-saccharification was studied. Whole-culture-broth and filtered-culture-broth simultaneous saccharification and fermentation were compared. A bioreactor system was designed and constructed to apply the laboratory results to scale-up of biomass saccharification.

  1. Evolutionary optimization of radial basis function classifiers for data mining applications.

    PubMed

    Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard

    2005-10-01

    In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.
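
    A minimal sketch of the idea, assuming synthetic data and a stripped-down (1+1) evolutionary loop in place of the paper's full EA with hybrid training, lazy evaluation and adaptive control: a feature mask is mutated, and each candidate is scored by the cross-validated accuracy of a small RBF network built from k-means centers with a linear ridge readout.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=400, n_features=20, n_informative=5, random_state=1)

def rbf_features(X, centers, width):
    """Gaussian RBF activations of X with respect to the given centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

def fitness(mask, n_centers=10):
    """CV accuracy of an RBF network (k-means centers + linear readout) on masked features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    centers = KMeans(n_centers, n_init=4, random_state=0).fit(Xs).cluster_centers_
    width = np.median(np.linalg.norm(Xs - Xs.mean(0), axis=1)) + 1e-9
    return cross_val_score(RidgeClassifier(), rbf_features(Xs, centers, width), y, cv=3).mean()

# Mutation-only (1+1) evolutionary loop over feature masks.
mask = rng.random(X.shape[1]) < 0.5
best = fitness(mask)
for _ in range(30):
    child = mask.copy()
    flip = rng.integers(X.shape[1])
    child[flip] = ~child[flip]
    f = fitness(child)
    if f >= best:
        mask, best = child, f
print("selected features:", np.flatnonzero(mask), "CV accuracy:", round(best, 3))
```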

  2. TiO2/bone composite materials for the separation of heavy metal impurities from waste water solutions

    NASA Astrophysics Data System (ADS)

    Dakroury, G.; Labib, Sh.; Abou El-Nour, F. H.

    2012-09-01

    Pure bone material obtained from cow meat, as an apatite-rich material, and TiO2-bone composite materials are prepared and studied for use in separating heavy metal ions from waste water solutions. Meat wastes are chemically and thermally treated to control their microstructure in order to prepare composite materials that fulfill the requirements of selective membranes with high performance, stability and mechanical strength. The prepared materials are analyzed using Hg-porosimetry for surface characterization, energy-dispersive X-ray spectroscopy (EDAX) for elemental analysis and Fourier transform infrared spectroscopy (FTIR) for chemical composition. Structural studies are performed using X-ray diffraction (XRD), microstructural properties are studied using scanning electron microscopy (SEM), and specific surface area is measured using the Brunauer-Emmett-Teller (BET) method. XRD studies show that multiphase structures are obtained as a result of 1 h sintering at 700-1200 °C for both pure bone and TiO2-bone composite materials. The factors affecting the transport of different heavy metal ions through the selected membranes are determined from permeation flux measurements. It is found that membrane pore size, membrane surface roughness and membrane surface charge are the key parameters that control the transport or rejection of heavy metal ions through the selected membranes.

  3. Analysis of the selected mechanical parameters of coating of filters protecting against hazardous infrared radiation.

    PubMed

    Gralewicz, Grzegorz; Owczarek, Grzegorz; Kubrak, Janusz

    2017-03-01

    This article presents a comparison of the test results of selected mechanical parameters (hardness, Young's modulus, critical force for delamination) for protective filters intended for eye protection against harmful infrared radiation. Filters with reflective metallic films were studied, as well as interference filters developed at the Central Institute for Labour Protection - National Research Institute (CIOP-PIB). The test results of the selected mechanical parameters were compared with the test results, conducted in accordance with a standardised method, of simulating filter surface destruction that occurs during use.

  4. Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy

    PubMed Central

    Lee, Jong Soo; Cox, Dennis D.

    2009-01-01

    Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared, and data-driven methods for the selection of the smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved to be unsatisfactory, so a computationally efficient procedure that approximates robust leave-one-out cross validation is presented. PMID:20729976
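
    The following sketch conveys the flavor of data-driven, outlier-resistant smoothing parameter selection, assuming a k-fold median-absolute-residual score as a crude surrogate for the paper's approximation to robust leave-one-out cross validation; the data and parameter grid are synthetic.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
y[rng.choice(200, 10, replace=False)] += 3.0          # spike-like outliers

def robust_cv_score(s, k=5):
    """Median absolute held-out residual of a smoothing spline with parameter s."""
    folds = np.arange(len(x)) % k
    resid = []
    for f in range(k):
        tr, te = folds != f, folds == f
        spl = UnivariateSpline(x[tr], y[tr], s=s * tr.sum())  # s scaled to fold size
        resid.append(np.abs(y[te] - spl(x[te])))
    return np.median(np.concatenate(resid))

grid = np.logspace(-3, 1, 20)
best_s = grid[np.argmin([robust_cv_score(s) for s in grid])]
print("selected smoothing parameter:", best_s)
```
    Using the median rather than the mean of the held-out residuals keeps the score from being dragged around by the injected outliers, which is the essential point of robust selection.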

  5. Large Area Scene Selection Interface (LASSI). Methodology of Selecting Landsat Imagery for the Global Land Survey 2005

    NASA Technical Reports Server (NTRS)

    Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry

    2009-01-01

    The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
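
    Conceptually, the weighted scene scoring reduces to a weighted sum over normalized scene parameters. The sketch below uses hypothetical weights, attribute names and scene values purely to illustrate the mechanism, not LASSI's actual weighting strategy.

```python
# Hypothetical weights: cloud cover penalizes, greenness/sensor/gap-fill reward.
WEIGHTS = {"cloud_cover": -0.5, "greenness": 0.3, "sensor_pref": 0.1, "gap_fill": 0.1}

# Hypothetical candidate scenes with attributes normalized to [0, 1].
scenes = [
    {"id": "L7_178_034_2005", "cloud_cover": 0.10, "greenness": 0.62, "sensor_pref": 0.5, "gap_fill": 0.9},
    {"id": "L5_178_034_2006", "cloud_cover": 0.35, "greenness": 0.70, "sensor_pref": 1.0, "gap_fill": 1.0},
]

def score(scene):
    """Weighted sum over normalized scene parameters; higher is better."""
    return sum(w * scene[k] for k, w in WEIGHTS.items())

best = max(scenes, key=score)
print(best["id"], round(score(best), 3))
```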

  6. Laboratory Studies on Surface Sampling of Bacillus anthracis Contamination: Summary, Gaps, and Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piepel, Gregory F.; Amidan, Brett G.; Hu, Rebecca

    2011-11-28

    This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed, and recommendations are given for future studies.
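
    For the first key parameter, a commonly used idealized formula gives the number of samples needed so that at least one sample detects contamination with specified confidence. The sketch below assumes independent samples and perfect recovery, strong idealizations that the report's gap analysis is precisely about questioning.

```python
import math

def n_samples_for_detection(p_contaminated, confidence):
    """Smallest n with P(detect at least one positive) >= confidence,
    assuming each sample independently hits contamination with probability
    p_contaminated and the assay has perfect recovery (idealized)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_contaminated))

# 1% of surface area contaminated, 95% confidence of at least one detection.
print(n_samples_for_detection(0.01, 0.95))  # -> 299 samples
```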

  7. New insights into time series analysis. II - Non-correlated observations

    NASA Astrophysics Data System (ADS)

    Ferreira Lopes, C. E.; Cross, N. J. G.

    2017-08-01

    Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather forecasting, industry, and science. These parameters are also used to identify variability patterns in photometric data in order to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was used to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of statistical distributions such as average, dispersion, and shape. Many of the dispersion and shape parameters are unbound parameters, i.e. equations that do not require the calculation of an average. Unbound parameters are computed in a single loop, hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample size corrections, and a new noise model, along with tests of different apertures and cut-offs on the data (BAS approach), are introduced. The number of mis-selections is reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database. Even-statistics allows us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of mean and median for almost all statistical distributions analyzed. The correlated variability indices proposed in the first paper of this series are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, in which we set out new techniques and methods that provide a huge improvement in the efficiency of selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project concerning the analysis of period search methods.

  8. Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2013-09-01

    This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher-order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed by one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher-order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
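
    A minimal sketch of the one-factor-at-a-time screening step, using a hypothetical two-term emissions model rather than the study's wastewater treatment model; note that OAT perturbs single factors in isolation and so can understate the interaction effects that the study identifies with Sobol's method.

```python
import numpy as np

def oat_screen(model, nominal, rel_step=0.1):
    """One-factor-at-a-time screening: perturb each parameter by +/- rel_step
    around its nominal value and record the spread in the model output."""
    base = model(nominal)
    effects = {}
    for name, val in nominal.items():
        lo = model({**nominal, name: val * (1 - rel_step)})
        hi = model({**nominal, name: val * (1 + rel_step)})
        effects[name] = abs(hi - lo)
    return base, effects

# Hypothetical emissions model: the N2O term acts through a parameter product,
# so OAT on single factors understates their joint (interaction) effect.
def ghg(p):
    return p["ch4_factor"] + 298 * p["n2o_yield"] * p["nitrification_rate"]

nominal = {"ch4_factor": 5.0, "n2o_yield": 0.01, "nitrification_rate": 80.0}
print(oat_screen(ghg, nominal))
```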

  9. General equations for optimal selection of diagnostic image acquisition parameters in clinical X-ray imaging.

    PubMed

    Zheng, Xiaoming

    2017-12-01

    The purpose of this work was to examine the effects of the relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure (milliampere-seconds, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of the image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. These general equations should be used to guide clinical X-ray imaging through optimal selection of image acquisition parameters; the radiation dose to the patient could thereby be reduced from current levels in medical X-ray imaging.

  10. Particle size distribution: A key factor in estimating powder dustiness.

    PubMed

    López Lilao, Ana; Sanfélix Forner, Vicenta; Mallol Gasch, Gustavo; Monfort Gimeno, Eliseo

    2017-12-01

    A wide variety of raw materials, involving more than 20 samples of quartzes, feldspars, nephelines, carbonates, dolomites, sands, zircons, and alumina, were selected and characterised. These raw materials were selected to encompass a wide range of particle sizes (1.6-294 µm) and true densities (2650-4680 kg/m3). The dustiness of the raw materials, i.e., their tendency to generate dust on handling, was determined using the continuous drop method. The influence of some key material parameters (particle size distribution, flowability, and specific surface area) on dustiness was assessed; dustiness was found to be significantly affected by particle size distribution. Data analysis enabled development of a model for predicting the dustiness of the studied materials, assuming that dustiness depends on the particle fraction susceptible to emission and on the bulk material's susceptibility to release these particles. On the one hand, the developed model allows the dustiness mechanisms to be better understood. In this regard, it may be noted that relative emission increased with mean particle size. However, this did not necessarily imply that dustiness did, because dustiness also depended on the fraction of particles susceptible to being emitted. On the other hand, the developed model enables dustiness to be estimated using just the particle size distribution data. The quality of the fits was quite good, and the fact that only particle size distribution data are needed facilitates industrial application, since these data are usually known by raw materials managers, making additional tests unnecessary. This model may therefore be deemed a key tool in drawing up efficient preventive and/or corrective measures to reduce dust emissions during bulk powder processing, both inside and outside industrial facilities. It is recommended, however, to use the developed model only if particle size, true density, moisture content, and shape lie within the studied ranges.

  11. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters of other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
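
    A toy version of the first stage, assuming a hypothetical three-state Markov reliability model with made-up hourly transition probabilities: the failure probability of the system is obtained by propagating the state distribution through the transition matrix.

```python
import numpy as np

# Illustrative three-state Markov reliability model (not the paper's):
# 0 = nominal, 1 = degraded (expert system gives an incorrect recommendation),
# 2 = failed (absorbing). Per-hour transition probabilities are hypothetical.
P = np.array([
    [0.990, 0.008, 0.002],   # nominal -> nominal / degraded / failed
    [0.050, 0.930, 0.020],   # degraded states can be recovered by the supervisor
    [0.000, 0.000, 1.000],   # failure is absorbing
])

state = np.array([1.0, 0.0, 0.0])        # system starts in the nominal state
for hour in (1, 10, 100):
    dist = state @ np.linalg.matrix_power(P, hour)
    print(f"t={hour:>3} h: P(failed) = {dist[2]:.4f}")
```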

  12. A framework for streamflow prediction in the world's most severely data-limited regions: Test of applicability and performance in a poorly-gauged region of China

    NASA Astrophysics Data System (ADS)

    Alipour, M. H.; Kibler, Kelly M.

    2018-02-01

    A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
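
    A sketch of the multi-criteria evaluation idea under stated assumptions: runoff efficiency (Nash-Sutcliffe efficiency) is traded off against the distance of calibrated parameters from the centroids of their 'soft' ranges. The weighting, soft ranges and data here are hypothetical, and the paper's formulation may differ in detail.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of simulated vs. observed streamflow."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def soft_penalty(theta, soft):
    """Mean squared distance of parameters from the centroids of their
    'soft' (uncertain, low-resolution) ranges, scaled by range width."""
    return np.mean([((theta[k] - (lo + hi) / 2) / (hi - lo)) ** 2
                    for k, (lo, hi) in soft.items()])

def multi_objective_score(obs, sim, theta, soft, w=0.5):
    """Higher is better: runoff efficiency traded off against parameter realism."""
    return nse(obs, sim) - w * soft_penalty(theta, soft)

# Hypothetical soft data: maximum soil moisture capacity Cmax (mm) and shape factor B.
soft = {"Cmax": (150.0, 450.0), "B": (0.3, 1.5)}
theta = {"Cmax": 310.0, "B": 0.9}
obs = np.array([2.0, 3.5, 8.0, 5.0, 3.0])
sim = np.array([2.2, 3.1, 7.4, 5.6, 2.8])
print(round(multi_objective_score(obs, sim, theta, soft), 3))
```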

  13. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions and a higher weight to priority zones. The objective for the monitoring network in the specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, it was sought to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help reduce monitoring costs by avoiding redundancy in data acquisition.
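
    Step (4) can be illustrated with a greedy variant of variance-minimizing design, assuming a single quality parameter with a synthetic exponential prior covariance: each candidate well applies a scalar Kalman/Gaussian-conditioning update, and the well that most reduces total estimate variance is kept. The actual study combined geostatistics, a Kalman filter and heuristic optimization over multiple parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical prior covariance of one quality parameter at 30 candidate wells,
# built from an exponential covariance over random 2-D well coordinates.
pts = rng.uniform(0, 10, (30, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
C = np.exp(-d / 3.0)

def greedy_network(C, n_pick, noise=0.05):
    """Greedily add the well whose measurement most reduces the total estimate
    variance (trace of the posterior covariance) via a rank-one Kalman update."""
    picked = []
    for _ in range(n_pick):
        best_j, best_C = None, None
        for j in range(C.shape[0]):
            if j in picked:
                continue
            gain = np.outer(C[:, j], C[j, :]) / (C[j, j] + noise)
            Cj = C - gain                       # posterior after observing well j
            if best_C is None or np.trace(Cj) < np.trace(best_C):
                best_j, best_C = j, Cj
        picked.append(best_j)
        C = best_C
    return picked, np.trace(C)

wells, resid_var = greedy_network(C, n_pick=5)
print("selected wells:", wells, "remaining total variance:", round(resid_var, 2))
```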

  14. Global Low Frequency Protein Motions in Long-Range Allosteric Signaling

    NASA Astrophysics Data System (ADS)

    McLeish, Tom; Rogers, Thomas; Townsend, Philip; Burnell, David; Pohl, Ehmke; Wilson, Mark; Cann, Martin; Richards, Shane; Jones, Matthew

    2015-03-01

    We present a foundational theory for how allostery can occur as a function of low frequency dynamics without a change in protein structure. Elastic inhomogeneities allow entropic ``signalling at a distance.'' Remarkably, many globular proteins display just this class of elastic structure, in particular those that support allosteric binding of substrates (long-range co-operative effects between the binding sites of small molecules). Through multi-scale modelling of global normal modes we demonstrate negative co-operativity between the two cAMP ligands without change to the mean structure. Crucially, the value of the co-operativity is itself controlled by the interactions around a set of third allosteric ``control sites.'' The theory makes key experimental predictions, validated by analysis of variant proteins by a combination of structural biology and isothermal calorimetry. A quantitative description of allostery as a free energy landscape revealed a protein ``design space'' that identified the key inter- and intramolecular regulatory parameters that frame CRP/FNR family allostery. Furthermore, by analyzing naturally occurring CAP variants from diverse species, we demonstrate an evolutionary selection pressure to conserve residues crucial for allosteric control. The methodology establishes the means to engineer allosteric mechanisms that are driven by low frequency dynamics.

  15. Industrial applications of high-average power high-peak power nanosecond pulse duration Nd:YAG lasers

    NASA Astrophysics Data System (ADS)

    Harrison, Paul M.; Ellwi, Samir

    2009-02-01

    Within the vast range of laser materials processing applications, every type of successful commercial laser has been driven by a major industrial process. For high average power, high peak power, nanosecond pulse duration Nd:YAG DPSS lasers, the enabling process is high-speed surface engineering. This includes applications such as thin film patterning and selective coating removal in markets such as the flat panel display (FPD), solar and automotive industries. Applications such as these tend to require working spots with uniform intensity distribution in specific shapes and dimensions, so a range of innovative beam delivery systems has been developed to convert the Gaussian beam produced by the laser into rectangular and/or shaped spots as required by the demands of each project. In this paper the authors discuss the key parameters of this type of laser, examine why they are important for high-speed surface engineering projects, and show how they affect the underlying laser-material interaction and the removal mechanism. Several case studies are considered in the FPD and solar markets, exploring the close link between the application, the key laser characteristics and the beam delivery system that links these together.

  16. Research on intrusion detection based on Kohonen network and support vector machine

    NASA Astrophysics Data System (ADS)

    Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi

    2018-05-01

    Support vector machines (SVMs) applied directly to network intrusion detection suffer from low detection accuracy and long detection times. Optimizing the SVM parameters can greatly improve detection accuracy, but the long optimization time prevents application to high-speed networks. A method based on Kohonen-network feature selection is therefore proposed to reduce the parameter optimization time of the support vector machine. First, the weights of the KDD99 network intrusion data are calculated with a Kohonen network and features are selected by weight. Then, after feature selection is complete, a genetic algorithm (GA) and the grid search method are used for parameter optimization to find appropriate parameters, and the data are classified by the support vector machine. Comparative experiments show that feature selection reduces the time of parameter optimization with little influence on classification accuracy. The experiments suggest that the support vector machine can be used in network intrusion detection systems and can reduce the miss rate.
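
    A runnable approximation of the pipeline with scikit-learn, substituting a mutual-information filter for the Kohonen-network weighting step and synthetic data for KDD99 (both substitutions are mine): features are ranked and pruned first, and the SVM grid search then runs over the reduced feature set, which is what shortens the optimization.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data; KDD99 preprocessing (symbolic feature encoding etc.) is omitted.
X, y = make_classification(n_samples=1000, n_features=40, n_informative=8, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif, k=10)),  # keep 10 highest-scoring features
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": [0.01, 0.1, 1.0]}, cv=3)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```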

  17. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power-law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical of T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
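
    The dominance of errors in n at long times follows directly from the power-law form: a relative error in the exponent compounds with log(t), while an error in a coefficient stays constant. The sketch below uses the linear power-law special case with made-up parameter values, not the full seven-parameter Schapery model.

```python
import numpy as np

def creep_compliance(t, d0=1.0, d1=0.05, n=0.2):
    """Power-law creep compliance D(t) = D0 + D1 * t**n (the linear special
    case of the Schapery form, used here purely for illustration)."""
    return d0 + d1 * t**n

t = np.logspace(0, 6, 4)                        # 1 s to ~11.6 days
base = creep_compliance(t)
err_n = creep_compliance(t, n=0.2 * 1.05)       # +5% error in the exponent n
err_d1 = creep_compliance(t, d1=0.05 * 1.05)    # +5% error in the coefficient D1
print("time (s):                ", t.astype(int))
print("rel. error from n error: ", np.round(err_n / base - 1, 4))
print("rel. error from D1 error:", np.round(err_d1 / base - 1, 4))
```
    At t = 1 s the two errors are comparable, but by t = 10^6 s the exponent error has grown by roughly a factor of three while the coefficient error has not, mirroring the abstract's conclusion.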

  18. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    PubMed Central

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
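
    A sketch of the permutation idea, simplified relative to the paper: under permuted (null) responses, the smallest penalty that keeps the LASSO solution empty marks where noise variables start to enter the model, so a high quantile of that null distribution is taken as the penalty. The data and quantile level here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 100, 50
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.standard_normal(n)
Xc, yc = X - X.mean(0), y - y.mean()            # center to drop the intercept

def lambda_max(y_vec):
    """Smallest penalty at which the LASSO solution is all-zero
    (for sklearn's 1/(2n) loss scaling: max |X^T y| / n)."""
    return np.max(np.abs(Xc.T @ y_vec)) / len(y_vec)

# Permutation selection (sketch): record the entry penalty under permuted
# responses and take a high quantile as the selected penalty.
null_lams = [lambda_max(rng.permutation(yc)) for _ in range(200)]
lam = np.quantile(null_lams, 0.95)

fit = Lasso(alpha=lam).fit(Xc, yc)
print("lambda:", round(lam, 3), "selected variables:", np.flatnonzero(fit.coef_))
```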

  19. Developing micro-level urban ecosystem indicators for sustainability assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dizdaroglu, Didem, E-mail: dizdaroglu@bilkent.edu.tr

    Sustainability assessment is increasingly being viewed as an important tool to aid in the shift towards sustainable urban ecosystems. An urban ecosystem is a dynamic system and requires regular monitoring and assessment through a set of relevant indicators. An indicator is a parameter which provides information about the state of the environment by producing a quantitative value. Indicator-based sustainability assessment needs to be considered on all spatial scales to provide efficient information on urban ecosystem sustainability. Detailed data are necessary to assess environmental change in urban ecosystems at the local scale and to transfer this information easily to the national and global scales. This paper proposes a set of key micro-level urban ecosystem indicators for monitoring the sustainability of residential developments. The proposed indicator framework measures the sustainability performance of an urban ecosystem in 3 main categories (natural environment, built environment, and socio-economic environment), which are made up of 9 sub-categories consisting of 23 indicators. The paper also describes the theoretical foundations for the selection of each indicator with reference to the literature. Highlights: • As the impacts of environmental problems have multi-scale characteristics, sustainability assessment needs to be considered on all scales. • Detailed data are necessary to assess local environmental change in urban ecosystems and to provide insights into the national and global scales. • This paper proposes a set of key micro-level urban ecosystem indicators for monitoring the sustainability of residential developments. • It also describes the theoretical foundations for the selection of each indicator with reference to the literature.

  20. Plasma under control: Advanced solutions and perspectives for plasma flux management in material treatment and nanosynthesis

    NASA Astrophysics Data System (ADS)

    Baranov, O.; Bazaka, K.; Kersten, H.; Keidar, M.; Cvelbar, U.; Xu, S.; Levchenko, I.

    2017-12-01

    Given the vast number of strategies used to control the behavior of laboratory and industrially relevant plasmas for material processing and other state-of-the-art applications, a potential user may find themselves overwhelmed by the diversity of physical configurations used to generate and control plasmas. A clearly defined, physics-based classification of the presently available spectrum of plasma technologies is therefore needed, and a critical summary of the individual advantages, unique benefits, and challenges against key application criteria is a vital prerequisite for further progress. To facilitate selection of the technological solutions that provide the best match to the needs of the end user, this work systematically explores plasma setups, focusing on the most significant family of the processes—control of plasma fluxes—which determine the distribution and delivery of mass and energy to the surfaces of materials being processed and synthesized. A novel classification based on the incorporation of substrates into plasma-generating circuitry is also proposed and illustrated by its application to a wide variety of plasma reactors, where the effect of substrate incorporation on the plasma fluxes is emphasized. With key process and material parameters, such as growth and modification rates, phase transitions, crystallinity, density of lattice defects, and others, being linked to plasma and energy fluxes, this review offers direction to physicists, engineers, and materials scientists engaged in the design and development of instrumentation for plasma processing and diagnostics, where the selection of the correct tools is critical for the advancement of emerging and high-performance applications.

  1. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE PAGES

    Garitte, B.; Shao, H.; Wang, X. R.; ...

    2017-01-09

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, obtained from modelling of another in-situ experiment (HE-D), and on modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.

  2. Inverse Modeling of Water-Rock-CO2 Batch Experiments: Potential Impacts on Groundwater Resources at Carbon Sequestration Sites.

    PubMed

    Yang, Changbing; Dai, Zhenxue; Romanak, Katherine D; Hovorka, Susan D; Treviño, Ramón H

    2014-01-01

    This study developed a multicomponent geochemical model to interpret responses of water chemistry to introduction of CO2 into six water-rock batches with sedimentary samples collected from representative potable aquifers in the Gulf Coast area. The model simulated CO2 dissolution in groundwater, aqueous complexation, mineral reactions (dissolution/precipitation), and surface complexation on clay mineral surfaces. An inverse method was used to estimate mineral surface area, the key parameter for describing kinetic mineral reactions. Modeling results suggested that reductions in groundwater pH were more significant in the carbonate-poor aquifers than in the carbonate-rich aquifers, resulting in potential groundwater acidification. Modeled concentrations of major ions showed overall increasing trends, depending on mineralogy of the sediments, especially carbonate content. The geochemical model confirmed that mobilization of trace metals was caused likely by mineral dissolution and surface complexation on clay mineral surfaces. Although dissolved inorganic carbon and pH may be used as indicative parameters in potable aquifers, selection of geochemical parameters for CO2 leakage detection is site-specific and a stepwise procedure may be followed. A combined study of the geochemical models with the laboratory batch experiments improves our understanding of the mechanisms that dominate responses of water chemistry to CO2 leakage and also provides a frame of reference for designing monitoring strategy in potable aquifers.

  3. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  4. Evaluation of the predictive capability of coupled thermo-hydro-mechanical models for a heated bentonite/clay system (HE-E) in the Mont Terri Rock Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garitte, B.; Shao, H.; Wang, X. R.

    Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating the in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, obtained from modelling of another in-situ experiment (HE-D), and on modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.

  5. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

    The Bayesian decomposition of the posterior distribution was used to develop a likelihood function to correct bias in estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for parametrization of the growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimate selectivity parameters are discussed.

  6. Selection Devices for User of an Electronic Encyclopedia: An Empirical Comparison of Four Possibilities.

    ERIC Educational Resources Information Center

    Ostroff, Daniel; Shneiderman, Ben

    1988-01-01

    Describes a study that measured the speed, error rates, and subjective evaluation of arrow jump keys, a jump mouse, number keys, and a touch screen in an interactive encyclopedia. The results of previous studies are discussed as well as the findings of this study. Improvements in selection devices are suggested. (41 references) (Author/CLB)

  7. Underground Mining Method Selection Using WPM and PROMETHEE

    NASA Astrophysics Data System (ADS)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry, achieved using two multi-attribute decision-making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). The analytic hierarchy process is used to calculate the weights of the attributes (i.e. the parameters used in this paper). Mining method selection depends on physical, mechanical, economic and technical parameters. WPM and PROMETHEE have the ability to consider the relationships between the parameters and the mining methods, and they offer higher accuracy and faster computation than other decision-making techniques. The proposed techniques are applied to determine the most effective mining method for a bauxite mine, and the results are compared with the methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
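
    A minimal WPM sketch with hypothetical attributes, ratings and AHP-style weights (the paper's actual attribute set and weights are not reproduced here): each method's score is the product of its attribute ratings raised to the corresponding attribute weights, and methods are ranked by that score.

```python
# Hypothetical attribute weights (would come from AHP); they sum to 1.
weights = {"ore_strength": 0.30, "dip": 0.20, "depth": 0.25, "cost": 0.25}

# Hypothetical 1-9 ratings of each candidate method on each attribute.
methods = {
    "cut_and_fill":     {"ore_strength": 7, "dip": 8, "depth": 7, "cost": 5},
    "sublevel_stoping": {"ore_strength": 8, "dip": 6, "depth": 6, "cost": 6},
    "block_caving":     {"ore_strength": 4, "dip": 5, "depth": 8, "cost": 9},
}

def wpm_score(attrs):
    """Weighted product: ratings raised to their weights; higher is better."""
    score = 1.0
    for k, w in weights.items():
        score *= attrs[k] ** w
    return score

ranking = sorted(methods, key=lambda m: wpm_score(methods[m]), reverse=True)
print([(m, round(wpm_score(methods[m]), 2)) for m in ranking])
```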

  8. Nondestructive prediction of pork freshness parameters using multispectral scattering images

    NASA Astrophysics Data System (ADS)

    Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu

    2012-05-01

    Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studied the possibility of using multispectral imaging and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface by an in-house multispectral imaging system; they were acquired at selected narrow wavebands whose center wavelengths were 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from the multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
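
    A sketch of the curve-fitting step, assuming one common four-parameter Lorentzian parameterization and synthetic profile data; the paper's exact functional form and preprocessing may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, b, c, d):
    """Four-parameter Lorentzian profile: a = asymptotic value, b = peak
    height, c = scattering width, d = slope around the falling edge.
    (One common parameterization, assumed here.)"""
    return a + b / (1 + (x / c) ** d)

# Synthetic radial scattering profile standing in for one waveband's image data.
x = np.linspace(0.5, 15, 80)                       # distance from beam center, mm
rng = np.random.default_rng(5)
y = lorentzian(x, 0.1, 1.0, 4.0, 2.5) + 0.01 * rng.standard_normal(x.size)

popt, _ = curve_fit(lorentzian, x, y, p0=[0.0, 1.0, 3.0, 2.0])
print("fitted (a, b, c, d):", np.round(popt, 3))
```
    The fitted (a, b, c, d) values at each waveband then serve as the compact feature set from which a regression model predicts TVB-N, color and pH.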

  9. Thermo-optic characteristics and switching power limit of slow-light photonic crystal structures on a silicon-on-insulator platform.

    PubMed

    Chahal, Manjit; Celler, George K; Jaluria, Yogesh; Jiang, Wei

    2012-02-13

    Employing a semi-analytic approach, we study the influence of key structural and optical parameters on the thermo-optic characteristics of photonic crystal waveguide (PCW) structures on a silicon-on-insulator (SOI) platform. The power consumption and spatial temperature profile of such structures are given as explicit functions of various structural, thermal and optical parameters, offering physical insight not available in finite-element simulations. Agreement with finite-element simulations and experiments is demonstrated. Thermal enhancement of the air-bridge structure is analyzed. The practical limit of thermo-optic switching power in slow light PCWs is discussed, and the scaling with key parameters is analyzed. Optical switching with sub-milliwatt power is shown viable.

  10. Inside the Mind of a Medicinal Chemist: The Role of Human Bias in Compound Prioritization during Drug Discovery

    PubMed Central

    Kutchukian, Peter S.; Vasilyeva, Nadya Y.; Xu, Jordan; Lindvall, Mika K.; Dillon, Michael P.; Glick, Meir; Coley, John D.; Brooijmans, Natasja

    2012-01-01

    Medicinal chemists’ “intuition” is critical for success in modern drug discovery. Early in the discovery process, chemists select a subset of compounds for further research, often from many viable candidates. These decisions determine the success of a discovery campaign, and ultimately what kind of drugs are developed and marketed to the public. Surprisingly little is known about the cognitive aspects of chemists’ decision-making when they prioritize compounds. We investigate 1) how and to what extent chemists simplify the problem of identifying promising compounds, 2) whether chemists agree with each other about the criteria used for such decisions, and 3) how accurately chemists report the criteria they use for these decisions. Chemists were surveyed and asked to select chemical fragments that they would be willing to develop into a lead compound from a set of ∼4,000 available fragments. Based on each chemist’s selections, computational classifiers were built to model each chemist’s selection strategy. Results suggest that chemists greatly simplified the problem, typically using only 1–2 of many possible parameters when making their selections. Although chemists tended to use the same parameters to select compounds, differing value preferences for these parameters led to an overall lack of consensus in compound selections. Moreover, what little agreement there was among the chemists was largely in what fragments were undesirable. Furthermore, chemists were often unaware of the parameters (such as compound size) which were statistically significant in their selections, and overestimated the number of parameters they employed. A critical evaluation of the problem space faced by medicinal chemists and cognitive models of categorization were especially useful in understanding the low consensus between chemists. PMID:23185259

  11. On selecting satellite conjunction filter parameters

    NASA Astrophysics Data System (ADS)

    Alfano, Salvatore; Finkleman, David

    2014-06-01

    This paper extends concepts of signal detection theory to predict the performance of conjunction screening techniques and to guide the selection of keepout and screening thresholds. The most efficient way to identify satellites likely to collide is to employ filters that identify orbiting pairs that should not come close enough, over a prescribed time period, to be considered hazardous. Such pairings can then be eliminated from further computation to accelerate overall processing time. Approximations inherent in filtering techniques include screening using only unperturbed Newtonian two-body astrodynamics and uncertainties in orbit elements. Therefore, every filtering process is vulnerable to including objects that are not threats and excluding some that are threats: Type I and Type II errors. The approach in this paper guides selection of the filter operating point best suited to a user's tolerance for false alarms and unwarned threats. We demonstrate the approach using three archetypal filters with an initial three-day span, select filter parameters based on performance, and then test those parameters using eight historical snapshots of the space catalog. This work provides a mechanism for selecting filter parameters, but the choices depend on the circumstances.

  12. Effect of selective vagal nerve stimulation on blood pressure, heart rate and respiratory rate in rats under metoprolol medication.

    PubMed

    Gierthmuehlen, Mortimer; Plachta, Dennis T T

    2016-02-01

    Selective vagal nerve stimulation (sVNS) has been shown to reduce blood pressure without major side effects in rats. This technology might be the key to non-medical antihypertensive treatment in patients with therapy-resistant hypertension. β-blockers are the first-line therapy of hypertension and have in general a bradycardic effect. As VNS itself can also promote bradycardia, it was the aim of this study to investigate the influence of the β1-selective blocker Metoprolol on the effect of sVNS especially with respect to the heart rate. In 10 male Wistar rats, a polyimide multichannel-cuff electrode was placed around the vagal nerve bundle to selectively stimulate the aortic depressor nerve fibers. The stimulation parameters were adapted to the thresholds of individual animals and were in the following ranges: frequency 30-50 Hz, amplitude 0.3-1.8 mA and pulse width 0.3-1.3 ms. Blood pressure responses were detected with a microtip transducer in the carotid artery, and electrocardiography was recorded with s.c. chest electrodes. After IV administration of Metoprolol (2 mg kg(-1) body weight), the animals' mean arterial blood pressure (MAP) and heart rate (HR) decreased significantly. Although the selective electrical stimulation of the baroreceptive fibers reduced MAP and HR, both effects were significantly alleviated by Metoprolol. As a side effect, the rate of stimulation-induced apnea significantly increased after Metoprolol administration. sVNS can lower the MAP under Metoprolol without causing severe bradycardia.

  13. Fast adaptive optical system for the high-power laser beam correction in atmosphere

    NASA Astrophysics Data System (ADS)

    Kudryashov, Alexis; Lylova, Anna; Samarkin, Vadim; Sheldakova, Julia; Alexandrov, Alexander

    2017-09-01

    Key elements of a fast adaptive optical system (AOS) for atmospheric turbulence compensation, with a correction frequency of 1400 Hz, are described in this paper. A water-cooled bimorph deformable mirror with 46 electrodes, a stacked-actuator deformable mirror with 81 piezoactuators, and a 2000 Hz Shack-Hartmann wavefront sensor were considered for controlling the light beam. The parameters of the turbulence along the 1.2 km light propagation path were measured and analyzed. The key parameters for such an adaptive system were worked out.

  14. Key aspects of cost effective collector and solar field design

    NASA Astrophysics Data System (ADS)

    von Reeken, Finn; Nicodemo, Dario; Keck, Thomas; Weinrebe, Gerhard; Balz, Markus

    2016-05-01

    A study has been performed in which different key parameters influencing solar field cost are varied. Using the levelised cost of energy (LCOE) as the figure of merit, it is shown that parameters such as GoToStow wind speed, heliostat stiffness, and tower height should, from an economic point of view, be adapted to the respective site conditions. The benchmark site Redstone (Northern Cape Province, South Africa) has been compared with an alternative site close to Phoenix (AZ, USA) regarding site conditions and their effect on cost-effective collector and solar field design.
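
    As a reminder of how LCOE arbitrates such design trade-offs, here is a simplified figure-of-merit calculation using the standard capital recovery factor; all input numbers are hypothetical and are not the study's values:

```python
# Simplified LCOE figure of merit, as one might use to trade off heliostat
# design parameters. The capital-recovery-factor formula is standard;
# every number below is a hypothetical placeholder.

def lcoe(capex, opex_per_year, annual_mwh, discount_rate=0.08, lifetime_yr=25):
    """Levelised cost of energy, in currency units per MWh."""
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr) / \
          ((1 + discount_rate) ** lifetime_yr - 1)
    return (capex * crf + opex_per_year) / annual_mwh

# Example: a stiffer heliostat costs more but yields more energy on a
# windy site; comparing LCOE decides whether the extra stiffness pays off.
print(lcoe(capex=200e6, opex_per_year=3e6, annual_mwh=480_000))
print(lcoe(capex=212e6, opex_per_year=3e6, annual_mwh=505_000))
```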

  15. Using GA-Ridge regression to select hydro-geological parameters influencing groundwater pollution vulnerability.

    PubMed

    Ahn, Jae Joon; Kim, Young Min; Yoo, Keunje; Park, Joonhong; Oh, Kyong Joo

    2012-11-01

    For groundwater conservation and management, it is important to accurately assess groundwater pollution vulnerability. This study proposed an integrated model using ridge regression and a genetic algorithm (GA) to effectively select the major hydro-geological parameters influencing groundwater pollution vulnerability in an aquifer. The GA-Ridge regression method determined that depth to water, net recharge, topography, and the impact of the vadose zone media were the hydro-geological parameters influencing trichloroethene pollution vulnerability in a Korean aquifer. Using only these selected parameters improved the accuracy of various nonlinear statistical and artificial intelligence (AI) techniques, such as multinomial logistic regression, decision trees, artificial neural networks, and case-based reasoning. These results provide a proof of concept that GA-Ridge regression is effective at determining the hydro-geological parameters that drive an aquifer's pollution vulnerability, which in turn improves AI performance in assessing groundwater pollution vulnerability.
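
    The following is a minimal sketch of GA-based feature selection with a ridge-regression fitness, in the spirit of GA-Ridge. The binary encoding, GA operators, synthetic data, and all parameter values are illustrative assumptions; the study's actual GA design may differ:

```python
# Sketch: a tiny genetic algorithm selecting feature subsets, scored by
# cross-validated ridge regression. Data and operators are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 7))        # e.g. 7 hydro-geological parameters
y = X[:, [0, 2, 3, 6]].sum(axis=1) + rng.normal(scale=0.5, size=200)

def fitness(mask):
    """Mean CV score of ridge regression on the selected parameters."""
    if not mask.any():
        return -np.inf
    return cross_val_score(Ridge(alpha=1.0), X[:, mask], y, cv=5).mean()

# Binary masks; truncation selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(30, X.shape[1])).astype(bool)
for _ in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]              # keep the best 10
    kids = []
    while len(kids) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)  # crossover
        child ^= rng.random(X.shape[1]) < 0.05                # mutation
        kids.append(child)
    pop = np.array(kids)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected parameter mask:", best)
```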

  16. A low-cost fabrication method for sub-millimeter wave GaAs Schottky diode

    NASA Astrophysics Data System (ADS)

    Jenabi, Sarvenaz; Deslandes, Dominic; Boone, Francois; Charlebois, Serge A.

    2017-10-01

    In this paper, a submillimeter-wave Schottky diode is designed and simulated, and the effect of the Schottky layer thickness on the cut-off frequency is studied. A novel microfabrication process is proposed and implemented. The presented process avoids electron-beam (e-beam) lithography, which reduces cost, provides more flexibility in the selection of design parameters, and allows a significant reduction in the device's parasitic capacitance. A key feature of the process is that the Schottky contact, the air bridges, and the transmission lines are fabricated in a single lift-off step. The process relies on a planarization method that is suitable for trenches 1-10 μm deep and is tolerant to end-point variations. The fabricated diode is measured, and the results are compared with simulations; very good agreement between simulation and measurement is observed.
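
    The cut-off frequency referred to here is conventionally the Schottky figure of merit f_c = 1 / (2πR_sC_j0), where the series resistance R_s and zero-bias junction capacitance C_j0 both depend on the epitaxial (Schottky) layer. A quick hedged illustration, with placeholder values that are not the fabricated device's parameters:

```python
# Standard Schottky-diode figure of merit: f_c = 1 / (2*pi*Rs*Cj0).
# All numeric values below are hypothetical placeholders.
import math

def cutoff_frequency(series_resistance_ohm, junction_capacitance_f):
    return 1.0 / (2.0 * math.pi * series_resistance_ohm * junction_capacitance_f)

# A thinner Schottky (epi) layer tends to lower Rs but can raise Cj0;
# sweeping the trade-off shows its effect on f_c.
for rs, cj in [(10.0, 1.5e-15), (8.0, 2.0e-15), (12.0, 1.2e-15)]:
    fc_thz = cutoff_frequency(rs, cj) / 1e12
    print(f"Rs={rs} ohm, Cj0={cj * 1e15:.1f} fF -> f_c={fc_thz:.1f} THz")
```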

  17. Factors Influencing Renewable Energy Production & Supply - A Global Analysis

    NASA Astrophysics Data System (ADS)

    Ali, Anika; Saqlawi, Juman Al

    2016-04-01

    Renewable energy is one of the key technologies through which the energy needs of the future can be met in a sustainable and carbon-neutral manner. Increasing the share of renewable energy in each country's total energy mix is therefore a critical need. While different countries have approached this in different ways, some common aspects influence the pace and effectiveness of renewable energy incorporation. This presentation examines data from 34 selected countries, analyses the patterns, compares the different parameters, and identifies the common factors that positively influence renewable energy incorporation. The most successful countries are analysed for their renewable energy performance against GDP, policy and regulatory initiatives in the field of renewables, landmass, climatic conditions, and population, to identify the factors most influential in bringing about a positive change in renewable energy share.
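
    A minimal sketch of the kind of cross-country factor screening described, ranking candidate drivers by their correlation with renewable share; the column names and all values are hypothetical placeholders, not the presentation's data:

```python
# Illustrative only: correlate renewable-energy share with candidate
# drivers across countries. Data below are invented placeholders.
import pandas as pd

df = pd.DataFrame({
    "re_share_pct":   [45.0, 12.0, 30.0, 8.0],      # renewable share of mix
    "gdp_per_capita": [52_000, 9_000, 41_000, 6_500],
    "policy_score":   [8, 3, 7, 2],                  # strength of RE policy
    "landmass_km2":   [450_000, 1_900_000, 300_000, 780_000],
    "population_m":   [10.4, 270.0, 17.5, 84.0],
}, index=["A", "B", "C", "D"])                       # anonymised countries

# Rank candidate factors by absolute correlation with renewable share.
ranking = (df.corr()["re_share_pct"].drop("re_share_pct")
             .sort_values(key=abs, ascending=False))
print(ranking)
```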

  18. Non-whole beat correlation method for the identification of an unbalance response of a dual-rotor system with a slight rotating speed difference

    NASA Astrophysics Data System (ADS)

    Zhang, Z. X.; Wang, L. Z.; Jin, Z. J.; Zhang, Q.; Li, X. L.

    2013-08-01

    Efficient identification of the unbalance responses of the inner and outer rotors from the beat vibration is the key step in dynamically balancing a dual-rotor system with a slight rotating speed difference. This paper proposes a non-whole beat correlation method to identify the unbalance responses, whose integration time is shorter than that of the whole-beat correlation method. The principle, algorithm, and parameter selection of the proposed method are demonstrated in detail. Numerical simulation and a balancing experiment conducted on a horizontal decanter centrifuge show that the proposed approach is feasible and practical. The method is important for developing low-cost field balancing equipment based on a portable Single Chip Microcomputer (SCMC).
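
    To illustrate the underlying correlation idea, the sketch below recovers the amplitude and phase of each rotor's unbalance response from a synthetic beat vibration by correlating with quadrature references. The frequencies and window are hypothetical; note the window is deliberately not a whole beat period, which is exactly the leakage the paper's method is designed to correct, and the paper's algorithm differs in detail:

```python
# Sketch: amplitude/phase extraction at each rotor speed via correlation
# with quadrature references. All signal parameters are illustrative.
import numpy as np

fs = 2000.0                      # sample rate, Hz (hypothetical)
f1, f2 = 25.0, 25.5              # inner/outer rotor speeds: 0.5 Hz beat
t = np.arange(0, 1.2, 1 / fs)    # 1.2 s window: NOT a whole beat period

# Synthetic beat vibration: two unbalance responses plus noise.
signal = (3.0 * np.cos(2 * np.pi * f1 * t - 0.4)
          + 1.5 * np.cos(2 * np.pi * f2 * t + 1.1)
          + 0.2 * np.random.default_rng(3).normal(size=t.size))

def unbalance_response(x, f):
    """Correlate with cos/sin references at rotor frequency f."""
    c = 2.0 * np.mean(x * np.cos(2 * np.pi * f * t))
    s = 2.0 * np.mean(x * np.sin(2 * np.pi * f * t))
    return np.hypot(c, s), np.arctan2(-s, c)     # amplitude, phase

for f in (f1, f2):
    amp, ph = unbalance_response(signal, f)
    print(f"{f} Hz: amplitude {amp:.2f}, phase {ph:.2f} rad")
```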

  19. Ensemble method: Community detection based on game theory

    NASA Astrophysics Data System (ADS)

    Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.

    2014-08-01

    Timely and cost-effective analytics over social networks have emerged as a key ingredient for success in many business and government endeavors. Community detection is an active research area relevant to the analysis of online social networks. Selecting a particular community detection algorithm is a crucial problem when the aim is to unveil the community structure of a network: the choice of methodology can affect the outcome of the experiments, because different algorithms have different advantages and depend on tuning specific parameters. In this paper, we propose a community division model based on game theory, which effectively combines the advantages of previous algorithms to obtain a better community classification. Experiments on standard datasets verify that the proposed game-theory-based community detection model is valid and performs better.
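
    As a hedged illustration of ensembling community detection algorithms, the sketch below combines three standard methods through a consensus co-membership matrix; it shows the general "combine the advantages" idea only and does not reproduce the paper's game-theoretic combination rule:

```python
# Sketch: consensus ensemble over community detection algorithms.
# The combination rule here (co-membership voting) is illustrative.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
n = G.number_of_nodes()

runs = [
    nx.community.greedy_modularity_communities(G),
    nx.community.label_propagation_communities(G),
    nx.community.louvain_communities(G, seed=4),
]

# Consensus matrix: fraction of runs placing each node pair together.
consensus = np.zeros((n, n))
for communities in runs:
    for com in communities:
        for i in com:
            for j in com:
                consensus[i, j] += 1.0 / len(runs)

# Re-cluster a graph that keeps only strongly agreed-upon pairs.
H = nx.Graph([(i, j) for i in range(n) for j in range(i + 1, n)
              if consensus[i, j] > 0.5])
H.add_nodes_from(range(n))
final = nx.community.louvain_communities(H, seed=4)
print([sorted(c) for c in final])
```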

  20. A FEROS Survey of Hot Subdwarf Stars

    NASA Astrophysics Data System (ADS)

    Vennes, Stéphane; Németh, Péter; Kawka, Adela

    2018-02-01

    We have completed a survey of twenty-two ultraviolet-selected hot subdwarfs using the Fiber-fed Extended Range Optical Spectrograph (FEROS) and the 2.2-m telescope at La Silla. The sample, extracted from our GALEX catalogue of hot subdwarf stars, includes apparently single objects as well as hot subdwarfs paired with a bright, unresolved companion. We identified three new short-period systems (P = 3.5 hours to 5 days) and determined the orbital parameters of a long-period (P = 62.66 d) sdO plus G III system; this particular system should evolve into a close double-degenerate system following a second common-envelope phase. We also conducted a chemical abundance study of the subdwarfs: some objects show nitrogen and argon abundance excesses with respect to oxygen. We present the key results of this programme.
