Sample records for specific parameter settings

  1. The Dynamics of Phonological Planning

    ERIC Educational Resources Information Center

    Roon, Kevin D.

    2013-01-01

    This dissertation proposes a dynamical computational model of the timecourse of phonological parameter setting. In the model, phonological representations embrace phonetic detail, with phonetic parameters represented as activation fields that evolve over time and determine the specific parameter settings of a planned utterance. Existing models of…

  2. VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.

    2017-05-01

    The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).
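
    For orientation, the likelihood described here is commonly written, in occurrence-rate studies of this kind, as an inhomogeneous Poisson process over the K cataloged exoplanet measurements; the form below is a sketch of that standard construction, not a quotation from the catalog paper:

    ```latex
    \ln \mathcal{L}(\theta)
      = \sum_{k=1}^{K} \ln \Gamma_\theta(w_k)
      - \int \Gamma_\theta(w)\,\mathrm{d}w
    ```

    where \Gamma_\theta(w) is the occurrence rate density evaluated at the measured parameters w_k (e.g., orbital period and planet radius) and \theta sets its shape and normalization.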

  3. Electronegativity Equalization Method: Parameterization and Validation for Large Sets of Organic, Organohalogene and Organometal Molecule

    PubMed Central

    Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav

    2007-01-01

    The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not yet been parameterized, specifically Br, I, Fe and Zn. We have also performed crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
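
    To make the charge-calculation step concrete, the sketch below solves the standard EEM linear system (equalized effective electronegativities plus a total-charge constraint) with NumPy. The per-atom parameters A and B, the screening constant kappa, and the water geometry are illustrative placeholders; real values would come from a parameterization such as the one described above.

    ```python
    import numpy as np

    def eem_charges(positions, A, B, kappa=0.529, total_charge=0.0):
        """Solve the (N+1)x(N+1) EEM system: N electronegativity-equalization
        rows plus one charge-conservation row; returns the N atomic charges."""
        n = len(positions)
        pos = np.asarray(positions, dtype=float)
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        np.fill_diagonal(d, 1.0)              # avoid division by zero below
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = kappa / d                 # Coulomb interaction terms
        M[np.arange(n), np.arange(n)] = B     # hardness terms on the diagonal
        M[:n, n] = -1.0                       # unknown equalized electronegativity
        M[n, :n] = 1.0                        # sum of charges = total_charge
        rhs = np.concatenate([-np.asarray(A, dtype=float), [total_charge]])
        sol = np.linalg.solve(M, rhs)
        return sol[:n]                        # sol[n] is the molecular electronegativity

    # water with hypothetical A, B values (O first, then the two H atoms)
    pos = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]
    print(eem_charges(pos, A=[8.5, 4.5, 4.5], B=[11.0, 13.8, 13.8]))
    ```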

  4. Optimizing detection and analysis of slow waves in sleep EEG.

    PubMed

    Mensen, Armand; Riedner, Brady; Tononi, Giulio

    2016-12-01

    Analysis of individual slow waves in EEG recordings during sleep provides greater sensitivity and specificity than spectral power measures. However, parameters for detection and analysis have not been widely explored and validated. We present a new, open-source, Matlab-based toolbox for the automatic detection and analysis of slow waves, with adjustable parameter settings as well as manual correction and exploration of the results using a multi-faceted visualization tool. We explore a large search space of parameter settings for slow wave detection and measure their effects on a selection of outcome parameters. Every choice of parameter setting had some effect on at least one outcome parameter. In general, the largest effect sizes were found when choosing the EEG reference, type of canonical waveform, and amplitude thresholding. Previously published methods accurately detect large, global waves but are conservative and miss the detection of smaller amplitude, local slow waves. The toolbox has additional benefits in terms of speed, user interface, and visualization options to compare and contrast slow waves. The exploration of parameter settings in the toolbox highlights the importance of careful selection of detection methods. The sensitivity and specificity of the automated detection can be improved by manually adding or deleting entire waves and/or specific channels using the toolbox visualization functions. The toolbox standardizes the detection procedure, sets the stage for reliable results and comparisons, and is easy to use without previous programming experience. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Is site-specific APEX calibration necessary for field scale BMP assessment?

    USDA-ARS's Scientific Manuscript database

    The possibility of extending parameter sets obtained at one site to sites with similar characteristics is appealing. This study was undertaken to test model performance and compare the effectiveness of best management practices (BMPs) using three parameter sets obtained from three watersheds when a...

  6. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set included target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), to map homogeneous specific land cover.
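
    The parameter-selection loop can be sketched with scikit-learn's OneClassSVM, which with an RBF kernel is closely related to SVDD; treating nu as a stand-in for the tradeoff coefficient C and gamma as a stand-in for the kernel width s is an assumption of this sketch, not the paper's implementation.

    ```python
    from sklearn.svm import OneClassSVM

    def select_svdd_params(train_target, val_target, val_outlier, nus, gammas):
        """Grid search over (nu, gamma), scored on a validation set of target
        and outlier pixels; returns (accuracy, nu, gamma, model) for the best."""
        best = None
        for nu in nus:                      # nu ~ tradeoff coefficient C (assumed)
            for gamma in gammas:            # gamma ~ 1 / kernel width s (assumed)
                model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(train_target)
                # predict() returns +1 inside the data description, -1 outside
                hits = (model.predict(val_target) == 1).sum() \
                     + (model.predict(val_outlier) == -1).sum()
                acc = hits / (len(val_target) + len(val_outlier))
                if best is None or acc > best[0]:
                    best = (acc, nu, gamma, model)
        return best
    ```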

  7. Environment parameters and basic functions for floating-point computation

    NASA Technical Reports Server (NTRS)

    Brown, W. S.; Feldman, S. I.

    1978-01-01

    A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
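
    Python's math module exposes analogous basic functions for analyzing, synthesizing, and scaling floating-point numbers, which makes the central property easy to illustrate (frexp/ldexp are base-2 functions, matching the radix on IEEE-754 hardware):

    ```python
    import math
    import sys

    radix = sys.float_info.radix       # floating-point base; 2 on IEEE-754 hardware

    # analyze: decompose x into a fraction and an exponent, x == m * 2**e
    m, e = math.frexp(3.75)            # -> (0.9375, 2), since 3.75 == 0.9375 * 2**2

    # synthesize/scale: multiplying by a power of the radix is exact (no rounding),
    # subject only to overflow and underflow -- the property noted in the abstract
    y = math.ldexp(m, e + 10)          # 3.75 * 2**10, computed without precision loss
    assert y == 3840.0
    ```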

  8. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  9. Optimization of multilayer neural network parameters for speaker recognition

    NASA Astrophysics Data System (ADS)

    Tovarek, Jaromir; Partila, Pavol; Rozhon, Jan; Voznak, Miroslav; Skapa, Jan; Uhrin, Dominik; Chmelikova, Zdenka

    2016-05-01

    This article discusses the impact of multilayer neural network parameters on speaker identification. The main task of speaker identification is to find a specific person in a known set of speakers, i.e., to determine whether the voice of an unknown speaker (the wanted person) belongs to a group of reference speakers from the voice database. One of the requirements was to develop a text-independent system, which means classifying the wanted person regardless of content and language. A multilayer neural network was used for speaker identification in this research. An artificial neural network (ANN) requires setting parameters such as the activation function of the neurons, the steepness of the activation functions, the learning rate, the maximum number of iterations, and the number of neurons in the hidden and output layers. ANN accuracy and validation time are directly influenced by the parameter settings, and different tasks require different settings. Identification accuracy and ANN validation time were evaluated with the same input data but different parameter settings. The goal was to find parameters for the neural network with the highest precision and shortest validation time. The input data of the neural network are Mel-frequency cepstral coefficients (MFCCs), which describe the properties of the vocal tract. Audio samples were recorded for all speakers in a laboratory environment. The data were split into training, testing, and validation sets (70%, 15% and 15%). The result of the research described in this article is a set of optimized parameter settings for the multilayer neural network for four speakers.

  10. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, B.; Wood, R.T.

    1997-04-22

    A method is described for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model, by using the mathematical model to generate measurable phenomena of the system or process from model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust the internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  11. Automated method for the systematic interpretation of resonance peaks in spectrum data

    DOEpatents

    Damiano, Brian; Wood, Richard T.

    1997-01-01

    A method for spectral signature interpretation. The method includes the creation of a mathematical model of a system or process. A neural network training set is then developed based upon the mathematical model. The neural network training set is developed by using the mathematical model to generate measurable phenomena of the system or process from model input parameters that correspond to the physical condition of the system or process. The neural network training set is then used to adjust the internal parameters of a neural network. The physical condition of an actual system or process represented by the mathematical model is then monitored by extracting spectral features from measured spectra of the actual process or system. The spectral features are then input into said neural network to determine the physical condition of the system or process represented by the mathematical model. More specifically, the neural network correlates the spectral features (i.e., measurable phenomena) of the actual process or system with the corresponding model input parameters. The model input parameters relate to specific components of the system or process and, consequently, correspond to the physical condition of the process or system.

  12. Changes in optical properties of electroporated cells as revealed by digital holographic microscopy

    PubMed Central

    Calin, Violeta L.; Mihailescu, Mona; Mihale, Nicolae; Baluta, Alexandra V.; Kovacs, Eugenia; Savopol, Tudor; Moisescu, Mihaela G.

    2017-01-01

    Changes in optical and shape-related characteristics of B16F10 cells after electroporation were investigated using digital holographic microscopy (DHM). Bipolar rectangular pulses specific for electrochemotherapy were used. Electroporation was performed in an “off-axis” DHM set-up without using exogenous markers. Two types of cell parameters were monitored seconds and minutes after pulse train application: parameters addressing a specifically defined area of the cell (refractive index and cell height) and global cell parameters (projected area, optical phase shift profile and dry mass). The biphasic behavior of cellular parameters was explained by water and mannitol dynamics through the electropermeabilized cell membrane. PMID:28736667

  13. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and therefore applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.

  14. Invariant polarimetric contrast parameters of light with Gaussian fluctuations in three dimensions.

    PubMed

    Réfrégier, Philippe; Roche, Muriel; Goudail, François

    2006-01-01

    We propose a rigorous definition of the minimal set of parameters that characterize the difference between two partially polarized states of light whose electric fields vary in three dimensions with Gaussian fluctuations. Although two such states are a priori defined by eighteen parameters, we demonstrate that the performance of processing tasks such as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by three scalar functions of these parameters. These functions define a "polarimetric contrast" that simplifies the analysis and the specification of processing techniques on polarimetric signals and images. This result can also be used to analyze the definition of the degree of polarization of a three-dimensional state of light with Gaussian fluctuations by comparing it, with respect to its polarimetric contrast parameters, with totally depolarized light. We show that these contrast parameters are a simple function of the degrees of polarization previously proposed by Barakat [Opt. Acta 30, 1171 (1983)] and Setälä et al. [Phys. Rev. Lett. 88, 123902 (2002)]. Finally, we analyze the dimension of the set of contrast parameters in different particular situations.

  15. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in the time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short-period dynamics of the F-16 is used for illustration.
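
    A minimal sketch of the max-min estimation step, assuming each validation requirement is expressed as a margin function that is positive when the requirement is met; the two requirement functions below are hypothetical placeholders:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def smallest_margin(p, requirements):
        """Each requirement maps a parameter vector to a compliance margin
        (positive means the requirement is met with room to spare)."""
        return min(req(p) for req in requirements)

    # toy requirements on a two-parameter model (hypothetical placeholders)
    requirements = [
        lambda p: 1.0 - abs(p[0] - 0.3),      # parameter near 0.3
        lambda p: 0.5 - (p[0] * p[1]) ** 2,   # bounded interaction term
    ]

    # estimate: make the smallest compliance margin as large as possible;
    # Nelder-Mead tolerates the nonsmooth min()
    res = minimize(lambda p: -smallest_margin(p, requirements),
                   x0=np.zeros(2), method="Nelder-Mead")
    print(res.x, smallest_margin(res.x, requirements))
    ```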

  16. Age and gender specific biokinetic model for strontium in humans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shagina, N. B.; Tolstykh, E. I.; Degteva, M. O.

    A biokinetic model for strontium in humans is necessary for quantification of internal doses due to strontium radioisotopes. The ICRP-recommended biokinetic model for strontium has limitations for use in a population study, because it is not gender specific and does not cover all age ranges. The extensive Techa River data set on 90Sr in humans (tens of thousands of measurements) is a unique source of data on long-term strontium retention for men and women of all ages at intake. These, as well as published data, were used for evaluation of age- and gender-specific parameters for a new compartment biokinetic model for strontium (Sr-AGe model). The Sr-AGe model has a similar structure to the ICRP model for the alkaline earth elements. The following parameters were mainly reevaluated: gastro-intestinal absorption and parameters related to the processes of bone formation and resorption defining calcium and strontium transfers in skeletal compartments. The Sr-AGe model satisfactorily describes available data sets on strontium retention for different kinds of intake (dietary and intravenous) at different ages (0–80 years old) and demonstrates good agreement with data sets for different ethnic groups. The Sr-AGe model can be used for dose assessment in epidemiological studies of general population exposed to ingested strontium radioisotopes.

  17. [Comparison of tone burst evoked auditory brainstem responses with different filter settings for referral infants after hearing screening].

    PubMed

    Diao, Wen-wen; Ni, Dao-feng; Li, Feng-rong; Shang, Ying-ying

    2011-03-01

    Auditory brainstem responses (ABR) evoked by tone bursts are an important method of hearing assessment in referral infants after hearing screening. The aim of the present study was to compare the thresholds of tone burst ABR with filter settings of 30 - 1500 Hz and 30 - 3000 Hz at each frequency, characterize the ABR thresholds obtained with the two filter settings and their effect on waveform judgement, and thereby select a more optimal frequency-specific ABR test parameter. Thresholds with filter settings of 30 - 1500 Hz and 30 - 3000 Hz in children aged 2 - 33 months were recorded by click and tone burst ABR. A total of 18 patients (8 male/10 female), 22 ears, were included. The thresholds of tone burst ABR with filter settings of 30 - 3000 Hz were higher than those with filter settings of 30 - 1500 Hz. A significant difference was detected at 0.5 kHz and 2.0 kHz (t values were 2.238 and 2.217, P < 0.05); no significant difference between the two filter settings was detected at the remaining frequencies. The waveform of ABR with filter settings of 30 - 1500 Hz was smoother than that with filter settings of 30 - 3000 Hz at the same stimulus intensity; the response curve of the latter showed jagged small interfering waves. The filter setting of 30 - 1500 Hz may be a more optimal parameter for frequency-specific ABR and improve the accuracy of frequency-specific ABR for infants' hearing assessment.

  18. New approaches in the indirect quantification of thermal rock properties in sedimentary basins: the well-log perspective

    NASA Astrophysics Data System (ADS)

    Fuchs, Sven; Balling, Niels; Förster, Andrea

    2016-04-01

    Numerical temperature models generated for geodynamic studies as well as for geothermal energy solutions depend heavily on rock thermal properties. Best practice for the determination of those parameters is the measurement of rock samples in the laboratory. Given the necessity to enlarge databases of subsurface rock parameters beyond drill core measurements, an approach for the indirect determination of these parameters is developed, for rocks as well as for geological formations. We present new and universally applicable prediction equations for thermal conductivity, thermal diffusivity and specific heat capacity in sedimentary rocks derived from data provided by standard geophysical well logs. The approach is based on a data set of synthetic sedimentary rocks (clastic rocks, carbonates and evaporites) composed of mineral assemblages with variable contents of 15 major rock-forming minerals and porosities varying between 0 and 30%. Petrophysical properties are assigned to both the rock-forming minerals and the pore-filling fluids. Using multivariate statistics, relationships were then explored between each thermal property and well-logged petrophysical parameters (density, sonic interval transit time, hydrogen index, volume fraction of shale and photoelectric absorption index) on a regression subset of the data (70%) (Fuchs et al., 2015). Prediction quality was quantified on the remaining test subset (30%). The combination of three to five well-log parameters results in prediction errors of <15% for thermal conductivity and thermal diffusivity, and of <10% for specific heat capacity. Comparison of predicted and benchmark laboratory thermal conductivity from deep boreholes of the Norwegian-Danish Basin, the North German Basin, and the Molasse Basin results in 3 to 5% larger uncertainties than for the test data set. With regard to temperature models, calculated TC borehole profiles approximate measured temperature logs with an error of <3°C along a 4 km deep profile. A benchmark comparison for thermal diffusivity and specific heat capacity is pending. Fuchs, Sven; Balling, Niels; Förster, Andrea (2015): Calculation of thermal conductivity, thermal diffusivity and specific heat capacity of sedimentary rocks using petrophysical well logs, Geophysical Journal International 203, 1977-2000, doi: 10.1093/gji/ggv403
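
    The regression step can be illustrated with an ordinary least squares fit on stand-in data; the five predictors mirror the well-log parameters listed above, while the data and coefficients are synthetic placeholders (the published equations come from multivariate statistics on the synthetic-rock data set, not from this toy fit):

    ```python
    import numpy as np

    # rows: samples; columns: density, sonic interval transit time, hydrogen
    # index, volume fraction of shale, photoelectric absorption index
    rng = np.random.default_rng(1)
    logs = rng.normal(size=(500, 5))                          # stand-in well logs
    tc = logs @ np.array([1.1, -0.6, -0.4, -0.3, 0.2]) + 2.5  # synthetic target

    # multivariate linear regression of a thermal property on the log parameters
    X = np.column_stack([np.ones(len(logs)), logs])           # add intercept
    coef, *_ = np.linalg.lstsq(X, tc, rcond=None)
    predicted_tc = X @ coef
    ```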

  19. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multi trace gas) image sequences and to provide solutions to the extended aperture problem. In this study sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.

  20. Assessing locomotor skills development in childhood using wearable inertial sensor devices: the running paradigm.

    PubMed

    Masci, Ilaria; Vannozzi, Giuseppe; Bergamini, Elena; Pesce, Caterina; Getchell, Nancy; Cappozzo, Aurelio

    2013-04-01

    Objective quantitative evaluation of motor skill development is of increasing importance to carefully drive physical exercise programs in childhood. Running is a fundamental motor skill humans adopt to accomplish locomotion, and it is linked to physical activity levels, although its assessment is traditionally carried out using qualitative evaluation tests. The present study aimed at investigating the feasibility of using inertial sensors to quantify developmental differences in the running pattern of young children. Qualitative and quantitative assessment tools were adopted to identify a skill-sensitive set of biomechanical parameters for running and to further our understanding of the factors that determine progression to skilled running performance. Running performances of 54 children between the ages of 2 and 12 years were submitted to both qualitative and quantitative analysis, the former using sequences of developmental level, the latter estimating temporal and kinematic parameters from inertial sensor measurements. Discriminant analysis with running developmental level as the dependent variable allowed the identification of a set of temporal and kinematic parameters, among those obtained with the sensor, that best classified children into the qualitative developmental levels (accuracy higher than 67%). Multivariate analysis of variance with the quantitative parameters as dependent variables made it possible to identify whether and which specific parameters or parameter subsets were differentially sensitive to specific transitions between contiguous developmental levels. The findings showed that different sets of temporal and kinematic parameters are able to tap all steps of the transitional process in running skill described through qualitative observation and can prospectively be used for applied diagnostic and sport training purposes. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. A Model of Objective Weighting for EIA.

    ERIC Educational Resources Information Center

    Ying, Long Gen; Liu, You Ci

    1995-01-01

    In the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not yet been properly solved. Presents an approach of objective weighting by using a procedure of Pij principal component-factor analysis (Pij PCFA), which suits specifically those parameters measured directly by physical…

  2. Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.

    PubMed

    García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L

    2002-01-30

    NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression with scatter correction by standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R(2)) were obtained for the validation set, and no statistically (p = 0.05) significant differences were found between instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% sucrose. Further work is necessary to validate the uncertainty at higher levels.

  3. A linear parameter-varying multiobjective control law design based on Youla parametrization for a flexible blended wing body aircraft

    NASA Astrophysics Data System (ADS)

    Demourant, F.; Ferreres, G.

    2013-12-01

    This article presents a methodology for linear parameter-varying (LPV) multiobjective flight control law design for a blended wing body (BWB) aircraft, together with results. The method is the direct design of a parametrized control law (with respect to some measured flight parameters) through a multimodel convex design to optimize a set of specifications over the full flight domain and different mass cases. The methodology is based on the Youla parameterization, which is very useful since closed-loop specifications are affine with respect to the Youla parameter. The LPV multiobjective design method is detailed and applied to the BWB flexible aircraft example.

  4. Hands-on parameter search for neural simulations by a MIDI-controller.

    PubMed

    Eichner, Hubert; Borst, Alexander

    2011-01-01

    Computational neuroscientists frequently encounter the challenge of parameter fitting--exploring a usually high dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is using automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback if and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.

  5. Hands-On Parameter Search for Neural Simulations by a MIDI-Controller

    PubMed Central

    Eichner, Hubert; Borst, Alexander

    2011-01-01

    Computational neuroscientists frequently encounter the challenge of parameter fitting – exploring a usually high dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is using automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback if and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems. PMID:22066027
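
    A minimal sketch of the knob-to-parameter mapping using the mido MIDI library; `model` stands for any fast simulation callable that redraws its output when invoked, and rescaling the 7-bit controller values to [0, 1] is an assumption of this sketch:

    ```python
    import mido  # pip install mido python-rtmidi

    def tune_with_midi(model, param_names, port_name=None):
        """Map MIDI control-change knobs onto model parameters and re-run the
        model on every change (blocking loop)."""
        params = {name: 0.5 for name in param_names}
        with mido.open_input(port_name) as port:   # None -> system default port
            for msg in port:                       # blocks, yielding messages
                if msg.type == "control_change" and msg.control < len(param_names):
                    # knob k drives parameter k; 0-127 rescaled to [0, 1]
                    params[param_names[msg.control]] = msg.value / 127.0
                    model(params)                  # immediate feedback, as in the paper
    ```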

  6. Rapid assessment of antimicrobial resistance prevalence using a Lot Quality Assurance sampling approach.

    PubMed

    van Leth, Frank; den Heijer, Casper; Beerepoot, Mariëlle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance

    2017-04-01

    Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates from surveys of community-acquired urinary tract infection in women, by assessing operating curves, sensitivity and specificity. The sensitivity and specificity of any set of LQAS parameters were above 99% and between 79 and 90%, respectively. Operating curves showed high concordance of the LQAS classification with true AMR prevalence estimates. LQAS-based AMR surveillance is a feasible approach that provides timely and locally relevant estimates, as well as the necessary information to formulate and evaluate guidelines for empirical treatment.
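
    The LQAS decision rule and its operating curve follow directly from the binomial distribution; the sample size and decision threshold below are illustrative, not the values used in the study:

    ```python
    from scipy.stats import binom

    def lqas_classify(n_resistant, threshold):
        """Basic LQAS rule: classify prevalence as 'high' when the resistant
        count in the sample exceeds the preset decision threshold."""
        return "high" if n_resistant > threshold else "low"

    def prob_classified_high(prevalence, n_sampled, threshold):
        """Operating curve: probability a lot is classified 'high' as a
        function of the true underlying resistance prevalence."""
        return 1.0 - binom.cdf(threshold, n_sampled, prevalence)

    # e.g. sample 20 isolates, call resistance 'high' if more than 5 are resistant
    print(prob_classified_high(0.4, 20, 5))   # ~0.87 at a true prevalence of 40%
    ```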

  7. FEAST: sensitive local alignment with multiple rates of evolution.

    PubMed

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  8. Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities

    NASA Technical Reports Server (NTRS)

    Richter, Hanz

    2004-01-01

    A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate if a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion and involve trial-and-error procedures that may involve activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows to set system parameters off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.

  9. Subject-specific cardiovascular system model-based identification and diagnosis of septic shock with a minimally invasive data set: animal experiments and proof of concept.

    PubMed

    Chase, J Geoffrey; Lambermont, Bernard; Starfinger, Christina; Hann, Christopher E; Shaw, Geoffrey M; Ghuysen, Alexandre; Kolh, Philippe; Dauby, Pierre C; Desaive, Thomas

    2011-01-01

    A cardiovascular system (CVS) model and parameter identification method have previously been validated for identifying different cardiac and circulatory dysfunctions, in simulation and using porcine models of pulmonary embolism, hypovolemia with PEEP titrations and induced endotoxic shock. However, these studies required both left and right heart catheters to collect the data required for subject-specific monitoring and diagnosis, a maximally invasive data set in a critical care setting, although it does occur in practice. Hence, use of this model-based diagnostic would require significant additional invasive sensors for some subjects, which is unacceptable in some, if not all, cases. The main goal of this study is to prove the concept of using only measurements from one side of the heart (right) in a 'minimal' data set to identify an effective patient-specific model that can capture key clinical trends in endotoxic shock. This research extends existing methods to a reduced and minimal data set requiring only a single catheter, reducing the risk of infection and other complications, a very common, typical situation in critical care patients, particularly after cardiac surgery. The extended methods and the assumptions on which they are founded are developed and presented in a case study on the identification of pig-specific parameters in an animal model of induced endotoxic shock. This case study is used to define the impact of this minimal data set on the quality and accuracy of the model application for monitoring, detecting and diagnosing septic shock. Six anesthetized healthy pigs weighing 20-30 kg received a 0.5 mg kg(-1) endotoxin infusion over a period of 30 min from T0 to T30. For this research, only right heart measurements were obtained. Errors for the identified model are within 8% when the model is identified from data, re-simulated and then compared to the experimentally measured data, including measurements not used in the identification process for validation. Importantly, all identified parameter trends match physiologically, clinically and experimentally expected changes, indicating that no diagnostic power is lost. This work represents a further step toward validation with human subjects of this model-based approach to cardiovascular diagnosis and therapy guidance in monitoring endotoxic disease states. The results and methods obtained can be readily extended from this case study to the other animal model results presented previously. Overall, these results provide further support for prospective, proof-of-concept clinical testing in humans.

  10. Association between power law coefficients of the anatomical noise power spectrum and lesion detectability in breast imaging modalities

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Abbey, Craig K.; Boone, John M.

    2013-03-01

    Previous research has demonstrated that a parameter extracted from a power function fit to the anatomical noise power spectrum, β, may be predictive of breast mass lesion detectability in x-ray based medical images of the breast. In this investigation, the value of β was compared with a number of other more widely used parameters, in order to determine the relationship between β and these other parameters. This study made use of breast CT data sets acquired on two breast CT systems developed in our laboratory. A total of 185 breast data sets in 183 women were used, and only the unaffected breast (where no lesion was suspected) was used. The anatomical noise power spectrum, computed from two-dimensional regions of interest (ROIs), was fit to a power function (NPS(f) = α f^(-β)), and the exponent parameter (β) was determined using log/log linear regression. Breast density for each of the volume data sets was characterized in previous work. The breast CT data sets analyzed in this study were part of a previous study which evaluated receiver operating characteristic (ROC) curve performance using simulated spherical lesions and a pre-whitened matched filter computer observer. This ROC information was used to compute the detectability index as well as the sensitivity at 95% specificity. The fractal dimension was computed from the same ROIs which were used for the assessment of β. The value of β was compared to breast density, detectability index, sensitivity, and fractal dimension, and the slope of each of these relationships was investigated to assess statistical significance from zero slope; a statistically significant non-zero slope was considered a positive association in this investigation. All comparisons between β and breast density, detectability index, sensitivity at 95% specificity, and fractal dimension demonstrated statistically significant association, with p < 0.001 in all cases. The value of β was also found to be associated with patient age and breast diameter, parameters both related to breast density. In all associations with other parameters, lower values of β were associated with increased breast cancer detection performance. Specifically, lower values of β were associated with lower breast density, higher detectability index, higher sensitivity, and lower fractal dimension values. While causality was not, and probably cannot be, demonstrated, the strong, statistically significant association between the β metric and the other more widely used parameters suggests that β may be considered a surrogate measure for breast cancer detection performance. These findings are specific to breast parenchymal patterns and mass lesions only.
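
    The log/log linear regression that extracts β can be written in a few lines of NumPy; the synthetic check at the end only confirms that a known exponent is recovered:

    ```python
    import numpy as np

    def fit_beta(freqs, nps):
        """Fit NPS(f) = alpha * f**(-beta) by linear regression in log-log
        space; returns (alpha, beta)."""
        mask = (freqs > 0) & (nps > 0)    # logarithms need positive values
        slope, intercept = np.polyfit(np.log(freqs[mask]), np.log(nps[mask]), 1)
        return np.exp(intercept), -slope  # slope of the log-log line is -beta

    # synthetic check: an exponent of 3 should be recovered exactly
    f = np.linspace(0.05, 1.0, 200)
    alpha, beta = fit_beta(f, 2.0 * f ** -3)
    print(round(beta, 3))                 # -> 3.0
    ```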

  11. Analyzing chromatographic data using multilevel modeling.

    PubMed

    Wiczling, Paweł

    2018-06-01

    It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. Abbreviations: CI, credible interval; PSA, polar surface area.
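
    The three-part structure described above (a shared deterministic retention equation, QSRR-style covariate relationships, and separate inter- and intra-analyte variability) is the anatomy of a hierarchical model. The paper implements its model in Stan; the sketch below re-expresses a heavily simplified two-level version in PyMC, where the priors, shapes, and the linear stand-in for the retention equation are all assumptions:

    ```python
    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(0)
    n_analytes = 50
    analyte_idx = np.repeat(np.arange(n_analytes), 6)     # 6 runs per analyte
    logp = rng.normal(size=n_analytes)                    # physicochemical covariate
    t_obs = rng.normal(10.0, 2.0, size=analyte_idx.size)  # stand-in retention times

    with pm.Model():
        # QSRR-style relationship: the analyte-specific parameter is predicted
        # from structure, with interanalyte variability tau around the prediction
        a = pm.Normal("a", 0.0, 5.0)
        b = pm.Normal("b", 0.0, 5.0)
        tau = pm.HalfNormal("tau", 2.0)
        theta = pm.Normal("theta", mu=a + b * logp, sigma=tau, shape=n_analytes)
        # intra-analyte (residual) variability around the retention equation,
        # collapsed here to a single noise term for brevity
        sigma = pm.HalfNormal("sigma", 1.0)
        pm.Normal("t", mu=theta[analyte_idx], sigma=sigma, observed=t_obs)
        idata = pm.sample(500, tune=500, chains=2)        # MCMC, as in Stan
    ```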

  12. Screen for intracranial dural arteriovenous fistulae with carotid duplex sonography.

    PubMed

    Tsai, L-K; Yeh, S-J; Chen, Y-C; Liu, H-M; Jeng, J-S

    2009-11-01

    Early diagnosis and management of intracranial dural arteriovenous fistulae (DAVF) may prevent the occurrence of stroke. This study aimed to identify the best carotid duplex sonography (CDS) parameters for screening DAVF. 63 DAVF patients and 170 non-DAVF patients received both CDS and conventional angiography. The use of seven CDS haemodynamic parameter sets related to the resistance index (RI) of the external carotid artery (ECA) for the diagnosis of DAVF was validated and the applicability of the best CDS parameter set in 20 400 patients was tested. The CDS parameter set (ECA RI (cut-off point = 0.7) and internal carotid artery (ICA) to ECA RI ratio (cut-off point = 0.9)) had the highest specificity (99%) for diagnosis of DAVF with moderate sensitivity (51%). Location of the DAVF was a significant determinant of sensitivity of detection, which was 70% for non-cavernous DAVF and 0% for cavernous sinus DAVF (p<0.001). The above parameter set detected abnormality in 92 of 20 400 patients. These abnormalities included DAVF (n = 25), carotid stenosis (n = 32), vertebral artery stenosis (n = 7), intracranial arterial stenosis (n = 6), head and neck tumour (n = 3) and unknown aetiology (n = 19). Combined CDS parameters of ECA RI and ICA to ECA RI ratio can be used as a screening tool for the diagnosis of DAVF.

  13. Influence of parameter values on the oscillation sensitivities of two p53-Mdm2 models.

    PubMed

    Cuba, Christian E; Valle, Alexander R; Ayala-Charca, Giancarlo; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Biomolecular networks that present oscillatory behavior are ubiquitous in nature. While some design principles for robust oscillations have been identified, it is not well understood how these oscillations are affected when the kinetic parameters are constantly changing or are not precisely known, as often occurs in cellular environments. Many models of diverse complexity level, for systems such as circadian rhythms, cell cycle or the p53 network, have been proposed. Here we assess the influence of hundreds of different parameter sets on the sensitivities of two configurations of a well-known oscillatory system, the p53 core network. We show that, for both models and all parameter sets, the parameter related to the p53 positive feedback, i.e. self-promotion, is the only one that presents sizeable sensitivities on extrema, periods and delay. Moreover, varying the parameter set values to change the dynamical characteristics of the response is more restricted in the simple model, whereas the complex model shows greater tunability. These results highlight the importance of the presence of specific network patterns, in addition to the role of parameter values, when we want to characterize oscillatory biochemical systems.

  14. Parallelization and visual analysis of multidimensional fields: Application to ozone production, destruction, and transport in three dimensions

    NASA Technical Reports Server (NTRS)

    Schwan, Karsten

    1994-01-01

    Atmospheric modeling is a grand challenge problem for several reasons, including its inordinate computational requirements and its generation of large amounts of data concurrent with its use of very large data sets derived from measurement instruments like satellites. In addition, atmospheric models are typically run several times, on new data sets or to reprocess existing data sets, to investigate or reinvestigate specific chemical or physical processes occurring in the earth's atmosphere, to understand model fidelity with respect to observational data, or simply to experiment with specific model parameters or components.

  15. When to Make Mountains out of Molehills: The Pros and Cons of Simple and Complex Model Calibration Procedures

    NASA Astrophysics Data System (ADS)

    Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.

    2017-12-01

    Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube Sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube approach in single-metric performance. However, it is also shown that the more comprehensive assessment has many merits: it allows for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
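
    The Latin Hypercube side of the comparison can be sketched with scipy.stats.qmc; the GR4J parameter bounds are illustrative, `run_gr4j` is a hypothetical stand-in for any GR4J implementation, and Nash-Sutcliffe efficiency stands in for the six metrics used in the study:

    ```python
    import numpy as np
    from scipy.stats import qmc

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency, one possible error metric."""
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # illustrative GR4J bounds: x1 production store (mm), x2 groundwater
    # exchange (mm), x3 routing store (mm), x4 unit-hydrograph time base (days)
    l_bounds = [1.0, -10.0, 1.0, 0.5]
    u_bounds = [2000.0, 10.0, 500.0, 10.0]

    sampler = qmc.LatinHypercube(d=4, seed=42)
    param_sets = qmc.scale(sampler.random(n=10_000), l_bounds, u_bounds)

    # each sampled set would be run through the model and scored, e.g.:
    # scores = np.array([nse(run_gr4j(p, forcing), obs) for p in param_sets])
    # behavioural = param_sets[scores > 0.7]   # retain acceptable sets
    ```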

  16. Direct optimization, affine gap costs, and node stability.

    PubMed

    Aagesen, Lone

    2005-09-01

    The outcome of a phylogenetic analysis based on DNA sequence data is highly dependent on the homology-assignment step and may vary with alignment parameter costs. Robustness to changes in parameter costs is therefore a desired quality of a data set, because the final conclusions will be less dependent on selecting a precise optimal cost set. Here, node stability is explored in relation to separate versus combined analysis in three different data sets, all including several data partitions. Robustness to changes in cost sets is measured as the number of successive changes that can be made in a given cost set before a specific clade is lost; the changes are in all cases base change cost, gap penalties, and adding/removing/changing affine gap costs. When combining data partitions, the number of clades that appear in the entire parameter space is not remarkably increased; in some cases this number even decreased. However, when combining data partitions the trees from cost sets including affine gap costs were always more similar than the trees from cost sets without affine gap costs. This was not the case when the data partitions were analyzed independently. When data sets were combined, approximately 80% of the clades found under cost sets including affine gap costs resisted at least one change to the cost set.

  17. Analysis of pumping tests: Significance of well diameter, partial penetration, and noise

    USGS Publications Warehouse

    Heidari, M.; Ghiassi, K.; Mehnert, E.

    1999-01-01

    The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite-diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests differed by a factor of four.
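
    For orientation, a minimal NLS fit of transmissivity and storativity using the classical Theis solution; the study's analysis additionally accounts for finite well diameter and partial penetration, which this sketch deliberately omits:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import exp1

    def theis_drawdown(t, T, S, Q=0.01, r=10.0):
        """Theis solution for an idealized line-sink well:
        s = Q / (4*pi*T) * W(u),  u = r**2 * S / (4*T*t),
        with W(u) the exponential integral E1."""
        u = r ** 2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # synthetic 'observed' drawdowns with noise, then NLS recovery of T and S
    t_obs = np.logspace(1, 5, 30)                             # seconds
    s_obs = theis_drawdown(t_obs, 1e-3, 1e-4)
    s_obs += np.random.default_rng(7).normal(0.0, 1e-3, s_obs.size)
    (T_fit, S_fit), _ = curve_fit(theis_drawdown, t_obs, s_obs, p0=[1e-3, 1e-4])
    print(T_fit, S_fit)
    ```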

  18. Throughput and latency programmable optical transceiver by using DSP and FEC control.

    PubMed

    Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki

    2017-05-15

    We propose and experimentally demonstrate a proof of concept of a programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) to satisfy throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom of the parameters often make it difficult to find the optimum combination, due to an explosion of the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation format, symbol rate, and power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine them with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate the end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.

  19. Hydrogen from coal cost estimation guidebook

    NASA Technical Reports Server (NTRS)

    Billings, R. E.

    1981-01-01

    In an effort to establish baseline information whereby specific projects can be evaluated, a current set of parameters that are typical of coal gasification applications was developed. Using these parameters, a computer model allows researchers to interrelate cost components in a sensitivity analysis. The results make possible an approximate estimation of hydrogen energy economics from coal under a variety of circumstances.

  20. Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.

    PubMed

    Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing

    2016-06-01

    The aim of this study is to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) using clinical parameters by machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for the model construction, three strategies were adopted: (1) clonal selection algorithm (CSA) based selection strategy; (2) sequential forward selection (SFS) method; and (3) statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of each model were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values for ANN, LR and SVM were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA optimal parameter set selection strategy achieves better performance than other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
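
    A minimal sketch of this kind of pipeline, using synthetic stand-in data rather than the study's patient records: an RBF-kernel SVM evaluated with five-fold cross-validation, reporting AUC, sensitivity and specificity. The feature-selection step (CSA/SFS/SA) is omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(81, 18))        # stand-in for the 18 clinical parameters
y = rng.integers(0, 2, size=81)      # stand-in for distant-failure labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
probs = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]

auc = roc_auc_score(y, probs)
tn, fp, fn, tp = confusion_matrix(y, (probs > 0.5).astype(int)).ravel()
print(f"AUC={auc:.2f}  sensitivity={tp / (tp + fn):.1%}  "
      f"specificity={tn / (tn + fp):.1%}")
```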

  1. Parameter motivated mutual correlation analysis: Application to the study of currency exchange rates based on intermittency parameter and Hurst exponent

    NASA Astrophysics Data System (ADS)

    Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.

    2012-04-01

    We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross-correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some daily currency exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
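
    The sliding-window procedure can be sketched as follows: compute a characterizing parameter per window for each series, then correlate the resulting parameter sequences. The rescaled-range Hurst estimator here is deliberately crude, and the random-walk inputs are stand-ins for exchange-rate data.

```python
import numpy as np

def window_parameter(series, width, step, param):
    """Evaluate a characterizing parameter over sliding windows."""
    return np.array([param(series[i:i + width])
                     for i in range(0, len(series) - width + 1, step)])

def hurst_rs(x):
    """Crude single-window rescaled-range Hurst estimate (illustrative only)."""
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std()
    return np.log(r / s) / np.log(len(x)) if s > 0 else 0.5

rng = np.random.default_rng(1)
rate_a = np.cumsum(rng.normal(size=2000))   # stand-ins for two exchange-rate series
rate_b = np.cumsum(rng.normal(size=2000))

h_a = window_parameter(np.diff(rate_a), width=250, step=25, param=hurst_rs)
h_b = window_parameter(np.diff(rate_b), width=250, step=25, param=hurst_rs)
print("parameter-level cross correlation:", np.corrcoef(h_a, h_b)[0, 1])
```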

  2. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

    In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
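
    A toy version of gradient-based motion estimation in the spirit of the structure tensor approach: build the tensor from spatio-temporal image gradients and solve the resulting least-squares system for a velocity. The global (whole-image) solve below is a simplification; the paper's framework estimates parameter fields.

```python
import numpy as np

def structure_tensor_velocity(frames, dt=1.0):
    """Estimate a single (vx, vy) from an image sequence via the structure tensor.

    frames: array of shape (T, H, W). Solves the least-squares optical-flow
    system built from spatio-temporal gradients (global, illustrative version)."""
    It, Iy, Ix = np.gradient(frames.astype(float), dt, 1.0, 1.0)
    # Summed structure-tensor components and the right-hand side
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

# Synthetic test: a Gaussian blob drifting 1 px/frame in the x direction
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
frames = np.stack([np.exp(-((xx - 20 - t) ** 2 + (yy - 32) ** 2) / 50.0)
                   for t in range(5)])
print("estimated velocity (vx, vy):", structure_tensor_velocity(frames))
```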

  3. Support-vector-machines-based multidimensional signal classification for fetal activity characterization

    NASA Astrophysics Data System (ADS)

    Ribes, S.; Voicu, I.; Girault, J. M.; Fournier, M.; Perrotin, F.; Tranquart, F.; Kouamé, D.

    2011-03-01

    Electronic fetal monitoring may be required during the whole pregnancy to closely monitor specific fetal and maternal disorders. Currently used methods suffer from many limitations and are not sufficient to evaluate fetal asphyxia. Fetal activity parameters such as movements, heart rate and associated parameters are essential indicators of the fetus's well-being, and no current device gives a simultaneous and sufficient estimation of all these parameters to evaluate it. We built for this purpose a multi-transducer, multi-gate Doppler system and developed dedicated signal processing techniques for fetal activity parameter extraction, in order to investigate fetal asphyxia or well-being through fetal activity parameters. To reach this goal, this paper shows the preliminary feasibility of separating normal and compromised fetuses using our system. To do so, a data set consisting of two groups of fetal signals (normal and compromised) was established and provided by physicians. From the estimated parameters an instantaneous Manning-like score, referred to as the ultrasonic score, was introduced and used together with movements, heart rate and associated parameters in a classification process using the Support Vector Machines (SVM) method. The influence of the fetal activity parameters and the performance of the SVM were evaluated by computing the sensitivity, specificity, percentage of support vectors and total classification accuracy. We showed our ability to separate the data into two sets, normal fetuses and compromised fetuses, and obtained excellent agreement with the clinical classification performed by physicians.

  4. Seat pan and backrest pressure distribution while sitting in office chairs.

    PubMed

    Zemp, Roland; Taylor, William R; Lorenzetti, Silvio

    2016-03-01

    Nowadays, an increasing amount of time is spent seated, especially in office environments, where sitting comfort and support are increasingly important due to the prevalence of musculoskeletal disorders. The aim of this study was to develop a methodology for chair-specific sensor mat calibration, to evaluate the interconnections between specific pressure parameters and to establish those that are most meaningful and significant in order to differentiate pressure distribution measures between office chairs. The shape of the exponential calibration function was highly influenced by the material properties and geometry of the office chairs, and therefore a chair-specific calibration proved to be essential. High correlations were observed between the eight analysed pressure parameters, whereby the pressure parameters could be reduced to a set of four and three parameters for the seat pan and the backrest respectively. In order to find significant differences between office chairs, gradient parameters should be analysed for the seat pan, whereas for the backrest almost all parameters are suitable. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
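
    A chair-specific calibration of the kind described can be sketched as a nonlinear curve fit. The exponential form matches the abstract, but the coefficients, calibration points, and units below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def calibration(raw, a, b, c):
    """Assumed exponential calibration: raw sensor reading -> pressure (kPa)."""
    return a * np.exp(b * raw) + c

# Hypothetical calibration points for one chair (known loads on the sensor mat)
raw_readings = np.array([5, 20, 60, 120, 200, 255], dtype=float)
true_pressure = np.array([0.5, 2.0, 6.5, 14.0, 26.0, 38.0])  # kPa

params, _ = curve_fit(calibration, raw_readings, true_pressure,
                      p0=(1.0, 0.01, 0.0), maxfev=5000)
print("chair-specific coefficients a, b, c:", params)
print("calibrated pressure at raw=100:", calibration(100.0, *params))
```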

  5. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the design flow of a network-on-chip. To achieve an application-specific NoC design procedure that minimizes the communication cost and improves the fault-tolerance property, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the set of most promising solutions in a large design space, with low communication costs that in some cases reach the optimum. Another evaluated parameter, the vulnerability index, is then considered as a principle for estimating the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility to prioritize solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.
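
    The linear trade-off function can be sketched as a weighted sum of the two normalized objectives. The weights and candidate mappings below are illustrative, and the fuzzy if-then layer mentioned in the abstract is not modeled.

```python
def rank_mappings(mappings, w_cost=0.5, w_vuln=0.5):
    """Score mappings by a weighted sum of normalized communication cost and
    vulnerability index (both lower-is-better); the weights are a design choice."""
    costs = [m["comm_cost"] for m in mappings]
    vulns = [m["vulnerability"] for m in mappings]

    def norm(x, xs):
        lo, hi = min(xs), max(xs)
        return 0.0 if hi == lo else (x - lo) / (hi - lo)

    return sorted(mappings,
                  key=lambda m: w_cost * norm(m["comm_cost"], costs)
                              + w_vuln * norm(m["vulnerability"], vulns))

candidates = [{"id": 1, "comm_cost": 120, "vulnerability": 0.8},
              {"id": 2, "comm_cost": 150, "vulnerability": 0.4},
              {"id": 3, "comm_cost": 135, "vulnerability": 0.5}]
print([m["id"] for m in rank_mappings(candidates)])
```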

  6. DOE Waste Treatability Group Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkpatrick, T.D.

    1995-01-01

    This guidance presents a method and definitions for aggregating U.S. Department of Energy (DOE) waste into streams and treatability groups based on characteristic parameters that influence waste management technology needs. Adaptable to all DOE waste types (i.e., radioactive waste, hazardous waste, mixed waste, sanitary waste), the guidance establishes categories and definitions that reflect variations within the radiological, matrix (e.g., bulk physical/chemical form), and regulated contaminant characteristics of DOE waste. Beginning at the waste container level, the guidance presents a logical approach to implementing the characteristic parameter categories as part of the basis for defining waste streams and as the sole basis for assigning streams to treatability groups. Implementation of this guidance at each DOE site will facilitate the development of technically defined, site-specific waste stream data sets to support waste management planning and reporting activities. Consistent implementation at all of the sites will enable aggregation of the site-specific waste stream data sets into comparable national data sets to support these activities at a DOE complex-wide level.

  7. Development and application of computer assisted optimal method for treatment of femoral neck fracture.

    PubMed

    Wang, Monan; Zhang, Kai; Yang, Ning

    2018-04-09

    To help doctors decide their treatment from the aspect of mechanical analysis, this work built a computer-assisted optimization system for treatment of femoral neck fracture oriented to clinical application. The whole system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and their positions, and input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting and batch processing operation. The post-processing module included extraction and display of batch processing results, image generation for batch processing, running the optimization program and displaying the optimal result. The system implemented the whole workflow from input of the fracture parameters to output of the optimal fixation plan according to the specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system had a friendly interface and simple operation, and its functionality can be improved quickly by modifying individual modules.

  8. Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.

    PubMed

    Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K

    2014-03-01

    Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), and inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV), and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first-order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimizing inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results correlate highly with the physicians' ventilation settings, with r = 0.975 for the inspiration pressure and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential to improve lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
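
    A hedged sketch of the optimization step: assuming a first-order (single-compartment) model in which the delivered tidal volume is VT = pI·C·(1 - exp(-tI/tau)) with tau = R·C, the inspiratory pressure needed to meet a target minute ventilation can be computed for each candidate (tI, tE) pair and the minimal-pI setting selected. The model form, the tE >= 3·tau adequacy rule, and the patient values are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def minimal_pressure_settings(C, R, ve_target, ti_grid, te_grid):
    """Grid-search (tI, tE) and compute the inspiratory pressure each needs.

    Assumed PCV model: VT = pI * C * (1 - exp(-tI/tau)), tau = R*C, so
    pI = VT / (C * (1 - exp(-tI/tau))) with VT = ve_target / RR.
    Expiration is deemed adequate when tE >= 3*tau (illustrative threshold)."""
    tau = R * C
    best = None
    for ti in ti_grid:
        for te in te_grid:
            if te < 3 * tau:                 # avoid intrinsic PEEP build-up
                continue
            rr = 60.0 / (ti + te)            # breaths per minute
            vt = ve_target / rr              # litres per breath
            pi = vt / (C * (1 - np.exp(-ti / tau)))
            if best is None or pi < best[0]:
                best = (pi, ti, te, rr)
    return best

# Hypothetical patient: C = 0.05 L/mbar, R = 10 mbar*s/L, target VE = 8 L/min
pi, ti, te, rr = minimal_pressure_settings(0.05, 10.0, 8.0,
                                           np.arange(0.8, 3.0, 0.1),
                                           np.arange(1.0, 6.0, 0.1))
print(f"pI={pi:.1f} mbar, tI={ti:.1f} s, tE={te:.1f} s, RR={rr:.1f}/min")
```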

  9. Application of Bayesian population physiologically based pharmacokinetic (PBPK) modeling and Markov chain Monte Carlo simulations to pesticide kinetics studies in protected marine mammals: DDT, DDE, and DDD in harbor porpoises.

    PubMed

    Weijs, Liesbeth; Yang, Raymond S H; Das, Krishna; Covaci, Adrian; Blust, Ronny

    2013-05-07

    Physiologically based pharmacokinetic (PBPK) modeling in marine mammals is a challenge because of the lack of parameter information and the ban on exposure experiments. To minimize uncertainty and variability, parameter estimation methods are required for the development of reliable PBPK models. The present study is the first to develop PBPK models for the lifetime bioaccumulation of p,p'-DDT, p,p'-DDE, and p,p'-DDD in harbor porpoises. In addition, this study is also the first to apply the Bayesian approach executed with Markov chain Monte Carlo simulations using two data sets of harbor porpoises from the Black and North Seas. Parameters from the literature were used as priors for the first "model update" using the Black Sea data set, the resulting posterior parameters were then used as priors for the second "model update" using the North Sea data set. As such, PBPK models with parameters specific for harbor porpoises could be strengthened with more robust probability distributions. As the science and biomonitoring effort progress in this area, more data sets will become available to further strengthen and update the parameters in the PBPK models for harbor porpoises as a species anywhere in the world. Further, such an approach could very well be extended to other protected marine mammals.
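
    The sequential prior-to-posterior workflow can be illustrated with a one-parameter toy model and a random-walk Metropolis sampler: the posterior summarized from the first data set becomes the prior for the second. Everything below (model, data, priors) is synthetic; real PBPK models involve many correlated parameters.

```python
import numpy as np

def metropolis(log_prior, log_lik, data, n_steps=5000, step=0.1, x0=0.0):
    """Random-walk Metropolis sampler for a single model parameter."""
    rng = np.random.default_rng(2)
    x, lp = x0, log_prior(x0) + log_lik(x0, data)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.normal(scale=step)
        lpp = log_prior(xp) + log_lik(xp, data)
        if np.log(rng.uniform()) < lpp - lp:   # Metropolis acceptance
            x, lp = xp, lpp
        samples.append(x)
    return np.array(samples)

def log_lik(mu, data):            # toy one-parameter "kinetic" model
    return -0.5 * np.sum((data - mu) ** 2)

rng = np.random.default_rng(3)
black_sea = rng.normal(1.0, 1.0, 50)   # stand-ins for the two populations
north_sea = rng.normal(1.2, 1.0, 50)

# Update 1: literature prior N(0, 10) combined with the Black Sea data
s1 = metropolis(lambda m: -0.5 * (m / 10) ** 2, log_lik, black_sea)
mu1, sd1 = s1.mean(), s1.std()

# Update 2: the posterior from update 1 becomes the prior for the North Sea data
s2 = metropolis(lambda m: -0.5 * ((m - mu1) / sd1) ** 2, log_lik, north_sea, x0=mu1)
print(f"posterior after both updates: {s2.mean():.2f} +/- {s2.std():.2f}")
```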

  10. Multi-Criteria Optimization of Regulation in Metabolic Networks

    PubMed Central

    Higuera, Clara; Villaverde, Alejandro F.; Banga, Julio R.; Ross, John; Morán, Federico

    2012-01-01

    Determining the regulation of metabolic networks at genome scale is a hard task. It has been hypothesized that biochemical pathways and metabolic networks might have undergone an evolutionary process of optimization with respect to several criteria over time. In this contribution, a multi-criteria approach has been used to optimize parameters for the allosteric regulation of enzymes in a model of a metabolic substrate-cycle. This has been carried out by calculating the Pareto set of optimal solutions according to two objectives: the proper direction of flux in a metabolic cycle and the energetic cost of applying the set of parameters. Different Pareto fronts have been calculated for eight different “environments” (specific time courses of end product concentrations). For each resulting front the so-called knee point is identified, which can be considered a preferred trade-off solution. Interestingly, the optimal control parameters corresponding to each of these points also lead to optimal behaviour in all the other environments. By averaging the parameter sets of the most frequently found knee solutions, a final and optimal consensus set of parameters can be obtained, which is an indication of the existence of a universal regulation mechanism for this system. The implications of such a universal regulatory switch are discussed in the framework of large metabolic networks. PMID:22848435
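
    One common geometric definition of the knee point, used here as an illustrative stand-in for whatever criterion the authors applied, is the front member farthest from the straight line joining the two extreme solutions:

```python
import numpy as np

def knee_point(front):
    """Knee of a 2-D Pareto front: the member farthest from the straight line
    joining the two extreme solutions (one common geometric definition)."""
    f = np.asarray(front, dtype=float)
    f = f[np.argsort(f[:, 0])]
    p0, p1 = f[0], f[-1]
    d = (p1 - p0) / np.linalg.norm(p1 - p0)
    # perpendicular distance of each point to the extreme-to-extreme line
    rel = f - p0
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
    return f[np.argmax(dist)]

# Hypothetical front: objective 1 = flux misdirection, objective 2 = energetic cost
front = [(0.05, 9.0), (0.10, 5.0), (0.20, 3.0), (0.40, 2.5), (0.80, 2.3)]
print("knee (trade-off) solution:", knee_point(front))
```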

  11. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    NASA Astrophysics Data System (ADS)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.

  12. Construction of optimal 3-node plate bending triangles by templates

    NASA Astrophysics Data System (ADS)

    Felippa, C. A.; Militello, C.

    A finite element template is a parametrized algebraic form that reduces to specific finite elements by setting numerical values to the free parameters. The present study concerns Kirchhoff Plate-Bending Triangles (KPT) with 3 nodes and 9 degrees of freedom. A 37-parameter template is constructed using the Assumed Natural Deviatoric Strain (ANDES) formulation. Specializations of this template include well-known elements such as DKT and HCT. The question addressed here is: can these parameters be selected to produce high-performance elements? The study is carried out by staged application of constraints on the free parameters. The first stage produces element families satisfying invariance and aspect-ratio insensitivity conditions. Application of energy balance constraints produces specific elements. The performance of such elements in benchmark tests is presently under study.

  13. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    The MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user written problem specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines is described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  14. iCARE

    Cancer.gov

    The iCARE R package allows researchers to quickly build models for absolute risk, and apply them to estimate an individual's risk of developing disease during a specified time interval, based on a set of user-defined input parameters.

  15. Preference heterogeneity in a count data model of demand for off-highway vehicle recreation

    Treesearch

    Thomas P Holmes; Jeffrey E Englin

    2010-01-01

    This paper examines heterogeneity in preferences for off-highway vehicle (OHV) recreation by applying the random parameters Poisson model to a data set of OHV users at four National Forest sites in North Carolina. The analysis develops estimates of individual consumer surplus and finds that estimates are systematically affected by the random parameter specification...

  16. Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage

    PubMed Central

    Lin, Ping-Chang; Reiter, David A.; Spencer, Richard G.

    2010-01-01

    MRI is increasingly used to evaluate cartilage in tissue constructs, explants, and animal and patient studies. However, while mean values of MR parameters, including T1, T2, magnetization transfer rate km, apparent diffusion coefficient ADC, and the dGEMRIC-derived fixed charge density, correlate with tissue status, the ability to classify tissue according to these parameters has not been explored. Therefore, the sensitivity and specificity with which each of these parameters was able to distinguish between normal and trypsin-degraded, and between normal and collagenase-degraded, cartilage explants were determined. Initial analysis was performed using a training set to determine simple group means to which parameters obtained from a validation set were compared. T1 and ADC showed the greatest ability to discriminate between normal and degraded cartilage. Further analysis with k-means clustering, which eliminates the need for a priori identification of sample status, generally performed comparably. Use of fuzzy c-means (FCM) clustering to define centroids likewise did not result in improvement in discrimination. Finally, a FCM clustering approach in which validation samples were assigned in a probabilistic fashion to control and degraded groups was implemented, reflecting the range of tissue characteristics seen with cartilage degradation. PMID:19705467

  17. Artificial neuron-glia networks learning approach based on cooperative coevolution.

    PubMed

    Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B

    2015-06-01

    Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance against the traditional approach, i.e. without artificial astrocytes, was already demonstrated on classification problems. However, the corresponding learning algorithms developed so far strongly depend on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be done in order to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem-dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process, and gives the possibility of testing any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than ANGN and competitive results with ANN approaches.

  18. An Experimental Study of Dependence of Optimum TBM Cutter Spacing on Pre-set Penetration Depth in Sandstone Fragmentation

    NASA Astrophysics Data System (ADS)

    Han, D. Y.; Cao, P.; Liu, J.; Zhu, J. B.

    2017-12-01

    Cutter spacing is an essential parameter in the TBM design. However, few efforts have been made to study the optimum cutter spacing incorporating penetration depth. To investigate the influence of pre-set penetration depth and cutter spacing on sandstone breakage and TBM performance, a series of sequential laboratory indentation tests were performed in a biaxial compression state. Effects of parameters including penetration force, penetration depth, chip mass, chip size distribution, groove volume, specific energy and maximum angle of lateral crack were investigated. Results show that the total mass of chips, the groove volume and the observed optimum cutter spacing increase with increasing pre-set penetration depth. It is also found that the total mass of chips could be an alternative means to determine optimum cutter spacing. In addition, analysis of chip size distribution suggests that the mass of large chips is dominated by both cutter spacing and pre-set penetration depth. After fractal dimension analysis, we found that cutter spacing and pre-set penetration depth have negligible influence on the formation of small chips and that small chips are formed due to squeezing of cutters and surface abrasion caused by shear failure. Analysis on specific energy indicates that the observed optimum spacing/penetration ratio is 10 for the sandstone, at which, the specific energy and the maximum angle of lateral cracks are smallest. The findings in this paper contribute to better understanding of the coupled effect of cutter spacing and pre-set penetration depth on TBM performance and rock breakage, and provide some guidelines for cutter arrangement.

  19. A multi-objective approach to improve SWAT model calibration in alpine catchments

    NASA Astrophysics Data System (ADS)

    Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele

    2018-04-01

    Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.

  20. Modeling clear-sky solar radiation across a range of elevations in Hawai‘i: Comparing the use of input parameters at different temporal resolutions

    NASA Astrophysics Data System (ADS)

    Longman, Ryan J.; Giambelluca, Thomas W.; Frazier, Abby G.

    2012-01-01

    Estimates of clear sky global solar irradiance using the parametric model SPCTRAL2 were tested against clear sky radiation observations at four sites in Hawai`i using daily, mean monthly, and 1 year mean model parameter settings. Atmospheric parameters in SPCTRAL2 and similar models are usually set at site-specific values and are not varied to represent the effects of fluctuating humidity, aerosol amount and type, or ozone concentration, because time-dependent atmospheric parameter estimates are not available at most sites of interest. In this study, we sought to determine the added value of using time dependent as opposed to fixed model input parameter settings. At the AERONET site, Mauna Loa Observatory (MLO) on the island of Hawai`i, where daily measurements of atmospheric optical properties and hourly solar radiation observations are available, use of daily rather than 1 year mean aerosol parameter values reduced mean bias error (MBE) from 18 to 10 W m-2 and root mean square error from 25 to 17 W m-2. At three stations in the HaleNet climate network, located at elevations of 960, 1640, and 2590 m on the island of Maui, where aerosol-related parameter settings were interpolated from observed values for AERONET sites at MLO (3397 m) and Lāna`i (20 m), and precipitable water was estimated using radiosonde-derived humidity profiles from nearby Hilo, the model performed best when using constant 1 year mean parameter values. At HaleNet Station 152, for example, MBE was 18, 10, and 8 W m-2 for daily, monthly, and 1 year mean parameters, respectively.
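
    The two error measures used for the comparison are straightforward to compute; the irradiance values below are invented placeholders, not HaleNet or MLO data.

```python
import numpy as np

def mbe(pred, obs):
    """Mean bias error: positive means the model over-predicts on average."""
    return np.mean(pred - obs)

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

# Hypothetical hourly clear-sky irradiance (W m-2): observations vs two model runs
obs = np.array([610.0, 750.0, 830.0, 870.0, 845.0])
daily_params = np.array([622.0, 758.0, 842.0, 878.0, 851.0])
annual_params = np.array([635.0, 771.0, 849.0, 890.0, 860.0])

for name, pred in [("daily", daily_params), ("1-yr mean", annual_params)]:
    print(f"{name}: MBE={mbe(pred, obs):.1f} W m-2, RMSE={rmse(pred, obs):.1f} W m-2")
```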

  1. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.

  2. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.

    PubMed

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust on a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
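
    A compact sketch of the list-based scheme on a toy TSP instance: the maximum temperature in the list drives the Metropolis test, and each accepted uphill move feeds an adapted temperature back into the list. Initialization details here differ from the published algorithm.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def lbsa_tsp(dist, list_len=120, iters=20000, seed=0):
    """List-based SA sketch for the TSP using 2-opt neighborhood moves."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cost = tour_length(tour, dist)
    # initialize the temperature list from rough cost-scale guesses (assumption)
    temps = sorted((rng.uniform(0.5, 2.0) * cost / n for _ in range(list_len)),
                   reverse=True)
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        dcost = tour_length(cand, dist) - cost
        t_max = temps[0]                    # max temperature drives acceptance
        if dcost <= 0:
            tour, cost = cand, cost + dcost
        elif rng.random() < math.exp(-dcost / t_max):
            tour, cost = cand, cost + dcost
            # adapt: replace the max temperature with one implied by this move
            temps[0] = -dcost / math.log(rng.random() + 1e-12)
            temps.sort(reverse=True)
    return tour, cost

# toy symmetric instance
pts = [(0, 0), (0, 3), (4, 3), (4, 0), (2, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(lbsa_tsp(dist))
```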

  3. Development of a parameter optimization technique for the design of automatic control systems

    NASA Technical Reports Server (NTRS)

    Whitaker, P. H.

    1977-01-01

    Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.

  4. An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography

    NASA Technical Reports Server (NTRS)

    Ivancic, Will (Technical Monitor); Eddy, Wesley M.

    2005-01-01

    Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
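
    The hazard can be made concrete with toy short-Weierstrass arithmetic over a small prime field: the chord-slope addition formula divides by x2 - x1, which is zero exactly when a point is added to itself, so doubling needs its own slope and a guard for y = 0. The curve parameters below are illustrative, not cryptographically meaningful.

```python
# Toy curve y^2 = x^3 + 2x + 3 over GF(97); parameters chosen for illustration only.
P_MOD, A, B = 97, 2, 3

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)   # Fermat inverse, P_MOD prime

def add(P, Q):
    if P is None: return Q            # None encodes the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                   # P + (-P) = infinity; also covers y == 0
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # doubling slope
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD          # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

# A generic addition routine that blindly used the chord slope would divide by
# zero whenever x1 == x2 -- exactly the doubling case the abstract warns about.
P = (0, 10)   # 10^2 = 100 = 3 mod 97, so (0, 10) lies on the curve
print(add(P, P))
```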

  5. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience.

    PubMed

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.

  6. Effect of software version and parameter settings on the marginal and internal adaptation of crowns fabricated with the CAD/CAM system.

    PubMed

    Shim, Ji Suk; Lee, Jin Sook; Lee, Jeong Yol; Choi, Yeon Jo; Shin, Sang Wan; Ryu, Jae Jun

    2015-10-01

    This study investigated the marginal and internal adaptation of individual dental crowns fabricated using a CAD/CAM system (Sirona's BlueCam), also evaluating the effect of the software version used and of the specific parameter settings on the adaptation of crowns. Forty digital impressions of a previously prepared master model were acquired using an intraoral scanner and divided into four groups based on the software version and on the spacer settings used. Versions 3.8 and 4.2 of the software were used, and the spacer parameter was set at either 40 μm or 80 μm. The marginal and internal fit of the crowns was measured using the replica technique, which uses a low-viscosity silicone material that simulates the thickness of the cement layer. The data were analyzed using a Friedman two-way analysis of variance (ANOVA) and paired t-tests with the significance level set at p<0.05. The two-way ANOVA showed that the software version (p<0.05) and the spacer parameter (p<0.05) significantly affected the crown adaptation. The crowns designed with version 4.2 of the software showed a better fit than those designed with version 3.8, particularly in the axial wall and in the inner margin. The spacer parameter was more accurately represented in version 4.2 of the software than in version 3.8. In addition, the use of version 4.2 of the software combined with the spacer parameter set at 80 μm showed the least variation. On the other hand, the outer margin was not affected by the variables. Compared to version 3.8 of the software, version 4.2 can be recommended for the fabrication of well-fitting crown restorations and for the appropriate regulation of the spacer parameter.

  7. Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters

    PubMed Central

    Liu, Fei; Heiner, Monika; Yang, Ming

    2016-01-01

    Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830

  8. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for the combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and the emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multi-objective optimization problem is formulated. Non-dominated sorting genetic algorithm-II (NSGA-II) is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.

  9. From global to local: exploring the relationship between parameters and behaviors in models of electrical excitability.

    PubMed

    Fletcher, Patrick; Bertram, Richard; Tabak, Joel

    2016-06-01

    Models of electrical activity in excitable cells involve nonlinear interactions between many ionic currents. Changing parameters in these models can produce a variety of activity patterns with sometimes unexpected effects. Furthermore, introducing new currents will have different effects depending on the initial parameter set. In this study we combined global sampling of parameter space and local analysis of representative parameter sets in a pituitary cell model to understand the effects of adding K(+) conductances, which mediate some effects of hormone action on these cells. Global sampling ensured that the effects of introducing K(+) conductances were captured across a wide variety of contexts of model parameters. For each type of K(+) conductance we determined the types of behavioral transition that it evoked. Some transitions were counterintuitive, and may have been missed without the use of global sampling. In general, the wide range of transitions that occurred when the same current was applied to the model cell at different locations in parameter space highlights the challenge of making accurate model predictions in light of cell-to-cell heterogeneity. Finally, we used bifurcation analysis and fast/slow analysis to investigate why specific transitions occur in representative individual models. This approach relies on the use of a graphics processing unit (GPU) to quickly map parameter space to model behavior and identify parameter sets for further analysis. Acceleration with modern low-cost GPUs is particularly well suited to exploring the moderate-sized (5-20) parameter spaces of excitable cell and signaling models.

  10. Computer Simulations of Polytetrafluoroethylene in the Solid State

    NASA Astrophysics Data System (ADS)

    Holt, D. B.; Farmer, B. L.; Eby, R. K.; Macturk, K. S.

    1996-03-01

    Force field parameters (Set I) for fluoropolymers were previously derived from MOPAC AM1 semiempirical data on model molecules. A second set (Set II) was derived from the AM1 results augmented by ab initio calculations. Both sets yield reasonable helical and phase II packing structures for polytetrafluoroethylene (PTFE) chains. However, Set I and Set II differ in the strength of van der Waals interactions, with Set II having deeper potential wells (by an order of magnitude). To differentiate which parameter set provides a better description of PTFE behavior, molecular dynamics simulations have been performed with Biosym Discover on clusters of PTFE chains which begin in a phase II packing environment. Added to the model are artificial constraints which allow the simulation of thermal expansion without having to define periodic boundary conditions for each specific temperature of interest. The preliminary dynamics simulations indicate that the intra- and intermolecular interactions provided by Set I are too weak. The degree of helical disorder and chain motion are high even at temperatures well below the phase II-phase IV transition temperature (19 °C). Set II appears to yield a better description of PTFE in the solid state.

  11. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  12. Hearing aids and music.

    PubMed

    Chasin, Marshall; Russo, Frank A

    2004-01-01

    Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. However, other types of inputs are increasingly being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression settings (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters, such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program", unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.

  13. Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A

    PubMed Central

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2012-01-01

    Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768

  14. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  15. Automatic classification of transiently evoked otoacoustic emissions using an artificial neural network.

    PubMed

    Buller, G; Lutman, M E

    1998-08-01

    The increasing use of transiently evoked otoacoustic emissions (TEOAE) in large neonatal hearing screening programmes makes a standardized method of response classification desirable. Until now methods have been either subjective or based on arbitrary response characteristics. This study takes an expert system approach to standardize the subjective judgements of an experienced scorer. The method that is developed comprises three stages. First, it transforms TEOAEs from waveforms in the time domain into a simplified parameter set. Second, the parameter set is classified by an artificial neural network that has been taught on a large database of TEOAE waveforms and corresponding expert scores. Third, additional fuzzy logic rules automatically detect probable artefacts in the waveforms and synchronized spontaneous emission components. In this way, the knowledge of the experienced scorer is encapsulated in the expert system software and thereafter can be accessed by non-experts. Teaching and evaluation of the neural network were based on TEOAEs from a database totalling 2190 neonatal hearing screening tests. The database was divided into learning and test groups with 820 and 1370 waveforms respectively. From each recorded waveform a set of 12 parameters was calculated, representing static and dynamic signal properties. The artificial network was taught with parameter sets from the learning group only. Reproduction of the human scorer classification by the neural net in the learning group showed a sensitivity for detecting screen fails of 99.3% (299 from 301 failed results on subjective scoring) and a specificity for detecting screen passes of 81.1% (421 of 519 pass results). To quantify the post hoc performance of the net (generalization), the test group was then presented to the network input. Sensitivity was 99.4% (474 from 477) and specificity was 87.3% (780 from 893). To check the efficiency of the classification method, a second learning group was selected out of the previous test group, and the previous learning group was used as the test group. Repeating the learning and test procedures yielded 99.3% sensitivity and 80.7% specificity for reproduction, and 99.4% sensitivity and 86.7% specificity for generalization. In all respects, performance was better than for a previously optimized method based simply on cross-correlation between replicate non-linear waveforms. It is concluded that classification methods based on neural networks show promise for application to large neonatal screening programmes utilizing TEOAEs.
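
    The reported screening figures reduce to simple confusion-matrix arithmetic, sketched below on tiny invented label vectors (1 = screen fail, 0 = screen pass):

```python
def screen_metrics(expert, network):
    """Sensitivity/specificity of network scores against expert classification."""
    tp = sum(e == 1 and n == 1 for e, n in zip(expert, network))
    fn = sum(e == 1 and n == 0 for e, n in zip(expert, network))
    tn = sum(e == 0 and n == 0 for e, n in zip(expert, network))
    fp = sum(e == 0 and n == 1 for e, n in zip(expert, network))
    return tp / (tp + fn), tn / (tn + fp)

# tiny illustrative label sets, not the study's data
expert  = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
network = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = screen_metrics(expert, network)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```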

  16. Effect of Small Numbers of Test Results on Accuracy of Hoek-Brown Strength Parameter Estimations: A Statistical Simulation Study

    NASA Astrophysics Data System (ADS)

    Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.

    2017-12-01

    The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should be chosen to meet an acceptable level of uncertainty in the estimations. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
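
    The study design described above can be sketched in a few lines under stated assumptions: the intact-rock form of the Hoek-Brown criterion (s = 1), Gaussian strength scatter, and placeholder values for m, σc, the confining-stress range, and the noise level.

      # MC sketch of small-sample Hoek-Brown fitting (all numbers illustrative).
      import numpy as np
      from scipy.optimize import curve_fit

      def hoek_brown(sig3, m, sigc):
          # Intact rock (s = 1): sig1 = sig3 + sigc * sqrt(m * sig3 / sigc + 1)
          return sig3 + sigc * np.sqrt(m * sig3 / sigc + 1.0)

      rng = np.random.default_rng(0)
      m_true, sigc_true, noise = 10.0, 100.0, 10.0   # hypothetical rock type (MPa)
      for n in (5, 10, 50):                          # triaxial sample sizes
          fits = []
          for _ in range(1000):                      # MC replicates
              sig3 = rng.uniform(0.0, 30.0, n)
              sig1 = hoek_brown(sig3, m_true, sigc_true) + rng.normal(0, noise, n)
              popt, _ = curve_fit(hoek_brown, sig3, sig1, p0=(8.0, 90.0),
                                  bounds=(1e-3, np.inf))
              fits.append(popt)
          print(n, np.std(fits, axis=0))             # spread of (m, sigc) estimates

    The spread of the (m, σc) estimates as a function of n is the kind of uncertainty the authors quantify.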

  17. Quantitative microscopy of the lung: a problem-based approach. Part 2: stereological parameters and study designs in various diseases of the respiratory tract.

    PubMed

    Mühlfeld, Christian; Ochs, Matthias

    2013-08-01

    Design-based stereology provides efficient methods to obtain valuable quantitative information of the respiratory tract in various diseases. However, the choice of the most relevant parameters in a specific disease setting has to be deduced from the present pathobiological knowledge. Often it is difficult to express the pathological alterations by interpretable parameters in terms of volume, surface area, length, or number. In the second part of this companion review article, we analyze the present pathophysiological knowledge about acute lung injury, diffuse parenchymal lung diseases, emphysema, pulmonary hypertension, and asthma to come up with recommendations for the disease-specific application of stereological principles for obtaining relevant parameters. Worked examples with illustrative images are used to demonstrate the work flow, estimation procedure, and calculation and to facilitate the practical performance of equivalent analyses.

  18. Characteristics of enzyme-linked immunosorbent assay for detection of IgG antibodies specific to Chlamydia trachomatis heat shock protein (HSP-60)

    PubMed

    Galkin, O Yu; Besarab, A B; Lutsenko, T N

    2017-01-01

    The goal of this work was to study the sensitivity and specificity of the developed ELISA set for the identification of IgG antibodies against Chlamydia trachomatis HSP-60 (using a biotinylated tyramine-based signal amplification system). The study was conducted using a panel of characterized sera, as well as two reference ELISA sets of similar purpose. In terms of diagnostic informative value, the ELISA we developed showed the highest specificity and sensitivity (no false-negative or false-positive results were registered). In 4 of 15 intralaboratory panel serum samples initially identified as negative, the anti-HSP-60 IgG test result in the reference ELISA sets changed from negative to positive upon dilution. The shape of the titration curves of the false-negative sera and of the commercial monoclonal antibodies A57-B9 against C. trachomatis HSP-60 after incubation for 24 h was indicative of the presence of anti-idiotypic antibodies in these samples. Upon sera dilution, idiotypic-anti-idiotypic complexes dissociated, which caused the change in test result. The high informative value of the developed ELISA set for identification of IgG antibodies against C. trachomatis HSP-60 has been proven. Anti-idiotypic antibodies possessing C. trachomatis anti-HSP-60 activity, one of the causes of false-negative results in the relevant ELISA-based tests, were identified in the blood sera of individuals infected with chlamydial genitourinary infection agents.

  19. A set of hypotheses on tribology of mammalian herbivore teeth

    NASA Astrophysics Data System (ADS)

    Kaiser, Thomas M.; Clauss, Marcus; Schulz-Kornas, Ellen

    2016-03-01

    Once erupted, mammalian cheek teeth (molars) are continuously worn. Contact of molar surfaces with ingesta and with other teeth contributes to this wear. Microscopic wear features (dental surface texture) change continuously as new wear overprints old texture features. Whether these features indicate diet has been debated. The general assumption in relating occlusal textures to diet is that they are independent of masticatory movements and forces. If this assumption is not accepted, one needs to propose that occlusal textures comprise signals not only from the ‘last supper’ but also from masticatory events that represent ecological, species- or taxon-specific adaptations, and that occlusal textures therefore give a rather unspecific, somehow diet-related signal that is functionally inadequately understood. In order to test for mechanical mechanisms of wear, we created a hypothesis matrix that related sampled individuals to six tribological variables. Three variables represent mechanically relevant ingesta properties, and three represent animal-specific characteristics of the masticatory system. Three groups of mammal species (free-ranging Cetartiodactyla and Perissodactyla, free-ranging primates, and artificially fed rabbits) were investigated in terms of their 3D dental surface textures, which were quantified employing ten ISO 25178 surface texture parameters. We first formulated a set of specific predictions based on theoretical reflections on the effects of diet properties and animal characteristics, and subsequently performed discriminant analysis to test which parameters actually followed these predictions. We found that the parameters Vvc, Vmc, Sp, and Sq allowed the prediction of both ingesta properties and properties of the masticatory system when combined with other parameters. Sha, Sda and S5v had little predictive power in our dataset. Spd seemed rather unrelated to ingesta properties, which makes this parameter a suitable indicator of masticatory system properties.

  20. Monitoring the injured brain: registered, patient specific atlas models to improve accuracy of recovered brain saturation values

    NASA Astrophysics Data System (ADS)

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J. E.; Su, Zhangjie; Dehghani, Hamid

    2015-07-01

    The subject of superficial contamination and signal origins remains a widely debated topic in the field of Near Infrared Spectroscopy (NIRS), yet the concept of using the technology to monitor an injured brain, in a clinical setting, poses additional challenges concerning the quantitative accuracy of recovered parameters. Using high-density diffuse optical tomography probes, quantitatively accurate parameters from different layers (skin, bone and brain) can be recovered from subject-specific reconstruction models. This study assesses the use of registered atlas models for situations where subject-specific models are not available. Data simulated from subject-specific models were reconstructed using the eight registered atlas models, implementing a regional (layered) parameter recovery in NIRFAST. A 3-region recovery based on the atlas model yielded recovered brain saturation values accurate to within 4.6% (percentage error) of the simulated values, validating the technique. The recovered saturations in the superficial regions were not quantitatively accurate. These findings highlight differences in superficial (skin and bone) layer thickness between the subject and atlas models. This layer-thickness mismatch was propagated through the reconstruction process, decreasing the parameter accuracy.

  1. United States Air Force Summer Research Program -- 1993 Summer Research Program Final Reports. Volume 11. Arnold Engineering Development Center, Frank J. Seiler Research Laboratory, Wilford Hall Medical Center

    DTIC Science & Technology

    1993-01-01

    external parameters such as airflow, temperature, pressure, etc, are measured. Turbine Engine testing generates massive volumes of data at very high...a form that describes the signal flow graph topology as well as specific parameters of the processing blocks in the diagram. On multiprocessor...provides an interface to the symbolic builder and control functions such that parameters may be set during the build operation that will affect the

  2. Latent transition models with latent class predictors: attention deficit hyperactivity disorder subtypes and high school marijuana use

    PubMed Central

    Reboussin, Beth A.; Ialongo, Nicholas S.

    2011-01-01

    Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder which is most often diagnosed in childhood, with symptoms often persisting into adulthood. Elevated rates of substance use disorders have been evidenced among those with ADHD, but recent research focusing on the relationship between subtypes of ADHD and specific drugs is inconsistent. We propose a latent transition model (LTM) to guide our understanding of how drug use progresses, in particular marijuana use, while accounting for the measurement error that is often found in self-reported substance use data. We extend the LTM to include a latent class predictor to represent empirically derived ADHD subtypes that do not rely on meeting specific diagnostic criteria. We begin by fitting two separate latent class analysis (LCA) models by using second-order estimating equations: a longitudinal LCA model to define stages of marijuana use, and a cross-sectional LCA model to define ADHD subtypes. The LTM model parameters describing the probability of transitioning between the LCA-defined stages of marijuana use and the influence of the LCA-defined ADHD subtypes on these transition rates are then estimated by using a set of first-order estimating equations given the LCA parameter estimates. A robust estimate of the LTM parameter variance that accounts for the variation due to the estimation of the two sets of LCA parameters is proposed. Solving three sets of estimating equations enables us to determine the underlying latent class structures independently of the model for the transition rates, while simplifying assumptions about the correlation structure at each stage reduce the computational complexity. PMID:21461139

  3. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    NASA Astrophysics Data System (ADS)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
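
    As a concrete illustration of one of the PSO additions named above, the sketch below applies an exponential decay to the inertia term of the velocity update; the decay constant, bounds, and fitness function are placeholders (a real fitness would score MSER stop-sign detections on the simulated imagery).

      # PSO sketch with exponentially decaying inertia weight (maximization).
      import numpy as np

      def pso(fitness, bounds, n_particles=20, iters=100,
              w0=0.9, tau=30.0, c1=2.0, c2=2.0, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          x = rng.uniform(lo, hi, (n_particles, lo.size))
          v = np.zeros_like(x)
          pbest = x.copy()
          pbest_f = np.array([fitness(p) for p in x])
          gbest = pbest[pbest_f.argmax()]
          for t in range(iters):
              w = w0 * np.exp(-t / tau)             # exponential velocity decay
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([fitness(p) for p in x])
              improved = f > pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              gbest = pbest[pbest_f.argmax()]
          return gbest, pbest_f.max()

      # e.g. tuning two hypothetical MSER parameters (delta, max_variation)
      # against a detection score: pso(score_fn, [(1, 20), (0.05, 1.0)])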

  4. Derivation of site-specific relationships between hydraulic parameters and p-wave velocities based on hydraulic and seismic tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brauchler, R.; Doetsch, J.; Dietrich, P.

    2012-01-10

    In this study, hydraulic and seismic tomographic measurements were used to derive a site-specific relationship between the geophysical parameter p-wave velocity and the hydraulic parameters diffusivity and specific storage. Our field study includes diffusivity tomograms derived from hydraulic travel time tomography, specific storage tomograms derived from hydraulic attenuation tomography, and p-wave velocity tomograms derived from seismic tomography. The tomographic inversion was performed in all three cases with the SIRT (Simultaneous Iterative Reconstruction Technique) algorithm, using a ray tracing technique with curved trajectories. The experimental set-up was designed such that the p-wave velocity tomogram overlaps the hydraulic tomograms by half. The experiments were performed at a well-characterized sand and gravel aquifer, located in the Leine River valley near Göttingen, Germany. Access to the shallow subsurface was provided by direct-push technology. The high spatial resolution of hydraulic and seismic tomography was exploited to derive representative site-specific relationships between the hydraulic and geophysical parameters, based on the area where geophysical and hydraulic tests were performed. The transformation of the p-wave velocities into hydraulic properties was undertaken using a k-means cluster analysis. Results demonstrate that the combination of hydraulic and geophysical tomographic data is a promising approach to improve hydrogeophysical site characterization.

  5. Selected aspects of prior and likelihood information for a Bayesian classifier in a road safety analysis.

    PubMed

    Nowakowska, Marzena

    2017-04-01

    The development of the Bayesian logistic regression model classifying the road accident severity is discussed. The already exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated when no expert opinion has been available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained logistic Bayesian models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of the model accuracy has been based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.

    PubMed

    Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew

    2014-12-26

    Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.

  7. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.
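
    The core linear-algebra step described above can be illustrated in a few lines; the sensitivity matrix H below is a random stand-in for the true mapping from health parameters to engine outputs, and k is an assumed tuner dimension.

      # Truncated-SVD tuning-vector sketch (toy numbers throughout).
      import numpy as np

      rng = np.random.default_rng(1)
      H = rng.standard_normal((12, 8))   # outputs x health parameters (stand-in)
      U, s, Vt = np.linalg.svd(H, full_matrices=False)
      k = 3                              # low enough for a Kalman filter to estimate
      h = rng.standard_normal(8)         # an arbitrary degradation vector
      q = (s[:k, None] * Vt[:k]) @ h     # tuning parameters representing h
      approx = U[:, :k] @ q              # rank-k least-squares fit of H @ h
      print(np.linalg.norm(H @ h - approx) / np.linalg.norm(H @ h))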

  8. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2007-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs, such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends on knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined that accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least-squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  9. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.

  10. An Optimal Orthogonal Decomposition Method for Kalman Filter-Based Turbofan Engine Thrust Estimation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.

    2005-01-01

    A new linear point design technique is presented for the determination of tuning parameters that enable the optimal estimation of unmeasured engine outputs such as thrust. The engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters related to each major engine component. Accurate thrust reconstruction depends upon knowledge of these health parameters, but there are usually too few sensors to be able to estimate their values. In this new technique, a set of tuning parameters is determined which accounts for degradation by representing the overall effect of the larger set of health parameters as closely as possible in a least squares sense. The technique takes advantage of the properties of the singular value decomposition of a matrix to generate a tuning parameter vector of low enough dimension that it can be estimated by a Kalman filter. A concise design procedure to generate a tuning vector that specifically takes into account the variables of interest is presented. An example demonstrates the tuning parameters' ability to facilitate matching of both measured and unmeasured engine outputs, as well as state variables. Additional properties of the formulation are shown to lend themselves well to diagnostics.

  11. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
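
    For readers unfamiliar with the underlying machinery, each one-dimensional random walk in the scheme above has the basic Wang-Landau form sketched below (a periodic Ising chain is assumed purely for illustration; the constrained variant additionally restricts the walk to fixed order-parameter values).

      # Bare-bones 1D Wang-Landau random walk over energy (illustrative only).
      import numpy as np

      rng = np.random.default_rng(2)
      N = 12
      spins = rng.choice([-1, 1], N)
      def energy(s):                               # nearest-neighbour Ising ring
          return -int(np.sum(s * np.roll(s, 1)))
      levels = range(-N, N + 1, 4)                 # reachable energies of the ring
      logg = dict.fromkeys(levels, 0.0)            # log density of states
      hist = dict.fromkeys(levels, 0)
      f = 1.0                                      # modification factor
      while f > 1e-3:
          for _ in range(20000):
              i = rng.integers(N)
              e0 = energy(spins)
              spins[i] *= -1
              e1 = energy(spins)
              if np.log(rng.random()) < logg[e0] - logg[e1]:   # min(1, g0/g1)
                  e = e1
              else:
                  spins[i] *= -1                   # reject: restore spin
                  e = e0
              logg[e] += f
              hist[e] += 1
          counts = np.array(list(hist.values()))
          if counts.min() > 0.8 * counts.mean():   # histogram "flat enough"
              f /= 2.0                             # refine and reset histogram
              hist = dict.fromkeys(levels, 0)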

  12. Comparison of Spatial Correlation Parameters between Full and Model Scale Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Kenny, Jeremy; Giacomoni, Clothilde

    2016-01-01

    The current vibro-acoustic analysis tools require specific spatial correlation parameters as input to define the liftoff acoustic environment experienced by the launch vehicle. Until recently these parameters have not been very well defined. A comprehensive set of spatial correlation data were obtained during a scale model acoustic test conducted in 2014. From these spatial correlation data, several parameters were calculated: the decay coefficient, the diffuse to propagating ratio, and the angle of incidence. Spatial correlation data were also collected on the EFT-1 flight of the Delta IV vehicle which launched on December 5th, 2014. A comparison of the spatial correlation parameters from full scale and model scale data will be presented.

  13. URPD: a specific product primer design tool

    PubMed Central

    2012-01-01

    Background Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present currently available software that is aimed not at analyzing large sequences but at providing a straightforward way of visualizing the primer design process for infrequent users. Findings URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparing practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. Conclusions URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high-throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experimental PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies, have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/. PMID:22713312

  14. URPD: a specific product primer design tool.

    PubMed

    Chuang, Li-Yeh; Cheng, Yu-Huei; Yang, Cheng-Hong

    2012-06-19

    Polymerase chain reaction (PCR) plays an important role in molecular biology. Primer design fundamentally determines its results. Here, we present currently available software that is aimed not at analyzing large sequences but at providing a straightforward way of visualizing the primer design process for infrequent users. URPD (yoUR Primer Design), a web-based specific product primer design tool, combines the NCBI Reference Sequences (RefSeq), UCSC In-Silico PCR, memetic algorithm (MA) and genetic algorithm (GA) primer design methods to obtain specific primer sets. A friendly user interface is accomplished by built-in parameter settings. The incorporated smooth pipeline operations effectively guide both occasional and advanced users. URPD contains an automated process, which produces feasible primer pairs that satisfy the specific needs of the experimental design with practical PCR amplifications. Visual virtual gel electrophoresis and in silico PCR provide a simulated PCR environment. Comparing practical gel electrophoresis to virtual gel electrophoresis facilitates and verifies the PCR experiment. Wet-laboratory validation proved that the system provides feasible primers. URPD is a user-friendly tool that provides specific primer design results. The pipeline design path makes it easy to operate for beginners. URPD also provides a high-throughput primer design function. Moreover, the advanced parameter settings assist sophisticated researchers in performing experimental PCR. Several novel functions, such as a nucleotide accession number template sequence input, local and global specificity estimation, primer pair redesign, user-interactive sequence scale selection, and virtual and practical PCR gel electrophoresis discrepancies, have been developed and integrated into URPD. The URPD program is implemented in JAVA and freely available at http://bio.kuas.edu.tw/urpd/.

  15. A Verification-Driven Approach to Control Analysis and Tuning

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and in the union of these two spaces. In regard to control analysis, we propose strategies that enable bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is infeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for the improvement of the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.

  16. Effect of the cation model on the equilibrium structure of poly-L-glutamate in aqueous sodium chloride solution

    NASA Astrophysics Data System (ADS)

    Marchand, Gabriel; Soetens, Jean-Christophe; Jacquemin, Denis; Bopp, Philippe A.

    2015-12-01

    We demonstrate that different sets of Lennard-Jones parameters proposed for the Na+ ion, in conjunction with the empirical combining rules routinely used in simulation packages, can lead to essentially different equilibrium structures for a deprotonated poly-L-glutamic acid molecule (poly-L-glutamate) dissolved in a 0.3 M aqueous NaCl solution. It is, however, difficult to discriminate a priori between these model potentials; when investigating the structure of the Na+-solvation shell in bulk NaCl solution, all parameter sets lead to radial distribution functions and solvation numbers in broad agreement with the available experimental data. We do not find any such dependency of the equilibrium structure on the parameters associated with the Cl- ion. This work does not aim at recommending a particular set of parameters for any particular purpose. Instead, it stresses the model dependence of simulation results for complex systems such as biomolecules in solution and thus the difficulties if simulations are to be used for unbiased predictions, or to discriminate between contradictory experiments. However, this opens the possibility of validating a model specifically in view of analyzing experimental data believed to be reliable.
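
    The "empirical combining rules routinely used in simulation packages" are typically the Lorentz-Berthelot rules sketched below; the Na+ and water-oxygen values shown are illustrative placeholders, not a recommendation of any particular parameter set.

      # Lorentz-Berthelot combining rules for Lennard-Jones cross interactions.
      import math

      def lorentz_berthelot(sig_i, eps_i, sig_j, eps_j):
          sig_ij = 0.5 * (sig_i + sig_j)       # arithmetic mean of diameters
          eps_ij = math.sqrt(eps_i * eps_j)    # geometric mean of well depths
          return sig_ij, eps_ij

      def lj_energy(r, sig, eps):
          return 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

      # Two hypothetical Na+ models against a water-oxygen site (nm, kJ/mol):
      for sig_na, eps_na in ((0.235, 0.45), (0.33, 0.012)):
          sig, eps = lorentz_berthelot(sig_na, eps_na, 0.3166, 0.650)
          print(sig, eps, lj_energy(0.25, sig, eps))   # cross terms differ markedly

    Different published Na+ (sigma, epsilon) pairs thus yield quite different ion-water cross terms, which is one route by which the polyelectrolyte structure becomes model dependent.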

  17. User guide to the Magellan synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Wall, Stephen D.; Mcconnell, Shannon L.; Leff, Craig E.; Austin, Richard S.; Beratan, Kathi K.; Rokey, Mark J.

    1995-01-01

    The Magellan radar-mapping mission collected a large amount of science and engineering data. Now available to the general scientific community, this data set can be overwhelming to someone who is unfamiliar with the mission. This user guide outlines the mission operations and data set so that someone working with the data can understand the mapping and data-processing techniques used in the mission. Radar-mapping parameters as well as data acquisition issues are discussed. In addition, this user guide provides information on how the data set is organized and where specific elements of the set can be located.

  18. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience

    PubMed Central

    Van Geit, Werner; Gevaert, Michael; Chindemi, Giuseppe; Rössert, Christian; Courcol, Jean-Denis; Muller, Eilif B.; Schürmann, Felix; Segev, Idan; Markram, Henry

    2016-01-01

    At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases. PMID:27375471

  19. A novel field generator for magnetic stimulation in cell culture experiments.

    PubMed

    Vogt, G; Schrefl, A; Mitteregger, R; Falkenhagen, D

    1997-06-01

    A novel field generator specially designed to examine the influence of low frequency magnetic fields on specific cell material was constructed and characterized. The exposure unit described in this paper consists of a controller unit and three sets of coils. The field generator permits a precise definition of the relevant signal parameters and allows the superposition of alternating current (AC) and direct current (DC) magnetic fields. Critical system parameters were monitored continuously. The three sets of coils, each arranged in the Helmholtz configuration, were characterized. After data processing and visualization, the results showed a constant and homogeneous field within the experimental area. The special coil design also allows their use in an incubator.

  20. Effect of electric potential and current on mandibular linear measurements in cone beam CT.

    PubMed

    Panmekiate, S; Apinhasmit, W; Petersson, A

    2012-10-01

    The purpose of this study was to compare mandibular linear distances measured from cone beam CT (CBCT) images produced by different radiographic parameter settings (peak kilovoltage and milliampere value). 20 cadaver hemimandibles with edentulous ridges posterior to the mental foramen were embedded in clear resin blocks and scanned by a CBCT machine (CB MercuRay(TM); Hitachi Medico Technology Corp., Chiba-ken, Japan). The radiographic parameters comprised four peak kilovoltage settings (60 kVp, 80 kVp, 100 kVp and 120 kVp) and two milliampere settings (10 mA and 15 mA). A 102.4 mm field of view was chosen. Each hemimandible was scanned 8 times with 8 different parameter combinations resulting in 160 CBCT data sets. On the cross-sectional images, six linear distances were measured. To assess the intraobserver variation, the 160 data sets were remeasured after 2 weeks. The measurement precision was calculated using Dahlberg's formula. With the same peak kilovoltage, the measurements yielded by different milliampere values were compared using the paired t-test. With the same milliampere value, the measurements yielded by different peak kilovoltage were compared using analysis of variance. A significant difference was considered when p < 0.05. Measurement precision varied from 0.03 mm to 0.28 mm. No significant differences in the distances were found among the different radiographic parameter combinations. Based upon the specific machine in the present study, low peak kilovoltage and milliampere value might be used for linear measurements in the posterior mandible.

  1. Learning the manifold of quality ultrasound acquisition.

    PubMed

    El-Zehiry, Noha; Yan, Michelle; Good, Sara; Fang, Tong; Zhou, S Kevin; Grady, Leo

    2013-01-01

    Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have a poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and the determination of "good quality" images is more difficult to assess. Surprisingly, we show that the set of acquisition parameters which produce images that are favored by clinicians comprise a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.

  2. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    PubMed Central

    Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun

    2016-01-01

    Simulated annealing (SA) is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
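
    A compact sketch of the list-based schedule follows; the 2-opt neighbourhood, list length, and iteration budget are assumptions for illustration. The maximum temperature in the list drives the Metropolis test, and each accepted uphill move feeds a new temperature back into the list.

      # List-based SA for TSP (illustrative sketch; dist is a distance matrix).
      import math, random, heapq

      def lbsa_tsp(dist, init_temps, iters=20000):
          n = len(dist)
          tour = list(range(n))
          random.shuffle(tour)
          heap = [-t for t in init_temps]          # max-heap via negated values
          heapq.heapify(heap)
          def tour_len(t):
              return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))
          cur = tour_len(tour)
          for _ in range(iters):
              i, j = sorted(random.sample(range(n), 2))
              cand = tour[:i] + tour[i:j][::-1] + tour[j:]    # 2-opt reversal
              delta = tour_len(cand) - cur
              t_max = -heap[0]
              r = max(random.random(), 1e-12)
              if delta <= 0 or r < math.exp(-delta / t_max):
                  if delta > 0:                    # uphill accept: update the list
                      heapq.heapreplace(heap, delta / math.log(r))   # -( -delta/ln r )
                  tour, cur = cand, cur + delta
          return tour, cur

      # e.g. lbsa_tsp(dist_matrix, init_temps=[100.0] * 120)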

  3. SVM-based automatic diagnosis method for keratoconus

    NASA Astrophysics Data System (ADS)

    Gao, Yuhong; Wu, Qiang; Li, Jing; Sun, Jiande; Wan, Wenbo

    2017-06-01

    Keratoconus is a progressive cornea disease that can lead to serious myopia and astigmatism, or even to corneal transplantation if it becomes worse. Early detection of keratoconus is extremely important for knowing and controlling its condition. In this paper, we propose an automatic diagnosis algorithm for keratoconus to discriminate normal eyes from keratoconus ones. We select the parameters obtained by Oculyzer as the features of the cornea, which characterize the cornea both directly and indirectly. In our experiment, 289 normal cases and 128 keratoconus cases were divided into training and test sets respectively. Far better than other kernels, the linear kernel of the SVM achieved a sensitivity of 94.94% and a specificity of 97.87% with all parameters used to train the model. In single-parameter experiments with the linear kernel, elevation (92.03% sensitivity, 98.61% specificity) and thickness (97.28% sensitivity, 97.82% specificity) showed good classification ability. Combining elevation and thickness of the cornea, the proposed method reaches 97.43% sensitivity and 99.19% specificity. The experiments demonstrate that the proposed automatic diagnosis method is feasible and reliable.
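
    A minimal rendition of the classification step is sketched below; the feature matrix is synthetic stand-in data (the real features are the Oculyzer cornea parameters), so the printed numbers will not match the figures above.

      # Linear-kernel SVM screening sketch with synthetic stand-in features.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(0)
      X = rng.standard_normal((417, 8))            # 289 normal + 128 keratoconus
      y = np.r_[np.zeros(289), np.ones(128)]
      X[y == 1] += 1.5                             # synthetic class separation
      Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
      clf = SVC(kernel="linear").fit(Xtr, ytr)
      tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
      print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))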

  4. q-deformed Einstein's model to describe specific heat of solid

    NASA Astrophysics Data System (ADS)

    Guha, Atanu; Das, Prasanta Kumar

    2018-04-01

    Realistic phenomena can be described more appropriately using a generalized canonical ensemble, with proper parameter sets involved. We have generalized Einstein's theory for the specific heat of solids in Tsallis statistics, where temperature fluctuation is introduced into the theory via the fluctuation parameter q. At low temperature, the Einstein curve of the specific heat in the nonextensive Tsallis scenario lies exactly on the experimental data points. Consequently, this q-modified Einstein curve is found to overlap with the one predicted by Debye. Considering only the temperature fluctuation effect (even without assuming that more than one mode of vibration is triggered), we found that the CV vs T curve is as good as that obtained by considering the different modes of vibration suggested by Debye. Generalizing Einstein's theory in Tsallis statistics, we found that a unique value of the Einstein temperature θE, along with a temperature-dependent deformation parameter q(T), can well describe the specific heat of solids, i.e. the theory is equivalent to Debye's theory with a temperature-dependent θD.
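
    For reference, the standard Einstein expression being generalized (recovered in the q → 1 limit) is:

      % Einstein specific heat per mole; \theta_E is the Einstein temperature,
      % R the gas constant. The abstract's q-deformation modifies the
      % Bose-Einstein occupation factor and reduces to this form as q -> 1.
      C_V = 3R \left(\frac{\theta_E}{T}\right)^{2}
            \frac{e^{\theta_E/T}}{\left(e^{\theta_E/T} - 1\right)^{2}}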

  5. Hearing Aids and Music

    PubMed Central

    Chasin, Marshall; Russo, Frank A.

    2004-01-01

    Historically, the primary concern for hearing aid design and fitting is optimization for speech inputs. However, increasingly other types of inputs are being investigated and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression issues—both compression ratio and knee-points—and number of channels all can deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a “music program,” unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions. PMID:15497032

  6. Electronegativity equalization method: parameterization and validation for organic molecules using the Merz-Kollman-Singh charge distribution scheme.

    PubMed

    Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav

    2009-05-01

    The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i), B(i), and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology which was recently successfully applied to EEM parameterization for calculating HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, we have developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had been parameterized for this level of theory and basis set so far. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges. 2008 Wiley Periodicals, Inc.

  7. 40 CFR 63.751 - Monitoring requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...

  8. 40 CFR 63.751 - Monitoring requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...

  9. 40 CFR 63.751 - Monitoring requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...

  10. 40 CFR 63.751 - Monitoring requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...

  11. 40 CFR 63.751 - Monitoring requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) National Emission Standards for Aerospace Manufacturing and Rework Facilities § 63.751 Monitoring... specific monitoring procedures; (ii) Set the operating parameter value, or range of values, that... monitoring data. (1) The data may be recorded in reduced or nonreduced form (e.g., parts per million (ppm...

  12. Validation of the Predictive Value of Modeled Human Chorionic Gonadotrophin Residual Production in Low-Risk Gestational Trophoblastic Neoplasia Patients Treated in NRG Oncology/Gynecologic Oncology Group-174 Phase III Trial.

    PubMed

    You, Benoit; Deng, Wei; Hénin, Emilie; Oza, Amit; Osborne, Raymond

    2016-01-01

    In low-risk gestational trophoblastic neoplasia, chemotherapy effect is monitored and adjusted with serum human chorionic gonadotrophin (hCG) levels. Mathematical modeling of hCG kinetics may allow prediction of methotrexate (MTX) resistance, via the production parameter "hCGres." This approach was evaluated using the GOG-174 (NRG Oncology/Gynecologic Oncology Group-174) trial database, in which weekly MTX (arm 1) was compared with dactinomycin (arm 2). The database (210 patients, including 78 with resistance) was split into 2 sets. A 126-patient training set was initially used to estimate model parameters. Patient hCG kinetics from days 7 to 45 were fit to: [hCG(time)] = hCG7 * exp(-k * time) + hCGres, where hCGres is residual hCG tumor production, hCG7 is the initial hCG level, and k is the elimination rate constant. Receiver operating characteristic (ROC) analyses defined a putative hCGres predictor of resistance. An 84-patient test set was used to assess prediction validity. The hCGres was predictive of outcome in both arms, with no impact of treatment arm on the unexplained variability of the kinetic parameter estimates. The best hCGres cutoffs to discriminate resistant versus sensitive patients were 7.7 and 74.0 IU/L in arms 1 and 2, respectively. By combining them, 2 predictive groups were defined (ROC area under the curve, 0.82; sensitivity, 93.8%; specificity, 70.5%). The predictive value of the hCGres-based groups regarding resistance was reproducible in the test set (ROC area under the curve, 0.81; sensitivity, 88.9%; specificity, 73.1%). Both hCGres and treatment arm were associated with resistance by logistic regression analysis. The early predictive value of the modeled kinetic parameter hCGres regarding resistance seems promising in the GOG-174 study. This is the second positive evaluation of this approach. Prospective validation is warranted.
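
    The quoted kinetic model is straightforward to fit per patient. The sketch below uses synthetic weekly hCG values (all numbers hypothetical); it is not the estimation machinery of the study, which relied on population modeling.

      # Fit hCG(t) = hCG7 * exp(-k * t) + hCGres to synthetic weekly samples.
      import numpy as np
      from scipy.optimize import curve_fit

      def hcg(t, hcg7, k, hcg_res):
          return hcg7 * np.exp(-k * t) + hcg_res

      days = np.arange(7.0, 46.0, 7.0)             # samples on days 7-42
      truth = (2000.0, 0.25, 12.0)                 # hypothetical patient
      rng = np.random.default_rng(3)
      obs = hcg(days, *truth) * np.exp(rng.normal(0.0, 0.1, days.size))
      (hcg7, k, hcg_res), _ = curve_fit(hcg, days, obs,
                                        p0=(1000.0, 0.1, 1.0), bounds=(0, np.inf))
      print("hCGres =", hcg_res, "IU/L")           # vs the 7.7 / 74.0 IU/L cutoffs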

  13. DD3MAT - a code for yield criteria anisotropy parameters identification.

    NASA Astrophysics Data System (ADS)

    Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.

    2016-08-01

    This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying the anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can include yield stresses and r-values obtained from in-plane tension for different angles to the rolling direction (RD), the yield stress and r-value obtained for the biaxial stress state, and yield stresses from shear tests, also performed for different angles to the RD. All these values can be defined for a specific value of plastic work. Moreover, it can also include the yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention in improving the numerical fit.
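
    The minimization loop at the heart of such a code can be illustrated with SciPy's downhill simplex (Nelder-Mead); the two-parameter yield-stress model and the data below are toy stand-ins, not the actual yield criteria handled by DD3MAT.

      # Downhill-simplex identification of anisotropy-like parameters (toy model).
      import numpy as np
      from scipy.optimize import minimize

      angles = np.radians([0.0, 45.0, 90.0])            # tensile test directions
      meas = np.array([1.00, 0.96, 1.02])               # normalized yield stresses

      def model(params, a):                             # hypothetical 2-parameter fit
          c1, c2 = params
          return 1.0 + c1 * np.cos(2 * a) + c2 * np.cos(4 * a)

      def error(params):                                # least-squares error function
          return np.sum((model(params, angles) - meas) ** 2)

      res = minimize(error, x0=[0.0, 0.0], method="Nelder-Mead")
      print(res.x, res.fun)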

  14. Genetic Algorithm Application in Optimization of Wireless Sensor Networks

    PubMed Central

    Norouzi, Ali; Zaim, A. Halim

    2014-01-01

    There are several known applications for wireless sensor networks (WSN), and such variety demands improvement of the currently available protocols and their specific parameters. Notable parameters include the lifetime of the network and the energy consumption of routing, which play a key role in every application. The genetic algorithm is a nonlinear optimization method and a comparatively good option thanks to its efficiency for large-scale applications and the fact that the final formula can be modified by operators. The present survey attempts a comprehensive improvement in all operational stages of a WSN, including node placement, network coverage, clustering, and data aggregation, to achieve an ideal set of parameters for routing and application-based WSN. Using a genetic algorithm and based on the results of simulations in NS, a specific fitness function was achieved, optimized, and customized for all the operational stages of WSNs. PMID:24693235

  15. Aerodynamic configuration design using response surface methodology analysis

    NASA Technical Reports Server (NTRS)

    Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit

    1993-01-01

    An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.

  16. Architecture and settings optimization procedure of a TES frequency domain multiplexed readout firmware

    NASA Astrophysics Data System (ADS)

    Clenet, A.; Ravera, L.; Bertrand, B.; den Hartog, R.; Jackson, B.; van Leeuwen, B.-J.; van Loon, D.; Parot, Y.; Pointecouteau, E.; Sournac, A.

    2014-11-01

    IRAP is developing the readout electronics of the SPICA-SAFARI TES bolometer arrays. Based on the frequency-domain multiplexing technique, the readout electronics provides the AC signals to voltage-bias the detectors; it demodulates the data; and it computes a feedback to linearize the detection chain. The feedback is computed with a specific technique, the so-called baseband feedback (BBFB), which ensures that the loop is stable even with long propagation and processing delays (i.e. several μs) and with fast signals (i.e. frequency carriers of the order of 5 MHz). To optimize the power consumption we took advantage of the reduced science signal bandwidth to decouple the signal sampling frequency from the data processing rate. This technique allowed a reduction of the power consumption of the circuit by a factor of 10. Beyond the firmware architecture, the optimization of the instrument concerns the characterization routines and the definition of optimal parameters. Indeed, to operate a TES array one has to properly define about 21000 parameters. We defined a set of procedures to automatically characterize these parameters and find the optimal settings.

  17. Acute Perioperative Comparison of Patient-Specific Instrumentation versus Conventional Instrumentation Utilization during Bilateral Total Knee Arthroplasty.

    PubMed

    Steimle, Jerrod A; Groover, Michael T; Webb, Brad A; Ceccarelli, Brian J

    2018-01-01

    Utilizing patient-specific instrumentation during total knee arthroplasty has gained popularity in recent years, with theoretical advantages in blood loss, intraoperative time, length of stay, postoperative alignment, and functional outcome, among others. No study has compared acute perioperative measures between patient-specific and conventional instrumentation in the bilateral total knee arthroplasty setting. We compared patient-specific instrumentation versus conventional instrumentation in the setting of bilateral total knee arthroplasty to determine any benefits in the immediate perioperative period, including surgical time, blood loss, pain medication use, length of stay, and discharge disposition. A total of 49 patients with standard instrumentation and 31 patients with patient-specific instrumentation were retrospectively reviewed over a two-year period at one facility. At baseline, the groups were comparable with respect to age, ASA score, BMI, and comorbid conditions. We analyzed data on operative time, blood loss, hemoglobin change, need for transfusion, pain medication use, length of stay, and discharge disposition. There was no statistically significant difference between groups with regard to these parameters. Patient-specific instrumentation in the setting of bilateral total knee arthroplasty did not provide any immediate perioperative benefit compared to conventional instrumentation.

  18. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  19. Hyperspectral data discrimination methods

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Chen, Xuewen

    2000-12-01

    Hyperspectral data provide spectral response information that yields a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistically small training set sizes, and the need to employ discriminatory features while still achieving good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin-infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS), in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.

  1. Population of Computational Rabbit-Specific Ventricular Action Potential Models for Investigating Sources of Variability in Cellular Repolarisation

    PubMed Central

    Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T. Alexander

    2014-01-01

    Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K+, inward rectifying K+, L-type Ca2+, and Na+/K+ pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation. PMID:24587229
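
    The population itself is a grid of conductance scaling factors; a sketch of how the 5^6 = 15,625 parameter sets could be generated and filtered follows (the scaling levels and the APD bounds are placeholders, not the experimentally derived rabbit ranges).

        import itertools

        currents = ["g_to", "g_Kr", "g_Ks", "g_K1", "g_CaL", "g_NaK"]
        levels = [0.5, 0.75, 1.0, 1.25, 1.5]   # assumed scaling factors

        # 5 levels for each of 6 conductances: 5**6 = 15,625 parameter sets,
        # matching the initial population size quoted above.
        population = [dict(zip(currents, combo))
                      for combo in itertools.product(levels, repeat=len(currents))]

        def keep(params, simulate_apd, apd_lo=150.0, apd_hi=300.0):
            """Retain a parameter set if its simulated action potential
            duration falls inside the experimental range (bounds assumed);
            simulate_apd stands in for a run of the Shannon or Mahajan model."""
            return apd_lo <= simulate_apd(params) <= apd_hi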

  2. Leaf photosynthesis and respiration of three bioenergy crops in relation to temperature and leaf nitrogen: how conserved are biochemical model parameters among crop species?

    PubMed Central

    Archontoulis, S. V.; Yin, X.; Vos, J.; Danalatos, N. G.; Struik, P. C.

    2012-01-01

    Given the need for parallel increases in food and energy production from crops in the context of global change, crop simulation models, and the data sets that feed these models with photosynthesis and respiration parameters, are increasingly important. This study provides information on photosynthesis and respiration for three energy crops (sunflower, kenaf, and cynara), reviews relevant information for five other crops (wheat, barley, cotton, tobacco, and grape), and assesses how conserved photosynthesis parameters are among crops. Using large data sets and optimization techniques, the C3 leaf photosynthesis model of Farquhar, von Caemmerer, and Berry (FvCB) and an empirical night respiration model, accounting for effects of temperature and leaf nitrogen, were parameterized for the tested energy crops. Instead of the common approach of using information on the net photosynthesis response to CO2 at the stomatal cavity (An–Ci), the model was parameterized by analysing the photosynthesis response to incident light intensity (An–Iinc). Convincing evidence is provided that the maximum Rubisco carboxylation rate or the maximum electron transport rate was very similar whether derived from An–Ci or from An–Iinc data sets. Parameters characterizing Rubisco limitation, electron transport limitation, the degree to which light inhibits leaf respiration, night respiration, and the minimum leaf nitrogen required for photosynthesis were then determined. Model predictions were validated against independent data sets. Only a few FvCB parameters were conserved among crop species; thus species-specific FvCB model parameters are needed for crop modelling. Therefore, information from readily available but underexplored An–Iinc data should be re-analysed, thereby expanding the potential of combining classical photosynthetic data with the biochemical model. PMID:22021569
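
    The FvCB model equations referenced here are standard; a minimal sketch of the net assimilation rate as the minimum of the Rubisco-limited and electron-transport-limited rates is given below (the kinetic constants are common 25 °C literature values, not the parameters fitted in this study).

        def fvcb_net_assimilation(Ci, Vcmax, J, Rd,
                                  gamma_star=42.75, Kc=404.9, Ko=278.4, O=210.0):
            """Net CO2 assimilation An = min(Ac, Aj) - Rd.
            Ci, Kc, gamma_star in umol/mol; Ko and O in mmol/mol;
            rates in umol m^-2 s^-1."""
            Ac = Vcmax * (Ci - gamma_star) / (Ci + Kc * (1.0 + O / Ko))  # Rubisco-limited
            Aj = J * (Ci - gamma_star) / (4.0 * Ci + 8.0 * gamma_star)   # light-limited
            return min(Ac, Aj) - Rd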

  3. Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model

    PubMed Central

    Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.

    2013-01-01

    One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874

  4. A SIMPLE MODEL FOR THE UPTAKE, TRANSLOCATION, AND ACCUMULATION OF PERCHLORATE IN TOBACCO PLANTS

    EPA Science Inventory

    A simple mathematical model is being developed to describe the uptake, translocation, and accumulation of perchlorate in tobacco plants. The model defines a plant as a set of compartments, consisting of mass balance differential equations and plant-specific physiological paramet...

  5. Improving Upon an Empirical Procedure for Characterizing Magnetospheric States

    NASA Astrophysics Data System (ADS)

    Fung, S. F.; Neufeld, J.; Shao, X.

    2012-12-01

    Work is being performed to improve upon an empirical procedure for describing and predicting the states of the magnetosphere [Fung and Shao, 2008]. We showed in our previous paper that the state of the magnetosphere can be described by a quantity called the magnetospheric state vector (MS vector) consisting of a concatenation of a set of driver-state and a set of response-state parameters. The response state parameters are time-shifted individually to account for their nominal response times so that time does not appear as an explicit parameter in the MS prescription. The MS vector is thus conceptually analogous to the set of vital signs for describing the state of health of a human body. In that previous study, we further demonstrated that since response states are results of driver states, then there should be a correspondence between driver and response states. Such correspondence can be used to predict the subsequent response state from any known driver state with a few hours' lead time. In this paper, we investigate a few possible ways to improve the magnetospheric state descriptions and prediction efficiency by including additional driver state parameters, such as solar activity, IMF-Bx and -By, and optimizing parameter bin sizes. Fung, S. F. and X. Shao, Specification of multiple geomagnetic responses to variable solar wind and IMF input, Ann. Geophys., 26, 639-652, 2008.

  6. Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments

    NASA Astrophysics Data System (ADS)

    Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel

    2017-03-01

    This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the option to cross-check model suitability by applying the procedure in the ascending and descending directions of the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
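
    The core idea, feeding each spectrum's fitted parameters in as the starting guess for the next, can be sketched as follows; the fit_spectrum routine and its parameter structure are placeholders for whatever NFS fitting engine is used.

        def sequential_fit(spectra, initial_guess, fit_spectrum, reverse=False):
            """Fit a time-ordered series of spectra, seeding each fit with the
            previous result; running the procedure in both directions provides
            the model-suitability cross-check described in the abstract."""
            ordered = list(reversed(spectra)) if reverse else list(spectra)
            results, params = [], initial_guess
            for spectrum in ordered:
                params = fit_spectrum(spectrum, start=params)  # placeholder API
                results.append(params)
            return list(reversed(results)) if reverse else results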

  7. SASS Applied to Optimum Work Roll Profile Selection in the Hot Rolling of Wide Steel

    NASA Astrophysics Data System (ADS)

    Nolle, Lars

    The quality of steel strip produced in a wide strip rolling mill depends heavily on the careful selection of initial ground work roll profiles for each of the mill stands in the finishing train. In the past, these profiles were determined by human experts, based on their knowledge and experience. In previous work, the profiles were successfully optimised using a self-organising migration algorithm (SOMA). In this research, SASS, a novel heuristic optimisation algorithm that has only one control parameter, has been used to find the optimum profiles for a simulated rolling mill. The resulting strip quality produced using the profiles found by SASS is compared with results from previous work and with the quality produced using the original profile specifications. The best set of profiles found by SASS clearly outperformed the original set and performed as well as SOMA, without the need to find a suitable set of control parameters.

  8. GBAS GAST D availability analysis for business aircraft

    NASA Astrophysics Data System (ADS)

    Dvorska, J.; Lipp, A.; Podivin, L.; Duchet, D.

    This paper analyzes Initial GBAS GAST D availability at a set of current ILS CAT III airports. The Eurocontrol Pegasus Availability Tool, designed for SESAR projects, is used for the assessment. The overall availability of the GBAS GAST D system is considered, focusing on business aircraft specifics where applicable. Nominal as well as adverse scenarios are presented in order to determine whether GAST D can reach the required availability for business aircraft at CAT III airport locations, and under which conditions. The availability target was set at 99.9% when considering satellite outages in a given constellation and 99.997% when no outages are included. Sensitivity simulations were run for different scenarios, and the impacts of geometry screening thresholds, scale heights, aircraft mask, ground mask, sigma pseudorange ground, sigma ionospheric gradient, simulated year, and different approach points (decision heights) were analyzed. Some simulations were run for a limited set of ILS CAT III airports and most for an almost complete set of nominal airports. Business-aircraft-specific assumptions, as well as aircraft-type-independent parameters (constellations, satellite outages, etc.), are examined in the paper. The conclusion summarizes the overall outcome of the simulations, showing that Initial GBAS CAT II/III can provide sufficient availability for all or almost all ILS CAT III capable airports considered in this study, and under which conditions. Recommendations are provided for parameters that can be influenced (e.g., antenna location) if necessary. It can also be expected that availability will increase with the increasing number of GNSS satellites. The work shows how different parameters impact the availability of the Initial GBAS GAST D service for business aircraft, and that sufficient availability of the GAST D service can be expected at most airports.

  9. First-order kinetic gas generation model parameters for wet landfills.

    PubMed

    Faour, Ayman A; Reinhart, Debra R; You, Huaxin

    2007-01-01

    Landfill gas collection data from wet landfill cells were analyzed and first-order gas generation model parameters were estimated for the US EPA landfill gas emissions model (LandGEM). Parameters were determined through statistical comparison of predicted and actual gas collection. The US EPA LandGEM model appeared to fit the data well, provided it is preceded by a lag phase, which on average was 1.5 years. The first-order reaction rate constant, k, and the methane generation potential, L0, were estimated for a set of landfills with short-term waste placement and long-term gas collection data. Mean and 95% confidence parameter estimates for these data sets were found using mixed-effects model regression followed by bootstrap analysis. The mean values for the specific methane volume produced during the lag phase (Vsto), L0, and k were 33 m^3/Megagram (Mg), 76 m^3/Mg, and 0.28 year^-1, respectively. Parameters were also estimated for three full-scale wet landfills where waste was placed over many years. The k and L0 estimated for these landfills were 0.21 year^-1 and 115 m^3/Mg, 0.11 year^-1 and 95 m^3/Mg, and 0.12 year^-1 and 87 m^3/Mg, respectively. A group of data points from wet landfill cells with short-term data were also analyzed. A conservative set of parameter estimates was suggested, based on the upper 95% confidence interval parameters, as a k of 0.3 year^-1 and an L0 of 100 m^3/Mg if design is optimized and the lag is minimized.
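
    The first-order model underlying LandGEM is public; a minimal sketch of the generation rate for a single waste placement with the lag phase reported above is given below (LandGEM's year-by-year waste acceptance and sub-year discretization are simplified away).

        import math

        def methane_rate(t, waste_mass_mg, k=0.28, L0=76.0, lag=1.5):
            """First-order methane generation rate (m^3/year) at time t (years)
            after placement of a waste mass (Mg), zero during the lag phase.
            Defaults are the mean wet-cell estimates quoted in the abstract."""
            if t < lag:
                return 0.0
            return k * L0 * waste_mass_mg * math.exp(-k * (t - lag))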

  10. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states, using simulations performed at only a few sampled states combined with single-point energy reevaluations of those samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of the simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables, compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance on which simulation parameters are both appropriate and computationally efficient in general.
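
    MBAR itself is implemented in the pymbar package; as a self-contained illustration of the underlying reweighting idea, the sketch below uses the simplest single-state exponential reweighting (the Zwanzig relation) to estimate an observable at an unsampled state from samples drawn at one sampled state.

        import numpy as np

        def reweighted_average(obs, u_sampled, u_target):
            """Estimate <obs> at a target state from samples of a sampled
            state, given reduced potentials u = U/kT of each sample evaluated
            in both states; weights are proportional to exp(-(u_target - u_sampled))."""
            du = u_target - u_sampled
            w = np.exp(-(du - du.min()))   # shift for numerical stability
            w /= w.sum()
            return float(np.sum(w * obs))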

  11. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM), when paired with Förster resonance energy transfer (FLIM-FRET), enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), as validated in silico and in vivo. This nearly order-of-magnitude reduction in the number of needed time points makes FLIM-FRET feasible for certain high-throughput applications that would be impractical if the full set of time sampling points were used.
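
    D-optimality maximizes det(J^T J) of the sensitivity matrix J (rows are candidate time points, columns are model parameters); a greedy forward-selection sketch follows, with J left abstract since it would come from the FLIM-FRET decay model.

        import numpy as np

        def greedy_d_optimal(J, n_points):
            """Greedily add time points (rows of J) that maximize the
            D-optimality criterion det(J_sel^T J_sel). Until the selection
            holds at least as many points as parameters the determinant is
            zero, so early picks are tie-broken by row order."""
            selected, remaining = [], list(range(J.shape[0]))
            for _ in range(n_points):
                best_i, best_det = remaining[0], -np.inf
                for i in remaining:
                    Js = J[selected + [i], :]
                    det = np.linalg.det(Js.T @ Js)
                    if det > best_det:
                        best_i, best_det = i, det
                selected.append(best_i)
                remaining.remove(best_i)
            return sorted(selected)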

  12. Reference intervals for 24 laboratory parameters determined in 24-hour urine collections.

    PubMed

    Curcio, Raffaele; Stettler, Helen; Suter, Paolo M; Aksözen, Jasmin Barman; Saleh, Lanja; Spanaus, Katharina; Bochud, Murielle; Minder, Elisabeth; von Eckardstein, Arnold

    2016-01-01

    Reference intervals for many laboratory parameters determined in 24-h urine collections are either not publicly available, based on small numbers, not sex-specific, or not derived from a representative sample. Osmolality and the concentrations or enzymatic activities of sodium, potassium, chloride, glucose, creatinine, citrate, cortisol, pancreatic α-amylase, total protein, albumin, transferrin, immunoglobulin G, α1-microglobulin, α2-macroglobulin, as well as porphyrins and their precursors (δ-aminolevulinic acid and porphobilinogen), were determined in 241 24-h urine samples from a population-based cohort of asymptomatic adults (121 men and 120 women). For 16 of these 24 parameters, creatinine-normalized ratios were calculated based on 24-h urine creatinine. The reference intervals for these parameters were calculated according to the CLSI C28-A3 statistical guidelines. In contrast to most published reference intervals, which do not stratify by sex, the reference intervals of 12 of the 24 laboratory parameters in 24-h urine collections, and of eight of the 16 parameters expressed as creatinine-normalized ratios, differed significantly between men and women. For six parameters calculated as 24-h urine excretion and four parameters calculated as creatinine-normalized ratios, no reference intervals had been published before. For some parameters we found significant and relevant deviations from previously reported reference intervals, most notably for 24-h urine cortisol in women. Ten 24-h urine parameters showed weak or moderate sex-specific correlations with age. By applying up-to-date analytical methods and clinical chemistry analyzers to 24-h urine collections from a large population-based cohort, we provide the most comprehensive set to date of sex-specific reference intervals, calculated according to CLSI guidelines, for parameters determined in 24-h urine collections.
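
    The basic nonparametric reference interval recommended by CLSI C28-A3 is the central 95% of the reference distribution; a minimal sketch is shown below (the guideline's exact rank rules and minimum sample size requirements are simplified to numpy's default percentile interpolation).

        import numpy as np

        def reference_interval(values, low_pct=2.5, high_pct=97.5):
            """Nonparametric reference interval: the central 95% of observed
            values, computed separately per sex as in the study."""
            v = np.asarray(values, dtype=float)
            return float(np.percentile(v, low_pct)), float(np.percentile(v, high_pct))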

  13. Accurate and Fully Automatic Hippocampus Segmentation Using Subject-Specific 3D Optimal Local Maps Into a Hybrid Active Contour Model

    PubMed Central

    Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-01-01

    Assessing the structural integrity of the hippocampus (HC) is an essential step toward the prevention, diagnosis, and follow-up of various brain disorders, due to the implication of structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method, used on top of the multiatlas concept, for HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated, so heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically most suitable for the test image. The proposed algorithm was tested on three different, publicly available data sets. Its accuracy was compared with that of state-of-the-art methods, demonstrating the efficacy and robustness of the proposed method. PMID:27170866

  14. Fast and automatic depth control of iterative bone ablation based on optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-07-01

    Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation incorporates a feedback system and enables safe ablation towards anatomical structures that would usually be at high risk of damage. This study is based on the combined working volumes of optical coherence tomography (OCT) and an Er:YAG cutting laser. A high level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems, a cross-like incision is ablated with the Er:YAG laser and segmented with OCT at three distances; the resulting Er:YAG coordinate system is then reconstructed. A parameter list defines multiple sets of laser parameters, including discrete and specific ablation rates, as the ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle, consisting of image processing, path planning, ablation, and moistening of tissue, the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone were conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline, without visual signs of thermal damage, verify the feasibility of automated, OCT-controlled laser bone ablation with minimal process time.

  15. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described, and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  16. Comparative evaluation of topographical data of dental implant surfaces applying optical interferometry and scanning electron microscopy.

    PubMed

    Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F

    2017-08-01

    The comparability of topographical data of implant surfaces in the literature is low, and their clinical relevance is often equivocal. The aim of this study was to investigate whether scanning electron microscopy and optical interferometry yield statistically similar 3-dimensional roughness parameter results, and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial, and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50 μm and 5 μm cut-off filters), respectively. Data were statistically compared by one-way ANOVA and the Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used, based on predefined threshold criteria for each roughness parameter. The two methods exhibited predominantly statistical differences. Depending on the roughness parameters and filter settings, both methods showed variations in the rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the pure statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and further seeking proper experimental settings. Furthermore, the specific role of different roughness parameters in the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent, and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  18. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas

    PubMed Central

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-01-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red–green–blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%–50% and 70%–40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that balancing all the proposed parameters is useful for optimizing mission planning and image processing, with altitude above ground level (AGL) being the main parameter because of its influence on root mean square error (RMSE). PMID:27809293

  19. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and they regulate evapotranspiration, infiltration, and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias, and parameter inconsistencies across spatial scales due to the nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes these problems. The approach is based on Miller-Miller scaling in the relaxed form of Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given, which enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
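
    The Mualem-van Genuchten relations used for the WRC and HCC are standard; a minimal sketch follows (actual parameter values would come from the ROSETTA PTF, so none are assumed here).

        def vg_theta(h, theta_r, theta_s, alpha, n):
            """van Genuchten water retention curve theta(h) for suction head
            h > 0; alpha in 1/cm, n dimensionless."""
            m = 1.0 - 1.0 / n
            Se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation
            return theta_r + (theta_s - theta_r) * Se

        def mualem_relative_K(Se, n, L=0.5):
            """Mualem relative conductivity K/Ks as a function of effective
            saturation Se, with the usual tortuosity exponent L = 0.5."""
            m = 1.0 - 1.0 / n
            return Se ** L * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2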

  20. Straussian Grounded-Theory Method: An Illustration

    ERIC Educational Resources Information Center

    Thai, Mai Thi Thanh; Chong, Li Choy; Agrawal, Narendra M.

    2012-01-01

    This paper demonstrates the benefits and application of Straussian Grounded Theory method in conducting research in complex settings where parameters are poorly defined. It provides a detailed illustration on how this method can be used to build an internationalization theory. To be specific, this paper exposes readers to the behind-the-scene work…

  1. Development of a Standard Set of Software Indicators for Aeronautical Systems Center.

    DTIC Science & Technology

    1992-09-01

    The composite models listed include COCOMO and the Software Productivity, Quality, and Reliability Model (SPQR) (29:12). The SPQR model was...determine the values of the 68 input parameters. Source provides no specifics. Indicator Name: SPQR (SW Productivity, Qual, Reliability). Indicator Class

  2. (Im)Politeness in Cross-Cultural Encounters

    ERIC Educational Resources Information Center

    House, Juliane

    2012-01-01

    In this article I will first discuss the notions of "politeness" and "impoliteness" including a multilevel model of politeness and impoliteness that relates universal levels to culture- and language-specific ones. Given this framework and my earlier postulation of a set of parameters along which members of two linguacultures differ in terms of…

  3. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554
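
    The abstract describes adaptation laws driven by the error, squared error, and integral error between the observed and set-point specific growth rate; a schematic discrete-time sketch is given below (the gain values and the exact combination are illustrative guesses, not the paper's actual laws).

        def adapt_gain(K, e, e_int, dt, g1=0.01, g2=0.005, g3=0.001):
            """One online adaptation step for a controller gain K.
            e: current set-point error; e_int: running integral of e.
            The e*|e| term is a sign-preserving 'squared error' (assumed)."""
            e_int = e_int + e * dt
            K = K + dt * (g1 * e + g2 * e * abs(e) + g3 * e_int)
            return K, e_int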

  4. Chirality-specific lift forces of helix under shear flows: Helix perpendicular to shear plane.

    PubMed

    Zhang, Qi-Yi

    2017-02-01

    Chiral objects in shear flow experience a chirality-specific lift force. Shear flows past helices in a low Reynolds number regime were studied using slender-body theory. The chirality-specific lift forces in the vorticity direction experienced by helices are dominated by a set of helix geometry parameters: helix radius, pitch length, number of turns, and helix phase angle. Its analytical formula is given. The chirality-specific forces are the physical reasons for the chiral separation of helices in shear flow. Our results are well supported by the latest experimental observations. © 2016 Wiley Periodicals, Inc.

  5. Intravoxel Incoherent Motion and Quantitative Non-Gaussian Diffusion MR Imaging: Evaluation of the Diagnostic and Prognostic Value of Several Markers of Malignant and Benign Breast Lesions.

    PubMed

    Iima, Mami; Kataoka, Masako; Kanao, Shotaro; Onishi, Natsuko; Kawai, Makiko; Ohashi, Akane; Sakaguchi, Rena; Toi, Masakazu; Togashi, Kaori

    2018-05-01

    Purpose: To investigate the performance of integrated approaches that combine intravoxel incoherent motion (IVIM) and non-Gaussian diffusion parameters, compared with the Breast Imaging Reporting and Data System (BI-RADS), to establish multiparameter threshold scores or Bayesian probabilities for distinguishing malignant from benign breast lesions, and to assess their correlation with molecular prognostic factors. Materials and Methods: Between May 2013 and March 2015, 411 patients were prospectively enrolled, and 199 patients (allocated to training [n = 99] and validation [n = 100] sets) were included in this study. IVIM parameters (flowing blood volume fraction [fIVIM] and pseudodiffusion coefficient [D*]) and non-Gaussian diffusion parameters (theoretical apparent diffusion coefficient at a b value of 0 sec/mm^2 [ADC0] and kurtosis [K]) were estimated with the IVIM and kurtosis models from diffusion-weighted image series (16 b values up to 2500 sec/mm^2), as well as a synthetic ADC calculated by using b values of 200 and 1500 sec/mm^2 (sADC200-1500) and a standard ADC calculated by using b values of 0 and 800 sec/mm^2 (ADC0-800). The performance of two diagnostic approaches combining IVIM and diffusion parameters (combined parameter thresholds and Bayesian analysis) was evaluated and compared with BI-RADS performance. The Mann-Whitney U test and a nonparametric multiple comparison test were used to compare their performance in determining benignity or malignancy and as molecular prognostic biomarkers and subtypes of breast cancer. Results: Significant differences were found between malignant and benign breast lesions for IVIM and non-Gaussian diffusion parameters (ADC0, K, fIVIM, fIVIM · D*, sADC200-1500, and ADC0-800; P < .05). Sensitivity and specificity for the validation set by radiologists A and B were as follows: sensitivity, 94.7% and 89.5%, and specificity, 75.0% and 79.2%, for sADC200-1500, respectively; sensitivity, 94.7% and 96.1%, and specificity, 75.0% and 66.7%, for the combined thresholds approach, respectively; sensitivity, 92.1% and 92.1%, and specificity, 83.3% and 66.7%, for Bayesian analysis, respectively; and sensitivity and specificity, 100% and 79.2%, for BI-RADS, respectively. A significant difference in sADC200-1500 values according to progesterone receptor status (P = .002) was noted. sADC200-1500 also differed significantly between histologic subtypes (P = .006). Conclusion: Approaches that combine various IVIM and non-Gaussian diffusion MR imaging parameters may provide scores almost comparable to BI-RADS categories without the use of contrast agents. Non-Gaussian diffusion parameters also differed by biologic prognostic factors. © RSNA, 2017. Online supplemental material is available for this article.
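
    The IVIM signal model and the synthetic ADC used above are standard definitions; a minimal sketch of both follows (signal normalization by S0 and the actual 16-b-value fitting are omitted).

        import numpy as np

        def ivim_signal(b, f_ivim, D_star, D):
            """IVIM biexponential model:
            S(b)/S0 = f*exp(-b*D*) + (1 - f)*exp(-b*D)."""
            return f_ivim * np.exp(-b * D_star) + (1.0 - f_ivim) * np.exp(-b * D)

        def synthetic_adc(S_b1, S_b2, b1=200.0, b2=1500.0):
            """Synthetic ADC from two b values (sec/mm^2):
            sADC = ln(S(b1)/S(b2)) / (b2 - b1)."""
            return np.log(S_b1 / S_b2) / (b2 - b1)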

  6. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.

  7. Deriving amplification factors from simple site parameters using generalized regression neural networks: implications for relevant site proxies

    NASA Astrophysics Data System (ADS)

    Boudghene Stambouli, Ahmed; Zendagui, Djawad; Bard, Pierre-Yves; Derras, Boumédiène

    2017-07-01

    Most modern seismic codes account for site effects using an amplification factor (AF) that modifies the rock acceleration response spectra in relation to a "site condition proxy," i.e., a parameter related to the velocity profile at the site under consideration. Therefore, for practical purposes, it is interesting to identify the site parameters that best control the frequency-dependent shape of the AF. The goal of the present study is to provide a quantitative assessment of the performance of various site condition proxies to predict the main AF features, including the often used short- and mid-period amplification factors, Fa and Fv, proposed by Borcherdt (Earthq Spectra 10:617-653, 1994). In this context, the linear viscoelastic responses of a set of 858 actual soil columns from Japan, the USA, and Europe are computed for a set of 14 real accelerograms with varying frequency contents. The correlation between the corresponding site-specific average amplification factors and several site proxies (considered alone or in multiple combinations) is analyzed using a generalized regression neural network (GRNN). The performance of each site proxy combination is assessed through the variance reduction with respect to the initial amplification factor variability of the 858 profiles. Both the whole period range and the specific short- and mid-period ranges associated with the Borcherdt factors Fa and Fv are considered. The actual amplification factor of an arbitrary soil profile is found to be satisfactorily approximated with a limited number of site proxies (4-6). As usual code practice implies a lower number of site proxies (generally one, sometimes two), a sensitivity analysis is conducted to identify the best-performing site parameters. The best one is the overall velocity contrast between the underlying bedrock and the minimum velocity in the soil column. Because these are the most difficult and expensive parameters to measure, especially for thick deposits, other more convenient parameters are preferred, especially the pair (Vs30, f0), which leads to a variance reduction of at least 60%. From a code perspective, equations and plots are provided describing the dependence of the short- and mid-period amplification factors Fa and Fv on these two parameters. The robustness of the results is analyzed by performing a similar analysis for two alternative sets of velocity profiles, for which the bedrock velocity is constrained to have the same value for all velocity profiles, which is not the case in the original set.
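
    A generalized regression neural network is, at its core, Gaussian-kernel regression over the training set; a minimal sketch of predicting an amplification factor from a vector of site proxies follows (the bandwidth sigma here is arbitrary, not the study's calibrated value).

        import numpy as np

        def grnn_predict(X_train, y_train, x_query, sigma=0.5):
            """GRNN estimate: kernel-weighted average of training targets,
            weighted by squared distance to the query in site-proxy space."""
            d2 = np.sum((X_train - x_query) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            return float(np.sum(w * y_train) / np.sum(w))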

  9. Using Noise and Fluctuations for In Situ Measurements of Nitrogen Diffusion Depth

    PubMed Central

    Samoila, Cornel; Ursutiu, Doru; Schleer, Walter-Harald; Jinga, Vlad; Nascov, Victor

    2016-01-01

    In manufacturing processes involving diffusion (of C, N, S, etc.), the evolution of the layer depth is of the utmost importance: the success of the entire process depends on this parameter. Currently, nitriding is typically either calibrated using a “post process” method or controlled via indirect measurements (H2, O2, H2O + CO2). In the absence of “in situ” monitoring, any variation in the process parameters (gas concentration, temperature, steel composition, distance between sensors and furnace chamber) can cause expensive process inefficiency or failure. Indirect measurements can prevent process failure, but uncertainties and complications may arise in the relationship between the measured parameters and the actual diffusion process. In this paper, a method based on noise and fluctuation measurements is proposed that offers direct control of the layer depth evolution because the parameters of interest are measured in direct contact with the nitrided steel (represented by the active electrode). The paper addresses two related sets of experiments. The first set of experiments consisted of laboratory tests on nitrided samples using Barkhausen noise and yielded a linear relationship between the frequency exponent in the Hooge equation and the nitriding time. For the second set, a specific sensor based on conductivity noise (at the nitriding temperature) was built for shop-floor experiments. Although two different types of noise were measured in these two sets of experiments, the use of the frequency exponent to monitor the process evolution remained valid. PMID:28773941
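
    The monitored quantity is the frequency exponent of a 1/f^beta noise spectrum; a minimal sketch of extracting it from a sampled signal via a log-log fit of the Welch power spectral density is given below (the window length is arbitrary).

        import numpy as np
        from scipy.signal import welch

        def frequency_exponent(x, fs, nperseg=1024):
            """Fit log10(PSD) = c - beta*log10(f) and return beta, the
            exponent tracked against nitriding time in the experiments."""
            f, psd = welch(x, fs=fs, nperseg=nperseg)
            keep = f > 0
            slope, _intercept = np.polyfit(np.log10(f[keep]), np.log10(psd[keep]), 1)
            return -slope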

  10. End-of-fabrication CMOS process monitor

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Allen, R. A.; Blaes, B. R.; Hannaman, D. J.; Lieneweg, U.; Lin, Y.-S.; Sayah, H. R.

    1990-01-01

    A set of test 'modules' for verifying the quality of a complementary metal oxide semiconductor (CMOS) process at the end of the wafer fabrication is documented. By electrical testing of specific structures, over thirty parameters are collected characterizing interconnects, dielectrics, contacts, transistors, and inverters. Each test module contains a specification of its purpose, the layout of the test structure, the test procedures, the data reduction algorithms, and exemplary results obtained from 3-, 2-, or 1.6-micrometer CMOS/bulk processes. The document is intended to establish standard process qualification procedures for Application Specific Integrated Circuits (ASIC's).

  11. Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    PubMed Central

    Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas

    2009-01-01

    Background: To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximate rate laws in step two, as the specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influences between them, makes it hard to single out the best modeling approach for a given problem. Results: We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten, and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) a coarse-grained comparison of the algorithms on all models and (2) a fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion: A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model. A Langevin model is advisable for taking stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: for first attempts, the settings-free Tribes algorithm yields valuable results; particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170

  12. Single Object & Time Series Spectroscopy with JWST NIRCam

    NASA Technical Reports Server (NTRS)

    Greene, Tom; Schlawin, Everett A.

    2017-01-01

    JWST will enable high signal-to-noise spectroscopic observations of the atmospheres of transiting planets with high sensitivity at wavelengths that are inaccessible with HST or other existing facilities. We plan to exploit this by measuring abundances, chemical compositions, cloud properties, and temperature-pressure parameters of a set of mostly warm (T ≈ 600-1200 K) and low-mass (14-200 Earth-mass) planets in our guaranteed time program. These planets are expected to have significant molecular absorptions of H2O, CH4, CO2, CO, and other molecules that are key for determining these parameters and illuminating how and where the planets formed. We describe how we will use the NIRCam grisms to observe slitless transmission and emission spectra of these planets over 2.4-5.0 microns wavelength and how well these observations can measure our desired parameters. This will include how we set integration times, exposure parameters, and obtain simultaneous shorter wavelength images to track telescope pointing and stellar variability. We will illustrate this with specific examples showing model spectra, simulated observations, expected information retrieval results, completed Astronomer's Proposal Tool observing templates, target visibility, and other considerations.

  13. A Pipeline for Constructing a Catalog of Multi-method Models of Interacting Galaxies

    NASA Astrophysics Data System (ADS)

    Holincheck, Anthony

    Galaxies represent a fundamental unit of matter for describing the large-scale structure of the universe. One of the major processes affecting the formation and evolution of galaxies is mutual interaction. These interactions can include gravitational tidal distortion, mass transfer, and even mergers. In any hierarchical model, mergers are the key mechanism in galaxy formation and evolution. Computer simulations of interacting galaxies have evolved in the last four decades from simple restricted three-body algorithms to full n-body gravity models. These codes often include sophisticated physical mechanisms such as gas dynamics, supernova feedback, and central black holes. As the level of complexity, and perhaps realism, increases, so does the amount of computational resources needed. These advanced simulations are often used in parameter studies of interactions, but they are usually employed only in an ad hoc fashion to recreate the dynamical history of specific sets of interacting galaxies. These specific models are often created with only a few dozen or at most a few hundred sets of simulation parameters being attempted. This dissertation presents a prototype pipeline for modeling specific pairs of interacting galaxies in bulk. The process begins with a simple image of the current disturbed morphology and an estimate of distance to the system and mass of the galaxies. With the use of an updated restricted three-body simulation code and the help of Citizen Scientists, the pipeline is able to sample hundreds of thousands of points in parameter space for each system. Through the use of a convenient interface and innovative scoring algorithm, the pipeline aids researchers in identifying the best set of simulation parameters. This dissertation demonstrates a successful recreation of the disturbed morphologies of 62 pairs of interacting galaxies. The pipeline also provides for examining the level of convergence and uniqueness of the dynamical properties of each system. By creating a population of models for actual systems, the current research is able to compare simulation-based and observational values on a larger scale than previous efforts. Several potential relationships between star formation rate and dynamical time since closest approach are presented.
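
    As a hedged sketch of the restricted three-body idea mentioned above (massless disk particles responding only to the two galaxy centers, which is what makes sampling hundreds of thousands of parameter sets affordable), consider the following Python fragment; units, softening, and integration scheme are illustrative assumptions, not the dissertation's code.

      # Restricted three-body sketch: test particles feel the two galaxy
      # centers but not each other. Simulation units with G = 1 assumed.
      import numpy as np

      G = 1.0

      def accelerations(particles, centers, masses, soften=0.1):
          """Acceleration on each test particle from the two galaxy centers."""
          acc = np.zeros_like(particles)
          for c, m in zip(centers, masses):
              d = c - particles                      # (N, 3) separation vectors
              r2 = np.sum(d * d, axis=1) + soften**2
              acc += G * m * d / r2[:, None]**1.5    # softened inverse-square law
          return acc

      def step(pos, vel, centers, masses, dt=0.01):
          """One kick-drift-kick leapfrog step for the test particles
          (the centers themselves would be advanced on their two-body orbit)."""
          vel = vel + 0.5 * dt * accelerations(pos, centers, masses)
          pos = pos + dt * vel
          vel = vel + 0.5 * dt * accelerations(pos, centers, masses)
          return pos, vel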

  14. Estimation of hydraulic parameters from an unconfined aquifer test conducted in a glacial outwash deposit, Cape Cod, Massachusetts

    USGS Publications Warehouse

    Moench, A.F.; Garabedian, Stephen P.; LeBlanc, Denis R.

    2000-01-01

    An aquifer test conducted in a sand and gravel, glacial outwash deposit on Cape Cod, Massachusetts, was analyzed by means of a model for flow to a partially penetrating well in a homogeneous, anisotropic unconfined aquifer. The model is designed to account for all significant mechanisms expected to influence drawdown in observation piezometers and in the pumped well. In addition to the usual fluid-flow and storage processes, additional processes include effects of storage in the pumped well, storage in observation piezometers, effects of skin at the pumped-well screen, and effects of drainage from the zone above the water table. The aquifer was pumped at a rate of 320 gallons per minute for 72 hours, and drawdown measurements were made in the pumped well and in 20 piezometers located at various distances from the pumped well and depths below the land surface. To facilitate the analysis, an automatic parameter estimation algorithm was used to obtain relevant unconfined aquifer parameters, including the saturated thickness and a set of empirical parameters that relate to gradual drainage from the unsaturated zone. Drainage from the unsaturated zone is treated in this paper as a finite series of exponential terms, each of which contains one empirical parameter that is to be determined. It was necessary to account for effects of gradual drainage from the unsaturated zone to obtain satisfactory agreement between measured and simulated drawdown, particularly in piezometers located near the water table. The commonly used assumption of instantaneous drainage from the unsaturated zone gives rise to large discrepancies between measured and predicted drawdown in the intermediate-time range and can result in inaccurate estimates of aquifer parameters when automatic parameter estimation procedures are used. The values of the estimated hydraulic parameters are consistent with estimates from prior studies and from what is known about the aquifer at the site. Effects of heterogeneity at the site were small, as measured drawdowns in all piezometers and wells were very close to the simulated values for a homogeneous porous medium. The estimated values are: specific yield, 0.26; saturated thickness, 170 feet; horizontal hydraulic conductivity, 0.23 feet per minute; vertical hydraulic conductivity, 0.14 feet per minute; and specific storage, 1.3×10⁻⁵ per foot. It was found that drawdown in only a few piezometers strategically located at depth near the pumped well yielded parameter estimates close to the estimates obtained for the entire data set analyzed simultaneously. If the influence of gradual drainage from the unsaturated zone is not taken into account, specific yield is significantly underestimated even in these deep-seated piezometers. This helps to explain the low values of specific yield often reported for granular aquifers in the literature. If either the entire data set or only the drawdown in selected deep-seated piezometers was used, it was found unnecessary to conduct the test for the full 72 hours to obtain accurate estimates of the hydraulic parameters. For some piezometer groups, practically identical results would be obtained for an aquifer test conducted for only 8 hours. Drawdowns measured in the pumped well and piezometers at distant locations were diagnostic only of aquifer transmissivity.
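
    The drainage treatment described above can be sketched as follows; the finite series of exponentials is the paper's general form, but the weights and rate constants below are illustrative placeholders rather than the fitted values (only the specific yield of 0.26 is taken from the abstract).

      # Cumulative drainage from the unsaturated zone modeled as a finite
      # series of exponential terms rather than as instantaneous release.
      # Weights and rate constants are illustrative, not the fitted values.
      import numpy as np

      def cumulative_drainage(t, specific_yield=0.26, alphas=(1e-3, 1e-2, 1e-1)):
          """Water released per unit area by time t (minutes), with equal
          weight on each empirical rate constant alpha_m (an assumption)."""
          t = np.asarray(t, dtype=float)
          w = 1.0 / len(alphas)
          released = sum(w * (1.0 - np.exp(-a * t)) for a in alphas)
          return specific_yield * released

      print(cumulative_drainage([10.0, 1e3, 1e5]))  # approaches 0.26 at late time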

  15. A framework for scalable parameter estimation of gene circuit models using structural information.

    PubMed

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-07-01

    Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches that exploit domain-specific information may be a key to reverse engineering of complex biological systems. The software is available at http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
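
    A minimal sketch of the decomposition idea, assuming a toy two-species system: each gene product is integrated on its own, with the other species supplied as a frozen, interpolated time course from the previous iterate. The model, names, and numbers below are illustrative, not the paper's.

      # Decoupled integration: dx1/dt sees x2 only through a fixed
      # interpolated trajectory from the previous iteration.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.interpolate import interp1d

      t_grid = np.linspace(0.0, 10.0, 50)
      x2_prev = np.exp(-0.3 * t_grid)            # last iterate of species 2

      def rhs_x1(t, x1, k_syn, k_deg, x2_of_t):
          # Synthesis driven by the frozen x2 time course, linear degradation
          return k_syn * x2_of_t(t) - k_deg * x1

      x2_of_t = interp1d(t_grid, x2_prev, fill_value="extrapolate")
      sol = solve_ivp(rhs_x1, (0.0, 10.0), [0.0], t_eval=t_grid,
                      args=(2.0, 0.5, x2_of_t))
      x1_new = sol.y[0]                          # next iterate of species 1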

  16. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

    We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
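
    A hedged numerical illustration of the emulator idea: decompose an ensemble of computer-model outputs with an SVD, then fit a first-order model of the run-specific singular-vector weights in terms of the input parameters, so new parameter values map to approximate output without rerunning the mechanistic model. The toy model, design, and linear regression below are invented for illustration; the cited work uses a nonlinear statistical model for this step.

      # SVD-based first-order emulator on a toy ensemble of model runs.
      import numpy as np

      rng = np.random.default_rng(0)
      theta = rng.uniform(0.0, 1.0, size=(40, 2))           # 40 design runs, 2 params
      outputs = np.array([np.sin(3 * p[0] * np.linspace(0, 1, 100)) * p[1]
                          for p in theta])                   # (40, 100) model output

      U, s, Vt = np.linalg.svd(outputs, full_matrices=False)
      k = 5                                                  # truncate to k modes
      coeffs = outputs @ Vt[:k].T                            # run-specific weights

      # First-order (linear) model of the weights in terms of the parameters:
      X = np.column_stack([np.ones(len(theta)), theta])
      beta, *_ = np.linalg.lstsq(X, coeffs, rcond=None)

      def emulate(p):
          w = np.concatenate([[1.0], p]) @ beta              # predicted weights
          return w @ Vt[:k]                                  # approximate output

      print(emulate(np.array([0.5, 0.8]))[:5])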

  17. Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.

    PubMed

    Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S

    2012-11-01

    One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa values in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.

  18. Applying Dynamic Energy Budget (DEB) theory to simulate growth and bio-energetics of blue mussels under low seston conditions

    NASA Astrophysics Data System (ADS)

    Rosland, R.; Strand, Ø.; Alunno-Bruscia, M.; Bacher, C.; Strohmeier, T.

    2009-08-01

    A Dynamic Energy Budget (DEB) model for simulation of growth and bioenergetics of blue mussels (Mytilus edulis) has been tested in three low seston sites in southern Norway. The observations comprise four datasets from laboratory experiments (physiological and biometrical mussel data) and three datasets from in situ growth experiments (biometrical mussel data). Additional in situ data from commercial farms in southern Norway were used for estimation of biometrical relationships in the mussels. Three DEB parameters (shape coefficient, half saturation coefficient, and somatic maintenance rate coefficient) were estimated from experimental data, and the estimated parameters were complemented with parameter values from the literature to establish a basic parameter set. Model simulations based on the basic parameter set and site-specific environmental forcing matched fairly well with observations, but the model was not successful in simulating growth at the extremely low seston regimes in the laboratory experiments, in which the long period of negative growth caused negative reproductive mass. Sensitivity analysis indicated that the model was moderately sensitive to changes in the parameters and initial conditions. The results show the robust properties of the DEB model, as it manages to simulate mussel growth in several independent datasets from a common basic parameter set. However, the results also demonstrate limitations of Chl a as a food proxy for blue mussels and limitations of the DEB model in simulating long-term starvation. Future work should aim at establishing better food proxies and improving the model formulations of the processes involved in food ingestion and assimilation. The current DEB model should also be elaborated to allow shrinking of the structural tissue in order to produce more realistic growth simulations during long periods of starvation.

  19. Simulating the behavior of volatiles belonging to the C-O-H-S system in silicate melts under magmatic conditions with the software D-Compress

    NASA Astrophysics Data System (ADS)

    Burgisser, Alain; Alletti, Marina; Scaillet, Bruno

    2015-06-01

    Modeling magmatic degassing, or how the volatile distribution between gas and melt changes as pressure varies, is a complex task that involves a large number of thermodynamical relationships and that requires dedicated software. This article presents the software D-Compress, which computes the gas and melt volatile composition of five element sets in magmatic systems (O-H, S-O-H, C-S-O-H, C-S-O-H-Fe, and C-O-H). It has been calibrated so as to simulate the volatiles coexisting with three common types of silicate melts (basalt, phonolite, and rhyolite). Operational temperatures depend on melt composition and range from 790 to 1400 °C. A specificity of D-Compress is the calculation of volatile composition as pressure varies along a (de)compression path between atmospheric pressure and 3000 bars. This software was prepared so as to maximize versatility by proposing different sets of input parameters. In particular, whenever new solubility laws on specific melt compositions are available, the model parameters can be easily tuned to run the code on that composition. Parameter gaps were minimized by including sets of chemical species for which calibration data were available over a wide range of pressure, temperature, and melt composition. A brief description of the model rationale is followed by the presentation of the software capabilities. Examples of use are then presented with output comparisons between D-Compress and other currently available thermodynamical models. The compiled software and the source code are available as electronic supplementary materials.

  20. The Rangeland Hydrology and Erosion Model: A Dynamic Approach for Predicting Soil Loss on Rangelands

    NASA Astrophysics Data System (ADS)

    Hernandez, Mariano; Nearing, Mark A.; Al-Hamdan, Osama Z.; Pierson, Frederick B.; Armendariz, Gerardo; Weltz, Mark A.; Spaeth, Kenneth E.; Williams, C. Jason; Nouwakpo, Sayjro K.; Goodrich, David C.; Unkrich, Carl L.; Nichols, Mary H.; Holifield Collins, Chandra D.

    2017-11-01

    In this study, we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed against data collected from 23 runoff and sediment events in a shrub-dominated semiarid watershed in Arizona, USA. To evaluate the model, two sets of primary model parameters were determined using the RHEM V2.3 and RHEM V1.0 parameter estimation equations. Testing of the parameters indicated that RHEM V2.3 parameter estimation equations provided a 76% improvement over RHEM V1.0 parameter estimation equations. Second, the RHEM V2.3 model was calibrated to measurements from the watershed. The parameters estimated by the new equations were within the lowest and highest values of the calibrated parameter set. These results suggest that the new parameter estimation equations can be applied for this environment to predict sediment yield at the hillslope scale. Furthermore, we also applied the RHEM V2.3 to demonstrate the response of the model as a function of foliar cover and ground cover for 124 data points across Arizona and New Mexico. The dependence of average sediment yield on surface ground cover was moderately stronger than that on foliar cover. These results demonstrate that RHEM V2.3 predicts runoff volume, peak runoff, and sediment yield with sufficient accuracy for broad application to assess and manage rangeland systems.

  1. Interactive Database of Pulsar Flux Density Measurements

    NASA Astrophysics Data System (ADS)

    Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.

    2012-12-01

    The number of astronomical observations is steadily growing, giving rise to the need of cataloguing the obtained results. There are a lot of databases created to store different types of data and serve a variety of purposes, e.g., databases providing basic data for astronomical objects (SIMBAD Astronomical Database), databases devoted to one type of astronomical object (ATNF Pulsar Database) or to a set of values of a specific parameter (Lorimer 1995, a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values, and adding new measurements by registered users, could be useful in further studies on pulsar spectra.

  2. A strain-, cow-, and herd-specific bio-economic simulation model of intramammary infections in dairy cattle herds.

    PubMed

    Gussmann, Maya; Kirkeby, Carsten; Græsbøll, Kaare; Farre, Michael; Halasa, Tariq

    2018-07-14

    Intramammary infections (IMI) in dairy cattle lead to economic losses for farmers, both through reduced milk production and disease control measures. We present the first strain-, cow- and herd-specific bio-economic simulation model of intramammary infections in a dairy cattle herd. The model can be used to investigate the cost-effectiveness of different prevention and control strategies against IMI. The objective of this study was to describe a transmission framework, which simulates spread of IMI causing pathogens through different transmission modes. These include the traditional contagious and environmental spread and a new opportunistic transmission mode. In addition, the within-herd transmission dynamics of IMI causing pathogens were studied. Sensitivity analysis was conducted to investigate the influence of input parameters on model predictions. The results show that the model is able to represent various within-herd levels of IMI prevalence, depending on the simulated pathogens and their parameter settings. The parameters can be adjusted to include different combinations of IMI causing pathogens at different prevalence levels, representing herd-specific situations. The model is most sensitive to varying the transmission rate parameters and the strain-specific recovery rates from IMI. It can be used for investigating both short term operational and long term strategic decisions for the prevention and control of IMI in dairy cattle herds. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. High-frequency ground-motion parameters from weak-motion data in the Sicily Channel and surrounding regions

    NASA Astrophysics Data System (ADS)

    D'Amico, Sebastiano; Akinci, Aybige; Pischiutta, Marta

    2018-07-01

    In this paper we characterize the high-frequency (1.0-10 Hz) seismic wave crustal attenuation and the source excitation in the Sicily Channel and surrounding regions using background seismicity from a weak-motion database. The data set includes 15 995 waveforms related to earthquakes having local magnitude ranging from 2.0 to 4.5 recorded between 2006 and 2012. The observed and predicted ground motions from the weak-motion data are evaluated in several narrow frequency bands from 0.25 to 20.0 Hz. The filtered observed peaks are regressed to specify a proper functional form for the regional attenuation, excitation, and site-specific terms separately. The results are then used to calibrate effective theoretical attenuation and source excitation models using the random vibration theory. In the log-log domain, the regional seismic wave attenuation and the geometrical spreading coefficient are modelled together. The geometrical spreading coefficient, g(r), is modelled with a bilinear piecewise functional form and given as g(r) ∝ r^-1.0 for short distances (r < 50 km) and as g(r) ∝ r^-0.8 for larger distances (r > 50 km). A frequency-dependent quality factor, the inverse of the seismic attenuation parameter, Q(f) = 160(f/f_ref)^0.35 (where f_ref = 1.0 Hz), is combined with the geometrical spreading. The source excitation terms are defined at a selected reference distance with a magnitude-independent roll-off spectral parameter, κ = 0.04 s, and with a Brune stress drop parameter increasing with moment magnitude, from Δσ = 2 MPa for Mw = 2.0 to Δσ = 13 MPa for Mw = 4.5. For events Mw ≤ 4.5 (Mw = 4.5 being the maximum available in the data set) the stress parameters are obtained by correlating the empirical excitation source spectra with the Brune spectral model as a function of magnitude. For the larger magnitudes (Mw > 4.5), outside the range available in the calibration data set where we do not have recorded data, we extrapolate our results through the calibration of the stress parameters of the Brune source spectrum over the Bindi et al. ground-motion prediction equation selected as a reference model (hereafter also ITA10). Finally, the weak-motion-based model parameters are used through a stochastic approach in order to predict a set of region-specific spectral ground-motion parameters (peak ground acceleration, peak ground velocity, and 0.3 and 1.0 Hz spectral acceleration) relative to a generic rock site as a function of distance between 10 and 250 km and magnitude between Mw 2.0 and Mw 7.0.
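
    The calibrated path terms above can be combined as in the usual stochastic ground-motion formulation; the sketch below uses the abstract's g(r) and Q(f), while the shear-wave velocity is an assumed typical crustal value, not a number from the study.

      # Path term: bilinear geometrical spreading times anelastic attenuation
      # with Q(f) = 160 (f / 1 Hz)^0.35, as calibrated above.
      import numpy as np

      def geometrical_spreading(r_km):
          r0 = 50.0
          return np.where(r_km <= r0,
                          r_km ** -1.0,
                          r0 ** -1.0 * (r_km / r0) ** -0.8)

      def q_factor(f_hz, f_ref=1.0):
          return 160.0 * (f_hz / f_ref) ** 0.35

      def path_term(f_hz, r_km, beta_km_s=3.5):
          # beta_km_s is an assumed crustal shear-wave velocity
          anelastic = np.exp(-np.pi * f_hz * r_km / (q_factor(f_hz) * beta_km_s))
          return geometrical_spreading(r_km) * anelastic

      print(path_term(1.0, np.array([10.0, 50.0, 200.0])))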

  4. SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Southern Medical University, Guangzhou; Yan, H

    Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low-dose cone-beam CT (CBCT) which controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using Tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selections. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific; data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under certain optimally selected parameters, and the advantages are more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate optimal parameters are specific to both the task types and dose levels, providing guidance for selecting parameters in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01).

  5. Optimization of commercial scale photonuclear production of radioisotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bindu, K. C.; Harmon, Frank; Starovoitova, Valeriia N.

    2013-04-19

    Photonuclear production of radioisotopes, driven by bremsstrahlung photons from a linear electron accelerator in a suitable energy range, is a promising production method. The photonuclear production method is capable of making radioisotopes more conveniently, cheaply and with much less radioactive waste compared to existing methods. Historically, photonuclear reactions have not been exploited for isotope production because of the low specific activity that is generally associated with this production process, although the technique is well known to be capable of producing large quantities of certain radioisotopes. We describe an optimization technique for a set of parameters to maximize the specific activity of the final product. This set includes the electron beam energy and current, the end station design (an integrated converter and target as well as cooling system), the purity of materials used, and the activation time. These parameters are mutually dependent and thus their optimization is not trivial. ⁶⁷Cu photonuclear production via the ⁶⁸Zn(γ,p)⁶⁷Cu reaction was used as an example of such an optimization process.
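
    A small sketch of why activation time belongs in the optimized parameter set: activity builds toward saturation as A(t) = R(1 - exp(-λt)), so irradiating much beyond a few half-lives gains little additional activity while target burn-up and impurity activation continue. The production rate R is left symbolic, and the ⁶⁷Cu half-life (about 61.8 h) is a textbook value, not taken from the paper.

      # Activity buildup toward saturation during irradiation.
      import numpy as np

      HALF_LIFE_H = 61.8                     # Cu-67, approximate textbook value
      lam = np.log(2.0) / HALF_LIFE_H        # decay constant, 1/h

      def activity(t_hours, production_rate=1.0):
          """Activity (in units of production_rate) after irradiating t_hours."""
          return production_rate * (1.0 - np.exp(-lam * np.asarray(t_hours)))

      # 1, 2, and 5 half-lives reach 50%, 75%, and ~97% of saturation:
      print(activity([61.8, 123.6, 309.0]))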

  6. Interaction of cadmium with phosphate on goethite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venema, P.; Hiemstra, T.; Riemsdijk, W.H. van

    1997-08-01

    Interactions between different ions are of importance in understanding chemical processes in natural systems. In this study simultaneous adsorption of phosphate and cadmium on goethite is studied in detail. The charge distribution (CD)-multisite complexation (MUSIC) model has been successful in describing extended data sets of cadmium adsorption and phosphate adsorption on goethite. In this study, the parameters of this model for these two data sets were combined to describe a new data set of simultaneous adsorption of cadmium and phosphate on goethite. Attention is focused on the surface speciation of cadmium. With the extra information that can be obtained from the interaction experiments, the cadmium adsorption model is refined. For a perfect description of the data, the singly coordinated surface groups at the (110) face of goethite were assumed to form both monodentate and bidentate surface species with cadmium. The CD-MUSIC model is able to describe data sets of both simultaneous and single adsorption of cadmium and phosphate with the same parameters. The model calculations confirmed the idea that only singly coordinated surface groups are reactive for specific ion binding.

  7. Detection of the toughest: Pedestrian injury risk as a smooth function of age.

    PubMed

    Niebuhr, Tobias; Junge, Mirko

    2017-07-04

    Though it is common to refer to age-specific groups (e.g., children, adults, elderly), smooth trends conditional on age are mainly ignored in the literature. The present study examines the pedestrian injury risk in full-frontal pedestrian-to-passenger-car accidents and incorporates age, in addition to collision speed and injury severity, as a plug-in parameter. Recent work introduced a model for pedestrian injury risk functions using explicit formulae with easily interpretable model parameters. This model is expanded by pedestrian age as another model parameter. Using the German In-Depth Accident Study (GIDAS) to obtain age-specific risk proportions, the model parameters are fitted to the raw data and then smoothed by broken-line regression. The approach supplies explicit probabilities for pedestrian injury risk conditional on pedestrian age, collision speed, and injury severity under investigation. All results are consistent with each other in the sense that risks for more severe injuries are less probable than those for less severe injuries. As a side product, the approach indicates specific ages at which the risk behavior fundamentally changes. These threshold values can be interpreted as the most robust ages for pedestrians. The obtained age-wise risk functions can be aggregated and adapted to any population. The presented approach is formulated in such general terms that it can be directly used for other data sets or additional parameters, for example, the pedestrian's sex. Thus far, no other study using age as a plug-in parameter can be found.

  8. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods were compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
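
    For reference, the two error metrics can be computed as below; the definitions follow the usual conventions (relative errors in percent), which the paper may state slightly differently, and the example values are invented.

      # Relative estimation error and relative RMSE across replicate estimates.
      import numpy as np

      def ree(estimates, true_value):
          """Relative estimation error (%) for each replicate estimate."""
          estimates = np.asarray(estimates, dtype=float)
          return 100.0 * (estimates - true_value) / true_value

      def rrmse(estimates, true_value):
          """Relative root mean squared error (%) across replicates."""
          return float(np.sqrt(np.mean(ree(estimates, true_value) ** 2)))

      cl_hat = [4.8, 5.3, 5.1, 4.6, 5.6]     # e.g., clearance estimates from a
      print(rrmse(cl_hat, true_value=5.0))   # few of the simulated data sets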

  9. Predictive process simulation of cryogenic implants for leading edge transistor design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gossmann, Hans-Joachim; Zographos, Nikolas; Park, Hugh

    2012-11-06

    Two cryogenic implant TCAD modules have been developed: (i) a continuum-based compact model targeted towards a TCAD production environment, calibrated against an extensive data set for all common dopants. Ion-specific calibration parameters related to damage generation and dynamic annealing were used and resulted in excellent fits to the calibration data set. (ii) a Kinetic Monte Carlo (kMC) model including the full time dependence of the ion exposure that a particular spot on the wafer experiences, as well as the resulting temperature-versus-time profile of this spot. It was calibrated by adjusting damage generation and dynamic annealing parameters. The kMC simulations clearly demonstrate the importance of the time structure of the beam for the amorphization process: assuming an average dose rate does not capture all of the physics and may lead to incorrect conclusions. The model enables optimization of the amorphization process through tool parameters such as scan speed or beam height.

  10. Concentration dependence and self-similarity of photodarkening losses induced in Yb-doped fibers by comparable excitation.

    PubMed

    Taccheo, Stefano; Gebavi, Hrvoje; Monteville, Achille; Le Goffic, Olivier; Landais, David; Mechin, David; Tregoat, Denis; Cadier, Benoit; Robin, Thierry; Milanese, Daniel; Durrant, Tim

    2011-09-26

    We report on an extensive investigation of photodarkening in Yb-doped silica fibers. A set of similar fibers, covering a large Yb concentration range, was made so as to compare the photodarkening-induced losses. Careful measurements were made to ensure equal and uniform inversion for all the tested fibers. The results show that, with the specific set-up, the stretching parameter obtained through fitting has a very limited variation. This gives more meaning to the fitting parameters. Results tend to indicate a square-law dependence of the final saturated loss on the concentration of excited ions. We also demonstrate self-similarity of the loss evolution when experimental curves are simply normalized to the fitting parameters. This evidence of self-similarity also supports the possibility of introducing a preliminary figure of merit for Yb-doped fibers. This will allow the impact of photodarkening on laser/amplifier devices to be evaluated. © 2011 Optical Society of America
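
    The "stretching parameter" and "saturated loss" referred to above come from fitting a stretched exponential to the induced-loss history; the sketch below uses that standard functional form with synthetic data, so all numbers are illustrative rather than the paper's.

      # Fit a stretched exponential, alpha(t) = alpha_sat * (1 - exp(-(t/tau)**beta)),
      # to a synthetic photodarkening loss curve.
      import numpy as np
      from scipy.optimize import curve_fit

      def stretched_exp(t, alpha_sat, tau, beta):
          return alpha_sat * (1.0 - np.exp(-(t / tau) ** beta))

      t = np.linspace(1.0, 3000.0, 60)                           # seconds (synthetic)
      loss = stretched_exp(t, 120.0, 400.0, 0.6)
      loss += np.random.default_rng(2).normal(0.0, 2.0, t.size)  # measurement noise

      popt, _ = curve_fit(stretched_exp, t, loss, p0=[100.0, 300.0, 0.5])
      alpha_sat, tau, beta = popt
      # Self-similarity check: loss / alpha_sat plotted against (t / tau)
      # should collapse curves from different fibers onto one master curve.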

  11. Design parameters for the separation of fat from natural whole milk in an ultrasonic litre-scale vessel.

    PubMed

    Leong, Thomas; Johansson, Linda; Juliano, Pablo; Mawson, Raymond; McArthur, Sally; Manasseh, Richard

    2014-07-01

    The separation of milk fat from natural whole milk has been achieved by applying ultrasonic standing waves (1 MHz and/or 2 MHz) in a litre-scale (5 L capacity) batch system. Various design parameters were tested, such as power input level, process time, specific energy, transducer-reflector distance, and the use of single and dual transducer set-ups. It was found that the efficacy of the treatment depended on the specific energy density input into the system. In this case, a plateau in fat concentration of ∼20% w/v was achieved in the creamed top layer after applying a minimum specific energy of 200 kJ/kg. In addition, the fat separation was enhanced by reducing the transducer-reflector distance in the vessel, operating two transducers in a parallel set-up, or by increasing the duration of insonation, resulting in skimmed milk with a fat concentration as low as 1.7% (w/v) using raw milk after 20 min insonation. Dual mode operation with both transducers in parallel as close as 30 mm apart resulted in the fastest creaming and skimming in this study, at ∼1.6 g fat/min. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Elastic-viscoplastic modeling of soft biological tissues using a mixed finite element formulation based on the relative deformation gradient.

    PubMed

    Weickenmeier, J; Jabareen, M

    2014-11-01

    The characteristic highly nonlinear, time-dependent, and often inelastic material response of soft biological tissues can be expressed in a set of elastic-viscoplastic constitutive equations. The specific elastic-viscoplastic model for soft tissues proposed by Rubin and Bodner (2002) is generalized with respect to the constitutive equations for the scalar quantity of the rate of inelasticity and the hardening parameter in order to represent a general framework for elastic-viscoplastic models. A strongly objective integration scheme and a new mixed finite element formulation were developed based on the introduction of the relative deformation gradient, that is, the deformation mapping between the last converged and current configurations. The numerical implementation of both the generalized framework and the specific Rubin and Bodner model is presented. As an example of a challenging application of the new model equations, the mechanical response of facial skin tissue is characterized through an experimental campaign based on the suction method. The measurement data are used for the identification of a suitable set of model parameters that well represents the experimentally observed tissue behavior. Two different measurement protocols were defined to address specific tissue properties with respect to the instantaneous tissue response, inelasticity, and tissue recovery. Copyright © 2014 John Wiley & Sons, Ltd.

  13. From Inverse Problems in Mathematical Physiology to Quantitative Differential Diagnoses

    PubMed Central

    Zenker, Sven; Rubin, Jonathan; Clermont, Gilles

    2007-01-01

    The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses. We outline possible steps toward translating this computational approach to the bedside, to supplement today's evidence-based medicine with a quantitatively founded model-based medicine that integrates mechanistic knowledge with patient-specific information. PMID:17997590

  14. Design of an optical PPM communication link in the presence of component tolerances

    NASA Technical Reports Server (NTRS)

    Chen, C.-C.

    1988-01-01

    A systematic approach is described for estimating the performance of an optical direct detection pulse position modulation (PPM) communication link in the presence of parameter tolerances. This approach was incorporated into the JPL optical link analysis program to provide a useful tool for optical link design. Given a set of system parameters and their tolerance specifications, the program will calculate the nominal performance margin and its standard deviation. Through use of these values, the optical link can be designed to perform adequately even under adverse operating conditions.
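
    A hedged sketch of the kind of bookkeeping such a program performs: link terms expressed in dB add, so the nominal margin is a sum, and treating independent tolerances as standard deviations gives the margin uncertainty as a root-sum-square. The terms and numbers below are illustrative placeholders, not the JPL program's values.

      # Nominal link margin and its standard deviation from term tolerances.
      import math

      terms_db = {                      # (nominal gain/loss in dB, tolerance in dB)
          "laser power":      (30.0, 0.5),
          "transmit optics":  (-1.5, 0.2),
          "space loss":       (-70.0, 0.0),
          "receive optics":   (-2.0, 0.3),
          "detector":         (25.0, 0.4),
      }
      required_db = -20.0               # signal required for the target PPM error rate

      nominal_margin = sum(v[0] for v in terms_db.values()) - required_db
      sigma_margin = math.sqrt(sum(v[1] ** 2 for v in terms_db.values()))
      print(f"margin = {nominal_margin:.1f} dB, sigma = {sigma_margin:.2f} dB")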

  15. Assessing household health expenditure with Box-Cox censoring models.

    PubMed

    Chaze, Jean-Paul

    2005-09-01

    In order to address the combined presence of zero expenditures and a heavily skewed distribution of positive expenditures, the Box-Cox transformation with location parameter is used to define a set of models generalising the standard Tobit, Heckman selection and double-hurdle models. Extended flexibility with respect to previous specifications is introduced, notably regarding negative transformation parameters, which may prove necessary for medical expenditures, and corner-solution outcomes. An illustration is provided by the analysis of household health expenditure in Switzerland. Copyright (c) 2005 John Wiley & Sons, Ltd.
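
    For concreteness, a minimal sketch of the transformation underlying these models, the Box-Cox transform with a location parameter, which also admits the negative transformation parameters mentioned above; the data values are invented.

      # Box-Cox transform with location shift mu; log case at lam == 0.
      import numpy as np

      def box_cox(y, lam, mu=0.0):
          z = np.asarray(y, dtype=float) + mu
          if lam == 0.0:
              return np.log(z)
          return (z ** lam - 1.0) / lam

      spend = np.array([0.0, 50.0, 200.0, 1500.0])   # household expenditures
      print(box_cox(spend, lam=-0.5, mu=1.0))        # negative lambda is allowed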

  16. Flight instrumentation specification for parameter identification: Program user's guide. [instrument errors/error analysis

    NASA Technical Reports Server (NTRS)

    Mohr, R. L.

    1975-01-01

    A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.

  17. Clinical Use of an Optical Coherence Tomography Linear Discriminant Function for Differentiating Glaucoma From Normal Eyes.

    PubMed

    Choi, Yun Jeong; Jeoung, Jin Wook; Park, Ki Ho; Kim, Dong Myung

    2016-03-01

    To determine and validate the diagnostic ability of a linear discriminant function (LDF) based on retinal nerve fiber layer (RNFL) and ganglion cell-inner plexiform layer (GCIPL) thickness obtained using high-definition optical coherence tomography (Cirrus HD-OCT) for discriminating between healthy controls and early glaucoma subjects. We prospectively selected 214 healthy controls and 152 glaucoma subjects (teaching set) and another independent sample of 86 healthy controls and 71 glaucoma subjects (validating set). Two scans, including 1 macular and 1 peripapillary RNFL scan, were obtained. After calculating the LDF in the teaching set using binary logistic regression analysis, receiver operating characteristic curves were plotted and compared between the OCT-provided parameters and the LDF in the validating set. The proposed LDF was 16.529 - (0.132 × superior RNFL) - (0.064 × inferior RNFL) + (0.039 × 12 o'clock RNFL) + (0.038 × 1 o'clock RNFL) + (0.084 × superior GCIPL) - (0.144 × minimum GCIPL). The highest area under the receiver operating characteristic (AUROC) curve was obtained for the LDF in both sets (AUROC = 0.95 and 0.96). In the validating set, the LDF showed a significantly higher AUROC than the best RNFL (inferior RNFL = 0.91) and GCIPL parameter (minimum GCIPL = 0.88). The LDF yielded a sensitivity of 93.0% at a fixed specificity of 85.0%. The LDF showed better diagnostic ability for differentiating between healthy and early glaucoma subjects than individual OCT parameters. A classification algorithm based on the LDF can be used in OCT analysis for glaucoma diagnosis.
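
    The published discriminant function can be evaluated directly; the sketch below implements the formula above, with thicknesses in micrometers and made-up example values (the diagnostic cutoff would come from the ROC analysis and is not shown here).

      # Direct implementation of the published LDF; inputs in micrometers.
      def oct_ldf(sup_rnfl, inf_rnfl, rnfl_12, rnfl_1, sup_gcipl, min_gcipl):
          return (16.529
                  - 0.132 * sup_rnfl
                  - 0.064 * inf_rnfl
                  + 0.039 * rnfl_12
                  + 0.038 * rnfl_1
                  + 0.084 * sup_gcipl
                  - 0.144 * min_gcipl)

      print(oct_ldf(118.0, 125.0, 95.0, 101.0, 82.0, 78.0))  # hypothetical eye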

  18. Assessing the accuracy of subject-specific, muscle-model parameters determined by optimizing to match isometric strength.

    PubMed

    DeSmitt, Holly J; Domire, Zachary J

    2016-12-01

    Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.

  19. Comparison of parameters of modern cooled and uncooled thermal cameras

    NASA Astrophysics Data System (ADS)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    During the design of a system employing thermal cameras one always faces the problem of choosing the camera types best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor the tried and tested solutions they are used to. They do not follow the latest developments in the field of infrared technology and sometimes their choices are based on prejudice and not on facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive in terms of the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operations were applied to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to provide help in choosing the optimal thermal camera for a particular application, answering the question whether to opt for a cheaper microbolometer device or to apply a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  20. Improvement and further development of SSM/I overland parameter algorithms using the WetNet workstation

    NASA Technical Reports Server (NTRS)

    Neale, Christopher M. U.; Mcdonnell, Jeffrey J.; Ramsey, Douglas; Hipps, Lawrence; Tarboton, David

    1993-01-01

    Since the launch of the DMSP Special Sensor Microwave/Imager (SSM/I), several algorithms have been developed to retrieve overland parameters. These include the present operational algorithms resulting from the Navy calibration/validation effort such as land surface type (Neale et al. 1990), land surface temperature (McFarland et al. 1990), surface moisture (McFarland and Neale, 1991) and snow parameters (McFarland and Neale, 1991). In addition, other work has been done including the classification of snow cover and precipitation using the SSM/I (Grody, 1991). Due to the empirical nature of most of the above mentioned algorithms, further research is warranted and improvements can probably be obtained through a combination of radiative transfer modelling to study the physical processes governing the microwave emissions at the SSM/I frequencies, and the incorporation of additional ground truth data and special cases into the regression data sets. We have proposed specifically to improve the retrieval of surface moisture and snow parameters using the WetNet SSM/I data sets along with ground truth information namely climatic variables from the NOAA cooperative network of weather stations as well as imagery from other satellite sensors such as the AVHRR and Thematic Mapper. In the case of surface moisture retrievals the characterization of vegetation density is of primary concern. The higher spatial resolution satellite imagery collected at concurrent periods will be used to characterize vegetation types and amounts which, along with radiative transfer modelling should lead to more physically based retrievals. Snow parameter retrieval algorithm improvement will initially concentrate on the classification of snowpacks (dry snow, wet snow, refrozen snow) and later on specific products such as snow water equivalent. Significant accomplishments in the past year are presented.

  1. Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set

    NASA Technical Reports Server (NTRS)

    Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping

    1997-01-01

    A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semianalytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon 4 spectral channels including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what mis-classification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged parameters. Finally, the effects of even more extreme pigment packaging must be examined in order to improve algorithm performance at high latitudes. Note, however, that the North Sea and Mississippi River plume studies contributed data to the packaged and unpackaged classes, respectively, with little effect on algorithm performance. This suggests that gelbstoff-rich Case 2 waters do not seriously degrade performance of the semi-analytical algorithm.

  2. Manufacturing Methods and Technology (MM&T) program. 10.6 micrometer carbon dioxide TEA (Transversely Excited Atmospheric) lasers

    NASA Astrophysics Data System (ADS)

    Luck, C. F.

    1983-06-01

    This report documents the efforts of Raytheon Company to conduct a manufacturing methods and technology (MM&T) program for 10.6 micrometer carbon dioxide TEA lasers. A set of laser parameters is given and a conforming tube design is described. Results of thermal and mechanical stress analyses are detailed along with a procedure for assembling and testing the laser tube. Also provided are purchase specifications for optics and process specifications for some of the essential operations.

  3. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
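
    The logic of an identical twin experiment can be sketched in a few lines: generate "observations" from a toy model with known parameters, then check that the estimation step recovers them. The two-variable predator-prey stand-in below is purely illustrative and far simpler than NEMURO; PEST itself is model-independent and operates on the real model's input and output files.

      # Identical twin experiment on a toy two-variable ecosystem model.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import least_squares

      def toy_npz(t, y, grow, graze):
          p, z = y                      # "phytoplankton" and "zooplankton"
          return [grow * p - graze * p * z, graze * p * z - 0.1 * z]

      true = (0.6, 0.4)
      t_obs = np.linspace(0.0, 20.0, 24)           # synthetic observation times

      def run(params):
          sol = solve_ivp(toy_npz, (0.0, 20.0), [1.0, 0.5],
                          t_eval=t_obs, args=tuple(params))
          return sol.y.ravel()

      data = run(true)                             # synthetic "observations"
      fit = least_squares(lambda p: run(p) - data, x0=[0.3, 0.2],
                          bounds=([0.0, 0.0], [2.0, 2.0]))
      print(fit.x)                                 # should recover ~(0.6, 0.4)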

  4. An automatic identification procedure to promote the use of FES-cycling training for hemiparetic patients.

    PubMed

    Ambrosini, Emilia; Ferrante, Simona; Schauer, Thomas; Ferrigno, Giancarlo; Molteni, Franco; Pedrocchi, Alessandra

    2014-01-01

    Cycling training induced by Functional Electrical Stimulation (FES) currently requires manual setting of different parameters, which is a time-consuming and scarcely repeatable procedure. We proposed an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consisted of identifying the stimulation strategy as the angular ranges during which FES drove the motion, comparing the identified strategy with the physiological muscular activation strategy, and setting the pulse amplitude and duration for each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.

  5. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    PubMed Central

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790

  6. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning.

    PubMed

    Baykal, Cenk; Torres, Luis G; Alterovitz, Ron

    2015-09-28

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot's behavior and reachable workspace. Optimizing a robot's design by appropriately selecting tube parameters can improve the robot's effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot's configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy.

  7. Optimization of tissue physical parameters for accurate temperature estimation from finite-element simulation of radiofrequency ablation.

    PubMed

    Subramanian, Swetha; Mast, T Douglas

    2015-10-07

    Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
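
    As a hedged sketch of the inverse-solver idea (not the paper's finite-element code), the snippet below runs an unscented Kalman filter whose state is the parameter vector [specific heat, thermal conductivity, electrical conductivity]; the forward model, units, noise levels, and the measured area are placeholders, and the third-party filterpy package is assumed:

```python
# UKF as an inverse solver: the "state" is the tissue-parameter vector and
# the measurement is the ablated area predicted by a forward model. The toy
# forward model below stands in for the finite-element RFA simulation.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

def fx(x, dt):
    return x                                  # parameters: random walk

def hx(x):
    c, k, sigma = x                           # specific heat, thermal cond.,
    return np.array([sigma / (c * k)])        # electrical cond. -> toy area

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=1.0, hx=hx, fx=fx,
                            points=points)
ukf.x = np.array([3.6, 0.5, 0.3])             # initial guesses (arb. units)
ukf.P *= 0.1                                  # initial parameter uncertainty
ukf.R *= 1e-6                                 # measurement noise
ukf.Q *= 1e-6                                 # random-walk process noise

measured_area = 0.18                          # hypothetical measurement
for _ in range(20):                           # iterate to convergence
    ukf.predict()
    ukf.update(np.array([measured_area]))
print("estimated parameters:", ukf.x)
```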

  8. Cost-effective technology advancement directions for electric propulsion transportation systems in earth-orbital missions

    NASA Technical Reports Server (NTRS)

    Regetz, J. D., Jr.; Terwilliger, C. H.

    1979-01-01

    The directions that electric propulsion technology should take to meet the primary propulsion requirements for earth-orbital missions in the most cost-effective manner are determined. The mission set requirements, state-of-the-art electric propulsion technology and the baseline system characterized by it, the adequacy of the baseline system to meet the mission set requirements, cost-optimum electric propulsion system characteristics for the mission set, and the sensitivities of mission costs and design points to system-level electric propulsion parameters are discussed. The impact on overall costs of the specific masses and costs of the propulsion and power systems is evaluated.

  9. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Wang, J

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set containing many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for the multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
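
    A minimal sketch of the utility-scoring step, with made-up Pareto points and an equal-weight rule standing in for the evidential reasoning calculus:

```python
# Pick one operating point from a Pareto set by scoring each candidate with
# a utility function; the weights encode a (hypothetical) pre-set rule.
import numpy as np

pareto = np.array([                # columns: sensitivity, specificity
    [0.95, 0.60],
    [0.88, 0.75],
    [0.80, 0.85],
    [0.70, 0.92],
])

weights = np.array([0.5, 0.5])     # hypothetical rule: equal importance
utility = pareto @ weights         # utility of each candidate solution
best = pareto[np.argmax(utility)]
print("selected solution (sens, spec):", best)
```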

  10. Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

    NASA Astrophysics Data System (ADS)

    Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam

    2017-11-01

    Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 +/- 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 +/- 0.01. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.

  11. Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter.

    PubMed

    Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth

    2016-11-30

    Subnormal performance in attention tasks may result from various sources, including lack of effort. In this report, the derivation and validation of a performance validity parameter for reaction time is described, using a set of malingering indices ("Slick criteria") and 3 independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive, evidence from neuropsychological testing, from self-report, and from clinical data. In study (1) a validity parameter is derived using reaction time data of a sample composed of inpatients with recent severe brain lesions not involved in litigation and of litigants with and without brain lesions. In study (2) the validity parameter is tested in an independent sample of litigants. In study (3) the parameter is applied to an independent sample comprising cooperative and non-cooperative testees. Logistic regression analysis led to a derived validity parameter based on median reaction time and standard deviation. It performed satisfactorily in studies (2) and (3) (study 2: sensitivity = 0.94, specificity = 1.00; study 3: sensitivity = 0.79, specificity = 0.87). The findings suggest that median reaction time and standard deviation may be used as indicators of negative response bias. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
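
    A minimal sketch of the kind of two-feature logistic model described above, trained on synthetic reaction-time data (group means, spreads, and the group split are invented for illustration; scikit-learn is assumed):

```python
# Derive a validity indicator from median reaction time and its standard
# deviation with logistic regression; data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Cooperative testees: faster, less variable reaction times (ms).
coop = np.column_stack([rng.normal(450, 40, 50), rng.normal(60, 15, 50)])
# Non-cooperative testees: slower, more variable reaction times.
noncoop = np.column_stack([rng.normal(700, 120, 50), rng.normal(160, 50, 50)])

X = np.vstack([coop, noncoop])           # features: median RT, SD of RT
y = np.repeat([0, 1], 50)                # 1 = suspected negative response bias

clf = LogisticRegression(max_iter=1000).fit(X, y)
sens = clf.score(noncoop, np.ones(50))   # sensitivity on the biased group
spec = clf.score(coop, np.zeros(50))     # specificity on the cooperative group
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```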

  12. Enamel surface topography analysis for diet discrimination. A methodology to enhance and select discriminative parameters

    NASA Astrophysics Data System (ADS)

    Francisco, Arthur; Blondel, Cécile; Brunetière, Noël; Ramdarshan, Anusha; Merceron, Gildas

    2018-03-01

    Tooth wear and, more specifically, dental microwear texture is a dietary proxy that has been used for years in vertebrate paleoecology and ecology. DMTA, dental microwear texture analysis, relies on a few parameters related to the surface complexity, anisotropy and heterogeneity of the enamel facets at the micrometric scale. Working with few but physically meaningful parameters helps in comparing published results and in defining levels for classification purposes. Other dental microwear approaches are based on ISO parameters and coupled with statistical tests to find the most relevant ones. The present study utilizes most of the aforementioned parameters, in more or less modified form. Beyond the parameters themselves, we propose a new approach: instead of a single parameter characterizing the whole surface, we sample the surface and thus generate 9 derived parameters in order to broaden the parameter set. The identification of the most discriminative parameters is performed with an automated procedure which is an extended and refined version of the workflows encountered in some studies. The procedure in its initial form includes the most common tools, such as ANOVA and correlation analysis, along with the required mathematical tests. The discrimination results show that a simplified form of the procedure identifies the desired number of discriminative parameters more efficiently. Also highlighted are some trends, such as the relevance of working with both height and spatial parameters, as well as the potential benefits of dimensionless surfaces. On a set of 45 surfaces obtained from 45 specimens of three modern ruminants with differences in feeding preferences (grazing, leaf-browsing and fruit-eating), it is clearly shown that the level of wear discrimination is improved with the new methodology compared to the others.

  13. The four fixed points of scale invariant single field cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, BingKan, E-mail: bxue@princeton.edu

    2012-10-01

    We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having simultaneously time varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.

  14. Fatigue reassessment for lifetime extension of offshore wind monopile substructures

    NASA Astrophysics Data System (ADS)

    Ziegler, Lisa; Muskulus, Michael

    2016-09-01

    Fatigue reassessment is required to decide about lifetime extension of aging offshore wind farms. This paper presents a methodology to identify important parameters to monitor during the operational phase of offshore wind turbines. An elementary effects method is applied to analyze the global sensitivity of residual fatigue lifetimes to environmental, structural and operational parameters. To this end, renewed lifetime simulations are performed for a case study consisting of a 5 MW turbine with monopile substructure in 20 m water depth. Results show that corrosion, turbine availability, and turbulence intensity are the most influential parameters. This can vary strongly for other settings (water depth, turbine size, etc.), making case-specific assessments necessary.

  15. Parameter Stability of the Functional–Structural Plant Model GREENLAB as Affected by Variation within Populations, among Seasons and among Growth Stages

    PubMed Central

    Ma, Yuntao; Li, Baoguo; Zhan, Zhigang; Guo, Yan; Luquet, Delphine; de Reffye, Philippe; Dingkuhn, Michael

    2007-01-01

    Background and Aims: It is increasingly accepted that crop models, if they are to simulate genotype-specific behaviour accurately, should simulate the morphogenetic process generating plant architecture. A functional–structural plant model, GREENLAB, was previously presented and validated for maize. The model is based on a recursive mathematical process, with parameters whose values cannot be measured directly and need to be optimized statistically. This study aims at evaluating the stability of GREENLAB parameters in response to three types of phenotype variability: (1) among individuals from a common population; (2) among populations subjected to different environments (seasons); and (3) among different development stages of the same plants. Methods: Five field experiments were conducted in the course of 4 years on irrigated fields near Beijing, China. Detailed observations were conducted throughout the seasons on the dimensions and fresh biomass of all above-ground plant organs for each metamer. Growth stage-specific target files were assembled from the data for GREENLAB parameter optimization. Optimization was conducted for specific developmental stages or the entire growth cycle, for individual plants (replicates), and for different seasons. Parameter stability was evaluated by comparing their CV with that of phenotype observation for the different sources of variability. A reduced data set was developed for easier model parameterization using one season, and validated for the four other seasons. Key Results and Conclusions: The analysis of parameter stability among plants sharing the same environment and among populations grown in different environments indicated that the model explains some of the inter-seasonal variability of phenotype (parameters varied less than the phenotype itself), but not inter-plant variability (parameter and phenotype variability were similar). Parameter variability among developmental stages was small, indicating that parameter values were largely development-stage independent. The authors suggest that the high level of parameter stability observed in GREENLAB can be used to conduct comparisons among genotypes and, ultimately, genetic analyses. PMID:17158141
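
    The stability criterion used above compares coefficients of variation (CV = sd/mean) of the optimized parameters against those of the phenotype. A minimal sketch with invented numbers:

```python
# Compare the coefficient of variation of a parameter across seasons with
# the CV of the matching phenotype observations. Values are illustrative.
import numpy as np

def cv(x):
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

# Hypothetical parameter estimates across five seasons vs. the matching
# phenotype measurements (e.g., organ fresh biomass).
param_by_season = [0.52, 0.49, 0.55, 0.50, 0.53]
phenotype_by_season = [120.0, 95.0, 140.0, 88.0, 132.0]

print(f"parameter CV: {cv(param_by_season):.2%}")
print(f"phenotype CV: {cv(phenotype_by_season):.2%}")
# A parameter CV well below the phenotype CV suggests the model absorbs
# part of the environmental variability into its structure.
```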

  16. CR softcopy display presets based on optimum visualization of specific findings

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.

    1999-07-01

    The purpose of this research is to assess the utility of providing presets for computed radiography (CR) softcopy display based not on window/level settings, but on image processing optimized for the visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner and transferred over the PACS network to an image processing station capable of performing multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the parameters of the multiscale image contrast amplification algorithm. Softcopy display of images processed with finding-specific settings is compared with the standard default image presentation for fifty cases of each category. Comparison is scored using a five-point scale, with +1 and +2 denoting that the standard presentation is preferred over the finding-specific preset, -1 and -2 denoting that the finding-specific preset is preferred over the standard presentation, and 0 denoting no difference. Presets have been developed for pneumothorax, and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  17. THE RELATIVE IMPORTANCE OF THE VADOSE ZONE IN MULTIMEDIA RISK ASSESSMENT MODELING APPLIED AT A NATIONAL SCALE: AN ANALYSIS OF BENZENE USING 3MRA

    EPA Science Inventory

    Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidab...

  18. A Selection System and Catalog for Instructional Media and Devices.

    ERIC Educational Resources Information Center

    Boucher, Brian G.; And Others

    A system is presented which facilitates the selection of training media and devices based on the requirements of specific learning objectives. The system consists of the use of a set of descriptive parameters which are common to both learning objectives and media. The system allows the essential intent of learning objectives to be analyzed in…

  19. What can the CMB tell about the microphysics of cosmic reheating?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewes, Marco, E-mail: marcodrewes@googlemail.com

    In inflationary cosmology, cosmic reheating after inflation sets the initial conditions for the hot big bang. We investigate how CMB data can be used to study the effective potential and couplings of the inflaton during reheating to constrain the underlying microphysics. If there is a phase of preheating that is driven by a parametric resonance or other instability, then the thermal history and expansion history during the reheating era depend on a large number of microphysical parameters in a complicated way. In this case the connection between CMB observables and microphysical parameters can only be established with intense numerical studies. Such studies can help to improve CMB constraints on the effective inflaton potential in specific models, but parameter degeneracies usually make it impossible to extract meaningful best-fit values for individual microphysical parameters. If, on the other hand, reheating is driven by perturbative processes, then it can be possible to constrain the inflaton couplings and the reheating temperature from CMB data. This provides an indirect probe of fundamental microphysical parameters that most likely can never be measured directly in the laboratory, but have an immense impact on the evolution of the cosmos by setting the stage for the hot big bang.

  20. Maximum permissible exposure of the retina in the human eye in optical coherence tomography systems using a confocal scanning laser ophthalmoscopy platform

    NASA Astrophysics Data System (ADS)

    Rees, Sian; Dobre, George

    2014-01-01

    When using scanning laser ophthalmoscopy to produce images of the eye fundus, maximum permissible exposure (MPE) limits must be considered. These limits are set out in international standards such as the American National Standards Institute's ANSI Z136.1 Safe Use of Lasers (USA) and BS EN 60825-1:1994 (UK) and corresponding Euro norms, but these documents do not explicitly consider the case of scanned beams. Our study aims to show how MPE values can be calculated for the specific case of retinal scanning by taking into account an array of parameters, such as wavelength, exposure duration, type of scanning, line rate and field size, and how each set of initial parameters results in MPE values that correspond to thermal or photochemical damage to the retina.

  1. Estimating the Expected Value of Sample Information Using the Probabilistic Sensitivity Analysis Sample

    PubMed Central

    Oakley, Jeremy E.; Brennan, Alan; Breeze, Penny

    2015-01-01

    Health economic decision-analytic models are used to estimate the expected net benefits of competing decision options. The true values of the input parameters of such models are rarely known with certainty, and it is often useful to quantify the value to the decision maker of reducing uncertainty through collecting new data. In the context of a particular decision problem, the value of a proposed research design can be quantified by its expected value of sample information (EVSI). EVSI is commonly estimated via a 2-level Monte Carlo procedure in which plausible data sets are generated in an outer loop, and then, conditional on these, the parameters of the decision model are updated via Bayes rule and sampled in an inner loop. At each iteration of the inner loop, the decision model is evaluated. This is computationally demanding and may be difficult if the posterior distribution of the model parameters conditional on sampled data is hard to sample from. We describe a fast nonparametric regression-based method for estimating per-patient EVSI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method avoids the need to sample from the posterior distributions of the parameters and avoids the need to rerun the model. The only requirement is that sample data sets can be generated. The method is applicable with a model of any complexity and with any specification of model parameter distribution. We demonstrate in a case study the superior efficiency of the regression method over the 2-level Monte Carlo method. PMID:25810269
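
    A rough sketch of the regression-based estimator on a made-up two-option decision model: net benefits are regressed on a summary of each simulated data set, and the fitted values substitute for the inner-loop posterior expectations (the model, the binomial study design, and the cubic smoother are all illustrative assumptions):

```python
# Regression-based EVSI: regress each option's net benefit on a summary of
# the data set each PSA draw would generate; compare the expected maximum of
# the fitted values with the maximum expected net benefit.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
theta = rng.normal(0.1, 0.05, n)              # PSA draws of a key parameter
nb = np.column_stack([np.zeros(n),            # option 0: baseline
                      2000 * theta - 150])    # option 1: new treatment

# For each theta draw, simulate the proposed 50-patient study and keep a
# summary statistic of the generated data set.
summary = rng.binomial(50, np.clip(theta, 0, 1)) / 50

# Flexible regression of net benefit on the summary (a cubic fit as a crude
# nonparametric smoother); fitted values approximate E[NB | data].
fitted = np.column_stack([
    np.polyval(np.polyfit(summary, nb[:, d], 3), summary) for d in range(2)
])

evsi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(f"per-patient EVSI estimate: {evsi:.1f}")
```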

  2. Validation of a mathematical model of the bovine estrous cycle for cows with different estrous cycle characteristics.

    PubMed

    Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H

    2017-11-01

    A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective of further validating the model. The a priori criteria for validation were (1) that the resulting model can simulate the measured data correctly (i.e., goodness of fit), and (2) that this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.

  3. Speaker verification system using acoustic data and non-acoustic data

    DOEpatents

    Gable, Todd J [Walnut Creek, CA; Ng, Lawrence C [Danville, CA; Holzrichter, John F [Berkeley, CA; Burnett, Greg C [Livermore, CA

    2006-03-21

    A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of "template" parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
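
    A minimal sketch of the comparison step, with hypothetical feature names, values, and threshold (the patent does not specify this arithmetic):

```python
# Compare a claimant's parameter vector with a stored template using a
# normalized distance threshold. Features and threshold are hypothetical;
# the third feature stands in for the non-acoustic glottal shape parameter.
import numpy as np

def verify(template, claim, threshold=1.0):
    # Normalized Euclidean distance between the two parameter sets.
    scale = np.abs(template) + 1e-9
    return np.linalg.norm((template - claim) / scale) < threshold

# [pitch (Hz), formant F1 (Hz), glottal shape parameter (dimensionless)]
template = np.array([118.0, 540.0, 0.62])
claimant = np.array([121.0, 555.0, 0.60])

print("accepted" if verify(template, claimant) else "rejected")
```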

  4. Density-Functional Theory with Dispersion-Correcting Potentials for Methane: Bridging the Efficiency and Accuracy Gap between High-Level Wave Function and Classical Molecular Mechanics Methods.

    PubMed

    Torres, Edmanuel; DiLabio, Gino A

    2013-08-13

    Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generating Lennard-Jones force-field parameters that accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate between CCSD(T) and classical molecular mechanics, which can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density functional and the 6-31+G(d,p) basis set. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained with these Lennard-Jones parameters is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
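
    The last step above, fitting Lennard-Jones parameters to reference interaction energies, can be sketched as a simple curve fit; the separations and energies below are synthetic stand-ins for the DCP-level reference data:

```python
# Fit Lennard-Jones epsilon and sigma so that pair energies reproduce a set
# of reference interaction energies (synthetic data, arbitrary units).
import numpy as np
from scipy.optimize import curve_fit

def lj(r, eps, sigma):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

r_ref = np.array([3.6, 3.8, 4.0, 4.3, 4.8, 5.5])   # separations (Angstrom)
e_ref = lj(r_ref, 0.30, 3.73) \
        + np.random.default_rng(2).normal(0, 0.005, 6)  # noisy "reference"

(eps, sigma), _ = curve_fit(lj, r_ref, e_ref, p0=[0.2, 3.5])
print(f"epsilon = {eps:.3f}, sigma = {sigma:.3f}")
```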

  5. Analysis of Air Traffic Track Data with the AutoBayes Synthesis System

    NASA Technical Reports Server (NTRS)

    Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.

    2010-01-01

    The Next Generation Air Traffic System (NGATS) is aiming to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We present an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chremos, Alexandros, E-mail: achremos@imperial.ac.uk; Nikoubashman, Arash, E-mail: arashn@princeton.edu; Panagiotopoulos, Athanassios Z.

    In this contribution, we develop a coarse-graining methodology for mapping specific block copolymer systems to bead-spring particle-based models. We map the constituent Kuhn segments to Lennard-Jones particles, and establish a semi-empirical correlation between the experimentally determined Flory-Huggins parameter χ and the interaction of the model potential. For these purposes, we have performed an extensive set of isobaric–isothermal Monte Carlo simulations of binary mixtures of Lennard-Jones particles with the same size but with asymmetric energetic parameters. The phase behavior of these monomeric mixtures is then extended to chains with finite sizes through theoretical considerations. Such a top-down coarse-graining approach is important from a computational point of view, since many characteristic features of block copolymer systems are on time and length scales which are still inaccessible through fully atomistic simulations. We demonstrate the applicability of our method for generating parameters by reproducing the morphology diagram of a specific diblock copolymer, namely, poly(styrene-b-methyl methacrylate), which has been extensively studied in experiments.

  7. Model of head-neck joint fast movements in the frontal plane.

    PubMed

    Pedrocchi, A; Ferrigno, G

    2004-06-01

    The objective of this work is to develop a model representing the physiological systems driving fast head movements in the frontal plane. All the contributions occurring mechanically in the head movement are considered: damping, stiffness, physiological limit of range of motion, gravitational field, and muscular torques due to voluntary activation as well as to stretch reflex depending on fusal afferences. Model parameters are partly derived from the literature, when possible, whereas undetermined block parameters are determined by optimising the model output, fitting it to real kinematics data acquired by a motion capture system in specific experimental set-ups. The optimisation for parameter identification is performed by genetic algorithms. Results show that the model represents fast head movements very well over the whole range of inclination in the frontal plane. Such a model could be proposed as a tool for transforming kinematics data on head movements into 'neural equivalent data', especially for assessing head control disease and properly planning the rehabilitation process. In addition, the use of genetic algorithms seems to fit the problem of parameter identification well, allowing for the use of a very simple experimental set-up and granting model robustness.
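
    A minimal sketch of parameter identification with a genetic algorithm, where fitness is the (negative) misfit between simulated and recorded kinematics; the one-parameter model, population size, and mutation scale are invented for illustration:

```python
# Tiny genetic algorithm: evolve a population of candidate parameter values
# and keep the ones whose simulated output best fits the recorded data.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 50)
recorded = np.sin(2 * np.pi * 1.3 * t)          # stand-in for motion capture

def fitness(freq):
    simulated = np.sin(2 * np.pi * freq * t)
    return -np.sum((simulated - recorded) ** 2)  # higher is better

pop = rng.uniform(0.5, 3.0, 40)                  # initial population
for _ in range(60):                              # generations
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]      # selection: keep best half
    children = rng.choice(parents, 20) + rng.normal(0, 0.05, 20)  # mutation
    pop = np.concatenate([parents, children])

print("identified parameter:", pop[np.argmax([fitness(p) for p in pop])])
```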

  8. Complete set of homogeneous isotropic analytic solutions in scalar-tensor cosmology with radiation and curvature

    NASA Astrophysics Data System (ADS)

    Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil

    2012-10-01

    We study a model of a scalar field minimally coupled to gravity, with a specific potential energy for the scalar field, and include curvature and radiation as two additional parameters. Our goal is to obtain analytically the complete set of configurations of a homogeneous and isotropic universe as a function of time. This leads to a geodesically complete description of the Universe, including the passage through the cosmological singularities, at the classical level. We give all the solutions analytically without any restrictions on the parameter space of the model or initial values of the fields. We find that for generic solutions the Universe goes through a singular (zero-size) bounce by entering a period of antigravity at each big crunch and exiting from it at the following big bang. This happens cyclically again and again without violating the null-energy condition. There is a special subset of geodesically complete nongeneric solutions which perform zero-size bounces without ever entering the antigravity regime in all cycles. For these, initial values of the fields are synchronized and quantized but the parameters of the model are not restricted. There is also a subset of spatial curvature-induced solutions that have finite-size bounces in the gravity regime and never enter the antigravity phase. These exist only within a small continuous domain of parameter space without fine-tuning the initial conditions. To obtain these results, we identified 25 regions of a 6-parameter space in which the complete set of analytic solutions are explicitly obtained.

  9. Effect of music on surgical skill during simulated intraocular surgery.

    PubMed

    Kyrillos, Ralph; Caissie, Mathieu

    2017-12-01

    To evaluate the effect of Mozart music compared to silence on anterior segment surgical skill in the context of simulated intraocular surgery. Prospective stratified and randomized noninferiority trial. Fourteen ophthalmologists and 12 residents in ophthalmology. All participants were asked to perform 4 sets of predetermined tasks on the EyeSI surgical simulator (VRmagic, Mannheim, Germany). The participants completed 1 Capsulorhexis task and 1 Anti-Tremor task during 3 separate visits. The first 2 sets determined the basic level on day 1. Then, the participants were stratified by surgical experience and randomized to be exposed to music (Mozart's Sonata for Two Pianos in D major, K. 448) during either the third or the fourth set of tasks (day 2 or 3). Surgical skill was evaluated using the parameters recorded by the simulator, such as "Total score" and "Time" for both tasks, and task-specific parameters such as "Out of tolerance percentage" for the Anti-Tremor task and "Deviation of rhexis radius from 2.5 mm," "Roundness," and "Centering" for the Capsulorhexis task. The data were analyzed using the Wilcoxon signed-rank test. No statistically significant differences were noted between exposure and nonexposure for all the Anti-Tremor task parameters as well as most parameters for the Capsulorhexis task. Two parameters for the Capsulorhexis task showed a strong trend for improvement with exposure to music ("Total score" +23.3%, p = 0.025; "Roundness" +33.0%, p = 0.037). Exposure to music did not negatively impact surgical skills. Moreover, a trend for improvement was shown while listening to Mozart music. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  10. Effects and dose–response relationships of resistance training on physical performance in youth athletes: a systematic review and meta-analysis

    PubMed Central

    Lesinski, Melanie; Prieske, Olaf; Granacher, Urs

    2016-01-01

    Objectives: To quantify age, sex, sport and training type-specific effects of resistance training on physical performance, and to characterise dose–response relationships of resistance training parameters that could maximise gains in physical performance in youth athletes. Design: Systematic review and meta-analysis of intervention studies. Data sources: Studies were identified by systematic literature search in the databases PubMed and Web of Science (1985–2015). Weighted mean standardised mean differences (SMDwm) were calculated using random-effects models. Eligibility criteria for selecting studies: Only studies with an active control group were included if these investigated the effects of resistance training in youth athletes (6–18 years) and tested at least one physical performance measure. Results: 43 studies met the inclusion criteria. Our analyses revealed moderate effects of resistance training on muscle strength and vertical jump performance (SMDwm 0.8–1.09), and small effects on linear sprint, agility and sport-specific performance (SMDwm 0.58–0.75). Effects were moderated by sex and resistance training type. Independently computed dose–response relationships for resistance training parameters revealed that a training period of >23 weeks, 5 sets/exercise, 6–8 repetitions/set, a training intensity of 80–89% of 1 repetition maximum (RM), and 3–4 min rest between sets were most effective to improve muscle strength (SMDwm 2.09–3.40). Summary/conclusions: Resistance training is an effective method to enhance muscle strength and jump performance in youth athletes, moderated by sex and resistance training type. Dose–response relationships for key training parameters indicate that youth coaches should primarily implement resistance training programmes with fewer repetitions and higher intensities to improve physical performance measures of youth athletes. PMID:26851290
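
    For reference, the textbook definitions behind a weighted mean standardized mean difference under a random-effects model (general meta-analysis practice, not equations quoted from the paper):

```latex
% Per-study standardized mean difference and its weighted mean under a
% random-effects model (v_i: within-study variance; tau^2: between-study
% variance). General textbook definitions, not copied from the paper.
\[
  \mathrm{SMD}_i = \frac{\bar{x}_{T,i} - \bar{x}_{C,i}}{s_{\mathrm{pooled},i}},
  \qquad
  \mathrm{SMD}_{wm} = \frac{\sum_i w_i \, \mathrm{SMD}_i}{\sum_i w_i},
  \qquad
  w_i = \frac{1}{v_i + \tau^2}.
\]
```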

  11. Consensus Classification Using Non-Optimized Classifiers.

    PubMed

    Brownfield, Brett; Lemos, Tony; Kalivas, John H

    2018-04-03

    Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically optimizing each method, that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured at 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
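
    A minimal sketch of the fusion idea: several classifiers are used with default (non-optimized) settings and their predictions are combined by majority vote; scikit-learn's bundled wine data (three cultivars, 13 variables, matching the third data set above) serves as a convenient stand-in:

```python
# Consensus by majority vote over non-optimized classifiers: every model is
# used with its default settings, and the fused label is the most common
# prediction per sample.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)   # three wine cultivars, 13 variables
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [KNeighborsClassifier(), LinearDiscriminantAnalysis(),
          DecisionTreeClassifier(random_state=0)]
votes = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in models])

# Majority vote across classifiers for each test sample.
fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("fused accuracy:", (fused == y_te).mean())
```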

  12. Gestation-Specific Changes in the Anatomy and Physiology of Healthy Pregnant Women: An Extended Repository of Model Parameters for Physiologically Based Pharmacokinetic Modeling in Pregnancy.

    PubMed

    Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg

    2017-11-01

    In recent years, several repositories for anatomical and physiological parameters required for physiologically based pharmacokinetic modeling in pregnant women have been published. While these provide a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters or were lacking key parameters necessary for developing more detailed mechanistic pregnancy physiologically based pharmacokinetic models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best performing functions were selected based on numerical and visual diagnostics as well as on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data points. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population physiologically based pharmacokinetic models. Such models can ultimately provide a valuable tool to investigate pharmacokinetics during pregnancy in silico and support informed decision making regarding optimal dosing regimens in this vulnerable special population.
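
    A rough sketch of the function-selection step: fit several candidate functions of gestational age and keep the best one, with an AIC-style penalty standing in for the paper's numerical and visual diagnostics (the data points and the polynomial candidate set are placeholders):

```python
# Fit candidate functions to a parameter as a function of gestational age
# and keep the one with the best penalized fit.
import numpy as np

ga = np.array([0.0, 10.0, 20.0, 30.0, 40.0])      # gestational age (weeks)
value = np.array([70.0, 72.5, 78.0, 85.0, 90.0])  # e.g., body weight (kg)

def aic(degree):
    resid = np.polyval(np.polyfit(ga, value, degree), ga) - value
    sse = np.sum(resid ** 2)
    n, k = len(ga), degree + 1
    return n * np.log(sse / n) + 2 * k             # penalize extra terms

degrees = {"linear": 1, "quadratic": 2, "cubic": 3}
best = min(degrees, key=lambda name: aic(degrees[name]))
print("selected function:", best)
```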

  13. Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design

    NASA Technical Reports Server (NTRS)

    Anderson, B. J.; Justus, C. G.; Batts, G. W.

    2001-01-01

    Thermal analysis and design of Earth orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and the outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and with the SSP 30425 and TM-4527 products over a number of years revealed difficulties in the interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the process of selecting the parameters for input into extreme hot and cold thermal analyses and design specifications. In the process, greatly improved values for the cold-case OLR for high-inclination orbits were derived. Thermal parameters for satellites in low, medium, and high inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information as to the interpretation and application of the information and an introduction to the STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.

  14. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivate the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
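
    A minimal sketch of the feature-ranking stage named above, using scikit-learn's sequential feature selector wrapped around a k-nearest-neighbor classifier on a synthetic Monte Carlo table (failure driven by two of four dispersed design parameters):

```python
# Rank dispersed design parameters by how well they predict a pass/fail
# outcome, using sequential feature selection around a kNN classifier.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(500, 4))          # dispersed design parameters
y = (X[:, 0] + X[:, 2] > 0.5).astype(int)      # failure depends on two of them

knn = KNeighborsClassifier(n_neighbors=5)
sfs = SequentialFeatureSelector(knn, n_features_to_select=2).fit(X, y)
print("important parameters:", np.flatnonzero(sfs.get_support()))
```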

  15. Biological Movement and Laws of Physics.

    PubMed

    Latash, Mark L

    2017-07-01

    Living systems may be defined as systems able to organize new, biology-specific, laws of physics and modify their parameters for specific tasks. Examples include the force-length muscle dependence mediated by the stretch reflex, and the control of movements with modification of the spatial referent coordinates for salient performance variables. Low-dimensional sets of referent coordinates at a task level are transformed to higher-dimensional sets at lower hierarchical levels in a way that ensures stability of performance. Stability of actions can be controlled independently of the actions (e.g., anticipatory synergy adjustments). Unintentional actions reflect relaxation processes leading to drifts of corresponding referent coordinates in the absence of changes in external load. Implications of this general framework for movement disorders, motor development, motor skill acquisition, and even philosophy are discussed.

  16. Friction spinning - Twist phenomena and the capability of influencing them

    NASA Astrophysics Data System (ADS)

    Lossen, Benjamin; Homberg, Werner

    2016-10-01

    The friction spinning process belongs to the incremental forming techniques. The process combines elements from both metal spinning and friction welding. The selective combination of process elements from these two processes results in the integration of friction sub-processes into a spinning process. This implies self-induced heat generation with the possibility of manufacturing functionally graded parts from tubes and sheets. Compared with conventional spinning processes, this in-process heat treatment permits the extension of existing forming limits and also the production of more complex geometries. Furthermore, the defined adjustment of part properties like strength, grain size/orientation and surface conditions can be achieved through appropriate process parameter settings and consequently by setting a specific temperature profile in combination with the degree of deformation. The results presented from tube forming start with an investigation into the resulting twist phenomena in flange processing. In this way, the influence of the main parameters, such as rotation speed, feed rate, forming paths and tool friction surface, and their effects on temperature, forces and finally the twist behavior are analyzed. Following this, the significant correlations between the parameters and a new process strategy are set out in order to show the possibility of achieving a defined grain texture orientation.

  17. Structural identifiability analysis of a cardiovascular system model.

    PubMed

    Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas

    2016-05-01

    The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from every output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made, assuming known cardiac valve resistances. Because of the poor practical identifiability of these four parameters, this assumption is common. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  18. Gait parameters are differently affected by concurrent smartphone-based activities with scaled levels of cognitive effort.

    PubMed

    Caramia, Carlotta; Bernabucci, Ivan; D'Anna, Carmen; De Marchis, Cristiano; Schmid, Maurizio

    2017-01-01

    The widespread and pervasive use of smartphones for sending messages, calling, and entertainment purposes, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking, in some specific conditions, might significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking, evaluating the variations of several gait parameters. 10 young healthy students (all smartphone proficient users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play with a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison among normal walking and different dual task demands. An overall significant effect of task type on the aforementioned parameters group was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed show a decrement, while step time increases as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking and concurrent smartphone use.
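
    One of the spatio-temporal parameters mentioned above, stride frequency, can be sketched as the dominant spectral peak of a trunk acceleration signal (the signal below is synthetic; the study's parameter set is far richer):

```python
# Estimate stride frequency from a trunk-mounted accelerometer signal via
# the dominant peak of its amplitude spectrum. Synthetic 30 s recording.
import numpy as np

fs = 100.0                                    # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
acc = np.sin(2 * np.pi * 0.9 * t) \
      + 0.3 * np.random.default_rng(4).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(acc - acc.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
stride_freq = freqs[np.argmax(spectrum)]
print(f"estimated stride frequency: {stride_freq:.2f} Hz")
```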

  19. Gait parameters are differently affected by concurrent smartphone-based activities with scaled levels of cognitive effort

    PubMed Central

    Bernabucci, Ivan; D'Anna, Carmen; De Marchis, Cristiano; Schmid, Maurizio

    2017-01-01

    The widespread and pervasive use of smartphones for sending messages, calling, and entertainment purposes, mainly among young adults, is often accompanied by the concurrent execution of other tasks. Recent studies have analyzed how texting, reading or calling while walking, in some specific conditions, might significantly influence gait parameters. The aim of this study is to examine the effect of different smartphone activities on walking, evaluating the variations of several gait parameters. 10 young healthy students (all smartphone proficient users) were instructed to text chat (with two different levels of cognitive load), call, surf on a social network or play with a math game while walking in a real-life outdoor setting. Each of these activities is characterized by a different cognitive load. Using an inertial measurement unit on the lower trunk, spatio-temporal gait parameters, together with regularity, symmetry and smoothness parameters, were extracted and grouped for comparison among normal walking and different dual task demands. An overall significant effect of task type on the aforementioned parameters group was observed. The alterations in gait parameters vary as a function of cognitive effort. In particular, stride frequency, step length and gait speed show a decrement, while step time increases as a function of cognitive effort. Smoothness, regularity and symmetry parameters are significantly altered for specific dual task conditions, mainly along the mediolateral direction. These results may lead to a better understanding of the possible risks related to walking and concurrent smartphone use. PMID:29023456

  20. Local Variability of Parameters for Characterization of the Corneal Subbasal Nerve Plexus.

    PubMed

    Winter, Karsten; Scheibe, Patrick; Köhler, Bernd; Allgeier, Stephan; Guthoff, Rudolf F; Stachs, Oliver

    2016-01-01

    The corneal subbasal nerve plexus (SNP) offers high potential for early diagnosis of diabetic peripheral neuropathy. Changes in subbasal nerve fibers can be assessed in vivo by confocal laser scanning microscopy (CLSM) and quantified using specific parameters. While current study results agree regarding parameter tendency, there are considerable differences in terms of absolute values. The present study set out to identify factors that might account for this high parameter variability. In three healthy subjects, we used a novel method of software-based large-scale reconstruction that provided SNP images of the central cornea, decomposed the image areas into all possible image sections corresponding to the size of a single conventional CLSM image (0.16 mm²), and calculated a set of parameters for each image section. In order to carry out a large number of virtual examinations within the reconstructed image areas, an extensive simulation procedure (10,000 runs per image) was implemented. The three analyzed images ranged in size from 3.75 mm² to 4.27 mm². The spatial configuration of the subbasal nerve fiber networks varied greatly across the cornea and thus caused heavily location-dependent results as well as wide value ranges for the parameters assessed. Distributions of SNP parameter values varied greatly between the three images and showed significant differences between all images for every parameter calculated (p < 0.001 in each case). The relatively small size of the conventionally evaluated SNP area is a contributory factor in high SNP parameter variability. Averaging of parameter values based on multiple CLSM frames does not necessarily result in good approximations of the respective reference values of the whole image area. This illustrates the potential for examiner bias when selecting SNP images in the central corneal area.

  1. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, are recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with an SIR (Sequential Importance Resampling) particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization, with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques, with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
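
    A minimal sketch of the sequential importance resampling (SIR) step that such a particle method builds on, shown for a toy scalar state; the Gaussian forecast noise, observation error and moisture-like values are illustrative assumptions, not the authors' lysimeter model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sir_step(particles, weights, observation, obs_std):
        """Reweight particles by the observation likelihood, then resample."""
        likelihood = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
        weights = weights * likelihood
        weights /= weights.sum()
        # Systematic resampling keeps the ensemble focused on plausible states.
        u = (rng.random() + np.arange(len(weights))) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), u)
        return particles[idx], np.full(len(weights), 1.0 / len(weights))

    # Toy example: track a slowly varying soil-moisture-like state.
    n = 500
    particles = rng.normal(0.30, 0.05, n)        # initial ensemble
    weights = np.full(n, 1.0 / n)
    for obs in [0.32, 0.31, 0.35]:               # synthetic observations
        particles += rng.normal(0.0, 0.01, n)    # stochastic model forecast
        particles, weights = sir_step(particles, weights, obs, obs_std=0.02)
    print(particles.mean())
    ```

    A smoother variant, as proposed here, accumulates the likelihoods of all observations in a window before resampling states and parameters jointly.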

  2. Is it the time to rethink clinical decision-making strategies? From a single clinical outcome evaluation to a Clinical Multi-criteria Decision Assessment (CMDA).

    PubMed

    Migliore, Alberto; Integlia, Davide; Bizzi, Emanuele; Piaggio, Tomaso

    2015-10-01

    There are plenty of different clinical, organizational and economic parameters to consider in order to have a complete assessment of the total impact of a pharmaceutical treatment. In the attempt to follow a holistic approach aimed at providing an evaluation embracing all clinical parameters, it is necessary to compare and weight multiple criteria. Therefore, a change is required: we need to move from a decision-making context based on the assessment of a single criterion towards a transparent and systematic framework enabling decision makers to assess all relevant parameters simultaneously in order to choose the best treatment. In order to apply the MCDA methodology to the clinical decision of which pharmaceutical treatment (or medical device) to use for a specific pathology, we suggest a specific application of Multiple Criteria Decision Analysis for this purpose: a Clinical Multi-criteria Decision Assessment (CMDA). In CMDA, results from both meta-analyses and observational studies are weighed by a clinical consensus after attributing weights to specific domains and related parameters. The decision results from a comparison of all consequences (i.e., efficacy, safety, adherence, administration route) behind the choice of a specific pharmacological treatment. The matching yields a score (in absolute value) linking each parameter with a specific intervention, and then a final score for each treatment. The higher the final score, the more appropriate the intervention is to treat the disease considering all criteria (domains and parameters). The results allow the physician to evaluate the best clinical treatment for his patients while considering all relevant criteria simultaneously, such as clinical effectiveness for all parameters and administration route. The use of the CMDA model yields a clear and complete indication of the best pharmaceutical treatment to use for patients, helping physicians to choose drugs with a complete set of information imputed into the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
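
    To make the scoring arithmetic concrete, here is a minimal sketch of a CMDA-style weighted aggregation; the domains, weights and scores are invented placeholders, since in the proposed method they would come from clinical consensus, meta-analyses and observational studies.

    ```python
    # Hypothetical domain weights and per-treatment scores (placeholders).
    weights = {"efficacy": 0.4, "safety": 0.3, "adherence": 0.2, "route": 0.1}
    scores = {
        "treatment_A": {"efficacy": 8, "safety": 6, "adherence": 7, "route": 9},
        "treatment_B": {"efficacy": 7, "safety": 9, "adherence": 8, "route": 5},
    }

    def cmda_score(treatment):
        """Weighted sum across all criteria for one treatment."""
        return sum(weights[c] * scores[treatment][c] for c in weights)

    ranked = sorted(scores, key=cmda_score, reverse=True)
    print({t: round(cmda_score(t), 2) for t in scores})
    print("most appropriate:", ranked[0])   # highest final score wins
    ```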

  3. Cure fraction model with random effects for regional variation in cancer survival.

    PubMed

    Seppä, Karri; Hakulinen, Timo; Kim, Hyon-Jung; Läärä, Esa

    2010-11-30

    Assessing regional differences in the survival of cancer patients is important but difficult when separate regions are small or sparsely populated. In this paper, we apply a mixture cure fraction model with random effects to cause-specific survival data of female breast cancer patients collected by the population-based Finnish Cancer Registry. Two sets of random effects were used to capture the regional variation in the cure fraction and in the survival of the non-cured patients, respectively. This hierarchical model was implemented in a Bayesian framework using a Metropolis-within-Gibbs algorithm. To avoid poor mixing of the Markov chain, when the variance of either set of random effects was close to zero, posterior simulations were based on a parameter-expanded model with tailor-made proposal distributions in Metropolis steps. The random effects allowed the fitting of the cure fraction model to the sparse regional data and the estimation of the regional variation in 10-year cause-specific breast cancer survival with a parsimonious number of parameters. Before 1986, the capital of Finland clearly stood out from the rest, but since then all the 21 hospital districts have achieved approximately the same level of survival. Copyright © 2010 John Wiley & Sons, Ltd.
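
    A minimal sketch of the mixture cure survival function underlying such models, assuming a Weibull survival curve for the non-cured patients (the abstract does not specify the baseline survival or the exact random-effects structure; region-specific random effects would typically shift the cure fraction on a logit scale):

    ```python
    import numpy as np

    def mixture_cure_survival(t, cure_fraction, shape, scale):
        """S(t) = pi + (1 - pi) * S_u(t): a cured fraction pi never
        experiences the event; the non-cured follow Weibull survival."""
        s_uncured = np.exp(-(t / scale) ** shape)
        return cure_fraction + (1 - cure_fraction) * s_uncured

    t = np.linspace(0, 10, 6)   # years since diagnosis
    print(mixture_cure_survival(t, cure_fraction=0.6, shape=1.2, scale=3.0))
    ```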

  4. Accounting for baryonic effects in cosmic shear tomography: Determining a minimal set of nuisance parameters using PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eifler, Tim; Krause, Elisabeth; Dodelson, Scott

    2014-05-28

    Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
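
    The following sketch illustrates the core idea of PCA marginalization under simplified assumptions: differences between baryonic-scenario data vectors and a baseline are decomposed by SVD, and the leading modes are projected out before the likelihood is evaluated. The random "data vectors" stand in for the shear power spectra of the actual analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_data, n_scenarios = 50, 8

    # Placeholder data vectors: one per baryonic scenario plus a baseline.
    baseline = rng.normal(1.0, 0.1, n_data)
    scenarios = baseline + 0.05 * rng.normal(size=(n_scenarios, n_data))

    # PCA on the scenario-minus-baseline differences isolates the few
    # directions along which baryonic physics moves the data vector.
    diffs = scenarios - baseline
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    pcs = vt[:3]   # keep 3 to 4 nuisance modes, as in the study

    def project_out(data_vector):
        """Remove the baryonic PCs before computing the likelihood."""
        coeffs = pcs @ (data_vector - baseline)
        return data_vector - pcs.T @ coeffs

    before = np.linalg.norm(scenarios[0] - baseline)
    after = np.linalg.norm(project_out(scenarios[0]) - baseline)
    print(before, ">", after)   # the baryonic offset shrinks along the PCs
    ```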

  5. Reliable evaluation of the quantal determinants of synaptic efficacy using Bayesian analysis

    PubMed Central

    Beato, M.

    2013-01-01

    Communication between neurones in the central nervous system depends on synaptic transmission. The efficacy of synapses is determined by pre- and postsynaptic factors that can be characterized using quantal parameters such as the probability of neurotransmitter release, number of release sites, and quantal size. Existing methods of estimating the quantal parameters based on multiple probability fluctuation analysis (MPFA) are limited by their requirement for long recordings to acquire substantial data sets. We therefore devised an algorithm, termed Bayesian Quantal Analysis (BQA), that can yield accurate estimates of the quantal parameters from data sets of as small a size as 60 observations for each of only 2 conditions of release probability. Computer simulations are used to compare its performance in accuracy with that of MPFA, while varying the number of observations and the simulated range in release probability. We challenge BQA with realistic complexities characteristic of complex synapses, such as increases in the intra- or intersite variances, and heterogeneity in release probabilities. Finally, we validate the method using experimental data obtained from electrophysiological recordings to show that the effect of an antagonist on postsynaptic receptors is correctly characterized by BQA by a specific reduction in the estimates of quantal size. Since BQA routinely yields reliable estimates of the quantal parameters from small data sets, it is ideally suited to identify the locus of synaptic plasticity for experiments in which repeated manipulations of the recording environment are unfeasible. PMID:23076101
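
    The abstract does not spell out BQA's internals, but the binomial quantal model it rests on can be sketched as below; the coarse grid search is a crude stand-in for the actual Bayesian machinery, and all parameter values are synthetic.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic amplitudes: N release sites, release probability p,
    # quantal size q, plus Gaussian measurement noise.
    rng = np.random.default_rng(2)
    N_true, p_true, q_true, noise = 5, 0.4, 10.0, 2.0
    amps = rng.binomial(N_true, p_true, 60) * q_true + rng.normal(0, noise, 60)

    def log_likelihood(N, p, q):
        """Mixture over the number of released quanta k = 0..N."""
        k = np.arange(N + 1)
        mix = stats.binom.pmf(k, N, p)
        dens = (mix * stats.norm.pdf(amps[:, None], k * q, noise)).sum(axis=1)
        return np.log(dens + 1e-300).sum()   # guard against underflow

    # Coarse grid "posterior" mode under flat priors.
    grid = [(N, p, q) for N in range(2, 9)
            for p in np.linspace(0.1, 0.9, 9)
            for q in np.linspace(5.0, 15.0, 11)]
    print("estimate (N, p, q):", max(grid, key=lambda t: log_likelihood(*t)))
    ```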

  6. Novel histopathologic feature identified through image analysis augments stage II colorectal cancer clinical reporting

    PubMed Central

    Caie, Peter D.; Zhou, Ying; Turnbull, Arran K.; Oniscu, Anca; Harrison, David J.

    2016-01-01

    A number of candidate histopathologic factors show promise in identifying stage II colorectal cancer (CRC) patients at a high risk of disease-specific death; however, they can suffer from low reproducibility, and none has replaced classical pathologic staging. We developed an image analysis algorithm which standardized the quantification of specific histopathologic features and exported a multi-parametric feature set captured without bias. The image analysis algorithm was executed across a training set (n = 50) and the resultant big data was distilled through decision tree modelling to identify the most informative parameters for sub-categorizing stage II CRC patients. The most significant, and novel, parameter identified was the ‘sum area of poorly differentiated clusters’ (AreaPDC). This feature was validated across a second cohort of stage II CRC patients (n = 134) (HR = 4; 95% CI, 1.5–11). Finally, the AreaPDC was integrated with the significant features within the clinical pathology report, pT stage and differentiation, into a novel prognostic index (HR = 7.5; 95% CI, 3–18.5) which improved upon current clinical staging (HR = 4.26; 95% CI, 1.7–10.3). The identification of poorly differentiated clusters as highly significant in disease progression presents evidence to suggest that these features could be the source of novel targets to decrease the risk of disease-specific death. PMID:27322148

  7. OPTIMAL EXPERIMENT DESIGN FOR MAGNETIC RESONANCE FINGERPRINTING

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.

    2017-01-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369

  8. Optimal experiment design for magnetic resonance fingerprinting.

    PubMed

    Bo Zhao; Haldar, Justin P; Setsompop, Kawin; Wald, Lawrence L

    2016-08-01

    Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance.
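
    A hedged sketch of the CRB computation at the heart of this design problem, using a toy T1-recovery signal in place of the actual Bloch-simulation-based fingerprinting model:

    ```python
    import numpy as np

    def signal(theta, tr):
        """Toy stand-in for the MR fingerprinting signal model: T1
        recovery sampled at a train of repetition times."""
        m0, t1 = theta
        return m0 * (1 - np.exp(-tr / t1))

    def crb(theta, tr, sigma=0.01, eps=1e-6):
        """Cramér-Rao bound from a finite-difference Jacobian."""
        J = np.stack([(signal(theta + eps * e, tr) - signal(theta, tr)) / eps
                      for e in np.eye(len(theta))], axis=1)
        fisher = J.T @ J / sigma**2
        return np.linalg.inv(fisher)

    theta = np.array([1.0, 0.8])   # (M0, T1 in seconds), assumed values
    schedules = [np.linspace(0.1, 1.0, 20), np.linspace(0.5, 3.0, 20)]
    # Experiment design reduced to its essence: pick the TR schedule that
    # minimizes the CRB on T1 (here by brute force over two candidates).
    best = min(schedules, key=lambda tr: crb(theta, tr)[1, 1])
    print("preferred schedule starts at TR =", best[0])
    ```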

  9. Computational dissection of human episodic memory reveals mental process-specific genetic profiles

    PubMed Central

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G.; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J.-F.

    2015-01-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory. PMID:26261317

  10. Computational dissection of human episodic memory reveals mental process-specific genetic profiles.

    PubMed

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J-F

    2015-09-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory.

  11. AMT-200S Motor Glider Parameter and Performance Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2011-01-01

    Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramér-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.
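
    The output-error technique can be illustrated on a toy first-order system excited by a doublet; the dynamics below are a stand-in for the glider's aerodynamic model, not the actual aircraft equations.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)

    def simulate(params, u, dt=0.05):
        """Toy first-order response y' = a*y + b*u, integrated with Euler
        steps, standing in for aircraft dynamics excited by a doublet."""
        a, b = params
        y = np.zeros(len(u))
        for k in range(1, len(u)):
            y[k] = y[k - 1] + dt * (a * y[k - 1] + b * u[k - 1])
        return y

    u = np.concatenate([np.ones(40), -np.ones(40), np.zeros(40)])  # doublet
    y_meas = simulate((-1.5, 2.0), u) + rng.normal(0, 0.02, len(u))

    # Output error: pick parameters so the simulated output matches the
    # measured output in the least-squares sense.
    fit = least_squares(lambda p: simulate(p, u) - y_meas, x0=(-1.0, 1.0))
    print("estimated (a, b):", fit.x)
    ```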

  12. Processor design optimization methodology for synthetic vision systems

    NASA Astrophysics Data System (ADS)

    Wren, Bill; Tarleton, Norman G.; Symosek, Peter F.

    1997-06-01

    Architecture optimization requires numerous inputs, from hardware to software specifications. The task of varying these input parameters to obtain a system architecture that is optimal with regard to cost, specified performance, and method of upgrade considerably increases development cost, because the space of possible configurations is effectively infinite and cannot be captured by simple enumeration or a set of inequalities. We shall address the use of a PC-based tool using genetic algorithms to optimize the architecture for an avionics synthetic vision system, specifically a passive millimeter-wave system implementation.

  13. Automatic learning of rules. A practical example of using artificial intelligence to improve computer-based detection of myocardial infarction and left ventricular hypertrophy in the 12-lead ECG.

    PubMed

    Kaiser, W; Faber, T S; Findeis, M

    1996-01-01

    The authors developed a computer program that detects myocardial infarction (MI) and left ventricular hypertrophy (LVH) in two steps: (1) by extracting parameter values from a 10-second, 12-lead electrocardiogram, and (2) by classifying the extracted parameter values with rule sets. Every disease has its dedicated set of rules. Hence, there are separate rule sets for anterior MI, inferior MI, and LVH. If at least one rule is satisfied, the disease is said to be detected. The computer program automatically develops these rule sets. A database (learning set) of healthy subjects and patients with MI, LVH, and mixed MI+LVH was used. After defining the rule type, initial limits, and expected quality of the rules (positive predictive value, minimum number of patients), the program creates a set of rules by varying the limits. The general rule type is defined as: disease = (lim_1l < p_1 ≤ lim_1u) and (lim_2l < p_2 ≤ lim_2u) and … and (lim_nl < p_n ≤ lim_nu). When defining the rule types, only the parameters (p_1 … p_n) that are known as clinical electrocardiographic criteria (amplitudes [mV] of Q, R, and T waves and ST-segment; duration [ms] of Q wave; frontal angle [degrees]) were used. This allowed for submitting the learned rule sets to an independent investigator for medical verification. It also allowed the creation of explanatory texts with the rules. These advantages are not offered by the neurons of a neural network. The learned rules were checked against a test set and the following results were obtained: MI: sensitivity 76.2%, positive predictive value 98.6%; LVH: sensitivity 72.3%, positive predictive value 90.9%. The specificity ratings for MI are better than 98%; for LVH, better than 90%.
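
    The rule evaluation itself is simple interval checking, as this sketch shows; the parameter names and limits are invented placeholders, not the learned ECG criteria.

    ```python
    import math

    INF = math.inf

    # Each rule is a list of (parameter, lower, upper) constraints of the
    # form lower < p <= upper; a disease is detected if at least one rule
    # in its dedicated rule set is satisfied.
    anterior_mi_rules = [
        [("q_amp_v2", -INF, -0.1), ("r_amp_v2", 0.0, 0.2)],  # rule 1
        [("st_amp_v3", 0.15, INF)],                          # rule 2
    ]

    def rule_satisfied(rule, values):
        return all(lo < values[p] <= hi for p, lo, hi in rule)

    def detect(rule_set, values):
        return any(rule_satisfied(r, values) for r in rule_set)

    values = {"q_amp_v2": -0.15, "r_amp_v2": 0.1, "st_amp_v3": 0.05}
    print(detect(anterior_mi_rules, values))  # True, via rule 1
    ```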

  14. A simple method for constructing the inhomogeneous quantum group $IGL_q(n)$ and its universal enveloping algebra $U_q(igl(n))$

    NASA Astrophysics Data System (ADS)

    Shariati, A.; Aghamohammadi, A.

    1995-12-01

    We propose a simple and concise method to construct the inhomogeneous quantum group $IGL_q(n)$ and its universal enveloping algebra $U_q(igl(n))$. Our technique is based on embedding an n-dimensional quantum space in an (n+1)-dimensional one as the set $x_{n+1}=1$. This is possible only if one considers the multiparametric quantum space whose parameters are fixed in a specific way. The quantum group $IGL_q(n)$ is then the subset of $GL_q(n+1)$ which leaves the $x_{n+1}=1$ subset invariant. For the deformed universal enveloping algebra $U_q(igl(n))$, we will show that it can also be embedded in $U_q(gl(n+1))$, provided one uses the multiparametric deformation of $U(gl(n+1))$ with a specific choice of its parameters.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernal, José Luis; Cuesta, Antonio J.; Verde, Licia, E-mail: joseluis.bernal@icc.ub.edu, E-mail: liciaverde@icc.ub.edu, E-mail: ajcuesta@icc.ub.edu

    We perform an empirical consistency test of General Relativity/dark energy by disentangling expansion history and growth of structure constraints. We replace each late-universe parameter that describes the behavior of dark energy with two meta-parameters: one describing geometrical information in cosmological probes, and the other controlling the growth of structure. If the underlying model (a standard wCDM cosmology with General Relativity) is correct, that is, under the null hypothesis, the two meta-parameters coincide. If they do not, it could indicate a failure of the model or systematics in the data. We present a global analysis using state-of-the-art cosmological data sets which points in the direction that cosmic structures prefer a weaker growth than that inferred by background probes. This result could signify inconsistencies of the model, the necessity of extensions to it, or the presence of systematic errors in the data. We examine all these possibilities. The fact that the result is mostly driven by a specific sub-set of galaxy cluster abundance data points to the need for a better understanding of this probe.

  16. An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian

    For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.

  17. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.

  18. Relaxation Time Distribution (RTD) of Spectral Induced Polarization (SIP) data from environmental studies

    NASA Astrophysics Data System (ADS)

    Ntarlagiannis, D.; Ustra, A.; Slater, L. D.; Zhang, C.; Mendonça, C. A.

    2015-12-01

    In this work we present an alternative formulation of the Debye Decomposition (DD) of complex conductivity spectra, with a new set of parameters that are directly related to the continuous Debye relaxation model. The procedure determines the relaxation time distribution (RTD) and two frequency-independent parameters that modulate the induced polarization spectra. The distribution of relaxation times quantifies the contribution of each distinct relaxation process, which can in turn be associated with specific polarization processes and characterized in terms of electrochemical and interfacial parameters derived from mechanistic models. Synthetic tests show that the procedure can successfully fit spectral induced polarization (SIP) data and accurately recover the RTD. The procedure was applied to different data sets, focusing on environmental applications: sand-clay mixtures artificially contaminated with toluene, and crude oil-contaminated sands undergoing biodegradation. The results identify characteristic relaxation times that can be associated with distinct polarization processes resulting either from the contaminant itself or from transformations associated with biodegradation. The inversion results provide information regarding the relative strength and dominant relaxation time of these polarization processes.
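
    A minimal sketch of a Debye-decomposition-style fit: the imaginary part of the spectrum is modeled as a non-negative combination of Debye kernels on a log-spaced grid of relaxation times. Real implementations fit the full complex spectrum and add regularization; the two-peak RTD below is synthetic.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    taus = np.logspace(-3, 2, 30)     # relaxation time grid (s), assumed
    freqs = np.logspace(-2, 3, 40)    # measurement frequencies (Hz)
    omega = 2 * np.pi * freqs

    def debye_kernel(omega, taus):
        """Imaginary part of the Debye response for each (omega, tau)."""
        wt = np.outer(omega, taus)
        return wt / (1 + wt ** 2)

    # Synthetic spectrum from two distinct relaxation processes plus noise.
    rng = np.random.default_rng(4)
    true_m = np.zeros(len(taus))
    true_m[8], true_m[20] = 0.02, 0.01
    data = debye_kernel(omega, taus) @ true_m + rng.normal(0, 5e-5, len(omega))

    # Non-negative least squares recovers the relaxation time distribution.
    rtd, _ = nnls(debye_kernel(omega, taus), data)
    print("dominant relaxation times:", taus[rtd > 0.5 * rtd.max()])
    ```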

  19. A minimum data set of water quality parameters to assess and compare treatment efficiency of stormwater facilities.

    PubMed

    Ingvertsen, Simon Toft; Jensen, Marina Bergen; Magid, Jakob

    2011-01-01

    Urban stormwater runoff is often of poor quality, impacting aquatic ecosystems and limiting the use of stormwater runoff for recreational purposes. Several stormwater treatment facilities (STFs) are in operation or at the pilot testing stage, but their efficiencies are neither well documented nor easily compared due to the complex contaminant profile of stormwater and the highly variable runoff hydrograph. On the basis of a review of available data sets on urban stormwater quality and environmental contaminant behavior, we suggest a few carefully selected contaminant parameters (the minimum data set) to be obligatory when assessing and comparing the efficiency of STFs. Consistent use of the minimum data set in all future monitoring schemes for STFs will ensure broad-spectrum testing at low costs and strengthen comparability among facilities. The proposed minimum data set includes: (i) fine fraction of suspended solids (<63 μm), (ii) total concentrations of zinc and copper, (iii) total concentrations of phenanthrene, fluoranthene, and benzo(b,k)fluoranthene, and (iv) total concentrations of phosphorus and nitrogen. Indicator pathogens and other specific contaminants (e.g., chromium, pesticides, phenols) may be added if recreational or certain catchment-scale objectives are to be met. Issues that need further investigation have been identified during the iterative process of developing the minimum data set. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  20. Used Nuclear Fuel-Storage, Transportation & Disposal Analysis Resource and Data System (UNF-ST&DARDS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Kaushik; Clarity, Justin B; Cumberland, Riley M

    This will be licensed via RSICC. A new, integrated data and analysis system has been designed to simplify and automate the performance of accurate and efficient evaluations for characterizing the input to the overall nuclear waste management system: UNF-Storage, Transportation & Disposal Analysis Resource and Data System (UNF-ST&DARDS). A relational database within UNF-ST&DARDS provides a standard means by which UNF-ST&DARDS can succinctly store and retrieve modeling and simulation (M&S) parameters for specific spent nuclear fuel analyses. A library of various analysis model templates provides the ability to communicate the various sets of M&S parameters to the most appropriate M&S application. Interactive visualization capabilities facilitate data analysis and results interpretation. UNF-ST&DARDS's current analysis capabilities include (1) assembly-specific depletion and decay, and (2) spent nuclear fuel cask-specific criticality and shielding. Currently, UNF-ST&DARDS uses the SCALE nuclear analysis code system for performing nuclear analysis.

  1. Growth and laccase production kinetics of Trametes versicolor in a stirred tank reactor.

    PubMed

    Thiruchelvam, A T; Ramsay, Juliana A

    2007-03-01

    White rot fungi are a promising option to treat recalcitrant organic molecules, such as lignin, polycyclic aromatic hydrocarbons, and textile dyes, because of the lignin-modifying enzymes (LMEs) they secrete. Because knowledge of the kinetic parameters is important to better design and operate bioreactors to cultivate these fungi for degradation and/or to produce LME(s), these parameters were determined using Trametes versicolor ATCC 20869 (ATCC, American Type Culture Collection) in a magnetic stir bar reactor. A complete set of kinetic data has not been previously published for this culture. Higher than previously reported growth rates, with high laccase production of up to 1,385 U l⁻¹, occurred during growth without ammonium or glucose limitation. The maximum specific growth rate averaged 0.94 ± 0.23 day⁻¹, whereas the maximum specific substrate consumption rates for glucose and ammonium were 3.37 ± 1.16 and 0.15 ± 0.04 day⁻¹, respectively. The maximum specific oxygen consumption rate was 1.63 ± 0.36 day⁻¹.

  2. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

    Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.

  3. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude some of the parameter sets that show extreme values for any of the objective functions in order to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins, as sketched below. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
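
    The weighted closeness computation might look like the sketch below; the weights, parameter vectors and quadratic distance are illustrative assumptions, not the project's actual implementation.

    ```python
    import numpy as np

    # Hypothetical a-priori similarity weights: parameters believed to be
    # physically similar across neighboring basins get higher weight.
    weights = np.array([1.0, 0.8, 0.3, 0.1])

    def closeness(candidate, neighbor_sets):
        """Weighted squared distance from one Pareto member to the
        parameter sets of neighboring basins."""
        return sum(np.sum(weights * (candidate - n) ** 2)
                   for n in neighbor_sets)

    pareto = [np.array([0.9, 1.2, 0.4, 2.0]),   # non-inferior candidates
              np.array([1.1, 1.0, 0.9, 1.5])]
    neighbors = [np.array([1.0, 1.1, 0.5, 1.8])]

    final = min(pareto, key=lambda c: closeness(c, neighbors))
    print("regionalized parameter set:", final)
    ```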

  4. Plasmacytoid dendritic cell and functional HIV Gag p55-specific T cells before treatment interruption can inform set-point plasma HIV viral load after treatment interruption in chronically suppressed HIV-1(+) patients.

    PubMed

    Papasavvas, Emmanouil; Foulkes, Andrea; Yin, Xiangfan; Joseph, Jocelin; Ross, Brian; Azzoni, Livio; Kostman, Jay R; Mounzer, Karam; Shull, Jane; Montaner, Luis J

    2015-07-01

    The identification of immune correlates of HIV control is important for the design of immunotherapies that could support cure or antiretroviral therapy (ART) intensification-related strategies. ART interruptions may facilitate this task through exposure of a partially ART-reconstituted immune system to endogenous virus. We investigated the relationship between set-point plasma HIV viral load (VL) during an ART interruption and innate/adaptive parameters before or after interruption. Dendritic cell (DC), natural killer (NK) cell and HIV Gag p55-specific T-cell functional responses were measured in paired cryopreserved peripheral blood mononuclear cells obtained at the beginning (on ART) and at set-point of an open-ended interruption from 31 ART-suppressed chronically HIV-1(+) patients. Spearman correlation and linear regression modeling were used. Frequencies of plasmacytoid DC (pDC) and of HIV Gag p55-specific CD3(+) CD4(-) perforin(+) IFN-γ(+) cells at the beginning of interruption were negatively associated with set-point plasma VL. Inclusion of both variables with interaction into a model resulted in the best fit (adjusted R(2) = 0.6874). Frequencies of pDC or HIV Gag p55-specific CD3(+) CD4(-) CFSE(lo) CD107a(+) cells at set-point were negatively associated with set-point plasma VL. The dual contribution of pDC and anti-HIV T-cell responses to viral control, supported by our models, suggests that these variables may serve as immune correlates of viral control and could be integrated in cure or ART-intensification strategies. © 2015 John Wiley & Sons Ltd.

  5. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed, to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the second derivative of JULES allows us to produce posterior probability density functions of the parameters, and how knowledge of parameter values is constrained by observations.

  6. Towards a Decision Support Tool for 3d Visualisation: Application to Selectivity Purpose of Single Object in a 3d City Scene

    NASA Astrophysics Data System (ADS)

    Neuville, R.; Pouliot, J.; Poux, F.; Hallot, P.; De Rudder, L.; Billen, R.

    2017-10-01

    This paper deals with the establishment of a comprehensive methodological framework that defines 3D visualisation rules, and with its application in a decision support tool. Whilst the use of 3D models grows in many application fields, their visualisation remains challenging from the point of view of the mapping and rendering choices needed to suitably support the decision-making process. Indeed, a great number of 3D visualisation techniques exist but, as far as we know, a decision support tool that facilitates the production of an efficient 3D visualisation is still missing. This is why a comprehensive methodological framework is proposed in order to build decision tables for specific data, tasks and contexts. Based on the second-order logic formalism, we define a set of functions and propositions among and between two collections of entities: on one hand, static retinal variables (hue, size, shape…) and 3D environment parameters (directional lighting, shadow, haze…), and on the other hand, their effect(s) with regard to specific visual tasks. This enables 3D visualisation rules to be defined in four categories: consequence, compatibility, potential incompatibility and incompatibility. In this paper, the application of the methodological framework is demonstrated for an urban visualisation at high density, considering a specific set of entities. On the basis of our analysis and the results of many studies conducted in 3D semiotics, which refers to the study of symbols and how they relay information, the truth values of the propositions are determined. 3D visualisation rules are then extracted for the considered context and set of entities and are presented in a decision table with a colour coding. Finally, the decision table is implemented in a plugin developed with three.js, a cross-browser JavaScript library. The plugin consists of a sidebar and warning windows that help the designer in the use of a set of static retinal variables and 3D environment parameters.

  7. Impact of heating on sensory properties of French Protected Designation of Origin (PDO) blue cheeses. Relationships with physicochemical parameters.

    PubMed

    Bord, Cécile; Guerinon, Delphine; Lebecque, Annick

    2016-07-01

    The aim of this study was to measure the impact of heating on the sensory properties of blue-veined cheeses in order to characterise their sensory properties and to identify their specific sensory typology associated with physicochemical parameters. Sensory profiles were performed on a selection of Protected Designation of Origin (PDO) cheeses representing the four blue-veined cheese categories produced in the Massif Central (Fourme d'Ambert, Fourme de Montbrison, Bleu d'Auvergne and Bleu des Causses). At the same time, physicochemical parameters were measured in these cheeses. The relationship between these two sets of data was investigated. Four types of blue-veined cheeses displayed significantly different behaviour after heating and it is possible to discriminate these cheese categories through specific sensory attributes. Fourme d'Ambert and Bleu d'Auvergne exhibited useful culinary properties: they presented good meltability, stretchability and a weak oiling-off. However, basic tastes (salty, bitter and sour) are also sensory attributes which can distinguish heated blue cheeses. The relationship between the sensory and physicochemical data indicated a correlation suggesting that some of these sensory properties may be explained by certain physicochemical parameters of heated cheeses. © The Author(s) 2015.

  8. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
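
    The offset-based link between the XML and binary files can be sketched as follows; this toy example writes and reads back one float64 m/z array, whereas real imzML readers obtain the offset and length values from cvParam entries in the XML file.

    ```python
    import struct

    mz = [100.0, 200.0, 300.0]

    # Write the spectrum to the binary (.ibd) file and record its offset,
    # which imzML would store in the accompanying XML metadata.
    with open("spectra.ibd", "wb") as f:
        offset = f.tell()
        f.write(struct.pack(f"<{len(mz)}d", *mz))   # little-endian float64

    # Read it back by seeking to the recorded offset.
    with open("spectra.ibd", "rb") as f:
        f.seek(offset)
        values = struct.unpack(f"<{len(mz)}d", f.read(8 * len(mz)))
    print(values)
    ```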

  9. Quantitative knowledge acquisition for expert systems

    NASA Technical Reports Server (NTRS)

    Belkin, Brenda L.; Stengel, Robert F.

    1991-01-01

    A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a direct task. A statistical method is presented for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. The systematic development of a Navigation Sensor Management (NSM) expert system from Kalman filter covariance data is described. The method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. Both are detailed.

  10. BWR station blackout: A RISMC analysis using RAVEN and RELAP5-3D

    DOE PAGES

    Mandelli, D.; Smith, C.; Riley, T.; ...

    2016-01-01

    The existing fleet of nuclear power plants is in the process of extending its lifetime and increasing the power generated from these plants via power uprates and improved operations. In order to evaluate the impact of these factors on the safety of the plant, the Risk-Informed Safety Margin Characterization (RISMC) project aims to provide insights to decision makers through a series of simulations of the plant dynamics for different initial conditions and accident scenarios. This paper presents a case study in order to show the capabilities of the RISMC methodology to assess the impact of a power uprate on a Boiling Water Reactor system during a Station Black-Out accident scenario. We employ a system simulator code, RELAP5-3D, coupled with RAVEN, which performs the stochastic analysis. Furthermore, our analysis is performed by: 1) sampling values of a set of parameters from the uncertainty space of interest, 2) simulating the system behavior for that specific set of parameter values, and 3) analyzing the outcomes from the set of simulation runs.

  11. A Generalized 2D-Dynamical Mean-Field Ising Model with a Rich Set of Bifurcations (Inspired and Applied to Financial Crises)

    NASA Astrophysics Data System (ADS)

    Smug, Damian; Sornette, Didier; Ashwin, Peter

    We analyze an extended version of the dynamical mean-field Ising model. Instead of the classical physical representation of spins and external magnetic field, the model describes traders' opinion dynamics. The external field is endogenized to represent a smoothed moving average of the past state variable. This model captures in a simple set-up the interplay between instantaneous social imitation and past trends in social coordination. We show the existence of a rich set of bifurcations as a function of the two parameters quantifying the relative importance of instantaneous versus past social opinions on the formation of the next value of the state variable. Moreover, we present a thorough analysis of chaotic behavior, which is exhibited in certain parameter regimes. Finally, we examine several transitions through bifurcation curves and study how they could be understood as specific market scenarios. We find that the amplitude of the corrections needed to recover from a crisis and to push the system back to “normal” is often significantly larger than the strength of the causes that led to the crisis itself.
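
    One plausible reading of the model structure (not the authors' exact equations) is the iteration below, where the endogenized field is a moving average of past states:

    ```python
    import numpy as np

    def simulate(beta, alpha, steps=200, window=10):
        """Mean-field opinion dynamics s_{t+1} = tanh(beta * (s_t + alpha * h_t)),
        with h_t a moving average of past states (the endogenized field)."""
        s = [0.1]
        for _ in range(steps):
            h = np.mean(s[-window:])   # smoothed past trend
            s.append(float(np.tanh(beta * (s[-1] + alpha * h))))
        return np.array(s)

    # Sweeping the two couplings can produce fixed points, cycles or chaos.
    for beta, alpha in [(0.8, 0.5), (1.5, -1.2)]:
        print(beta, alpha, simulate(beta, alpha)[-3:])
    ```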

  12. An intelligent classifier for prognosis of cardiac resynchronization therapy based on speckle-tracking echocardiograms.

    PubMed

    Chao, Pei-Kuang; Wang, Chun-Li; Chan, Hsiao-Lung

    2012-03-01

    Predicting response after cardiac resynchronization therapy (CRT) has been a challenge for cardiologists. About 30% of patients selected on the basis of the standard selection criteria for CRT do not show a response after receiving the treatment. This study aims to build an intelligent classifier to assist in identifying potential CRT responders from speckle-tracking radial strain based on echocardiograms. The echocardiograms analyzed were acquired before CRT from 26 patients who subsequently received CRT. Sequential forward selection was performed on the parameters obtained by peak-strain timing and phase space reconstruction of the speckle-tracking radial strain to find an optimal set of features for creating intelligent classifiers. Support vector machines (SVMs) with linear, quadratic, and polynomial kernels were tested to build classifiers that identify potential responders and non-responders for CRT from the selected features. Based on random sub-sampling validation, the best classification performance was a correct rate of about 95%, with 96-97% sensitivity and 93-94% specificity, achieved by applying an SVM with a quadratic kernel to a set of 3 parameters. The 3 selected parameters contain indexes extracted by both peak-strain timing and phase space reconstruction. An intelligent classifier with an averaged correct rate, sensitivity and specificity above 90% for assisting in identifying CRT responders was thus built from speckle-tracking radial strain. The classifier can be applied to provide objective suggestions for patient selection for CRT. Copyright © 2011 Elsevier B.V. All rights reserved.
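
    A sketch of the classification step with scikit-learn, using synthetic placeholder features in place of the speckle-tracking strain indexes; a quadratic kernel corresponds to a degree-2 polynomial kernel.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    # Placeholder matrix: 26 patients x 3 selected features, with labels
    # derived from the features so the toy problem is learnable.
    X = rng.normal(size=(26, 3))
    score = X[:, 0] + X[:, 1] ** 2
    y = (score > np.median(score)).astype(int)   # balanced 13/13 split

    clf = SVC(kernel="poly", degree=2)   # quadratic kernel, as in the study
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```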

  13. Optimization of capillary zone electrophoresis for charge heterogeneity testing of biopharmaceuticals using enhanced method development principles.

    PubMed

    Moritz, Bernd; Locatelli, Valentina; Niess, Michele; Bathke, Andrea; Kiessig, Steffen; Entler, Barbara; Finkler, Christof; Wegele, Harald; Stracke, Jan

    2017-12-01

    CZE is a well-established technique for charge heterogeneity testing of biopharmaceuticals. It is based on differences in the ratio of net charge to hydrodynamic radius. In an extensive intercompany study, it was recently shown that CZE is very robust and can be easily implemented in labs that had not performed it before. However, individual characteristics of some examined proteins resulted in suboptimal resolution. Therefore, enhanced method development principles were applied here to investigate possibilities for further method optimization. For this purpose, a high number of different method parameters was evaluated with the aim of improving CZE separation. For the relevant parameters, design of experiments (DoE) models were generated and optimized in several ways for different sets of responses like resolution, peak width and number of peaks. Although the DoE optimization was product-specific, the resulting combination of optimized parameters significantly improved separation for 13 out of 16 different antibodies and other molecule formats. These results clearly demonstrate the generic applicability of the optimized CZE method. Adaptation to individual molecular properties may sometimes still be required in order to achieve optimal separation, but the adjustable parameters discussed in this study [mainly pH, the identity of the polymer additive (HPC versus HPMC) and the concentrations of additives like acetonitrile, butanolamine and TETA] are expected to significantly reduce the effort of specific optimization. © 2017 The Authors. Electrophoresis published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Empirical evaluation of cross-site reproducibility in radiomic features for characterizing prostate MRI

    NASA Astrophysics Data System (ADS)

    Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish

    2018-02-01

    The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant in the context of MRI data, where signal intensity values lack tissue-specific, quantitative meaning and are dependent on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting; specifically, for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct for acquisition-related artifacts such as bias field, differing voxel resolutions, and intensity drift (non-standardness). 406 3D voxel-wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region, using 2 different measures of reproducibility: the Multivariate Coefficient of Variation and the Instability Score. Our results demonstrated that Haralick features were the most reproducible between all 4 sites. By comparison, Laws features were among the least reproducible between sites, as well as behaving highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but only for certain parameter combinations. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.

  15. SPOTting model parameters using a ready-made Python package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Breuer, Lutz

    2015-04-01

    The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of these tools are closed source, so the choice of a specific parameter estimation method is sometimes driven more by availability than by performance. A toolbox with a large set of methods can support users in deciding on the most suitable method, and further enables them to test and compare different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood Estimation, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (bias, (log-) Nash-Sutcliffe model efficiency, correlation coefficient, coefficient of determination, covariance, (decomposed, relative, root) mean squared error, mean absolute error, agreement index) and prior distributions (binomial, chi-square, Dirichlet, exponential, Laplace, (log-, multivariate-) normal, Pareto, Poisson, Cauchy, uniform, Weibull) to sample from. The model-independent structure makes it suitable for a wide range of applications. We apply all algorithms of the SPOT package in three case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional, physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated against measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions and a priori parameter distributions within one platform-independent package.
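    The sketch below illustrates the simplest of these strategies, Monte Carlo sampling from uniform priors scored by Nash-Sutcliffe efficiency, in plain NumPy; it deliberately does not reproduce the package's own class-based API, and the toy exponential model and all parameter bounds are invented.

      import numpy as np

      def nash_sutcliffe(sim, obs):
          # Nash-Sutcliffe model efficiency: 1 is perfect, <0 worse than the mean.
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def toy_model(theta, x):
          # Stand-in for an environmental model; theta are the parameters.
          a, b = theta
          return a * np.exp(-b * x)

      rng = np.random.default_rng(1)
      x = np.linspace(0, 5, 50)
      obs = toy_model((2.0, 0.7), x) + rng.normal(0, 0.05, x.size)

      # Monte Carlo sampling from uniform priors, keeping the best NSE.
      best = (None, -np.inf)
      for _ in range(5000):
          theta = rng.uniform([0.1, 0.1], [5.0, 2.0])
          eff = nash_sutcliffe(toy_model(theta, x), obs)
          if eff > best[1]:
              best = (theta, eff)
      print("best parameters:", best[0], "NSE:", round(best[1], 3))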

  16. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
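    The basic sensitivity computation can be illustrated on a first-order activation ODE in the spirit of Zajac's model (a strong simplification of the models compared in the paper), differentiating the solution with respect to the time constant by central differences:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Simplified linear activation ODE (a: activation, u: neural excitation,
      # tau: time constant). The paper's models are richer; this only shows
      # how a trajectory sensitivity is computed.
      def activation(t, a, u, tau):
          return (u - a) / tau

      def solve(tau, u=1.0, a0=0.0, t_end=0.5):
          sol = solve_ivp(activation, (0, t_end), [a0], args=(u, tau),
                          dense_output=True, rtol=1e-8)
          return sol.sol(np.linspace(0, t_end, 100))[0]

      # First-order relative sensitivity of the trajectory to tau,
      # by central finite differences: d a / d ln(tau).
      tau, h = 0.04, 1e-5
      s = (solve(tau + h) - solve(tau - h)) / (2 * h) * tau
      print("peak relative sensitivity:", float(np.abs(s).max()))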

  17. Why equal treatment is not always equitable: the impact of existing ethnic health inequalities in cost-effectiveness modeling.

    PubMed

    McLeod, Melissa; Blakely, Tony; Kvizhinadze, Giorgi; Harris, Ricci

    2014-01-01

    A critical first step toward incorporating equity into cost-effectiveness analyses is to appropriately model interventions by population subgroups. In this paper we use a standardized treatment intervention to examine the impact of using ethnic-specific (Māori and non-Māori) data in cost-utility analyses for three cancers. We estimate gains in health-adjusted life years (HALYs) for a simple intervention (20% reduction in excess cancer mortality) for lung, female breast, and colon cancers, using Markov modeling. Base models include ethnic-specific cancer incidence with other parameters either turned off or set to non-Māori levels for both groups. Subsequent models add ethnic-specific cancer survival, morbidity, and life expectancy. Costs include intervention and downstream health system costs. For the three cancers, including existing inequalities in background parameters (population mortality and comorbidities) for Māori attributes less value to a year of life saved compared to non-Māori and lowers the relative health gains for Māori. In contrast, ethnic inequalities in cancer parameters have less predictable effects. Despite Māori having higher excess mortality from all three cancers, modeled health gains for Māori were less from the lung cancer intervention than for non-Māori but higher for the breast and colon interventions. Cost-effectiveness modeling is a useful tool in the prioritization of health services. But there are important (and sometimes counterintuitive) implications of including ethnic-specific background and disease parameters. In order to avoid perpetuating existing ethnic inequalities in health, such analyses should be undertaken with care.

  18. Why equal treatment is not always equitable: the impact of existing ethnic health inequalities in cost-effectiveness modeling

    PubMed Central

    2014-01-01

    Background A critical first step toward incorporating equity into cost-effectiveness analyses is to appropriately model interventions by population subgroups. In this paper we use a standardized treatment intervention to examine the impact of using ethnic-specific (Māori and non-Māori) data in cost-utility analyses for three cancers. Methods We estimate gains in health-adjusted life years (HALYs) for a simple intervention (20% reduction in excess cancer mortality) for lung, female breast, and colon cancers, using Markov modeling. Base models include ethnic-specific cancer incidence with other parameters either turned off or set to non-Māori levels for both groups. Subsequent models add ethnic-specific cancer survival, morbidity, and life expectancy. Costs include intervention and downstream health system costs. Results For the three cancers, including existing inequalities in background parameters (population mortality and comorbidities) for Māori attributes less value to a year of life saved compared to non-Māori and lowers the relative health gains for Māori. In contrast, ethnic inequalities in cancer parameters have less predictable effects. Despite Māori having higher excess mortality from all three cancers, modeled health gains for Māori were less from the lung cancer intervention than for non-Māori but higher for the breast and colon interventions. Conclusions Cost-effectiveness modeling is a useful tool in the prioritization of health services. But there are important (and sometimes counterintuitive) implications of including ethnic-specific background and disease parameters. In order to avoid perpetuating existing ethnic inequalities in health, such analyses should be undertaken with care. PMID:24910540
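    To make the cohort mechanics concrete, here is a deliberately tiny Markov-style sketch of HALY accumulation under a 20% reduction in excess cancer mortality; every rate and the morbidity weight below are invented, whereas the study's models are ethnic-specific and far more detailed.

      # Minimal annual-cycle Markov cohort model: alive-with-cancer vs. dead.
      # All rates are invented placeholders for illustration only.
      def halys(excess_mortality, background_mortality, morbidity_weight, years=20):
          alive, total = 1.0, 0.0
          for _ in range(years):
              total += alive * (1.0 - morbidity_weight)   # health-adjusted year
              alive *= (1.0 - excess_mortality) * (1.0 - background_mortality)
          return total

      # 20% reduction in excess cancer mortality, as in the modeled intervention.
      base = halys(0.10, 0.02, 0.15)
      interv = halys(0.08, 0.02, 0.15)
      print(f"HALYs gained per person: {interv - base:.3f}")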

  19. Bayesian Parameter Estimation for Heavy-Duty Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Eric; Konan, Arnaud; Duran, Adam

    2017-03-28

    Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
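    A minimal sketch of this acceptance rule is a random-walk Metropolis sampler over the road load equation F = m·a + m·g·Crr + ½·ρ·CdA·v²; the synthetic drive data, noise level and proposal widths below are all invented for illustration and do not reflect the study's data.

      import numpy as np

      rng = np.random.default_rng(0)
      g, rho = 9.81, 1.2

      def road_load(theta, v, a):
          # Road load: inertia + rolling resistance + aerodynamic drag.
          m, crr, cda = theta
          return m * a + m * g * crr + 0.5 * rho * cda * v**2

      # Synthetic "logged" drive data (true parameters invented).
      v = np.abs(rng.normal(15, 5, 200)); acc = rng.normal(0, 0.4, 200)
      load = road_load((18000, 0.007, 6.0), v, acc) + rng.normal(0, 500, 200)

      def log_prob(theta):
          resid = load - road_load(theta, v, acc)
          return -0.5 * np.sum((resid / 500.0) ** 2)

      # Metropolis: accept proposals by probability ratio to the current state.
      theta = np.array([15000.0, 0.01, 5.0]); lp = log_prob(theta); chain = []
      for _ in range(20000):
          prop = theta + rng.normal(0, [100.0, 1e-4, 0.05])
          lp_new = log_prob(prop)
          if np.log(rng.random()) < lp_new - lp:
              theta, lp = prop, lp_new
          chain.append(theta)
      chain = np.array(chain)
      print("posterior mean mass:", chain[5000:, 0].mean())  # discard burn-in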

  20. Optical tests for using smartphones inside medical devices

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Acobas, Jennifer K.; Phang, Ye Shang; Hassan, David; Bolton, Frank J.; Levitz, David

    2018-02-01

    Smartphones are currently used in many medical applications and are more frequently being integrated into medical imaging devices. However, the regulatory requirements in existence today, particularly the standardization of smartphone imaging through validation and verification testing, only partially cover the imaging characteristics of a smartphone. It has been shown that smartphone camera specifications are of sufficient quality for medical imaging, and there are devices that comply with the FDA's regulatory requirements for a medical device, such as field of view, direction of viewing, optical resolution and optical distortion. However, these regulatory requirements do not specifically call for color testing. Images of the same object taken with automatic settings or under different light sources can show different color composition; experimental results showing such differences are presented. Under some circumstances, such differences in color composition could potentially lead to incorrect diagnoses. It is therefore critical to control the smartphone camera and illumination parameters properly. This paper examines different smartphone camera settings that affect image quality and color composition. To test and select the correct settings, a test methodology is proposed. It aims at evaluating and testing image color correctness and white balance settings for mobile phones and LED light sources. Emphasis is placed on color consistency and deviation from gray values, specifically by evaluating ΔC values based on the CIE L*a*b* color space. Results show that such standardization minimizes differences in color composition and thus could reduce the risk of a wrong diagnosis.
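    One common way to quantify the deviation from gray mentioned here is the chroma difference in CIE L*a*b*; the sketch below assumes the plain C* difference as the definition of ΔC, and the sample values are invented.

      import math

      def delta_c(lab_ref, lab_test):
          # Chroma difference Delta C*ab between two CIE L*a*b* colors.
          # C* = sqrt(a*^2 + b*^2); Delta C* = C*_test - C*_ref. For gray
          # patches the reference chroma is 0, so Delta C* measures the
          # deviation from neutral gray.
          _, a1, b1 = lab_ref
          _, a2, b2 = lab_test
          return math.hypot(a2, b2) - math.hypot(a1, b1)

      # Neutral gray reference vs. a slightly color-cast measurement (invented).
      print(round(delta_c((50.0, 0.0, 0.0), (50.0, 2.1, -1.4)), 2))  # -> 2.52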

  1. Flight motor set 360L006 (STS-34). Volume 1: System overview

    NASA Technical Reports Server (NTRS)

    Garecht, Diane M.

    1990-01-01

    Flight motor set 360L006 was launched at approximately 11:54 a.m. Central Daylight Time (CDT) on 18 October 1989 as part of NASA space shuttle mission STS-34. As with all previous redesigned solid rocket motor launches, overall motor performance was excellent. All ballistic contract end item (CEI) specification parameters were verified with the exceptions of ignition interval and rise rates, which could not be verified due to the elimination of developmental flight instrumentation from the fourth and subsequent flights; the low-sample-rate data that were available showed nominal propulsion performance. All ballistic and mass property parameters that could be assessed closely matched the predicted values and were well within the required CEI specification levels, with the exception of the RH-motor vacuum-delivered specific impulse, which exceeded the upper-limit CEI specification due to a bias imposed on the raw data by the OPT/Taber gage measurement differences. Evaluation of the ground environment instrumentation measurements again verified thermal model analysis data and showed agreement with predicted environmental effects. No launch commit criteria thermal violations occurred. Postflight inspection again verified superior performance of the insulation, phenolics, metal parts, and seals. Postflight evaluation indicated both nozzles performed as expected during flight, although splashdown loads tore the left-hand, 45-deg actuator bracket from the nozzle. All combustion gas was contained by insulation in the field and nozzle-to-case joints. Recommendations were made concerning improved thermal modeling and measurements. The rationale for these recommendations, the disposition of all anomalies, and complete result details are contained in this volume.

  2. META-X Design Flow Tools

    DTIC Science & Technology

    2013-04-01

    Forces can be computed at specific angular positions, and geometrical parameters can be evaluated. Much higher resolution models are required, along... composition engines (C#, C++, Python, Java). Desert operates on the CyPhy model, converting from a design space alternative structure to a set of design... consists of scripts to execute dymola, post-processing of results to create metrics, and general management of the job sequence. An earlier version created

  3. Chromosomes 3B and 4D are associated with several milling and baking quality traits in a soft white spring wheat (Triticum aestivum L.) population

    USDA-ARS?s Scientific Manuscript database

    Wheat is marketed based on end-use quality characteristics and better knowledge of the underlying genetics of specific quality parameters is essential to enhance the breeding process. A set of 188 recombinant inbred lines from a ‘Louise’ by ‘Penawawa’ mapping population was grown in two crop years a...

  4. Sensitivity Analysis and Requirements for Temporally and Spatially Resolved Thermometry Using Neutron Resonance Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez, Juan Carlos; Barnes, Cris William; Mocko, Michael Jeffrey

    This report examines the use of neutron resonance spectroscopy (NRS) to make time-dependent and spatially resolved temperature measurements of materials in extreme conditions. Specifically, the sensitivity of the temperature estimate to neutron-beam and diagnostic parameters is examined. Based on that examination, requirements are set on a pulsed neutron source and diagnostics to make a meaningful measurement.

  5. Estimating parametric phenotypes that determine anthesis date in Zea mays: Challenges in combining ecophysiological models with genetics

    PubMed Central

    Welch, Stephen M.; White, Jeffrey W.; Thorp, Kelly R.; Bello, Nora M.

    2018-01-01

    Ecophysiological crop models encode intra-species behaviors using parameters that are presumed to summarize genotypic properties of individual lines or cultivars. These genotype-specific parameters (GSPs) can be interpreted as quantitative traits that can be mapped or otherwise analyzed, as are more conventional traits. The goal of this study was to investigate the estimation of parameters controlling maize anthesis date with the CERES-Maize model, based on 5,266 maize lines from 11 plantings at locations across the eastern United States. High performance computing was used to develop a database of 356 million simulated anthesis dates in response to four CERES-Maize model parameters. Although the resulting estimates showed high predictive value (R2 = 0.94), three issues presented serious challenges for the use of GSPs as traits. First (expressivity), the model was unable to express the observed data for 168 to 3,339 lines (depending on the combination of site-years), many of which ended up sharing the same parameter value irrespective of genetics. Second (equifinality), for 2,254 lines the model reproduced the data, but multiple parameter sets were equally effective. Third (instability), parameter values were highly dependent (p < 10^-6919) on the sets of environments used to estimate them, calling into question the assumption that they represent fundamental genetic traits. The issues of expressivity, equifinality and instability must be addressed before the genetic mapping of GSPs becomes a robust means to help solve the genotype-to-phenotype problem in crops. PMID:29672629

  6. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
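    A minimal sketch of the projection step, using SciPy's thin-plate-spline RBF interpolator: here a fixed smoothing value stands in for the paper's bootstrap-based parameter selection, and the test surface and noise level are invented.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(3)

      # Noisy samples of a smooth surface z = f(x, y), a stand-in for scan data.
      xy = rng.uniform(-1, 1, (400, 2))
      z = np.sin(2 * xy[:, 0]) * np.cos(2 * xy[:, 1]) + rng.normal(0, 0.05, 400)

      # Thin-plate-spline fit with nonzero smoothing: the surface no longer
      # interpolates the noise. (The paper selects this parameter via a
      # bootstrap estimate of test error; a fixed value stands in here.)
      tps = RBFInterpolator(xy, z, kernel='thin_plate_spline', smoothing=1.0)

      # "Denoise" by projecting each point onto the fitted surface.
      z_denoised = tps(xy)
      print("RMS change:", float(np.sqrt(np.mean((z - z_denoised) ** 2))))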

  7. PID Tuning Using Extremum Seeking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Killingsworth, N; Krstic, M

    2005-11-15

    Although proportional-integral-derivative (PID) controllers are widely used in the process industry, their effectiveness is often limited due to poor tuning. Manual tuning of PID controllers, which requires optimization of three parameters, is a time-consuming task. To remedy this difficulty, much effort has been invested in developing systematic tuning methods. Many of these methods rely on knowledge of the plant model or require special experiments to identify a suitable plant model. Reviews of these methods are given in [1] and the survey paper [2]. However, in many situations a plant model is not known, and it is not desirable to open the process loop for system identification. Thus a method for tuning PID parameters within a closed-loop setting is advantageous. In relay feedback tuning [3]-[5], the feedback controller is temporarily replaced by a relay. Relay feedback causes most systems to oscillate, thus determining one point on the Nyquist diagram. Based on the location of this point, PID parameters can be chosen to give the closed-loop system a desired phase and gain margin. An alternative tuning method, which requires neither a modification of the system nor a system model, is unfalsified control [6], [7]. This method uses input-output data to determine whether a set of PID parameters meets performance specifications. An adaptive algorithm is used to update the PID controller based on whether or not the controller falsifies a given criterion. The method requires a finite set of candidate PID controllers that must be initially specified [6]. Unfalsified control for an infinite set of PID controllers has been developed in [7]; this approach requires a carefully chosen input signal [8]. Yet another model-free PID tuning method that does not require opening the loop is iterative feedback tuning (IFT). IFT iteratively optimizes the controller parameters with respect to a cost function derived from the output signal of the closed-loop system [9]; it is based on the performance of the closed-loop system during a step response experiment [10], [11]. In this article we present a method for optimizing the step response of a closed-loop system consisting of a PID controller and an unknown plant with a discrete version of extremum seeking (ES). Specifically, ES is used to minimize a cost function similar to that used in [10], [11], which quantifies the performance of the PID controller. ES, a non-model-based method, iteratively modifies the arguments of a cost function (in this application the PID parameters) so that the output of the cost function reaches a local minimum or local maximum. In the next section we apply ES to PID controller tuning. We illustrate this technique through simulations comparing the effectiveness of ES to other PID tuning methods. Next, we address the importance of the choice of cost function and consider the effect of controller saturation. Furthermore, we discuss the choice of ES tuning parameters. Finally, we offer some conclusions.
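    To make the ES iteration concrete, here is a heavily simplified sketch: each PID gain is perturbed sinusoidally at its own frequency, the step-response cost is measured, and demodulation yields a gradient estimate. The toy plant, all numeric values, and the omission of the washout (high-pass) filter used in the article are simplifying assumptions.

      import numpy as np

      def step_cost(kp, ki, kd, dt=0.01, t_end=5.0):
          # ISE cost of the closed-loop step response of an "unknown" plant
          # under PID control. The toy second-order plant is invented; ES
          # treats this whole function as a black box.
          y, dy, integ, prev_e, cost = 0.0, 0.0, 0.0, 1.0, 0.0
          for _ in range(int(t_end / dt)):
              e = 1.0 - y
              integ += e * dt
              u = kp * e + ki * integ + kd * (e - prev_e) / dt
              prev_e = e
              dy += (-dy - y + u) * dt
              y += dy * dt
              cost += e * e * dt
          return cost

      # Discrete extremum seeking: dither each gain at a distinct frequency,
      # demodulate the measured cost, and descend the gradient estimate.
      theta = np.array([1.0, 0.5, 0.1])       # initial (kp, ki, kd)
      amps = np.array([0.05, 0.02, 0.01])
      freqs = np.array([0.9, 0.7, 0.5])       # rad per iteration
      gain = 0.4
      for k in range(400):
          dither = amps * np.sin(freqs * k)
          J = step_cost(*(theta + dither))
          theta = theta - gain * J * np.sin(freqs * k) * amps
          theta = np.clip(theta, 1e-3, 10.0)
      print("tuned gains (kp, ki, kd):", np.round(theta, 3))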

  8. Understanding the transmission dynamics of respiratory syncytial virus using multiple time series and nested models.

    PubMed

    White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F

    2007-09-01

    The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.

  9. SCOPE: a web server for practical de novo motif discovery.

    PubMed

    Carlson, Jonathan M; Chakravarty, Arijit; DeZiel, Charles E; Gross, Robert H

    2007-07-01

    SCOPE is a novel parameter-free method for the de novo identification of potential regulatory motifs in sets of coordinately regulated genes. The SCOPE algorithm combines the output of three component algorithms, each designed to identify a particular class of motifs. Using an ensemble learning approach, SCOPE identifies the best candidate motifs from its component algorithms. In tests on experimentally determined datasets, SCOPE identified motifs with a significantly higher level of accuracy than a number of other web-based motif finders run with their default parameters. Because SCOPE has no adjustable parameters, the web server has an intuitive interface, requiring only a set of gene names or FASTA sequences and a choice of species. The most significant motifs found by SCOPE are displayed graphically on the main results page with a table containing summary statistics for each motif. Detailed motif information, including the sequence logo, PWM, consensus sequence and specific matching sites can be viewed through a single click on a motif. SCOPE's efficient, parameter-free search strategy has enabled the development of a web server that is readily accessible to the practising biologist while providing results that compare favorably with those of other motif finders. The SCOPE web server is at .

  10. Caenorhabditis elegans vulval cell fate patterning

    NASA Astrophysics Data System (ADS)

    Félix, Marie-Anne

    2012-08-01

    The spatial patterning of three cell fates in a row of competent cells is exemplified by vulva development in the nematode Caenorhabditis elegans. The intercellular signaling network that underlies fate specification is well understood, yet quantitative aspects remain to be elucidated. Quantitative models of the network allow us to test the effect of parameter variation on the cell fate pattern output. Among the parameter sets that allow us to reach the wild-type pattern, two general developmental patterning mechanisms of the three fates can be found: sequential inductions and morphogen-based induction, the former being more robust to parameter variation. Experimentally, the vulval cell fate pattern is robust to stochastic and environmental challenges, and minor variants can be detected. The exception is the fate of the anterior cell, P3.p, which is sensitive to stochastic variation and spontaneous mutation, and is also evolving the fastest. Other vulval precursor cell fates can be affected by mutation, yet little natural variation can be found, suggesting stabilizing selection. Despite this fate pattern conservation, different Caenorhabditis species respond differently to perturbations of the system. In the quantitative models, different parameter sets can reconstitute their response to perturbation, suggesting that network variation among Caenorhabditis species may be quantitative. Network rewiring likely occurred at longer evolutionary scales.

  11. Using simulation to aid trial design: Ring-vaccination trials.

    PubMed

    Hitchings, Matt David Thomas; Grais, Rebecca Freeman; Lipsitch, Marc

    2017-03-01

    The 2014-16 West African Ebola epidemic highlights the need for rigorous, rapid clinical trial methods for vaccines. A challenge for trial design is making sample size calculations based on incidence within the trial, total vaccine effect, and intracluster correlation, when these parameters are uncertain in the presence of indirect effects of vaccination. We present a stochastic, compartmental model for a ring vaccination trial. After identification of an index case, a ring of contacts is recruited and either vaccinated immediately or after 21 days. The primary outcome of the trial is total vaccine effect, counting cases only from a pre-specified window in which the immediate arm is assumed to be fully protected and the delayed arm is not protected. Simulation results are used to calculate the necessary sample size and estimated vaccine effect. Under baseline assumptions about vaccine properties, monthly incidence in unvaccinated rings and trial design, a standard sample-size calculation neglecting dynamic effects estimated that 7,100 participants would be needed to achieve 80% power to detect a difference in attack rate between arms, while incorporating dynamic considerations in the model increased the estimate to 8,900. This approach replaces assumptions about parameters at the ring level with assumptions about disease dynamics and vaccine characteristics at the individual level, so within this framework we were able to describe the sensitivity of the trial power and estimated effect to various parameters. We found that both of these quantities are sensitive to properties of the vaccine, to setting-specific parameters over which investigators have little control, and to parameters that are determined by the study design. Incorporating simulation into the trial design process can improve the robustness of sample size calculations. For this specific trial design, vaccine effectiveness depends on properties of the ring vaccination design and on the measurement window, as well as the epidemiologic setting.
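    The scaffolding of such a simulation-based power calculation can be sketched as below; the binomial attack-rate draws stand in for the paper's stochastic transmission model (so indirect effects and within-ring clustering are ignored), and every numeric value is an invented placeholder.

      import numpy as np

      rng = np.random.default_rng(7)

      def simulate_trial(n_rings, ring_size, attack_rate, vaccine_effect):
          # Cases in the evaluation window for immediate vs. delayed rings.
          imm = rng.binomial(ring_size, attack_rate * (1 - vaccine_effect), n_rings)
          del_ = rng.binomial(ring_size, attack_rate, n_rings)
          return imm.sum(), del_.sum()

      def power(n_rings, ring_size=50, attack_rate=0.02, effect=0.7, n_sim=2000):
          hits = 0
          for _ in range(n_sim):
              a, b = simulate_trial(n_rings, ring_size, attack_rate, effect)
              # Crude two-proportion z-test on attack rates.
              n = n_rings * ring_size
              p = (a + b) / (2 * n)
              se = np.sqrt(2 * p * (1 - p) / n)
              z = (b / n - a / n) / se if se > 0 else 0.0
              hits += z > 1.96
          return hits / n_sim

      for rings in (40, 60, 80, 100):
          print(rings, "rings per arm -> power", round(power(rings), 2))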

  12. The usefulness of mobile insulator sheets for the optimisation of deep heating area for regional hyperthermia using a capacitively coupled heating method: phantom, simulation and clinical prospective studies.

    PubMed

    Tomura, Kyosuke; Ohguri, Takayuki; Mulder, Hendrik Thijmen; Murakami, Motohiro; Nakahara, Sota; Yahara, Katsuya; Korogi, Yukunori

    2017-11-20

    To evaluate the feasibility and efficacy of deep regional hyperthermia with the use of mobile insulator sheets in a capacitively coupled heating device, heat was applied using an 8-MHz radiofrequency capacitive device. The insulator sheet was inserted between the regular bolus and the cooled overlay bolus on each of the upper and lower sides of the electrode. Several settings using the insulator sheets were investigated in an experimental study using an agar phantom to evaluate the temperature distributions. The specific absorption rate (SAR) distributions in several organs were also computed for a three-dimensional patient model. In a clinical prospective study, a total of five heating sessions were scheduled for pelvic tumours to assess the thermal parameters. The conventional setting was used during the first, third and fifth treatment sessions, and insulator sheets were used during the second and fourth treatment sessions. In the phantom study, the area of higher heating shifted towards the centre when the mobile insulator sheets were used. The subcutaneous fat/target ratios for the averaged SARs in the setting with the mobile insulator (median, 2.5) were significantly improved compared with those in the conventional setting (median, 3.4). In the clinical study, the thermal dose parameters of CEM43°CT90 in the sessions with the mobile insulator sheets (median, 1.9 min) were significantly better than those in the sessions using the conventional setting (median, 1.0 min). Our novel heating method using mobile insulator sheets was thus found to improve the thermal dose parameters. Further investigations are expected.

  13. A Comparison of different learning models used in Data Mining for Medical Data

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Koti, Manjula Sanjay

    2011-12-01

    The present study investigates different data mining learning models for different medical data sets and gives practical guidelines for selecting the most appropriate algorithm for a specific medical data set. In practical situations, it is absolutely necessary to make decisions about the appropriate models and parameters for diagnosis and prediction problems. Learning models and algorithms are widely implemented for rule extraction and the prediction of system behavior. In this paper, some of the well-known machine learning (ML) systems are investigated and tested on five medical data sets. Practical criteria for evaluating different learning models are presented, and the potential benefits of the proposed methodology for diagnosis and learning are suggested.

  14. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically-derived input parameters that define the geometry and boundary conditions, but their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models, whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity, which efficiently approximates stochastic responses characterized by steep gradients.

  15. A source-specific model for lossless compression of global Earth data

    NASA Astrophysics Data System (ADS)

    Kess, Barbara Lynne

    A Source Specific Model for Global Earth Data (SSM-GED) is a lossless compression method for large images that captures global redundancy in the data and achieves a significant improvement over CALIC and DCXT-BT/CARP, two leading lossless compression schemes. The Global Land 1-Km Advanced Very High Resolution Radiometer (AVHRR) data, which contains 662 Megabytes (MB) per band, is an example of a large data set that requires decompression of regions of the data. For this reason, SSM-GED compresses the AVHRR data as a collection of subwindows. This approach defines the statistical parameters for the model prior to compression. Unlike universal models that assume no a priori knowledge of the data, SSM-GED captures global redundancy that exists among all of the subwindows of data. The overlap in parameters among subwindows of data enables SSM-GED to improve the compression rate by increasing the number of parameters and maintaining a small model cost for each subwindow of data. This lossless compression method is applicable to other large volumes of image data such as video.

  16. Voronoi cell patterns: Theoretical model and applications

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Einstein, T. L.

    2011-11-01

    We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions, in 1D and again in 2D: the probability of adding a new point inside an existing cell, and the probability that this new point lies at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
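    A quick empirical check of the 2D cell-size statistics that such a model aims to reproduce can be done with SciPy's Voronoi tessellation of a uniform point set; the sampling numbers below are arbitrary, and border cells are simply discarded.

      import numpy as np
      from scipy.spatial import Voronoi, ConvexHull

      rng = np.random.default_rng(11)
      pts = rng.uniform(0, 1, (4000, 2))   # homogeneous, isotropic point set
      vor = Voronoi(pts)

      # Areas of bounded Voronoi cells (cells touching the border are skipped).
      areas = []
      for region_idx in vor.point_region:
          region = vor.regions[region_idx]
          if -1 in region or not region:
              continue
          areas.append(ConvexHull(vor.vertices[region]).volume)  # area in 2D
      areas = np.array(areas) / np.mean(areas)   # normalize to unit mean

      # Summary statistics of the normalized cell-size distribution.
      print("mean:", round(areas.mean(), 3), "variance:", round(areas.var(), 3))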

  17. Reducing the Volume of NASA Earth-Science Data

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Braverman, Amy J.; Guillaume, Alexandre

    2010-01-01

    A computer program reduces data generated by NASA Earth-science missions into representative clusters characterized by centroids and membership information, thereby reducing the large volume of data to a level more amenable to analysis. The program effects an autonomous data-reduction/clustering process to produce a representative distribution and joint relationships of the data, without assuming a specific type of distribution and relationship and without resorting to domain-specific knowledge about the data. The program implements a combination of a data-reduction algorithm known as the entropy-constrained vector quantization (ECVQ) and an optimization algorithm known as the differential evolution (DE). The combination of algorithms generates the Pareto front of clustering solutions that presents the compromise between the quality of the reduced data and the degree of reduction. Similar prior data-reduction computer programs utilize only a clustering algorithm, the parameters of which are tuned manually by users. In the present program, autonomous optimization of the parameters by means of the DE supplants the manual tuning of the parameters. Thus, the program determines the best set of clustering solutions without human intervention.

  18. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  19. The Total Gaussian Class of Quasiprobabilities and its Relation to Squeezed-State Excitations

    NASA Technical Reports Server (NTRS)

    Wuensche, Alfred

    1996-01-01

    The class of quasiprobabilities obtainable from the Wigner quasiprobability by convolutions with the general class of Gaussian functions is investigated. It can be described by a three-dimensional, in general complex, vector parameter with the property of additivity when composing convolutions. The diagonal representation of this class of quasiprobabilities is connected with a generalization of the displaced Fock states in the direction of squeezing. The subclass with a real vector parameter is considered in more detail. It is related to the most important kinds of boson operator ordering. The properties of a specific set of discrete excitations of squeezed coherent states are given.

  20. Estimated critical conditions for UO₂F₂–H₂O systems in fully water-reflected spherical geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, W.C.; Turner, J.C.

    1992-12-01

    The purpose of this report is to document reference calculations performed using the SCALE-4.0 code system to determine the critical parameters of UO₂F₂–H₂O spheres. The calculations are an extension of those documented in ORNL/CSD/TM-284. Specifically, the data for low-enriched UO₂F₂–H₂O spheres have been extended to highly enriched uranium. These calculations, together with those reported in ORNL/CSD/TM-284, provide a consistent set of critical parameters (k∞ ...

  1. Experimental validation of the RATE tool for inferring HLA restrictions of T cell epitopes.

    PubMed

    Paul, Sinu; Arlehamn, Cecilia S Lindestam; Schulten, Veronique; Westernberg, Luise; Sidney, John; Peters, Bjoern; Sette, Alessandro

    2017-06-21

    The RATE tool was recently developed to computationally infer the HLA restriction of given epitopes from immune response data of HLA-typed subjects, without additional cumbersome experimentation. Here, RATE was validated using experimentally defined restriction data from a set of 191 tuberculosis-derived epitopes and 63 healthy individuals with MTB infection from the Western Cape Region of South Africa. Using this experimental dataset, the parameters utilized by the RATE tool to infer restriction were optimized: the relative frequency (RF) of subjects responding to a given epitope and expressing a given allele, as compared to the general test population, and the associated p-value from Fisher's exact test. We also examined the potential for further optimization based on the predicted binding affinity of epitopes to potential restricting HLA alleles, and on the absolute number of individuals expressing a given allele and responding to the specific epitope. Different statistical measures, including the Matthews correlation coefficient, accuracy, sensitivity and specificity, were used to evaluate the performance of RATE as a function of these criteria. Based on our results we recommend selecting HLA restrictions with cutoffs of p-value < 0.01 and RF ≥ 1.3. The usefulness of the tool was demonstrated by inferring new HLA restrictions for epitope sets where restrictions could not be experimentally determined due to a lack of the necessary cell lines, and for an additional data set related to recognition of pollen-derived epitopes by allergic patients. In summary, experimental data sets were used to validate the RATE tool, and the parameters used by the tool to infer restriction were optimized; new HLA restrictions were identified using the optimized tool.
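    The two criteria can be sketched directly with SciPy; the cohort below is synthetic, and the function only mirrors the RF and Fisher's-exact logic described in the abstract, not the RATE implementation itself.

      import numpy as np
      from scipy.stats import fisher_exact

      def infer_restriction(responds, has_allele, p_cut=0.01, rf_cut=1.3):
          # responds, has_allele: boolean arrays over HLA-typed subjects.
          # RF compares allele frequency among responders with that in the
          # whole test population; the association is tested with Fisher's
          # exact test (cutoffs from the study: p < 0.01, RF >= 1.3).
          responds, has_allele = np.asarray(responds), np.asarray(has_allele)
          rf = has_allele[responds].mean() / has_allele.mean()
          table = [[(responds & has_allele).sum(), (responds & ~has_allele).sum()],
                   [(~responds & has_allele).sum(), (~responds & ~has_allele).sum()]]
          _, p = fisher_exact(table)
          return rf, p, (p < p_cut and rf >= rf_cut)

      # Toy cohort of 63 subjects (membership patterns invented).
      rng = np.random.default_rng(5)
      allele = rng.random(63) < 0.3
      resp = (allele & (rng.random(63) < 0.6)) | (~allele & (rng.random(63) < 0.1))
      print(infer_restriction(resp, allele))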

  2. A preliminary evaluation of the relationship between bioconcentration and hydrophobicity for surfactants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolls, J.; Sijm, D.T.H.M.

    1995-10-01

    A statistical analysis was done of the relationship between hydrophobicity and bioconcentration parameters (uptake and elimination rate constants and bioconcentration factor) predicted by the diffusive mass-transfer (DMT) concept of bioconcentration developed previously. The authors employed polychlorinated biphenyls and benzenes (PCB/zs) as model compounds and the octanol/water partition coefficient as the hydrophobicity parameter. They conclude that the model is consistent with the data. Subsequently, they applied the DMT concept to a set of preliminary bioconcentration data for surfactants using the critical micelle concentration (CMC) as the hydrophobicity parameter. The obtained relationships qualitatively agree with the DMT concept, indicating that hydrophobicity has a great influence on surfactant bioconcentration. Finally, they investigated the hydrophobicity-bioconcentration relationships of surfactants and PCB/zs using aqueous solubility as a common hydrophobicity parameter and found the relationships between the bioconcentration parameters and hydrophobicity to agree with the DMT concept. These findings are based on total radiolabel data and therefore need to be confirmed using compound-specific surfactant bioconcentration data.

  3. Rendering of HDR content on LDR displays: an objective approach

    NASA Astrophysics Data System (ADS)

    Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick

    2015-09-01

    Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is however non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications, and its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.

  4. CosmoSIS: A system for MC parameter estimation

    DOE PAGES

    Bridle, S.; Dodelson, S.; Jennings, E.; ...

    2015-12-23

    CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, nothing about the framework itself, nor the Markov Chain Monte Carlo technique, is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.

  5. Exploiting Auto-Collimation for Real-Time Onboard Monitoring of Space Optical Camera Geometric Parameters

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, H.; Liu, D.; Miu, Y.

    2018-05-01

    Precise geometric parameters are essential to ensure positioning accuracy for space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Through these devices, changes in the geometric parameters are elegantly converted into changes in spot image positions, and the variation of the geometric parameters can be derived by extracting and processing the spot images. An experimental platform was then set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle to real-time onboard monitoring.

  6. Characterisation of the physico-mechanical parameters of MSW.

    PubMed

    Stoltz, Guillaume; Gourc, Jean-Pierre; Oxarango, Laurent

    2010-01-01

    Following the basics of soil mechanics, the physico-mechanical behaviour of municipal solid waste (MSW) can be defined through constitutive relationships which are expressed with respect to three physical parameters: the dry density, the porosity and the gravimetric liquid content. In order to take into account the complexity of MSW (grain size distribution and heterogeneity larger than for conventional soils), a special oedometer was designed to carry out laboratory experiments. This apparatus allowed a coupled measurement of the physical parameters for MSW settlement under stress. The studied material was a typical sample of fresh MSW from a French landfill. The relevant physical parameters were measured using a gas pycnometer, and the compressibility of MSW was studied with respect to the initial gravimetric liquid content. The proposed methods for assessing the set of three physical parameters allow a relevant understanding of the physico-mechanical behaviour of MSW under compression, specifically the evolution of the limit liquid content. The present method can be extended to any type of MSW.

  7. Determination of Watershed Lag Equation for Philippine Hydrology

    NASA Astrophysics Data System (ADS)

    Cipriano, F. R.; Lagmay, A. M. F. A.; Uichanco, C.; Mendoza, J.; Sabio, G.; Punay, K. N.; Oquindo, M. R.; Horritt, M.

    2014-12-01

    Widespread flooding is a major problem in the Philippines. The country experiences a heavy amount of rainfall throughout the year, and several areas are prone to flood hazards because of its unique topography. Human casualties and destruction of infrastructure are some of the damages caused by flooding, and the country's government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible catastrophic results of different rain scenarios. Producing these maps requires different types of data, part of which is calculating hydrological components to arrive at an accurate output. This paper presents how an important parameter, the time-to-peak of the watershed (Tp), was calculated. Time-to-peak is defined as the time at which the largest discharge of the watershed occurs. It is computed using a lag time equation that was developed specifically for the Philippine setting. The equation involves three measurable parameters, namely watershed length (L), maximum potential retention (S), and watershed slope (Y). This approach is based on a similar method developed by CH2M Hill and Horritt for Taiwan, which has a set of meteorological and hydrological parameters similar to the Philippines. Data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the time-to-peak. These sensors were chosen using a screening process that considers the distance of the sensors from the sea, the availability of recorded data, and the catchment size. Values of Tp from the different sensors were generated from the general lag time equation in the Natural Resources Conservation Service handbook of the US Department of Agriculture. The calculated Tp values were plotted against the values obtained from the expression L^0.8 (S+1)^0.7 / Y^0.5, and regression analysis was used to obtain the final equation for calculating the time-to-peak specifically for rivers in the Philippine setting. The calculated values could then be used as a parameter for modeling different flood scenarios in the country.
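    The regression step can be sketched as follows, fitting a single coefficient that scales the handbook-style predictor to local observations; all watershed values here are invented placeholders, not the study's sensor data.

      import numpy as np

      # Observed time-to-peak (hours) for gauged storms and the corresponding
      # watershed descriptors -- all values below are invented placeholders.
      L  = np.array([12000., 30000., 8000., 45000.])   # watershed length (ft)
      S  = np.array([2.5, 4.0, 1.8, 3.2])              # max potential retention (in)
      Y  = np.array([3.0, 1.5, 6.0, 2.0])              # watershed slope (%)
      tp = np.array([2.1, 5.6, 1.2, 7.9])              # observed time-to-peak (h)

      # Predictor from the abstract: L^0.8 (S+1)^0.7 / Y^0.5.
      x = L**0.8 * (S + 1.0)**0.7 / np.sqrt(Y)

      # Fit tp = c * x through the origin; c localizes the equation to the
      # data set (the US handbook lag equation corresponds to a fixed c).
      c = np.sum(x * tp) / np.sum(x * x)
      print(f"fitted coefficient: {c:.3e}; predicted tp: {np.round(c * x, 2)}")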

  8. An investigative comparison of purging and non-purging groundwater sampling methods in Karoo aquifer monitoring wells

    NASA Astrophysics Data System (ADS)

    Gomo, M.; Vermeulen, D.

    2015-03-01

    An investigation was conducted to statistically compare the influence of non-purging and purging groundwater sampling methods on analysed inorganic chemistry parameters and calculated saturation indices. Groundwater samples were collected from 15 monitoring wells drilled in Karoo aquifers, before and after purging, for the comparative study. For the non-purging method, samples were collected from groundwater flow zones located in the wells using electrical conductivity (EC) profiling. The two data sets of non-purged and purged groundwater samples were analysed for inorganic chemistry parameters at the Institute of Groundwater Studies (IGS) laboratory of the University of the Free State in South Africa. Saturation indices were calculated for each data set for the mineral phases found in the database of the PHREEQC hydrogeochemical model. Four one-way ANOVA tests were conducted using Microsoft Excel 2007 to investigate whether there is any statistically significant difference between: (1) all inorganic chemistry parameters measured in the non-purged and purged groundwater samples for each specific well, (2) all mineral saturation indices calculated for the non-purged and purged groundwater samples for each specific well, (3) individual inorganic chemistry parameters measured in the non-purged and purged groundwater samples across all wells, and (4) individual mineral saturation indices calculated for non-purged and purged groundwater samples across all wells. For all the ANOVA tests conducted, the calculated alpha values (p) are greater than 0.05 (significance level) and the test statistic (F) is less than the critical value (Fcrit) (F < Fcrit). The results imply that there was no statistically significant difference between the two data sets. With 95% confidence, it was therefore concluded that the variance between groups was due to random chance rather than to the influence of the sampling methods (the tested factor). It is therefore possible that in some hydrogeologic conditions, non-purged groundwater samples may be just as representative as purged ones. The findings of this study can provide an important platform for future evidence-oriented research investigations to establish the necessity of purging prior to groundwater sampling in different aquifer systems.
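    For reference, the same one-way ANOVA can be reproduced outside a spreadsheet in a few lines; the concentration values below are invented placeholders for one parameter measured with and without purging.

      import numpy as np
      from scipy.stats import f_oneway

      # Hypothetical pairs of analyses for one parameter (e.g., EC in mS/m)
      # from the same wells sampled without purging and after purging.
      non_purged = np.array([85.1, 62.3, 110.4, 95.0, 71.8, 88.2])
      purged     = np.array([83.9, 64.0, 108.7, 97.1, 70.5, 90.0])

      # One-way ANOVA: does the sampling method explain between-group variance?
      F, p = f_oneway(non_purged, purged)
      print(f"F = {F:.3f}, p = {p:.3f}")   # p > 0.05 -> no significant method effect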

  9. Feasibility of TCP-based dose painting by numbers applied to a prostate case with (18)F-choline PET imaging.

    PubMed

    Dirscherl, Thomas; Rickhey, Mark; Bogner, Ludwig

    2012-02-01

    A biologically adaptive radiation treatment method to maximize the TCP is shown. Functional imaging is used to acquire a heterogeneous dose prescription in terms of dose painting by numbers and to create a patient-specific IMRT plan. Adapted from a method for selective dose escalation under the guidance of spatial biology distribution, a model was developed which translates heterogeneously distributed radiobiological parameters into voxelwise dose prescriptions. Using the example of a prostate case with (18)F-choline PET imaging, different sets of reported parameter values were examined with respect to their resulting range of dose values, and the influence of each parameter of the linear-quadratic model was investigated. A correlation between PET signal and proliferation as well as cell density was assumed. Using our in-house treatment planning software Direct Monte Carlo Optimization (DMCO), a treatment plan based on the obtained dose prescription was generated. Gafchromic EBT films were irradiated for evaluation. When a TCP of 95% was aimed at, the maximal dose in a voxel of the prescription exceeded 100 Gy for most of the considered parameter sets. One parameter set resulted in a dose range of 87.1 Gy to 99.3 Gy, yielding a TCP of 94.7%, and was investigated more closely. The TCP of the plan decreased to 73.5% after optimization based on that prescription. The dose difference histogram of the optimized and prescribed dose revealed a mean of -1.64 Gy and a standard deviation of 4.02 Gy. Film verification showed reasonable agreement between the planned and delivered dose. If the distribution of radiobiological parameters within a tumor is known, this model can be used to create a dose-painting-by-numbers plan that maximizes the TCP. It was shown that such a heterogeneous dose distribution is technically feasible.
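    The inversion at the heart of such a model can be sketched with a simplified Poisson-LQ TCP in which the beta and proliferation terms carried by the study's model are dropped; the PET-to-clonogen mapping, the alpha value and the voxel count below are all invented.

      import numpy as np

      # Voxelwise Poisson-LQ TCP (alpha-only): TCP_i = exp(-N_i * exp(-alpha*D_i)),
      # inverted to give the dose D_i that achieves a target per-voxel TCP.
      alpha = 0.25                           # 1/Gy, assumed uniform
      pet = np.array([0.4, 0.7, 1.0, 0.9])   # normalized PET signal per voxel
      N = 1e7 * pet                          # clonogens per voxel (assumption)

      tcp_target = 0.95 ** (1.0 / pet.size)  # equal share of a 95% overall TCP
      # Solve TCP_i = exp(-N_i * exp(-alpha * D_i)) for D_i:
      D = (np.log(N) - np.log(-np.log(tcp_target))) / alpha
      print("voxel doses (Gy):", np.round(D, 1))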

  10. Systems and methods for optimal power flow on a radial network

    DOEpatents

    Low, Steven H.; Peng, Qiuyu

    2018-04-24

    Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller sends node operating parameters to the nodes in the set, receives operating parameters from the nodes in the set, calculates a plurality of updated node operating parameters using an iterative process that determines the updated parameters from the operating parameters of the node and of the set, where the iterative process involves evaluation of a closed-form solution, and adjusts the node operating parameters.

  11. On the use of published radiobiological parameters and the evaluation of NTCP models regarding lung pneumonitis in clinical breast radiotherapy.

    PubMed

    Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki

    2011-04-01

    In this study we sought to evaluate and accent the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the sets published in the literature, and a theoretical mean parameter set was computed. In order to investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and the LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models with their ranges, a large area of coincidence was revealed. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence might be enlarged, further constraining their predictive ability. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definite but should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provides a quite accurate fit of the NTCP models to the considered endpoint.
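
    For reference, the LKB model mentioned above can be computed as in the minimal sketch below; the parameter values (TD50, m, n) are illustrative placeholders of the order reported for lung pneumonitis, and the dose-volume histogram is hypothetical.

        # Hedged sketch of the LKB NTCP calculation:
        # NTCP = Phi((gEUD - TD50) / (m * TD50)), gEUD = (sum v_i * D_i^(1/n))^n
        import numpy as np
        from scipy.stats import norm

        def lkb_ntcp(doses, volumes, td50=30.8, m=0.37, n=0.99):
            v = volumes / volumes.sum()
            geud = (v * doses ** (1.0 / n)).sum() ** n   # generalized EUD
            return norm.cdf((geud - td50) / (m * td50))

        # Hypothetical differential DVH: dose bins (Gy) and relative volumes.
        doses = np.array([5.0, 15.0, 25.0, 35.0])
        vols = np.array([0.4, 0.3, 0.2, 0.1])
        print(f"NTCP = {lkb_ntcp(doses, vols):.3f}")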

  12. A validation procedure for a LADAR system radiometric simulation model

    NASA Astrophysics Data System (ADS)

    Leishman, Brad; Budge, Scott; Pack, Robert

    2007-04-01

    The USU LadarSIM software package is a ladar system engineering tool that has recently been enhanced to include the modeling of the radiometry of ladar beam footprints. This paper will discuss our validation of the radiometric model and present a practical approach to future validation work. In order to validate complicated and interrelated factors affecting radiometry, a systematic approach had to be developed. Data for known parameters were gathered first; unknown parameters of the system were then determined from simulation test scenarios. This was done in a way that isolated as many unknown variables as possible, building on the previously obtained results. First, the appropriate voltage threshold levels of the discrimination electronics were set by analyzing the number of false alarms seen in actual data sets. With this threshold set, the system noise was then adjusted to achieve the appropriate number of dropouts. Once a suitable noise level was found, the range errors of the simulated and actual data sets were compared and studied. Predicted errors in range measurements were analyzed using two methods: first by examining the range error of a surface with known reflectivity, and second by examining the range errors for specific detectors with known responsivities. This provided insight into the discrimination method and receiver electronics used in the actual system.
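
    The threshold-setting step can be illustrated with a generic sketch under a zero-mean Gaussian receiver-noise assumption; this is a standard approach, not necessarily the exact LadarSIM procedure, and the numbers are hypothetical.

        # Hedged sketch: voltage threshold T such that P(noise > T) = p_fa
        # for zero-mean Gaussian noise, i.e. T = sigma * sqrt(2) * erfcinv(2 * p_fa).
        import numpy as np
        from scipy.special import erfcinv

        def threshold_for_pfa(noise_sigma, p_fa):
            return noise_sigma * np.sqrt(2.0) * erfcinv(2.0 * p_fa)

        print(threshold_for_pfa(noise_sigma=0.05, p_fa=1e-4))   # volts, hypothetical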

  13. Predicting the partitioning of biological compounds between room-temperature ionic liquids and water by means of the solvation-parameter model.

    PubMed

    Padró, Juan M; Ponzinibbio, Agustín; Mesa, Leidy B Agudelo; Reta, Mario

    2011-03-01

    The partition coefficients, P(IL/w), for different probe molecules as well as for compounds of biological interest between the room-temperature ionic liquids (RTILs) 1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM][PF(6)], 1-hexyl-3-methylimidazolium hexafluorophosphate, [HMIM][PF(6)], 1-octyl-3-methylimidazolium tetrafluoroborate, [OMIM][BF(4)], and water were accurately measured. [BMIM][PF(6)] and [OMIM][BF(4)] were synthesized by adapting a procedure from the literature to a simpler, single-vessel, and faster methodology with much lower consumption of organic solvents. We employed the solvation-parameter model to elucidate the general chemical interactions involved in RTIL/water partitioning. For this purpose, we selected solute descriptor parameters that measure polarity, polarizability, hydrogen-bond-donor and hydrogen-bond-acceptor interactions, and cavity formation for a set of specifically selected probe molecules (the training set). The obtained multiparametric equations were used to predict the partition coefficients for compounds not present in the training set (the test set), most of biological interest. Partial solubility of the ionic liquid in water (and of water in the ionic liquid) was taken into account to explain the obtained results, a factor that has received little attention to date. Solute descriptors were obtained from the literature when available, or else calculated with commercial software. An excellent agreement between calculated and experimental log P(IL/w) values was obtained, which demonstrates that the resulting multiparametric equations are robust and allow the partitioning of any organic molecule in the biphasic systems studied to be predicted.
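
    The solvation-parameter (Abraham-type) regression referred to above, log P = c + eE + sS + aA + bB + vV, can be fit by ordinary least squares as sketched below; the descriptor matrix and log P values are hypothetical stand-ins for a training set.

        # Hedged sketch: least-squares fit of the solvation-parameter model.
        import numpy as np

        # Rows: probe molecules; columns: E, S, A, B, V descriptors (hypothetical).
        X = np.array([
            [0.61, 0.52, 0.00, 0.14, 0.72],
            [0.80, 0.90, 0.26, 0.33, 0.92],
            [1.34, 1.10, 0.57, 0.44, 1.06],
            [0.24, 0.42, 0.37, 0.48, 0.59],
            [0.96, 0.76, 0.00, 0.29, 1.29],
            [0.51, 0.88, 0.12, 0.65, 0.87],
            [0.70, 0.60, 0.43, 0.51, 0.99],
            [1.02, 0.95, 0.00, 0.38, 1.45],
        ])
        logP = np.array([2.1, 1.4, 0.6, -0.3, 2.8, 0.2, 0.9, 3.1])   # hypothetical

        A = np.column_stack([np.ones(len(X)), X])       # prepend intercept c
        coefs, *_ = np.linalg.lstsq(A, logP, rcond=None)
        print(dict(zip("cesabv", coefs)))

        # Predict a test-set compound from its descriptors [1, E, S, A, B, V]:
        test = np.array([1.0, 0.45, 0.66, 0.00, 0.20, 0.99])
        print("predicted log P:", test @ coefs)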

  14. Optimizing the assessment of pediatric injury severity in low-resource settings: Consensus generation through a modified Delphi analysis.

    PubMed

    St-Louis, Etienne; Deckelbaum, Dan Leon; Baird, Robert; Razek, Tarek

    2017-06-01

    Although a plethora of pediatric injury severity scoring systems is available, many of them present important challenges and limitations in the low-resource setting. Our aim is to generate consensus among a group of experts regarding the optimal parameters, outcomes, and methods of estimating injury severity for pediatric trauma patients in low-resource settings. A systematic review of the literature was conducted to identify and compare existing injury scores used in pediatric patients. Qualitative data were extracted from the systematic review, including scoring parameters, settings, and outcomes. In order to establish consensus regarding which of these elements are best adapted to pediatric patients in low-resource settings, they were subjected to a modified Delphi survey for external validation. The Delphi process is a structured communication technique that relies on a panel of experts to reach consensus through systematic, interactive rounds. We invited a group of 38 experts, including adult and pediatric surgeons, emergency physicians, and anesthesiologists, as well as trauma team leaders from a level 1 trauma center in Montreal, Canada, and a pediatric referral trauma hospital in Santiago, Chile, to participate in two successive rounds of our survey. Consensus was reached regarding various features of an ideal pediatric trauma score. Specifically, our experts agreed that a pediatric trauma scoring tool should differ from its adult counterpart, that it can be derived from point-of-care data available at first assessment, that blood pressure is an important variable to include in a predictive model for pediatric trauma outcomes, that blood pressure is a late but specific marker of shock in pediatric patients, that pulse rate is a more sensitive marker of hemodynamic instability than blood pressure, that an assessment of airway status should be included as a predictive variable for pediatric trauma outcomes, and that the AVPU classification of neurologic status is simple and reliable in the acute setting, more so than the GCS at all ages. We therefore conclude that an opportunity exists to develop a new pediatric trauma score, combining the above consensus-generating ideas, that would be best adapted for use in low-resource settings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Space Particle Hazard Specification, Forecasting, and Mitigation

    DTIC Science & Technology

    2007-11-30

    Automated FTP scripts permitted users to automatically update their global input parameter data set directly from the National Oceanic and...of CEASE capabilities. The angular field-of-view for CEASE is relatively large and will not allow for pitch angle resolved measurements. However... angular zones spanning 120° in the plane containing the magnetic field with an approximate 4° width in the direction perpendicular to the look-plane

  16. Development of Viscosity Model for Petroleum Industry Applications

    NASA Astrophysics Data System (ADS)

    Motahhari, Hamed reza

    Heavy oil and bitumen are challenging to produce and process due to their very high viscosity, but their viscosity can be reduced either by heating or by dilution with a solvent. Given the key role of viscosity, an accurate viscosity model suitable for use with reservoir and process simulators is essential. While there are several viscosity models for natural gases and conventional oils, a compositional model applicable to heavy petroleum and diluents is lacking. The objective of this thesis is to develop a general compositional viscosity model that is applicable to natural gas mixtures, conventional crude oils, heavy petroleum fluids, and their mixtures with solvents and other crudes. The recently developed Expanded Fluid (EF) viscosity correlation was selected as a suitable compositional viscosity model for petroleum applications. The correlation relates the viscosity of the fluid to its density over a broad range of pressures and temperatures. The other inputs are pressure and the dilute gas viscosity. Each fluid is characterized for the correlation by a set of fluid-specific parameters which are tuned to fit data. First, the applicability of the EF correlation was extended to asymmetric mixtures and liquid mixtures containing dissolved gas components. A new set of mass-fraction based mixing rules was developed to calculate the fluid-specific parameters for mixtures. The EF correlation with the new set of mixing rules predicted the viscosity of over 100 mixtures of hydrocarbon compounds and carbon dioxide with overall average absolute relative deviations (AARD) of less than 10%, either with measured densities or with densities estimated by the Advanced Peng-Robinson equation of state (APR EoS). To improve the viscosity predictions with APR EoS-estimated densities, general correlations were developed for non-zero viscosity binary interaction parameters. The EF correlation was then extended to non-hydrocarbon compounds typically encountered in the natural gas industry. It was demonstrated that the framework of the correlation is valid for these compounds, except for compounds with strong hydrogen bonding such as water. A temperature dependency was introduced into the correlation for strongly hydrogen-bonding compounds. The EF correlation fit the viscosity data of pure non-hydrocarbon compounds with AARDs below 6% and predicted the viscosity of sour and sweet natural gases and aqueous solutions of organic alcohols with overall AARDs of less than 9%. An internally consistent estimation method was also developed to calculate the fluid-specific parameters for hydrocarbons when no experimental viscosity data are available. The method correlates the fluid-specific parameters to the molecular weight and specific gravity. The method was evaluated against viscosity data for over 250 pure hydrocarbon compounds and petroleum distillation cuts. The EF correlation predictions were found to be within the same order of magnitude as the measurements, with an overall AARD of 31%. A methodology was then proposed to apply the EF viscosity correlation to crude oils characterized as mixtures of defined components and pseudo-components. The above estimation methods are used to calculate the fluid-specific parameters for the pseudo-components. Guidelines are provided for tuning the correlation to available viscosity data, calculating the dilute gas viscosities, and improving the densities calculated with the Peng-Robinson EoS.
The viscosities of over 10 dead and live crude oils and bitumens were predicted within a factor of 3 of the measured values using the measured density of the oils as the input. It was shown that single-parameter tuning of the model improved the viscosity predictions to within 30% of the measured values. Finally, the performance of the EF correlation was evaluated for diluted heavy oils and bitumens. The required density and viscosity data were collected for over 20 diluted dead and live bitumen mixtures using an in-house capillary viscometer equipped with an in-line density meter at temperatures and pressures up to 175 °C and 10 MPa. The predictions of the correlation were found to be within the same order of magnitude as the measured values, with overall AARDs of less than 20%. It was shown that the predictions of the correlation with generalized non-zero interaction parameters for the solvent-oil pairs improved to overall AARDs of less than 10%.
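
    The mass-fraction mixing-rule idea can be illustrated with a generic quadratic rule with binary interaction parameters, in the spirit of (but not identical to) the thesis's EF mixing rules; component values and k_ij below are hypothetical.

        # Hedged sketch of a quadratic mass-fraction mixing rule:
        # p_mix = sum_i sum_j w_i w_j (p_i + p_j)/2 * (1 - k_ij)
        import numpy as np

        def mix_parameter(p, w, k=None):
            w = np.asarray(w, float) / np.sum(w)
            p = np.asarray(p, float)
            k = np.zeros((len(p), len(p))) if k is None else np.asarray(k, float)
            pij = 0.5 * (p[:, None] + p[None, :]) * (1.0 - k)
            return float(w @ pij @ w)

        # Fluid-specific parameter for a 30/70 wt% solvent-bitumen blend (hypothetical).
        print(mix_parameter(p=[0.85, 1.40], w=[0.30, 0.70], k=[[0.0, 0.02], [0.02, 0.0]]))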

  17. Measuring body structures and body functions from the International Classification of Functioning, Disability, and Health perspective: considerations for biomedical parameters in spinal cord injury research.

    PubMed

    Eriks-Hoogland, Inge E; Brinkhof, Martin W G; Al-Khodairy, Abdul; Baumberger, Michael; Brechbühl, Jörg; Curt, Armin; Mäder, Mark; Stucki, Gerold; Post, Marcel W M

    2011-11-01

    The aims of this study were to provide a selection of biomedical domains based on the comprehensive International Classification of Functioning, Disability, and Health (ICF) core sets for spinal cord injury (SCI) and to present an overview of the corresponding measurement instruments. The Biomedical Domain Set, the SCI literature, the International Spinal Cord Society international data sets, and the Spinal Cord Injury Rehabilitation Evidence project publications were used to derive category specifications for use in SCI research. Expert opinion was used to derive a priority selection. The same sources were used to determine candidate measurement instruments for the specification of body functions and body structures, using an example, and guiding principles were applied to select the most appropriate biomedical measurement instrument(s) for use in an SCI research project. Literature searches were performed for 41 second-level ICF body functions categories and for four second-level ICF body structures categories. For some of these categories, only a few candidate measurement instruments were found, with limited variation in the type of measurement instrument. An ICF-based measurement set for biomedical aspects of functioning with SCI was established. For some categories of the ICF core sets for SCI, there is a need to develop measurement instruments.

  18. SENIORLAB: a prospective observational study investigating laboratory parameters and their reference intervals in the elderly.

    PubMed

    Risch, Martin; Nydegger, Urs; Risch, Lorenz

    2017-01-01

    In clinical practice, laboratory results are often important for making diagnostic, therapeutic, and prognostic decisions. Interpreting individual results relies on accurate reference intervals and decision limits. Despite the considerable amount of resources in clinical medicine spent on elderly patients, accurate reference intervals for the elderly are rarely available. The SENIORLAB study set out to determine reference intervals in the elderly by investigating a large variety of laboratory parameters in clinical chemistry, hematology, and immunology. The SENIORLAB study is an observational, prospective cohort study. Subjectively healthy residents of Switzerland aged 60 years and older were included for baseline examination (n = 1467), where anthropometric measurements were taken, medical history was reviewed, and a fasting blood sample was drawn under optimal preanalytical conditions. More than 110 laboratory parameters were measured, and a biobank was set up. The study participants are followed up every 3 to 5 years for quality of life, morbidity, and mortality. The primary aim is to establish age-related reference intervals for the different laboratory parameters. The secondary aims of this study include the following: identify associations between different parameters, identify diagnostic characteristics for different conditions, identify the prevalence of occult disease in subjectively healthy individuals, and identify prognostic factors for the investigated outcomes, including mortality. To provide better grounds for justifying clinical decisions, specific reference intervals for laboratory parameters of the elderly are needed. Reference intervals are obtained from healthy individuals. A major obstacle when obtaining reference intervals in the elderly is the definition of health in seniors, because individuals without any medical condition or medication are rare in older adulthood. Reference intervals obtained from such individuals cannot be considered representative for seniors in a status of age-specific normal health. In addition to the established methods for determining reference intervals, this longitudinal study utilizes a unique approach in that survival and long-term well-being are taken as indicators of health in seniors. This approach is expected to provide robust and representative reference intervals that are obtained from an adequate reference population and not from a collective of highly selected individuals. The present study was registered under International Standard Randomized Controlled Trial Number registry: ISRCTN53778569.
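
    For context, the standard nonparametric reference-interval estimate (the central 95% of a healthy reference population, following the usual CLSI convention) is a one-liner; the data below are simulated stand-ins for one laboratory parameter.

        # Hedged sketch: nonparametric 2.5th-97.5th percentile reference interval.
        import numpy as np

        values = np.random.default_rng(1).normal(loc=140, scale=12, size=1467)  # hypothetical analyte
        lower, upper = np.percentile(values, [2.5, 97.5])
        print(f"reference interval: {lower:.1f} - {upper:.1f}")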

  19. SPOTting Model Parameters Using a Ready-Made Python Package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2017-04-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterization of the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and the calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
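
    A minimal sketch of SPOTPY's documented setup-class pattern is given below, here calibrating the Rosenbrock function with the SCE-UA sampler; details of the API may differ between SPOTPY versions.

        # Hedged sketch following the SPOTPY tutorial pattern.
        import spotpy

        class RosenbrockSetup:
            def __init__(self):
                self.params = [spotpy.parameter.Uniform('x', -5, 5),
                               spotpy.parameter.Uniform('y', -5, 5)]

            def parameters(self):
                return spotpy.parameter.generate(self.params)

            def simulation(self, vector):
                x, y = vector
                return [(1 - x) ** 2 + 100 * (y - x ** 2) ** 2]

            def evaluation(self):
                return [0.0]   # the known Rosenbrock minimum

            def objectivefunction(self, simulation, evaluation):
                return spotpy.objectivefunctions.rmse(evaluation, simulation)  # SCE-UA minimizes

        sampler = spotpy.algorithms.sceua(RosenbrockSetup(), dbname='rosenbrock', dbformat='csv')
        sampler.sample(2000)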

  20. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterization of the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and the calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  1. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method often depends more on its availability than on its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms and 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel, from the workstation to large computation clusters, using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterization of the Rosenbrock, Griewank, and Ackley functions; a one-dimensional physically based soil moisture routine, in which we searched for parameters of the van Genuchten-Mualem function; and the calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  2. Analysis of energy-based algorithms for RNA secondary structure prediction

    PubMed Central

    2012-01-01

    Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Conclusions Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. 
with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets. PMID:22296803
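
    The bootstrap percentile method used above to judge the reliability of an average F-measure can be sketched as follows; the per-RNA F-measures are simulated stand-ins for a benchmark dataset.

        # Hedged sketch: bootstrap percentile confidence interval for a mean F-measure.
        import numpy as np

        rng = np.random.default_rng(0)
        f_measures = rng.beta(5, 2, size=2000)   # hypothetical per-RNA F-measures

        boot_means = np.array([
            rng.choice(f_measures, size=f_measures.size, replace=True).mean()
            for _ in range(10_000)
        ])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean F-measure {f_measures.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")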

  3. Analysis of energy-based algorithms for RNA secondary structure prediction.

    PubMed

    Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H

    2012-02-01

    RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.

  4. Interactive model evaluation tool based on IPython notebook

    NASA Astrophysics Data System (ADS)

    Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet

    2015-04-01

    In hydrological modelling, some kind of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion used to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure, so in the course of the modelling process an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to visualise, together with an objective function and a time period of interest. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations, with a color scale corresponding to the goodness of fit of each combination. Finally, a slider provides a threshold to exclude non-behavioural parameter sets; the color scale is applied only to the remaining parameter sets. By interactively changing the settings and interpreting the graph, the user gains insight into the structural behaviour of the model. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
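
    A minimal notebook sketch of such an interactive response-surface view, using matplotlib and ipywidgets, is shown below; the parameter samples, objective values, and names are hypothetical, not the authors' tool.

        # Hedged sketch: threshold slider over a parameter response surface.
        import numpy as np
        import matplotlib.pyplot as plt
        from ipywidgets import interact, FloatSlider

        rng = np.random.default_rng(42)
        p1, p2 = rng.uniform(0, 1, 500), rng.uniform(0, 5, 500)    # sampled parameter sets
        objective = (p1 - 0.3) ** 2 + 0.1 * (p2 - 2.0) ** 2        # stand-in objective (lower = better)

        def plot_surface(threshold=1.0):
            keep = objective <= threshold          # exclude non-behavioural parameter sets
            plt.scatter(p1[keep], p2[keep], c=objective[keep], cmap='viridis')
            plt.colorbar(label='objective function')
            plt.xlabel('parameter 1'); plt.ylabel('parameter 2')
            plt.show()

        interact(plot_surface, threshold=FloatSlider(min=0.05, max=2.0, step=0.05, value=1.0))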

  5. Two types of phase diagrams for two-species Bose-Einstein condensates and the combined effect of the parameters

    NASA Astrophysics Data System (ADS)

    Li, Z. B.; Liu, Y. M.; Yao, D. X.; Bao, C. G.

    2017-07-01

    Under the Thomas-Fermi approximation, an approach is proposed to solve the coupled Gross-Pitaevskii equations (CGP) for the two-species Bose-Einstein condensate analytically. The essence of this approach is to find the building blocks from which the solution is built. By introducing weighted strengths, relatively simple analytical solutions have been obtained. A number of formulae have been deduced to relate the parameters when the system is experimentally tuned to various statuses. These formulae demonstrate the combined effect of the parameters and are useful for evaluating their magnitudes. The whole parameter space is divided into zones, each of which supports a specific phase. All the boundaries separating these zones have analytical expressions. Based on this division, phase diagrams against any set of parameters can be plotted. In addition, by introducing a model for the asymmetric states, the total energies of the lowest symmetric and asymmetric states have been compared, and the conditions under which the former is replaced by the latter have been evaluated. The CGP can be written in matrix form. For repulsive inter-species interaction V_AB, when the parameters vary and cross the singular point of the matrix, a specific state transition will happen and the total energy of the lowest symmetric state will increase remarkably. This provides an excellent opportunity for the lowest asymmetric state to emerge as the ground state. For attractive V_AB, when the parameters tend to a singular point, the system will tend to collapse. The effects caused by the singular points have been studied in particular.
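
    The Thomas-Fermi reduction behind this approach can be sketched in a standard textbook form (notation g_ij for the interaction strengths and mu_i for the chemical potentials is assumed here; this is not the authors' exact weighted-strength formulation):

        % Dropping the kinetic terms, the coupled GP equations reduce to a
        % linear system for the two densities in the overlap region:
        \begin{aligned}
        \mu_A &= V(\mathbf{r}) + g_{AA}\,n_A(\mathbf{r}) + g_{AB}\,n_B(\mathbf{r}),\\
        \mu_B &= V(\mathbf{r}) + g_{AB}\,n_A(\mathbf{r}) + g_{BB}\,n_B(\mathbf{r}),
        \end{aligned}
        \qquad
        n_A = \frac{g_{BB}\,(\mu_A - V) - g_{AB}\,(\mu_B - V)}{g_{AA}g_{BB} - g_{AB}^{2}},
        \quad n_B \text{ analogously.}

    The vanishing of the determinant g_AA g_BB - g_AB^2 is the singular point of the matrix referred to above: there the Thomas-Fermi solution breaks down and the state transition occurs.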

  6. Volume effects of late term normal tissue toxicity in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Bonta, Dacian Viorel

    Modeling of volume effects for treatment toxicity is paramount for the optimization of radiation therapy. This thesis proposes a new model for calculating volume effects in gastro-intestinal and genito-urinary normal tissue complication probability (NTCP) following radiation therapy for prostate carcinoma. The radiobiological and pathological basis for this model and its relationship to other models are detailed. A review of radiobiological experiments and published clinical data identified salient features and specific properties that a biologically adequate model has to conform to. The new model was fit to a set of actual clinical data. In order to verify the goodness of fit, two established NTCP models and a non-NTCP measure for complication risk were fitted to the same clinical data. The model parameters were fit by maximum likelihood estimation. Within the framework of the maximum likelihood approach, I estimated the parameter uncertainties for each complication prediction model. The quality of fit was determined using the Akaike Information Criterion. Based on the model that provided the best fit, I identified the volume effects for both types of toxicities. Computer-based bootstrap resampling of the original dataset was used to estimate the bias and variance of the fitted parameter values. Computer simulation was also used to estimate the population size that yields a specific uncertainty level (3%) in the value of the predicted complication probability. The same method was used to estimate the size of the patient population needed for an accurate choice of the model underlying the NTCP. The results indicate that, depending on the number of parameters of a specific NTCP model, 100 patients (for two-parameter models) or 500 patients (for three-parameter models) are needed for an accurate parameter fit. Correlation of complication occurrence in patients was also investigated. The results suggest that complication outcomes are correlated within a patient, although the correlation coefficient is rather small.
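
    The Akaike Information Criterion comparison used for quality-of-fit is simple to reproduce; the log-likelihood values below are hypothetical.

        # Hedged sketch: AIC = 2k - 2 ln L, comparing models with different
        # parameter counts; lower AIC indicates the better trade-off.
        def aic(log_likelihood, n_params):
            return 2 * n_params - 2 * log_likelihood

        models = {"two-parameter NTCP": aic(-112.4, 2),     # hypothetical fits
                  "three-parameter NTCP": aic(-110.9, 3)}
        best = min(models, key=models.get)
        for name, value in models.items():
            print(f"{name}: AIC={value:.1f} (delta={value - models[best]:.1f})")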

  7. Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.

    PubMed

    Vilhelmsen, Troels N; Ferré, Ty P A

    2018-05-01

    Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in the data used to derive model structure and to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given common financial restrictions, it is critical that we identify data with maximal information content with respect to the forecasts of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to an optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, the information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection, either in a manner that suggests specific measurement sets or one that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets. © 2017, National Ground Water Association.
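
    A toy linear-Gaussian data-worth sketch is given below: greedily pick the candidate measurement that most reduces forecast variance, updating the parameter covariance after each pick. All matrices are hypothetical; as the paper notes, such sequential (greedy) selection can be suboptimal compared with simultaneous selection.

        # Hedged sketch: greedy measurement selection by posterior forecast variance.
        import numpy as np

        rng = np.random.default_rng(3)
        n_par, n_cand = 6, 12
        P = np.eye(n_par)                        # prior parameter covariance
        a = rng.normal(size=n_par)               # forecast sensitivity: f = a @ x
        H = rng.normal(size=(n_cand, n_par))     # candidate measurement sensitivities
        r = 0.1                                  # measurement noise variance

        chosen = []
        for _ in range(3):                       # select 3 measurements
            scores = []
            for i in range(n_cand):
                if i in chosen:
                    scores.append(np.inf)
                    continue
                h = H[i]
                gain = (P @ h) / (h @ P @ h + r)
                P_new = P - np.outer(gain, h @ P)
                scores.append(a @ P_new @ a)     # posterior forecast variance
            best = int(np.argmin(scores))
            chosen.append(best)
            h = H[best]
            P = P - np.outer((P @ h) / (h @ P @ h + r), h @ P)

        print("selected measurements:", chosen, "forecast variance:", a @ P @ a)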

  8. Acoustic Characterization of a Multi-Rotor Unmanned Aircraft

    NASA Astrophysics Data System (ADS)

    Feight, Jordan; Gaeta, Richard; Jacob, Jamey

    2017-11-01

    In this study, the noise produced by a small multi-rotor rotary-wing aircraft, or drone, is measured and characterized. The aircraft is tested in different configurations and environments to investigate specific parameters and how they affect the acoustic signature of the system. The parameters include rotor RPM, the number of rotors, the distance and angle of the microphone array from the noise source, and the ambient environment. The testing environments include an anechoic chamber for an idealized setting and both indoor and outdoor settings to represent real-world conditions. PIV measurements are conducted to link the downwash and vortical flow structures from the rotors with the noise generation. The significant factors that arise from this study are the operational state of the aircraft and the microphone location (or the directivity of the noise source). The directivity in the rotor plane was shown to be omni-directional, regardless of the varying parameters. The tonal noise dominates the low to mid frequencies while the broadband noise dominates the higher frequencies. The fundamental characteristics of the acoustic signature appear to be invariant to the number of rotors. Flight maneuvers of the aircraft also significantly impact the tonal content in the acoustic signature.

  9. Large-scale systematic analysis of 2D fingerprint methods and parameters to improve virtual screening enrichments.

    PubMed

    Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody

    2010-05-24

    A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods such as MOLPRINT2D, Radial, and Dendritic, which encode information about the local environment beyond simple linear paths, outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart, were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
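
    The core of such a screen, Tanimoto similarity on binary fingerprints followed by an enrichment-factor calculation, can be sketched as follows; the fingerprints and active labels are randomly generated stand-ins, not Canvas output.

        # Hedged sketch: Tanimoto ranking and enrichment factor at the top 1%.
        import numpy as np

        rng = np.random.default_rng(7)
        db = rng.integers(0, 2, size=(1000, 1024), dtype=np.uint8)   # database fingerprints
        query = rng.integers(0, 2, size=1024, dtype=np.uint8)        # query fingerprint
        is_active = rng.random(1000) < 0.05                          # hypothetical labels

        common = (db & query).sum(axis=1)
        tanimoto = common / (db.sum(axis=1) + query.sum() - common)

        top = np.argsort(tanimoto)[::-1][:10]                        # top 1% of the screen
        ef_1pct = is_active[top].mean() / is_active.mean()           # enrichment factor
        print(f"EF(1%) = {ef_1pct:.1f}")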

  10. Figures of Merit for Control Verification

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for evaluating a controller's ability to satisfy a set of closed-loop specifications when the plant has an arbitrary functional dependency on uncertain parameters. Control verification metrics applicable to deterministic and probabilistic uncertainty models are proposed. These metrics, which result from sizing the largest uncertainty set of a given class for which the specifications are satisfied, enable systematic assessment of competing control alternatives regardless of the methods used to derive them. A particularly attractive feature of the tools derived is that their efficiency and accuracy do not depend on the robustness of the controller. This is in sharp contrast to Monte Carlo based methods where the number of simulations required to accurately approximate the failure probability grows exponentially with its closeness to zero. This framework allows for the integration of complex, high-fidelity simulations of the integrated system and only requires standard optimization algorithms for its implementation.
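
    The "largest safe uncertainty set" idea can be illustrated schematically as below: bisect on the half-width of a hyper-rectangle around the nominal parameters. The sampled worst-case check and the specification function are hypothetical stand-ins for the paper's rigorous verification machinery.

        # Hedged sketch: sizing the largest parameter box satisfying a spec.
        import numpy as np

        rng = np.random.default_rng(11)
        nominal = np.array([1.0, 0.5])

        def spec_satisfied(p):
            # Hypothetical closed-loop specification, e.g. a damping requirement.
            return p[0] - 2.0 * p[1] ** 2 > 0.2

        def set_is_safe(halfwidth, n_samples=2000):
            # Approximate worst-case check by sampling the box interior.
            samples = nominal + rng.uniform(-halfwidth, halfwidth, size=(n_samples, 2))
            return all(spec_satisfied(p) for p in samples)

        lo_hw, hi_hw = 0.0, 1.0
        for _ in range(30):                  # bisection on the box half-width
            mid = 0.5 * (lo_hw + hi_hw)
            lo_hw, hi_hw = (mid, hi_hw) if set_is_safe(mid) else (lo_hw, mid)
        print(f"largest verified half-width ~ {lo_hw:.4f}")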

  11. Designing for Annual Spacelift Performance

    NASA Technical Reports Server (NTRS)

    McCleskey, Carey M.; Zapata, Edgar

    2017-01-01

    This paper presents a methodology for approaching space launch system design from a total architectural point of view. This different approach to conceptual design is contrasted with traditional approaches that focus on a single set of metrics for flight system performance, e.g., payload lift per flight, vehicle mass, specific impulse, etc. The approach presented works with a larger set of metrics, including annual system lift, or "spacelift" performance. Spacelift performance is more inclusive of the flight production capability of the total architecture, i.e., the flight and ground systems working together as a whole to produce flights on a repeated basis. In the proposed methodology, spacelift performance becomes an important design-for-support parameter for flight system concepts and truly advanced spaceport architectures of the future. The paper covers examples of existing system spacelift performance as benchmarks, points out specific attributes of space transportation systems that must be greatly improved over these existing designs, and outlines current activity in this area.

  12. The use of magnetic resonance sounding for quantifying specific yield and transmissivity in hard rock aquifers: The example of Benin

    NASA Astrophysics Data System (ADS)

    Vouillamoz, J. M.; Lawson, F. M. A.; Yalo, N.; Descloitres, M.

    2014-08-01

    Hundreds of thousands of boreholes have been drilled in the hard rocks of Africa and Asia to supply human communities with drinking water. Despite the common use of geophysics to improve the siting of boreholes, a significant number of drilled holes do not deliver enough water to be equipped (e.g. 40% on average in Benin). As compared to other non-invasive geophysical methods, magnetic resonance sounding (MRS) is selective to groundwater. However, this distinctive feature has not been fully used in previously published studies for quantifying the drainable groundwater in hard rocks (i.e. the specific yield) and the short-term productivity of the aquifer (i.e. the transmissivity). We present in this paper a comparison of MRS results (i.e. the water content and pore-size parameter) with both specific yield and transmissivity calculated from long-duration pumping tests. We conducted our experiments at six sites located in different hard rock groups in Benin, thus providing a unique data set for assessing the usefulness of MRS in hard rock aquifers. We found that the MRS water content is about twice the specific yield. We also found that the MRS pore-size parameter is well correlated with the specific yield. We therefore propose two linear equations for calculating the specific yield from the MRS water content (with an uncertainty of about 10%) and from the pore-size parameter (with an uncertainty of about 20%). The latter has the advantage of defining a so-named MRS cutoff time value for identifying non-drainable MRS water content and thus low groundwater reserve. Finally, we propose a nonlinear equation for calculating the specific yield using jointly the MRS water content and the pore-size parameter, but this approach has to be confirmed by further investigations. This study also confirmed that aquifer transmissivity can be estimated from MRS results with an uncertainty of about 70%. We conclude that MRS can be usefully applied for estimating aquifer specific yield and transmissivity in weathered hard rock aquifers. Our results will contribute to the improvement of well siting and groundwater management in hard rocks.

  13. Automated respiratory cycles selection is highly specific and improves respiratory mechanics analysis.

    PubMed

    Rigo, Vincent; Graas, Estelle; Rigo, Jacques

    2012-07-01

    Selected optimal respiratory cycles should allow calculation of respiratory mechanics parameters that focus on patient-ventilator interaction. New computer software that automatically selects optimal breaths, and the respiratory mechanics derived from those cycles, are evaluated. Retrospective study. University level III neonatal intensive care unit. Ten-minute synchronized intermittent mandatory ventilation and assist/control ventilation recordings from ten newborns. The ventilator provided respiratory mechanics data (ventilator respiratory cycles) every 10 secs. Pressure, flow, and volume waves and pressure-volume, pressure-flow, and volume-flow loops were reconstructed from continuous pressure-volume recordings. Visual assessment determined assisted leak-free optimal respiratory cycles (selected respiratory cycles). New software graded the quality of cycles (automated respiratory cycles). Respiratory mechanics values were derived from both sets of optimal cycles. We evaluated the quality of the selection and compared mean values and their variability according to ventilatory mode and respiratory mechanics provenance. To assess discriminating power, all 45 "t" values obtained from interpatient comparisons were compared for each respiratory mechanics parameter. A total of 11,724 breaths were evaluated. Agreement between automated and visually selected respiratory cycles is high: 88% of maximal κ with linear weighting. Specificity and positive predictive values are 0.98 and 0.96, respectively. Averaged values are similar between automated and ventilator respiratory cycles. C20/C alone is markedly decreased in automated respiratory cycles (1.27 ± 0.37 vs. 1.81 ± 0.67). The apparent similarity in tidal volume disappears in assist/control: automated respiratory cycle tidal volume (4.8 ± 1.0 mL/kg) is significantly lower than for ventilator respiratory cycles (5.6 ± 1.8 mL/kg). Coefficients of variation decrease for all automated respiratory cycle parameters in all infants. "t" values from automated respiratory cycle data are two to three times higher than those from ventilator respiratory cycles. Automated selection is highly specific. Automated respiratory cycles best reflect the interaction of both ventilator and patient. Improving the discriminating power of ventilator monitoring will likely help in assessing disease status and following trends. Averaged parameters derived from automated respiratory cycles are more precise and could be displayed by ventilators to improve real-time fine tuning of ventilator settings.

  14. Effects and dose-response relationships of resistance training on physical performance in youth athletes: a systematic review and meta-analysis.

    PubMed

    Lesinski, Melanie; Prieske, Olaf; Granacher, Urs

    2016-07-01

    To quantify the age-, sex-, sport- and training-type-specific effects of resistance training on physical performance, and to characterise dose-response relationships of resistance training parameters that could maximise gains in physical performance in youth athletes. Systematic review and meta-analysis of intervention studies. Studies were identified by a systematic literature search in the databases PubMed and Web of Science (1985-2015). Weighted mean standardised mean differences (SMDwm) were calculated using random-effects models. Studies were included only if they used an active control group, investigated the effects of resistance training in youth athletes (6-18 years), and tested at least one physical performance measure. 43 studies met the inclusion criteria. Our analyses revealed moderate effects of resistance training on muscle strength and vertical jump performance (SMDwm 0.8-1.09), and small effects on linear sprint, agility and sport-specific performance (SMDwm 0.58-0.75). Effects were moderated by sex and resistance training type. Independently computed dose-response relationships for resistance training parameters revealed that a training period of >23 weeks, 5 sets/exercise, 6-8 repetitions/set, a training intensity of 80-89% of 1 repetition maximum (RM), and 3-4 min rest between sets were most effective for improving muscle strength (SMDwm 2.09-3.40). Resistance training is an effective method to enhance muscle strength and jump performance in youth athletes, moderated by sex and resistance training type. Dose-response relationships for key training parameters indicate that youth coaches should primarily implement resistance training programmes with fewer repetitions and higher intensities to improve the physical performance measures of youth athletes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  15. The structural identifiability and parameter estimation of a multispecies model for the transmission of mastitis in dairy cows with postmilking teat disinfection.

    PubMed

    White, L J; Evans, N D; Lam, T J G M; Schukken, Y H; Medley, G F; Godfrey, K R; Chappell, M J

    2002-01-01

    A mathematical model for the transmission of two interacting classes of mastitis-causing bacterial pathogens in a herd of dairy cows is presented and applied to a specific data set. The data were derived from a field trial of a specific measure used in the control of these pathogens, in which half the individuals were subjected to the control and in the others the treatment was discontinued. The resultant mathematical model (eight non-linear simultaneous ordinary differential equations) therefore incorporates heterogeneity in the host as well as in the infectious agent, and consequently the effects of control are intrinsic to the model structure. A structural identifiability analysis of the model is presented, demonstrating that the scope of the novel method used allows application to high-order non-linear systems. The results of a simultaneous estimation of six unknown system parameters are presented. Previous work estimated only a subset of these, either simultaneously or individually. Therefore, not only are new estimates provided for the parameters relating to the transmission and control of the classes of pathogens under study, but also information about the relationships between them. We exploit the close link between mathematical modelling, structural identifiability analysis, and parameter estimation to obtain biological insights into the system modelled.

  16. Power analysis and trend detection for water quality monitoring data. An application for the Greater Yellowstone Inventory and Monitoring Network

    USGS Publications Warehouse

    Irvine, Kathryn M.; Manlove, Kezia; Hollimon, Cynthia

    2012-01-01

    An important consideration for long term monitoring programs is determining the required sampling effort to detect trends in specific ecological indicators of interest. To enhance the Greater Yellowstone Inventory and Monitoring Network’s water resources protocol(s) (O’Ney 2006 and O’Ney et al. 2009 [under review]), we developed a set of tools to: (1) determine the statistical power for detecting trends of varying magnitude in a specified water quality parameter over different lengths of sampling (years) and different within-year collection frequencies (monthly or seasonal sampling) at particular locations using historical data, and (2) perform periodic trend analyses for water quality parameters while addressing seasonality and flow weighting. A power analysis for trend detection is a statistical procedure used to estimate the probability of rejecting the hypothesis of no trend when in fact there is a trend, within a specific modeling framework. In this report, we base our power estimates on using the seasonal Kendall test (Helsel and Hirsch 2002) for detecting trend in water quality parameters measured at fixed locations over multiple years. We also present procedures (R-scripts) for conducting a periodic trend analysis using the seasonal Kendall test with and without flow adjustment. This report provides the R-scripts developed for power and trend analysis, tutorials, and the associated tables and graphs. The purpose of this report is to provide practical information for monitoring network staff on how to use these statistical tools for water quality monitoring data sets.
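
    A minimal sketch of the seasonal Kendall statistic underlying those R-scripts is given below (no tie or serial correlation corrections, and flow adjustment omitted); the monthly series is simulated with a small upward trend.

        # Hedged sketch: seasonal Kendall test, summing Mann-Kendall S and its
        # variance over seasons and converting to a normal-approximation p-value.
        import numpy as np
        from scipy.stats import norm

        def mann_kendall_s(x):
            x = np.asarray(x)
            s = sum(np.sign(x[j] - x[i])
                    for i in range(len(x) - 1) for j in range(i + 1, len(x)))
            var = len(x) * (len(x) - 1) * (2 * len(x) + 5) / 18.0   # no tie correction
            return s, var

        def seasonal_kendall_p(series_by_season):
            S = sum(mann_kendall_s(s)[0] for s in series_by_season)
            V = sum(mann_kendall_s(s)[1] for s in series_by_season)
            z = (S - np.sign(S)) / np.sqrt(V) if S != 0 else 0.0
            return 2 * norm.sf(abs(z))

        rng = np.random.default_rng(5)
        years = np.arange(10)
        data = [0.02 * years + rng.normal(0, 0.1, size=10) for _ in range(12)]  # 12 months
        print(f"seasonal Kendall p-value: {seasonal_kendall_p(data):.3f}")

    Wrapping such a test in a Monte Carlo loop over simulated trends of varying magnitude yields the power estimates described above.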

  17. Edge Modeling by Two Blur Parameters in Varying Contrasts.

    PubMed

    Seo, Suyoung

    2018-06-01

    This paper presents a method of modeling edge profiles with two blur parameters, and of estimating and predicting those edge parameters under varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find the parameters that produce the global minimum error. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side and light-side edge brightness, following a certain global trend that is similar across varying CODs. The proposed edge model is compared with a one-blur-parameter edge model in experiments on the root mean squared error of fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model is superior to the one-blur-parameter edge model in most cases where edges have varying brightness combinations.

  18. Recommendations and settings of word prediction software by health-related professionals for patients with spinal cord injury: a prospective observational study.

    PubMed

    Pouplin, Samuel; Roche, Nicolas; Hugeron, Caroline; Vaugier, Isabelle; Bensmail, Djamel

    2016-02-01

    For people with cervical spinal cord injury (SCI), access to computers can be difficult, and several devices have been developed to facilitate their use. However, even with these devices, text input speed remains very slow compared to users who do not have a disability. Several methods have been developed to increase text input speed, such as word prediction software (WPS). Health-related professionals (HRP) often recommend this type of software to people with cervical SCI. WPS can be customized using different settings, and the settings used are likely to influence the effectiveness of the software on text input speed. However, there is currently a lack of literature regarding professional practices for the setting of WPS as well as the impact for users. To analyze the word prediction software settings used by HRP for people with cervical SCI. Prospective observational study. Garches, France; health-related professionals who recommend word prediction software. A questionnaire was submitted to HRP who advise tetraplegic people regarding the use of communication devices. A total of 93 professionals responded to the survey. The most frequently recommended software was Skippy, a commercially available software package. HRP rated the ability to customise the settings as highly important. Moreover, they rated some settings as more important than others (P<0.001). However, except for the number of words displayed, each setting was configured by fewer than 50% of HRP. The results showed a difference between the perceived importance of some settings and data in the literature regarding the optimization of settings. Moreover, although some parameters were considered very important, they were rarely specifically configured. Confidence in default settings and a lack of information regarding optimal settings seem to be the main reasons for this discordance. This could also explain the disparate results of studies evaluating the impact of WPS on text input speed in people with cervical SCI. Professionals tend to have confidence in default settings, despite the fact that they are not always appropriate for users. It thus seems essential to develop information networks and training to disseminate the results of studies and, in consequence, possibly improve communication for people with cervical SCI who use such devices.

  19. Argon-oxygen atmospheric pressure plasma treatment on carbon fiber reinforced polymer for improved bonding

    NASA Astrophysics Data System (ADS)

    Chartosias, Marios

    Acceptance of Carbon Fiber Reinforced Polymer (CFRP) structures requires a robust surface preparation method with improved process controls capable of ensuring high bond quality. Surface preparation in a production clean room environment prior to applying adhesive for bonding would minimize the risk of contamination and reduce cost. Plasma treatment is a robust surface preparation process capable of being applied in a production clean room environment with process parameters that are easily controlled and documented. Repeatable and consistent processing is enabled through the development of a process parameter window utilizing techniques such as Design of Experiments (DOE) tailored to specific adhesive and substrate bonding applications. Insight from the respective plasma treatment Original Equipment Manufacturers (OEMs) and screening tests distinguished critical process factors from non-factors and set the associated factor levels prior to execution of the DOE. Results from mode I Double Cantilever Beam (DCB) testing per the ASTM D 5528 [1] standard and DOE statistical analysis software are used to produce a regression model and determine appropriate optimum settings for each factor.

  20. Delayed neutron spectral data for Hansen-Roach energy group structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, J.M.; Spriggs, G.D.

    A detailed knowledge of delayed neutron spectra is important in reactor physics. It not only allows for an accurate estimate of the effective delayed neutron fraction β_eff but also is essential to calculating important reactor kinetic parameters, such as effective group abundances and the ratio of β_eff to the prompt neutron generation time. Numerous measurements of delayed neutron spectra for various delayed neutron precursors have been performed and reported in the literature. However, for application in reactor physics calculations, these spectra are usually lumped into one of the traditional six groups of delayed neutrons in accordance with their half-lives. Subsequently, these six-group spectra are binned into energy intervals corresponding to the energy intervals of a chosen nuclear cross-section set. In this work, the authors present a set of delayed neutron spectra that were formulated specifically to match Keepin's six-group parameters and the 16-energy-group Hansen-Roach cross sections.
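
    The binning step described above amounts to redistributing each six-group spectrum onto the 16 Hansen-Roach energy intervals. A hedged sketch, assuming a flat distribution within each fine energy bin; the actual group boundaries must be supplied:

      import numpy as np

      def rebin_spectrum(fine_edges, fine_chi, group_edges):
          # Redistribute per-bin probabilities fine_chi (defined over energy
          # bin edges fine_edges) onto coarse group boundaries group_edges,
          # splitting each fine bin in proportion to its energy overlap.
          chi_g = np.zeros(len(group_edges) - 1)
          for i, (e0, e1) in enumerate(zip(fine_edges[:-1], fine_edges[1:])):
              for g, (g0, g1) in enumerate(zip(group_edges[:-1], group_edges[1:])):
                  overlap = max(0.0, min(e1, g1) - max(e0, g0))
                  if overlap > 0.0:
                      chi_g[g] += fine_chi[i] * overlap / (e1 - e0)
          return chi_g / chi_g.sum()  # renormalize to unit probability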

  1. Water quality program elements for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Sauer, Richard L.; Ramanathan, Raghupathy; Straub, John E.; Schultz, John R.

    1991-01-01

    A strategy is outlined for the development of water-quality criteria and standards relevant to recycling and monitoring the in-flight water for the Space Station Freedom (SSF). The water-reclamation subsystem of the SSF's ECLSS is described, and the objectives of the water-quality program are set forth with attention to contaminants. Quality parameters are listed for potable and hygiene-related water, including physical and organic parameters, inorganic constituents, bactericides, and microbial content. Comparisons are made to the quality parameters established for the Shuttle's potable water and to the EPA's current standards. Specific research is required to develop in-flight monitoring techniques for unique SSF contaminants, ECLSS microbial control, and on- and off-line monitoring. After discussing some of the in-flight water-monitoring hardware, it is concluded that water reclamation and recycling are necessary and feasible for the SSF.

  2. Quantifying and predicting Drosophila larvae crawling phenotypes

    NASA Astrophysics Data System (ADS)

    Günther, Maximilian N.; Nettesheim, Guilherme; Shubeita, George T.

    2016-06-01

    The fruit fly Drosophila melanogaster is a widely used model for cell biology, development, disease, and neuroscience. The fly’s power as a genetic model for disease and neuroscience can be augmented by a quantitative description of its behavior. Here we show that we can accurately account for the complex and unique crawling patterns exhibited by individual Drosophila larvae using a small set of four parameters obtained from the trajectories of a few crawling larvae. The values of these parameters change for larvae from different genetic mutants, as we demonstrate for fly models of Alzheimer’s disease and the Fragile X syndrome, allowing applications such as genetic or drug screens. Using the quantitative model of larval crawling developed here we use the mutant-specific parameters to robustly simulate larval crawling, which allows estimating the feasibility of laborious experimental assays and aids in their design.

  3. Methodology for modeling the devolatilization of refuse-derived fuel from thermogravimetric analysis of municipal solid waste components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritsky, K.J.; Miller, D.L.; Cernansky, N.P.

    1994-09-01

    A methodology was introduced for modeling the devolatilization characteristics of refuse-derived fuel (RDF) in terms of temperature-dependent weight loss. The basic premise of the methodology is that RDF is modeled as a combination of select municipal solid waste (MSW) components. Kinetic parameters are derived for each component from thermogravimetric analyzer (TGA) data measured at a specific set of conditions. These experimentally derived parameters, along with user-derived parameters, are input to model equations for the purpose of calculating thermograms for the components. The component thermograms are summed to create a composite thermogram that is an estimate of the devolatilization of the as-modeled RDF. The methodology has several attractive features as a thermal analysis tool for waste fuels. 7 refs., 10 figs., 3 tabs.
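
    The composite-thermogram idea can be sketched with single-reaction, first-order Arrhenius kinetics per component; the kinetic triplets and mass fractions below are placeholders, not the paper's fitted TGA values:

      import numpy as np

      def component_weight_loss(T, A, Ea, v_inf, beta):
          # Residual weight fraction for single-reaction, first-order
          # devolatilization at constant heating rate beta (K/s); T in K,
          # A in 1/s, Ea in J/mol; v_inf is the ultimate volatile fraction.
          R = 8.314
          k = A * np.exp(-Ea / (R * T))
          # extent of conversion along the ramp: alpha = 1 - exp(-(1/beta) * int k dT)
          alpha = 1.0 - np.exp(-np.cumsum(k * np.gradient(T)) / beta)
          return 1.0 - v_inf * alpha

      T = np.linspace(400.0, 1100.0, 700)  # temperature ramp, K
      # hypothetical (A, Ea, v_inf) triplets and mass fractions for two components
      components = [(1e10, 1.5e5, 0.80, 0.6),   # paper-like fraction
                    (5e12, 1.9e5, 0.75, 0.4)]   # plastic-like fraction
      composite = sum(frac * component_weight_loss(T, A, Ea, v, beta=10.0 / 60.0)
                      for A, Ea, v, frac in components)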

  4. Determination and correction of persistent biases in quantum annealers

    PubMed Central

    Perdomo-Ortiz, Alejandro; O’Gorman, Bryan; Fluegemann, Joseph; Biswas, Rupak; Smelyanskiy, Vadim N.

    2016-01-01

    Calibration of quantum computers is essential to the effective utilisation of their quantum resources. Specifically, the performance of quantum annealers is likely to be significantly impaired by noise in their programmable parameters, effectively misspecification of the computational problem to be solved, often resulting in spurious suboptimal solutions. We developed a strategy to determine and correct persistent, systematic biases between the actual values of the programmable parameters and their user-specified values. We applied the recalibration strategy to two D-Wave Two quantum annealers, one at NASA Ames Research Center in Moffett Field, California, and another at D-Wave Systems in Burnaby, Canada. We show that the recalibration procedure not only reduces the magnitudes of the biases in the programmable parameters but also enhances the performance of the device on a set of random benchmark instances. PMID:26783120
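
    One plausible reading of the recalibration idea, as a sketch: program the nominally zero problem, read out residual spin means as bias estimates, and shift the user's fields to cancel them. sample_fn stands in for a hardware sampling call and is hypothetical.

      import numpy as np

      def estimate_biases(sample_fn, n_qubits, n_reads=10000):
          # Program the nominally zero problem (h = 0, no couplings) and read
          # the mean spin of each qubit; a nonzero mean indicates a persistent
          # effective bias field on that qubit.
          h_zero = np.zeros(n_qubits)
          spins = sample_fn(h_zero, {}, n_reads)   # (n_reads, n_qubits) of +/-1
          return spins.mean(axis=0)

      def corrected_fields(h_user, bias, gain=1.0):
          # Shift the user-specified fields to cancel the estimated biases.
          return h_user - gain * bias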

  5. Atmospheric radiance interpolation for the modeling of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Fuehrer, Perry; Healey, Glenn; Rauch, Brian; Slater, David; Ratkowski, Anthony

    2008-04-01

    The calibration of data from hyperspectral sensors to spectral radiance enables the use of physical models to predict measured spectra. Since environmental conditions are often unknown, material detection algorithms have emerged that utilize predicted spectra over ranges of environmental conditions. The predicted spectra are typically generated by a radiative transfer (RT) code such as MODTRAN™. Such techniques require the specification of a set of environmental conditions. This is particularly challenging in the LWIR, for which temperature and atmospheric constituent profiles are required as inputs to the RT codes. We have developed an automated method for generating environmental conditions to obtain a desired sampling of spectra in the sensor radiance domain. Because sensor radiance spectra depend nonlinearly on the environmental parameters, specifying model conditions by a uniform sampling of those parameters causes well-known problems; our method avoids them by using an initial set of radiance vectors, concatenated over a set of conditions, to define the mapping from environmental conditions to sensor spectral radiance. This approach enables a given number of model conditions to span the space of desired radiance spectra and improves both the accuracy and efficiency of detection algorithms that rely upon predicted spectra.

  6. Personalizing Androgen Suppression for Prostate Cancer Using Mathematical Modeling.

    PubMed

    Hirata, Yoshito; Morino, Kai; Akakura, Koichiro; Higano, Celestia S; Aihara, Kazuyuki

    2018-02-08

    Using a dataset of 150 patients treated with intermittent androgen suppression (IAS) on a fixed treatment schedule, we retrospectively designed a mathematically personalized treatment schedule for each patient. We estimated 100 sets of parameter values for each patient by randomly resampling each patient's time points to account for the uncertainty in observations of prostate-specific antigen (PSA). We then identified 3 types and classified patients accordingly: in type (i), the relapse, namely the divergence of PSA, can be prevented by IAS; in type (ii), the relapse can be delayed by IAS longer than by continuous androgen suppression (CAS); in type (iii), IAS was not beneficial and therefore CAS would have been more appropriate in the long run. Moreover, we obtained a treatment schedule of hormone therapy by minimizing the PSA three years later in the worst-case scenario among the 100 parameter sets, searching exhaustively over all possible treatment schedules. If the most frequent type among the 100 sets was type (i), the maximal PSA tended to be kept below 100 ng/ml longer in IAS than in CAS, while there was no statistical difference in the other cases. Thus, mathematically personalized IAS should be studied prospectively.
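
    The worst-case (minimax) schedule selection can be sketched compactly; psa_after_3y stands in for a forward simulation of the patient model under a candidate schedule and is hypothetical:

      import numpy as np

      def choose_schedule(schedules, psa_after_3y, param_sets):
          # For each candidate schedule, evaluate the PSA three years out under
          # every resampled parameter set and keep the worst (largest) value;
          # return the schedule whose worst case is smallest.
          worst = [max(psa_after_3y(s, p) for p in param_sets) for s in schedules]
          return schedules[int(np.argmin(worst))]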

  7. Diffusion Restrictions Surrounding Mitochondria: A Mathematical Model of Heart Muscle Fibers

    PubMed Central

    Ramay, Hena R.; Vendelin, Marko

    2009-01-01

    Abstract Several experiments on permeabilized heart muscle fibers suggest the existence of diffusion restrictions grouping mitochondria and surrounding ATPases. The specific causes of these restrictions are not known, but intracellular structures are speculated to act as diffusion barriers. In this work, we assume that diffusion restrictions are induced by sarcoplasmic reticulum (SR), cytoskeleton proteins localized near SR, and crowding of cytosolic proteins. The aim of this work was to test whether such localization of diffusion restrictions would be consistent with the available experimental data and evaluate the extent of the restrictions. For that, a three-dimensional finite-element model was composed with the geometry based on mitochondrial and SR structural organization. Diffusion restrictions induced by SR and cytoskeleton proteins were varied with other model parameters to fit the set of experimental data obtained on permeabilized rat heart muscle fibers. There are many sets of model parameters that were able to reproduce all experiments considered in this work. However, in all the sets, <5–6% of the surface formed by SR and associated cytoskeleton proteins is permeable to metabolites. Such a low level of permeability indicates that the proteins should play a dominant part in formation of the diffusion restrictions. PMID:19619458

  8. Predicting the High Redshift Galaxy Population for JWST

    NASA Astrophysics Data System (ADS)

    Flynn, Zoey; Benson, Andrew

    2017-01-01

    The James Webb Space Telescope will be launched in Oct 2018 with the goal of observing galaxies in the redshift range of z = 10 - 15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects past redshift z > 9 but nothing at redshift z > 11. In order to detect these faint objects at redshifts z = 11-15 we need to increase exposure time by at least a factor of 1.66.

  9. Developing a Suitable Model for Water Uptake for Biodegradable Polymers Using Small Training Sets.

    PubMed

    Valenzuela, Loreto M; Knight, Doyle D; Kohn, Joachim

    2016-01-01

    Prediction of the dynamic properties of water uptake across polymer libraries can accelerate polymer selection for a specific application. We first built semiempirical models using Artificial Neural Networks and all water uptake data as individual inputs. These models give very good correlations (R² > 0.78 for the test set) but very low accuracy on cross-validation sets (fewer than 19% of experimental points within experimental error). Instead, using consolidated parameters like equilibrium water uptake, a good model is obtained (R² = 0.78 for the test set), with accurate predictions for 50% of tested polymers. The semiempirical model was applied to the 56-polymer library of L-tyrosine-derived polyarylates, identifying groups of polymers that are likely to satisfy design criteria for water uptake. This research demonstrates that a surrogate modeling effort can reduce the number of polymers that must be synthesized and characterized to identify an appropriate polymer that meets certain performance criteria.

  10. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    NASA Astrophysics Data System (ADS)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise is accurately known prior to the measurement, so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines for applying the procedure in situations not explicitly considered here, to promote its adoption in data analysis.
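
    For reference, the DER_SNR-style estimator at its default parameter setting reduces to a scaled median of second differences; a minimal sketch, assuming approximately normal, independent noise and a well-sampled series:

      import numpy as np

      def noise_estimate(flux):
          # Scaled median of second differences: robust to smooth signal
          # structure, calibrated so the result equals the noise standard
          # deviation for Gaussian noise.
          f = np.asarray(flux, dtype=float)
          f = f[np.isfinite(f)]
          return 1.482602 / np.sqrt(6.0) * np.median(
              np.abs(2.0 * f[2:-2] - f[:-4] - f[4:]))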

  11. Your Place or Mine? Does the Sleep Location Matter in Young Couples?

    PubMed

    Spiegelhalder, Kai; Regen, Wolfram; Siemon, Franziska; Kyle, Simon D; Baglioni, Chiara; Feige, Bernd; Nissen, Christoph; Riemann, Dieter

    2017-01-01

    This study sought to characterize the impact of sleep location (own sleeping environment vs. partner's sleeping environment), social setting (sleeping in pairs vs. sleeping alone), and sex on sleep. An experimental 2 × 2 (sleep location × social setting) within-subject design was employed with 15 young heterosexual couples. The results suggest that sleep location does not appear to have a strong and consistent effect on sleep quantity or quality. The social setting had a specific effect in heterosexual young men, who were found to sleep longer and rise later when cosleeping with their partner. In contrast, we did not find any significant effect of the social setting on sleep continuity parameters in women. In both sexes, sleep quality was perceived to be better when sleeping in pairs. However, there was a higher concordance of the partners' body movements in cosleeping nights compared to the sleeping alone condition.

  12. Coupling heat and chemical tracer experiments for estimating heat transfer parameters in shallow alluvial aquifers.

    PubMed

    Wildemeersch, S; Jamin, P; Orban, P; Hermans, T; Klepikova, M; Nguyen, F; Brouyère, S; Dassargues, A

    2014-11-15

    Geothermal energy systems, closed or open, are increasingly considered for heating and/or cooling buildings. The efficiency of such systems depends on the thermal properties of the subsurface. Therefore, feasibility and impact studies performed prior to their installation should include a field characterization of thermal properties and a heat transfer model using parameter values measured in situ. However, there is a lack of in situ experiments and methodology for performing such a field characterization, especially for open systems. This study presents an in situ experiment designed for estimating heat transfer parameters in shallow alluvial aquifers, with a focus on the specific heat capacity. The experiment consists of simultaneously injecting hot water and a chemical tracer into the aquifer and monitoring the evolution of groundwater temperature and concentration in the recovery well (and possibly in other piezometers located down gradient). Temperature and concentrations are then used for estimating the specific heat capacity. The first method for estimating this parameter is based on modeling the chemical tracer and temperature breakthrough curves at the recovery well in series. The second method is based on an energy balance. The values of specific heat capacity estimated by the two methods (2.30 and 2.54 MJ/m³/K) for the experimental site in the alluvial aquifer of the Meuse River (Belgium) are almost identical and consistent with values found in the literature. Temperature breakthrough curves in other piezometers are not required for estimating the specific heat capacity. However, they highlight that heat transfer in the alluvial aquifer of the Meuse River is complex and contrasted, with different dominant processes depending on depth, leading to significant vertical heat exchange between the upper and lower parts of the aquifer. Furthermore, these temperature breakthrough curves could be included in the calibration of a complex heat transfer model for estimating the entire set of heat transfer parameters and their spatial distribution by inverse modeling. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Jing-Jy; Flood, Paul E.; LePoire, David

    In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and by re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. At first, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences can be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. Results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.

  14. Setting priorities in health care organizations: criteria, processes, and parameters of success.

    PubMed

    Gibson, Jennifer L; Martin, Douglas K; Singer, Peter A

    2004-09-08

    Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.

  15. Phytoplankton growth rate modelling: can spectroscopic cell chemotyping be superior to physiological predictors?

    PubMed

    Fanesi, Andrea; Wagner, Heiko; Wilhelm, Christian

    2017-02-08

    Climate change has a strong impact on phytoplankton communities and water quality. However, the development of robust techniques to assess phytoplankton growth is still in progress. In this study, the growth rate of phytoplankton cells grown at different temperatures was modelled based on conventional physiological traits (e.g. chlorophyll, carbon and photosynthetic parameters) using the partial least squares regression (PLSR) algorithm, and compared with a new approach combining Fourier transform infrared spectroscopy and PLSR. In this second model, it is assumed that the macromolecular composition of phytoplankton cells represents an intracellular marker for growth. The models have comparably high predictive power (R² > 0.8) and low error in predicting new observations. Interestingly, not all of the predictors carry the same weight in the modelling of growth rate. A set of specific parameters, such as non-photochemical fluorescence quenching (NPQ) and the quantum yield of carbon production in the first model, and lipid, protein and carbohydrate contents in the second, strongly covary with cell growth rate regardless of the taxonomic position of the phytoplankton species investigated. This reflects a set of specific physiological adjustments covarying with growth rate, conserved among taxonomically distant algal species, that might be used as guidelines for the improvement of modern primary production models. The high predictive power of both sets of cellular traits for growth rate is of great importance for applied phycological studies. Our approach may find application as a quality control tool for the monitoring of phytoplankton populations in natural communities or in photobioreactors. © 2017 The Author(s).
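
    A minimal sketch of the second modelling route (spectroscopy-derived predictors fed to PLSR to predict growth rate), with synthetic stand-in data; the shapes and coefficients are illustrative only:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      # X: rows = cultures, columns = spectroscopy-derived predictors (e.g.
      # lipid, protein, carbohydrate bands); y: measured growth rates.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 12))
      y = 0.5 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=60)

      pls = PLSRegression(n_components=3)
      y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
      r2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"cross-validated R^2 = {r2:.2f}")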

  16. Autonomic Management in a Distributed Storage System

    NASA Astrophysics Data System (ADS)

    Tauber, Markus

    2010-07-01

    This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results, and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.

  17. Experimental validation of a Monte-Carlo-based inversion scheme for 3D quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan

    2017-03-01

    Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.

  18. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications and that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method comes from having the final image produced by a linear combination of two separately reconstructed images - one containing gray level information and the other with enhanced high frequency components. Both of the images result from few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at University of California, Davis.
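
    The core of the proposed algorithm, a final image formed as a linear combination of two separately reconstructed images, reduces to a one-line operation; the function and weight names below are illustrative, with the two weights playing the role of the algorithm's well-defined parameters:

      import numpy as np

      def combine_reconstructions(img_gray, img_highfreq, w_gray=1.0, w_high=0.5):
          # Final image as a linear combination of two separately reconstructed
          # images: one carries gray-level information, the other enhanced high
          # spatial frequencies; the two weights are the tunable parameters.
          return w_gray * np.asarray(img_gray) + w_high * np.asarray(img_highfreq)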

  19. Continuous Firefly Algorithm for Optimal Tuning of Pid Controller in Avr System

    NASA Astrophysics Data System (ADS)

    Bendjeghaba, Omar

    2014-01-01

    This paper presents a tuning approach based on the continuous firefly algorithm (CFA) to obtain the proportional-integral-derivative (PID) controller parameters in an Automatic Voltage Regulator (AVR) system. In the tuning process, the CFA is iterated to reach optimal or near-optimal PID controller parameters, with the main goal of improving the AVR step response characteristics. Simulations show the effectiveness and efficiency of the proposed approach, which can also improve the dynamics of the AVR system. Compared with particle swarm optimization (PSO), the new CFA tuning method gives better control system performance in terms of time domain specifications and set-point tracking.
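
    A generic continuous firefly loop for minimizing a controller cost, as a sketch; avr_itae_cost stands in for a simulated AVR step-response objective (e.g. ITAE) and is hypothetical:

      import numpy as np

      def firefly_minimize(cost, bounds, n_fireflies=20, n_iter=100,
                           alpha=0.2, beta0=1.0, gamma=1.0, seed=1):
          # Dimmer fireflies move toward brighter (lower-cost) ones with
          # attractiveness beta0 * exp(-gamma * r^2), plus a small random walk.
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, float).T
          x = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
          f = np.array([cost(xi) for xi in x])
          for _ in range(n_iter):
              for i in range(n_fireflies):
                  for j in range(n_fireflies):
                      if f[j] < f[i]:
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          step = beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                          x[i] = np.clip(x[i] + step +
                                         alpha * rng.uniform(-0.5, 0.5, len(lo)),
                                         lo, hi)
                          f[i] = cost(x[i])
          best = int(np.argmin(f))
          return x[best], f[best]

      # e.g. gains, cost = firefly_minimize(avr_itae_cost, [(0, 2), (0, 1), (0, 1)])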

  20. Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes

    NASA Astrophysics Data System (ADS)

    Pietsch, Stephan

    2017-04-01

    DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant-function- or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools to investigate ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong, since: (i) no given ecosystem is ever at steady state, and (ii) ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns. This presentation will give examples of how these general failures within current DGVMs may be addressed.

  1. Addressing optimality principles in DGVMs: Dynamics of Carbon allocation changes.

    NASA Astrophysics Data System (ADS)

    Pietsch, S.

    2016-12-01

    DGVMs are designed to reproduce and quantify ecosystem processes. Based on plant-function- or species-specific parameter sets, the energy, carbon, nitrogen and water cycles of different ecosystems are assessed. These models have proven to be important tools to investigate ecosystem fluxes as driven by plant, site and environmental factors. The general model approach assumes steady-state conditions and constant model parameters. Both assumptions, however, are wrong: no given ecosystem is ever at steady state, and ecosystems have the capability to adapt to changes in growth conditions, e.g. via changes in allocation patterns. This presentation will give examples of how these general failures within current DGVMs may be addressed.

  2. Defining the Field of Existence of Shrouded Blades in High-Speed Gas Turbines

    NASA Astrophysics Data System (ADS)

    Belousov, Anatoliy I.; Nazdrachev, Sergeiy V.

    2018-01-01

    This work provides a method for determining the region of existence of shrouded blades of gas turbines for aircraft engines, based on the analytical evaluation of tensile stresses in specific characteristic sections of the blade. This region is determined by the set of values of the parameter that forms the law of distribution of the cross-sectional area along the height of the airfoil. When seven independent parameters (gas-dynamic, structural and strength) are varied, the choice of the best option is proposed at the early design stage. As an example, the influence of the dimension of a turbine on the domain of existence of shrouded blades is shown.

  3. Invariant polarimetric contrast parameters of coherent light.

    PubMed

    Réfrégier, Philippe; Goudail, François

    2002-06-01

    Many applications use an active coherent illumination and analyze the variation of the polarization state of optical signals. However, as a result of the use of coherent light, these signals are generally strongly perturbed with speckle noise. This is the case, for example, for active polarimetric imaging systems that are useful for enhancing contrast between different elements in a scene. We propose a rigorous definition of the minimal set of parameters that characterize the difference between two coherent and partially polarized states. Indeed, two states of partially polarized light are a priori defined by eight parameters, for example, their two Stokes vectors. We demonstrate that the processing performance for such signal processing tasks as detection, localization, or segmentation of spatial or temporal polarization variations is uniquely determined by two scalar functions of these eight parameters. These two scalar functions are the invariant parameters that define the polarimetric contrast between two polarized states of coherent light. Different polarization configurations with the same invariant contrast parameters will necessarily lead to the same performance for a given task, which is a desirable quality for a rigorous contrast measure. The definition of these polarimetric contrast parameters simplifies the analysis and the specification of processing techniques for coherent polarimetric signals.

  4. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used in a novel way to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  5. Quantitative prediction of radio frequency induced local heating derived from measured magnetic field maps in magnetic resonance imaging: A phantom validation at 7 T

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaotong; Liu, Jiaen; Van de Moortele, Pierre-Francois

    2014-12-15

    Electrical Properties Tomography (EPT) utilizes measurable radio frequency (RF) coil induced magnetic fields (B1 fields) in a Magnetic Resonance Imaging (MRI) system to quantitatively reconstruct the local electrical properties (EP) of biological tissues. Information derived from the same data set, e.g., complex values of the B1 distribution towards electric field calculation, can be used to estimate, on a subject-specific basis, the local Specific Absorption Rate (SAR). SAR plays a significant role in RF pulse design for high-field MRI applications, where maximum local tissue heating remains one of the most constraining limits. The purpose of the present work is to investigate the feasibility of such B1-based local SAR estimation, expanding on previously proposed EPT approaches. To this end, B1 calibration was obtained in a gelatin phantom at 7 T with a multi-channel transmit coil, under a particular multi-channel B1-shim setting (B1-shim I). Using this unique set of B1 calibration data, the local SAR distribution was subsequently predicted for B1-shim I, as well as for another B1-shim setting (B1-shim II), considering a specific set of parameters for a heating MRI protocol consisting of RF pulses played at 1% duty cycle. Local SAR results, which could not be directly measured with MRI, were subsequently converted into temperature changes, which in turn were validated against temperature changes measured by MRI thermometry based on the proton chemical shift.

  6. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, Noah G.; Veirs, Douglas K.; Claytor, Thomas N.

    1994-01-01

    The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure.

  7. Fluid density and concentration measurement using noninvasive in situ ultrasonic resonance interferometry

    DOEpatents

    Pope, N.G.; Veirs, D.K.; Claytor, T.N.

    1994-10-25

    The specific gravity or solute concentration of a process fluid solution located in a selected structure is determined by obtaining a resonance response spectrum of the fluid/structure over a range of frequencies that are outside the response of the structure itself. A fast Fourier transform (FFT) of the resonance response spectrum is performed to form a set of FFT values. A peak value for the FFT values is determined, e.g., by curve fitting, to output a process parameter that is functionally related to the specific gravity and solute concentration of the process fluid solution. Calibration curves are required to correlate the peak FFT value over the range of expected specific gravities and solute concentrations in the selected structure. 7 figs.
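
    The two patent records above describe the same FFT-and-peak procedure; a hedged sketch of that signal chain, using a three-point parabolic fit as one common choice for the curve-fitting step (an interior peak is assumed):

      import numpy as np

      def fft_peak_parameter(spectrum):
          # Magnitude FFT of the resonance response, dominant bin located,
          # then refined with a three-point parabolic fit.
          mag = np.abs(np.fft.rfft(spectrum - np.mean(spectrum)))
          k = int(np.argmax(mag[1:])) + 1          # skip the DC bin
          y0, y1, y2 = mag[k - 1], mag[k], mag[k + 1]
          delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
          return k + delta   # map to specific gravity via a calibration curve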

  8. Modeling Single Ventricle Physiology: Review of Engineering Tools to Study First Stage Palliation of Hypoplastic Left Heart Syndrome

    PubMed Central

    Biglino, Giovanni; Giardini, Alessandro; Hsia, Tain-Yen; Figliola, Richard; Taylor, Andrew M.; Schievano, Silvia

    2013-01-01

    First stage palliation of hypoplastic left heart syndrome, i.e., the Norwood operation, results in a complex physiological arrangement, involving different shunting options (modified Blalock-Taussig, RV-PA conduit, central shunt from the ascending aorta) and enlargement of the hypoplastic ascending aorta. Engineering techniques, both computational and experimental, can aid in the understanding of the Norwood physiology, and their correct implementation can potentially lead to refinement of the decision-making process by means of patient-specific simulations. This paper presents some of the available tools that can corroborate clinical evidence by providing detailed insight into the fluid dynamics of the Norwood circulation as well as alternative surgical scenarios (i.e., virtual surgery). Patient-specific anatomies can be manufactured by means of rapid prototyping, and such models can be inserted in experimental set-ups (mock circulatory loops) that provide a valuable source of validation data as well as hydrodynamic information. Such models can be tuned to respond to differing patient physiologies. Experimental set-ups can also be compatible with visualization techniques, like particle image velocimetry and cardiovascular magnetic resonance, further adding to the knowledge of the local fluid dynamics. Multi-scale computational models include detailed three-dimensional (3D) anatomical information coupled to a lumped parameter network representing the remainder of the circulation. These models output overall hemodynamic parameters while also enabling investigation of the local fluid dynamics of the aortic arch or the shunt. As an alternative, pure lumped parameter models can also be employed to model Stage 1 palliation, taking advantage of a much lower computational cost, albeit missing the 3D anatomical component. Finally, analytical techniques, such as wave intensity analysis, can be employed to study the Norwood physiology, providing a mechanistic perspective on the ventriculo-arterial coupling for this specific surgical scenario. PMID:24400277

  9. The influence of uncertainty and location-specific conditions on the environmental prioritisation of human pharmaceuticals in Europe.

    PubMed

    Oldenkamp, Rik; Huijbregts, Mark A J; Ragas, Ad M J

    2016-05-01

    The selection of priority APIs (Active Pharmaceutical Ingredients) can benefit from a spatially explicit approach, since an API might exceed the threshold of environmental concern in one location while staying below that same threshold in another. However, such a spatially explicit approach is relatively data intensive and subject to parameter uncertainty due to limited data. This raises the question to what extent a spatially explicit approach for the environmental prioritisation of APIs remains worthwhile when accounting for uncertainty in parameter settings. We show here that the inclusion of spatially explicit information enables a more efficient environmental prioritisation of APIs in Europe, compared with a non-spatial EU-wide approach, also under uncertain conditions. In a case study with nine antibiotics, uncertainty distributions of the PAF (Potentially Affected Fraction) of aquatic species were calculated in 100 × 100 km² environmental grid cells throughout Europe, and used for the selection of priority APIs. Two APIs have median PAF values that exceed a threshold PAF of 1% in at least one environmental grid cell in Europe, i.e., oxytetracycline and erythromycin. At a tenfold lower threshold PAF (i.e., 0.1%), two additional APIs would be selected, i.e., cefuroxime and ciprofloxacin. However, in 94% of the environmental grid cells in Europe, no APIs exceed either of the thresholds. This illustrates the advantage of following a location-specific approach in the prioritisation of APIs. This added value remains when accounting for uncertainty in parameter settings, i.e., if the 95th percentile of the PAF instead of its median value is compared with the threshold. In 96% of the environmental grid cells, the location-specific approach still enables a reduction of the selection of priority APIs of at least 50%, compared with an EU-wide prioritisation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Parameter optimization of parenchymal texture analysis for prediction of false-positive recalls from screening mammography

    NASA Astrophysics Data System (ADS)

    Ray, Shonket; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2016-03-01

    This work details a methodology to obtain optimal parameter values for a locally-adaptive texture analysis algorithm that extracts mammographic texture features representative of breast parenchymal complexity for predicting false-positive (FP) recalls from breast cancer screening with digital mammography. The algorithm has two components: (1) adaptive selection of localized regions of interest (ROIs) and (2) Haralick texture feature extraction via Gray-Level Co-Occurrence Matrices (GLCM). The following parameters were systematically varied: mammographic views used, upper limit of the ROI window size used for adaptive ROI selection, GLCM distance offsets, and gray levels (binning) used for feature extraction. For each parameter set, logistic regression with stepwise feature selection was performed on a clinical screening cohort of 474 non-recalled women and 68 FP-recalled women; FP recall prediction was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), and associations between the extracted features and FP recall were assessed via odds ratios (OR). A default instance of the mediolateral oblique (MLO) view, an upper ROI size limit of 143.36 mm (2048 pixels²), a GLCM distance offset range of 0.07 to 0.84 mm (1 to 12 pixels), and 16 GLCM gray levels was set. The highest ROC performance value of AUC=0.77 [95% confidence interval: 0.71-0.83] was obtained at three specific instances: the default instance, an upper ROI window equal to 17.92 mm (256 pixels²), and gray levels set to 128. The texture feature of sum average was chosen as a statistically significant (p<0.05) predictor and was associated with higher odds of FP recall for 12 out of 14 total instances.
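
    A sketch of the texture-extraction component using scikit-image's GLCM utilities; the offsets and levels simply mirror the swept parameters, and note that graycoprops covers only a subset of Haralick features, so the paper's "sum average" would need a custom computation from the normalized GLCM:

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def roi_texture_features(roi, levels=16, distances=(1, 6, 12)):
          # Quantize the ROI to the chosen number of gray levels, build GLCMs
          # for the swept distance offsets, and average each property over
          # offsets and angles.
          edges = np.linspace(roi.min(), roi.max() + 1e-9, levels + 1)
          q = (np.digitize(roi, edges) - 1).astype(np.uint8)
          glcm = graycomatrix(q, distances=distances, angles=(0.0, np.pi / 2),
                              levels=levels, symmetric=True, normed=True)
          return {p: graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")}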

  11. Dose Calculation on KV Cone Beam CT Images: An Investigation of the Hu-Density Conversion Stability and Dose Accuracy Using the Site-Specific Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rong Yi, E-mail: rong@humonc.wisc.ed; Smilowitz, Jennifer; Tewatia, Dinesh

    2010-10-01

    Precise calibration of Hounsfield units (HU) to electron density (HU-density) is essential to dose calculation. On-board kV cone beam computed tomography (CBCT) imaging is used predominantly for patient positioning, but will potentially be used for dose calculation. The impacts of varying three imaging parameters (mAs, source-imager distance [SID], and cone angle) and phantom size on the HU number accuracy and HU-density calibrations for CBCT imaging were studied. We propose a site-specific calibration method to achieve higher accuracy in CBCT image-based dose calculation. Three configurations of the Computerized Imaging Reference Systems (CIRS) water-equivalent electron density phantom were used to simulate sites including head, lungs, and lower body (abdomen/pelvis). The planning computed tomography (CT) scan was used as the baseline for comparisons. CBCT scans of these phantom configurations were performed using a Varian Trilogy™ system in a precalibrated mode with fixed tube voltage (125 kVp) but varied mAs, SID, and cone angle. An HU-density curve was generated and evaluated for each set of scan parameters. Three HU-density tables generated using different phantom configurations with the same imaging parameter settings were selected for dose calculation on CBCT images for an accuracy comparison. Changing mAs or SID had a small impact on HU numbers. For adipose tissue, the HU discrepancy from the baseline was 20 HU in a small phantom, but 5 times larger in a large phantom. Yet, reducing the cone angle significantly decreases the HU discrepancy. The HU-density table was affected accordingly. A dose comparison between CT and CBCT image-based plans showed that using site-specific HU-density tables to calibrate CBCT images of different sites improves the dose accuracy to ~2%. Our phantom study showed that CBCT imaging can be a feasible option for dose computation in an adaptive radiotherapy approach if site-specific calibration is applied.
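
    The site-specific calibration amounts to keeping one HU-density lookup per anatomical site and interpolating; a minimal sketch, with placeholder table values rather than measured calibration data:

      import numpy as np

      # One HU-to-density lookup per anatomical site, each built from CBCT
      # scans of the matching phantom configuration; values are placeholders.
      hu_density_tables = {
          "head":   ([-1000, -50, 0, 300, 1200], [0.00, 0.95, 1.00, 1.12, 1.82]),
          "pelvis": ([-1000, -60, 0, 280, 1200], [0.00, 0.94, 1.00, 1.10, 1.80]),
      }

      def hu_to_density(hu, site):
          hu_pts, rho_pts = hu_density_tables[site]
          return np.interp(hu, hu_pts, rho_pts)   # piecewise-linear calibration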

  12. Advanced interactive display formats for terminal area traffic control

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.

    1996-01-01

    This report describes the basic design considerations for perspective air traffic control displays. A software framework has been developed for manual viewing parameter setting (MVPS), in preparation for continued, ongoing development of automated viewing parameter setting (AVPS) schemes. Two distinct modes of MVPS operation are considered, both of which utilize manipulation pointers embedded in the three-dimensional scene: (1) direct manipulation of the viewing parameters -- in this mode the manipulation pointers act like the control-input device through which the viewing parameter changes are made. Some of the parameters are rate-controlled and some position-controlled. This mode is intended for making fast, iterative small changes in the parameters. (2) Indirect manipulation of the viewing parameters -- this mode is intended primarily for introducing large, predetermined changes in the parameters. Requests for changes in the viewing parameter setting are entered manually by the operator by moving viewing parameter manipulation pointers on the screen. The motion of these pointers, which are an integral part of the 3-D scene, is limited to the boundaries of the screen. This arrangement was chosen in order to preserve the correspondence between the spatial layouts of the new and the old viewing parameter settings, a feature which helps prevent spatial disorientation of the operator. For all viewing operations, e.g. rotation, translation and ranging, the actual change is executed automatically by the system through gradual transitions with an exponentially damped, sinusoidal velocity profile, referred to in this work as 'slewing' motions. The slewing functions, which eliminate discontinuities in the viewing parameter changes, are designed primarily to enhance the operator's impression that he or she is dealing with an actually existing physical system, rather than an abstract computer-generated scene. The proposed, continued research effort will deal with the development of automated viewing parameter setting schemes. These schemes employ an optimization strategy aimed at identifying the best possible vantage point from which the air traffic control scene can be viewed for a given traffic situation. They determine whether a change in viewing parameter setting is required and determine the dynamic path along which the change to the new viewing parameter setting should take place.
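
    A sketch of the 'slewing' transition under the stated assumptions: an exponentially damped sinusoidal velocity shape whose integral is normalized so a scalar viewing parameter lands exactly on its target; the damping constant is illustrative, not the report's coefficient:

      import numpy as np

      def slew(p0, p1, n_steps=60, zeta=4.0):
          # Velocity shape: exponentially damped half-sine, zero at both ends;
          # its normalized integral gives a smooth displacement from p0 to p1.
          tau = np.linspace(0.0, 1.0, n_steps)
          v = np.exp(-zeta * tau) * np.sin(np.pi * tau)
          s = np.cumsum(v)
          s = (s - s[0]) / (s[-1] - s[0])
          return p0 + (p1 - p0) * s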

  13. Red Cell Properties after Different Modes of Blood Transportation

    PubMed Central

    Makhro, Asya; Huisjes, Rick; Verhagen, Liesbeth P.; Mañú-Pereira, María del Mar; Llaudet-Planas, Esther; Petkova-Kirova, Polina; Wang, Jue; Eichler, Hermann; Bogdanova, Anna; van Wijk, Richard; Vives-Corrons, Joan-Lluís; Kaestner, Lars

    2016-01-01

    Transportation of blood samples is unavoidable for assessment of specific parameters in the blood of patients with rare anemias, for blood doping testing, or for research purposes. Despite the awareness that shipment may substantially alter multiple parameters, no study of this extent has been performed to assess these changes and optimize shipment conditions to reduce transportation-related artifacts. Here we investigate the changes in multiple parameters in the blood of healthy donors over 72 h of simulated shipment conditions. Three different anticoagulants (K3EDTA, sodium heparin, and citrate-based CPDA) at two temperatures (4°C and room temperature) were tested to define the optimal transportation conditions. The parameters measured cover common cytology and biochemistry parameters (complete blood count, hematocrit, morphological examination), red blood cell (RBC) volume, ion content and density, membrane properties and stability (hemolysis, osmotic fragility, membrane heat stability, patch-clamp investigations, and formation of microvesicles), Ca2+ handling, RBC metabolism, activity of numerous enzymes, and O2 transport capacity. Our findings indicate that individual sets of parameters may require different shipment settings (anticoagulants, temperature). Most of the parameters, except for ion (Na+, K+, Ca2+) handling and, possibly, reticulocyte counts, tend to favor transportation at 4°C. Whereas plasma and intraerythrocytic Ca2+ cannot be accurately measured in the presence of chelators such as citrate and EDTA, the majority of Ca2+-dependent parameters are stabilized in CPDA samples. Even in blood samples from healthy donors transported using an optimized shipment protocol, the majority of parameters were stable within 24 h, a condition that may not hold for the samples of patients with rare anemias. This calls for shipment times that are as short as possible, using fast courier services to the closest expert laboratory within reach. Mobile laboratories or travel of the patients to specialized laboratories may be the only option for some groups of patients with highly unstable RBCs. PMID:27471472

  14. Dependence of subject-specific parameters for a fast helical CT respiratory motion model on breathing rate: an animal study

    NASA Astrophysics Data System (ADS)

    O'Connell, Dylan; Thomas, David H.; Lamb, James M.; Lewis, John H.; Dou, Tai; Sieren, Jered P.; Saylor, Melissa; Hofmann, Christian; Hoffman, Eric A.; Lee, Percy P.; Low, Daniel A.

    2018-02-01

    To determine if the parameters relating lung tissue displacement to a breathing surrogate signal in a previously published respiratory motion model vary with the rate of breathing during image acquisition, an anesthetized pig was imaged using multiple fast helical scans to sample the breathing cycle, with simultaneous surrogate monitoring. Three datasets were collected while the animal was mechanically ventilated at different respiratory rates: 12 bpm (breaths per minute), 17 bpm, and 24 bpm. Three sets of motion model parameters describing the correspondences between surrogate signals and tissue displacements were determined. The model error was calculated individually for each dataset, as well as for pairs of parameters and surrogate signals from different experiments. The values of one model parameter, a vector field denoted α, which related tissue displacement to surrogate amplitude, were compared across experiments. The mean model error of the three datasets was 1.00 ± 0.36 mm with a 95th percentile value of 1.69 mm. The mean error computed from all combinations of parameters and surrogate signals from different datasets was 1.14 ± 0.42 mm with a 95th percentile of 1.95 mm. The mean difference in α over all pairs of experiments was 4.7% ± 5.4%, and the 95th percentile was 16.8%. The mean angle between pairs of α was 5.0 ± 4.0 degrees, with a 95th percentile of 13.2 degrees. The motion model parameters were largely unaffected by changes in the breathing rate during image acquisition. The mean error associated with mismatched sets of parameters and surrogate signals was 0.14 mm greater than the error achieved when using parameters and surrogate signals acquired at the same breathing rate, while maximum respiratory motion was 23.23 mm on average.

  15. Global-scale regionalization of hydrological model parameters using streamflow data from many small catchments

    NASA Astrophysics Data System (ADS)

    Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard

    2015-04-01

    Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
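
    The direct-transfer scheme (EXP1) lends itself to a compact sketch: assign each ungauged grid cell the calibrated parameter set of its most similar donor catchment. The snippet below uses plain Euclidean distance on standardized attributes; the study's actual similarity measure is a weighted combination of climatic and physiographic characteristics, so treat this as an assumption-laden illustration:

        import numpy as np

        def regionalize(donor_attrs, donor_params, cell_attrs):
            """Transfer calibrated donor parameter sets to similar grid cells.

            donor_attrs:  (D, A) climatic/physiographic attributes per donor
            donor_params: (D, P) calibrated parameter sets per donor
            cell_attrs:   (C, A) the same attributes for every grid cell
            Returns a (C, P) global parameter map.
            """
            mu, sd = donor_attrs.mean(axis=0), donor_attrs.std(axis=0)
            sd[sd == 0] = 1.0                      # guard constant attributes
            d = (donor_attrs - mu) / sd
            c = (cell_attrs - mu) / sd
            dist = np.linalg.norm(c[:, None, :] - d[None, :, :], axis=2)
            return donor_params[dist.argmin(axis=1)]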

  16. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
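
    The calibration pipeline can be sketched along the lines described above (an emulator trained on “training” runs, with experimental uncertainty folded into the likelihood), reduced here to a scalar simulator output and a grid of candidate inputs. The paper calibrates the full stress-time curve, so everything below, including the scikit-learn GP choice, is a simplifying assumption:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def calibrate(train_theta, train_output, observed, obs_sigma, candidates):
            """Weight candidate input settings by a Gaussian likelihood that
            combines emulator uncertainty with experimental uncertainty."""
            gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
            gp.fit(train_theta, train_output)          # emulate the simulator
            mean, std = gp.predict(candidates, return_std=True)
            var = std**2 + obs_sigma**2
            loglik = -0.5 * ((observed - mean)**2 / var + np.log(2 * np.pi * var))
            w = np.exp(loglik - loglik.max())
            return w / w.sum()                         # normalized posterior weights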

  17. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  18. Maximum likelihood estimation of protein kinetic parameters under weak assumptions from unfolding force spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Aioanei, Daniel; Samorì, Bruno; Brucale, Marco

    2009-12-01

    Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
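
    The piecewise-linear construction of the force characteristic is simple to state in code: between consecutive rupture events, replace the measured force-vs-time trace by a straight-line fit, yielding one force-increase function per unfolding event. A minimal sketch (rupture detection and the ML estimator itself are omitted; names are illustrative):

        import numpy as np

        def piecewise_linear_force(t, F, rupture_idx):
            """One linear force segment per inter-rupture stretch of the trace.

            t, F:        force-vs-time data from a constant-velocity pull
            rupture_idx: indices where rupture events end each segment
            Returns a list of (slope, intercept, t_start, t_end) tuples.
            """
            segments, start = [], 0
            for end in rupture_idx:
                slope, intercept = np.polyfit(t[start:end], F[start:end], 1)
                segments.append((slope, intercept, t[start], t[end - 1]))
                start = end
            return segments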

  19. Method of Implementing Digital Phase-Locked Loops

    NASA Technical Reports Server (NTRS)

    Stephens, Scott A. (Inventor); Thomas, J. Brooks (Inventor)

    1997-01-01

    In a new formulation for digital phase-locked loops, loop-filter constants are determined from loop roots that can each be selectively placed in the s-plane on the basis of a new set of parameters, each with simple and direct physical meaning in terms of loop noise bandwidth, root-specific decay rate, and root-specific damping. Loops of first to fourth order are treated in the continuous-update approximation (B_L·T approaching 0) and in a discrete-update formulation with arbitrary B_L·T. Deficiencies of the continuous-update approximation in large-B_L·T applications are avoided in the new discrete-update formulation.
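
    For orientation, the familiar second-order special case shows how loop-filter constants follow from noise bandwidth and damping in the continuous-update approximation; the paper's formulation is more general (root-specific decay and damping for loops up to fourth order, plus exact discrete-update expressions), so this sketch is only a textbook baseline:

        def second_order_loop_gains(BL, T, zeta=0.7071):
            """Continuous-update (small B_L*T) gains of a 2nd-order digital PLL.

            BL:   one-sided loop noise bandwidth (Hz)
            T:    loop update interval (s)
            zeta: damping factor of the complex root pair
            """
            # Standard second-order relation B_L = wn * (zeta + 1/(4*zeta)) / 2
            # gives the loop natural frequency wn:
            wn = 2.0 * BL / (zeta + 1.0 / (4.0 * zeta))
            K1 = 2.0 * zeta * wn * T    # proportional constant per update
            K2 = (wn * T) ** 2          # integrator constant per update
            return K1, K2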

  20. Simulation of financial market via nonlinear Ising model

    NASA Astrophysics Data System (ADS)

    Ko, Bonggyun; Song, Jae Wook; Chang, Woojin

    2016-09-01

    In this research, we propose a practical method for simulating financial return series whose distribution has a specific heaviness. We employ the Ising model to generate financial return series analogous to real series. The similarity between real financial return series and the simulated ones is statistically verified based on their stylized facts, including the power-law behavior of the tail distribution. We also suggest a scheme for setting the parameters in order to simulate financial return series with a specific tail behavior. The simulation method introduced in this paper is expected to be applicable to other financial products whose price return distribution is fat-tailed.
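
    A toy version of the generating mechanism (a plain nearest-neighbor Metropolis Ising lattice whose magnetization change stands in for aggregate excess demand) illustrates the idea; the paper's nonlinear variant and its parameter-setting scheme for tuning tail heaviness are not reproduced here:

        import numpy as np

        def simulate_returns(n_steps, L=32, beta=0.6, seed=0):
            """Return series from magnetization changes of a 2D Ising lattice."""
            rng = np.random.default_rng(seed)
            s = rng.choice([-1, 1], size=(L, L))
            returns, m_prev = [], s.mean()
            for _ in range(n_steps):
                for _ in range(L * L):               # one Metropolis sweep
                    i, j = rng.integers(L, size=2)
                    nb = s[(i+1) % L, j] + s[(i-1) % L, j] \
                       + s[i, (j+1) % L] + s[i, (j-1) % L]
                    dE = 2 * s[i, j] * nb            # energy cost of flipping spin
                    if dE <= 0 or rng.random() < np.exp(-beta * dE):
                        s[i, j] = -s[i, j]
                m = s.mean()
                returns.append(m - m_prev)           # "return" = demand imbalance
                m_prev = m
            return np.array(returns)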

  1. The influence of societal individualism on a century of tobacco use: modelling the prevalence of smoking.

    PubMed

    Lang, John C; Abrams, Daniel M; De Sterck, Hans

    2015-12-22

    Smoking of tobacco is estimated to have caused approximately six million deaths worldwide in 2014. Responding effectively to this epidemic requires a thorough understanding of how smoking behaviour is transmitted and modified. We present a new mathematical model of the social dynamics that cause cigarette smoking to spread in a population, incorporating aspects of individual and social utility. Model predictions are tested against two independent data sets spanning 25 countries: a newly compiled century-long composite data set on smoking prevalence, and Hofstede's individualism/collectivism measure (IDV). The general model prediction that more individualistic societies will show faster adoption and cessation of smoking is supported by the full 25 country smoking prevalence data set. Calibration of the model to the available smoking prevalence data is possible in a subset of 7 countries. Consistency of fitted model parameters with an additional, independent, data set further supports our model: the fitted value of the country-specific model parameter that determines the relative importance of social and individual factors in the decision of whether or not to smoke, is found to be significantly correlated with Hofstede's IDV for the 25 countries in our data set. Our model in conjunction with extensive data on smoking prevalence provides evidence for the hypothesis that individualism/collectivism may have an important influence on the dynamics of smoking prevalence at the aggregate, population level. Significant implications for public health interventions are discussed.

  2. Constitutive description of human femoropopliteal artery aging.

    PubMed

    Kamenskiy, Alexey; Seas, Andreas; Deegan, Paul; Poulson, William; Anttila, Eric; Sim, Sylvie; Desyatova, Anastasia; MacTaggart, Jason

    2017-04-01

    Femoropopliteal artery (FPA) mechanics play a paramount role in pathophysiology and the artery's response to therapeutic interventions, but data on FPA mechanical properties are scarce. Our goal was to characterize human FPAs over a wide population to derive a constitutive description of FPA aging to be used for computational modeling. Fresh human FPA specimens ([Formula: see text]) were obtained from [Formula: see text] predominantly male (80 %) donors 54±15 years old (range 13-82 years). Morphometric characteristics including radius, wall thickness, opening angle, and longitudinal pre-stretch were recorded. Arteries were subjected to multi-ratio planar biaxial extension to determine constitutive parameters for an invariant-based model accounting for the passive contributions of ground substance, elastin, collagen, and smooth muscle. Nonparametric bootstrapping was used to determine unique sets of material parameters that were used to derive age-group-specific characteristics. Physiologic stress-stretch state was calculated to capture changes with aging. Morphometric and constitutive parameters were derived for seven age groups. Vessel radius, wall thickness, and circumferential opening angle increased with aging, while longitudinal pre-stretch decreased ([Formula: see text]). Age-group-specific constitutive parameters portrayed orthotropic FPA stiffening, especially in the longitudinal direction. Structural changes in artery wall elastin were associated with reduction of physiologic longitudinal and circumferential stretches and stresses with age. These data and the constitutive description of FPA aging shed new light on our understanding of peripheral arterial disease pathophysiology and arterial aging. Application of this knowledge might improve patient selection for specific treatment modalities in personalized, precision medicine algorithms and could assist in device development for treatment of peripheral artery disease.

  3. Ab initio-informed maximum entropy modeling of rovibrational relaxation and state-specific dissociation with application to the O2 + O system

    NASA Astrophysics Data System (ADS)

    Kulakhmetov, Marat; Gallis, Michael; Alexeenko, Alina

    2016-05-01

    Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O2 + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. A full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500-20 000 K temperature range. The VRT model captures 80 × 10^6 state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000-15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulation to full nonequilibrium flow calculations.

  4. Investigating the complexity of precipitation sets within California via the fractal-multifractal method

    NASA Astrophysics Data System (ADS)

    Puente, Carlos E.; Maskey, Mahesh L.; Sivakumar, Bellie

    2017-04-01

    A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted in order to encode highly intermittent daily rainfall records observed over a year. Using such a notion, this research investigates the complexity of rainfall at various stations within the State of California. Specifically, records gathered at (from South to North) Cherry Valley, Merced, Sacramento and Shasta Dam, containing 59, 116, 115 and 72 years, all ending at water year 2015, were encoded and analyzed in detail. The analysis reveals that: (a) the FM approach yields faithful encodings of all records, by years, with mean square and maximum errors in accumulated rain that are less than a mere 2% and 10%, respectively; (b) the evolution of the corresponding "best" FM parameters, allowing visualization of the inter-annual rainfall dynamics from a reduced vantage point, exhibits implicit variability that precludes discriminating between sites and extrapolating to the future; (c) the evolution of the FM parameters, restricted to specific regions within space, makes it possible to find sensible future simulations; and (d) the rain signals at all sites may be termed "equally complex," as use of k-means clustering and conventional phase-space analysis of FM parameters yields comparable results for all sites.

  5. The potentiality of Trichoderma harzianum in alleviation the adverse effects of salinity in faba bean plants.

    PubMed

    Abd El-Baki, G K; Mostafa, Doaa

    2014-12-01

    The effects of the interaction between sodium chloride and Trichoderma harzianum (T24) on growth parameters, ion contents, MDA content, proline, and soluble proteins, as well as on the SDS-PAGE protein profile, were studied in Vicia faba Giza 429. A sharp reduction was found in the fresh and dry mass of shoots and roots with increasing salinity. Trichoderma treatments promoted the growth criteria as compared with the corresponding salinized plants. The water content and leaf area exhibited a marked decrease with increasing salinity. Trichoderma treatments induced a progressive increase in both parameters. Both proline and MDA contents increased progressively as the salinity rose in the soil. Trichoderma treatments considerably retarded the accumulation of both parameters in shoots and roots. Both Na+ and K+ concentrations increased in both organs with increasing salinity levels. The treatment with Trichoderma harzianum enhanced the accumulation of both ions. Exposure of plants to different concentrations of salinity, with or without Trichoderma harzianum treatment, produced marked changes in their protein pattern. Three types of alterations were observed: the synthesis of certain proteins declined significantly, the synthesis of certain other proteins was markedly enhanced, and the synthesis of a set of specific proteins was induced de novo in plants treated with Trichoderma harzianum.

  6. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.
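
    The contrast between the two probabilistic wall-thickness models can be sketched directly: a random variable assigns one thickness draw to the whole wall, whereas a random field produces spatially correlated fluctuations. The 1D exponential-covariance field below is an illustrative stand-in for the model used in the paper, and every number is made up:

        import numpy as np

        def wall_thickness_samples(n_nodes, n_samples, mean=1.5, sd=0.3,
                                   corr_len=20.0, length=100.0, seed=0):
            """Random-variable vs. random-field thickness samples (mm / mm)."""
            rng = np.random.default_rng(seed)
            x = np.linspace(0.0, length, n_nodes)
            # Random variable model: one draw sets the entire wall.
            rv = mean + sd * rng.standard_normal((n_samples, 1)) * np.ones(n_nodes)
            # Random field model: exponential covariance along the vessel axis.
            cov = sd**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
            L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_nodes))
            rf = mean + rng.standard_normal((n_samples, n_nodes)) @ L.T
            return rv, rf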

  7. Identification of growth phases and influencing factors in cultivations with AGE1.HN cells using set-based methods.

    PubMed

    Borchers, Steffen; Freund, Susann; Rath, Alexander; Streif, Stefan; Reichl, Udo; Findeisen, Rolf

    2013-01-01

    Production of bio-pharmaceuticals in cell culture, such as mammalian cells, is challenging. Mathematical models can provide support to the analysis, optimization, and the operation of production processes. In particular, unstructured models are suited for these purposes, since they can be tailored to particular process conditions. To this end, growth phases and the most relevant factors influencing cell growth and product formation have to be identified. Due to noisy and erroneous experimental data, unknown kinetic parameters, and the large number of combinations of influencing factors, currently there are only limited structured approaches to tackle these issues. We outline a structured set-based approach to identify different growth phases and the factors influencing cell growth and metabolism. To this end, measurement uncertainties are taken explicitly into account to bound the time-dependent specific growth rate based on the observed increase of the cell concentration. Based on the bounds on the specific growth rate, we can identify qualitatively different growth phases and (in-)validate hypotheses on the factors influencing cell growth and metabolism. We apply the approach to a mammalian suspension cell line (AGE1.HN). We show that growth in batch culture can be divided into two main growth phases. The initial phase is characterized by exponential growth dynamics, which can be described consistently by a relatively simple unstructured and segregated model. The subsequent phase is characterized by a decrease in the specific growth rate, which, as shown, results from substrate limitation and the pH of the medium. An extended model is provided which describes the observed dynamics of cell growth and main metabolites, and the corresponding kinetic parameters as well as their confidence intervals are estimated. The study is complemented by an uncertainty and outlier analysis. Overall, we demonstrate utility of set-based methods for analyzing cell growth and metabolism under conditions of uncertainty.
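
    The core bounding step admits a compact illustration: if every cell count carries a relative measurement error, interval arithmetic on consecutive samples yields guaranteed envelopes for the specific growth rate. A minimal sketch (the paper's set-based machinery is considerably richer; the uniform relative-error model is an assumption):

        import numpy as np

        def growth_rate_bounds(t, x, rel_err):
            """Envelopes for mu(t) from noisy cell concentration measurements.

            Between samples k and k+1, exponential growth gives
            mu = ln(x[k+1]/x[k]) / (t[k+1]-t[k]); propagating +/- rel_err
            bounds on each count gives mu_lo <= mu <= mu_hi.
            """
            lo, hi = x * (1.0 - rel_err), x * (1.0 + rel_err)
            dt = np.diff(t)
            mu_lo = np.log(lo[1:] / hi[:-1]) / dt
            mu_hi = np.log(hi[1:] / lo[:-1]) / dt
            return mu_lo, mu_hi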

  8. Identification of Growth Phases and Influencing Factors in Cultivations with AGE1.HN Cells Using Set-Based Methods

    PubMed Central

    Borchers, Steffen; Freund, Susann; Rath, Alexander; Streif, Stefan; Reichl, Udo; Findeisen, Rolf

    2013-01-01

    Production of bio-pharmaceuticals in cell culture, such as mammalian cells, is challenging. Mathematical models can provide support to the analysis, optimization, and the operation of production processes. In particular, unstructured models are suited for these purposes, since they can be tailored to particular process conditions. To this end, growth phases and the most relevant factors influencing cell growth and product formation have to be identified. Due to noisy and erroneous experimental data, unknown kinetic parameters, and the large number of combinations of influencing factors, currently there are only limited structured approaches to tackle these issues. We outline a structured set-based approach to identify different growth phases and the factors influencing cell growth and metabolism. To this end, measurement uncertainties are taken explicitly into account to bound the time-dependent specific growth rate based on the observed increase of the cell concentration. Based on the bounds on the specific growth rate, we can identify qualitatively different growth phases and (in-)validate hypotheses on the factors influencing cell growth and metabolism. We apply the approach to a mammalian suspension cell line (AGE1.HN). We show that growth in batch culture can be divided into two main growth phases. The initial phase is characterized by exponential growth dynamics, which can be described consistently by a relatively simple unstructured and segregated model. The subsequent phase is characterized by a decrease in the specific growth rate, which, as shown, results from substrate limitation and the pH of the medium. An extended model is provided which describes the observed dynamics of cell growth and main metabolites, and the corresponding kinetic parameters as well as their confidence intervals are estimated. The study is complemented by an uncertainty and outlier analysis. Overall, we demonstrate utility of set-based methods for analyzing cell growth and metabolism under conditions of uncertainty. PMID:23936299

  9. Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.

    2016-12-01

    Effective modeling of complex physical systems arising in the geosciences depends on knowing parameters which are often difficult or impossible to measure in situ. In this talk we focus on two such problems: estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we will describe, proposed by collaborators D. Estep, T. Butler and others, is a novel stochastic inversion technique grounded in measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space which give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We will focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and on estimating bottom friction in a complicated near-shore coastal application.

  10. System health monitoring using multiple-model adaptive estimation techniques

    NASA Astrophysics Data System (ADS)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE extends existing MMAE methods with new techniques for sampling the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models; a more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models, as sketched below. The first approach, selected grid-based stratification (SGBS), divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the number of parameter dimensions grows: adding more parameters does not require the model count to increase. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated in linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE either to narrow the focus to converged values within a parameter range or to expand the range in the appropriate direction to track parameters outside the current range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates, using a simple moving-average window to filter out noise. The system can be tuned to match desired performance goals by adjusting parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
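
    The LHS sampling step is easy to illustrate with SciPy's quasi-Monte Carlo module: the number of filter models is chosen independently of the parameter dimension, unlike an equally spaced grid whose size grows exponentially. A sketch (the bounds and model count are placeholders, and this is not the GRAPE code itself):

        from scipy.stats import qmc

        def lhs_parameter_samples(bounds, n_models, seed=0):
            """Latin Hypercube locations for a bank of parallel filter models.

            bounds: list of (low, high) tuples, one per parameter
            """
            sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
            unit = sampler.random(n=n_models)     # points in the unit hypercube
            lows = [b[0] for b in bounds]
            highs = [b[1] for b in bounds]
            return qmc.scale(unit, lows, highs)   # rescale to parameter ranges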

  11. Estimation procedure of the efficiency of the heat network segment

    NASA Astrophysics Data System (ADS)

    Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.

    2017-07-01

    An extensive city heat network contains many segments, and each segment transfers heat energy with a different efficiency. This work proposes an original technical approach: it evaluates the energy-efficiency function of a heat network segment by interpreting two hyperbolic functions in the form of a transcendental equation. In effect, the problem studied is how the efficiency of the heat network changes with the ambient temperature. Criteria dependences used to evaluate the set segment efficiency of the heat network and to find the parameters for optimal control of heat supply to remote users were derived with the help of functional analysis methods. In general, the efficiency function of the heat network segment is interpreted as a multidimensional surface, which allows it to be illustrated graphically. It was shown that the solution of the inverse problem is possible as well: the required consumption of the heating agent and its temperature may be found from the set segment efficiency and the ambient temperature, and requirements for heat insulation and pipe diameters may be formulated as well. Calculation results were obtained in a strict analytical form, which allows the derived functional dependences to be investigated for extremums (maximums) under the set external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an already built network, when only the heat agent consumption and pipe temperatures can be changed, and for a network under design, when changes to the material parameters of the network are still possible. This procedure allows clarifying the diameters and lengths of the pipes, types of insulation, etc. The pipe length may be considered an independent parameter for the calculations; optimization of this parameter is performed in accordance with other, economic, criteria for the specific project.

  12. Thermal and volumetric properties of complex aqueous electrolyte solutions using the Pitzer formalism - The PhreeSCALE code

    NASA Astrophysics Data System (ADS)

    Lach, Adeline; Boulahya, Faïza; André, Laurent; Lassin, Arnault; Azaroual, Mohamed; Serin, Jean-Paul; Cézac, Pierre

    2016-07-01

    The thermal and volumetric properties of complex aqueous solutions are described according to the Pitzer equation, explicitly taking into account the speciation in the aqueous solutions. The thermal properties are the apparent relative molar enthalpy (Lϕ) and the apparent molar heat capacity (Cp,ϕ). The volumetric property is the apparent molar volume (Vϕ). Equations describing these properties are obtained from the temperature or pressure derivatives of the excess Gibbs energy and make it possible to calculate the dilution enthalpy (∆HD), the heat capacity (cp) and the density (ρ) of aqueous solutions up to high concentrations. Their implementation in PHREEQC V.3 (Parkhurst and Appelo, 2013) is described and has led to a new numerical tool, called PhreeSCALE. It was tested first using a set of parameters (specific interaction parameters and standard properties) from the literature for two binary systems (Na2SO4-H2O and MgSO4-H2O), for the quaternary K-Na-Cl-SO4 system (heat capacity only) and for the Na-K-Ca-Mg-Cl-SO4-HCO3 system (density only). The results obtained with PhreeSCALE are in agreement with the literature data when the same standard solution heat capacity (Cp0) and volume (V0) values are used. For further applications of this improved computation tool, these standard solution properties were calculated independently, using the Helgeson-Kirkham-Flowers (HKF) equations. With this kind of approach, most of the Pitzer interaction parameters from the literature become obsolete, since they are not consistent with the standard properties calculated according to the HKF formalism. Consequently, a new set of interaction parameters must be determined. This approach was successfully applied to the Na2SO4-H2O and MgSO4-H2O binary systems, providing a new set of optimized interaction parameters, consistent with the standard solution properties derived from the HKF equations.

  13. Perceptual control models of pursuit manual tracking demonstrate individual specificity and parameter consistency.

    PubMed

    Parker, Maximilian G; Tyson, Sarah F; Weightman, Andrew P; Abbott, Bruce; Emsley, Richard; Mansell, Warren

    2017-11-01

    Computational models that simulate individuals' movements in pursuit-tracking tasks have been used to elucidate mechanisms of human motor control. Whilst there is evidence that individuals demonstrate idiosyncratic control-tracking strategies, it remains unclear whether models can be sensitive to these idiosyncrasies. Perceptual control theory (PCT) provides a unique model architecture with an internally set reference value parameter, and can be optimized to fit an individual's tracking behavior. The current study investigated whether PCT models could show temporal stability and individual specificity over time. Twenty adults completed three blocks of fifteen 1-min pursuit-tracking trials. Two blocks (training and post-training) were completed in one session and the third was completed after 1 week (follow-up). The target moved in a one-dimensional, pseudorandom pattern. PCT models were optimized to the training data using a least-mean-squares algorithm, and validated with data from post-training and follow-up. We found significant inter-individual variability (partial η²: .464-.697) and intra-individual consistency (Cronbach's α: .880-.976) in parameter estimates. Polynomial regression revealed that all model parameters, including the reference value parameter, contribute to simulation accuracy. Participants' tracking performances were significantly more accurately simulated by models developed from their own tracking data than by models developed from other participants' data. We conclude that PCT models can be optimized to simulate the performance of an individual and that the test-retest reliability of individual models is a necessary criterion for evaluating computational models of human performance.
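
    A one-dimensional perceptual control loop of the kind the study optimizes can be sketched in a few lines: the cursor is driven so as to keep the perceived cursor-target discrepancy at an internally set reference value. The gain and reference here are stand-in parameters, not fitted values from the paper:

        import numpy as np

        def pct_track(target, dt=0.01, gain=80.0, reference=0.0):
            """Simulate pursuit tracking with a minimal PCT loop."""
            cursor = np.zeros_like(target, dtype=float)
            for k in range(1, len(target)):
                perception = cursor[k-1] - target[k-1]  # perceived discrepancy
                error = reference - perception          # reference is a model parameter
                cursor[k] = cursor[k-1] + gain * error * dt
            return cursor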

  14. Sensitivity of ecological soil-screening levels for metals to exposure model parameterization and toxicity reference values.

    PubMed

    Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill

    2014-10-01

    Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight-normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. © 2014 The Authors. Environmental Toxicology and Chemistry Published by Wiley Periodicals, Inc.
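
    The back-calculation at the heart of an Eco-SSL reduces, in its simplest form, to solving the dietary dose equation for the soil concentration at which exposure equals the TRV. The sketch below collapses the USEPA wildlife model to a single food item with body-weight-normalized intake rates; it is a simplified illustration, not the full Eco-SSL equation:

        def eco_ssl(trv, soil_ir, food_ir, baf, bioavail=1.0):
            """Soil concentration where modeled dietary dose equals the TRV.

            trv:      toxicity reference value (mg/kg body weight/day)
            soil_ir:  body-weight-normalized soil ingestion rate (kg/kg-d)
            food_ir:  body-weight-normalized food ingestion rate (kg/kg-d)
            baf:      soil-to-diet bioaccumulation factor (unitless)
            bioavail: relative bioavailability in food (unitless)

            Dose(C) = C*soil_ir + C*baf*bioavail*food_ir = TRV
            =>  C = trv / (soil_ir + baf*bioavail*food_ir)
            """
            return trv / (soil_ir + baf * bioavail * food_ir)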

  15. SU-E-I-95: Personalized Radiography Technical Parameters for Each Patient and Exam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soares, F; Camozzato, T; Kahl, G

    Purpose: To determine the exact electrical parameters (kV, mAs) a radiological technologist should use, taking into account the exam and the patient's anatomy, while guaranteeing minimum dose and adequate image quality. Methods: A patient absorbed-dose equation was developed by means of the Entrance Skin Dose (ESD), irradiated area, and patient width for a specific anatomy. The ESD is calculated from a developed equation that includes the entrance-surface air KERMA and the backscatter factor, with an air-to-skin conversion coefficient. We developed specific Lambert-Beer attenuation equations derived from mass energy-absorption coefficient data for skin, fat, and muscle and bone as one tissue. The anatomical tissue-thickness distribution at the central X-ray location in the anteroposterior incidence for hand and chest was estimated by discounting constant skin and bone thicknesses from the measured patient width, assuming the remainder to be muscle and fat. A clinical study at a large hospital was conducted in which the actual parameters (kV, mAs, filtration, ripple) used by technologists were combined with the image quality and patient data: anatomy width, height, and weight. Correlations among the best acquired images and the electrical parameters used were confronted with the patient data and dose estimates. The best combinations were used as gold standards. Results: For each anatomy, two equations were developed to calculate the voltage (kV) and exposure (mAs) needed to reproduce and interpolate the gold standards. The patient is measured, the data are input into the equations, and the radiological technologist obtains the right set of electrical parameters for that specific exam. Conclusion: This work indicates that radiological technologists can personalize the exact electrical parameters for each patient exam instead of using standard values. It also guarantees that under- or over-sized patients will receive the right dose for the best image. It will prevent the erroneous empirical adjustments technologists make when examining non-standard patients and reduce the probability of retakes due to over- or under-exposure.
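
    The ESD building block mentioned above follows a standard pattern: entrance-surface air kerma scaled by a backscatter factor and an air-to-tissue conversion coefficient. A minimal sketch (the 1.06 coefficient is a typical soft-tissue value, assumed here rather than taken from this work):

        def entrance_skin_dose(k_air, bsf, f_air_to_skin=1.06):
            """ESD = incident air kerma * backscatter factor * conversion factor.

            k_air: entrance-surface air kerma (mGy)
            bsf:   backscatter factor for the given beam quality and field size
            """
            return k_air * bsf * f_air_to_skin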

  16. Sensitivity of ecological soil-screening levels for metals to exposure model parameterization and toxicity reference values

    PubMed Central

    Sample, Bradley E; Fairbrother, Anne; Kaiser, Ashley; Law, Sheryl; Adams, Bill

    2014-01-01

    Ecological soil-screening levels (Eco-SSLs) were developed by the United States Environmental Protection Agency (USEPA) for the purposes of setting conservative soil screening values that can be used to eliminate the need for further ecological assessment for specific analytes at a given site. Ecological soil-screening levels for wildlife represent a simplified dietary exposure model solved in terms of soil concentrations to produce exposure equal to a no-observed-adverse-effect toxicity reference value (TRV). Sensitivity analyses were performed for 6 avian and mammalian model species, and 16 metals/metalloids for which Eco-SSLs have been developed. The relative influence of model parameters was expressed as the absolute value of the range of variation observed in the resulting soil concentration when exposure is equal to the TRV. Rank analysis of variance was used to identify parameters with greatest influence on model output. For both birds and mammals, soil ingestion displayed the broadest overall range (variability), although TRVs consistently had the greatest influence on calculated soil concentrations; bioavailability in food was consistently the least influential parameter, although an important site-specific variable. Relative importance of parameters differed by trophic group. Soil ingestion ranked 2nd for carnivores and herbivores, but was 4th for invertivores. Different patterns were exhibited, depending on which parameter, trophic group, and analyte combination was considered. The approach for TRV selection was also examined in detail, with Cu as the representative analyte. The underlying assumption that generic body-weight–normalized TRVs can be used to derive protective levels for any species is not supported by the data. Whereas the use of site-, species-, and analyte-specific exposure parameters is recommended to reduce variation in exposure estimates (soil protection level), improvement of TRVs is more problematic. Environ Toxicol Chem 2014;33:2386–2398. PMID:24944000

  17. Marginally specified priors for non-parametric Bayesian estimation

    PubMed Central

    Kessler, David C.; Hoff, Peter D.; Dunson, David B.

    2014-01-01

    Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of such a parameter but will have real information about functionals of the parameter, such as the population mean or variance. The paper proposes a new framework for non-parametric Bayes inference in which the prior distribution for a possibly infinite dimensional parameter is decomposed into two parts: an informative prior on a finite set of functionals, and a non-parametric conditional prior for the parameter given the functionals. Such priors can be easily constructed from standard non-parametric prior distributions in common use and inherit the large support of the standard priors on which they are based. Additionally, posterior approximations under these informative priors can generally be made via minor adjustments to existing Markov chain approximation algorithms for standard non-parametric prior distributions. We illustrate the use of such priors in the context of multivariate density estimation using Dirichlet process mixture models, and in the modelling of high dimensional sparse contingency tables. PMID:25663813

  18. Analysis of regional rainfall-runoff parameters for the Lake Michigan Diversion hydrological modeling

    USGS Publications Warehouse

    Soong, David T.; Over, Thomas M.

    2015-01-01

    Recalibration of the HSPF parameters to the updated inputs and land covers was completed on two representative watershed models selected from the nine by using a manual method (HSPEXP) and an automatic method (PEST). The objective of the recalibration was to develop a regional parameter set that improves the accuracy in runoff volume prediction for the nine study watersheds. Knowledge about flow and watershed characteristics plays a vital role for validating the calibration in both manual and automatic methods. The best performing parameter set was determined by the automatic calibration method on a two-watershed model. Applying this newly determined parameter set to the nine watersheds for runoff volume simulation resulted in “very good” ratings in five watersheds, an improvement as compared to “very good” ratings achieved for three watersheds by the North Branch parameter set.

  19. Constraints on cosmic strings using data from the first Advanced LIGO observing run

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allen, G.; Allocca, A.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bawaj, M.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Canepa, M.; Canizares, P.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davis, D.; Daw, E. J.; Day, B.; De, S.; DeBra, D.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. 
C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Duncan, J.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z. B.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Fitz-Axen, M.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gabel, M.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garufi, F.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, W.; Kim, W. S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. 
K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lo, R. K. L.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Ng, K. K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Ramirez, K. 
E.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Smith, R. J. E.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steer, D. A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, J. A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, M.; Wang, Y.-F.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, K. W. K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. 
J.; Yu, Hang; Yu, Haocun; Yvert, M.; ZadroŻny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2018-05-01

    Cosmic strings are topological defects which can be formed in grand unified theory scale phase transitions in the early universe. They are also predicted to form in the context of string theory. The main mechanism for a network of Nambu-Goto cosmic strings to lose energy is through the production of loops and the subsequent emission of gravitational waves, thus offering an experimental signature for the existence of cosmic strings. Here we report on the analysis conducted to specifically search for gravitational-wave bursts from cosmic string loops in the data of Advanced LIGO's 2015-2016 observing run (O1). No evidence of such signals was found in the data, and as a result we set upper limits on the cosmic string parameters for three recent loop distribution models. In this paper, we initially derive constraints on the string tension Gμ and the intercommutation probability, using not only the burst analysis performed on the O1 data set but also results from the previously published LIGO stochastic O1 analysis, pulsar timing arrays, cosmic microwave background and big-bang nucleosynthesis experiments. We show that these data sets are complementary in that they probe gravitational waves produced by cosmic string loops during very different epochs. Finally, we show that the data sets exclude large parts of the parameter space of the three loop distribution models we consider.

  20. The value of swarm data for practical modeling of plasma devices

    NASA Astrophysics Data System (ADS)

    Napartovich, A. P.; Kochetov, I. V.

    2011-04-01

    The non-thermal plasma is a key component in gas lasers, waste gas cleaners, ozone generators, plasma igniters, flame holders, flow control in high-speed aerodynamics and other applications. The specific feature of the non-thermal plasma is its high sensitivity to variations in governing parameters (gas composition, pressure, pulse duration, E/N parameter). The reactivity of the plasma is due to the appearance of atoms and chemical radicals. For the efficient production of chemically active species, high average electron energy is required, which is controlled by the balance of gain from the electric field and loss in inelastic collisions. In weakly ionized plasma, the electron energy distribution function is far from Maxwellian and must be found numerically for specified conditions. Numerical modeling of processes in plasma technologies requires vast databases of electron scattering cross sections to be available. The only reliable criterion for evaluating the validity of a set of cross sections for a particular species is a correct prediction of the electron transport and kinetic coefficients measured in swarm experiments. This criterion is used traditionally to improve experimentally measured cross sections, as was suggested earlier by Phelps. The set of cross sections subjected to this procedure is called a self-consistent set. Nowadays, such reliable self-consistent sets are known for many species. Problems encountered in implementing the fitting procedure and examples of its successful applications are described in the paper.

  1. The use and misuse of V(c,max) in Earth System Models.

    PubMed

    Rogers, Alistair

    2014-02-01

    Earth System Models (ESMs) aim to project global change. Central to this aim is the need to accurately model global carbon fluxes. Photosynthetic carbon dioxide assimilation by the terrestrial biosphere is the largest of these fluxes, and in many ESMs is represented by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. The maximum rate of carboxylation by the enzyme Rubisco, commonly termed Vc,max, is a key parameter in the FvCB model. This study investigated the derivation of the values of Vc,max used to represent different plant functional types (PFTs) in ESMs. Four methods for estimating Vc,max were identified: (1) an empirical or (2) a mechanistic relationship was used to relate Vc,max to leaf N content; (3) Vc,max was estimated using an approach based on the optimization of photosynthesis and respiration; or (4) a user-defined Vc,max was calibrated to obtain a target model output. Despite representing the same PFTs, the land model components of ESMs were parameterized with a wide range of values for Vc,max (-46 to +77% of the PFT mean). In many cases, parameterization was based on limited data sets and poorly defined coefficients that were used to adjust model parameters and set PFT-specific values for Vc,max. Examination of the models that linked leaf N mechanistically to Vc,max identified potential changes to fixed parameters that collectively would decrease Vc,max by 31% in C3 plants and 11% in C4 plants. Plant trait databases are now available that offer an excellent opportunity for models to update PFT-specific parameters used to estimate Vc,max. However, data for parameterizing some PFTs, particularly those in the Tropics and the Arctic, are either highly variable or largely absent.
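
    As a concrete illustration of method (1), the sketch below encodes a linear empirical relationship between area-based leaf N and Vc,max. The coefficients and units are illustrative placeholders, not values taken from any particular ESM or from this study.

    ```python
    # Hypothetical sketch of method (1): an empirical linear relation between
    # area-based leaf nitrogen and Vc,max. Slope and intercept are placeholder
    # values for illustration only.
    def vcmax_from_leaf_n(n_area, intercept=10.0, slope=25.0):
        """Estimate Vc,max (umol m-2 s-1) from leaf N content (g N m-2)."""
        return intercept + slope * n_area

    # Example: a leaf with 2.0 g N m-2
    print(vcmax_from_leaf_n(2.0))  # -> 60.0
    ```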

  2. Improved modeling of clinical data with kernel methods.

    PubMed

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems. Copyright © 2011 Elsevier B.V. All rights reserved.
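
    The description above translates naturally into a small kernel: per-variable similarities normalized by each variable's range and then averaged, so every variable contributes equally. A minimal sketch, assuming a (r - |x - y|)/r form for continuous or ordinal variables and an identity match for nominal ones; this follows the abstract's description rather than the authors' exact formulation, and the variables and values are hypothetical.

    ```python
    import numpy as np

    def clinical_kernel(x, y, ranges, nominal):
        """Similarity of two patients, averaged over variables.

        Continuous/ordinal variables: (r - |x - y|) / r with r the observed
        range; nominal variables: 1 if equal, 0 otherwise. Sketch only.
        """
        sims = []
        for xi, yi, r, is_nom in zip(x, y, ranges, nominal):
            if is_nom:
                sims.append(1.0 if xi == yi else 0.0)
            else:
                sims.append((r - abs(xi - yi)) / r)
        return float(np.mean(sims))

    # Hypothetical example: age (range 60), parity (nominal), CA-125 (range 500)
    p1, p2 = [45, 1, 120.0], [52, 1, 300.0]
    print(clinical_kernel(p1, p2, ranges=[60, None, 500],
                          nominal=[False, True, False]))
    ```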

  3. IPO: a tool for automated optimization of XCMS parameters.

    PubMed

    Libiseller, Gunnar; Dvorzak, Michaela; Kleb, Ulrike; Gander, Edgar; Eisenberg, Tobias; Madeo, Frank; Neumann, Steffen; Trausinger, Gert; Sinner, Frank; Pieber, Thomas; Magnes, Christoph

    2015-04-16

    Untargeted metabolomics generates a huge amount of data. Software packages for automated data processing are crucial to successfully process these data. A variety of such software packages exist, but the outcome of data processing strongly depends on algorithm parameter settings. If they are not carefully chosen, suboptimal parameter settings can easily lead to biased results. Therefore, parameter settings also require optimization. Several parameter optimization approaches have already been proposed, but a software package for parameter optimization which is free of intricate experimental labeling steps, fast and widely applicable is still missing. We implemented the software package IPO ('Isotopologue Parameter Optimization'), which is fast, free of labeling steps, and applicable to data from different kinds of samples, different liquid chromatography-high resolution mass spectrometry methods and different instruments. IPO optimizes XCMS peak picking parameters by using natural, stable 13C isotopic peaks to calculate a peak picking score. Retention time correction is optimized by minimizing relative retention time differences within peak groups. Grouping parameters are optimized by maximizing the number of peak groups that show one peak from each injection of a pooled sample. The different parameter settings are generated by design of experiments, and the resulting scores are evaluated using response surface models. IPO was tested on three different data sets, each consisting of a training set and a test set. IPO resulted in an increase of reliable groups (146%-361%), a decrease of non-reliable groups (3%-8%) and a reduction of the retention time deviation to one third. IPO was successfully applied to data derived from liquid chromatography coupled to high resolution mass spectrometry from three studies with different sample types and different chromatographic methods and devices. We were also able to show the potential of IPO to increase the reliability of metabolomics data. The source code is implemented in R, tested on Linux and Windows and it is freely available for download at https://github.com/glibiseller/IPO . The training sets and test sets can be downloaded from https://health.joanneum.at/IPO .
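
    The core optimization loop (score candidate settings laid out by a design of experiments, then fit and maximize a response surface) can be sketched generically. This is not IPO's R implementation; the quadratic surface, the grid, and the synthetic score function below are all stand-ins for illustration.

    ```python
    import numpy as np

    def fit_quadratic_surface(p1, p2, score):
        # Design matrix for z = b0 + b1*x + b2*y + b3*x^2 + b4*y^2 + b5*x*y
        X = np.column_stack([np.ones_like(p1), p1, p2, p1**2, p2**2, p1 * p2])
        beta, *_ = np.linalg.lstsq(X, score, rcond=None)
        return beta

    def surface_max(beta, grid1, grid2):
        # Evaluate the fitted surface on a grid and return its maximizer
        g1, g2 = np.meshgrid(grid1, grid2)
        z = (beta[0] + beta[1]*g1 + beta[2]*g2 +
             beta[3]*g1**2 + beta[4]*g2**2 + beta[5]*g1*g2)
        i = np.unravel_index(np.argmax(z), z.shape)
        return g1[i], g2[i]

    # Hypothetical scoring of two parameters (e.g., ppm and peakwidth)
    rng = np.random.default_rng(0)
    p1 = rng.uniform(5, 50, 30)
    p2 = rng.uniform(5, 30, 30)
    score = -(p1 - 25)**2 - 2*(p2 - 15)**2 + rng.normal(0, 5, 30)
    beta = fit_quadratic_surface(p1, p2, score)
    print(surface_max(beta, np.linspace(5, 50, 200), np.linspace(5, 30, 200)))
    ```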

  4. Evaluating Uncertainty of Runoff Simulation using SWAT model of the Feilaixia Watershed in China Based on the GLUE Method

    NASA Astrophysics Data System (ADS)

    Chen, X.; Huang, G.

    2017-12-01

    In recent years, distributed hydrological models have been widely used in storm water management, water resources protection and so on. Therefore, how to evaluate the uncertainty of such models reasonably and efficiently has become a hot topic. In this paper, the soil and water assessment tool (SWAT) model is constructed for the study area of China's Feilaixia watershed, and the uncertainty of the runoff simulation is analyzed in depth using the GLUE method. Taking the initial parameter range of the GLUE method as the research core, the influence of different initial parameter ranges on model uncertainty is studied. Two sets of parameter ranges are chosen as the object of study: the first (range 1) is recommended by SWAT-CUP and the second (range 2) is calibrated by SUFI-2. The results showed that under the same number of simulations (10,000), the overall uncertainty obtained with range 2 is less than with range 1. Specifically, the number of "behavioral" parameter sets is 10,000 for range 2 and 4,448 for range 1. In the calibration and the validation, the ratio of P-factor to R-factor is 1.387 and 1.391 for range 1, and 1.405 and 1.462 for range 2, respectively. In addition, the simulation results of range 2 are better, with NS and R2 slightly higher than for range 1. Therefore, it can be concluded that using the parameter range calibrated by SUFI-2 as the initial parameter range for GLUE is an effective way to capture and evaluate the simulation uncertainty.
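
    The GLUE procedure itself is compact: sample parameter sets, keep those whose likelihood measure (here Nash-Sutcliffe efficiency) clears a behavioral threshold, and summarize the retained simulations as prediction bounds. A minimal sketch, not SWAT-CUP's implementation; the simulate callable, threshold, and toy model are placeholders.

    ```python
    import numpy as np

    def glue(param_sets, simulate, observed, threshold=0.5):
        """Keep 'behavioral' parameter sets (NS > threshold) and return
        2.5%/97.5% prediction bounds plus normalized likelihood weights."""
        obs = np.asarray(observed)
        denom = np.sum((obs - obs.mean())**2)
        sims, ns = [], []
        for p in param_sets:
            sim = np.asarray(simulate(p))
            ns_val = 1.0 - np.sum((obs - sim)**2) / denom
            if ns_val > threshold:          # behavioral parameter set
                sims.append(sim)
                ns.append(ns_val)
        sims = np.array(sims)
        w = np.array(ns) / np.sum(ns)       # normalized likelihood weights
        lower = np.quantile(sims, 0.025, axis=0)
        upper = np.quantile(sims, 0.975, axis=0)
        return lower, upper, w

    # Toy usage: a one-parameter "model" and synthetic observations
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 50)
    obs = 2.0 * t + rng.normal(0.0, 0.1, t.size)
    params = rng.uniform(0.0, 4.0, 1000)    # candidate slopes
    lo95, hi95, w = glue(params, lambda p: p * t, obs)
    ```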

  5. Repeat sequence chromosome specific nucleic acid probes and methods of preparing and using

    DOEpatents

    Weier, H.U.G.; Gray, J.W.

    1995-06-27

    A primer directed DNA amplification method to isolate efficiently chromosome-specific repeated DNA wherein degenerate oligonucleotide primers are used is disclosed. The probes produced are a heterogeneous mixture that can be used with blocking DNA as a chromosome-specific staining reagent, and/or the elements of the mixture can be screened for high specificity, size and/or high degree of repetition among other parameters. The degenerate primers are sets of primers that vary in sequence but are substantially complementary to highly repeated nucleic acid sequences, preferably clustered within the template DNA, for example, pericentromeric alpha satellite repeat sequences. The template DNA is preferably chromosome-specific. Exemplary primers and probes are disclosed. The probes of this invention can be used to determine the number of chromosomes of a specific type in metaphase spreads, in germ line and/or somatic cell interphase nuclei, micronuclei and/or in tissue sections. Also provided is a method to select arbitrarily repeat sequence probes that can be screened for chromosome-specificity. 18 figs.

  6. Repeat sequence chromosome specific nucleic acid probes and methods of preparing and using

    DOEpatents

    Weier, Heinz-Ulrich G.; Gray, Joe W.

    1995-01-01

    A primer directed DNA amplification method to isolate efficiently chromosome-specific repeated DNA wherein degenerate oligonucleotide primers are used is disclosed. The probes produced are a heterogeneous mixture that can be used with blocking DNA as a chromosome-specific staining reagent, and/or the elements of the mixture can be screened for high specificity, size and/or high degree of repetition among other parameters. The degenerate primers are sets of primers that vary in sequence but are substantially complementary to highly repeated nucleic acid sequences, preferably clustered within the template DNA, for example, pericentromeric alpha satellite repeat sequences. The template DNA is preferably chromosome-specific. Exemplary primers and probes are disclosed. The probes of this invention can be used to determine the number of chromosomes of a specific type in metaphase spreads, in germ line and/or somatic cell interphase nuclei, micronuclei and/or in tissue sections. Also provided is a method to select arbitrarily repeat sequence probes that can be screened for chromosome-specificity.

  7. Hybrid pairwise likelihood analysis of animal behavior experiments.

    PubMed

    Cattelan, Manuela; Varin, Cristiano

    2013-12-01

    The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.

  8. Estimating age at a specified length from the von Bertalanffy growth function

    USGS Publications Warehouse

    Ogle, Derek H.; Isermann, Daniel A.

    2017-01-01

    Estimating the time required (i.e., age) for fish in a population to reach a specific length (e.g., legal harvest length) is useful for understanding population dynamics and simulating the potential effects of length-based harvest regulations. The age at which a population reaches a specific mean length is typically estimated by fitting a von Bertalanffy growth function to length-at-age data and then rearranging the best-fit equation to solve for age at the specified length. This process precludes the use of standard frequentist methods to compute confidence intervals and compare estimates of age at the specified length among populations. We provide a parameterization of the von Bertalanffy growth function that has age at a specified length as a parameter. With this parameterization, age at a specified length is directly estimated, and standard methods can be used to construct confidence intervals and make among-group comparisons for this parameter. We demonstrate use of the new parameterization with two data sets.
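
    For reference, under the usual three-parameter form of the von Bertalanffy growth function, L(t) = Linf(1 - exp(-K(t - t0))), the age at which the mean length reaches a specified L* follows in closed form. The paper's contribution is to estimate this age directly as a model parameter (so standard confidence-interval methods apply); the inversion below is shown only as an illustration, with made-up parameter values.

    ```python
    import numpy as np

    # Inverting the typical VBGF, L(t) = Linf * (1 - exp(-K * (t - t0))),
    # gives t(L*) = t0 - ln(1 - L*/Linf) / K.
    def age_at_length(l_star, linf, k, t0):
        if l_star >= linf:
            raise ValueError("L* must be below the asymptotic length Linf")
        return t0 - np.log(1.0 - l_star / linf) / k

    # Example with illustrative parameter values (lengths in mm, age in years)
    print(age_at_length(l_star=380.0, linf=450.0, k=0.25, t0=-0.5))
    ```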

  9. Laboratory-scale anaerobic sequencing batch reactor for treatment of stillage from fruit distillation.

    PubMed

    Rada, Elena Cristina; Ragazzi, Marco; Torretta, Vincenzo

    2013-01-01

    This work describes batch anaerobic digestion tests carried out on stillages, the residue of the fruit distillation process, in order to contribute to the setting of design parameters for a planned plant. The experimental apparatus was characterized by three reactors, each with a useful volume of 5 L. The different phases of the work carried out were: determining the basic components of the chemical oxygen demand (COD) of the stillages; determining the specific production of biogas; and estimating the rapidly biodegradable COD contained in the stillages. In particular, the main goal of the anaerobic digestion tests on stillages was to measure the parameters of specific gas production (SGP) and gas production rate (GPR) in reactors in which stillages were being digested using ASBR (anaerobic sequencing batch reactor) technology. Runs were developed with increasing concentrations of the feed. The optimal loads for obtaining the maximum SGP and GPR values were 8-9 gCOD L(-1) and 0.9 gCOD g(-1) volatile solids.

  10. Evaluating the influence of plant-specific physiological parameterizations on the partitioning of land surface energy fluxes

    NASA Astrophysics Data System (ADS)

    Sulis, Mauro; Langensiepen, Matthias; Shrestha, Prabhakar; Schickling, Anke; Simmer, Clemens; Kollet, Stefan

    2015-04-01

    Vegetation has a significant influence on the partitioning of radiative forcing and on the spatial and temporal variability of soil water and soil temperature. Therefore, plant physiological properties play a key role in mediating and amplifying interactions and feedback mechanisms in the soil-vegetation-atmosphere continuum. Because of their direct impact on latent heat fluxes, these properties may also influence weather-generating processes, such as the evolution of the atmospheric boundary layer (ABL). In land surface models, plant physiological properties are usually obtained from literature synthesis by unifying several plant/crop species into predefined vegetation classes. In this work, crop-specific physiological characteristics, retrieved from detailed field measurements, are included in the bio-physical parameterization of the Community Land Model (CLM), which is a component of the Terrestrial Systems Modeling Platform (TerrSysMP). The measured set of parameters for two typical European mid-latitudinal crops (sugar beet and winter wheat) is validated using eddy covariance measurements (sensible heat and latent heat) over multiple years from three measurement sites located in the North Rhine-Westphalia region, Germany. We found clear improvements in the CLM simulations when the crop-specific physiological characteristics were used instead of the generic crop type. In particular, the increase of latent heat fluxes in conjunction with decreased sensible heat fluxes as simulated by the two new crop-specific parameter sets leads to an improved quantification of the diurnal energy partitioning. These findings are cross-validated using estimates of gross primary production extracted from net ecosystem exchange measurements. This independent analysis reveals that the better agreement between observed and simulated latent heat using the plant-specific physiological properties largely stems from an improved simulation of the photosynthesis process owing to a better estimation of the Rubisco enzyme kinetics. Finally, to evaluate the effects of the crop-specific parameterizations on ABL dynamics, we perform a series of semi-idealized land-atmosphere coupled simulations by hypothesizing three cropland configurations. These numerical experiments reveal different heat and moisture budgets of the ABL that clearly impact the evolution of the boundary layer when the crop-specific physiological properties are used.

  11. Reducing sojourn points from recurrence plots to improve transition detection: Application to fetal heart rate transitions.

    PubMed

    Zaylaa, Amira; Charara, Jamal; Girault, Jean-Marc

    2015-08-01

    The analysis of biomedical signals demonstrating complexity through recurrence plots is challenging. Quantification of recurrences is often biased by sojourn points that hide dynamic transitions. To overcome this problem, time series have previously been embedded at high dimensions. However, neither the elimination of sojourn points and the rate of detection, nor the resulting enhancement of transition detection, has been quantified. This paper reports our ongoing efforts to improve the detection of dynamic transitions from logistic maps and fetal hearts by reducing sojourn points. Three signal-based recurrence plots were developed: embedded with specific settings, derivative-based, and m-time pattern. Determinism, cross-determinism and the percentage of reduced sojourn points were computed to detect transitions. For logistic maps, increases of 50% and 34.3% in detection sensitivity over alternatives were achieved by the m-time pattern and the embedded recurrence plots with specific settings, respectively, with 100% specificity. For fetal heart rates, embedded recurrence plots with specific settings provided the best performance, followed by the derivative-based recurrence plot and then the unembedded recurrence plot using the determinism parameter. The relative errors between healthy and distressed fetuses were 153%, 95% and 91%. More than 50% of sojourn points were eliminated, allowing better detection of heart transitions triggered by gaseous exchange factors. This could be significant in improving the diagnosis of fetal state. Copyright © 2014 Elsevier Ltd. All rights reserved.
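
    To make the underlying objects concrete, the sketch below builds a recurrence matrix from a time-delay embedding of a logistic-map series; raising the embedding dimension is the classical way to thin out sojourn points. The paper's derivative-based and m-time-pattern variants are not reproduced here, and the threshold, dimension, and delay values are illustrative.

    ```python
    import numpy as np

    def recurrence_plot(x, dim=3, delay=1, eps=0.1):
        """Recurrence matrix of a time series after time-delay embedding.
        Sojourn points (recurrences from slowly varying segments) shrink
        as the embedding dimension grows."""
        n = len(x) - (dim - 1) * delay
        emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
        dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        return (dist < eps).astype(int)

    # Example: logistic map in its chaotic regime
    x = np.empty(500)
    x[0] = 0.4
    for t in range(499):
        x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
    R = recurrence_plot(x, dim=3, delay=1, eps=0.1)
    print(R.shape, R.mean())   # matrix size and recurrence rate
    ```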

  12. Investigations of quantum heuristics for optimization

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui

    We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numeric investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.

  13. UNITRAN (UNIversal TRANslator): A Principle-Based Approach to Machine Translation.

    DTIC Science & Technology

    1987-12-01

    Only fragmentary OCR text is available for this record: "... not only permits a language to be de ..."; "Slocum's system (1984a) relies on a separate set of context-free language-specific rules for each source ..."; "... requirements as small subject domain, narrow linguistic coverage, and enormous lexical entries (as found in exclusively semantic-based systems)."

  14. AlignMe—a membrane protein sequence alignment web server

    PubMed Central

    Stamm, Marcus; Staritzbichler, René; Khafizov, Kamil; Forrest, Lucy R.

    2014-01-01

    We present a web server for pair-wise alignment of membrane protein sequences, using the program AlignMe. The server makes available two operational modes of AlignMe: (i) sequence to sequence alignment, taking two sequences in fasta format as input, combining information about each sequence from multiple sources and producing a pair-wise alignment (PW mode); and (ii) alignment of two multiple sequence alignments to create family-averaged hydropathy profile alignments (HP mode). For the PW sequence alignment mode, four different optimized parameter sets are provided, each suited to pairs of sequences with a specific similarity level. These settings utilize different types of inputs: (position-specific) substitution matrices, secondary structure predictions and transmembrane propensities from transmembrane predictions or hydrophobicity scales. In the second (HP) mode, each input multiple sequence alignment is converted into a hydrophobicity profile averaged over the provided set of sequence homologs; the two profiles are then aligned. The HP mode enables qualitative comparison of transmembrane topologies (and therefore potentially of 3D folds) of two membrane proteins, which can be useful if the proteins have low sequence similarity. In summary, the AlignMe web server provides user-friendly access to a set of tools for analysis and comparison of membrane protein sequences. Access is available at http://www.bioinfo.mpg.de/AlignMe PMID:24753425

  15. Flight motor set 360L008 (STS-32R). Volume 1: System overview

    NASA Technical Reports Server (NTRS)

    Garecht, D. M.

    1990-01-01

    Flight motor set 360L008 was launched as part of NASA space shuttle mission STS-32R. As with all previous redesigned solid rocket motor launches, overall motor performance was excellent. All ballistic contract end item specification parameters were verified with the exception of ignition interval and rise rates, which could not be verified due to elimination of developmental flight instrumentation. However, the available low sample rate data showed nominal propulsion performance. All ballistic and mass property parameters closely matched the predicted values and were well within the required contract end item specification levels that could be assessed. All field joint heaters and igniter joint heaters performed without anomalies. Redesigned field joint heaters and the redesigned left-hand igniter heater were used on this flight. The changes to the heaters were primarily to improve durability and reduce handling damage. Evaluation of the ground environment instrumentation measurements again verified thermal mode analysis data and showed agreement with predicted environmental effects. No launch commit criteria violation occurred. Postflight inspection again verified superior performance of the insulation, phenolics, metal parts, and seals. Postflight evaluation indicated both nozzles performed as expected during flight. All combustion gas was contained by insulation in the field and case-to-nozzle joints. Recommendations were made concerning improved thermal modeling and measurements. The rationale for these recommendations and complete result details are presented.

  16. Biogeochemical regions of the Mediterranean Sea: An objective multidimensional and multivariate environmental approach

    NASA Astrophysics Data System (ADS)

    Reygondeau, Gabriel; Guieu, Cécile; Benedetti, Fabio; Irisson, Jean-Olivier; Ayata, Sakina-Dorothée; Gasparini, Stéphane; Koubbi, Philippe

    2017-02-01

    When dividing the ocean, the aim is generally to summarise a complex system into a representative number of units, each representing a specific environment, a biological community or a socio-economical specificity. Recently, several geographical partitions of the global ocean have been proposed using statistical approaches applied to remote sensing or observations gathered during oceanographic cruises. Such geographical frameworks defined at a macroscale appear hardly applicable to characterise the biogeochemical features of semi-enclosed seas that are driven by smaller-scale chemical and physical processes. Following Longhurst's biogeochemical partitioning of the pelagic realm, this study investigates the environmental divisions of the Mediterranean Sea using a large set of environmental parameters. These parameters were specified in both the horizontal and vertical dimensions to provide a 3D spatial framework for environmental management (12 regions found for the epipelagic, 12 for the mesopelagic, 13 for the bathypelagic and 26 for the seafloor). We show that: (1) the contribution of the longitudinal environmental gradient to the biogeochemical partitions decreases with depth; (2) the partition of the surface layer cannot be extrapolated to other vertical layers as the partition is driven by a different set of environmental variables. This new partitioning of the Mediterranean Sea has strong implications for conservation as it highlights that management must account for the differences in zoning with depth at a regional scale.

  17. Continuum modelling of pedestrian flows - Part 2: Sensitivity analysis featuring crowd movement phenomena

    NASA Astrophysics Data System (ADS)

    Duives, Dorine C.; Daamen, Winnie; Hoogendoorn, Serge P.

    2016-04-01

    In recent years numerous pedestrian simulation tools have been developed that can support crowd managers and government officials in their tasks. New technologies to monitor pedestrian flows are in dire need of models that allow for rapid state-estimation. Many contemporary pedestrian simulation tools model the movements of pedestrians at a microscopic level, which does not provide an exact solution. Macroscopic models capture the fundamental characteristics of the traffic state at a more aggregate level, and generally have a closed form solution which is necessary for rapid state estimation for traffic management purposes. This contribution presents a next step in the calibration and validation of the macroscopic continuum model detailed in Hoogendoorn et al. (2014). The influence of global and local route choice on the development of crowd movement phenomena, such as dissipation, lane-formation and stripe-formation, is studied. This study shows that most self-organization phenomena and behavioural trends only develop under very specific conditions, and as such can only be simulated using specific parameter sets. Moreover, all crowd movement phenomena can be reproduced by means of the continuum model using one parameter set. This study concludes that the incorporation of local route choice behaviour and the balancing of the aptitude of pedestrians with respect to their own class and other classes are both essential in the correct prediction of crowd movement dynamics.

  18. The ferromagnetic-spin glass transition in PdMn alloys: symmetry breaking of ferromagnetism and spin glass studied by a multicanonical method.

    PubMed

    Kato, Tomohiko; Saita, Takahiro

    2011-03-16

    The magnetism of Pd(1-x)Mn(x) is investigated theoretically. A localized spin model for Mn spins that interact with short-range antiferromagnetic interactions and long-range ferromagnetic interactions via itinerant d electrons is set up, with no adjustable parameters. A multicanonical Monte Carlo simulation, combined with a procedure of symmetry breaking, is employed to discriminate between the ferromagnetic and spin glass orders. The transition temperature and the low-temperature phase are determined from the temperature variation of the specific heat and the probability distributions of the ferromagnetic order parameter and the spin glass order parameter at different concentrations. The calculation results reveal that only the ferromagnetic phase exists at x < 0.02, that only the spin glass phase exists at x > 0.04, and that the two phases coexist at intermediate concentrations. This result agrees semi-quantitatively with experimental results.

  19. Saturn systems holddown acoustic efficiency and normalized acoustic power spectrum.

    NASA Technical Reports Server (NTRS)

    Gilbert, D. W.

    1972-01-01

    Saturn systems field acoustic data are used to derive mid- and far-field prediction parameters for rocket engine noise. The data were obtained during Saturn vehicle launches at the Kennedy Space Center. The data base is a sorted set of acoustic data measured during the period 1961 through 1971 for Saturn system launches SA-1 through AS-509. The model assumes hemispherical radiation from a simple source located at the intersection of the longitudinal axis of each booster and the engine exit plane. The model parameters are evaluated only during vehicle holddown. The acoustic normalized power spectrum and efficiency for each system are isolated as a composite from the data using linear numerical methods. The specific definitions of each allow separation. The resulting power spectra are nondimensionalized as a function of rocket engine parameters. The nondimensional Saturn system acoustic spectrum and efficiencies are compared as a function of Strouhal number with power spectra from other systems.

  20. Auricular anthropometry of Hong Kong Chinese babies.

    PubMed

    Fok, T F; Hon, K L; So, H K; Ng, P C; Wong, E; Lee, A K Y; Chang, A

    2004-02-01

    To provide a database of the auricular measurements of Chinese infants born in Hong Kong. Prospective cross-sectional study. A total of 2384 healthy singletons, born consecutively at the Prince of Wales Hospital and the Union Hospital from October 1998 to September 2000, were included in the study. The range of gestation was 33-42 weeks. Measurements included ear width (EW), ear length (EL) and ear position (EP). The data show generally higher values for males in the parameters measured. When compared with previously published data for Caucasian and Jordanian term babies, Chinese babies have shorter EL. The ears were within normal position in nearly all our infants. The human ear appears to grow in a remarkably constant fashion. This study establishes the first set of gestational age-specific standards of the ear parameters for Chinese newborns, potentially enabling early syndromal diagnosis. There are significant inter-racial differences in these ear parameters.

  1. Stochastic Car-Following Model for Explaining Nonlinear Traffic Phenomena

    NASA Astrophysics Data System (ADS)

    Meng, Jianping; Song, Tao; Dong, Liyun; Dai, Shiqiang

    There is a common time parameter representing the sensitivity or the lag (response) time of drivers in many car-following models. From the viewpoint of traffic psychology, this parameter can be considered the perception-response time (PRT). Generally, this parameter was set to a constant in previous models. However, the PRT is not a constant but a random variable described by a lognormal distribution. Thus stochasticity can be naturally introduced into car-following models by recovering the probability distribution of the PRT. To demonstrate this idea, a specific stochastic model is constructed based on the optimal velocity model. By conducting simulations under periodic boundary conditions, it is found that some important traffic phenomena, such as hysteresis and phantom traffic jams, can be reproduced more realistically. In particular, an interesting experimental feature of traffic jams, i.e., two moving jams propagating in parallel with constant speed, stably and sustainably, is successfully captured by the present model.
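
    A minimal sketch of this construction: the optimal velocity model with each driver's sensitivity set by a perception-response time drawn from a lognormal distribution. All parameter values (the optimal-velocity function shape, PRT median and spread, road length) are illustrative, not the paper's, and no collision handling is included.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def optimal_velocity(headway, v_max=30.0, h_c=25.0):
        # A standard OVM-style desired-speed function of headway
        return v_max * (np.tanh(headway - h_c) + np.tanh(h_c)) / (1 + np.tanh(h_c))

    n, steps, dt, road = 20, 2000, 0.1, 400.0
    x = np.sort(rng.uniform(0, road, n))     # car positions on a ring road
    v = np.zeros(n)
    prt = rng.lognormal(mean=0.0, sigma=0.3, size=n)  # lognormal PRT per driver

    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % road          # periodic boundary
        v += dt * (optimal_velocity(headway) - v) / prt  # sensitivity = 1/PRT
        x = (x + dt * v) % road

    print("speed spread after relaxation:", v.min(), v.max())
    ```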

  2. Modeling of copper sorption onto GFH and design of full-scale GFH adsorbers.

    PubMed

    Steiner, Michele; Pronk, Wouter; Boller, Markus A

    2006-03-01

    During rain events, copper wash-off occurring from copper roofs results in environmental hazards. In this study, columns filled with granulated ferric hydroxide (GFH) were used to treat copper-containing roof runoff. It was shown that copper could be removed to a high extent. A model was developed to describe this removal process. The model was based on the Two Region Model (TRM), extended with an additional diffusion zone. The extended model was able to describe the copper removal in long-term experiments (up to 125 days) with variable flow rates reflecting realistic runoff events. The four parameters of the model were estimated from data gained in dedicated column experiments, each chosen for maximum sensitivity to the respective parameter. After model validation, the parameter set was used for the design of full-scale adsorbers. These full-scale adsorbers show high removal rates during extended periods of time.

  3. Mathematical model for carbon dioxide evolution from the thermophilic composting of synthetic food wastes made of dog food

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, J.I.; Tsai, J.J.; Wu, K.H.

    2005-07-01

    The impacts of aeration and agitation on the composting process of synthetic food wastes made of dog food were studied in a laboratory-scale reactor. Two major peaks of the CO2 evolution rate were observed. Each peak represented an independent stage of composting associated with the activities of thermophilic bacteria. CO2 evolutions, known to correlate well with microbial activities and reactor temperatures, were fitted successfully to a modified Gompertz equation, which incorporated three biokinetic parameters, namely, the CO2 evolution potential, the specific CO2 evolution rate, and the lag phase time. No parameters that describe the impact of operating variables are involved. The model is only valid for the specified experimental conditions and may differ under other conditions. The effects of operating parameters such as aeration and agitation were studied statistically with a multivariate regression technique. Contour plots were constructed using the regression equations to examine the dependence of CO2 evolution potentials on aeration and agitation. In the first stage, a maximum CO2 evolution potential was found when the aeration rate and the agitation parameter were set at 1.75 l/kg solids-min and 0.35, respectively. In the second stage, a maximum existed when the aeration rate and the agitation parameter were set at 1.8 l/kg solids-min and 0.5, respectively. The methods presented here can also be applied to the optimization of large-scale composting facilities that are operated differently and take longer.
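
    For reference, the modified Gompertz form commonly used for cumulative gas evolution, written with the three biokinetic parameters named above, is shown below. The paper's exact variant may differ, so treat this as the standard reference form rather than the authors' equation.

    ```latex
    % Standard modified Gompertz curve for cumulative CO2 evolution C(t):
    %   C_0     -- CO2 evolution potential
    %   R_m     -- maximum (specific) CO2 evolution rate
    %   \lambda -- lag-phase time
    C(t) = C_0 \exp\!\left\{ -\exp\!\left[ \frac{R_m e}{C_0}\,(\lambda - t) + 1 \right] \right\}
    ```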

  4. Beyond activity tracking: next-generation wearable and implantable sensor technologies (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Mercier, Patrick

    2017-05-01

    Current-generation wearable devices have had success continuously measuring the activity and heart rate of subjects during exercise and daily life activities, resulting in interesting new data sets that can, through machine learning algorithms, predict a small subset of health conditions. However, this information is only very peripherally related to most health conditions, and thus offers limited utility to a wide range of the population. In this presentation, I will discuss emerging sensor technologies capable of measuring new and interesting parameters that can potentially offer much more meaningful and actionable data sets. Specifically, I will present recent work on wearable chemical sensors that can, for the first time, continuously monitor a suite of parameters like glucose, alcohol, lactate, and electrolytes, all while wirelessly delivering these results to a smart phone in real time. Demonstration platforms featuring patch, temporary tattoo, and mouthguard form factors will be described, in addition to the corresponding electronics necessary to perform sensor conditioning and wireless readout. Beyond chemical sensors, I will also discuss integration strategies with more conventional electrophysiological and physical parameters like ECG and strain gauges for cardiac and respiration rate monitoring, respectively. Finally, I will conclude the talk by introducing a new form of wireless communications in body-area networks that utilize the body itself as a channel for magnetic energy. Since the power consumption of conventional RF circuits often dominates the power of wearable devices, this new magnetic human body communication technique is specifically architected to dramatically reduce the path loss compared to conventional RF and capacitive human body communication techniques, thereby enabling ultra-low-power body area networks for next-generation wearable devices.

  5. Estimation of stream conditions in tributaries of the Klamath River, northern California

    USGS Publications Warehouse

    Manhard, Christopher V.; Som, Nicholas A.; Jones, Edward C.; Perry, Russell W.

    2018-01-01

    Because of their critical ecological role, stream temperature and discharge are requisite inputs for models of salmonid population dynamics. Coho Salmon inhabiting the Klamath Basin spend much of their freshwater life cycle in tributaries, but environmental data are often absent or only seasonally available at these locations. To address this information gap, we constructed daily averaged water temperature models that used simulated meteorological data to estimate daily tributary temperatures, and we used flow differentials recorded on the mainstem Klamath River to estimate daily tributary discharge. Observed temperature data were available for fourteen of the major salmon-bearing tributaries, which enabled estimation of tributary-specific model parameters at those locations. Water temperature data from six mid-Klamath Basin tributaries were used to estimate a global set of parameters for predicting water temperatures in the remaining tributaries. The resulting parameter sets were used to simulate water temperatures for each of 75 tributaries from 1980-2015. Goodness-of-fit statistics computed from a cross-validation analysis demonstrated a high precision of the tributary-specific models in predicting temperature in unobserved years and of the global model in predicting temperatures in unobserved streams. Klamath River discharge has been monitored by four gages that broadly intersperse the 292 kilometers from the Iron Gate Dam to the Klamath River mouth. These gages defined the upstream and downstream margins of three reaches. Daily discharge of tributaries within a reach was estimated from 1980-2015 based on drainage-area proportionate allocations of the discharge differential between the upstream and downstream margin. Comparisons with measured discharge on Indian Creek, a moderate-sized tributary with naturally regulated flows, revealed that the estimates effectively approximated both the variability and magnitude of discharge.
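
    The allocation step is simple enough to state in a few lines: the flow gained between the bounding gages is split among a reach's tributaries in proportion to drainage area. A sketch with hypothetical tributary areas and discharges (any consistent units work):

    ```python
    # Drainage-area-proportionate allocation of the discharge gained between
    # an upstream and a downstream gage. Names and numbers are illustrative.
    def allocate_tributary_flow(q_upstream, q_downstream, drainage_areas):
        gain = q_downstream - q_upstream
        total_area = sum(drainage_areas.values())
        return {trib: gain * area / total_area
                for trib, area in drainage_areas.items()}

    areas_km2 = {"Indian Creek": 350.0, "Elk Creek": 120.0, "Clear Creek": 90.0}
    print(allocate_tributary_flow(q_upstream=55.0, q_downstream=80.0,
                                  drainage_areas=areas_km2))
    ```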

  6. Sleep-Wake Evaluation from Whole-Night Non-Contact Audio Recordings of Breathing Sounds

    PubMed Central

    Dafna, Eliran; Tarasiuk, Ariel; Zigel, Yaniv

    2015-01-01

    Study Objectives: To develop and validate a novel non-contact system for whole-night sleep evaluation using breathing sounds analysis (BSA). Design: Whole-night breathing sounds (using an ambient microphone) and polysomnography (PSG) were simultaneously collected at a sleep laboratory (mean recording time 7.1 hours). A set of acoustic features quantifying breathing pattern were developed to distinguish between sleep and wake epochs (30 sec segments). Epochs (n = 59,108 design study and n = 68,560 validation study) were classified using an AdaBoost classifier and validated epoch-by-epoch for sensitivity, specificity, positive and negative predictive values, accuracy, and Cohen's kappa. Sleep quality parameters were calculated based on the sleep/wake classifications and compared with PSG for validity. Setting: University-affiliated sleep-wake disorder center and biomedical signal processing laboratory. Patients: One hundred and fifty patients (age 54.0±14.8 years, BMI 31.6±5.5 kg/m2, m/f 97/53) referred for PSG were prospectively and consecutively recruited. The system was trained (design study) on 80 subjects; the validation study was blindly performed on the additional 70 subjects. Measurements and Results: The epoch-by-epoch accuracy rate for the validation study was 83.3%, with a sensitivity of 92.2% (sleep as sleep), a specificity of 56.6% (awake as awake), and Cohen's kappa of 0.508. Comparing sleep quality parameters of BSA and PSG demonstrated average errors in sleep latency, total sleep time, wake after sleep onset, and sleep efficiency of 16.6 min, 35.8 min, 29.6 min, and 8%, respectively. Conclusions: This study provides evidence that sleep-wake activity and sleep quality parameters can be reliably estimated solely using breathing sound analysis. This study highlights the potential of this innovative approach to measure sleep in research and clinical circumstances. PMID:25710495
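
    The classification stage described above maps directly onto a standard toolchain: per-epoch acoustic features, an AdaBoost classifier, and epoch-by-epoch agreement metrics. In the sketch below the features and labels are random placeholders (so the scores will sit near chance); only the pipeline shape reflects the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    # Random stand-ins for per-epoch breathing-sound features and sleep/wake
    # labels (1 = sleep, 0 = wake); shapes are illustrative.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(5000, 12)), rng.integers(0, 2, 5000)
    X_test, y_test = rng.normal(size=(2000, 12)), rng.integers(0, 2, 2000)

    clf = AdaBoostClassifier(n_estimators=200).fit(X_train, y_train)
    pred = clf.predict(X_test)

    print("kappa:", cohen_kappa_score(y_test, pred))
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
    ```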

  7. Infrared Video Pupillography Coupled with Smart Phone LED for Measurement of Pupillary Light Reflex.

    PubMed

    Chang, Lily Yu-Li; Turuwhenua, Jason; Qu, Tian Yuan; Black, Joanna M; Acosta, Monica L

    2017-01-01

    Clinical assessment of pupil appearance and pupillary light reflex (PLR) may inform us about the integrity of the autonomic nervous system (ANS). Current clinical pupil assessment is limited to qualitative examination, and relies on clinical judgment. Infrared (IR) video pupillography combined with image processing software offers the possibility of recording quantitative parameters. In this study we describe an IR video pupillography set-up intended for human and animal testing. As part of the validation, resting pupil diameter was measured in human subjects using the NeurOptics™ (Irvine, CA, USA) pupillometer, to compare against that measured by our IR video pupillography set-up, and PLR was assessed in guinea pigs. The set-up consisted of a smart phone with a light emitting diode (LED) strobe light (0.2 s light ON, 5 s light OFF cycles) as the stimulus and an IR camera to record pupil kinetics. The consensual response was recorded, and the video recording was processed using a custom MATLAB program. The parameters assessed were resting pupil diameter (D1), constriction velocity (CV), percentage constriction ratio, re-dilation velocity (DV) and percentage re-dilation ratio. We report that the IR video pupillography set-up provided comparable results to the NeurOptics™ pupillometer in human subjects, and was able to detect larger resting pupil size in juvenile male guinea pigs compared to juvenile female guinea pigs. At juvenile age, male guinea pigs also had stronger pupil kinetics for both pupil constriction and dilation. Furthermore, our IR video pupillography set-up was able to detect an age-specific increase in pupil diameter (female guinea pigs only) and reduction in CV (male and female guinea pigs) as animals developed from juvenile (3 months) to adult age (7 months). This technique demonstrated accurate and quantitative assessment of pupil parameters, and may provide the foundation for further development of an integrated system useful for clinical applications.

  8. Developing a policy for delegation of nursing care in the school setting.

    PubMed

    Spriggle, Melinda

    2009-04-01

    School nurses are in a unique position to provide care for students with special health care needs in the school setting. The incidence of chronic conditions and improved technology necessitate care of complex health care needs that had formerly been managed in inpatient settings. Delegation is a tool that may be used by registered nurses to allow unlicensed assistive personnel to perform appropriate nursing tasks and activities while keeping in mind that the registered nurse ultimately retains accountability for the delegation. The legal parameters for nursing delegation are defined by State Nurse Practice Acts, State Board of Nursing guidelines, and Nursing Administrative Rules/Regulations. Delegation becomes more challenging when carried out in a non-health care setting. School administrators may not be aware of legal issues related to delegation of nursing care in the school setting. It is crucial for school nurses to have a working knowledge of the delegation process. Development of a specific delegation policy will ensure that delegation is carried out in a manner providing for safe and appropriate care in the school setting.

  9. Generation of Requirements for Simulant Measurements

    NASA Technical Reports Server (NTRS)

    Rickman, D. L.; Schrader, C. M.; Edmunson, J. E.

    2010-01-01

    This TM presents a formal, logical explanation of the parameters selected for the figure of merit (FoM) algorithm. The FoM algorithm is used to evaluate lunar regolith simulant. The objectives, requirements, assumptions, and analysis behind the parameters are provided. A requirement is derived to verify and validate simulant performance versus lunar regolith from NASA's objectives for lunar simulants. This requirement leads to a specification that comparative measurements be taken the same way on the regolith and the simulant. In turn, this leads to a set of nine criteria with which to evaluate comparative measurements. Many of the potential measurements of interest are not defensible under these criteria. For example, many geotechnical properties of interest were not explicitly measured during Apollo and they can only be measured in situ on the Moon. A 2005 workshop identified 32 properties of major interest to users. Virtually all of the properties are tightly constrained, though not predictable, if just four parameters are controlled. Three parameters (composition, size, and shape) are recognized as being definable at the particle level. The fourth parameter (density) is a bulk property. In recent work, a fifth parameter (spectroscopy) has been identified, which will need to be added to future releases of the FoM.

  10. A Comparative Analysis of Life-Cycle Assessment Tools for ...

    EPA Pesticide Factsheets

    We identified and evaluated five life-cycle assessment tools that community decision makers can use to assess the environmental and economic impacts of end-of-life (EOL) materials management options. The tools evaluated in this report are the waste reduction model (WARM), municipal solid waste-decision support tool (MSW-DST), solid waste optimization life-cycle framework (SWOLF), environmental assessment system for environmental technologies (EASETECH), and waste and resources assessment for the environment (WRATE). WARM, MSW-DST, and SWOLF were developed for US-specific materials management strategies, while WRATE and EASETECH were developed for European-specific conditions. All of the tools (with the exception of WARM) allow specification of a wide variety of parameters (e.g., materials composition and energy mix) to a varying degree, thus allowing users to model specific EOL materials management methods even outside the geographical domain they are originally intended for. The flexibility to accept user-specified input for a large number of parameters increases the level of complexity and the skill set needed for using these tools. The tools were evaluated and compared based on a series of criteria, including general tool features, the scope of the analysis (e.g., materials and processes included), and the impact categories analyzed (e.g., climate change, acidification). A series of scenarios representing materials management problems currently relevant to c

  11. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    PubMed

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, like genotypes, whose frequency and strength render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine if such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely be influential and therefore non-ignorable. When these covariate effects cannot be estimated due to low power yet are non-ignorable, they are considered nuisance covariates, in that they have to be accounted for but, due to type I error inflation, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type I error inflation, (2) calibrating the strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.
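
    One way to make objective (1) concrete is a small Monte-Carlo calibration: simulate data with no true covariate effect at several covariate frequencies and record how often the Wald test rejects. The sketch below uses a plain linear model as a stand-in for the paper's nonlinear mixed-effects setting; sample sizes, frequencies, and the significance level are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def wald_rejection_rate(n=200, freq=0.05, n_sim=5000, alpha=0.05):
        """Empirical type I error of a Wald test on a rare binary covariate."""
        rejections, valid = 0, 0
        for _ in range(n_sim):
            g = (rng.uniform(size=n) < freq).astype(float)  # rare covariate
            if g.sum() < 2:        # covariate essentially absent; skip replicate
                continue
            y = rng.normal(size=n)                          # no true effect
            X = np.column_stack([np.ones(n), g])
            beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
            sigma2 = res[0] / (n - 2)
            se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
            rejections += abs(beta[1] / se) > stats.norm.ppf(1 - alpha / 2)
            valid += 1
        return rejections / valid

    for f in (0.20, 0.05, 0.01):
        print(f, round(wald_rejection_rate(freq=f), 3))
    ```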

  12. Simulate what is measured: next steps towards predictive simulations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bussmann, Michael; Kluge, Thomas; Debus, Alexander; Hübl, Axel; Garten, Marco; Zacharias, Malte; Vorberger, Jan; Pausch, Richard; Widera, René; Schramm, Ulrich; Cowan, Thomas E.; Irman, Arie; Zeil, Karl; Kraus, Dominik

    2017-05-01

    Simulations of laser-matter interaction at extreme intensities that have predictive power are nowadays within reach when considering codes that make optimum use of high-performance compute architectures. Nevertheless, this is mostly true for very specific settings where model parameters are very well known from experiment and the underlying plasma dynamics is governed solely by Maxwell's equations. When including atomic effects, prepulse influences, radiation reaction and other physical phenomena, things look different. Not only is it harder to evaluate the sensitivity of the simulation result to variations of the various model parameters, but numerical models are less well tested and their combination can lead to subtle side effects that influence the simulation outcome. We propose to make optimum use of future compute hardware to compute statistical and systematic errors rather than just to find the single parameter set that best fits an experiment. This requires including experimental uncertainties, which is a challenge for current state-of-the-art techniques. Moreover, it demands better comparison to experiments, as the inclusion of the diagnostics' response in the simulation becomes important. We strongly advocate the use of open standards for achieving interoperability between codes for comparison studies, building complete tool chains for simulating laser-matter experiments from start to end.

  13. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    DOE PAGES

    Maljovec, D.; Liu, S.; Wang, B.; ...

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.

  14. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. The method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction, using a regional feature extraction method that accounts for joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of the parameters involved, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework to build an ensemble classifier. The classification parameters of these base classifiers are optimized by maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of the bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
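
    The classifier-construction idea can be sketched as follows: choose SVM hyperparameters by maximizing ROC AUC, then bag the tuned base classifier. Cross-validated AUC stands in here for the paper's estimate on left-out bootstrap samples, and the synthetic data and parameter grid are assumptions.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=50,
                               n_informative=10, random_state=0)

    # pick SVM hyperparameters by maximizing ROC AUC
    grid = GridSearchCV(SVC(kernel="rbf"),
                        param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
                        scoring="roc_auc", cv=5)
    grid.fit(X, y)

    # bag the tuned base classifier into an ensemble
    ensemble = BaggingClassifier(grid.best_estimator_, n_estimators=25,
                                 random_state=0).fit(X, y)
    print("best cross-validated AUC:", round(grid.best_score_, 3))
    ```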

  15. Identification of Cell Type-Specific Differences in Erythropoietin Receptor Signaling in Primary Erythroid and Lung Cancer Cells

    PubMed Central

    Salopiata, Florian; Depner, Sofia; Wäsch, Marvin; Böhm, Martin E.; Mücke, Oliver; Plass, Christoph; Lehmann, Wolf D.; Kreutz, Clemens; Timmer, Jens; Klingmüller, Ursula

    2016-01-01

    Lung cancer, with its most prevalent form non-small-cell lung carcinoma (NSCLC), is one of the leading causes of cancer-related deaths worldwide, and is commonly treated with chemotherapeutic drugs such as cisplatin. Lung cancer patients frequently suffer from chemotherapy-induced anemia, which can be treated with erythropoietin (EPO). However, studies have indicated that EPO not only promotes erythropoiesis in hematopoietic cells, but may also enhance survival of NSCLC cells. Here, we verified that the NSCLC cell line H838 expresses functional erythropoietin receptors (EPOR) and that treatment with EPO reduces cisplatin-induced apoptosis. To pinpoint differences in EPO-induced survival signaling in erythroid progenitor cells (CFU-E, colony forming unit-erythroid) and H838 cells, we combined mathematical modeling with a method for feature selection, the L1 regularization. Utilizing an example model and simulated data, we demonstrated that this approach enables the accurate identification and quantification of cell type-specific parameters. We applied our strategy to quantitative time-resolved data of EPO-induced JAK/STAT signaling generated by quantitative immunoblotting, mass spectrometry and quantitative real-time PCR (qRT-PCR) in CFU-E and H838 cells as well as H838 cells overexpressing human EPOR (H838-HA-hEPOR). The established parsimonious mathematical model was able to simultaneously describe the data sets of CFU-E, H838 and H838-HA-hEPOR cells. Seven cell type-specific parameters were identified that included for example parameters for nuclear translocation of STAT5 and target gene induction. Cell type-specific differences in target gene induction were experimentally validated by qRT-PCR experiments. The systematic identification of pathway differences and sensitivities of EPOR signaling in CFU-E and H838 cells revealed potential targets for intervention to selectively inhibit EPO-induced signaling in the tumor cells but leave the responses in erythroid progenitor cells unaffected. Thus, the proposed modeling strategy can be employed as a general procedure to identify cell type-specific parameters and to recommend treatment strategies for the selective targeting of specific cell types. PMID:27494133

  16. Research in Satellite-Fiber Network Interoperability

    NASA Technical Reports Server (NTRS)

    Edelson, Burt

    1997-01-01

    This four-part report evaluates the performance of high-data-rate transmission links using the ACTS satellite and provides a preparatory test framework for two of the space science applications that have been approved for tests and demonstrations as part of the overall ACTS program. The test plan provides the guidance and information necessary to find the optimal values of the transmission parameters and then apply these parameters to specific applications. The first part focuses on the satellite-to-earth link. The second part is a set of tests to study the performance of ATM on the ACTS channel. The third and fourth parts of the test plan cover the space science applications, Global Climate Modeling and Keck Telescope Acquisition Modeling and Control.

  17. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper addresses finding optimal PID controller parameters using particle swarm optimization (PSO), the genetic algorithm (GA) and the simulated annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned accordingly. Two different fitness functions, the integral of time-weighted absolute error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the algorithms is presented based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
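
    A compact sketch of the approach: PSO minimizing an ITAE fitness for a PID loop, with a simple first-order plant G(s) = 1/(5s + 1) simulated by Euler integration. The plant, PSO coefficients, and gain bounds are illustrative assumptions, not the paper's coupled-tank or DC-motor benchmarks.

    ```python
    import numpy as np

    def itae(gains, t_end=30.0, dt=0.01):
        """ITAE cost of a unit step response for a PID loop on a first-order plant."""
        kp, ki, kd = gains
        y = integ = prev_err = cost = 0.0
        for i in range(int(t_end / dt)):
            t = i * dt
            err = 1.0 - y                      # unit step setpoint
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            y += dt * (-y + u) / 5.0           # plant: dy/dt = (-y + u)/tau, tau = 5
            prev_err = err
            cost += t * abs(err) * dt          # integral of t*|e(t)|
        return cost

    rng = np.random.default_rng(0)
    n_particles, n_iter = 20, 40
    lo, hi = np.zeros(3), np.array([10.0, 2.0, 5.0])   # assumed gain bounds
    x = rng.uniform(lo, hi, (n_particles, 3))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([itae(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([itae(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("tuned Kp, Ki, Kd:", gbest.round(3))
    ```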

  18. A parametric model and estimation techniques for the inharmonicity and tuning of the piano.

    PubMed

    Rigaud, François; David, Bertrand; Daudet, Laurent

    2013-05-01

    Inharmonicity of piano tones is an essential property of their timbre that strongly influences the tuning, leading to the so-called octave stretching. It is proposed in this paper to jointly model the inharmonicity and tuning of pianos over the whole compass. While using a small number of parameters, these models are able to reflect both the specificities of instrument design and the tuner's practice. An estimation algorithm is derived that can run not only on a set of isolated note recordings but also on chord recordings, assuming that the played notes are known. It is applied to extract parameters highlighting some of the tuner's choices on different piano types and to propose tuning curves for out-of-tune pianos or piano synthesizers.

  19. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
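
    The re-projection step can be sketched as accumulating swath samples into 1/4-degree bins and averaging; per-day grids made this way can be combined for multi-day averages by summing the sums and counts. The synthetic swath and variable names are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    lat = rng.uniform(-90, 90, 10000)        # swath geolocation (synthetic)
    lon = rng.uniform(-180, 180, 10000)
    val = rng.normal(250, 10, 10000)         # e.g. a temperature parameter (K)

    res = 0.25
    rows = ((lat + 90) / res).astype(int).clip(0, int(180 / res) - 1)
    cols = ((lon + 180) / res).astype(int).clip(0, int(360 / res) - 1)

    sums = np.zeros((int(180 / res), int(360 / res)))
    counts = np.zeros_like(sums)
    np.add.at(sums, (rows, cols), val)       # accumulate samples per grid cell
    np.add.at(counts, (rows, cols), 1)
    grid = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    ```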

  20. Structural investigation of the Grenville Province by radar and other imaging and nonimaging sensors

    NASA Technical Reports Server (NTRS)

    Lowman, P. D., Jr.; Blodget, H. W.; Webster, W. J., Jr.; Paia, S.; Singhroy, V. H.; Slaney, V. R.

    1984-01-01

    The structural investigation of the Canadian Shield by orbital radar and LANDSAT is outlined. The area includes parts of the central metasedimentary belt and the Ontario gneiss belt, and its major structures are well expressed topographically. The primary objective is to apply SIR-B data to the mapping of this key part of the Grenville orogen, specifically ductile fold structures and associated features, and igneous, metamorphic, and sedimentary rocks (including glacial and recent sediments). Secondary objectives are to support the Canadian RADARSAT project by evaluating the baseline parameters of a Canadian imaging radar satellite planned for late in the decade. The baseline parameters include optimum incidence and azimuth angles. The experiment is to develop techniques for the use of multiple data sets.

  1. Multi-institutional validation of a novel textural analysis tool for preoperative stratification of suspected thyroid tumors on diffusion-weighted MRI.

    PubMed

    Brown, Anna M; Nagala, Sidhartha; McLean, Mary A; Lu, Yonggang; Scoffings, Daniel; Apte, Aditya; Gonen, Mithat; Stambuk, Hilda E; Shaha, Ashok R; Tuttle, R Michael; Deasy, Joseph O; Priest, Andrew N; Jani, Piyush; Shukla-Dave, Amita; Griffiths, John

    2016-04-01

    Ultrasound-guided fine needle aspirate cytology fails to diagnose many malignant thyroid nodules; consequently, patients may undergo diagnostic lobectomy. This study assessed whether textural analysis (TA) could noninvasively stratify thyroid nodules accurately using diffusion-weighted MRI (DW-MRI). This multi-institutional study examined 3T DW-MRI images obtained with spin echo echo planar imaging sequences. The training data set included 26 patients from Cambridge, United Kingdom, and the test data set included 18 thyroid cancer patients from Memorial Sloan Kettering Cancer Center (New York, New York, USA). Apparent diffusion coefficients (ADCs) were compared over regions of interest (ROIs) defined on thyroid nodules. TA, linear discriminant analysis (LDA), and feature reduction were performed using the 21 MaZda-generated texture parameters that best distinguished benign and malignant ROIs. Training data set mean ADC values were significantly different for benign and malignant nodules (P = 0.02) with a sensitivity and specificity of 70% and 63%, respectively, and a receiver operator characteristic (ROC) area under the curve (AUC) of 0.73. The LDA model of the top 21 textural features correctly classified 89/94 DW-MRI ROIs with 92% sensitivity, 96% specificity, and an AUC of 0.97. This algorithm correctly classified 16/18 (89%) patients in the independently obtained test set of thyroid DW-MRI scans. TA classifies thyroid nodules with high sensitivity and specificity on multi-institutional DW-MRI data sets. This method requires further validation in a larger prospective study. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
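
    A minimal sketch of the classification step — linear discriminant analysis over texture features scored by ROC AUC. The synthetic matrix stands in for the 21 MaZda texture parameters over 94 ROIs; this is not the study's pipeline or data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(94, 21))           # 94 ROIs x 21 texture features (synthetic)
    y = rng.integers(0, 2, 94)              # benign (0) vs malignant (1)
    X[y == 1] += 0.8                        # inject some separability

    scores = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                               cv=5, method="decision_function")
    print("cross-validated AUC:", round(roc_auc_score(y, scores), 3))
    ```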

  2. Comparison of five automated hematology analyzers in a university hospital setting: Abbott Cell-Dyn Sapphire, Beckman Coulter DxH 800, Siemens Advia 2120i, Sysmex XE-5000, and Sysmex XN-2000.

    PubMed

    Bruegel, Mathias; Nagel, Dorothea; Funk, Manuela; Fuhrmann, Petra; Zander, Johannes; Teupser, Daniel

    2015-06-01

    Various types of automated hematology analyzers are used in clinical laboratories. Here, we performed a side-by-side comparison of five current top-of-the-range routine hematology analyzers in the setting of a university hospital central laboratory. Complete blood counts (CBC), differentials, reticulocyte and nucleated red blood cell (NRBC) counts of 349 patient samples, randomly taken out of routine diagnostics, were analyzed with Cell-Dyn Sapphire (Abbott), DxH 800 (Beckman Coulter), Advia 2120i (Siemens), XE-5000 and XN-2000 (Sysmex). Inter-instrument comparison of CBCs including reticulocyte and NRBC counts and investigation of flagging quality in relation to microscopy were performed with the complete set of samples. Inter-instrument comparison of the five-part differential was performed using samples without atypical cells in the blood smear (n=292). Automated five-part differentials and NRBCs were additionally compared with microscopy. The five analyzers showed good concordance for basic blood count parameters. Correlations between instruments were weaker for reticulocyte counts, NRBCs, and differentials. The poorest concordance for NRBCs with microscopy was observed for Advia 2120i (Kendall's τb=0.37). The highest flagging sensitivity for blasts was observed for XN-2000 (97% compared to 65%-76% for other analyzers), whereas overall specificity was comparable between different instruments. To the best of our knowledge, this is the most comprehensive side-by-side comparison of five current top-of-the-range routine hematology analyzers. Variable analyzer quality and parameter-specific limitations must be considered in defining laboratory algorithms in clinical practice.
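
    The concordance statistic quoted for NRBC counts is Kendall's τb, which can be computed as below; the paired counts are made-up values for illustration.

    ```python
    from scipy.stats import kendalltau

    analyzer   = [0, 1, 2, 0, 5, 3, 0, 8, 1, 0]   # NRBC counts, instrument (made up)
    microscopy = [0, 2, 2, 0, 4, 1, 0, 9, 0, 1]   # NRBC counts, smear review (made up)
    tau, p = kendalltau(analyzer, microscopy)      # tau-b by default
    print(f"Kendall's tau-b = {tau:.2f} (p = {p:.3f})")
    ```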

  3. Prediction of EST functional relationships via literature mining with user-specified parameters.

    PubMed

    Wang, Hei-Chia; Huang, Tian-Hsiang

    2009-04-01

    The massive amount of expressed sequence tags (ESTs) gathered over recent years has triggered great interest in efficient applications for genomic research. In particular, EST functional relationships can be used to determine a possible gene network for biological processes of interest. In recent years, many researchers have tried to determine EST functional relationships by analyzing the biological literature. However, it has been challenging to find efficient prediction methods. Moreover, an annotated EST is usually associated with many functions, so successful methods must be able to distinguish between relevant and irrelevant functions based on user specifications. This paper proposes a method to discover functional relationships between ESTs of interest by analyzing literature from the Medical Literature Analysis and Retrieval System Online, with user-specified parameters for selecting keywords. This method performs better than the multiple kernel documents method in setting up a specific threshold for gathering materials. The method is also able to uncover known functional relationships, as shown by a comparison with the Kyoto Encyclopedia of Genes and Genomes database. The reliable EST relationships predicted by the proposed method can help to construct gene networks for specific biological functions of interest.

  4. Effects of a wearable exoskeleton stride management assist system (SMA®) on spatiotemporal gait characteristics in individuals after stroke: a randomized controlled trial.

    PubMed

    Buesing, Carolyn; Fisch, Gabriela; O'Donnell, Megan; Shahidi, Ida; Thomas, Lauren; Mummidisetty, Chaithanya K; Williams, Kenton J; Takahashi, Hideaki; Rymer, William Zev; Jayaraman, Arun

    2015-08-20

    Robots offer an alternative, potentially advantageous method of providing repetitive, high-dosage, and high-intensity training to address the gait impairments caused by stroke. In this study, we compared the effects of the Stride Management Assist (SMA®) System, a new wearable robotic device developed by Honda R&D Corporation, Japan, with functional task specific training (FTST) on spatiotemporal gait parameters in stroke survivors. A single-blinded randomized controlled trial was performed to assess the effect of FTST and of task-specific walking training with the SMA® device on spatiotemporal gait parameters. Participants (n=50) were randomly assigned to FTST or SMA. Subjects in both groups received training 3 times per week for 6-8 weeks, for a maximum of 18 training sessions. The GAITRite® system was used to collect data on subjects' spatiotemporal gait characteristics before training (baseline), at mid-training, post-training, and at a 3-month follow-up. After training, significant improvements in gait parameters were observed in both training groups compared to baseline, including an increase in velocity and cadence, a decrease in swing time on the impaired side, a decrease in double support time, an increase in stride length on impaired and non-impaired sides, and an increase in step length on impaired and non-impaired sides. No significant differences were observed between training groups, except that in the SMA group step length on the impaired side increased significantly during self-selected walking speed trials and spatial asymmetry decreased significantly during fast-velocity walking trials. SMA and FTST interventions provided similar, significant improvements in spatiotemporal gait parameters; however, the SMA group showed additional improvements across more parameters at various time points. These results indicate that the SMA® device could be a useful therapeutic tool to improve spatiotemporal parameters and contribute to improved functional mobility in stroke survivors. Further research is needed to determine the feasibility of using this device in a home setting versus a clinic setting, and whether such home use provides continued benefits. This study is registered under the title "Development of walk assist device to improve community ambulation" and can be located in clinicaltrials.gov with the study identifier NCT01994395.

  5. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil- or plant-type dependent and are typically read from tabulated input files. Land surface models may, however, have process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values in restricting the model's agility during parameter estimation. We found 139 hard-coded values across all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options; 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a sub-set of parameters, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters exposed in this study when calibrating Noah-MP.
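
    A hedged sketch of a Sobol' global sensitivity analysis in the spirit of the study, using the SALib package on a toy response function. The parameter names, bounds, and model are stand-ins, not Noah-MP.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["soil_resistance", "snow_albedo", "root_depth"],  # illustrative
        "bounds": [[0.1, 10.0], [0.4, 0.9], [0.2, 2.0]],
    }
    X = saltelli.sample(problem, 1024)                     # Saltelli sampling scheme
    Y = np.array([x[0] ** 2 + 5 * x[1] + 0.1 * x[2] for x in X])  # toy "flux" model
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["S1"].round(3))))  # first-order indices
    ```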

  6. THERMUS—A thermal model package for ROOT

    NASA Astrophysics Data System (ADS)

    Wheaton, S.; Cleymans, J.; Hauer, M.

    2009-01-01

    THERMUS is a package of C++ classes and functions allowing statistical-thermal model analyses of particle production in relativistic heavy-ion collisions to be performed within the ROOT framework of analysis. Calculations are possible within three statistical ensembles: a grand-canonical treatment of the conserved charges B, S and Q, a fully canonical treatment of the conserved charges, and a mixed-canonical ensemble combining a canonical treatment of strangeness with a grand-canonical treatment of baryon number and electric charge. THERMUS allows for the assignment of decay chains and detector efficiencies specific to each particle yield, which enables sensible fitting of model parameters to experimental data.
    Program summary
    Program title: THERMUS, version 2.1
    Catalogue identifier: AEBW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 17 152
    No. of bytes in distributed program, including test data, etc.: 93 581
    Distribution format: tar.gz
    Programming language: C++
    Computer: PC, Pentium 4, 1 GB RAM (not hardware dependent)
    Operating system: Linux: FEDORA, RedHat, etc.
    Classification: 17.7
    External routines: Numerical Recipes in C [1], ROOT [2]
    Nature of problem: Statistical-thermal model analyses of heavy-ion collision data require the calculation of both primordial particle densities and contributions from resonance decay. A set of thermal parameters (the number depending on the particular model imposed) and a set of thermalized particles, with their decays specified, is required as input to these models. The output is then a complete set of primordial thermal quantities for each particle, together with the contributions to the final particle yields from resonance decay. In many applications of statistical-thermal models it is required to fit experimental particle multiplicities or particle ratios. In such analyses, the input is a set of experimental yields and ratios, a set of particles comprising the assumed hadron resonance gas formed in the collision, and the constraints to be placed on the system. The thermal model parameters consistent with the specified constraints and leading to the best fit to the experimental data are then output.
    Solution method: THERMUS is a package designed for incorporation into the ROOT [2] framework, used extensively by the heavy-ion community. As such, it utilizes a great deal of ROOT's functionality in its operation. ROOT features used in THERMUS include its containers, the wrapper TMinuit implementing the MINUIT fitting package, and the TMath class of mathematical functions and routines. Arguably the most useful feature is the utilization of CINT as the control language, which allows interactive access to the THERMUS objects. Three distinct statistical ensembles are included in THERMUS, while additional options to include quantum statistics, resonance width and excluded volume corrections are also available. THERMUS provides a default particle list including all mesons (up to the K4∗ (2045)) and baryons (up to the Ω) listed in the July 2002 Particle Physics Booklet [3]. For each typically unstable particle in this list, THERMUS includes a text file listing its decays.
    With thermal parameters specified, THERMUS calculates primordial thermal densities either by performing numerical integrations or else, in the case of the Boltzmann approximation without resonance width in the grand-canonical ensemble, by evaluating Bessel functions. Particle decay chains are then used to evaluate experimental observables (i.e. particle yields following resonance decay). Additional detector efficiency factors allow fine-tuning of the model predictions to a specific detector arrangement. When parameters are required to be constrained, use is made of the 'Numerical Recipes in C' [1] function which applies the Broyden globally convergent secant method for solving nonlinear systems of equations. Since the NRC software is not freely available, it has to be purchased by the user. THERMUS provides the means of imposing a large number of constraints on the chosen model (amongst others, THERMUS can fix the baryon-to-charge ratio of the system, the strangeness density of the system and the primordial energy per hadron). Fits to experimental data are accomplished in THERMUS by using the ROOT TMinuit class. In its default operation, the standard χ² function is minimized, yielding the set of best-fit thermal parameters. THERMUS allows the assignment of separate decay chains to each experimental input. In this way, the model is able to match the specific feed-down corrections of a particular data set.
    Running time: Depending on the analysis required, run times vary from seconds (for the evaluation of particle multiplicities given a set of parameters) to several minutes (for fits to experimental data subject to constraints).
    References:
    [1] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, 2002.
    [2] R. Brun, F. Rademakers, Nucl. Inst. Meth. Phys. Res. A 389 (1997) 81. See also http://root.cern.ch/.
    [3] K. Hagiwara et al., Phys. Rev. D 66 (2002) 010001.

  7. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    PubMed

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p<0.01) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p<0.01) than WaveClus for signals with a noise level in the range 0.15-0.30. KlustaKwik achieved similar scores to WaveClus for signals with low noise level 0.00-0.15 and was worse otherwise. In conclusion, none of the three compared algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
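
    The evaluation idea — score a sorter's output against ground truth with Adjusted Mutual Information and keep the best-scoring parameter setting — can be sketched as below. The "sorter" here is a stand-in (k-means over a small grid of cluster counts); the study optimized the actual algorithms' own parameters.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_mutual_info_score

    rng = np.random.default_rng(0)
    true_labels = rng.integers(0, 3, 400)                     # synthetic ground truth
    features = true_labels[:, None] + rng.normal(0, 0.8, (400, 2))

    best = (-np.inf, 0)
    for k in (2, 3, 4, 5):                                    # stand-in parameter grid
        pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        ami = adjusted_mutual_info_score(true_labels, pred)   # evaluation metric
        best = max(best, (ami, k))
    print(f"best AMI {best[0]:.3f} at k = {best[1]}")
    ```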

  8. Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets

    NASA Astrophysics Data System (ADS)

    Morelli, Gian Luca

    2016-05-01

    The estimation of a representative value for the rock block volume (V_b) is of great interest in rock engineering for rock mass characterization purposes. However, while mathematical relationships to precisely estimate this parameter from the spacing of joints can be found in the literature for rock masses intersected by three dominant joint sets, corresponding relationships do not exist when more than three sets occur. In these cases, a consistent assessment of V_b can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. Nevertheless, Palmström's empirical relationship based on the volumetric joint count J_v and on a block shape factor β is commonly used in practice, although it is strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper is primarily intended to investigate the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e. J_v and the mean value of the joint set spacings), specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression has been found between the mean block volume (V_{b_m}) of tested rock mass samples and the geometric mean value of the spacings of the joint sets delimiting blocks, thus indicating this mean value as a promising parameter for the preliminary characterization of block size. Tests on field outcrops have demonstrated that the proposed empirical methodology has the potential to predict the mean block volume of multiple-set jointed rock masses with an accuracy acceptable for common uses in most practical rock engineering applications.
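
    The proposed predictor can be illustrated in a few lines: take the geometric mean of the four joint-set spacings as a characteristic edge length. The cube used below is a crude first-order stand-in for the regression actually fitted in the paper, and the spacing values are invented.

    ```python
    import numpy as np

    spacings = np.array([0.35, 0.50, 0.80, 1.20])   # m, one per joint set (made up)
    s_gm = spacings.prod() ** (1 / len(spacings))   # geometric mean spacing
    print(f"geometric mean spacing: {s_gm:.2f} m; "
          f"first-order block volume ~ {s_gm ** 3:.3f} m^3")
    ```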

  9. True colour classification of natural waters with medium-spectral resolution satellites: SeaWiFS, MODIS, MERIS and OLCI.

    PubMed

    Woerd, Hendrik J van der; Wernand, Marcel R

    2015-10-09

    The colours from natural waters differ markedly over the globe, depending on the water composition and illumination conditions. The space-borne "ocean colour" instruments are operational instruments designed to retrieve important water-quality indicators, based on the measurement of water leaving radiance in a limited number (5 to 10) of narrow (≈10 nm) bands. Surprisingly, the analysis of the satellite data has not yet paid attention to colour as an integral optical property that can also be retrieved from multispectral satellite data. In this paper we re-introduce colour as a valuable parameter that can be expressed mainly by the hue angle (α). Based on a set of 500 synthetic spectra covering a broad range of natural waters a simple algorithm is developed to derive the hue angle from SeaWiFS, MODIS, MERIS and OLCI data. The algorithm consists of a weighted linear sum of the remote sensing reflectance in all visual bands plus a correction term for the specific band-setting of each instrument. The algorithm is validated by a set of 603 hyperspectral measurements from inland-, coastal- and near-ocean waters. We conclude that the hue angle is a simple objective parameter of natural waters that can be retrieved uniformly for all space-borne ocean colour instruments.
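
    The hue-angle idea can be sketched as follows: integrate a reflectance spectrum against colour-matching functions, form chromaticity coordinates, and take the angle about the white point. The Gaussian stand-ins for the CIE 1931 functions and the toy spectrum are crude assumptions; real use requires the tabulated CIE functions plus the instrument-specific band weights and correction term derived in the paper.

    ```python
    import numpy as np

    wl = np.arange(400, 701, 10.0)                # visual range, nm
    refl = 0.02 + 0.01 * np.exp(-((wl - 560) / 40.0) ** 2)   # toy water spectrum

    # crude Gaussian stand-ins for the CIE xbar, ybar, zbar colour-matching functions
    xbar = 1.06 * np.exp(-((wl - 600) / 38.0) ** 2) + 0.36 * np.exp(-((wl - 446) / 19.0) ** 2)
    ybar = 1.01 * np.exp(-((wl - 557) / 47.0) ** 2)
    zbar = 1.78 * np.exp(-((wl - 449) / 22.0) ** 2)

    X, Y, Z = (np.trapz(refl * f, wl) for f in (xbar, ybar, zbar))
    x, y = X / (X + Y + Z), Y / (X + Y + Z)       # chromaticity coordinates
    alpha = np.degrees(np.arctan2(y - 1 / 3, x - 1 / 3))  # hue angle about white point
    print(f"hue angle alpha = {alpha:.1f} deg")
    ```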

  10. A global analysis of Y-chromosomal haplotype diversity for 23 STR loci

    PubMed Central

    Purps, Josephine; Siegert, Sabine; Willuweit, Sascha; Nagy, Marion; Alves, Cíntia; Salazar, Renato; Angustia, Sheila M.T.; Santos, Lorna H.; Anslinger, Katja; Bayer, Birgit; Ayub, Qasim; Wei, Wei; Xue, Yali; Tyler-Smith, Chris; Bafalluy, Miriam Baeta; Martínez-Jarreta, Begoña; Egyed, Balazs; Balitzki, Beate; Tschumi, Sibylle; Ballard, David; Court, Denise Syndercombe; Barrantes, Xinia; Bäßler, Gerhard; Wiest, Tina; Berger, Burkhard; Niederstätter, Harald; Parson, Walther; Davis, Carey; Budowle, Bruce; Burri, Helen; Borer, Urs; Koller, Christoph; Carvalho, Elizeu F.; Domingues, Patricia M.; Chamoun, Wafaa Takash; Coble, Michael D.; Hill, Carolyn R.; Corach, Daniel; Caputo, Mariela; D’Amato, Maria E.; Davison, Sean; Decorte, Ronny; Larmuseau, Maarten H.D.; Ottoni, Claudio; Rickards, Olga; Lu, Di; Jiang, Chengtao; Dobosz, Tadeusz; Jonkisz, Anna; Frank, William E.; Furac, Ivana; Gehrig, Christian; Castella, Vincent; Grskovic, Branka; Haas, Cordula; Wobst, Jana; Hadzic, Gavrilo; Drobnic, Katja; Honda, Katsuya; Hou, Yiping; Zhou, Di; Li, Yan; Hu, Shengping; Chen, Shenglan; Immel, Uta-Dorothee; Lessig, Rüdiger; Jakovski, Zlatko; Ilievska, Tanja; Klann, Anja E.; García, Cristina Cano; de Knijff, Peter; Kraaijenbrink, Thirsa; Kondili, Aikaterini; Miniati, Penelope; Vouropoulou, Maria; Kovacevic, Lejla; Marjanovic, Damir; Lindner, Iris; Mansour, Issam; Al-Azem, Mouayyad; Andari, Ansar El; Marino, Miguel; Furfuro, Sandra; Locarno, Laura; Martín, Pablo; Luque, Gracia M.; Alonso, Antonio; Miranda, Luís Souto; Moreira, Helena; Mizuno, Natsuko; Iwashima, Yasuki; Neto, Rodrigo S. Moura; Nogueira, Tatiana L.S.; Silva, Rosane; Nastainczyk-Wulf, Marina; Edelmann, Jeanett; Kohl, Michael; Nie, Shengjie; Wang, Xianping; Cheng, Baowen; Núñez, Carolina; Pancorbo, Marian Martínez de; Olofsson, Jill K.; Morling, Niels; Onofri, Valerio; Tagliabracci, Adriano; Pamjav, Horolma; Volgyi, Antonia; Barany, Gusztav; Pawlowski, Ryszard; Maciejewska, Agnieszka; Pelotti, Susi; Pepinski, Witold; Abreu-Glowacka, Monica; Phillips, Christopher; Cárdenas, Jorge; Rey-Gonzalez, Danel; Salas, Antonio; Brisighelli, Francesca; Capelli, Cristian; Toscanini, Ulises; Piccinini, Andrea; Piglionica, Marilidia; Baldassarra, Stefania L.; Ploski, Rafal; Konarzewska, Magdalena; Jastrzebska, Emila; Robino, Carlo; Sajantila, Antti; Palo, Jukka U.; Guevara, Evelyn; Salvador, Jazelyn; Ungria, Maria Corazon De; Rodriguez, Jae Joseph Russell; Schmidt, Ulrike; Schlauderer, Nicola; Saukko, Pekka; Schneider, Peter M.; Sirker, Miriam; Shin, Kyoung-Jin; Oh, Yu Na; Skitsa, Iulia; Ampati, Alexandra; Smith, Tobi-Gail; Calvit, Lina Solis de; Stenzl, Vlastimil; Capal, Thomas; Tillmar, Andreas; Nilsson, Helena; Turrina, Stefania; De Leo, Domenico; Verzeletti, Andrea; Cortellini, Venusia; Wetton, Jon H.; Gwynne, Gareth M.; Jobling, Mark A.; Whittle, Martin R.; Sumita, Denilce R.; Wolańska-Nowak, Paulina; Yong, Rita Y.Y.; Krawczak, Michael; Nothnagel, Michael; Roewer, Lutz

    2014-01-01

    In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem-repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits, but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent. PMID:24854874

  11. Corrected Integral Shape Averaging Applied to Obstructive Sleep Apnea Detection from the Electrocardiogram

    NASA Astrophysics Data System (ADS)

    Boudaoud, S.; Rix, H.; Meste, O.; Heneghan, C.; O'Brien, C.

    2007-12-01

    We present a technique called corrected integral shape averaging (CISA) for quantifying shape and shape differences in a set of signals. CISA can be used to account for signal differences which are purely due to affine time warping (jitter and dilation/compression), and hence provide access to intrinsic shape fluctuations. CISA can also be used to define a distance between shapes which has useful mathematical properties; a mean shape signal for a set of signals can be defined, which minimizes the sum of squared shape distances of the set from the mean. The CISA procedure also allows joint estimation of the affine time parameters. Numerical simulations are presented to support the algorithm for obtaining the CISA mean and parameters. Since CISA provides a well-defined shape distance, it can be used in shape clustering applications based on distance measures such as k-means. We present an application in which CISA shape clustering is applied to P-waves extracted from the electrocardiogram of subjects suffering from sleep apnea. The resulting shape clustering distinguishes ECG segments recorded during apnea from those recorded during normal breathing, with sensitivity and specificity values given in the full text.

  12. Improving the efficiency of an Er:YAG laser on enamel and dentin.

    PubMed

    Rizcalla, Nicolas; Bader, Carl; Bortolotto, Tissiana; Krejci, Ivo

    2012-02-01

    To evaluate the influence of air pressure, water flow rate, and pulse frequency on the removal speed of enamel and dentin as well as on their surface morphology, twenty-four bovine incisors were cut horizontally into slices. Each sample was mounted on an experimental assembly allowing precise orientation. Eighteen cavities were prepared, nine in enamel and nine in dentin. Specific parameters for frequency, water flow rate, and air pressure were applied for each experimental group. Three groups were randomly formed according to the air pressure settings. Cavity depth was measured using a digital micrometer gauge, and surface morphology was checked by means of scanning electron microscopy. Data were analyzed with ANOVA and Duncan's post hoc test. Irradiation at 25 Hz for enamel and 30 Hz for dentin provided the best ablation rates within this study, but efficiency decreased if the frequency was raised further. Greater tissue ablation was found with the water flow rate set to low, and ablation dropped with higher values. Air pressure was found to interact with the other settings, since ablation rates varied with different air pressure values. Fine-tuning of all parameters to obtain a good ablation rate with minimum surface damage appears to be key to achieving optimal efficiency for cavity preparation with an Er:YAG laser.

  13. Advanced engine management of individual cylinders for control of exhaust species

    DOEpatents

    Graves, Ronald L [Knoxville, TN; West, Brian H [Knoxville, TN; Huff, Shean P [Knoxville, TN; Parks, II, James E

    2008-12-30

    A method and system controls engine-out exhaust species of a combustion engine having a plurality of cylinders. The method typically includes various combinations of steps such as controlling combustion parameters in individual cylinders, grouping the individual cylinders into a lean set and a rich set of one or more cylinders, combusting the lean set in a lean combustion parameter condition having a lean air:fuel equivalence ratio, combusting the rich set in a rich combustion parameter condition having a rich air:fuel equivalence ratio, and adjusting the lean set and the rich set of one or more cylinders to generate net-lean combustion. The exhaust species may have elevated concentrations of hydrogen and oxygen.

  14. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-01-01

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler Item Response Theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in R (Chalmers, 2012).
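
    For reference, the four-parameter logistic item response function extends the three-parameter model with an upper asymptote u (a "slipping" parameter); with discrimination a, difficulty b, and lower (guessing) asymptote g:

    ```latex
    P(X_{ij} = 1 \mid \theta_i) \;=\; g_j + (u_j - g_j)\,
      \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}
    ```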

  15. Beyond Vmax and Km: How details of enzyme function influence geochemical cycles

    NASA Astrophysics Data System (ADS)

    Steen, A. D.

    2015-12-01

    Enzymes catalyze the vast majority of chemical reactions relevant to geomicrobiology. Studies of the activities of enzymes in environmental systems often report Vmax (the maximum possible rate of reaction, often proportional to the concentration of enzymes in the system) and sometimes Km (a measure of the affinity between enzymes and their substrates). However, enzyme studies - particularly those related to enzymes involved in organic carbon oxidation - are often limited to those parameters alone, and to a relatively small and mixed set of enzymes. Here I will discuss some novel methods to assay and characterize the specific sets of enzymes that may be important to the carbon cycle in aquatic environments. First, kinetic experiments revealed the collective properties of the complex mixtures of extracellular peptidases that occur where microbial communities are diverse. Crystal structures combined with biochemical characterization of specific enzymes can yield more detailed information about key steps in organic carbon transformations. These new techniques have the potential to provide mechanistic grounding to geomicrobiological models.

  16. Structure-related statistical singularities along protein sequences: a correlation study.

    PubMed

    Colafranceschi, Mauro; Colosimo, Alfredo; Zbilut, Joseph P; Uversky, Vladimir N; Giuliani, Alessandro

    2005-01-01

    A data set composed of 1141 proteins, representative of all eukaryotic protein sequences in the Swiss-Prot Protein Knowledgebase, was coded by seven physicochemical properties of amino acid residues. The resulting numerical profiles were submitted to correlation analysis after the application of a linear (simple mean) and a nonlinear (Recurrence Quantification Analysis, RQA) filter. The main RQA variables, Recurrence and Determinism, were subsequently analyzed by Principal Component Analysis. The RQA descriptors showed that (i) specific information is embedded within protein sequences that is present neither in the codes nor in the amino acid composition, and (ii) the most sensitive code for detecting ordered recurrent (deterministic) patterns of residues in protein sequences is the Miyazawa-Jernigan hydrophobicity scale. The most deterministic proteins in terms of the autocorrelation properties of their primary structures were found (i) to be involved in protein-protein and protein-DNA interactions and (ii) to display a significantly higher proportion of structural disorder with respect to the average of the data set. A study of the scaling behavior of the average determinism with the setting parameters of RQA (embedding dimension and radius) allows for the identification of patterns of minimal length (six residues) as possible markers of zones specifically prone to inter- and intramolecular interactions.
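
    A compact sketch of the two RQA quantifiers named above — recurrence (the density of recurrent points) and determinism (the fraction of recurrent points forming diagonal lines) — computed on a numerically coded profile. The embedding dimension, radius, and minimum line length are the "setting parameters" varied in the study; the values and the random stand-in profile are assumptions, and full implementations customarily exclude the main diagonal.

    ```python
    import numpy as np

    def rqa(series, m=3, radius=0.5, lmin=2):
        # delay embedding and pairwise distances
        emb = np.array([series[i:i + m] for i in range(len(series) - m + 1)])
        d = np.linalg.norm(emb[:, None] - emb[None, :], axis=2)
        R = d <= radius                                # recurrence matrix
        n = len(R)
        rec = R.sum() / n ** 2                         # recurrence rate
        det_points = 0
        for off in range(-(n - 1), n):                 # scan every diagonal
            run = 0
            for v in list(np.diagonal(R, offset=off)) + [False]:  # sentinel ends runs
                if v:
                    run += 1
                else:
                    if run >= lmin:
                        det_points += run              # points on lines >= lmin
                    run = 0
        det = det_points / max(R.sum(), 1)             # determinism
        return rec, det

    hydrophobicity = np.random.default_rng(0).normal(size=120)  # stand-in profile
    print(rqa(hydrophobicity))
    ```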

  17. Automated diagnosis of coronary artery disease based on data mining and fuzzy modeling.

    PubMed

    Tsipouras, Markos G; Exarchos, Themis P; Fotiadis, Dimitrios I; Kotsia, Anna P; Vakalis, Konstantinos V; Naka, Katerina K; Michalis, Lampros K

    2008-07-01

    A fuzzy rule-based decision support system (DSS) is presented for the diagnosis of coronary artery disease (CAD). The system is automatically generated from an initial annotated dataset, using a four-stage methodology: 1) induction of a decision tree from the data; 2) extraction of a set of rules from the decision tree, in disjunctive normal form, and formulation of a crisp model; 3) transformation of the crisp set of rules into a fuzzy model; and 4) optimization of the parameters of the fuzzy model. The dataset used for the DSS generation and evaluation consists of 199 subjects, each characterized by 19 features, including demographic and history data as well as laboratory examinations. Tenfold cross-validation is employed, and the average sensitivity and specificity obtained are 62% and 54%, respectively, using the set of rules extracted from the decision tree (first and second stages), while the average sensitivity and specificity increase to 80% and 65%, respectively, when the fuzzification and optimization stages are used. The system offers several advantages since it is automatically generated, it provides CAD diagnosis based on easily and noninvasively acquired features, and it is able to provide interpretation for the decisions made.
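
    Stages 1 and 2 of the methodology can be sketched with a standard toolkit: induce a decision tree and read off crisp rules, each root-to-leaf path being one conjunct of the disjunctive normal form. The synthetic data stand in for the 199-subject, 19-feature CAD dataset; fuzzification and optimization (stages 3 and 4) are not shown.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=199, n_features=19,
                               n_informative=6, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree))   # each root-to-leaf path is one conjunctive rule
    ```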

  18. Impact of Uncertainties in Meteorological Forcing Data and Land Surface Parameters on Global Estimates of Terrestrial Water Balance Components

    NASA Astrophysics Data System (ADS)

    Nasonova, O. N.; Gusev, Ye. M.; Kovalev, Ye. E.

    2009-04-01

    Global estimates of the components of the terrestrial water balance depend on the technique of estimation and on the global observational data sets used for this purpose. Land surface modelling is an up-to-date and powerful tool for such estimates. However, the results of modelling are affected by the quality of both the model and the input information (including meteorological forcing data and model parameters). The latter is based on available global data sets containing meteorological data, land-use information, and soil and vegetation characteristics. There are now many global data sets, which differ in spatial and temporal resolution, as well as in accuracy and reliability. Evidently, uncertainties in global data sets will influence the results of model simulations, but to what extent? The present work is an attempt to investigate this issue. The work is based on the land surface model SWAP (Soil Water - Atmosphere - Plants) and global 1-degree data sets of meteorological forcing data and land surface parameters, provided within the framework of the Second Global Soil Wetness Project (GSWP-2). The 3-hourly near-surface meteorological data (for the period from 1 July 1982 to 31 December 1995) are based on reanalyses and gridded observational data used in the International Satellite Land-Surface Climatology Project (ISLSCP) Initiative II. Following the GSWP-2 strategy, we used a number of alternative global forcing data sets to perform different sensitivity experiments (with six alternative versions of precipitation, four versions of radiation, two pure reanalysis products and two fully hybridized products of meteorological data). To reveal the influence of model parameters on the simulations, in addition to the GSWP-2 parameter data sets we produced two alternative global data sets with soil parameters on the basis of their relationships with the clay and sand content of a soil. Sensitivity experiments with three different sets of parameters were then performed. As a result, 16 variants of global annual estimates of water balance components were obtained. Application of alternative data sets on radiation, precipitation, and soil parameters allowed us to reveal the influence of uncertainties in input data on global estimates of water balance components.

  19. The clinical and economic impact of point-of-care CD4 testing in mozambique and other resource-limited settings: a cost-effectiveness analysis.

    PubMed

    Hyle, Emily P; Jani, Ilesh V; Lehe, Jonathan; Su, Amanda E; Wood, Robin; Quevedo, Jorge; Losina, Elena; Bassett, Ingrid V; Pei, Pamela P; Paltiel, A David; Resch, Stephen; Freedberg, Kenneth A; Peter, Trevor; Walensky, Rochelle P

    2014-09-01

    Point-of-care CD4 tests at HIV diagnosis could improve linkage to care in resource-limited settings. Our objective is to evaluate the clinical and economic impact of point-of-care CD4 tests compared to laboratory-based tests in Mozambique. We use a validated model of HIV testing, linkage, and treatment (CEPAC-International) to examine two strategies of immunological staging in Mozambique: (1) laboratory-based CD4 testing (LAB-CD4) and (2) point-of-care CD4 testing (POC-CD4). Model outcomes include 5-y survival, life expectancy, lifetime costs, and incremental cost-effectiveness ratios (ICERs). Input parameters include linkage to care (LAB-CD4, 34%; POC-CD4, 61%), probability of correctly detecting antiretroviral therapy (ART) eligibility (sensitivity: LAB-CD4, 100%; POC-CD4, 90%) or ART ineligibility (specificity: LAB-CD4, 100%; POC-CD4, 85%), and test cost (LAB-CD4, US$10; POC-CD4, US$24). In sensitivity analyses, we vary POC-CD4-specific parameters, as well as cohort and setting parameters to reflect a range of scenarios in sub-Saharan Africa. We consider ICERs less than three times the per capita gross domestic product in Mozambique (US$570) to be cost-effective, and ICERs less than one times the per capita gross domestic product in Mozambique to be very cost-effective. Projected 5-y survival in HIV-infected persons with LAB-CD4 is 60.9% (95% CI, 60.9%-61.0%), increasing to 65.0% (95% CI, 64.9%-65.1%) with POC-CD4. Discounted life expectancy and per person lifetime costs with LAB-CD4 are 9.6 y (95% CI, 9.6-9.6 y) and US$2,440 (95% CI, US$2,440-US$2,450) and increase with POC-CD4 to 10.3 y (95% CI, 10.3-10.3 y) and US$2,800 (95% CI, US$2,790-US$2,800); the ICER of POC-CD4 compared to LAB-CD4 is US$500/year of life saved (YLS) (95% CI, US$480-US$520/YLS). POC-CD4 improves clinical outcomes and remains near the very cost-effective threshold in sensitivity analyses, even if point-of-care CD4 tests have lower sensitivity/specificity and higher cost than published values. In other resource-limited settings with fewer opportunities to access care, POC-CD4 has a greater impact on clinical outcomes and remains cost-effective compared to LAB-CD4. Limitations of the analysis include the uncertainty around input parameters, which is examined in sensitivity analyses. The potential added benefits due to decreased transmission are excluded; their inclusion would likely further increase the value of POC-CD4 compared to LAB-CD4. POC-CD4 at the time of HIV diagnosis could improve survival and be cost-effective compared to LAB-CD4 in Mozambique, if it improves linkage to care. POC-CD4 could have the greatest impact on mortality in settings where resources for HIV testing and linkage are most limited. Please see later in the article for the Editors' Summary.
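
    The reported ICER follows directly from the reported means, as a back-of-the-envelope check (the small discrepancy reflects rounding of the published inputs):

    ```latex
    \mathrm{ICER} \;=\; \frac{\Delta\,\text{lifetime cost}}{\Delta\,\text{life expectancy}}
      \;\approx\; \frac{\$2{,}800 - \$2{,}440}{10.3\ \text{y} - 9.6\ \text{y}}
      \;\approx\; \$514/\text{YLS} \;\approx\; \text{US\$500/YLS as reported.}
    ```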

  20. Clinically relevant hypoglycemia prediction metrics for event mitigation.

    PubMed

    Harvey, Rebecca A; Dassau, Eyal; Zisser, Howard C; Bevier, Wendy; Seborg, Dale E; Jovanovič, Lois; Doyle, Francis J

    2012-08-01

    The purpose of this study was to develop a method to compare hypoglycemia prediction algorithms and choose parameter settings for different applications, such as triggering insulin pump suspension or alerting for rescue carbohydrate treatment. Hypoglycemia prediction algorithms with different parameter settings were implemented on an ambulatory dataset containing 490 days from 30 subjects with type 1 diabetes mellitus using the Dexcom™ (San Diego, CA) SEVEN™ continuous glucose monitoring system. The performance was evaluated using a proposed set of metrics representing the true-positive ratio, false-positive rate, and distribution of warning times. A prospective, in silico study was performed to show the effect of using different parameter settings to prevent or rescue from hypoglycemia. The retrospective study results suggest the parameter settings for different methods of hypoglycemia mitigation. When rescue carbohydrates are used, a high true-positive ratio, a minimal false-positive rate, and alarms with short warning time are desired. These objectives were met with a 30-min prediction horizon and two successive flags required to alarm: 78% of events were detected with 3.0 false alarms/day and 66% probability of alarms occurring within 30 min of the event. This parameter setting selection was confirmed in silico: treating with rescue carbohydrates reduced the duration of hypoglycemia from 14.9% to 0.5%. However, for a different method, such as pump suspension, this parameter setting only reduced hypoglycemia to 8.7%, as can be expected by the low probability of alarming more than 30 min ahead. The proposed metrics allow direct comparison of hypoglycemia prediction algorithms and selection of parameter settings for different types of hypoglycemia mitigation, as shown in the prospective in silico study in which hypoglycemia was alerted or treated with rescue carbohydrates.
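
    The alarm logic described — predict glucose over a 30-minute horizon and alarm only after two successive flags — can be sketched as below. The linear extrapolation, 5-minute sample interval, and 70 mg/dL threshold are illustrative assumptions, not the study's specific prediction algorithms.

    ```python
    import numpy as np

    def alarms(glucose, dt_min=5, horizon_min=30, threshold=70, flags_needed=2):
        slope = np.diff(glucose) / dt_min               # mg/dL per minute
        predicted = glucose[1:] + slope * horizon_min   # extrapolate 30 min ahead
        flagged = predicted < threshold
        fired, run = [], 0
        for i, f in enumerate(flagged):
            run = run + 1 if f else 0                   # count successive flags
            if run >= flags_needed:
                fired.append(i + 1)                     # index into glucose trace
        return fired

    trace = np.array([160, 150, 138, 124, 108, 90, 78, 70, 66], float)
    print(alarms(trace))  # sample indices at which an alarm would fire
    ```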

  1. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
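
    A minimal sketch of the MESS model itself, e^{αW} y = Xβ + ε: for a trial α, transform y with the matrix exponential of a row-standardized weight matrix and fit β by least squares (for zero-diagonal W the Jacobian of the transformation is exactly 1, so profiling the residual sum of squares over α is legitimate). The grid search stands in for the paper's Bayesian estimation, and all data are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    n = 50
    W = rng.random((n, n)); np.fill_diagonal(W, 0)
    W /= W.sum(axis=1, keepdims=True)                # row-standardized weights
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta_true, alpha_true = np.array([1.0, 2.0]), -0.5
    y = np.linalg.solve(expm(alpha_true * W), X @ beta_true + rng.normal(0, 0.1, n))

    best = (np.inf, None, None)
    for alpha in np.linspace(-1.5, 1.5, 61):         # profile over alpha
        y_t = expm(alpha * W) @ y                    # MESS transformation
        beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        rss = np.sum((y_t - X @ beta) ** 2)
        best = min(best, (rss, alpha, beta), key=lambda t: t[0])
    print("alpha ~", round(best[1], 2), "beta ~", best[2].round(2))
    ```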

  2. Considerations for setting the specifications of vaccines.

    PubMed

    Minor, Philip

    2012-05-01

    The specifications of vaccines are determined by the particular product and its method of manufacture, which raise issues unique to the vaccine in question. However, the general principles are shared, including the need to have sufficient active material to immunize a very high proportion of recipients, an acceptable level of safety, which may require specific testing or may come from the production process, and an acceptable low level of contamination with unwanted materials, which may include infectious agents or materials used in production. These principles apply to the earliest smallpox vaccines and the most recent recombinant vaccines, such as those against HPV. Manufacturing development includes more precise definitions of the product through improved tests and tighter control of the process parameters. Good manufacturing practice plays a major role, which is likely to increase in importance in assuring product quality almost independent of end-product specifications.

  3. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    NASA Astrophysics Data System (ADS)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

    Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear at distinctive values of Tsallis' characteristic real parameter q, on a countable set of rational numbers of the q-line. These poles are handled here with dimensional regularization techniques, and their physical effects on the specific heats are studied for the two-body classical gravitation potential.
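
    As background, the textbook Tsallis building blocks referred to here are the q-exponential and the resulting q-deformed quantifiers (standard definitions in the simplest, non-escort averaging convention; not reproduced from the paper):

        \begin{align}
          e_q(x) &= \left[\,1 + (1-q)\,x\,\right]_{+}^{\frac{1}{1-q}},
          \qquad e_1(x) = e^{x},\\
          Z_q &= \int d\Omega\; e_q(-\beta \mathcal{H}),
          \qquad
          \langle U \rangle_q = \frac{1}{Z_q}\int d\Omega\; \mathcal{H}\, e_q(-\beta \mathcal{H}) .
        \end{align}

    For classical Hamiltonians the phase-space integrals evaluate to Gamma-function ratios in q, which is roughly where poles at particular rational q can originate.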

  4. The effect of air flow, panel curvature, and internal pressurization on field-incidence transmission loss. [acoustic propagation through aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Koval, L. R.

    1975-01-01

    In the context of sound transmission through aircraft fuselage panels, equations for the field-incidence transmission loss (TL) of a single-walled panel are derived that include the effects of external air flow, panel curvature, and internal fuselage pressurization. These effects are incorporated into the classical equations for the TL of single panels, and the resulting double integral for field-incidence TL is numerically evaluated for a specific set of parameters.

  5. Identification of D-region ledges of ionization by LF/VLF observations during a meteor shower

    NASA Astrophysics Data System (ADS)

    Rumi, G. C.

    1982-09-01

    It is shown that the perturbation produced by a meteor shower on a pair of LF and/or VLF standard frequency signals received after reflection on the lower ionosphere can be used for the detection of the nocturnal ledges which characterize the ionization profile of the D-region. The theoretical developments are applied to two specific sets of experimental data and the height and ionization of the ledges, together with other associated parameters, are determined.

  6. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces

    PubMed Central

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2010-01-01

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm’s behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method. PMID:20182556
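
    As a concrete illustration of the underlying iteration, here is a generic relaxed subgradient projection sweep with a fixed relaxation parameter (the paper's contribution, the self-adapting control of that parameter, is not reproduced here):

        # Find x with f_i(x) <= 0 for all i by cyclic relaxed subgradient
        # projections; lam in (0, 2) is the relaxation parameter.
        import numpy as np

        def sg_projection(x, f, g, lam=1.0):
            fx = f(x)
            if fx <= 0:
                return x                  # constraint already satisfied
            grad = g(x)
            return x - lam * fx / np.dot(grad, grad) * grad

        constraints = [                   # two convex sets in R^2
            (lambda x: np.dot(x, x) - 1.0, lambda x: 2.0 * x),        # ||x||^2 <= 1
            (lambda x: x[0] - 0.5, lambda x: np.array([1.0, 0.0])),   # x_0 <= 0.5
        ]

        x = np.array([3.0, 2.0])
        for _ in range(50):               # cyclic sweeps over the constraints
            for f, g in constraints:
                x = sg_projection(x, f, g)
        print(x)                          # approaches the intersection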

  7. Analytical redundancy management mechanization and flight data analysis for the F-8 digital fly-by-wire aircraft flight control sensors

    NASA Technical Reports Server (NTRS)

    Deckert, J. C.

    1983-01-01

    The details are presented of an onboard digital computer algorithm designed to reliably detect and isolate the first failure in a duplex set of flight control sensors aboard the NASA F-8 digital fly-by-wire aircraft. The algorithm's successful flight test program is summarized, and specific examples are presented of algorithm behavior in response to software-induced signal faults, both with and without aircraft parameter modeling errors.

  8. On The Behavior of Subgradient Projections Methods for Convex Feasibility Problems in Euclidean Spaces.

    PubMed

    Butnariu, Dan; Censor, Yair; Gurfil, Pini; Hadar, Ethan

    2008-07-03

    We study some methods of subgradient projections for solving a convex feasibility problem with general (not necessarily hyperplanes or half-spaces) convex sets in the inconsistent case and propose a strategy that controls the relaxation parameters in a specific self-adapting manner. This strategy leaves enough user-flexibility but gives a mathematical guarantee for the algorithm's behavior in the inconsistent case. We present numerical results of computational experiments that illustrate the computational advantage of the new method.

  9. Observational and Data Reduction Techniques to Optimize Mineralogical Characterizations of Asteroid Surface Materials

    NASA Technical Reports Server (NTRS)

    Gaffey, M. J.

    2003-01-01

    Mineralogy is the key to determining the compositional history of the asteroids and to determining the genetic relationships between the asteroids and meteorites. The most sophisticated remote mineralogical characterizations involve the quantitative extraction of specific diagnostic parameters from reflectance spectra and the use of quantitative interpretive calibrations to determine the presence, abundance and/or composition of mineral phases in a surface material. Although this approach is potentially subject to systematic errors, it provides the only consistent set of asteroid surface material characterizations.

  10. New fundamental parameters for attitude representation

    NASA Astrophysics Data System (ADS)

    Patera, Russell P.

    2017-08-01

    A new attitude parameter set is developed to clarify the geometry of combining finite rotations in a rotational sequence and in combining infinitesimal angular increments generated by angular rate. The resulting parameter set of six Pivot Parameters represents a rotation as a great circle arc on a unit sphere that can be located at any clocking location in the rotation plane. Two rotations are combined by linking their arcs at either of the two intersection points of the respective rotation planes. In a similar fashion, linking rotational increments produced by angular rate is used to derive the associated kinematical equations, which are linear and have no singularities. Included in this paper is the derivation of twelve Pivot Parameter elements that represent all twelve Euler Angle sequences, which enables efficient conversions between Pivot Parameters and any Euler Angle sequence. Applications of this new parameter set include the derivation of quaternions and the quaternion composition rule, as well as the derivation of the analytical solution to time dependent coning motion. The relationships between Pivot Parameters and traditional parameter sets are included in this work. Pivot Parameters are well suited for a variety of aerospace applications due to their effective composition rule, singularity free kinematic equations, efficient conversion to and from Euler Angle sequences and clarity of their geometrical foundation.

  11. Investigating the Temperature Problem in Narrow Line Emitting AGN

    NASA Astrophysics Data System (ADS)

    Jenkins, Sam; Richardson, Chris T.

    2018-06-01

    Our research investigates the physical conditions in gas clouds around the narrow line region of AGN. Specifically, we explore the necessary conditions for anomalously high electron temperatures, Te, in those clouds. Our 321 galaxy data set was acquired from SDSS DR14 after requiring S/N > 5.0 in [OIII] 4363 and S/N > 3.0 in all BPT diagram emission lines, to ensure both accurate Te and galaxy classification, with 0.04 < z < 1.0. Interestingly, our data set contained no LINERs. We ran simulations using the simulation code Cloudy, and focused on matching the emission exhibited by the hottest of the 70 AGN in our data set. We used multicore computing to cut down on run time, which drastically improved the efficiency of our simulations. We varied hydrogen density, ionization parameter, and metallicity, Z, only to find these three parameters alone were incapable of recreating anomalously high Te, but successfully matched galaxies showing low- to moderate Te. These highest temperature simulations were at low Z, and were able to facilitate higher temperatures because they avoided the cooling effects of high Z. Our most successful simulations varied Z and grain content, which matched approximately 10% of our high temperature data. Our simulations with the highest grain content produced the highest Te because of the photoelectric heating effect that grains provide, which we confirmed by monitoring each heating mechanism as a function of depth. In the near future, we plan to run simulations varying grain content and ionization parameter in order to study the effects these conditions have on gas cloud Te.

  12. Space Station Food System

    NASA Technical Reports Server (NTRS)

    Thurmond, Beverly A.; Gillan, Douglas J.; Perchonok, Michele G.; Marcus, Beth A.; Bourland, Charles T.

    1986-01-01

    A team of engineers and food scientists from NASA, the aerospace industry, food companies, and academia are defining the Space Station Food System. The team identified the system requirements based on an analysis of past and current space food systems, food systems from isolated environment communities that resemble Space Station, and the projected Space Station parameters. The team is resolving conflicts among requirements through the use of trade-off analyses. The requirements will give rise to a set of specifications which, in turn, will be used to produce concepts. Concept verification will include testing of prototypes, both in 1-g and microgravity. The end-item specification provides an overall guide for assembling a functional food system for Space Station.

  13. Guidelines for Standardized Testing of Broadband Seismometers and Accelerometers

    USGS Publications Warehouse

    Hutt, Charles R.; Evans, John R.; Followill, Fred; Nigbor, Robert L.; Wielandt, Erhard

    2010-01-01

    Testing and specification of seismic and earthquake-engineering sensors and recorders has been marked by significant variations in procedures and selected parameters. These variations cause difficulty in comparing such specifications and test results. In July 1989, and again in May 2005, the U.S. Geological Survey hosted international public/private workshops with the goal of defining widely accepted guidelines for the testing of seismological inertial sensors, seismometers, and accelerometers. The Proceedings of the 2005 workshop have been published and include as appendix 6 the report of the 1989 workshop. This document represents a collation and rationalization of a single set of formal guidelines for testing and specifying broadband seismometers and accelerometers.

  14. Adaptive Local Realignment of Protein Sequences.

    PubMed

    DeBlasio, Dan; Kececioglu, John

    2018-06-11

    While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
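
    The control flow described above can be sketched as follows; estimate_accuracy and realign are hypothetical stubs standing in for the Facet estimator and the Opal aligner, kept trivial so the loop runs:

        # Sketch of adaptive local realignment: keep regions whose estimated
        # accuracy is acceptable, otherwise try candidate parameter settings
        # and keep the realignment with the best estimated accuracy.
        def estimate_accuracy(region):
            return region["accuracy"]                     # stub estimator

        def realign(region, params):
            return {"accuracy": region["accuracy"] + params["gain"]}  # stub aligner

        def adaptive_local_realignment(regions, candidate_params, threshold=0.7):
            result = []
            for region in regions:
                best = region
                if estimate_accuracy(region) < threshold:
                    for params in candidate_params:
                        cand = realign(region, params)
                        if estimate_accuracy(cand) > estimate_accuracy(best):
                            best = cand
                result.append(best)
            return result

        regions = [{"accuracy": 0.9}, {"accuracy": 0.5}]
        params = [{"gain": 0.1}, {"gain": 0.3}]
        print(adaptive_local_realignment(regions, params))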

  15. Detection of nuclei in 4D Nomarski DIC microscope images of early Caenorhabditis elegans embryos using local image entropy and object tracking

    PubMed Central

    Hamahashi, Shugo; Onami, Shuichi; Kitano, Hiroaki

    2005-01-01

    Background The ability to detect nuclei in embryos is essential for studying the development of multicellular organisms. A system of automated nuclear detection has already been tested on a set of four-dimensional (4D) Nomarski differential interference contrast (DIC) microscope images of Caenorhabditis elegans embryos. However, the system needed laborious hand-tuning of its parameters every time a new image set was used. It could not detect nuclei in the process of cell division, and could detect nuclei only from the two- to eight-cell stages. Results We developed a system that automates the detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. Local image entropy is used to produce regions of the images that have the image texture of the nucleus. From these regions, those that actually detect nuclei are manually selected at the first and last time points of the image set, and an object-tracking algorithm then selects regions that detect nuclei in between the first and last time points. The use of local image entropy makes the system applicable to multiple image sets without the need to change its parameter values. The use of an object-tracking algorithm enables the system to detect nuclei in the process of cell division. The system detected nuclei with high sensitivity and specificity from the one- to 24-cell stages. Conclusion A combination of local image entropy and an object-tracking algorithm enabled highly objective and productive detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. The system will facilitate genomic and computational analyses of C. elegans embryos. PMID:15910690
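
    A minimal sketch of the local-image-entropy computation at the core of the method (window size and bin count are illustrative choices, not the paper's settings):

        # Shannon entropy of the gray-level histogram in a window around
        # each pixel; textured regions (like nuclei in DIC images) score
        # higher than flat background.
        import numpy as np

        def local_entropy(image, window=7, bins=16):
            half = window // 2
            padded = np.pad(image, half, mode="reflect")
            out = np.zeros(image.shape, dtype=float)
            for i in range(image.shape[0]):
                for j in range(image.shape[1]):
                    patch = padded[i:i + window, j:j + window]
                    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
                    p = hist[hist > 0] / hist.sum()
                    out[i, j] = -(p * np.log2(p)).sum()
            return out

        img = np.zeros((32, 32))
        img[8:24, 8:24] = np.random.randint(0, 256, (16, 16))  # textured block
        ent = local_entropy(img)
        print(ent[16, 16], ent[2, 2])  # high inside the texture, ~0 outside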

  16. Continuous-time interval model identification of blood glucose dynamics for type 1 diabetes

    NASA Astrophysics Data System (ADS)

    Kirchsteiger, Harald; Johansson, Rolf; Renard, Eric; del Re, Luigi

    2014-07-01

    While good physiological models of the glucose metabolism in type 1 diabetic patients are well known, their parameterisation is difficult. The high intra-patient variability observed is a further major obstacle. This holds for data-based models too, so that no good patient-specific models are available. Against this background, this paper proposes the use of interval models to cover the different metabolic conditions. The control-oriented models contain a carbohydrate and insulin sensitivity factor to be used for insulin bolus calculators directly. Available clinical measurements were sampled on an irregular schedule which prompts the use of continuous-time identification, also for the direct estimation of the clinically interpretable factors mentioned above. An identification method is derived and applied to real data from 28 diabetic patients. Model estimation was done on a clinical data-set, whereas validation results shown were done on an out-of-clinic, everyday life data-set. The results show that the interval model approach allows a much more regular estimation of the parameters and avoids physiologically incompatible parameter estimates.
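
    For context, the textbook bolus-calculator arithmetic in which such factors enter directly is shown below; an interval-model variant would propagate interval bounds on the two factors instead of point values. The numbers are illustrative only:

        # Standard bolus calculation from a carbohydrate ratio (CR, grams
        # covered per unit insulin) and an insulin sensitivity factor
        # (ISF, mg/dL drop per unit insulin).
        def insulin_bolus(carbs_g, glucose, target, carb_ratio, isf):
            meal_bolus = carbs_g / carb_ratio
            correction = max(glucose - target, 0.0) / isf
            return meal_bolus + correction

        # 60 g meal, glucose 180 mg/dL, target 110, CR = 10 g/U, ISF = 35:
        print(insulin_bolus(60, 180, 110, 10, 35))   # 6 U meal + 2 U correction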

  17. A Database of Herbaceous Vegetation Responses to Elevated Atmospheric CO2 (NDP-073)

    DOE Data Explorer

    Jones, Michael H [The Ohio State Univ., Columbus, OH (United States); Curtis, Peter S [The Ohio State Univ., Columbus, OH (United States); Cushman, Robert M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Brenkert, Antoinette L [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    1999-01-01

    To perform a statistically rigorous meta-analysis of research results on the response by herbaceous vegetation to increased atmospheric CO2 levels, a multiparameter database of responses was compiled from the published literature. Seventy-eight independent CO2-enrichment studies, covering 53 species and 26 response parameters, reported mean response, sample size, and variance of the response (either as standard deviation or standard error). An additional 43 studies, covering 25 species and 6 response parameters, did not report variances. This numeric data package accompanies the Carbon Dioxide Information Analysis Center's (CDIAC's) NDP-072, which provides similar information for woody vegetation. This numeric data package contains a 30-field data set of CO2-exposure experiment responses by herbaceous plants (as both a flat ASCII file and a spreadsheet file), files listing the references to the CO2-exposure experiments and specific comments relevant to the data in the data sets, and this documentation file (which includes SAS and Fortran codes to read the ASCII data file; SAS is a registered trademark of the SAS Institute, Inc., Cary, North Carolina 27511).

  18. Statistical symmetries of the Lundgren-Monin-Novikov hierarchy.

    PubMed

    Wacławczyk, Marta; Staffolani, Nicola; Oberlack, Martin; Rosteck, Andreas; Wilczek, Michael; Friedrich, Rudolf

    2014-07-01

    It was shown by Oberlack and Rosteck [Discr. Cont. Dyn. Sys. S 3, 451 (2010)] that the infinite set of multipoint correlation (MPC) equations of turbulence admits a considerably extended set of Lie point symmetries compared to the Galilean group, which is implied by the original set of equations of fluid mechanics. Specifically, a new scaling group and an infinite set of translational groups of all multipoint correlation tensors have been discovered. These new statistical groups have important consequences for our understanding of turbulent scaling laws as they are essential ingredients of, e.g., the logarithmic law of the wall and other scaling laws, which in turn are exact solutions of the MPC equations. In this paper we first show that the infinite set of translational groups of all multipoint correlation tensors corresponds to an infinite dimensional set of translations under which the Lundgren-Monin-Novikov (LMN) hierarchy of equations for the probability density functions (PDF) is left invariant. Second, we derive a symmetry for the LMN hierarchy which is analogous to the scaling group of the MPC equations. Most importantly, we show that this symmetry is a measure of the intermittency of the velocity signal and that the transformed functions represent PDFs of an intermittent (i.e., turbulent or nonturbulent) flow. Interestingly, the positivity of the PDF puts a constraint on the group parameters of both the shape and intermittency symmetries, leading to two conclusions. First, the latter symmetries may no longer be Lie groups, as under certain conditions group properties are violated; still, they remain symmetries of the LMN equations. Second, as the latter two symmetries in their MPC versions are ingredients of many scaling laws such as the log law, the above constraints implicitly put weak conditions on scaling parameters such as the von Kármán constant κ, as they are functions of the group parameters. Finally, let us note that these kinds of statistical symmetries are of a much more general type, i.e., not limited to MPC or PDF equations emerging from Navier-Stokes, but instead are admitted by other nonlinear partial differential equations such as the Burgers equation when in conservative form and when the nonlinearity is quadratic.

  19. A case study on the in silico absorption simulations of levothyroxine sodium immediate-release tablets.

    PubMed

    Kocic, Ivana; Homsek, Irena; Dacevic, Mirjana; Grbic, Sandra; Parojcic, Jelena; Vucicevic, Katarina; Prostran, Milica; Miljkovic, Branislava

    2012-04-01

    The aim of this case study was to develop a drug-specific absorption model for levothyroxine (LT4) using mechanistic gastrointestinal simulation technology (GIST) implemented in the GastroPlus™ software package. The required input parameters were determined experimentally, predicted in silico, and/or taken from the literature. The simulated plasma profile was in good agreement with the data observed in the in vivo bioequivalence study, indicating that the GIST model gave an accurate prediction of LT4 oral absorption. Additionally, plasma concentration-time profiles were simulated based on a set of experimental and virtual in vitro dissolution data in order to estimate the influence of different in vitro drug dissolution kinetics on the simulated plasma profiles and to identify a biorelevant dissolution specification for LT4 immediate-release (IR) tablets. A set of experimental and virtual in vitro data was also used for correlation purposes. An in vitro-in vivo correlation model based on the convolution approach was applied in order to assess the relationship between the in vitro and in vivo data. The obtained results suggest that a dissolution specification of more than 85% LT4 dissolved in 60 min might be considered a biorelevant criterion for LT4 IR tablets. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Deep Learning Role in Early Diagnosis of Prostate Cancer

    PubMed Central

    Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman

    2018-01-01

    The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and extracted features from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs 3 major processing steps. First, the prostate is delineated using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, diffusion parameters are estimated and normalized: the apparent diffusion coefficients of the delineated prostate volumes at different b values are computed and then refined using a generalized Gaussian Markov random field model. Then, the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values are constructed. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicates the promise of the presented computer-aided diagnostic system. PMID:29804518

  1. Ab initio-informed maximum entropy modeling of rovibrational relaxation and state-specific dissociation with application to the O{sub 2} + O system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulakhmetov, Marat, E-mail: mkulakhm@purdue.edu; Alexeenko, Alina, E-mail: alexeenk@purdue.edu; Gallis, Michael, E-mail: magalli@sandia.gov

    Quasi-classical trajectory (QCT) calculations are used to study state-specific ro-vibrational energy exchange and dissociation in the O{sub 2} + O system. Atom-diatom collisions with energy between 0.1 and 20 eV are calculated with a double many body expansion potential energy surface by Varandas and Pais [Mol. Phys. 65, 843 (1988)]. Inelastic collisions favor mono-quantum vibrational transitions at translational energies above 1.3 eV although multi-quantum transitions are also important. Post-collision vibrational favoring decreases first exponentially and then linearly as Δv increases. Vibrationally elastic collisions (Δv = 0) favor small ΔJ transitions while vibrationally inelastic collisions have equilibrium post-collision rotational distributions. Dissociation exhibits both vibrational and rotational favoring. New vibrational-translational (VT), vibrational-rotational-translational (VRT) energy exchange, and dissociation models are developed based on QCT observations and maximum entropy considerations. Full set of parameters for state-to-state modeling of oxygen is presented. The VT energy exchange model describes 22 000 state-to-state vibrational cross sections using 11 parameters and reproduces vibrational relaxation rates within 30% in the 2500–20 000 K temperature range. The VRT model captures 80 × 10{sup 6} state-to-state ro-vibrational cross sections using 19 parameters and reproduces vibrational relaxation rates within 60% in the 5000–15 000 K temperature range. The developed dissociation model reproduces state-specific and equilibrium dissociation rates within 25% using just 48 parameters. The maximum entropy framework makes it feasible to upscale ab initio simulation to full nonequilibrium flow calculations.

  2. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
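
    The hierarchical filtering idea can be sketched as follows; the constraint functions and thresholds are hypothetical placeholders, not the DHSVM application:

        # Sample many parameter sets, then apply constraint classes in
        # sequence (regional signatures, hydrograph fit, internal SWE
        # observations, expert-knowledge pattern), keeping only sets that
        # pass every constraint seen so far.
        import numpy as np

        rng = np.random.default_rng(0)
        parameter_sets = rng.uniform(0.0, 1.0, size=(10_000, 5))

        def regional(p):    return p[0] < 0.8
        def hydrograph(p):  return abs(p[1] - 0.5) < 0.3
        def swe(p):         return p[2] > 0.2
        def gw_pattern(p):  return p[3] < p[4]

        behavioral = list(parameter_sets)
        for constraint in (regional, hydrograph, swe, gw_pattern):
            behavioral = [p for p in behavioral if constraint(p)]
            print(f"{constraint.__name__}: {len(behavioral)} sets remain")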

  3. Characterization and optimization of image quality as a function of reconstruction algorithms and parameter settings in a Siemens Inveon small-animal PET scanner using the NEMA NU 4-2008 standards

    NASA Astrophysics Data System (ADS)

    Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.

    2011-02-01

    The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as percentage standard deviation in the uniform phantom region (%STD_unif), activity recovery coefficients for the FDG-filled rods (RC_rod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SOR_wat and SOR_air). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNR_rod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters, and for OSEM2D and OSEM3D the number of iterations, was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNR_rod for the small-diameter rods was obtained using MAP with uniform variance and β = 0.4. This setting led to RC_rod,1mm = 0.21, RC_rod,2mm = 0.57, %STD_unif = 1.38, SOR_wat = 0.0011, and SOR_air = 0.00086. However, the highest activity recovery for the smallest rods with still very small %STD_unif was obtained using β = 0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). Highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.

  4. Julia Sets in Parameter Spaces

    NASA Astrophysics Data System (ADS)

    Buff, X.; Henriksen, C.

    Given a complex number λ of modulus 1, we show that the bifurcation locus of the one-parameter family {f_b(z) = λz + bz^2 + z^3}, b ∈ ℂ, contains quasi-conformal copies of the quadratic Julia set J(λz + z^2). As a corollary, we show that when the Julia set J(λz + z^2) is not locally connected (for example, when z ↦ λz + z^2 has a Cremer point at 0), the bifurcation locus is not locally connected. To our knowledge, this is the first example of a complex analytic parameter space of dimension 1 with a connected but non-locally connected bifurcation locus. We also show that the set of complex numbers λ of modulus 1, for which at least one of the parameter rays has a non-trivial accumulation set, contains a dense G_δ subset of S^1.

  5. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.

  6. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    PubMed Central

    2010-01-01

    Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce variability in the location of discovered cell populations across samples, and decrease the misclassification (mis-gating) of individual events when compared to default-parameter counterparts. Conclusions Our results indicate that the preferred transformation for fluorescence channels is a parameter-optimized biexponential or generalized Box-Cox, in accordance with current best practices. Interestingly, for populations in the scatter channels, we find that the optimized hyperbolic arcsine may be a better choice in a high-throughput setting than current standard practice of no transformation. However, generally speaking, the choice of transformation remains data-dependent. We have implemented our algorithm in the BioConductor package, flowTrans, which is publicly available. PMID:21050468
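
    The parameter-optimization idea can be illustrated for a single-parameter transformation: choose the arcsinh scale so the transformed data are most nearly Gaussian, with the Jacobian of the transformation included in the likelihood. This is a sketch of the criterion, not the flowTrans implementation:

        # For y = arcsinh(b*x), log|dy/dx| = log(b) - 0.5*log(1 + (b*x)^2);
        # maximize the Gaussian log-likelihood of y plus the Jacobian term.
        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        def neg_log_lik(log_b, x):
            b = np.exp(log_b)
            y = np.arcsinh(b * x)
            jac = np.log(b) - 0.5 * np.log1p((b * x) ** 2)
            return -(norm.logpdf(y, y.mean(), y.std()).sum() + jac.sum())

        x = np.random.default_rng(1).lognormal(4.0, 1.0, 5000)  # skewed toy data
        res = minimize_scalar(neg_log_lik, bounds=(-10, 2), args=(x,),
                              method="bounded")
        print("optimized arcsinh scale b =", np.exp(res.x))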

  7. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    PubMed

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce variability in the location of discovered cell populations across samples, and decrease the misclassification (mis-gating) of individual events when compared to default-parameter counterparts. Our results indicate that the preferred transformation for fluorescence channels is a parameter-optimized biexponential or generalized Box-Cox, in accordance with current best practices. Interestingly, for populations in the scatter channels, we find that the optimized hyperbolic arcsine may be a better choice in a high-throughput setting than current standard practice of no transformation. However, generally speaking, the choice of transformation remains data-dependent. We have implemented our algorithm in the BioConductor package, flowTrans, which is publicly available.

  8. Parameter Estimation for Thurstone Choice Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vojnovic, Milan; Yun, Seyoung

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
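
    For the pair-comparison special case (the Bradley-Terry model mentioned above), maximum likelihood estimation reduces to a simple gradient ascent on the item strengths; a toy sketch:

        # Bradley-Terry MLE: P(w beats l) = sigmoid(theta_w - theta_l);
        # the log-likelihood gradient adds (1 - p) to the winner's strength
        # and subtracts it from the loser's.
        import numpy as np

        comparisons = [(0, 1), (0, 2), (1, 2), (2, 0), (0, 1)]  # (winner, loser)
        theta, lr = np.zeros(3), 0.1
        for _ in range(500):
            grad = np.zeros(3)
            for w, l in comparisons:
                p = 1.0 / (1.0 + np.exp(theta[l] - theta[w]))
                grad[w] += 1.0 - p
                grad[l] -= 1.0 - p
            theta += lr * grad
            theta -= theta.mean()       # strengths are shift-invariant
        print(theta)                    # item 0 (3 wins, 1 loss) ranks highest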

  9. Developing a Screening Algorithm for Type II Diabetes Mellitus in the Resource-Limited Setting of Rural Tanzania.

    PubMed

    West, Caroline; Ploth, David; Fonner, Virginia; Mbwambo, Jessie; Fredrick, Francis; Sweat, Michael

    2016-04-01

    Noncommunicable diseases are on pace to outnumber infectious disease as the leading cause of death in sub-Saharan Africa, yet many questions remain unanswered regarding effective methods of screening for type II diabetes mellitus (DM) in this resource-limited setting. We aim to design a screening algorithm for type II DM that optimizes the sensitivity and specificity of identifying individuals with undiagnosed DM, as well as affordability to health systems and individuals. Baseline demographic and clinical data, including hemoglobin A1c (HbA1c), were collected from 713 participants using probability sampling of the general population. We used these data, along with model parameters obtained from the literature, to mathematically model 8 proposed DM screening algorithms, while optimizing the sensitivity and specificity using Monte Carlo and Latin hypercube simulation. An algorithm that combines risk assessment and measurement of fasting blood glucose was found to be superior for the most resource-limited settings (sensitivity 68%, specificity 99%, and a cost of $2.94 per DM case identified). Incorporating HbA1c testing improves the sensitivity to 75.62% but raises the cost per DM case identified to $6.04. The preferred algorithms are heavily biased to diagnose those with more severe cases of DM. Using basic risk assessment tools and fasting blood sugar testing in lieu of HbA1c testing in resource-limited settings could allow for significantly more feasible DM screening programs with reasonable sensitivity and specificity. Copyright © 2016 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
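
    The serial-screening arithmetic behind such two-stage algorithms can be sketched as follows, assuming conditional independence of the stages; the stage accuracies below are hypothetical, not the study's fitted values:

        # Only stage-1 positives proceed to stage 2, so a case must test
        # positive twice (sensitivities multiply), while a false positive
        # must slip through both stages (specificity improves).
        def serial_screen(sens1, spec1, sens2, spec2):
            sens = sens1 * sens2
            spec = 1.0 - (1.0 - spec1) * (1.0 - spec2)
            return sens, spec

        sens, spec = serial_screen(0.80, 0.70, 0.85, 0.98)
        print(f"combined sensitivity {sens:.2f}, specificity {spec:.3f}")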

  10. Estimating a test's accuracy using tailored meta-analysis-How setting-specific data may aid study selection.

    PubMed

    Willis, Brian H; Hyde, Christopher J

    2014-05-01

    To determine a plausible estimate for a test's performance in a specific setting using a new method for selecting studies. It is shown how routine data from practice may be used to define an "applicable region" for studies in receiver operating characteristic space. After qualitative appraisal, studies are selected based on the probability that their study accuracy estimates arose from parameters lying in this applicable region. Three methods for calculating these probabilities are developed and used to tailor the selection of studies for meta-analysis. The Pap test applied to the UK National Health Service (NHS) Cervical Screening Programme provides a case example. The meta-analysis for the Pap test included 68 studies, but at most 17 studies were considered applicable to the NHS. For conventional meta-analysis, the sensitivity and specificity (with 95% confidence intervals) were estimated to be 72.8% (65.8, 78.8) and 75.4% (68.1, 81.5) compared with 50.9% (35.8, 66.0) and 98.0% (95.4, 99.1) from tailored meta-analysis using a binomial method for selection. Thus, for a cervical intraepithelial neoplasia (CIN) 1 prevalence of 2.2%, the post-test probability for CIN 1 would increase from 6.2% to 36.6% between the two methods of meta-analysis. Tailored meta-analysis provides a method for augmenting study selection based on the study's applicability to a setting. As such, the summary estimate is more likely to be plausible for a setting and could improve diagnostic prediction in practice. Copyright © 2014 Elsevier Inc. All rights reserved.
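
    The post-test probabilities quoted above follow from Bayes' rule applied to the two pairs of summary estimates; a worked check:

        # Post-test probability of disease given a positive test.
        def post_test_probability(sens, spec, prev):
            tp = sens * prev
            fp = (1.0 - spec) * (1.0 - prev)
            return tp / (tp + fp)

        prev = 0.022                                      # CIN 1 prevalence
        print(post_test_probability(0.728, 0.754, prev))  # ~0.062 (conventional)
        print(post_test_probability(0.509, 0.980, prev))  # ~0.36 (tailored; the
                                                          # abstract quotes 36.6%)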

  11. Spatial occupancy models for large data sets

    USGS Publications Warehouse

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  12. RF tissue-heating near metallic implants during magnetic resonance examinations: an approach in the ac limit.

    PubMed

    Ballweg, Verena; Eibofner, Frank; Graf, Hansjorg

    2011-10-01

    State of the art to assess radiofrequency (RF) heating near implants is computer modeling of the devices and solving Maxwell's equations for the specific setup. For a set of input parameters, a fixed result is obtained. This work presents a theoretical approach in the alternating current (ac) limit, which can potentially render closed formulas for the basic behavior of tissue heating near metallic structures. Dedicated experiments were performed to support the theory. For the ac calculations, the implant was modeled as an RLC parallel circuit, with L being the secondary of a transformer and the RF transmission coil being its primary. Parameters influencing coupling, power matching, and specific absorption rate (SAR) were determined and formula relations were established. Experiments on a copper ring with a radial gap as capacitor for inductive coupling (at 1.5 T) and on needles for capacitive coupling (at 3 T) were carried out. The temperature rise in the embedding dielectric was observed as a function of its specific resistance using an infrared (IR) camera. Closed formulas containing the parameters of the setup were obtained for the frequency dependence of the transmitted power at fixed load resistance, for the calculation of the resistance for optimum power transfer, and for the calculation of the transmitted power as a function of the load resistance. Good qualitative agreement was found between the experimentally obtained heating curves and the theoretically determined power curves. Power matching was revealed to be a critical parameter, especially when the sample was resonant close to the Larmor frequency. The presented ac approach to RF heating near an implant, which mimics specific values for R, L, and C, allows for closed formulas to estimate the potential of RF energy transfer. A first reference point for worst-case determination in MR testing procedures can be obtained. Numerical approaches, necessary to determine spatially resolved heating maps, can be supported.

  13. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty in the same order of magnitude of its value, although this condition has low probability to occur. PMID:25390896

  14. Closed-Form Jensen-Renyi Divergence for Mixture of Gaussians and Applications to Group-Wise Shape Registration*

    PubMed Central

    Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C.; Beymer, David; Rangarajan, Anand

    2010-01-01

    In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions – specifically Mixture of Gaussians – estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes. PMID:20426043
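
    For reference, the standard definition underlying the measure (not spelled out in the abstract) is the Rényi-entropy analogue of the Jensen-Shannon divergence; closed forms for Gaussian mixtures are commonly obtained at α = 2, where the entropy integral reduces to pairwise Gaussian convolutions:

        \begin{align}
          H_\alpha(p) &= \frac{1}{1-\alpha}\,\log \int p(x)^{\alpha}\,dx,
          \qquad \alpha > 0,\ \alpha \neq 1,\\
          \mathrm{JR}_\alpha^{w}(p_1,\dots,p_n) &=
            H_\alpha\!\Big(\sum_{i=1}^{n} w_i\,p_i\Big)
            - \sum_{i=1}^{n} w_i\,H_\alpha(p_i),
          \qquad w_i \ge 0,\ \textstyle\sum_i w_i = 1 .
        \end{align}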

  15. Closed-form Jensen-Renyi divergence for mixture of Gaussians and applications to group-wise shape registration.

    PubMed

    Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C; Beymer, David; Rangarajan, Anand

    2009-01-01

    In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions--specifically Mixture of Gaussians--estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes.

  16. Method for passively compensating for temperature coefficient of gain in silicon photomultipliers and similar devices

    DOEpatents

    McKisson, John E.; Barbosa, Fernando

    2015-09-01

    A method for designing a completely passive bias compensation circuit to stabilize the gain of multiple-pixel avalanche photodetector devices. The method includes determining circuitry design and component values to achieve a desired precision of gain stability. The method can be used with any temperature-sensitive device having a nominally linear temperature coefficient of a voltage-dependent parameter that must be stabilized. The circuitry design includes a negative temperature coefficient resistor in thermal contact with the photomultiplier device to provide a varying resistance, and a second fixed resistor to form a voltage divider, whose values can be chosen to set the desired slope and intercept of the characteristic for a specific voltage source value. The addition of a third resistor to the divider network provides a solution set for a set of SiPM devices that requires only a single stabilized voltage source value.
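
    A toy model of the divider behavior (beta-model NTC thermistor; all component values hypothetical): because the NTC resistance falls with temperature, the tap voltage across the fixed resistor rises with temperature, and the slope can be tuned through the component values to track a device's linear bias-voltage coefficient.

        # Bias tap of a V_src -> NTC -> R_fixed -> ground divider, taken
        # across R_fixed; R(T) follows the beta model.
        import numpy as np

        def ntc_resistance(T_c, R25=10e3, beta=3950.0):
            T = T_c + 273.15
            return R25 * np.exp(beta * (1.0 / T - 1.0 / 298.15))

        def divider_bias(T_c, V_src=75.0, R_fixed=150e3):
            return V_src * R_fixed / (ntc_resistance(T_c) + R_fixed)

        for T in (10, 20, 30, 40):
            print(T, "degC ->", round(divider_bias(T), 2), "V")  # rises with T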

  17. A Range Finding Protocol to Support Design for Transcriptomics Experimentation: Examples of In-Vitro and In-Vivo Murine UV Exposure

    PubMed Central

    van Oostrom, Conny T.; Jonker, Martijs J.; de Jong, Mark; Dekker, Rob J.; Rauwerda, Han; Ensink, Wim A.; de Vries, Annemieke; Breit, Timo M.

    2014-01-01

    In transcriptomics research, design for experimentation by carefully considering biological, technological, practical and statistical aspects is very important, because the experimental design space is essentially limitless. Usually, the ranges of variable biological parameters of the design space are based on common practices and in turn on phenotypic endpoints. However, specific sub-cellular processes might only be partially reflected by phenotypic endpoints or outside the associated parameter range. Here, we provide a generic protocol for range finding in design for transcriptomics experimentation based on small-scale gene-expression experiments to help in the search for the right location in the design space by analyzing the activity of already known genes of relevant molecular mechanisms. Two examples illustrate the applicability: in-vitro UV-C exposure of mouse embryonic fibroblasts and in-vivo UV-B exposure of mouse skin. Our pragmatic approach is based on: framing a specific biological question and associated gene-set, performing a wide-ranged experiment without replication, eliminating potentially non-relevant genes, and determining the experimental ‘sweet spot’ by gene-set enrichment plus dose-response correlation analysis. Examination of many cellular processes that are related to UV response, such as DNA repair and cell-cycle arrest, revealed that basically each cellular (sub-) process is active at its own specific spot(s) in the experimental design space. Hence, the use of range finding, based on an affordable protocol like this, enables researchers to conveniently identify the ‘sweet spot’ for their cellular process of interest in an experimental design space and might have far-reaching implications for experimental standardization. PMID:24823911

  18. Development of a microbial contamination susceptibility model for private domestic groundwater sources

    NASA Astrophysics Data System (ADS)

    Hynds, Paul D.; Misstear, Bruce D.; Gill, Laurence W.

    2012-12-01

    Groundwater quality analyses were carried out on samples from 262 private sources in the Republic of Ireland during the period from April 2008 to November 2010, with microbial quality assessed by thermotolerant coliform (TTC) presence. Assessment of potential microbial contamination risk factors was undertaken at all sources, and local meteorological data were also acquired. Overall, 28.9% of wells tested positive for TTC, with risk analysis indicating that source type (i.e., borehole or hand-dug well), local bedrock type, local subsoil type, groundwater vulnerability, septic tank setback distance, and 48 h antecedent precipitation were all significantly associated with TTC presence (p < 0.05). A number of source-specific design parameters were also significantly associated with bacterial presence. Hierarchical logistic regression with stepwise parameter entry was used to develop a private well susceptibility model, with the final model exhibiting a mean predictive accuracy of >80% (TTC present or absent) when compared to an independent validation data set. Model hierarchies of primary significance are source design (20%), septic tank location (11%), hydrogeological setting (10%), and antecedent 120 h precipitation (2%). Sensitivity analysis shows that the probability of contamination is highly sensitive to septic tank setback distance, with probability increasing linearly with decreases in setback distance. Likewise, contamination probability was shown to increase with increasing antecedent precipitation. Results show that while groundwater vulnerability category is a useful indicator of aquifer susceptibility to contamination, its suitability with regard to source contamination is less clear. The final model illustrates that both localized (well-specific) and generalized (aquifer-specific) contamination mechanisms are involved in contamination events, with localized bypass mechanisms dominant. The susceptibility model developed here could be employed in the appropriate location, design, construction, and operation of private groundwater wells, thereby decreasing the contamination risk, and hence health risk, associated with these sources.
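
    The modeling step can be pictured with an ordinary logistic regression on risk factors like those reported. The sketch below uses synthetic stand-in data and plain (non-hierarchical, non-stepwise) fitting, so the variable names, effect sizes and generating process are all assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 262
    X = np.column_stack([
        rng.uniform(5, 100, n),   # septic tank setback distance (m)
        rng.gamma(2.0, 10.0, n),  # 48 h antecedent precipitation (mm)
        rng.integers(0, 2, n),    # source type: 0 = borehole, 1 = hand-dug well
    ])
    # hypothetical generating process: closer tanks, wetter weather and
    # hand-dug wells all raise the odds of TTC presence
    logit = 1.5 - 0.04 * X[:, 0] + 0.03 * X[:, 1] + 1.2 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print("validation accuracy:", accuracy_score(y_te, model.predict(X_te)))
    print("setback coefficient is negative:", model.coef_[0][0] < 0)
    ```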

  19. Optimization of injection molding process parameters for a plastic cell phone housing component

    NASA Astrophysics Data System (ADS)

    Rajalingam, Sokkalingam; Vasant, Pandian; Khe, Cheng Seong; Merican, Zulkifli; Oo, Zeya

    2016-11-01

    Injection molding is one of the most widely used processes for producing thin-walled plastic items. However, setting optimal process parameters is difficult, as poor settings can produce defects such as shrinkage in the molded part. This study aims to determine optimum injection molding process parameters that reduce shrinkage defects in a plastic cell phone housing. The machine settings in current use produced shrinkage, with length and width dimensions below the specified limits. Further experiments were therefore needed to identify process parameters that keep length and width close to target with minimal variation. The mold temperature, injection pressure and screw rotation speed were used as the process parameters in this research, and response surface methodology (RSM) was applied to find their optimal settings. The major factors influencing the responses were identified by analysis of variance (ANOVA). Verification runs showed that the shrinkage defect can be minimized with the optimal settings found by RSM.
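
    The RSM step amounts to fitting a second-order polynomial response surface to the measured defect and minimizing it within the factor bounds. The sketch below does this for two of the three factors with made-up design points and shrinkage values; none of the numbers come from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # hypothetical design points: (mold temperature in °C, injection pressure
    # in MPa) versus measured shrinkage (%)
    X = np.array([[40, 60], [40, 100], [80, 60], [80, 100], [60, 80],
                  [32, 80], [88, 80], [60, 52], [60, 108]], dtype=float)
    y = np.array([0.82, 0.61, 0.74, 0.55, 0.48, 0.77, 0.70, 0.79, 0.58])

    def design(X):
        t, p = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(t), t, p, t * p, t ** 2, p ** 2])

    beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)  # second-order fit

    def predicted_shrinkage(x):
        return float(design(np.atleast_2d(x)) @ beta)

    opt = minimize(predicted_shrinkage, x0=[60, 80], bounds=[(32, 88), (52, 108)])
    print("optimum (temp, pressure):", opt.x, "predicted shrinkage:", opt.fun)
    ```

    In the study itself, ANOVA on the fitted surface identifies which factors contribute significantly before the optimum is checked by confirmation runs.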

  20. Parameter calibration for synthesizing realistic-looking variability in offline handwriting

    NASA Astrophysics Data System (ADS)

    Cheng, Wen; Lopresti, Dan

    2011-01-01

    Motivated by the widely accepted principle that the more training data, the better a recognition system performs, we conducted experiments asking human subjects to evaluate a mixture of real English handwritten text lines and text lines distorted from existing handwriting to varying degrees. The idea of generating synthetic handwriting is based on a perturbation method by T. Varga and H. Bunke that distorts an entire text line. Our experiments had two purposes. First, we wanted to calibrate distortion parameter settings for Varga and Bunke's perturbation model. Second, we intended to compare the effects of parameter settings on different writing styles: block, cursive and mixed. From the preliminary experimental results, we determined appropriate ranges for the amplitude parameter, and found that parameter settings should be altered for different handwriting styles. With proper parameter settings, it should be possible to generate large amounts of training and testing data for building better off-line handwriting recognition systems.
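
    One underlying distortion in such a perturbation model can be pictured as a smooth periodic displacement applied along the text line. The sketch below applies a sinusoidal vertical shear to a line image; the functional form and parameter names are assumptions for illustration, not Varga and Bunke's exact formulation.

    ```python
    import numpy as np

    def sine_shear(img, amplitude, wavelength, phase=0.0):
        """Displace each pixel column vertically by a sine wave; a crude
        stand-in for one underlying distortion function of the model."""
        h, w = img.shape
        out = np.empty_like(img)
        for x in range(w):
            dy = int(round(amplitude * np.sin(2 * np.pi * x / wavelength + phase)))
            src = np.clip(np.arange(h) - dy, 0, h - 1)  # edge rows replicated
            out[:, x] = img[src, x]
        return out

    # amplitude is the kind of parameter whose acceptable range the human
    # evaluation calibrates, e.g. trying 1..5 px on a 64-px-tall line image
    line = np.full((64, 512), 255, dtype=np.uint8)
    line[30:34, :] = 0                                  # toy "text line"
    distorted = sine_shear(line, amplitude=3, wavelength=120)
    ```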

  1. Process to evaluate hematological parameters that reflex to manual differential cell counts in a pediatric institution.

    PubMed

    Guarner, Jeannette; Atuan, Maria Ana; Nix, Barbara; Mishak, Christopher; Vejjajiva, Connie; Curtis, Cheri; Park, Sunita; Mullins, Richard

    2010-01-01

    Each institution sets specific parameters obtained by automated hematology analyzers to trigger manual counts. We designed a process to decrease the number of manual differential cell counts without impacting patient care. We selected new criteria to prompt manual counts and studied the impact of these changes on 2 days of routine work and on samples from patients with newly diagnosed leukemia, sickle cell disease, and a left shift. By using fewer parameters and expanding our ranges, we decreased the number of manual counts by 20%. The parameters that prompted manual counts most frequently were the presence of blast flags and nucleated red blood cells, 2 parameters that were not changed. The parameters that accounted for the decrease in manual counts were the white blood cell count and large unstained cells. Eight of 32 patients with newly diagnosed leukemia did not show blast flags; however, other parameters triggered manual counts. In 47 patients with sickle cell disease, nucleated red cells and red cell variability prompted manual review. Bands were observed in 18% of the specimens; 4% would not have been counted manually under the new criteria, and for the latter the mean band count was 2.6%. The process we followed to evaluate hematological parameters that reflex to manual differential cell counts increased efficiency without compromising patient care in our hospital system.
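
    The reflex logic itself is just a disjunction of flag and range checks. A sketch of what such criteria might look like in code, with every flag name and cutoff assumed for illustration; each laboratory must validate its own values.

    ```python
    def needs_manual_diff(cbc):
        """Hypothetical reflex criteria in the spirit of the study."""
        return any([
            cbc.get("blast_flag", False),            # blast flags: never relaxed
            cbc.get("nrbc_per_100_wbc", 0) > 2,      # nucleated RBCs (assumed cutoff)
            not 1.5 <= cbc["wbc_10e9_per_L"] <= 30,  # widened WBC window (assumed)
            cbc.get("luc_percent", 0) > 5,           # large unstained cells (assumed)
        ])

    print(needs_manual_diff({"wbc_10e9_per_L": 7.2, "luc_percent": 1.0}))  # False
    ```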

  2. A simple model-based control for Pichia pastoris allows a more efficient heterologous protein production bioprocess.

    PubMed

    Cos, Oriol; Ramon, Ramon; Montesinos, José Luis; Valero, Francisco

    2006-09-05

    A predictive control algorithm coupled with a PI feedback controller has been successfully implemented for heterologous Rhizopus oryzae lipase production by the Pichia pastoris methanol utilization slow (Mut(s)) phenotype. This control algorithm allowed study of the effect of methanol concentration, ranging from 0.5 to 1.75 g/L, on heterologous protein production. The maximal lipolytic activity (490 UA/mL), specific yield (11,236 UA/g(biomass)), productivity (4,901 UA/L·h), and specific productivity (112 UA/g(biomass)·h) were reached at a methanol concentration of 1 g/L. These values are almost double those obtained with manual control at a similar methanol set-point. The study of the specific growth, consumption, and production rates showed different patterns depending on the methanol concentration set-point. The results show the need to implement a robust control scheme when reproducible quality and productivity are sought. It has been demonstrated that the model-based control proposed here is a very efficient, robust, and easy-to-implement strategy from an industrial application point of view. © 2006 Wiley Periodicals, Inc.
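
    The control idea can be sketched as a model-based feedforward term (feed what the culture is predicted to consume) corrected by PI feedback on the measured methanol concentration. The consumption model, gains and numbers below are placeholders, not the authors' identified values.

    ```python
    def methanol_feed_rate(c_meas, c_set, biomass, q_s, volume, dt, state,
                           kp=0.5, ti=600.0):
        """One controller step: feedforward demand plus PI feedback (sketch)."""
        f_ff = q_s * biomass * volume        # predicted methanol demand (g/h)
        error = c_set - c_meas               # set-point error (g/L)
        state["integral"] += error * dt
        f_fb = kp * error + (kp / ti) * state["integral"]
        return max(f_ff + f_fb, 0.0), state  # the feed pump cannot run backwards

    state = {"integral": 0.0}
    rate, state = methanol_feed_rate(c_meas=0.9, c_set=1.0, biomass=25.0,
                                     q_s=0.03, volume=5.0, dt=10.0, state=state)
    print(f"commanded methanol feed: {rate:.2f} g/h")
    ```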

  3. Implementation of the EM Algorithm in the Estimation of Item Parameters: The BILOG Computer Program.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Bock, R. Darrell

    This paper reviews the basic elements of the EM approach to estimating item parameters and illustrates its use with one simulated and one real data set. In order to illustrate the use of the BILOG computer program, runs for 1-, 2-, and 3-parameter models are presented for the two sets of data. First is a set of responses from 1,000 persons to five…

  4. Case studies on the physical-chemical parameters' variation during three different purification approaches destined to treat wastewaters from food industry.

    PubMed

    Ghimpusan, Marieta; Nechifor, Gheorghe; Nechifor, Aurelia-Cristina; Dima, Stefan-Ovidiu; Passeri, Piero

    2017-12-01

    The paper presents a set of three interconnected case studies on the depuration of food-processing wastewaters using aeration and ozonation and two types of hollow-fiber membrane bioreactor (MBR) approaches. A secondary, broader objective is to draw a clearer frame of the variation of physical-chemical parameters during the purification of food-industry wastewaters under different operating modes, with the aim of improving the management of the water purification process. Chemical oxygen demand (COD), pH, mixed liquor suspended solids (MLSS), total nitrogen, specific nitrogen species (NH4+, NO2-, NO3-), total phosphorus, and total surfactants were the measured parameters, and their influence was discussed in order to establish the best operating mode for achieving the purification performances. The integrated air-ozone aeration process applied in the second operating mode led to a COD decrease of up to 90%, compared with only 75% obtained in a conventional biological activated sludge process. The combined MBR-plus-ozonation process produced an additional COD decrease of 10-15% and brought the total surfactant values into compliance with the specific legislation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Visualization of Space-Time Ambiguities to be Explored by NASA GEC Mission with a Critique of Synthesized Measurements for Different GEC Mission Scenarios

    NASA Technical Reports Server (NTRS)

    Sojka, Jan J.

    2003-01-01

    The grant supported research addressing the question of how the NASA Solar Terrestrial Probes (STP) mission called Geospace Electrodynamics Connections (GEC) will resolve space-time structures and collect sufficient information to solve the coupled thermosphere-ionosphere-magnetosphere dynamics and electrodynamics. The approach adopted was to develop a model of the ionosphere-thermosphere (I-T), with high resolution in both space and time, over the altitudes relevant to GEC, especially the deep-dipping phase. This I-T model was driven by a high-resolution model of magnetospheric-ionospheric (M-I) coupling electrodynamics. Such a model contains all the key parameters to be measured by GEC instrumentation, which in turn are the parameters required to resolve present-day problems in describing the energy and momentum coupling between the ionosphere-magnetosphere and ionosphere-thermosphere. This model database has been successfully created for one geophysical condition: winter at solar maximum with disturbed geophysical conditions, specifically a substorm. Using this data set, visualizations (movies) were created to contrast the dynamics of the different measurable parameters: specifically, the rapidly varying magnetospheric electric field and auroral electron precipitation versus the slowly responding ionospheric F-region electron density and the rapidly responding E-region density.

  6. Estimation of Enthalpy of Formation of Liquid Transition Metal Alloys: A Modified Prescription Based on Macroscopic Atom Model of Cohesion

    NASA Astrophysics Data System (ADS)

    Raju, Subramanian; Saibaba, Saroja

    2016-09-01

    The enthalpy of formation ΔH_f^o is an important thermodynamic quantity, which sheds significant light on the fundamental cohesive and structural characteristics of an alloy. However, since it is difficult to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating ΔH_f^o,L of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely the electronegativity (φ^L) and the bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting the surface tension, compressibility, and molar volume of a metallic liquid with the bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for φ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of ΔH_f^o,L for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of the mixing enthalpies of liquid alloys.
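
    The core of the Macroscopic Atom (Miedema-type) prescription is a competition between an attractive electronegativity-difference term and a repulsive electron-density-mismatch term. Below is a schematic sketch of the infinite-dilution solution enthalpy, with the paper's liquid-specific φ^L and n_b^L standing in for the usual solid-state values; the constants and demo inputs are rough literature-style placeholders quoted from memory, not the optimized parameters of the study.

    ```python
    def dH_solution(phi_A, phi_B, n13_A, n13_B, V23_A, P=14.2, QP=9.4):
        """Schematic Miedema-type enthalpy of solution of A in B (kJ/mol):
        dH ~ [2 P V_A^(2/3) / (n_A^(-1/3) + n_B^(-1/3))]
             * [-(dphi)^2 + (Q/P) * (dn^(1/3))^2]
        where n13_* are cube roots of the bonding electron densities."""
        dphi = phi_A - phi_B                 # electronegativity difference
        dn13 = n13_A - n13_B                 # electron-density mismatch
        denom = 1.0 / n13_A + 1.0 / n13_B
        return (2.0 * P * V23_A / denom) * (-dphi ** 2 + QP * dn13 ** 2)

    # illustrative (not fitted) inputs for a hypothetical A-B liquid pair
    print(dH_solution(phi_A=4.2, phi_B=5.2, n13_A=1.6, n13_B=1.8, V23_A=4.9))
    ```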

  7. Evaluation of SimpleTreat 4.0: Simulations of pharmaceutical removal in wastewater treatment plant facilities.

    PubMed

    Lautz, L S; Struijs, J; Nolte, T M; Breure, A M; van der Grinten, E; van de Meent, D; van Zelm, R

    2017-02-01

    In this study, the removal of pharmaceuticals from wastewater as predicted by SimpleTreat 4.0 was evaluated. Field data obtained from the literature were used for 43 pharmaceuticals measured in 51 different activated sludge WWTPs. Based on reported influent concentrations, effluent concentrations were calculated with SimpleTreat 4.0 and compared to measured effluent concentrations. The model predicts effluent concentrations mostly within a factor of 10, using either specific WWTP parameters or SimpleTreat default parameters, while it systematically underestimates concentrations in secondary sludge. This may be caused by unexpected sorption resulting from variability in WWTP operating conditions, QSAR applicability domain mismatch, and/or background concentrations present prior to measurement. Moreover, variability in detection techniques and sampling methods can cause uncertainty in measured concentration levels. To find possible structural improvements, we also evaluated SimpleTreat 4.0 using several specific datasets with different degrees of uncertainty and variability. This evaluation verified that the most influential parameters for water effluent predictions were biodegradation and the hydraulic retention time. Results showed that model performance is highly dependent on the nature and quality, i.e., degree of uncertainty, of the data. The default values for reactor settings in SimpleTreat result in realistic predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
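
    The headline "within a factor of 10" comparison is straightforward to compute; a small helper of the kind used to summarize such evaluations (the exact definition is assumed):

    ```python
    import numpy as np

    def fraction_within_factor(predicted, measured, factor=10.0):
        """Share of predictions within a multiplicative factor of measurement."""
        ratio = np.asarray(predicted, float) / np.asarray(measured, float)
        return float(np.mean((ratio < factor) & (ratio > 1.0 / factor)))

    # made-up effluent concentrations (ug/L) for illustration
    print(fraction_within_factor([1.2, 0.4, 9.0, 0.02], [1.0, 0.5, 2.0, 0.3]))
    ```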

  8. A method to optimize the processing algorithm of a computed radiography system for chest radiography.

    PubMed

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2007-09-01

    A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.
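
    Both figures of merit are simple region statistics once the ROIs are extracted. A sketch with one common CNR definition; the paper's exact ROI recipe and any weighting between regions are not reproduced here.

    ```python
    import numpy as np

    def cnr(signal_roi, background_roi):
        """Contrast-to-noise ratio between two phantom regions (one common
        definition: mean difference over background standard deviation)."""
        return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

    def score_musica_set(region_pairs):
        """Average CNR over the lung, heart and diaphragm test regions for one
        candidate parameter set; region extraction is assumed to be done."""
        return float(np.mean([cnr(s, b) for s, b in region_pairs]))
    ```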

  9. [Estimating survival of thrushes: modeling capture-recapture probabilities].

    PubMed

    Burskiî, O V

    2011-01-01

    The stochastic modeling technique serves as a way to correctly separate the "return rate" of marked animals into a survival rate (phi) and a capture probability (p). The method can readily be used with the program MARK, freely distributed through the Internet (Cooch, White, 2009). Input data for the program consist of "capture histories" of marked animals: strings of ones and zeros indicating the presence or absence of the individual among captures (or sightings) along a set of consecutive recapture occasions (e.g., years). The probability of any history is a product of the binomial probabilities phi and p, or their complements (1 - phi) and (1 - p), for each year of observation of the individual. Assigning values to the parameters phi and p, one can predict the composition of all individual histories in the sample and assess the likelihood of the prediction. The survival parameters for different occasions and cohorts of individuals can be set either equal or different, and the recapture parameters can likewise be set in different ways. The parameters can also be constrained, according to the hypothesis being tested, in the form of a specific model. Within the specified constraints, the program searches for the parameter values that describe the observed composition of histories with maximum likelihood, and computes the parameter estimates along with confidence limits and the overall model likelihood. A set of tools is provided for testing the model goodness-of-fit under the assumptions of equal survival rates among individuals and independence of their fates. Other tools support proper selection among a possible variety of models, providing the best balance between detail and precision in describing reality. The method was applied to a 20-yr series of recapture and resighting data on 4 thrush species (genera Turdus, Zoothera) breeding in the Yenisei River floodplain within the middle taiga subzone. The capture probabilities were largely independent of fluctuations in observation effort while differing significantly between the species and sexes. The estimates of adult survival rate obtained for the Siberian migratory populations were lower than those for sedentary populations from both the tropics and intermediate latitudes with a marine climate (data from Ricklefs, 1997). Two factors, the average temperature influencing the birds during their annual movements and the climatic seasonality (temperature difference between summer and winter) in the breeding area, fit the latitudinal pattern of survival most closely (R2 = 0.90). The final survival of migrants reflects an adaptive life-history compromise: exploiting superabundant resources in the breeding area while avoiding severe winter conditions, at a survival cost.
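
    For the simplest case of constant phi and p, the likelihood of a capture history can be coded directly: survival and detection terms between the first and last sighting, and a recursive "never seen again" term afterwards. A minimal sketch of this Cormack-Jolly-Seber-style calculation (toy data, not the thrush data set; program MARK handles the general constrained models):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, histories):
        """Negative log-likelihood with constant phi and p, conditioning on
        first capture, as in a basic CJS model."""
        phi, p = 1 / (1 + np.exp(-params[0])), 1 / (1 + np.exp(-params[1]))
        ll = 0.0
        for h in histories:
            first = h.index(1)
            last = len(h) - 1 - h[::-1].index(1)
            # between first and last sighting: survived every interval,
            # detected or missed on each occasion
            for t in range(first + 1, last + 1):
                ll += np.log(phi) + np.log(p if h[t] else 1 - p)
            # after the last sighting: never seen again (recursive chi term)
            chi = 1.0
            for _ in range(len(h) - 1 - last):
                chi = 1 - phi + phi * (1 - p) * chi
            ll += np.log(chi)
        return -ll

    histories = [[1, 1, 0, 1, 0], [1, 0, 1, 0, 0], [0, 1, 1, 0, 0]]  # toy data
    res = minimize(neg_log_lik, [0.0, 0.0], args=(histories,))
    print("phi, p =", 1 / (1 + np.exp(-res.x)))
    ```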

  10. A review of ADM1 extensions, applications, and analysis: 2002-2005.

    PubMed

    Batstone, D J; Keller, J; Steyer, J P

    2006-01-01

    Since publication of the Scientific and Technical Report (STR) describing the ADM1, the model has been extensively used and analysed in both academic and practical applications. Adoption of the ADM1 in popular systems analysis tools such as the new wastewater benchmark (BSM2), and its use as a virtual industrial system, can stimulate modelling of anaerobic processes by researchers and practitioners outside the core expertise of anaerobic processes. It has been used as a default structural element that allows researchers to concentrate on new extensions such as sulfate reduction, and on new applications such as distributed-parameter modelling of biofilms. The key limitations for anaerobic modelling originally identified in the STR were: (i) regulation of products from glucose fermentation, (ii) parameter values and variability, and (iii) specific extensions. Parameter analysis has been widespread, and some detailed extensions have been developed (e.g., sulfate reduction). A verified extension describing regulation of products from glucose fermentation is still lacking, though there are promising fundamental approaches. This is a critical issue, given the current interest in renewable hydrogen production from carbohydrate-type waste. Critical analysis of the model has mainly focused on model structure reduction, hydrogen inhibition functions, and the default parameter set recommended in the STR. This default parameter set has largely been verified as a reasonable compromise, especially for wastewater sludge digestion. One criticism of note is that the ADM1 stoichiometry focuses on catabolism rather than anabolism, which means that inorganic carbon can be used unrealistically as a carbon source during some anabolic reactions. Advances and novel applications have also been made in the present issue, which focuses on the ADM1. These papers also explore a number of novel areas not originally envisaged in this review.

  11. Reducing streamflow forecast uncertainty: Application and qualitative assessment of the Upper Klamath River Basin, Oregon

    USGS Publications Warehouse

    Hay, L.E.; McCabe, G.J.; Clark, M.P.; Risley, J.C.

    2009-01-01

    The accuracy of streamflow forecasts depends on the uncertainty associated with future weather and the accuracy of the hydrologic model that is used to produce the forecasts. We present a method for streamflow forecasting where hydrologic model parameters are selected based on the climate state. Parameter sets for a hydrologic model are conditioned on an atmospheric pressure index defined using mean November through February (NDJF) 700-hectoPascal geopotential heights over northwestern North America [Pressure Index from Geopotential heights (PIG)]. The hydrologic model is applied in the Sprague River basin (SRB), a snowmelt-dominated basin located in the Upper Klamath basin in Oregon. In the SRB, the majority of streamflow occurs during March through May (MAM). Water years (WYs) 1980-2004 were divided into three groups based on their respective PIG values (high, medium, and low PIG). Low (high) PIG years tend to have higher (lower) than average MAM streamflow. Four parameter sets were calibrated for the SRB, each using a different set of WYs. The initial set used WYs 1995-2004 and the remaining three used WYs defined as high-, medium-, and low-PIG years. Two sets of March, April, and May streamflow volume forecasts were made using Ensemble Streamflow Prediction (ESP). The first set of ESP simulations used the initial parameter set. Because the PIG is defined using NDJF pressure heights, forecasts starting in March can be made using the PIG parameter set that corresponds with the year being forecasted. The second set of ESP simulations used the parameter set associated with the given PIG year. Comparison of the ESP sets indicates that more accuracy and less variability in volume forecasts may be possible when the ESP is conditioned using the PIG. This is especially true during the high-PIG years (low-flow years). © 2009 American Water Resources Association.
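
    The conditioning step reduces to classifying the forecast year's PIG value into the group whose calibrated parameter set is then used. A sketch with assumed tercile boundaries and placeholder set names:

    ```python
    import numpy as np

    def select_parameter_set(pig_value, pig_history, param_sets):
        """Pick the parameter set calibrated on the PIG group matching the
        forecast year (terciles and set names assumed for illustration)."""
        lo, hi = np.percentile(pig_history, [33.3, 66.7])
        group = ("low" if pig_value <= lo
                 else "high" if pig_value >= hi else "medium")
        return group, param_sets[group]

    param_sets = {"low": "params_low.json", "medium": "params_med.json",
                  "high": "params_high.json"}            # placeholder names
    history = np.random.default_rng(0).normal(size=25)   # stand-in PIG record
    print(select_parameter_set(-1.2, history, param_sets))
    ```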

  12. Improved artificial neural networks in prediction of malignancy of lesions in contrast-enhanced MR-mammography.

    PubMed

    Vomweg, T W; Buscema, M; Kauczor, H U; Teifke, A; Intraligi, M; Terzi, S; Heussel, C P; Achenbach, T; Rieker, O; Mayer, D; Thelen, M

    2003-09-01

    The aim of this study was to evaluate the capability of improved artificial neural networks (ANN) and additional novel training methods in distinguishing between benign and malignant breast lesions in contrast-enhanced magnetic resonance mammography (MRM). A total of 604 histologically proven cases of contrast-enhanced lesions of the female breast at MRI were analyzed. Morphological, dynamic and clinical parameters were collected and stored in a database. The data set was divided into several groups using random or experimental methods [the Training & Testing (T&T) algorithm] to train and test different ANNs. An additional novel computer program for input variable selection was applied. Sensitivity and specificity were calculated and compared with a statistical method and with an expert radiologist. After optimization of the distribution of cases among the training and testing sets by the T&T algorithm and reduction of the input variables by the input selection procedure, a highly sophisticated ANN achieved a sensitivity of 93.6% and a specificity of 91.9% in predicting malignancy of lesions within an independent prediction sample set. The best statistical method reached a sensitivity of 90.5% and a specificity of 68.9%. An expert radiologist performed better than the statistical method but worse than the ANN (sensitivity 92.1%, specificity 85.6%). Features extracted from dynamic contrast-enhanced MRM and additional clinical data can be successfully analyzed by advanced ANNs. The quality of the resulting network strongly depends on the training methods, which are improved by the use of novel training tools. The best results of an improved ANN outperform expert radiologists.
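
    The two headline figures are computed from the binary confusion counts; a small helper for reference (standard definitions):

    ```python
    import numpy as np

    def sensitivity_specificity(y_true, y_pred):
        """Sensitivity and specificity for malignant (1) / benign (0) calls."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_true == 1) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        return tp / (tp + fn), tn / (tn + fp)

    print(sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
    ```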

  13. A calibration protocol of a one-dimensional moving bed bioreactor (MBBR) dynamic model for nitrogen removal.

    PubMed

    Barry, U; Choubert, J-M; Canler, J-P; Héduit, A; Robin, L; Lessard, P

    2012-01-01

    This work proposes a procedure to correctly calibrate the parameters of a one-dimensional MBBR dynamic model for nitrification treatment. The study deals with the MBBR configuration of two reactors in series, one for carbon treatment and the other for nitrogen treatment. Because of the influence of the first reactor on the second, the approach needs a specific calibration strategy. Firstly, a comparison between measured values and simulated values obtained with default parameters was carried out. Simulated values of filtered COD, NH4-N and dissolved oxygen were underestimated and nitrates were overestimated compared with the observed data; thus, the nitrification rate and oxygen transfer into the biofilm were overestimated. Secondly, a sensitivity analysis was carried out for the parameters and for the COD fractionation. It revealed three classes of sensitive parameters: physical, diffusional and kinetic. A calibration protocol for the MBBR dynamic model was then proposed. It was successfully tested on data recorded at a pilot-scale plant, and a calibrated set of values was obtained for four parameters: the maximum biofilm thickness, the detachment rate, the maximum autotrophic growth rate and the oxygen transfer rate.
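
    A sensitivity analysis of this kind can be sketched as a one-at-a-time perturbation of each parameter; the `model(params)` interface below is an assumption for illustration, not the MBBR code's actual API.

    ```python
    def oat_sensitivity(model, base_params, output, delta=0.1):
        """Relative one-at-a-time sensitivity of one model output to each
        parameter: (dY/Y) / (dP/P) for a +10% perturbation by default."""
        y0 = model(base_params)[output]
        sens = {}
        for name, value in base_params.items():
            perturbed = dict(base_params, **{name: value * (1.0 + delta)})
            sens[name] = (model(perturbed)[output] - y0) / (y0 * delta)
        return sens

    # toy stand-in model relating effluent NH4-N to two of the four parameters
    toy = lambda p: {"NH4_eff": 5.0 / (p["mu_aut"] * p["biofilm_thickness"])}
    print(oat_sensitivity(toy, {"mu_aut": 0.9, "biofilm_thickness": 0.5}, "NH4_eff"))
    ```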

  14. Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests

    NASA Technical Reports Server (NTRS)

    Douglas, Freddie; Bourgeois, Edit Kaminsky

    2005-01-01

    The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS) a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).

  15. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    PubMed

    Sorzano, Carlos Oscars S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is typically performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, can accommodate any dosing regimen, and allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
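
    The sketch below follows the same logic with a generic one-compartment oral PK model and a crude random search in place of the genetic algorithm; the model, population priors and error level are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def one_cmt_conc(t, ka, ke, dose=100.0, V=30.0):
        """One-compartment oral absorption model (generic stand-in)."""
        return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    def expected_max_crlb(times, n_mc=200, sigma=0.1, eps=1e-4):
        """Average over population priors of the worst-parameter Cramer-Rao
        bound for a sampling schedule: the minimax-style objective."""
        worst = []
        for _ in range(n_mc):
            ka = rng.lognormal(np.log(1.0), 0.3)   # assumed priors
            ke = rng.lognormal(np.log(0.1), 0.3)
            # parameter sensitivities by central finite differences
            d_ka = (one_cmt_conc(times, ka + eps, ke)
                    - one_cmt_conc(times, ka - eps, ke)) / (2 * eps)
            d_ke = (one_cmt_conc(times, ka, ke + eps)
                    - one_cmt_conc(times, ka, ke - eps)) / (2 * eps)
            J = np.column_stack([d_ka, d_ke])
            F = J.T @ J / sigma ** 2               # Fisher information
            worst.append(np.max(np.diag(np.linalg.inv(F + 1e-9 * np.eye(2)))))
        return float(np.mean(worst))

    # random search over 4-sample schedules in [0.5, 24] h (a GA in the paper)
    schedules = [np.sort(rng.uniform(0.5, 24, 4)) for _ in range(50)]
    best = min(schedules, key=expected_max_crlb)
    print("best 4-point schedule (h):", np.round(best, 2))
    ```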

  16. Convergence properties of η → 3π decays in chiral perturbation theory

    NASA Astrophysics Data System (ADS)

    Kolesár, Marián; Novotný, Jiří

    2017-01-01

    The convergence of the decay widths and some of the Dalitz plot parameters of the decay η → 3π seems problematic in low-energy QCD. In the framework of resummed chiral perturbation theory, we explore the question of the compatibility of experimental data with a reasonable convergence of a carefully defined chiral series. By treating the uncertainties in the higher orders statistically, we numerically generate a large set of theoretical predictions, which are then confronted with experimental information. In the case of the decay widths, the experimental values can be reconstructed for a reasonable range of the free parameters, and thus no tension is observed, in spite of what some of the traditional calculations suggest. The Dalitz plot parameters a and d can be described very well too. Where the parameters b and α are concerned, we find a mild tension for the whole range of the free parameters, at less than 2σ C.L. This can be interpreted in two ways: either some of the higher-order corrections are indeed unexpectedly large, or there is a specific configuration of the remainders, which is, however, not completely improbable.
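
    The statistical treatment of the higher orders can be pictured as completing a truncated series with a random remainder and propagating it to a band of theoretical predictions; the magnitudes below are illustrative, not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # X = X_LO + X_NLO + Delta, with the unknown remainder Delta drawn from an
    # assumed distribution whose width reflects the expected size of the
    # neglected orders
    X_LO, X_NLO = 1.00, 0.25
    sigma_remainder = 0.3 * abs(X_NLO)   # naive next-order estimate (assumed)
    predictions = X_LO + X_NLO + rng.normal(0.0, sigma_remainder, 100_000)
    lo, hi = np.percentile(predictions, [2.5, 97.5])
    print(f"95% theory band for X: [{lo:.3f}, {hi:.3f}]")
    # a measurement well outside this band would signal the kind of tension
    # reported for the Dalitz plot parameters b and alpha
    ```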

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mörtsell, E., E-mail: edvard@fysik.su.se

    The bimetric generalization of general relativity has been proven able to give an accelerated background expansion consistent with observations. Apart from the energy densities coupling to one or both of the metrics, the expansion will depend on the cosmological constant contribution to each of them, as well as the three parameters describing the interaction between the two metrics. Even for fixed values of these parameters, several possible solutions, so-called branches, can exist. Different branches can give similar background expansion histories for the observable metric, but may have different properties regarding, for example, the existence of ghosts and the rate of structure growth. In this paper, we outline a method to find viable solution branches for arbitrary parameter values. We show how possible expansion histories in bimetric gravity can be inferred qualitatively, by picturing the ratio of the scale factors of the two metrics as the spatial coordinate of a particle rolling along a frictionless track. A particularly interesting example discussed is a specific set of parameter values where a cosmological dark matter background is mimicked without introducing ghost modes into the theory.

  18. Influence of simulation parameters on the speed and accuracy of Monte Carlo calculations using PENEPMA

    NASA Astrophysics Data System (ADS)

    Llovet, X.; Salvat, F.

    2018-01-01

    The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.

  19. Generative Representations for Evolving Families of Designs

    NASA Technical Reports Server (NTRS)

    Hornby, Gregory S.

    2003-01-01

    Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.
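
    The key mechanism, evaluating one genome at several values of its input parameter so that evolution favors encodings in which the parameter genuinely controls the design, fits in a few lines. A toy sketch with a deliberately trivial "design substrate" (a genome that must map a requested table height to a realized height); everything here is illustrative, not the paper's generative representation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def generate(genome, height):
        """Toy generative encoding: the design realized for one input value."""
        return genome[0] + genome[1] * height

    def family_fitness(genome, heights=(1.0, 1.5, 2.0)):
        # evaluate the same genome at several parameter values, rewarding
        # families whose realized design tracks the requested parameter
        return -sum((generate(genome, h) - h) ** 2 for h in heights)

    pop = [rng.normal(size=2) for _ in range(50)]
    for _ in range(100):    # trivial truncation-selection evolutionary loop
        pop.sort(key=family_fitness, reverse=True)
        pop = pop[:25] + [p + rng.normal(scale=0.1, size=2) for p in pop[:25]]
    print("best genome:", np.round(pop[0], 3), "fitness:", family_fitness(pop[0]))
    ```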

  20. Cosmological histories in bimetric gravity: a graphical approach

    NASA Astrophysics Data System (ADS)

    Mörtsell, E.

    2017-02-01

    The bimetric generalization of general relativity has been proven able to give an accelerated background expansion consistent with observations. Apart from the energy densities coupling to one or both of the metrics, the expansion will depend on the cosmological constant contribution to each of them, as well as the three parameters describing the interaction between the two metrics. Even for fixed values of these parameters, several possible solutions, so-called branches, can exist. Different branches can give similar background expansion histories for the observable metric, but may have different properties regarding, for example, the existence of ghosts and the rate of structure growth. In this paper, we outline a method to find viable solution branches for arbitrary parameter values. We show how possible expansion histories in bimetric gravity can be inferred qualitatively, by picturing the ratio of the scale factors of the two metrics as the spatial coordinate of a particle rolling along a frictionless track. A particularly interesting example discussed is a specific set of parameter values where a cosmological dark matter background is mimicked without introducing ghost modes into the theory.
