Sample records for parameters including applied

  1. Development of probabilistic multimedia multipathway computer codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; LePoire, D.; Gnanapragasam, E.

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.

  2. X-31 aerodynamic characteristics determined from flight data

    NASA Technical Reports Server (NTRS)

    Kokolios, Alex

    1993-01-01

    The lateral aerodynamic characteristics of the X-31 were determined at angles of attack ranging from 20 to 45 deg. Estimates of the lateral stability and control parameters were obtained by applying two parameter estimation techniques, linear regression and the extended Kalman filter, to flight test data. An attempt to apply maximum likelihood to extract parameters from the flight data was also made but failed for the reasons presented. An overview of the system identification process is given. The overview includes a listing of the more important properties of all three estimation techniques that were applied to the data. A comparison is given of results obtained from flight test data and wind tunnel data for four important lateral parameters. Finally, future research to be conducted in this area is discussed.

  3. Fractional Derivative Models for Ultrasonic Characterization of Polymer and Breast Tissue Viscoelasticity

    PubMed Central

    Coussot, Cecile; Kalyanam, Sureshkumar; Yapp, Rebecca; Insana, Michael F.

    2009-01-01

    The viscoelastic response of hydropolymers, which include glandular breast tissues, may be accurately characterized for some applications with as few as 3 rheological parameters by applying the Kelvin-Voigt fractional derivative (KVFD) modeling approach. We describe a technique for ultrasonic imaging of KVFD parameters in media undergoing unconfined, quasi-static, uniaxial compression. We analyze the KVFD parameter values in simulated and experimental echo data acquired from phantoms and show that the KVFD parameters may concisely characterize the viscoelastic properties of hydropolymers. We then interpret the KVFD parameter values for normal and cancerous breast tissues and hypothesize that this modeling approach may ultimately be applied to tumor differentiation. PMID:19406700

  4. Kalman filter estimation of human pilot-model parameters

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.

    1975-01-01

    The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
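
    The sketch below illustrates the general idea behind this record: augment the state vector with an unknown parameter and estimate both jointly with an extended Kalman filter. It uses a generic first-order system rather than the pilot-model transfer function or retarded differential-difference equations of the paper, and all numerical values are illustrative.

```python
# Minimal sketch (not the paper's pilot model): joint state/parameter estimation
# with an extended Kalman filter on x_{k+1} = x_k + dt*(-a*x_k + u_k).
# The unknown decay rate `a` is appended to the state vector and estimated online.
import numpy as np

dt, n_steps, a_true = 0.01, 2000, 1.5
rng = np.random.default_rng(0)

# simulate noisy measurements of the true system
x = 0.0
u = np.sin(0.5 * np.arange(n_steps) * dt)
meas = []
for k in range(n_steps):
    x = x + dt * (-a_true * x + u[k])
    meas.append(x + 0.01 * rng.standard_normal())

# EKF with augmented state z = [x, a]
z = np.array([0.0, 0.5])             # initial guesses
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])            # small random-walk process noise on `a`
R = np.array([[1e-4]])
H = np.array([[1.0, 0.0]])           # only x is observed

for k in range(n_steps):
    # predict
    x_hat, a_hat = z
    z_pred = np.array([x_hat + dt * (-a_hat * x_hat + u[k]), a_hat])
    F = np.array([[1.0 - dt * a_hat, -dt * x_hat],
                  [0.0, 1.0]])        # Jacobian of the transition
    P = F @ P @ F.T + Q
    # update
    y = meas[k] - H @ z_pred
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z_pred + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated a:", z[1])           # should approach a_true = 1.5
```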

  5. System and method for regulating resonant inverters

    DOEpatents

    Stevanovic, Ljubisa Dragoljub [Clifton Park, NY]; Zane, Regan Andrew [Superior, CO]

    2007-08-28

    A technique is provided for direct digital phase control of resonant inverters based on sensing of one or more parameters of the resonant inverter. The resonant inverter control system includes a switching circuit for applying power signals to the resonant inverter and a sensor for sensing one or more parameters of the resonant inverter. The one or more parameters are representative of a phase angle. The resonant inverter control system also includes a comparator for comparing the one or more parameters to a reference value and a digital controller for determining timing of the one or more parameters and for regulating operation of the switching circuit based upon the timing of the one or more parameters.

  6. Advances in parameter estimation techniques applied to flexible structures

    NASA Technical Reports Server (NTRS)

    Maben, Egbert; Zimmerman, David C.

    1994-01-01

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include the following: (1) a Wittrick-Williams based root solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes will be contrasted using the NASA Mini-Mast as the focus structure.

  7. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE PAGES

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...

    2017-01-24

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  8. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  9. More physics in the laundromat

    NASA Astrophysics Data System (ADS)

    Denny, Mark

    2010-12-01

    The physics of a washing machine spin cycle is extended to include the spin-up and spin-down phases. We show that, for realistic parameters, an adiabatic approximation applies, and thus the familiar forced, damped harmonic oscillator analysis can be applied to these phases.
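
    As a minimal illustration of the analysis invoked here, the sketch below integrates a forced, damped harmonic oscillator at a fixed drive frequency. The parameter values are placeholders, not the "realistic parameters" of the paper, and the slow frequency sweep of the spin-up/spin-down (adiabatic) treatment is omitted.

```python
# Forced, damped harmonic oscillator: m*x'' + c*x' + k*x = F0*cos(omega_d * t)
# Parameter values are illustrative, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 10.0, 5.0, 4000.0           # mass, damping, stiffness (illustrative)
F0, omega_d = 50.0, 2 * np.pi * 10    # drive amplitude and fixed drum frequency

def rhs(t, y):
    x, v = y
    return [v, (F0 * np.cos(omega_d * t) - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=1e-3)
print("late-time amplitude ~", np.abs(sol.y[0][-2000:]).max())
```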

  10. Restoration of acidic mine spoils with sewage sludge: II measurement of solids applied

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stucky, D.J.; Zoeller, A.L.

    1980-01-01

    Sewage sludge was incorporated in acidic strip mine spoils at rates equivalent to 0, 224, 336, and 448 dry metric tons (dmt)/ha and placed in pots in a greenhouse. Spoil parameters were determined 48 hours after sludge incorporation, Time Planting (P), and five months after orchardgrass (Dactylis glomerata L.) was planted, Time Harvest (H), in the pots. Parameters measured were: pH, organic matter content (OM), cation exchange capacity (CEC), electrical conductivity (EC) and yield. Values for each parameter were significantly different at the two sampling times. Correlation coefficient values were calculated for all parameters versus rates of applied sewage sludge and all parameters versus each other. Multiple regressions were performed, stepwise, for all parameters versus rates of applied sewage sludge. Equations to predict amounts of sewage sludge incorporated in spoils were derived for individual and multiple parameters. Generally, measurements made at Time P achieved the highest correlation coefficient and multiple correlation coefficient values; therefore, the authors concluded data from Time P had the greatest predictive value. The most important value measured to predict the rate of applied sewage sludge was pH, and some additional accuracy was obtained by including CEC in the equation. This experiment indicated that soil properties can be used to estimate amounts of sewage sludge solids required to reclaim acidic mine spoils and to estimate quantities incorporated.
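
    A compact sketch of the regression step described above, using synthetic numbers (not the study's measurements) to regress applied sludge rate on pH and CEC:

```python
# Illustrative multiple regression of applied sludge rate on spoil parameters
# (pH and CEC, as in the abstract's final predictive equation); data are synthetic.
import numpy as np

pH   = np.array([3.1, 4.0, 4.8, 5.5, 3.3, 4.2, 5.0, 5.8])
CEC  = np.array([8.0, 11.0, 14.0, 17.0, 8.5, 11.5, 14.5, 17.5])  # cmol/kg
rate = np.array([0, 224, 336, 448, 0, 224, 336, 448], dtype=float)  # dmt/ha

X = np.column_stack([np.ones_like(pH), pH, CEC])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
print("coefficients (intercept, pH, CEC):", coef.round(2), " R^2 =", round(r2, 3))
```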

  11. GNSS Ephemeris with Graceful Degradation and Measurement Fusion

    NASA Technical Reports Server (NTRS)

    Garrison, James Levi (Inventor); Walker, Michael Allen (Inventor)

    2015-01-01

    A method for providing an extended propagation ephemeris model for a satellite in Earth orbit, the method includes obtaining a satellite's orbital position over a first period of time, applying a least square estimation filter to determine coefficients defining osculating Keplerian orbital elements and harmonic perturbation parameters associated with a coordinate system defining an extended propagation ephemeris model that can be used to estimate the satellite's position during the first period, wherein the osculating Keplerian orbital elements include the semi-major axis of the satellite (a), eccentricity of the satellite (e), inclination of the satellite (i), right ascension of the ascending node of the satellite (Ω), true anomaly (θ*), and argument of periapsis (ω), applying the least square estimation filter to determine a dominant frequency of the true anomaly, and applying a Fourier transform to determine dominant frequencies of the harmonic perturbation parameters.

  12. Dispersion in a thermal plasma including arbitrary degeneracy and quantum recoil.

    PubMed

    Melrose, D B; Mushtaq, A

    2010-11-01

    The longitudinal response function for a thermal electron gas is calculated including two quantum effects exactly: degeneracy and the quantum recoil. The Fermi-Dirac distribution is expanded in powers of a parameter that is small in the nondegenerate limit, and the response function is evaluated in terms of the conventional plasma dispersion function to arbitrary order in this parameter. The infinite sum is performed in terms of polylogarithms in the long-wavelength and quasistatic limits, giving results that apply for arbitrary degeneracy. The results are applied to the dispersion relations for Langmuir waves and to screening, reproducing known results in the nondegenerate and completely degenerate limits, and generalizing them to arbitrary degeneracy.

  13. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  14. Method and system for determining formation porosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pittman, R.W.; Hermes, C.E.

    1977-12-27

    The invention discloses a method and/or system for measuring formation porosity from drilling response. It involves measuring a number of drilling parameters and includes determination of tooth dullness as well as determining a reference torque empirically. One of the drilling parameters is the torque applied to the drill string.

  15. On the use of published radiobiological parameters and the evaluation of NTCP models regarding lung pneumonitis in clinical breast radiotherapy.

    PubMed

    Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki

    2011-04-01

    In this study we sought to evaluate and highlight the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, a minimum and a maximum radiobiological parameter set were selected from the sets published in the literature, and a theoretical mean parameter set was computed. In order to investigate the potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and the LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models with their ranges, a large area of coincidence was revealed. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence might be enlarged, further constraining their predictive ability. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definite but should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provide a quite accurate fit of the NTCP models to the considered endpoint.
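
    For context, the Lyman-Kutcher-Burman (LKB) model mentioned above computes NTCP from a generalized equivalent uniform dose through a probit function. The sketch below implements that standard formulation; the parameter values (TD50, m, n) and the dose-volume histogram are illustrative placeholders, not the published sets evaluated in the study.

```python
# Hedged sketch of the LKB NTCP calculation: gEUD from a differential DVH,
# then NTCP = Phi((gEUD - TD50) / (m * TD50)). Parameter values are placeholders.
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses_gy, volumes, td50, m, n):
    """NTCP from a differential DVH via generalized EUD and the probit function."""
    v = np.asarray(volumes, float)
    v = v / v.sum()                                   # normalize fractional volumes
    geud = np.sum(v * np.asarray(doses_gy, float) ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)

# toy differential DVH and one illustrative parameter set
doses = [5, 10, 15, 20, 25]          # Gy
vols  = [0.40, 0.25, 0.15, 0.12, 0.08]
print("NTCP =", round(lkb_ntcp(doses, vols, td50=30.0, m=0.37, n=1.0), 4))
```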

  16. The parameters effect on the structural performance of damaged steel box beam using Taguchi method

    NASA Astrophysics Data System (ADS)

    El-taly, Boshra A.; Abd El Hameed, Mohamed F.

    2018-03-01

    In the current study, the influence of notch or opening parameters and the positions of the applied load on the structural performance of steel box beams up to failure was investigated using the finite element analysis program ANSYS. The Taguchi-based design of experiments technique was used to plan the current study. The plan included 12 steel box beams: three intact beams and nine damaged beams with an opening in the beam web. The numerical studies were conducted under varying spacing between the two concentrated point loads (location of the applied loads), notch (opening) position, and ratio between depth and width of the notch at a constant notch area. According to the Taguchi analysis, factor X (location of the applied loads) was found to be the highest contributing parameter to the variation of the ultimate load, vertical deformation, shear stresses, and compressive normal stresses.

  17. A Bayesian Method for Evaluating Trainee Proficiency. Technical Paper 323.

    ERIC Educational Resources Information Center

    Epstein, Kenneth I.; Steinheiser, Frederick H., Jr.

    A multiparameter, programmable model was developed to examine the interactive influence of certain parameters on the probability of deciding that an examinee had attained a specified degree of mastery. It was applied within the simulated context of performance testing of military trainees. These parameters included: (1) the number of assumed…

  18. Performance factors in associative learning: assessment of the sometimes competing retrieval model.

    PubMed

    Witnauer, James E; Wojick, Brittany M; Polack, Cody W; Miller, Ralph R

    2012-09-01

    Previous simulations revealed that the sometimes competing retrieval model (SOCR; Stout & Miller, Psychological Review, 114, 759-783, 2007), which assumes local error reduction, can explain many cue interaction phenomena that elude traditional associative theories based on total error reduction. Here, we applied SOCR to a new set of Pavlovian phenomena. Simulations used a single set of fixed parameters to simulate each basic effect (e.g., blocking) and, for specific experiments using different procedures, used fitted parameters discovered through hill climbing. In simulation 1, SOCR was successfully applied to basic acquisition, including the overtraining effect, which is context dependent. In simulation 2, we applied SOCR to basic extinction and renewal. SOCR anticipated these effects with both fixed parameters and best-fitting parameters, although the renewal effects were weaker than those observed in some experiments. In simulation 3a, feature-negative training was simulated, including the often observed transition from second-order conditioning to conditioned inhibition. In simulation 3b, SOCR predicted the observation that conditioned inhibition after feature-negative and differential conditioning depends on intertrial interval. In simulation 3c, SOCR successfully predicted failure of conditioned inhibition to extinguish with presentations of the inhibitor alone under most circumstances. In simulation 4, cue competition, including blocking (4a), recovery from relative validity (4b), and unblocking (4c), was simulated. In simulation 5, SOCR correctly predicted that inhibitors gain more behavioral control than do excitors when they are trained in compound. Simulation 6 demonstrated that SOCR explains the slower acquisition observed following CS-weak shock pairings.

  19. Considerations for site-specific implementation of active downforce and seeding depth technologies on row-crop planters

    USDA-ARS?s Scientific Manuscript database

    Planter technology continues to rapidly advance including row-by-row control of parameters such as applied downforce and seeding depth that permit real-time adjustment to varying field conditions. The objective of this research was to investigate the relationship of seeding depth and applied downfo...

  20. Learning Aggregation Operators for Preference Modeling

    NASA Astrophysics Data System (ADS)

    Torra, Vicenç

    Aggregation operators are useful tools for modeling preferences. Such operators include weighted means, OWA and WOWA operators, as well as some fuzzy integrals, e.g. Choquet and Sugeno integrals. To apply these operators in an effective way, their parameters have to be properly defined. In this chapter, we review some of the existing tools for learning these parameters from examples.
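
    As a minimal illustration of two of the operators named above, the sketch below implements a weighted mean and an OWA operator (whose weights attach to the sorted arguments); the WOWA and fuzzy-integral cases from the chapter are not reproduced.

```python
# Weighted mean vs OWA: in OWA the weights are applied to order statistics
# (largest value first), not to fixed criteria. Values below are illustrative.
import numpy as np

def weighted_mean(x, w):
    w = np.asarray(w, float) / np.sum(w)
    return float(np.dot(w, x))

def owa(x, w):
    w = np.asarray(w, float) / np.sum(w)
    return float(np.dot(w, np.sort(x)[::-1]))

scores  = [0.9, 0.4, 0.7]
weights = [0.5, 0.3, 0.2]
print("weighted mean:", weighted_mean(scores, weights), " OWA:", owa(scores, weights))
```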

  1. Application of municipal biosolids to dry-land wheat fields - A monitoring program near Deer Trail, Colorado (USA). A presentation for an international conference: "The Future of Agriculture: Science, Stewardship, and Sustainability", August 7-9, 2006, Sacramento, CA

    USGS Publications Warehouse

    Crock, James G.; Smith, David B.; Yager, Tracy J.B.

    2006-01-01

    Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of non-irrigated farmland and rangeland near Deer Trail, Colorado. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring ground water at part of this site. In 1999, the USGS began a more comprehensive study of the entire site to address stakeholder concerns about the chemical effects of biosolids applications. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study included biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock ground water, and stream bed sediment. Streams at the site are dry most of the year, so samples of stream bed sediment deposited after rain were used to indicate surface-water effects. This presentation will only address biosolids, soil, and crops. More information about these and the other monitoring components is presented in the literature (e.g., Yager and others, 2004a, b, c, d) and at the USGS Web site for the Deer Trail area studies at http://co.water.usgs.gov/projects/CO406/CO406.html. Priority parameters identified by the stakeholders for all monitoring components included the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity, regulated by Colorado for biosolids to be used as an agricultural soil amendment. Nitrogen and chromium also were priority parameters for ground water and sediment components. In general, the objective of each component of the study was to determine whether concentrations of priority parameters (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied. Where sufficient samples could be collected, statistical methods were used to evaluate effects. Rigorous quality assurance was included in all aspects of the study. The roles of hydrology and geology also were considered in the design, data collection, and interpretation phases of the study. Study results indicate that the chemistry of the biosolids from the Denver plant was consistent during 1999-2005, and total concentrations of regulated trace elements were consistently lower than the regulatory limits. Plutonium isotopes were not detected in the biosolids. Leach tests using deionized water to simulate natural precipitation indicate arsenic, molybdenum, and nickel were the most soluble priority parameters in the biosolids. Study results show no significant difference in concentrations of priority parameters between biosolids-applied soils and unamended soils where no biosolids were applied. However, biosolids were applied only twice during 1999-2003. The next soil sampling is not scheduled until 2010. To date, concentrations of most of the priority parameters were not much greater in the biosolids than in natural soil from the sites. Therefore, many more biosolids applications would need to occur before biosolids effects on the soil priority constituents can be quantified.
    Leach tests using deionized water to simulate precipitation indicate that molybdenum and selenium were the priority parameters that were most soluble in both biosolids-applied soil and natural or unamended soil. Study results do not indicate significant differences in concentrations of priority parameters between crops grown in biosolids-applied areas and crops grown where no biosolids were applied. However, crops were grown only twice during 1999-2003, so only two crop samples could be collected. The wheat-grain elemental data collected during 1999-2003 for both biosolids-applied areas and unamended areas are similar.

  2. Ice Shape Scaling for Aircraft in SLD Conditions

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2008-01-01

    This paper has summarized recent NASA research into scaling of SLD conditions with data from both SLD and Appendix C tests. Scaling results obtained by applying existing scaling methods for size and test-condition scaling will be reviewed. Large feather growth issues, including scaling approaches, will be discussed briefly. The material included applies only to unprotected, unswept geometries. Within the limits of the conditions tested to date, the results show that the similarity parameters needed for Appendix C scaling also can be used for SLD scaling, and no additional parameters are required. These results were based on visual comparisons of reference and scale ice shapes. Nearly all of the experimental results presented have been obtained in sea-level tunnels. The currently recommended methods to scale model size, icing limit and test conditions are described.

  3. Dictionary Indexing of Electron Channeling Patterns.

    PubMed

    Singh, Saransh; De Graef, Marc

    2017-02-01

    The dictionary-based approach to the indexing of diffraction patterns is applied to electron channeling patterns (ECPs). The main ingredients of the dictionary method are introduced, including the generalized forward projector (GFP), the relevant detector model, and a scheme to uniformly sample orientation space using the "cubochoric" representation. The GFP is used to compute an ECP "master" pattern. Derivative free optimization algorithms, including the Nelder-Mead simplex and the bound optimization by quadratic approximation are used to determine the correct detector parameters and to refine the orientation obtained from the dictionary approach. The indexing method is applied to poly-silicon and shows excellent agreement with the calibrated values. Finally, it is shown that the method results in a mean disorientation error of 1.0° with 0.5° SD for a range of detector parameters.

  4. Global parameter estimation for thermodynamic models of transcriptional regulation.

    PubMed

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

    Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
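
    The sketch below illustrates the local-versus-global contrast described above on a deliberately multi-modal least-squares problem. It is not the thermodynamic transcription model of the paper, and scipy's differential_evolution stands in for CMA-ES (which typically requires the separate cma package).

```python
# Local simplex fit vs global evolutionary search on a multi-modal fitting problem.
# Frequency fitting of a sinusoid has many local minima, so a poor starting guess
# traps the local method while the global search recovers the true parameters.
import numpy as np
from scipy.optimize import minimize, differential_evolution

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y_obs = 2.0 * np.sin(1.3 * x) + 0.1 * rng.standard_normal(x.size)

def sse(p):                      # p = (amplitude, angular frequency)
    a, w = p
    return np.sum((y_obs - a * np.sin(w * x)) ** 2)

local = minimize(sse, x0=[1.0, 3.0], method="Nelder-Mead")            # local search
glob  = differential_evolution(sse, bounds=[(0, 5), (0, 5)], seed=1)  # global search
print("local estimate: ", local.x.round(3))
print("global estimate:", glob.x.round(3))   # should be near (2.0, 1.3)
```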

  5. Role of Nuclear Morphometry in Breast Cancer and its Correlation with Cytomorphological Grading of Breast Cancer: A Study of 64 Cases.

    PubMed

    Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj

    2018-01-01

    Fine needle aspiration cytology (FNAC) is a simple, rapid, inexpensive, and reliable method of diagnosis of breast mass. Cytoprognostic grading in breast cancers is important to identify high-grade tumors. Computer-assisted image morphometric analysis has been developed to quantitate as well as standardize various grading systems. To apply nuclear morphometry on cytological aspirates of breast cancer and evaluate its correlation with cytomorphological grading with derivation of suitable cutoff values between various grades. Descriptive cross-sectional hospital-based study. This study included 64 breast cancer cases (29 of grade 1, 22 of grade 2, and 13 of grade 3). Image analysis was performed on Papanicolaou stained FNAC slides by NIS-Elements Advanced Research software (Ver 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Nuclear size parameters showed an increase in values with increasing cytological grades of carcinoma. Nuclear shape parameters were not found to be significantly different between the three grades. Among nuclear texture parameters, sum intensity, and sum brightness were found to be different between the three grades. Nuclear morphometry can be applied to augment the cytology grading of breast cancer and thus help in classifying patients into low and high-risk groups.

  6. Parameter estimation using meta-heuristics in systems biology: a comprehensive review.

    PubMed

    Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie

    2012-01-01

    This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.

  7. Age determination of female redhead ducks

    USGS Publications Warehouse

    Dane, C.W.; Johnson, D.H.

    1975-01-01

    Eighty-seven fall-collected wings from female redhead ducks (Aythya americana) were assigned to the adult or juvenile group based on 'tertial' and 'tertial covert' shape and wear. To obtain spring age-related characters from these fall-collected groupings, we considered parameters of flight feathers retained until after the first breeding season. Parameters measured included: markings on and width of greater secondary coverts, and length, weight, and diameter of primary feathers. The best age categorization was obtained with discriminant analysis based on a combination of the most accurately measured parameters. This analysis, applied to 81 wings with complete measurements, resulted in only 1 being incorrectly aged and 3 placed in a questionable category. Discriminant functions used with covert markings and the three 5th primary parameters were applied to 30 known-age juvenile, hand-reared redhead females: 28 were correctly aged, none was incorrectly aged, and only 2 were placed in the questionable category.
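
    A minimal sketch of the discriminant-analysis step, using synthetic wing measurements (stand-ins for covert width and 5th-primary length, weight, and diameter) rather than the study's data:

```python
# Linear discriminant analysis classifying wings into adult vs juvenile groups
# from several feather measurements; all numbers are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
adults = rng.normal([12.0, 160.0, 0.9, 3.2], [0.5, 4.0, 0.05, 0.1], size=(40, 4))
juvs   = rng.normal([10.5, 152.0, 0.8, 2.9], [0.5, 4.0, 0.05, 0.1], size=(40, 4))
X = np.vstack([adults, juvs])
y = np.r_[np.ones(40), np.zeros(40)]          # 1 = adult, 0 = juvenile

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
print("predicted class for one new wing:", lda.predict([[11.0, 155.0, 0.85, 3.0]]))
```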

  8. System Identification Applied to Dynamic CFD Simulation and Wind Tunnel Data

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Klein, Vladislav; Frink, Neal T.; Vicroy, Dan D.

    2011-01-01

    Demanding aerodynamic modeling requirements for military and civilian aircraft have provided impetus for researchers to improve computational and experimental techniques. Model validation is a key component for these research endeavors, so this study is an initial effort to extend conventional time history comparisons by comparing model parameter estimates and their standard errors using system identification methods. An aerodynamic model of an aircraft performing one-degree-of-freedom roll oscillatory motion about its body axes is developed. The model includes linear aerodynamics and deficiency function parameters characterizing an unsteady effect. For estimation of unknown parameters, two techniques, harmonic analysis and two-step linear regression, were applied to roll-oscillatory wind tunnel data and to computational fluid dynamics (CFD) simulated data. The model used for this study is a highly swept wing unmanned aerial combat vehicle. Differences in response prediction, parameter estimates, and standard errors are compared and discussed.

  9. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.

  10. Electronic-projecting Moire method applying CBR-technology

    NASA Astrophysics Data System (ADS)

    Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.

    2018-01-01

    An electronic-projecting method based on the Moire effect for examining surface topology is suggested. Conditions of forming Moire fringes and the dependence of their parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. Subsystem execution includes CBR technology, based on applying a case base. The approach of analysing and forming a decision for each separate local area, with consequent formation of a common topology map, is applied.

  11. Clinical Parameters and Tools for Home-Based Assessment of Parkinson's Disease: Results from a Delphi study.

    PubMed

    Ferreira, Joaquim J; Santos, Ana T; Domingos, Josefa; Matthews, Helen; Isaacs, Tom; Duffen, Joy; Al-Jawad, Ahmed; Larsen, Frank; Artur Serrano, J; Weber, Peter; Thoms, Andrea; Sollinger, Stefan; Graessner, Holm; Maetzler, Walter

    2015-01-01

    Parkinson's disease (PD) is a neurodegenerative disorder with fluctuating symptoms. To aid the development of a system to evaluate people with PD (PwP) at home (SENSE-PARK system) there was a need to define parameters and tools to be applied in the assessment of 6 domains: gait, bradykinesia/hypokinesia, tremor, sleep, balance and cognition. To identify relevant parameters and assessment tools of the 6 domains, from the perspective of PwP, caregivers and movement disorders specialists. A 2-round Delphi study was conducted to select a core of parameters and assessment tools to be applied. This process included PwP, caregivers and movement disorders specialists. Two hundred and thirty-three PwP, caregivers and physicians completed the first round questionnaire, and 50 the second. Results allowed the identification of parameters and assessment tools to be added to the SENSE-PARK system. The most consensual parameters were: Falls and Near Falls; Capability to Perform Activities of Daily Living; Interference with Activities of Daily Living; Capability to Process Tasks; and Capability to Recall and Retrieve Information. The most cited assessment strategies included Walkers; the Evaluation of Performance Doing Fine Motor Movements; Capability to Eat; Assessment of Sleep Quality; Identification of Circumstances and Triggers for Loss of Balance; and Memory Assessment. An agreed set of measuring parameters, tests, tools and devices was achieved to be part of a system to evaluate PwP at home. A pattern of different perspectives was identified for each stakeholder.

  12. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
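
    The sketch below illustrates the core idea: a skewed, non-Gaussian parameter sample is Box-Cox transformed so that a Gaussian (Fisher-matrix-like) summary becomes reasonable in the transformed space. It uses a toy lognormal sample, not the weak-lensing posteriors of the paper.

```python
# Box-Cox transformation of a skewed "posterior" sample, then a Gaussian summary
# in the transformed space; uses scipy's one-parameter Box-Cox fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
samples = rng.lognormal(mean=0.0, sigma=0.6, size=5000)   # skewed parameter sample

transformed, lam = stats.boxcox(samples)                  # lambda fit by max likelihood
print("Box-Cox lambda:", round(lam, 3))
print("skewness before/after:", round(stats.skew(samples), 2),
      round(stats.skew(transformed), 2))

# in the transformed space a Gaussian (Fisher-like) description is adequate
mu, sigma = transformed.mean(), transformed.std()
print("Gaussian summary in transformed space:", round(mu, 3), round(sigma, 3))
```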

  13. Extended Kalman Filter for Estimation of Parameters in Nonlinear State-Space Models of Biochemical Networks

    PubMed Central

    Sun, Xiaodian; Jin, Li; Xiong, Momiao

    2008-01-01

    It is system dynamics that determines the function of cells, tissues and organisms. Developing mathematical models and estimating their parameters is an essential issue for studying the dynamic behaviors of biological systems, which include metabolic networks, genetic regulatory networks and signal transduction pathways, under perturbation of external stimuli. In general, biological dynamic systems are partially observed. Therefore, a natural way to model dynamic biological systems is to employ nonlinear state-space equations. Although statistical methods for parameter estimation of linear models in biological dynamic systems have been developed intensively in recent years, the estimation of both states and parameters of nonlinear dynamic systems remains a challenging task. In this report, we apply the extended Kalman filter (EKF) to the estimation of both states and parameters of nonlinear state-space models. To evaluate the performance of the EKF for parameter estimation, we apply the EKF to a simulation dataset and two real datasets: JAK-STAT signal transduction pathway and Ras/Raf/MEK/ERK signaling transduction pathway datasets. The preliminary results show that the EKF can accurately estimate the parameters and predict states in nonlinear state-space equations for modeling dynamic biochemical networks. PMID:19018286

  14. Effects of Chitin and Sepia Ink Hybrid Hemostatic Sponge on the Blood Parameters of Mice

    PubMed Central

    Zhang, Wei; Sun, Yu-Lin; Chen, Dao-Hai

    2014-01-01

    Chitin and sepia ink hybrid hemostatic sponge (CTSH sponge), a new biomedical material, has been extensively studied for its beneficial biological properties of hemostasis and stimulation of healing. However, studies examining the safety of CTSH sponge in the blood system are lacking. This experiment aimed to examine whether CTSH sponge has a negative effect on the blood system of mice, which were treated with a dosage of CTSH sponge (135 mg/kg) through a laparotomy. CTSH sponge was implanted into the abdominal subcutaneous tissue, and a laparotomy was used for blood sampling from the abdominal aorta. Several kinds of blood parameters were detected at different time points, reflected by coagulation parameters including thrombin time (TT), prothrombin time (PT), activated partial thromboplastin time (APTT), fibrinogen (FIB) and platelet factor 4 (PF4); the anticoagulation parameter antithrombin III (AT-III); fibrinolytic parameters including plasminogen (PLG), fibrin degradation product (FDP) and D-dimer; and hemorheology parameters including blood viscosity (BV) and plasma viscosity (PV). Results showed that CTSH sponge has no significant effect on the blood parameters of mice. The data suggest that CTSH sponge can be applied in the field of biomedical materials and has the potential to be developed into clinical hemostatic agents. PMID:24727395

  15. Desert plains classification based on Geomorphometrical parameters (Case study: Aghda, Yazd)

    NASA Astrophysics Data System (ADS)

    Tazeh, mahdi; Kalantari, Saeideh

    2013-04-01

    This research focuses on plains. Several methods and classification schemes have been presented for plain classification. One natural-resource-based classification, which is mostly used in Iran, divides plains into three types: erosional pediment, denudational pediment, and aggradational piedmont. Qualitative and quantitative factors are used to differentiate these types from one another. In this study, geomorphometrical parameters effective in differentiating landforms were applied to plains. Geomorphometrical parameters are calculable and can be extracted using mathematical equations and the corresponding relations on a digital elevation model. The geomorphometrical parameters used in this study included Percent of Slope, Plan Curvature, Profile Curvature, Minimum Curvature, Maximum Curvature, Cross-sectional Curvature, Longitudinal Curvature and Gaussian Curvature. The results indicated that the most important geomorphometrical parameters for plain and desert classification include: Percent of Slope, Minimum Curvature, Profile Curvature, and Longitudinal Curvature. Key words: plain, geomorphometry, classification, biophysical, Yazd, Khezarabad.
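
    As an illustration of how such geomorphometrical parameters can be derived, the sketch below computes slope percent and a simple curvature proxy from a synthetic digital elevation model with finite differences; the study's specific curvature definitions may differ.

```python
# Slope percent and a Laplacian curvature proxy from a DEM via finite differences.
# The DEM here is synthetic (a gentle pediment surface with a superimposed bump).
import numpy as np

cell = 30.0                                     # grid spacing, meters
yy, xx = np.mgrid[0:200, 0:200]
dem = 0.02 * cell * xx + 15.0 * np.exp(-((xx - 100) ** 2 + (yy - 100) ** 2) / 800.0)

dz_dy, dz_dx = np.gradient(dem, cell)           # first derivatives (rows = y, cols = x)
slope_percent = 100.0 * np.hypot(dz_dx, dz_dy)

d2z_dy2 = np.gradient(dz_dy, cell, axis=0)      # second derivatives
d2z_dx2 = np.gradient(dz_dx, cell, axis=1)
curvature = d2z_dx2 + d2z_dy2                   # Laplacian as a simple curvature proxy

print("mean slope (%):", slope_percent.mean().round(2))
print("curvature range:", curvature.min().round(4), curvature.max().round(4))
```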

  16. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  17. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  18. Role of Nuclear Morphometry in Breast Cancer and its Correlation with Cytomorphological Grading of Breast Cancer: A Study of 64 Cases

    PubMed Central

    Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj

    2018-01-01

    Background: Fine needle aspiration cytology (FNAC) is a simple, rapid, inexpensive, and reliable method of diagnosis of breast mass. Cytoprognostic grading in breast cancers is important to identify high-grade tumors. Computer-assisted image morphometric analysis has been developed to quantitate as well as standardize various grading systems. Aims: To apply nuclear morphometry on cytological aspirates of breast cancer and evaluate its correlation with cytomorphological grading with derivation of suitable cutoff values between various grades. Settings and Designs: Descriptive cross-sectional hospital-based study. Materials and Methods: This study included 64 breast cancer cases (29 of grade 1, 22 of grade 2, and 13 of grade 3). Image analysis was performed on Papanicolaou stained FNAC slides by NIS-Elements Advanced Research software (Ver 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Results: Nuclear size parameters showed an increase in values with increasing cytological grades of carcinoma. Nuclear shape parameters were not found to be significantly different between the three grades. Among nuclear texture parameters, sum intensity, and sum brightness were found to be different between the three grades. Conclusion: Nuclear morphometry can be applied to augment the cytology grading of breast cancer and thus help in classifying patients into low and high-risk groups. PMID:29403169

  19. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomena that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
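
    The sketch below illustrates the nondeterministic approach in miniature: fixed parameters of a toy spectral model (not the shock-associated noise model of the paper) are replaced by probability distributions, sampled, and propagated to a prediction band.

```python
# Monte Carlo propagation of parameter uncertainty through a placeholder spectral
# model; distributions and the model itself are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
freq = np.logspace(2, 4, 50)                   # Hz

def toy_spl(freq, amplitude, peak_hz, width):
    """Placeholder spectrum standing in for the noise-prediction model."""
    return amplitude * np.exp(-((np.log10(freq) - np.log10(peak_hz)) / width) ** 2)

n_samples = 2000
amp  = rng.normal(100.0, 3.0, n_samples)       # uncertain parameters as distributions
peak = rng.normal(2000.0, 200.0, n_samples)
wid  = rng.uniform(0.2, 0.4, n_samples)

spectra = np.array([toy_spl(freq, a, p, w) for a, p, w in zip(amp, peak, wid)])
lo, mid, hi = np.percentile(spectra, [5, 50, 95], axis=0)
idx = np.abs(freq - 2000.0).argmin()
print("median and 90% band near 2 kHz:", mid[idx].round(1),
      (lo[idx].round(1), hi[idx].round(1)))
```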

  20. Spectroscopic Ellipsometry Studies of Ag and ZnO Thin Films and Their Interfaces for Thin Film Photovoltaics

    NASA Astrophysics Data System (ADS)

    Sainju, Deepak

    Many modern optical and electronic devices, including photovoltaic devices, consist of multilayered thin film structures. Spectroscopic ellipsometry (SE) is a critically important characterization technique for such multilayers. SE can be applied to measure key parameters related to the structural, optical, and electrical properties of the components of multilayers with high accuracy and precision. One of the key advantages of this non-destructive technique is its capability of monitoring the growth dynamics of thin films in-situ and in real time with monolayer level precision. In this dissertation, the techniques of SE have been applied to study the component layer materials and structures used as back-reflectors and as the transparent contact layers in thin film photovoltaic technologies, including hydrogenated silicon (Si:H), copper indium-gallium diselenide (CIGS), and cadmium telluride (CdTe). The component layer materials, including silver and both intrinsic and doped zinc oxide, are fabricated on crystalline silicon and glass substrates using magnetron sputtering techniques. These thin films are measured in-situ and in real time as well as ex-situ by spectroscopic ellipsometry in order to extract parameters related to the structural properties, such as bulk layer thickness and surface roughness layer thickness and their time evolution, the latter information specific to real time measurements. The index of refraction and extinction coefficient or complex dielectric function of a single unknown layer can also be obtained from the measurement versus photon energy. Applying analytical expressions for these optical properties versus photon energy, parameters that describe electronic transport, such as electrical resistivity and electron scattering time, can be extracted. The SE technique is also performed as the sample is heated in order to derive the effects of annealing on the optical properties and derived electrical transport parameters, as well as the intrinsic temperature dependence of these properties and parameters. One of the major achievements of this dissertation research is the characterization of the thickness and optical properties of the interface layer formed between the silver and zinc oxide layers in a back-reflector structure used in thin film photovoltaics. An understanding of the impact of these thin film material properties on solar cell device performance has been complemented by applying reflectance and transmittance spectroscopy as well as simulations of cell performance.

  1. Modeling multilayer x-ray reflectivity using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.

    2000-06-01

    The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires the use of initial values sufficiently close to the optimum value. This is a difficult task when the topology of the space of the variables is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying some operators (selection, crossover, mutation, etc.) on the members of the parent generation. The pressure of selection drives the population to include "good" individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
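
    As a concrete illustration of the fitting strategy described above, the following Python sketch implements a minimal real-coded genetic algorithm (tournament selection, arithmetic crossover, Gaussian mutation, elitism) and uses it to recover the parameters of a simple stand-in forward model. The function forward and the parameter bounds are assumptions chosen for illustration; they do not reproduce a physical multilayer reflectivity calculation.

        import numpy as np

        rng = np.random.default_rng(1)

        def forward(theta, x):
            # Stand-in forward model (damped oscillation), not a physical reflectivity code.
            period, damping, scale = theta
            return scale * np.exp(-damping * x) * (1.0 + np.cos(2.0 * np.pi * x / period)) / 2.0

        x = np.linspace(0.1, 2.0, 200)
        theta_true = np.array([0.25, 1.5, 1.0])
        data = forward(theta_true, x) + rng.normal(0.0, 0.01, x.size)

        bounds = np.array([[0.05, 1.0], [0.1, 5.0], [0.1, 2.0]])   # search range per parameter

        def fitness(pop):
            # Negative sum-of-squares misfit, so that larger fitness means a better fit.
            return -np.array([np.sum((forward(ind, x) - data) ** 2) for ind in pop])

        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 3))
        best = pop[np.argmax(fitness(pop))]
        for _ in range(200):
            f = fitness(pop)
            # Tournament selection: each parent is the fitter of two randomly drawn individuals.
            i, j = rng.integers(0, len(pop), size=(2, len(pop)))
            mothers = pop[np.where(f[i] > f[j], i, j)]
            i, j = rng.integers(0, len(pop), size=(2, len(pop)))
            fathers = pop[np.where(f[i] > f[j], i, j)]
            # Arithmetic crossover plus Gaussian mutation, clipped to the parameter bounds.
            w = rng.uniform(size=(len(pop), 1))
            children = w * mothers + (1.0 - w) * fathers
            children += rng.normal(0.0, 0.02, children.shape) * (bounds[:, 1] - bounds[:, 0])
            pop = np.clip(children, bounds[:, 0], bounds[:, 1])
            pop[0] = best                        # elitism: re-insert the best individual so far
            best = pop[np.argmax(fitness(pop))]

        print("best-fit parameters:", np.round(best, 3), " true:", theta_true)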

  2. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off of the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
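
    The abstract mentions bootstrapping the deformation data to obtain confidence intervals and parameter correlations. The Python sketch below shows a residual bootstrap around a nonlinear least-squares fit; the uplift function, station geometry, and parameter names are hypothetical stand-ins for a fault or magma-chamber source model, and scipy.optimize.least_squares replaces the Monte Carlo optimizers used in the study.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(2)

        def uplift(theta, x):
            # Toy "deformation" model: a Gaussian uplift pattern standing in for a source model.
            depth, strength = theta
            return strength * np.exp(-0.5 * (x / depth) ** 2)

        x = np.linspace(-20.0, 20.0, 40)            # station positions (km), hypothetical
        theta_true = np.array([6.0, 30.0])          # "depth" and "strength" of the synthetic source
        data = uplift(theta_true, x) + rng.normal(0.0, 1.0, x.size)

        def fit(obs):
            res = least_squares(lambda t: uplift(t, x) - obs, x0=[3.0, 10.0])
            return res.x

        theta_hat = fit(data)
        residuals = data - uplift(theta_hat, x)

        # Bootstrap: resample residuals, regenerate synthetic data, and refit each replicate.
        boot = np.array([fit(uplift(theta_hat, x) + rng.choice(residuals, x.size, replace=True))
                         for _ in range(500)])

        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print("estimate:", np.round(theta_hat, 2))
        print("95% bootstrap intervals:", np.round(lo, 2), "to", np.round(hi, 2))
        print("parameter correlation matrix:\n", np.round(np.corrcoef(boot.T), 2))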

  3. 47 CFR 22.409 - Developmental authorization for a new Public Mobile Service or technology.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... listing of any patents applied for, including copies of any patents issued; (4) Copies of any marketing... on each channel or frequency range and the technical parameters of such transmissions; and, (ii) Any...

  4. Applying model abstraction techniques to optimize monitoring networks for detecting subsurface contaminant transport

    USDA-ARS?s Scientific Manuscript database

    Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...

  5. Determination of adsorption parameters in numerical simulation for polymer flooding

    NASA Astrophysics Data System (ADS)

    Bao, Pengyu; Li, Aifen; Luo, Shuai; Dang, Xu

    2018-02-01

    A study on the determination of adsorption parameters for polymer flooding simulation was carried out. The study mainly includes polymer static adsorption and dynamic adsorption. The law governing how the adsorption amount changes with polymer concentration and core permeability was presented, and a one-dimensional numerical model was established in CMG with the support of a large number of experimental data. The adsorption laws from the adsorption experiments were applied to the one-dimensional numerical model to compare the influence of the two adsorption laws on the history matching results. The results show that static adsorption and dynamic adsorption obey different rules and differ greatly in adsorption amount. If the static adsorption results are applied directly to the numerical model, the difficulty of history matching increases. Therefore, dynamic adsorption tests in the porous medium are necessary before the parameter adjustment process in order to achieve an ideal history matching result.

  6. A dust spectral energy distribution model with hierarchical Bayesian inference - I. Formalism and benchmarking

    NASA Astrophysics Data System (ADS)

    Galliano, Frédéric

    2018-05-01

    This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performance of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.

  7. Industry - Military Energy Symposium, held 21-23 October 1980, San Antonio, Texas

    DTIC Science & Technology

    1980-10-21

    unless the best available technology is applied to many sources including those the size of airports . Further discussion of these issues will hopefully...particularly with naphthenic fuels. A similar weakness applies to correlations of net heat of combustion. Some additional correlating parameters...Viscosity Boost pump power Line size and weight Thermal Stability Gum, deposits, nozzle coking Specific Heat Avionics and engine oil cooling Aromatics

  8. Optimization of parameter values for complex pulse sequences by simulated annealing: application to 3D MP-RAGE imaging of the brain.

    PubMed

    Epstein, F H; Mugler, J P; Brookeman, J R

    1994-02-01

    A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
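
    The framework described above adapts simulated annealing to pulse sequence parameter optimization. The Python sketch below shows the generic annealing loop (random perturbation, Metropolis acceptance, geometric cooling) applied to a stand-in objective over a 32-element variable flip-angle train; the objective function and all numerical settings are illustrative assumptions, not a Bloch-equation signal model for MP-RAGE.

        import numpy as np

        rng = np.random.default_rng(3)

        def objective(flip_angles_deg):
            # Stand-in for the white matter-gray matter signal difference; NOT an MR signal model.
            # It rewards flip angles with moderate values and a smooth progression.
            a = np.asarray(flip_angles_deg)
            smoothness_penalty = np.sum(np.diff(a, 2) ** 2)
            return np.sum(np.sin(np.radians(a))) - 0.05 * smoothness_penalty

        n_angles = 32
        current = rng.uniform(2.0, 20.0, n_angles)           # initial flip-angle train (degrees)
        best = current.copy()
        f_current = f_best = objective(current)

        T = 1.0
        for step in range(20000):
            T *= 0.9995                                      # geometric cooling schedule
            candidate = np.clip(current + rng.normal(0.0, 0.5, n_angles), 1.0, 90.0)
            f_candidate = objective(candidate)
            # Metropolis criterion: always accept improvements, sometimes accept worse moves.
            if f_candidate > f_current or rng.random() < np.exp((f_candidate - f_current) / T):
                current, f_current = candidate, f_candidate
                if f_current > f_best:
                    best, f_best = current.copy(), f_current

        print("best objective: %.3f" % f_best)
        print("optimized flip angles (deg):", np.round(best, 1))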

  9. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon the past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions along with an accuracy of 75% and a positive predictive value of 78% in the context of northern Pakistan.

  10. SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH

    EPA Science Inventory

    While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...

  11. Stress evaluation of metallic material under steady state based on nonlinear critically refracted longitudinal wave

    NASA Astrophysics Data System (ADS)

    Mao, Hanling; Zhang, Yuhua; Mao, Hanying; Li, Xinxin; Huang, Zhenfeng

    2018-06-01

    This paper presents a study applying nonlinear ultrasonic waves to evaluate the stress state of metallic materials under steady state. A pre-stress loading method is applied to guarantee components with steady stress. Three kinds of nonlinear ultrasonic experiments based on the critically refracted longitudinal wave are conducted on components in which the wave propagates along the x, x1 and x2 directions. Experimental results indicate that the second- and third-order relative nonlinear coefficients increase monotonically with stress, and the normalized relationship is consistent with simplified dislocation models, which indicates that the experimental results are reasonable. A combined ultrasonic nonlinear parameter is proposed, and three stress evaluation models in the x direction are established based on the three ultrasonic nonlinear parameters, for which the estimation error is below 5%. Two stress detection models in the x1 and x2 directions are then built based on the combined ultrasonic nonlinear parameter, and a stress synthesis method is applied to calculate the magnitude and direction of the principal stress. The results show that the prediction error is within 5% and the angle deviation is within 1.5°. Therefore, the nonlinear ultrasonic technique based on the LCR wave can be applied to nondestructively evaluate both the magnitude and direction of the stress of metallic materials under steady state.

  12. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose a simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures the range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For the optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least square optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are simultaneously obtained based on the least-squares criterion. From the simulation and experimental results, it is shown that the parameter identification problem considered was characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool to identify the model parameters for an HMLVS, while the nonlinear least square optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge at a very stable solution and it could be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
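
    A minimal particle swarm optimizer of the kind referred to above can be written in a few lines of Python. In the sketch below, the misfit function is a standard multimodal test function (Rastrigin) standing in for the sensor-calibration cost surface, and the swarm size, inertia, and acceleration coefficients are conventional textbook values rather than those of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        def misfit(p):
            # Multimodal stand-in for the calibration cost (Rastrigin function), NOT the sensor
            # model; PSO is intended to escape the local minima such surfaces contain.
            return 10.0 * p.shape[-1] + np.sum(p ** 2 - 10.0 * np.cos(2.0 * np.pi * p), axis=-1)

        dim, n_particles = 6, 40
        pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = misfit(pos)
        gbest = pbest[np.argmin(pbest_val)].copy()

        w, c1, c2 = 0.72, 1.49, 1.49                      # common inertia/acceleration settings
        for _ in range(300):
            r1, r2 = rng.uniform(size=(2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = misfit(pos)
            improved = val < pbest_val                    # update each particle's personal best
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()    # update the swarm's global best

        print("global best misfit: %.4f at" % misfit(gbest), np.round(gbest, 3))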

  13. Correlations between chromatographic parameters and bioactivity predictors of potential herbicides.

    PubMed

    Janicka, Małgorzata

    2014-08-01

    Different liquid chromatography techniques, including reversed-phase liquid chromatography on Purosphere RP-18e, IAM.PC.DD2 and Cosmosil Cholester columns and micellar liquid chromatography with a Purosphere RP-8e column and using buffered sodium dodecyl sulfate-acetonitrile as the mobile phase, were applied to study the lipophilic properties of 15 newly synthesized phenoxyacetic and carbamic acid derivatives, which are potential herbicides. Chromatographic lipophilicity descriptors were used to extrapolate log k parameters (log kw and log km) and log k values. Partitioning lipophilicity descriptors, i.e., log P coefficients in an n-octanol-water system, were computed from the molecular structures of the tested compounds. Bioactivity descriptors, including partition coefficients in a water-plant cuticle system and water-human serum albumin and coefficients for human skin partition and permeation, were calculated in silico by ACD/ADME software using the linear solvation energy relationship of Abraham. Principal component analysis was applied to describe similarities between various chromatographic and partitioning lipophilicities. Highly significant, predictive linear relationships were found between chromatographic parameters and bioactivity descriptors.

  14. Protein labeling reactions in electrochemical microchannel flow: Numerical simulation and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Debusschere, Bert J.; Najm, Habib N.; Matta, Alain; Knio, Omar M.; Ghanem, Roger G.; Le Maître, Olivier P.

    2003-08-01

    This paper presents a model for two-dimensional electrochemical microchannel flow including the propagation of uncertainty from model parameters to the simulation results. For a detailed representation of electroosmotic and pressure-driven microchannel flow, the model considers the coupled momentum, species transport, and electrostatic field equations, including variable zeta potential. The chemistry model accounts for pH-dependent protein labeling reactions as well as detailed buffer electrochemistry in a mixed finite-rate/equilibrium formulation. Uncertainty from the model parameters and boundary conditions is propagated to the model predictions using a pseudo-spectral stochastic formulation with polynomial chaos (PC) representations for parameters and field quantities. Using a Galerkin approach, the governing equations are reformulated into equations for the coefficients in the PC expansion. The implementation of the physical model with the stochastic uncertainty propagation is applied to protein-labeling in a homogeneous buffer, as well as in two-dimensional electrochemical microchannel flow. The results for the two-dimensional channel show strong distortion of sample profiles due to ion movement and consequent buffer disturbances. The uncertainty in these results is dominated by the uncertainty in the applied voltage across the channel.
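
    The study above propagates parametric uncertainty with polynomial chaos expansions through an intrusive Galerkin reformulation. The Python sketch below illustrates the simpler non-intrusive (regression-based) variant of the same idea on a one-parameter stand-in model: the function model, the "zeta potential" interpretation, and its assumed mean and standard deviation are illustrative assumptions rather than the electrochemical microchannel model. The orthogonality of the probabilists' Hermite polynomials then gives the output mean and variance directly from the expansion coefficients.

        import numpy as np
        from numpy.polynomial import hermite_e
        from math import factorial

        rng = np.random.default_rng(5)

        def model(z):
            # Stand-in scalar output depending on one uncertain parameter, parameterized by a
            # standard normal germ z; NOT the coupled electrochemical channel model.
            zeta = 50.0 + 5.0 * z          # uncertain parameter: mean 50, std 5 (hypothetical)
            return np.tanh(zeta / 40.0) + 0.1 * zeta

        # Non-intrusive polynomial chaos: fit Hermite coefficients by least squares on samples.
        order = 4
        z = rng.standard_normal(2000)
        y = model(z)
        Phi = hermite_e.hermevander(z, order)      # columns are He_0(z) ... He_order(z)
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

        # Orthogonality of probabilists' Hermite polynomials gives the moments directly:
        # E[y] = c_0 and Var[y] = sum_{k>=1} c_k^2 * k!
        mean_pc = coef[0]
        var_pc = sum(coef[k] ** 2 * factorial(k) for k in range(1, order + 1))
        print("PC mean %.4f   MC mean %.4f" % (mean_pc, y.mean()))
        print("PC var  %.5f   MC var  %.5f" % (var_pc, y.var()))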

  15. Biodegradation modelling of a dissolved gasoline plume applying independent laboratory and field parameters

    NASA Astrophysics Data System (ADS)

    Schirmer, Mario; Molson, John W.; Frind, Emil O.; Barker, James F.

    2000-12-01

    Biodegradation of organic contaminants in groundwater is a microscale process which is often observed on scales of 100s of metres or larger. Unfortunately, there are no known equivalent parameters for characterizing the biodegradation process at the macroscale as there are, for example, in the case of hydrodynamic dispersion. Zero- and first-order degradation rates estimated at the laboratory scale by model fitting generally overpredict the rate of biodegradation when applied to the field scale because limited electron acceptor availability and microbial growth are not considered. On the other hand, field-estimated zero- and first-order rates are often not suitable for predicting plume development because they may oversimplify or neglect several key field scale processes, phenomena and characteristics. This study uses the numerical model BIO3D to link the laboratory and field scales by applying laboratory-derived Monod kinetic degradation parameters to simulate a dissolved gasoline field experiment at the Canadian Forces Base (CFB) Borden. All input parameters were derived from independent laboratory and field measurements or taken from the literature a priori to the simulations. The simulated results match the experimental results reasonably well without model calibration. A sensitivity analysis on the most uncertain input parameters showed only a minor influence on the simulation results. Furthermore, it is shown that the flow field, the amount of electron acceptor (oxygen) available, and the Monod kinetic parameters have a significant influence on the simulated results. It is concluded that laboratory-derived Monod kinetic parameters can adequately describe field scale degradation, provided all controlling factors are incorporated in the field scale model. These factors include advective-dispersive transport of multiple contaminants and electron acceptors and large-scale spatial heterogeneities.
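
    The abstract contrasts zero- and first-order rate laws with Monod kinetics in which degradation is limited by electron acceptor (oxygen) availability and microbial growth. The following Python sketch integrates a simplified batch (no transport) dual-Monod system to show that behaviour; all rate constants, yields and initial concentrations are illustrative assumptions, not the laboratory-derived CFB Borden values, and the sketch does not represent the BIO3D transport model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Simplified batch dual-Monod kinetics: substrate S, oxygen O, biomass X.
        # Parameter values are illustrative only.
        k_max, K_S, K_O = 0.5, 1.0, 0.2    # 1/day, mg/L, mg/L
        Y, gamma = 0.4, 3.0                # biomass yield and oxygen used per unit substrate

        def rates(t, y):
            S, O, X = y
            r = k_max * X * (S / (K_S + S)) * (O / (K_O + O))   # dual-Monod degradation rate
            return [-r, -gamma * r, Y * r]

        sol = solve_ivp(rates, (0.0, 30.0), y0=[10.0, 8.0, 0.1],
                        t_eval=np.linspace(0.0, 30.0, 7), rtol=1e-8)
        for t, (S, O, X) in zip(sol.t, sol.y.T):
            print(f"t = {t:4.1f} d   substrate = {S:6.3f}   oxygen = {O:6.3f}   biomass = {X:6.3f}")

    Because oxygen is consumed faster than it is replenished in this closed system, the degradation rate collapses once the electron acceptor is nearly exhausted, which is the field-scale limitation that simple zero- and first-order laboratory rates fail to capture.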

  16. Peristaltic Transport of Prandtl-Eyring Liquid in a Convectively Heated Curved Channel

    PubMed Central

    Hayat, Tasawar; Bibi, Shahida; Alsaadi, Fuad; Rafiq, Maimona

    2016-01-01

    Here peristaltic activity for flow of a Prandtl-Eyring material is modeled and analyzed for a curved geometry. Heat transfer analysis is carried out using more generalized convective conditions. The channel walls satisfy compliant wall properties. Viscous dissipation in the thermal equation is accounted for. Unlike previous studies on this topic, which assume a uniform magnetic field, a radially applied magnetic field is utilized in the problem development. Solutions for the stream function (ψ), velocity (u), and temperature (θ) for small parameter β have been derived. The salient features of the heat transfer coefficient Z and trapping are also discussed for various parameters of interest, including the magnetic field, curvature, material parameters of the fluid, Brinkman number, Biot number and compliant wall properties. The main observations of the present communication are included in the conclusion section. PMID:27304458

  17. [Information content of immunologic parameters in the evaluation of the effects of hazardous substances].

    PubMed

    Litovskaia, A V; Sadovskiĭ, V V; Vifleemskiĭ, A B

    1995-01-01

    Clinical and immunologic examination, including level 1 and level 2 tests, covered 429 staff members of chemical enterprises and 1122 workers engaged in microbiological synthesis of proteins, both groups being exposed to irritating gases and isocyanates. Using calculation of Kulbak's criterion, the studies selected informative parameters to diagnose immune disturbances caused by occupational hazards. For an integral evaluation of immune state, the authors applied a general immunologic parameter, the values of which can serve as criteria for early diagnosis of various immune disorders and for definition of risk groups among industrial workers exposed to occupational biologic and chemical hazards.

  18. Piloted Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the NASA 1A control law. Each maneuver is to be realized by the pilot applying square wave inputs to specific pilot station controls. Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  19. Sensitivity of tire response to variations in material and geometric parameters

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.

    1992-01-01

    A computational procedure is presented for evaluating the analytic sensitivity derivatives of the tire response with respect to material and geometric parameters of the tire. The tire is modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The computational procedure is applied to the Space Shuttle nose-gear tire subjected to uniform inflation pressure. Numerical results are presented showing the sensitivity of the different response quantities to variations in the material characteristics of both the cord and the rubber.

  20. Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors.

    PubMed

    Peterson, Christine; Vannucci, Marina; Karakas, Cemal; Choi, William; Ma, Lihua; Maletić-Savatić, Mirjana

    2013-10-01

    Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation.
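
    The Bayesian adaptive graphical lasso with tailored informative hyperpriors described above is too involved for a short example, but the underlying idea, estimating a sparse precision matrix whose non-zero off-diagonal entries define the network edges, can be illustrated with the standard (frequentist) graphical lasso. The Python sketch below does this on synthetic data; the data, the penalty value alpha, and the use of scikit-learn's GraphicalLasso are stand-in assumptions, not the hierarchical Bayesian model of the abstract.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(6)

        # Synthetic "metabolite" data with a known sparse conditional-dependence structure.
        n_samples, n_nodes = 60, 6
        true_precision = np.eye(n_nodes)
        true_precision[0, 1] = true_precision[1, 0] = 0.4     # metabolites 0 and 1 are linked
        true_precision[2, 3] = true_precision[3, 2] = -0.3    # metabolites 2 and 3 are linked
        cov = np.linalg.inv(true_precision)
        X = rng.multivariate_normal(np.zeros(n_nodes), cov, size=n_samples)

        # Standard graphical lasso (L1-penalized precision estimation); the Bayesian adaptive
        # version in the abstract additionally places hyperpriors on the shrinkage parameters.
        model = GraphicalLasso(alpha=0.1).fit(X)
        edges = np.abs(model.precision_) > 1e-3
        np.fill_diagonal(edges, False)
        print("inferred edges (upper triangle):")
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                if edges[i, j]:
                    print(f"  metabolite {i} -- metabolite {j}")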

  2. Unsteady hovering wake parameters identified from dynamic model tests, part 1

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1977-01-01

    The development of a 4-bladed model rotor that can be excited with a simple eccentric mechanism in progressing and regressing modes, with either harmonic or transient inputs, is reported. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.

  3. Including gauge-group parameters into the theory of interactions: an alternative mass-generating mechanism for gauge fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldaya, V.; Lopez-Ruiz, F. F.; Sanchez-Sastre, E.

    2006-11-03

    We reformulate the gauge theory of interactions by introducing the gauge group parameters into the model. The dynamics of the new 'Goldstone-like' bosons is accomplished through a non-linear {sigma}-model Lagrangian. They are minimally coupled according to a proper prescription which provides mass terms to the intermediate vector bosons without spoiling gauge invariance. The present formalism is explicitly applied to the Standard Model of electroweak interactions.

  4. Effect of Pressing Parameters on the Structure of Porous Materials Based on Cobalt and Nickel Powders

    NASA Astrophysics Data System (ADS)

    Shustov, V. S.; Rubtsov, N. M.; Alymov, M. I.; Ankudinov, A. B.; Evstratov, E. V.; Zelensky, V. A.

    2018-03-01

    Porous materials with a bulk porosity of more than 68% were synthesized by powder metallurgy methods from a cobalt-nickel mixture. The effect of the ratio of nickel and cobalt powders used in the synthesis of this porous material (including cases when either nickel or cobalt alone was applied) and the conditions of their compaction on structural parameters, such as open and closed porosities and pore size, was established.

  5. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  6. Experimental parameter identification of a multi-scale musculoskeletal model controlled by electrical stimulation: application to patients with spinal cord injury.

    PubMed

    Benoussaad, Mourad; Poignet, Philippe; Hayashibe, Mitsuhiro; Azevedo-Coste, Christine; Fattal, Charles; Guiraud, David

    2013-06-01

    We investigated the parameter identification of a multi-scale physiological model of skeletal muscle, based on Huxley's formulation. We focused particularly on the knee joint controlled by quadriceps muscles under electrical stimulation (ES) in subjects with a complete spinal cord injury. A noninvasive and in vivo identification protocol was thus applied through surface stimulation in nine subjects and through neural stimulation in one ES-implanted subject. The identification protocol included initial identification steps, which are adaptations of existing identification techniques to estimate most of the parameters of our model. Then we applied an original and safer identification protocol in dynamic conditions, which required resolution of a nonlinear programming (NLP) problem to identify the serial element stiffness of quadriceps. Each identification step and cross validation of the estimated model in dynamic condition were evaluated through a quadratic error criterion. The results highlighted good accuracy, the efficiency of the identification protocol and the ability of the estimated model to predict the subject-specific behavior of the musculoskeletal system. From the comparison of parameter values between subjects, we discussed and explored the inter-subject variability of parameters in order to select parameters that have to be identified in each patient.

  7. Atmospheric, Cloud, and Surface Parameters Retrieved from Satellite Ultra-spectral Infrared Sounder Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Yang, Ping; Schluessel, Peter; Strow, Larrabee

    2007-01-01

    An advanced retrieval algorithm with a fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. This physical inversion scheme has been developed, dealing with cloudy as well as cloud-free radiance observed with ultraspectral infrared sounders, to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A one-dimensional (1-d) variational multivariable inversion solution is used to improve an iterative background state defined by an eigenvector-regression retrieval. The solution is iterated in order to account for non-linearity in the 1-d variational solution. This retrieval algorithm is applied to the MetOp satellite Infrared Atmospheric Sounding Interferometer (IASI) launched on October 19, 2006. IASI possesses an ultra-spectral resolution of 0.25 cm(exp -1) and a spectral coverage from 645 to 2760 cm(exp -1). Preliminary retrievals of atmospheric soundings, surface properties, and cloud optical/microphysical properties with the IASI measurements are obtained and presented.

  8. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
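
    The automated parameter selection described above sweeps parameter combinations, builds estimated ground truth from the ensemble of feature-detected images, and selects the combination with the best ROC point. The Python sketch below reproduces that logic on a toy detector (box-filter smoothing followed by a threshold on a synthetic image); the detector, its two parameters, the majority-vote ground truth, and the "closest to the (0, 1) corner" selection rule are simplified assumptions standing in for the Yitzhaky-Peli procedure and the SAR/EO features used in the paper.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(7)

        # Synthetic "edge strength" image: a bright square on a noisy background.
        img = rng.normal(0.0, 0.3, (64, 64))
        img[20:40, 20:40] += 1.0

        # Toy feature detector with two parameters: smoothing width and detection threshold.
        def detect(smooth, thresh):
            k = np.ones((smooth, smooth)) / smooth ** 2
            pad = smooth // 2
            padded = np.pad(img, pad, mode="edge")
            sm = np.array([[np.sum(padded[i:i + smooth, j:j + smooth] * k)
                            for j in range(64)] for i in range(64)])
            return sm > thresh

        combos = list(product([1, 3, 5], [0.3, 0.5, 0.7]))
        maps = [detect(s, t) for s, t in combos]

        # Estimated ground truth by majority vote across all parameter combinations.
        truth = np.mean(maps, axis=0) >= 0.5

        def roc_point(mask):
            tpr = np.sum(mask & truth) / np.sum(truth)
            fpr = np.sum(mask & ~truth) / np.sum(~truth)
            return fpr, tpr

        # Choose the combination whose ROC point lies closest to the ideal corner (0, 1).
        dists = [np.hypot(f, 1.0 - t) for f, t in map(roc_point, maps)]
        best = combos[int(np.argmin(dists))]
        print("selected (smoothing, threshold):", best)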

  9. QUS devices for assessment of osteoporosis

    NASA Astrophysics Data System (ADS)

    Langton, Christian

    2002-05-01

    The acronym QUS (Quantitative Ultrasound) is now widely used to describe ultrasound assessment of osteoporosis, a disease primarily manifested by fragility fractures of the wrist and hip along with shortening of the spine. There is currently available a plethora of commercial QUS devices, measuring various anatomic sites including the heel, finger, and tibia. Largely through commercial rather than scientific drivers, the parameters reported often differ significantly from the two fundamental parameters of velocity and attenuation. Attenuation at the heel is generally reported as BUA (broadband ultrasound attenuation, the linearly regressed increase in attenuation between 200 and 600 kHz). Velocity derivatives include bone, heel, TOF, and AdV. Further, velocity and BUA parameters may be mathematically combined to provide proprietary parameters including ``stiffness'' and ``QUI.'' In terms of clinical utility, the situation is further complicated by ultrasound being inherently dependent upon ``bone quality'' (e.g., structure) in addition to ``bone quantity'' (generally expressed as BMD, bone mineral density). Hence the BMD derived WHO criteria for osteoporosis and osteopenia may not be directly applied to QUS. There is therefore an urgent need to understand the fundamental dependence of QUS parameters, to perform calibration and cross-correlation studies of QUS devices, and to define its clinical utility.

  10. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  11. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  12. Structural investigation of the Grenville Province by radar and other imaging and nonimaging sensors

    NASA Technical Reports Server (NTRS)

    Lowman, P. D., Jr.; Blodget, H. W.; Webster, W. J., Jr.; Paia, S.; Singhroy, V. H.; Slaney, V. R.

    1984-01-01

    The structural investigation of the Canadian Shield by orbital radar and LANDSAT is outlined. The area includes parts of the central metasedimentary belt and the Ontario gneiss belt, and major structures are well expressed topographically. The primary objective is to apply SIR-B data to the mapping of this key part of the Grenville orogen, specifically ductile fold structures and associated features, and igneous, metamorphic, and sedimentary rock (including glacial and recent sediments). Secondary objectives are to support the Canadian RADARSAT project by evaluating the baseline parameters of a Canadian imaging radar satellite planned for late in the decade. The baseline parameters include optimum incidence and azimuth angles. The experiment is to develop techniques for the use of multiple data sets.

  13. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisco) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis by conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
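
    The study above draws 1024 parameter samples from uniform priors using a quasi-Monte Carlo Sobol sequence. The short Python sketch below shows how such a design can be generated with scipy.stats.qmc; the four parameter names and their prior ranges are hypothetical placeholders, not the PFT-specific ranges used for CLM4.5.

        import numpy as np
        from scipy.stats import qmc

        # Hypothetical uniform prior ranges for four PFT-dependent parameters (illustrative only).
        names = ["conductance_slope", "sla_top", "leaf_cn", "frac_n_rubisco"]
        lower = np.array([4.0, 0.01, 20.0, 0.05])
        upper = np.array([9.0, 0.04, 60.0, 0.20])

        sampler = qmc.Sobol(d=len(names), scramble=True, seed=0)
        unit_samples = sampler.random_base2(m=10)          # 2**10 = 1024 points in [0, 1)^4
        samples = qmc.scale(unit_samples, lower, upper)    # map to the prior ranges

        print("sample matrix shape:", samples.shape)
        print("first sample:", dict(zip(names, np.round(samples[0], 4))))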

  14. Progress in Turbulence Detection via GNSS Occultation Data

    NASA Technical Reports Server (NTRS)

    Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.

    2012-01-01

    The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.

  15. Planning the City Logistics Terminal Location by Applying the Green p-Median Model and Type-2 Neurofuzzy Network

    PubMed Central

    Pamučar, Dragan; Vasin, Ljubislav; Atanasković, Predrag; Miličić, Milica

    2016-01-01

    The paper herein presents green p-median problem (GMP) which uses the adaptive type-2 neural network for the processing of environmental and sociological parameters including costs of logistics operators and demonstrates the influence of these parameters on planning the location for the city logistics terminal (CLT) within the discrete network. CLT shows direct effects on increment of traffic volume especially in urban areas, which further results in negative environmental effects such as air pollution and noise as well as increased number of urban populations suffering from bronchitis, asthma, and similar respiratory infections. By applying the green p-median model (GMM), negative effects on environment and health in urban areas caused by delivery vehicles may be reduced to minimum. This model creates real possibilities for making the proper investment decisions so as profitable investments may be realized in the field of transport infrastructure. The paper herein also includes testing of GMM in real conditions on four CLT locations in Belgrade City zone. PMID:27195005

  17. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.

  18. The equation of state of Song and Mason applied to fluorine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslami, H.; Boushehri, A.

    1999-03-01

    An analytical equation of state is applied to calculate the compressed and saturation thermodynamic properties of fluorine. The equation of state is that of Song and Mason. It is based on a statistical mechanical perturbation theory of hard convex bodies and is a fifth-order polynomial in the density. There exist three temperature-dependent parameters: the second virial coefficient, an effective molecular volume, and a scaling factor for the average contact pair distribution function of hard convex bodies. The temperature-dependent parameters can be calculated if the intermolecular pair potential is known. However, the equation is usable with much less input than the full intermolecular potential, since the scaling factor and effective volume are nearly universal functions when expressed in suitable reduced units. The equation of state has been applied to calculate thermodynamic parameters including the critical constants, the vapor pressure curve, the compressibility factor, the fugacity coefficient, the enthalpy, the entropy, the heat capacity at constant pressure, the ratio of heat capacities, the Joule-Thomson coefficient, the Joule-Thomson inversion curve, and the speed of sound for fluorine. The agreement with experiment is good.

  19. Applying machine learning to identify autistic adults using imitation: An exploratory study.

    PubMed

    Li, Baihua; Sharma, Arjun; Meng, James; Purushwalkam, Senthil; Gowen, Emma

    2017-01-01

    Autism spectrum condition (ASC) is primarily diagnosed by behavioural symptoms including social, sensory and motor aspects. Although stereotyped, repetitive motor movements are considered during diagnosis, quantitative measures that identify kinematic characteristics in the movement patterns of autistic individuals are poorly studied, preventing advances in understanding the aetiology of motor impairment, or whether a wider range of motor characteristics could be used for diagnosis. The aim of this study was to investigate whether data-driven machine learning based methods could be used to address some fundamental problems with regard to identifying discriminative test conditions and kinematic parameters to classify between ASC and neurotypical controls. Data was based on a previous task where 16 ASC participants and 14 age, IQ matched controls observed then imitated a series of hand movements. 40 kinematic parameters extracted from eight imitation conditions were analysed using machine learning based methods. Two optimal imitation conditions and nine most significant kinematic parameters were identified and compared with some standard attribute evaluators. To our knowledge, this is the first attempt to apply machine learning to kinematic movement parameters measured during imitation of hand movements to investigate the identification of ASC. Although based on a small sample, the work demonstrates the feasibility of applying machine learning methods to analyse high-dimensional data and suggest the potential of machine learning for identifying kinematic biomarkers that could contribute to the diagnostic classification of autism.
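
    The exploratory analysis above classifies participants from kinematic parameters using feature selection and machine learning on a small sample. The Python sketch below shows one standard way to set this up with scikit-learn: univariate feature ranking inside a cross-validated pipeline with leave-one-out evaluation. The synthetic data, the choice of nine retained features, and the linear SVM are illustrative assumptions; they are not the study's data or its exact attribute evaluators.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(11)

        # Synthetic stand-in for 30 participants x 40 kinematic parameters; a few features carry
        # a small group difference, mimicking the ASC-vs-control setting (NOT the study's data).
        X = rng.normal(0.0, 1.0, (30, 40))
        y = np.array([0] * 14 + [1] * 16)                  # 14 control-like, 16 ASC-like labels
        X[y == 1, :5] += 0.8                               # group effect in the first five features

        # Rank features, keep the most discriminative ones inside the pipeline (so selection is
        # refit in every fold), and evaluate with leave-one-out CV, suitable for small samples.
        clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=9), SVC(kernel="linear"))
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
        print("leave-one-out accuracy: %.2f" % scores.mean())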

  20. Identification of quasi-steady compressor characteristics from transient data

    NASA Technical Reports Server (NTRS)

    Nunes, K. B.; Rock, S. M.

    1984-01-01

    The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed. First was a lumped compressor rig model; second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.

  1. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.
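
    The orthogonal Fourier modeling-function method summarized above is not reproduced here, but its basic ingredient, estimating a transfer function from Fourier-transformed input/output time series, can be sketched with standard spectral estimates. In the Python example below, the second-order Butterworth "truth" system, the broadband excitation, and the H(f) = Sxy(f)/Sxx(f) estimator are assumptions chosen for illustration rather than the method of the abstract.

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(8)

        # Simulate input/output data from a known second-order system, then estimate its
        # frequency response H(f) = S_xy(f) / S_xx(f) from the time series alone.
        fs = 100.0
        t = np.arange(0.0, 60.0, 1.0 / fs)
        u = rng.standard_normal(t.size)                    # broadband excitation
        b, a = signal.butter(2, 5.0, fs=fs)                # "true" system: 2nd-order low-pass, 5 Hz
        y = signal.lfilter(b, a, u) + 0.05 * rng.standard_normal(t.size)

        f, Pxx = signal.welch(u, fs=fs, nperseg=1024)
        _, Pxy = signal.csd(u, y, fs=fs, nperseg=1024)
        H = Pxy / Pxx                                      # empirical frequency response estimate

        _, H_true = signal.freqz(b, a, worN=f, fs=fs)      # exact response for comparison
        for target in (1.0, 5.0, 10.0):
            i = np.argmin(np.abs(f - target))
            print(f"{f[i]:5.2f} Hz  |H_est| = {abs(H[i]):.3f}   |H_true| = {abs(H_true[i]):.3f}")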

  2. Techno-economical optimization of Reactive Blue 19 removal by combined electrocoagulation/coagulation process through MOPSO using RSM and ANFIS models.

    PubMed

    Taheri, M; Alavi Moghaddam, M R; Arami, M

    2013-10-15

    In this research, Response Surface Methodology (RSM) and Adaptive Neuro Fuzzy Inference System (ANFIS) models were applied for optimization of Reactive Blue 19 removal using a combined electrocoagulation/coagulation process through Multi-Objective Particle Swarm Optimization (MOPSO). By applying RSM, the effects of five independent parameters including applied current, reaction time, initial dye concentration, initial pH and dosage of Poly Aluminum Chloride were studied. According to the RSM results, all the independent parameters are equally important in dye removal efficiency. In addition, ANFIS was applied for dye removal efficiency and operating cost modeling. High R(2) values (≥85%) indicate that the predictions of the RSM and ANFIS models are acceptable for both responses. ANFIS was also used in MOPSO for finding the best techno-economical Reactive Blue 19 elimination conditions according to the RSM design. Through MOPSO and the selected ANFIS model, minimum and maximum dye removal efficiencies of 58.27% and 99.67% were obtained, respectively.

  3. The Effects of Low Intensity Laser on Clinical and Electrophysiological Parameters of Carpal Tunnel Syndrome

    PubMed Central

    Rayegani, Seyed Mansoor; Bahrami, Mohammad Hasan; Eliaspour, Darisuh; Raeissadat, Seyed Ahmad; Shafi Tabar Samakoosh, Mostafa; Sedihgipour, Leyla; Kargozar, Elham

    2013-01-01

    Introduction: Carpal Tunnel Syndrome (CTS) is the most common type of entrapment neuropathy. Conservative therapy is usually considered as the first step in the management of CTS. Low Level Laser Therapy (LLLT) is among the new physical modalities which have shown therapeutic effects in CTS. The aim of the present study was to compare the effects of applying LLLT plus splinting with splinting alone in patients with CTS. Methods: Fifty patients with mild and moderate CTS who met the inclusion criteria were included in this study. The disease was confirmed by electrodiagnostic study (EDx) and clinical findings. Patients were randomly divided into 3 groups. Group A received LLLT and splinting, group B received sham LLLT plus splinting, and group C received only splints. Group A received LLLT (50 mW, 880 nm, with a total dose of 6 J/cm2). Clinical and EDx parameters were evaluated before and after treatment (3 weeks and 2 months later). Results: Electrophysiologic parameters and clinical findings including CTS provocative tests, Symptom Severity Score (SSS), Functional Severity Score (FSS) and Visual Analogue Score (VAS) improved in all three groups at 3 weeks and 2 months after treatment. No significant differences were noticed between the three groups regarding clinical and EDx parameters. Conclusion: We found no superiority of applying low intensity laser in addition to splinting over the traditional treatment of splinting alone in patients with CTS. However, future studies investigating LLLT with parameters other than the ones used in this study may reveal different results in favor of LLLT. PMID:25606328

  4. Safety Assessment of Tretinoin Loaded Nano Emulsion and Nanostructured Lipid Carriers: A Non-invasive Trial on Human Volunteers.

    PubMed

    Nasrollahi, Saman Ahmad; Hassanzade, Hurnaz; Moradi, Azadeh; Sabouri, Mahsa; Samadi, Aniseh; Kashani, Mansour Nassiri; Firooz, Alireza

    2017-01-01

    Topical application of tretinoin (TRE) is followed by a high incidence of side effects. One method to overcome this problem is loading TRE into lipid nanoparticles. The potential safety of the nanoparticle materials has always been considered a major concern. In this in vivo study, changes in human skin biophysical parameters including hydration, TEWL, erythema, and pH were used to determine the safety of tretinoin-loaded nano emulsion (NE) and nanostructured lipid carriers (NLC). TRE-loaded NE and NLC were prepared using a high pressure homogenizer. Skin biophysical parameters were measured on the volar forearms of twenty healthy volunteers, before and after applying TRE-NE and TRE-NLC lotions. All the measurements were done using the respective probes of the MPA 580 Cutometer®. We obtained particles of nanometric size (<130 nm) with narrow distribution and optimal physical stability. Neither formulation made any statistically significant change in any of the measured skin properties. P-values were 0.646, 0.139, 0.386, 0.169 after applying TRE-NE and 0.508, 0.051, 0.139, 0.333 after applying TRE-NLC, respectively. Both formulations are reasonably safe to apply on human skin, and topical application of TRE-NE and TRE-NLC had almost similar effects on skin biophysical parameters.

  5. Error reduction and representation in stages (ERRIS) in hydrological modelling for ensemble streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.

    2016-09-01

    This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
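
    As a rough illustration of the staged idea, the sketch below applies a Stage-3-style lag-one autoregressive (AR) error update to toy streamflow forecasts; the AR coefficient, the data, and the single-stage scope are illustrative assumptions, not the ERRIS implementation (which also normalizes the data and fits a residual mixture).

      import numpy as np

      # Toy one-step-ahead forecasts and matching observations (made-up values).
      sim = np.array([10.2, 11.0, 9.5, 8.7])
      obs = np.array([9.8, 10.1, 9.9, 8.9])

      rho = 0.7                           # assumed AR(1) coefficient of the forecast error
      err = obs - sim                     # errors known at the previous time steps
      updated = sim[1:] + rho * err[:-1]  # carry the persistent part of the last error forward
      print(updated)                      # updated one-step-ahead forecasts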

  6. Model selection and Bayesian inference for high-resolution seabed reflection inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2009-02-01

    This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
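
    A minimal sketch of the parsimony idea is given below, with a polynomial fit standing in for the layered reflection-coefficient model; the Gaussian-error BIC form with the variance profiled out and the toy data are assumptions made for illustration, not the paper's implementation.

      import numpy as np

      def bic(residuals, n_params):
          # BIC = k ln(n) - 2 ln(L) under i.i.d. Gaussian errors, variance profiled out
          n = residuals.size
          sigma2 = np.mean(residuals**2)
          log_like = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
          return n_params * np.log(n) - 2.0 * log_like

      # stand-in for "number of layers": the order of a toy forward model
      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 60)
      y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 0.05, x.size)

      for order in (1, 2, 3, 4):
          coeffs = np.polyfit(x, y, order)
          resid = y - np.polyval(coeffs, x)
          print(order, round(bic(resid, order + 1), 1))  # parsimony: pick the minimum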

  7. The diagnostic capability of laser induced fluorescence in the characterization of excised breast tissues

    NASA Astrophysics Data System (ADS)

    Galmed, A. H.; Elshemey, Wael M.

    2017-08-01

    Differentiating between normal, benign and malignant excised breast tissues is one of the major worldwide challenges that need a quantitative, fast and reliable technique in order to avoid personal errors in diagnosis. Laser induced fluorescence (LIF) is a promising technique that has been applied for the characterization of biological tissues including breast tissue. Unfortunately, only a few studies have adopted a quantitative approach that can be directly applied for breast tissue characterization. This work provides a quantitative means for such characterization via introduction of several LIF characterization parameters and determining the diagnostic accuracy of each parameter in the differentiation between normal, benign and malignant excised breast tissues. Extensive analysis on 41 lyophilized breast samples using scatter diagrams, cut-off values, diagnostic indices and receiver operating characteristic (ROC) curves shows that some spectral parameters (peak height and area under the peak) are superior for characterization of normal, benign and malignant breast tissues, with high sensitivity (up to 0.91), specificity (up to 0.91) and an accuracy ranking of highly accurate.

  8. A python framework for environmental model uncertainty analysis

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Doherty, John E.

    2016-01-01

    We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses, an open-source tool that is non-intrusive, easy-to-use, computationally efficient, and scalable to highly-parameterized inverse problems. The framework implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insights into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification.
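
    The linear (FOSM) algebra that such analyses automate can be sketched in a few lines; the matrices below are invented placeholders rather than output from pyEMU or a real model.

      import numpy as np

      J = np.array([[1.0, 0.3], [0.2, 1.5], [0.8, 0.9]])   # Jacobian: d(observations)/d(parameters)
      C_prior = np.diag([1.0, 4.0])                        # prior parameter covariance
      R = np.diag([0.1, 0.1, 0.2])                         # observation noise covariance

      # Bayes-linear (Schur-complement) posterior parameter covariance
      C_post = np.linalg.inv(J.T @ np.linalg.inv(R) @ J + np.linalg.inv(C_prior))

      # forecast sensitivity vector: variance before versus after notional calibration
      s = np.array([0.5, 2.0])
      print("prior forecast variance:", s @ C_prior @ s)
      print("posterior forecast variance:", s @ C_post @ s)

    The reduction from prior to posterior forecast variance is the kind of quantity a data worth analysis ranks candidate observations by.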

  9. ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.

    2015-08-01

    The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.

  10. Distributed dual-parameter optical fiber sensor based on cascaded microfiber Fabry-Pérot interferometers

    NASA Astrophysics Data System (ADS)

    Xiang, Yang; Luo, Yiyang; Zhang, Wei; Liu, Deming; Sun, Qizhen

    2017-04-01

    We propose and demonstrate a distributed fiber sensor based on cascaded microfiber Fabry-Perot interferometers (MFPI) for simultaneous measurement of surrounding refractive index (SRI) and temperature. By employing an MFPI, fabricated by taper-drawing the center of a uniform fiber Bragg grating (FBG) on standard fiber into a section of microfiber, dual parameters including SRI and temperature can be detected by demodulating the reflection spectrum of the MFPI. Further, wavelength-division multiplexing (WDM) is applied to realize a distributed dual-parameter fiber sensor using cascaded MFPIs with different Bragg wavelengths. A prototype sensor system with 5 cascaded MFPIs is constructed to experimentally demonstrate the sensing performance.

  11. Acute effect of Vagus nerve stimulation parameters on cardiac chronotropic, inotropic, and dromotropic responses

    NASA Astrophysics Data System (ADS)

    Ojeda, David; Le Rolle, Virginie; Romero-Ugalde, Hector M.; Gallet, Clément; Bonnet, Jean-Luc; Henry, Christine; Bel, Alain; Mabo, Philippe; Carrault, Guy; Hernández, Alfredo I.

    2017-11-01

    Vagus nerve stimulation (VNS) is an established therapy for drug-resistant epilepsy and depression, and is considered as a potential therapy for other pathologies, including Heart Failure (HF) or inflammatory diseases. In the case of HF, several experimental studies on animals have shown an improvement in the cardiac function and a reverse remodeling of the cardiac cavity when VNS is applied. However, recent clinical trials have not been able to reproduce the same response in humans. One of the hypotheses to explain this lack of response is related to the way in which stimulation parameters are defined. The combined effect of VNS parameters is still poorly known, especially in the case of VNS synchronously delivered with cardiac activity. In this paper, we propose a methodology to analyze the acute cardiovascular effects of VNS parameters individually, as well as their interactive effects. A Latin hypercube sampling method was applied to design a uniform experimental plan. Data gathered from this experimental plan were used to produce a Gaussian process regression (GPR) model in order to estimate unobserved VNS sequences. Finally, a Morris screening sensitivity analysis method was applied to each obtained GPR model. Results highlight dominant effects of pulse current, pulse width and number of pulses over frequency and delay and, more importantly, the degree of interactions between these parameters on the most important acute cardiovascular responses. In particular, strong interaction effects between current and pulse width were found. Similar sensitivity profiles were observed for chronotropic, dromotropic and inotropic effects. These findings are of primary importance for the future development of closed-loop, personalized neuromodulator technologies.
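
    The sampling-plus-surrogate idea can be sketched with a Latin hypercube design and a Gaussian process fit as below; the parameter ranges, the toy response, and the kernel choice are illustrative assumptions, not the values used in the study.

      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # hypothetical ranges for five stimulation parameters
      # (current, pulse width, number of pulses, frequency, delay)
      lower = [0.1, 100.0, 1.0, 5.0, 0.0]
      upper = [2.5, 500.0, 10.0, 50.0, 100.0]

      sampler = qmc.LatinHypercube(d=5, seed=1)
      design = qmc.scale(sampler.random(n=30), lower, upper)   # uniform experimental plan

      # toy stand-in for the measured response of each stimulation sequence
      response = design[:, 0] * design[:, 1] / 500.0 + 0.1 * design[:, 2]

      gpr = GaussianProcessRegressor(kernel=RBF(length_scale=np.ptp(design, axis=0)),
                                     normalize_y=True).fit(design, response)
      mean, std = gpr.predict(design[:5], return_std=True)     # estimate unobserved sequences
      print(mean, std)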

  12. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
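
    A bare-bones PSO loop of the textbook form is sketched below on a toy objective; it is not the SAG calibration code, and the inertia and acceleration constants are common default choices rather than values from the paper.

      import numpy as np

      def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
          # minimal particle swarm optimizer: track personal and global bests
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          x = rng.uniform(lo, hi, size=(n_particles, lo.size))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_val)]
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              val = np.array([objective(p) for p in x])
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              gbest = pbest[np.argmin(pbest_val)]
          return gbest, pbest_val.min()

      # toy "distance to observables" surface standing in for the model-data comparison
      best, best_val = pso(lambda p: np.sum((p - np.array([1.0, -2.0]))**2),
                           bounds=[(-5.0, 5.0), (-5.0, 5.0)])
      print(best, best_val)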

  13. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
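
    The analysis step around which such designs are built can be sketched with a stochastic (perturbed-observation) EnKF update; the two-parameter state, observation operator, and error levels below are assumptions made for illustration, not the SEOD implementation.

      import numpy as np

      def enkf_update(ensemble, obs, obs_err_std, H, seed=0):
          # one stochastic EnKF analysis step (perturbed-observation form)
          rng = np.random.default_rng(seed)
          n_ens = ensemble.shape[1]
          Hx = H @ ensemble
          X = ensemble - ensemble.mean(axis=1, keepdims=True)
          Y = Hx - Hx.mean(axis=1, keepdims=True)
          P_xy = X @ Y.T / (n_ens - 1)                          # state-observation covariance
          P_yy = Y @ Y.T / (n_ens - 1) + np.diag(obs_err_std**2)
          K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
          obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std[:, None], Hx.shape)
          return ensemble + K @ (obs_pert - Hx)

      # toy example: update a 2-parameter ensemble with one observed linear combination
      ens = np.random.default_rng(1).normal(size=(2, 50))
      H = np.array([[1.0, 0.5]])
      updated = enkf_update(ens, obs=np.array([0.3]), obs_err_std=np.array([0.1]), H=H)
      print(updated.mean(axis=1))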

  14. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
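
    As a rough illustration of one of the compared approaches, the sketch below fits a three-parameter Hill curve by least squares and forms a bootstrap percentile interval for the potency parameter; the simulated data, parameter names, and resampling scheme are illustrative assumptions rather than ToxCast settings.

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(c, top, ac50, n):
          # three-parameter Hill curve: efficacy (top), potency (ac50), slope (n)
          return top * c**n / (ac50**n + c**n)

      rng = np.random.default_rng(0)
      conc = np.logspace(-2, 2, 12)
      resp = hill(conc, 100.0, 1.5, 1.2) + rng.normal(0.0, 5.0, conc.size)

      popt, _ = curve_fit(hill, conc, resp, p0=[100.0, 1.0, 1.0], maxfev=10000)

      # nonparametric case-resampling bootstrap for an interval on the potency (ac50)
      ac50_boot = []
      for _ in range(500):
          idx = rng.integers(0, conc.size, conc.size)
          try:
              p, _ = curve_fit(hill, conc[idx], resp[idx], p0=popt, maxfev=10000)
          except RuntimeError:
              continue  # skip non-converging resamples
          if np.isfinite(p[1]):
              ac50_boot.append(p[1])
      print(np.percentile(ac50_boot, [2.5, 97.5]))  # nominal 95% interval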

  15. Inverse models: A necessary next step in ground-water modeling

    USGS Publications Warehouse

    Poeter, E.P.; Hill, M.C.

    1997-01-01

    Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best fit parameter values; (2) quantification of the (a) quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.

  16. Fast and Versatile Fabrication of PMMA Microchip Electrophoretic Devices by Laser Engraving

    PubMed Central

    Gabriel, Ellen Flávia Moreira; Coltro, Wendell Karlos Tomazelli; Garcia, Carlos D.

    2014-01-01

    This paper describes the effects of different modes and engraving parameters on the dimensions of microfluidic structures produced in PMMA using laser engraving. The engraving modes included raster and vector, while the explored engraving parameters included power, speed, frequency, resolution, line-width and number of passes. Under the optimum conditions, the technique was applied to produce channels suitable for CE separations. Taking advantage of the possibility of cutting through the substrates, the laser was also used to define solution reservoirs (buffer, sample, and waste) and a PDMS-based decoupler. The final device was used to perform the analysis of a model mixture of phenolic compounds within 200 s with baseline resolution. PMID:25113407

  17. Development of adaptive control applied to chaotic systems

    NASA Astrophysics Data System (ADS)

    Rhode, Martin Andreas

    1997-12-01

    Continuous-time derivative control and adaptive map-based recursive feedback control techniques are used to control chaos in a variety of systems and in situations that are of practical interest. The theoretical part of the research includes a review of fundamental concepts of control theory in the context of its applications to deterministic chaotic systems, the development of a new adaptive algorithm to identify the linear system properties necessary for control, and the extension of the recursive proportional feedback control technique, RPF, to high dimensional systems. Chaos control was applied to models of a thermal pulsed combustor, electro-chemical dissolution and the hyperchaotic Rossler system. Important implications for combustion engineering were suggested by successful control of the model of the thermal pulsed combustor. The system was automatically tracked while maintaining control into regions of parameter and state space where no stable attractors exist. In a simulation of the electrochemical dissolution system, application of derivative control to stabilize a steady state, and adaptive RPF to stabilize a period one orbit, was demonstrated. The high dimensional adaptive control algorithm was applied in a simulation using the Rossler hyperchaotic system, where a period-two orbit with two unstable directions was stabilized and tracked over a wide range of a system parameter. In the experimental part, the electrochemical system was studied in parameter space by scanning the applied potential and the frequency of the rotating copper disk. The automated control algorithm is demonstrated to be effective when applied to stabilize a period-one orbit in the experiment. We show the necessity of small random perturbations applied to the system in order to both learn the dynamics and control the system at the same time. The simultaneous learning and control capability is shown to be an important part of the active feedback control.

  18. Predictive Feature Selection for Genetic Policy Search

    DTIC Science & Technology

    2014-05-22

    inverted pendulum balancing problem (Gomez and Miikkulainen, 1999), where the agent must learn a policy in a continuous state space using discrete...algorithms to automate the process of training and/or designing NNs, mitigate these drawbacks and allow NNs to be easily applied to RL domains (Sher, 2012...racing simulator and the double inverted pendulum balance environments. It also includes parameter settings for all algorithms included in the study

  19. Hydrazine Gas Generator Program. [space shuttles

    NASA Technical Reports Server (NTRS)

    Kusak, L.; Marcy, R. D.

    1975-01-01

    The design and fabrication of a flight gas generator for the space shuttle were investigated. Critical performance parameters and stability criteria were evaluated, as well as scaling laws that could be applied in designing the flight gas generator. A test program to provide the necessary design information was included. A structural design, including thermal and stress analyses, was developed, and two gas generators were fabricated based on the results. Conclusions are presented.

  20. Reliable change, sensitivity, and specificity of a multidimensional concussion assessment battery: implications for caution in clinical practice.

    PubMed

    Register-Mihalik, Johna K; Guskiewicz, Kevin M; Mihalik, Jason P; Schmidt, Julianne D; Kerr, Zachary Y; McCrea, Michael A

    2013-01-01

    To provide reliable change confidence intervals for common clinical concussion measures using a healthy sample of collegiate athletes and to apply these reliable change parameters to a sample of concussed collegiate athletes. Two independent samples were included in the study and evaluated on common clinical measures of concussion. The healthy sample included male, collegiate football student-athletes (n = 38) assessed at 2 time points. The concussed sample included college-aged student-athletes (n = 132) evaluated before and after a concussion. Outcome measures included symptom severity scores, Automated Neuropsychological Assessment Metrics throughput scores, and Sensory Organization Test composite scores. Application of the reliable change parameters suggests that a small percentage of concussed participants were impaired on each measure. We identified a low sensitivity of the entire battery (all measures combined) of 50% but high specificity of 96%. Clinicians should be trained in understanding clinical concussion measures and should be aware of evidence suggesting the multifaceted battery is more sensitive than any single measure. Clinicians should be cautioned that sensitivity to balance and neurocognitive impairments was low for each individual measure. Applying the confidence intervals to our injured sample suggests that these measures do not adequately identify postconcussion impairments when used in isolation.

  1. Study of the operating parameters of a helicon plasma discharge source using PIC-MCC simulation technique

    NASA Astrophysics Data System (ADS)

    Jaafarian, Rokhsare; Ganjovi, Alireza; Etaati, Gholamreza

    2018-01-01

    In this work, a Particle in Cell-Monte Carlo Collision simulation technique is used to study the operating parameters of a typical helicon plasma source. These parameters mainly include the gas pressure, externally applied static magnetic field, the length and radius of the helicon antenna, and the frequency and voltage amplitude of the applied RF power on the helicon antenna. It is shown that, while the strong radial gradient of the formed plasma density in the proximity of the plasma surface is substantially proportional to the energy absorption from the existing Trivelpiece-Gould (TG) modes, the observed high electron temperature in the helicon source at lower static magnetic fields is significant evidence for the energy absorption from the helicon modes. Furthermore, it is found that, at higher gas pressures, both the plasma electron density and temperature are reduced. In addition, it is shown that, at higher static magnetic fields, owing to the enhancement of the energy absorption by the plasma charged species, the plasma electron density is linearly increased. Moreover, it is seen that, for larger antenna dimensions, both the plasma electron density and temperature are reduced. Additionally, while TG modes appear for applied frequencies of 13.56 MHz and 27.12 MHz on the helicon antenna, the existence of helicon modes is demonstrated for an applied frequency of 18.12 MHz. Moreover, by increasing the applied voltage amplitude on the antenna, the generation of mono-energetic electrons is more probable.

  2. Assessment of Spatial and Temporal Variation of Surface Water Quality in Streams Affected by Coalbed Methane Development

    NASA Astrophysics Data System (ADS)

    Chitrakar, S.; Miller, S. N.; Liu, T.; Caffrey, P. A.

    2015-12-01

    Water quality data have been collected from three representative stream reaches in a coalbed methane (CBM) development area for over five years to improve the understanding of salt loading in the system. These streams are located within the Atlantic Rim development area of the Muddy Creek in south-central Wyoming. Significant development of CBM wells is ongoing in the study area. The three representative sampling stream reaches included Duck Pond Draw and Cow Creek, which receive co-produced water, and South Fork Creek and upstream Cow Creek, which do not receive co-produced water. Water samples were assayed for various parameters which included sodium, calcium, magnesium, fluoride, chlorine, nitrate, O-phosphate, sulfate, carbonate, bicarbonate, and other water quality parameters such as pH, conductivity, and TDS. Based on these water quality parameters we have investigated various hydrochemical and geochemical processes responsible for the high variability in water quality in the region. However, effective interpretation of complex databases to understand the aforementioned processes has been a challenging task due to the system's complexity. In this work we applied multivariate statistical techniques including cluster analysis (CA), principal component analysis (PCA) and discriminant analysis (DA) to analyze water quality data and identify similarities and differences among our locations. First, the CA technique was applied to group the monitoring sites based on the multivariate similarities. Second, the PCA technique was applied to identify the prevalent parameters responsible for the variation of water quality in each group. Third, the DA technique was used to identify the most important factors responsible for variation of water quality during the low flow season and high flow season. The purpose of this study is to improve the understanding of factors or sources influencing the spatial and temporal variation of water quality. The ultimate goal of this research is to develop a coupled salt-loading and GIS-based hydrological modelling tool that will be able to simulate the salt loadings under various user defined scenarios in regions undergoing CBM development. Therefore, the findings from this study will be used to formulate the predominant processes responsible for solute loading.
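
    The CA and PCA steps of such an analysis can be sketched with standard tools as below; the ion concentrations are fabricated for illustration and do not represent the Muddy Creek data.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      # toy matrix: rows = water samples, columns = parameters
      # (sodium, chloride, sulfate, conductivity); values are made up
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal([200.0, 80.0, 300.0, 1500.0], 30.0, (10, 4)),  # reach receiving co-produced water
                     rng.normal([40.0, 10.0, 60.0, 400.0], 10.0, (10, 4))])    # unaffected reach

      Z = StandardScaler().fit_transform(X)           # put parameters on a common scale
      scores = PCA(n_components=2).fit_transform(Z)   # PCA: dominant directions of variation
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)  # CA step
      print(labels)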

  3. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraint improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.

  4. Cascades and dissipation ratio in rotating magnetohydrodynamic turbulence at low magnetic Prandtl number.

    PubMed

    Plunian, Franck; Stepanov, Rodion

    2010-10-01

    A phenomenology of isotropic magnetohydrodynamic (MHD) turbulence subject to both rotation and applied magnetic field is presented. It is assumed that the triple correlation decay time is the shortest between the eddy turn-over time and the ones associated to the rotating frequency and the Alfvén wave period. For Pm=1 it leads to four kinds of piecewise spectra, depending on four parameters: injection rate of energy, magnetic diffusivity, rotation rate, and applied field. With a shell model of MHD turbulence (including rotation and applied magnetic field), spectra for Pm ≤ 1 are presented, together with the ratio between magnetic and viscous dissipations.

  5. Characterizing Weak-Link Effects in Mo/Au Transition-Edge Sensors

    NASA Technical Reports Server (NTRS)

    Smith, Stephen

    2011-01-01

    We are developing Mo/Au bilayer transition-edge sensors (TESs) for applications in X-ray astronomy. Critical current measurements on these TESs show they act as weak superconducting links exhibiting oscillatory, Fraunhofer-like, behavior with applied magnetic field. In this contribution we investigate the implications of this behavior for TES detectors, under operational bias conditions. This includes characterizing the logarithmic resistance sensitivity with temperature, alpha, and with current, beta, as a function of applied magnetic field and bias point within the resistive transition. Results show that these important device parameters exhibit similar oscillatory behavior with applied magnetic field, which in turn affects the signal responsivity, noise and energy resolution.

  6. Study of nuclear morphometry on cytology specimens of benign and malignant breast lesions: A study of 122 cases

    PubMed Central

    Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj

    2017-01-01

    Background: Breast cancer has emerged as a leading site of cancer among women in India. Fine needle aspiration cytology (FNAC) has been routinely applied in assessment of breast lesions. Cytological evaluation in breast lesions is subjective with a “gray zone” of 6.9–20%. Quantitative evaluation of nuclear size, shape, texture, and density parameters by morphometry can be of diagnostic help in breast tumor. Aims: To apply nuclear morphometry on cytological breast aspirates and assess its role in differentiating between benign and malignant breast lesions with derivation of suitable cut-off values between the two groups. Settings and Designs: The present study was a descriptive cross-sectional hospital-based study of nuclear morphometric parameters of benign and malignant cases. Materials and Methods: The study included 50 benign breast disease (BBD), 8 atypical ductal hyperplasia (ADH), and 64 carcinoma cases. Image analysis was performed on Papanicolaou-stained FNAC slides by Nikon Imaging Software (NIS)–Elements Advanced Research software (Version 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Results: Nuclear morphometry could differentiate between benign and malignant aspirates with gradually increasing nuclear size parameters from BBD to ADH to carcinoma. Cut-off values of 31.93 μm2, 6.325 μm, 5.865 μm, 7.855 μm, and 21.55 μm for mean nuclear area, equivalent diameter, minimum feret, maximum feret, and perimeter, respectively, were derived between benign and malignant cases, which could correctly classify 7 out of 8 ADH cases. Conclusion: Nuclear morphometry is a highly objective tool that could be used to supplement FNAC in differentiating benign from malignant lesions, with an important role in cases with diagnostic dilemma. PMID:28182052

  7. Study of nuclear morphometry on cytology specimens of benign and malignant breast lesions: A study of 122 cases.

    PubMed

    Kashyap, Anamika; Jain, Manjula; Shukla, Shailaja; Andley, Manoj

    2017-01-01

    Breast cancer has emerged as a leading site of cancer among women in India. Fine needle aspiration cytology (FNAC) has been routinely applied in assessment of breast lesions. Cytological evaluation in breast lesions is subjective with a "gray zone" of 6.9-20%. Quantitative evaluation of nuclear size, shape, texture, and density parameters by morphometry can be of diagnostic help in breast tumor. To apply nuclear morphometry on cytological breast aspirates and assess its role in differentiating between benign and malignant breast lesions with derivation of suitable cut-off values between the two groups. The present study was a descriptive cross-sectional hospital-based study of nuclear morphometric parameters of benign and malignant cases. The study included 50 benign breast disease (BBD), 8 atypical ductal hyperplasia (ADH), and 64 carcinoma cases. Image analysis was performed on Papanicolaou-stained FNAC slides by Nikon Imaging Software (NIS)-Elements Advanced Research software (Version 4.00). Nuclear morphometric parameters analyzed included 5 nuclear size, 2 shape, 4 texture, and 2 density parameters. Nuclear morphometry could differentiate between benign and malignant aspirates with gradually increasing nuclear size parameters from BBD to ADH to carcinoma. Cut-off values of 31.93 μm2, 6.325 μm, 5.865 μm, 7.855 μm, and 21.55 μm for mean nuclear area, equivalent diameter, minimum feret, maximum feret, and perimeter, respectively, were derived between benign and malignant cases, which could correctly classify 7 out of 8 ADH cases. Nuclear morphometry is a highly objective tool that could be used to supplement FNAC in differentiating benign from malignant lesions, with an important role in cases with diagnostic dilemma.

  8. Reconstruction of palaeoatmospheric carbon dioxide using stomatal densities of various beech plants (Fagaceae): testing and application of a mechanistic model

    NASA Astrophysics Data System (ADS)

    Grein, M.; Roth-Nebelsick, A.; Konrad, W.

    2006-12-01

    A mechanistic model (Konrad & Roth-Nebelsick, in prep.) was applied for the reconstruction of atmospheric carbon dioxide using stomatal densities and photosynthesis parameters of extant and fossil Fagaceae. The model is based on an approach which couples diffusion and the biochemical process of photosynthesis. Atmospheric CO2 is calculated on the basis of stomatal diffusion and photosynthesis parameters of the considered taxa. The considered species include the castanoid Castanea sativa, two quercoids Quercus petraea and Quercus rhenana and an intermediate species Eotrigonobalanus furcinervis. In the case of Quercus petraea, literature data were used. Stomatal data of Eotrigonobalanus furcinervis, Quercus rhenana and Castanea sativa were determined by the authors. Data of the extant Castanea sativa were collected by applying a peeling method and by counting stomatal densities on digitized images of the peels. Additionally, isotope data of leaf samples of Castanea sativa were determined to estimate the ratio of intercellular to ambient carbon dioxide. The CO2 values calculated by the model (on the basis of stomatal data and measured or estimated biochemical parameters) are in good agreement with literature data, with the exception of the Late Eocene. The results thus demonstrate that the applied approach is principally suitable for reconstructing palaeoatmospheric CO2.

  9. Research of carbon composite material for nonlinear finite element method

    NASA Astrophysics Data System (ADS)

    Kim, Jung Ho; Garg, Mohit; Kim, Ji Hoon

    2012-04-01

    Work on the absorption of collision energy in structural members has been carried out widely for various materials and cross-sections, and with ever increasing safety concerns such members are presently applied in various fields including railroad trains, aircraft and automobiles. In addition, lightening structural members has become an important subject because of exhaust gas emission control, fuel economy and energy efficiency. Applying CFRP (Carbon Fiber Reinforced Plastics) to primary structural members is difficult because its behavior changes with each design parameter, such as stacking thickness, stacking angle and moisture absorption, so data must be secured before it can be applied to primary structural members. Testing every design parameter separately to secure these data, however, costs considerable money and time, and both can be reduced if the CFRP material properties can be predicted for each design parameter. In this study, we performed coupon tests in tension, compression and shear using CFRP prepreg sheet, and carried out non-linear analyses based on the test results, the carbon longitudinal modulus and the matrix Poisson's ratio using GENOA-MCQ, which is specialized for composite analysis. We then predicted the behavior of specimens manufactured with different stacking angles and compared the predictions with experiments using the same test methods.

  10. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.

  11. Nondimensional Parameters and Equations for Nonlinear and Bifurcation Analyses of Thin Anisotropic Quasi-Shallow Shells

    NASA Technical Reports Server (NTRS)

    Nemeth, Michael P.

    2010-01-01

    A comprehensive development of nondimensional parameters and equations for nonlinear and bifurcation analyses of quasi-shallow shells, based on the Donnell-Mushtari-Vlasov theory for thin anisotropic shells, is presented. A complete set of field equations for geometrically imperfect shells is presented in terms of general lines-of-curvature coordinates. A systematic nondimensionalization of these equations is developed, several new nondimensional parameters are defined, and a comprehensive stress-function formulation is presented that includes variational principles for equilibrium and compatibility. Bifurcation analysis is applied to the nondimensional nonlinear field equations and a comprehensive set of bifurcation equations is presented. An extensive collection of tables and figures is presented that shows the effects of lamina material properties and stacking sequence on the nondimensional parameters.

  12. Biological optimization systems for enhancing photosynthetic efficiency and methods of use

    DOEpatents

    Hunt, Ryan W.; Chinnasamy, Senthil; Das, Keshav C.; de Mattos, Erico Rolim

    2012-11-06

    Biological optimization systems for enhancing photosynthetic efficiency and methods of use. Specifically, methods for enhancing photosynthetic efficiency including applying pulsed light to a photosynthetic organism, using a chlorophyll fluorescence feedback control system to determine one or more photosynthetic efficiency parameters, and adjusting one or more of the photosynthetic efficiency parameters to drive the photosynthesis by the delivery of an amount of light to optimize light absorption of the photosynthetic organism while providing enough dark time between light pulses to prevent oversaturation of the chlorophyll reaction centers are disclosed.

  13. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
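
    A minimal version of the iteration is sketched below for a single-exponential decay model; the synthetic data and starting values are illustrative, and the correction step is obtained from a least-squares solve of the linearized (Taylor-expanded) model rather than an explicit correction matrix.

      import numpy as np

      # Gauss-Newton sketch for y = a * exp(-b * t)
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 5.0, 40)
      y = 3.0 * np.exp(-0.8 * t) + rng.normal(0.0, 0.02, t.size)

      a, b = 2.0, 0.5   # crude initial nominal estimates (e.g. by eye or a log-linear fit)
      for _ in range(20):
          model = a * np.exp(-b * t)
          J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])  # d(model)/d(a, b)
          delta, *_ = np.linalg.lstsq(J, y - model, rcond=None)           # correction step
          a, b = a + delta[0], b + delta[1]
          if np.max(np.abs(delta)) < 1e-8:                                # stopping criterion
              break
      print(a, b)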

  14. Solid State Joining of Magnesium to Steel

    NASA Astrophysics Data System (ADS)

    Jana, Saumyadeep; Hovanski, Yuri; Pilli, Siva P.; Field, David P.; Yu, Hao; Pan, Tsung-Yu; Santella, M. L.

    Friction stir welding and ultrasonic welding techniques were applied to join automotive magnesium alloys to steel sheet. The effect of tooling and process parameters on the post-weld microstructure, texture and mechanical properties was investigated. Static and dynamic loading were utilized to investigate the joint strength of both cast and wrought magnesium alloys including their susceptibility and degradation under corrosive media. The conditions required to produce joint strengths in excess of 75% of the base metal strength were determined, and the effects of surface coatings, tooling and weld parameters on weld properties are presented.

  15. Aggregate Load Controllers and Associated Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.

    Aggregate load controllers and associated methods are described. According to one aspect, a method of operating an aggregate load controller includes using an aggregate load controller having an initial state, applying a stimulus to a plurality of thermostatic controllers which are configured to control a plurality of respective thermostatic loads which receive electrical energy from an electrical utility to operate in a plurality of different operational modes, accessing data regarding a response of the thermostatic loads as a result of the applied stimulus, using the data regarding the response, determining a value of at least one design parameter of the aggregate load controller, and using the determined value of the at least one design parameter, configuring the aggregate load controller to control amounts of the electrical energy which are utilized by the thermostatic loads.

  16. Application of a compressible flow solver and barotropic cavitation model for the evaluation of the suction head in a low specific speed centrifugal pump impeller channel

    NASA Astrophysics Data System (ADS)

    Limbach, P.; Müller, T.; Skoda, R.

    2015-12-01

    Commonly, incompressible flow solvers with VOF-type cavitation models are applied for the simulation of cavitation in centrifugal pumps. Since the source/sink terms of the void fraction transport equation are based on simplified bubble dynamics, empirical parameters may need to be adjusted to the particular pump operating point. In the present study a barotropic cavitation model, which is based solely on thermodynamic fluid properties and does not include any empirical parameters, is applied to a single flow channel of a pump impeller in combination with a time-explicit viscous compressible flow solver. The suction head curves (head drop) are compared to the results of an incompressible implicit standard industrial CFD tool and are predicted qualitatively correctly by the barotropic model.

  17. Statistical Mechanics of Node-perturbation Learning with Noisy Baseline

    NASA Astrophysics Data System (ADS)

    Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato

    2017-02-01

    Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function by using the change in the objective function in response to the perturbation. The value of the objective function for an unperturbed output is called a baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict learning dynamics. We also show how to derive the generalization error by solving the differential equations of order parameters. On the basis of the results, we show that Cho's results also apply in general cases and present some general performance characteristics of Cho's model.
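
    The gradient estimate at the heart of node-perturbation learning can be sketched for a single linear unit as below; the teacher-student setup, noise levels, and learning rate are generic assumptions, not the model analyzed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 200
      teacher = rng.normal(size=N) / np.sqrt(N)   # target weights to be learned
      w = np.zeros(N)
      eta, sigma = 0.05, 0.1                      # learning rate and perturbation amplitude

      for step in range(20000):
          x = rng.normal(size=N)
          target = teacher @ x
          out = w @ x
          baseline = 0.5 * (out - target)**2 + rng.normal(0.0, 0.01)   # noisy baseline
          xi = rng.normal(0.0, sigma)                                  # output perturbation
          perturbed = 0.5 * (out + xi - target)**2
          # gradient estimate: change in the objective relative to the baseline, times the perturbation
          w -= (eta / N) * (perturbed - baseline) / sigma**2 * xi * x
      print("squared weight error:", np.sum((w - teacher)**2))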

  18. Completion of the universal I-Love-Q relations in compact stars including the mass

    NASA Astrophysics Data System (ADS)

    Reina, Borja; Sanchis-Gual, Nicolas; Vera, Raül; Font, José A.

    2017-09-01

    In a recent paper, we applied a rigorous perturbed matching framework to show the amendment of the mass of rotating stars in Hartle's model. Here, we apply this framework to the tidal problem in binary systems. Our approach fully accounts for the correction to the Love numbers needed to obtain the universal I-Love-Q relations. We compute the corrected mass versus radius configurations of rotating quark stars, revisiting a classical paper on the subject. These corrections allow us to find a universal relation involving the second-order contribution to the mass δM. We thus complete the set of universal relations for the tidal problem in binary systems, involving four perturbation parameters, namely I, Love, Q and δM. These relations can be used to obtain the perturbation parameters directly from observational data.

  19. Prioritized Contact Transport Stream

    NASA Technical Reports Server (NTRS)

    Hunt, Walter Lee, Jr. (Inventor)

    2015-01-01

    A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.

  20. Parameter Identification Flight Test Maneuvers for Closed Loop Modeling of the F-18 High Alpha Research Vehicle (HARV)

    NASA Technical Reports Server (NTRS)

    Batterson, James G. (Technical Monitor); Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for longitudinal and lateral linear model parameter estimation at 5, 20, 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Thrust Vectoring (TV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  1. Least-Squares Self-Calibration of Imaging Array Data

    NASA Technical Reports Server (NTRS)

    Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.

    2004-01-01

    When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
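
    A one-dimensional toy version of the joint solution is sketched below, recovering sky intensities and per-pixel offsets from overlapping dithered frames in a single linear least-squares solve; fixing the gains to unity and the particular dither pattern are simplifying assumptions, not the HST/SIRTF processing.

      import numpy as np

      rng = np.random.default_rng(0)
      n_sky, n_pix = 30, 8
      sky_true = rng.uniform(0.0, 10.0, n_sky)
      offsets_true = rng.normal(0.0, 2.0, n_pix)
      offsets_true -= offsets_true.mean()            # remove the unconstrained constant

      data, sky_idx, pix_idx = [], [], []
      for start in range(0, n_sky - n_pix + 1, 2):   # overlapping dither positions
          for p in range(n_pix):
              sky_idx.append(start + p)
              pix_idx.append(p)
              data.append(sky_true[start + p] + offsets_true[p] + rng.normal(0.0, 0.1))

      n_obs = len(data)
      A = np.zeros((n_obs + 1, n_sky + n_pix))
      A[np.arange(n_obs), sky_idx] = 1.0                     # sky-intensity parameters
      A[np.arange(n_obs), n_sky + np.array(pix_idx)] = 1.0   # pixel-offset parameters
      A[n_obs, n_sky:] = 1.0                                 # pin the mean offset to zero
      b = np.r_[data, 0.0]

      solution, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("max sky error:", np.abs(solution[:n_sky] - sky_true).max())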

  2. Model-Based Collaborative Filtering Analysis of Student Response Data: Machine-Learning Item Response Theory

    ERIC Educational Resources Information Center

    Bergner, Yoav; Droschler, Stefan; Kortemeyer, Gerd; Rayyan, Saif; Seaton, Daniel; Pritchard, David E.

    2012-01-01

    We apply collaborative filtering (CF) to dichotomously scored student response data (right, wrong, or no interaction), finding optimal parameters for each student and item based on cross-validated prediction accuracy. The approach is naturally suited to comparing different models, both unidimensional and multidimensional in ability, including a…

  3. Generating nonlinear FM chirp radar signals by multiple integrations

    DOEpatents

    Doerry, Armin W [Albuquerque, NM

    2011-02-01

    A phase component of a nonlinear frequency modulated (NLFM) chirp radar pulse can be produced by performing digital integration operations over a time interval defined by the pulse width. Each digital integration operation includes applying to a respectively corresponding input parameter value a respectively corresponding number of instances of digital integration.
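
    One possible reading of the idea is sketched below: repeatedly integrating constant input parameters builds a polynomial instantaneous-frequency sweep, and one further integration turns frequency into phase. The coefficients, sample rate, and pulse length are invented for illustration; this is not the patented algorithm itself.

      import numpy as np

      fs = 1.0e6                        # sample rate, Hz (assumed)
      dt = 1.0 / fs
      t = np.arange(0.0, 10e-3, dt)     # 10 ms pulse

      params = [1.0e4, 2.0e7, -5.0e8]   # one illustrative coefficient per integration depth
      freq = np.zeros_like(t)
      for k, a in enumerate(params):
          term = np.full_like(t, a)
          for _ in range(k):            # integrate the k-th parameter k times
              term = np.cumsum(term) * dt
          freq += term                  # nonlinear frequency sweep, Hz

      phase = 2.0 * np.pi * np.cumsum(freq) * dt   # final integration: frequency -> phase
      chirp = np.cos(phase)                        # NLFM chirp samples
      print(freq[0], freq[-1])                     # start and end of the sweep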

  4. Characterization of DBD Plasma Actuators Performance Without External Flow - Part I: Thrust-Voltage Quadratic Relationship in Logarithmic Space for Sinusoidal Excitation

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2016-01-01

    Results of characterization of Dielectric Barrier Discharge (DBD) plasma actuators without external flow are presented. The results include aerodynamic and electric performance of the actuators without external flow for different geometrical parameters, dielectric materials and applied voltage level and wave form.

  5. Ionic-liquid-impregnated resin for the microwave-assisted solid-liquid extraction of triazine herbicides in honey.

    PubMed

    Wu, Lijie; Song, Ying; Hu, Mingzhu; Yu, Cui; Zhang, Hanqi; Yu, Aimin; Ma, Qiang; Wang, Ziming

    2015-09-01

    Microwave-assisted ionic-liquid-impregnated resin solid-liquid extraction was developed for the extraction of triazine herbicides, including cyanazine, metribuzin, desmetryn, secbumeton, terbumeton, terbuthylazine, dimethametryn, and dipropetryn, in honey samples. The ionic-liquid-impregnated resin was prepared by immobilizing 1-hexyl-3-methylimidazolium hexafluorophosphate in the micropores of the resin. The resin was used as the extraction adsorbent. The extraction and enrichment of analytes were performed in a single step. The extraction time can be shortened greatly with the help of microwaves. The effects of experimental parameters, including type of resin, type of ionic liquid, mass ratio of resin to ionic liquid, extraction time, amount of the impregnated resin, extraction temperature, salt concentration, and desorption conditions, on the extraction efficiency were investigated. A Box-Behnken design was applied to the selection of the experimental parameters. The recoveries were in the range of 80.1 to 103.4% and the relative standard deviations were lower than 6.8%. The present method was applied to the analysis of honey samples.

  6. Approach to fitting parameters and clustering for characterising measured voltage dips based on two-dimensional polarisation ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Sánchez, Tania; Gómez-Lázaro, Emilio; Muljadi, E.

    An alternative approach to characterise real voltage dips is proposed and evaluated in this study. The proposed methodology is based on voltage-space vector solutions, identifying parameters for ellipses trajectories by using the least-squares algorithm applied on a sliding window along the disturbance. The most likely patterns are then estimated through a clustering process based on the k-means algorithm. The objective is to offer an efficient and easily implemented alternative to characterise faults and visualise the most likely instantaneous phase-voltage evolution during events through their corresponding voltage-space vector trajectories. This novel solution minimises the data to be stored but maintains extensive information about the dips including starting and ending transients. The proposed methodology has been applied satisfactorily to real voltage dips obtained from intensive field-measurement campaigns carried out in a Spanish wind power plant up to a time period of several years. A comparison to traditional minimum root mean square-voltage and time-duration classifications is also included in this study.

  7. Analysis of pumping tests: Significance of well diameter, partial penetration, and noise

    USGS Publications Warehouse

    Heidari, M.; Ghiassi, K.; Mehnert, E.

    1999-01-01

    The nonlinear least squares (NLS) method was applied to pumping and recovery aquifer test data in confined and unconfined aquifers with finite diameter and partially penetrating pumping wells, and with partially penetrating piezometers or observation wells. It was demonstrated that noiseless and moderately noisy drawdown data from observation points located less than two saturated thicknesses of the aquifer from the pumping well produced an exact or acceptable set of parameters when the diameter of the pumping well was included in the analysis. The accuracy of the estimated parameters, particularly that of specific storage, decreased with increases in the noise level in the observed drawdown data. With consideration of the well radii, the noiseless drawdown data from the pumping well in an unconfined aquifer produced good estimates of horizontal and vertical hydraulic conductivities and specific yield, but the estimated specific storage was unacceptable. When noisy data from the pumping well were used, an acceptable set of parameters was not obtained. Further experiments with noisy drawdown data in an unconfined aquifer revealed that when the well diameter was included in the analysis, hydraulic conductivity, specific yield and vertical hydraulic conductivity may be estimated rather effectively from piezometers located over a range of distances from the pumping well. Estimation of specific storage became less reliable for piezometers located at distances greater than the initial saturated thickness of the aquifer. Application of the NLS to field pumping and recovery data from a confined aquifer showed that the estimated parameters from the two tests were in good agreement only when the well diameter was included in the analysis. Without consideration of well radii, the estimated values of hydraulic conductivity from the pumping and recovery tests were off by a factor of four.

  8. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.

  9. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048
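
    A minimal sketch of the local single-parameter sensitivity step is shown below. The response function standing in for the tensile-strength model, the parameter names and the nominal values are hypothetical; only the dimensionless relative-sensitivity calculation reflects the general technique described above.

```python
import numpy as np

def response(params):
    """Hypothetical stand-in for a winding-quality model, e.g. predicted
    tensile strength as a function of (temperature, tension, pressure)."""
    T, F, p = params
    return 1800 + 2.5 * T - 0.004 * T**2 + 0.8 * F + 12.0 * p - 0.9 * p**2

def local_relative_sensitivity(f, x0, i, h=1e-3):
    """Dimensionless local sensitivity S_i = (df/dx_i) * x_i / f at x0."""
    x_up, x_dn = x0.copy(), x0.copy()
    x_up[i] += h * x0[i]
    x_dn[i] -= h * x0[i]
    dfdx = (f(x_up) - f(x_dn)) / (2 * h * x0[i])
    return dfdx * x0[i] / f(x0)

x_nominal = np.array([300.0, 40.0, 0.5])  # temperature, tension, pressure (placeholders)
for i, name in enumerate(["temperature", "tension", "pressure"]):
    print(name, local_relative_sensitivity(response, x_nominal, i))
```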

  10. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to the subscale T-2 jet-engine transport aircraft flight. Modeling results in different levels of turbulence were compared with results from time-domain output error and frequency- domain equation error methods to demonstrate the effectiveness of the approach.

  11. A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data

    NASA Astrophysics Data System (ADS)

    Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.

    2006-06-01

    Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.

  12. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (the latter commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, where the corner frequencies are within similar ranges of kappa values, so direct spectrum fitting often leads to systematic biases related to corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods from other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.

  13. Probabilistic calibration of the distributed hydrological model RIBS applied to real-time flood forecasting: the Harod river basin case study (Israel)

    NASA Astrophysics Data System (ADS)

    Nesti, Alice; Mediero, Luis; Garrote, Luis; Caporali, Enrica

    2010-05-01

    An automatic probabilistic calibration method for distributed rainfall-runoff models is presented. The high number of parameters in hydrologic distributed models makes special demands on the optimization procedure to estimate model parameters. With the proposed technique it is possible to reduce the complexity of calibration while maintaining adequate model predictions. The first step of the calibration procedure for the main model parameters is done manually with the aim of identifying their variation ranges. Afterwards a Monte Carlo technique is applied, which consists of repeated model simulations with randomly generated parameters. The Monte Carlo Analysis Toolbox (MCAT) includes a number of analysis methods to evaluate the results of these Monte Carlo parameter sampling experiments. The study investigates the use of a global sensitivity analysis as a screening tool to reduce the parametric dimensionality of multi-objective hydrological model calibration problems, while maximizing the information extracted from hydrological response data. The method is applied to the calibration of the RIBS flood forecasting model in the Harod river basin, located in Israel. The Harod basin covers an area of 180 km2. The catchment has a Mediterranean climate and is mainly characterized by a desert landscape, with a soil that is able to absorb large quantities of rainfall and at the same time is capable of generating high discharge peaks. Radar rainfall data with 6-minute temporal resolution are available as input to the model. The aim of the study is the validation of the model for real-time flood forecasting, in order to evaluate the benefits of improved precipitation forecasting within the FLASH European project.
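
    The Monte Carlo sampling step can be sketched as below. The toy linear-reservoir model, the parameter names and ranges, and the use of the Nash-Sutcliffe efficiency as the objective are assumptions for illustration; the actual study uses the distributed RIBS model and the MCAT toolbox.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common calibration objective."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def run_model(theta, rain):
    """Placeholder for the distributed model: a toy linear reservoir so the
    sketch stays self-contained."""
    k, c = theta
    q, store = np.zeros_like(rain), 0.0
    for i, r in enumerate(rain):
        store += c * r
        q[i] = store / k
        store -= q[i]
    return q

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=500)
q_obs = run_model((12.0, 0.6), rain) + rng.normal(0, 0.05, 500)

# Monte Carlo sampling within the manually identified parameter ranges
samples = np.column_stack([rng.uniform(2, 30, 2000),     # reservoir constant k
                           rng.uniform(0.1, 1.0, 2000)])  # runoff coefficient c
scores = np.array([nse(run_model(th, rain), q_obs) for th in samples])
best = samples[np.argsort(scores)[-10:]]
print(best)  # behavioural parameter sets retained for further analysis
```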

  14. A Multialgorithm Approach to Land Surface Modeling of Suspended Sediment in the Colorado Front Range

    PubMed Central

    Stewart, J. R.; Kasprzyk, J. R.; Rajagopalan, B.; Minear, J. T.; Raseman, W. J.

    2017-01-01

    Abstract A new paradigm of simulating suspended sediment load (SSL) with a Land Surface Model (LSM) is presented here. Five erosion and SSL algorithms were applied within a common LSM framework to quantify uncertainties and evaluate predictability in two steep, forested catchments (>1,000 km2). The algorithms were chosen from among widely used sediment models, including empirically based: monovariate rating curve (MRC) and the Modified Universal Soil Loss Equation (MUSLE); stochastically based: the Load Estimator (LOADEST); conceptually based: the Hydrologic Simulation Program—Fortran (HSPF); and physically based: the Distributed Hydrology Soil Vegetation Model (DHSVM). The algorithms were driven by the hydrologic fluxes and meteorological inputs generated from the Variable Infiltration Capacity (VIC) LSM. A multiobjective calibration was applied to each algorithm and optimized parameter sets were validated over an excluded period, as well as in a transfer experiment to a nearby catchment to explore parameter robustness. Algorithm performance showed consistent decreases when parameter sets were applied to periods with greatly differing SSL variability relative to the calibration period. Of interest was a joint calibration of all sediment algorithm and streamflow parameters simultaneously, from which trade‐offs between streamflow performance and partitioning of runoff and base flow to optimize SSL timing were noted, decreasing the flexibility and robustness of the streamflow to adapt to different time periods. Parameter transferability to another catchment was most successful in more process‐oriented algorithms, the HSPF and the DHSVM. This first‐of‐its‐kind multialgorithm sediment scheme offers a unique capability to portray acute episodic loading while quantifying trade‐offs and uncertainties across a range of algorithm structures. PMID:29399268
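
    Of the five algorithms, the monovariate rating curve is simple enough to sketch directly: suspended sediment load is modelled as a power law of discharge, SSL = a Q^b, commonly fitted by linear regression in log space. The synthetic discharge series and coefficients below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Monovariate rating curve: SSL = a * Q**b, fitted by linear regression in log space.
rng = np.random.default_rng(3)
Q = rng.lognormal(mean=2.0, sigma=0.8, size=365)             # daily streamflow (m3/s)
ssl_obs = 0.05 * Q**1.7 * np.exp(rng.normal(0, 0.3, 365))    # "observed" loads (t/day)

b, log_a = np.polyfit(np.log(Q), np.log(ssl_obs), 1)          # slope b, intercept ln(a)
a = np.exp(log_a)
ssl_sim = a * Q**b
print(a, b)  # calibrated rating-curve parameters
```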

  15. Fast and versatile fabrication of PMMA microchip electrophoretic devices by laser engraving.

    PubMed

    Moreira Gabriel, Ellen Flávia; Tomazelli Coltro, Wendell Karlos; Garcia, Carlos D

    2014-08-01

    This paper describes the effects of different modes and engraving parameters on the dimensions of microfluidic structures produced in PMMA using laser engraving. The engraving modes included raster and vector, while the explored engraving parameters included power, speed, frequency, resolution, line-width, and number of passes. Under the optimum conditions, the technique was applied to produce channels suitable for CE separations. Taking advantage of the possibility to cut-through the substrates, the laser was also used to define solution reservoirs (buffer, sample, and waste) and a PDMS-based decoupler. The final device was used to perform the analysis of a model mixture of phenolic compounds within 200 s with baseline resolution. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Applied Meteorology Unit (AMU) Quarterly Report - Fourth Quarter FY-09

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred; Barrett, Joe; Watson, Leela; Wheeler, Mark

    2009-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the fourth quarter of Fiscal Year 2009 (July - September 2009). Task reports include: (1) Peak Wind Tool for User Launch Commit Criteria (LCC), (2) Objective Lightning Probability Tool, Phase III, (3) Peak Wind Tool for General Forecasting, Phase II, (4) Update and Maintain Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS), (5) Verify MesoNAM Performance, and (6) Develop a Graphical User Interface to update selected parameters for the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model.

  17. Apparatus for measuring the local void fraction in a flowing liquid containing a gas

    DOEpatents

    Dunn, P.F.

    1979-07-17

    The local void fraction in liquid containing a gas is measured by placing an impedance-variation probe in the liquid, applying a controlled voltage or current to the probe, and measuring the probe current or voltage. A circuit for applying the one electrical parameter and measuring the other includes a feedback amplifier that minimizes the effect of probe capacitance and a digitizer to provide a clean signal. Time integration of the signal provides a measure of the void fraction, and an oscilloscope display also shows bubble size and distribution.

  18. Apparatus for measuring the local void fraction in a flowing liquid containing a gas

    DOEpatents

    Dunn, Patrick F.

    1981-01-01

    The local void fraction in liquid containing a gas is measured by placing an impedance-variation probe in the liquid, applying a controlled voltage or current to the probe, and measuring the probe current or voltage. A circuit for applying the one electrical parameter and measuring the other includes a feedback amplifier that minimizes the effect of probe capacitance and a digitizer to provide a clean signal. Time integration of the signal provides a measure of the void fraction, and an oscilloscope display also shows bubble size and distribution.

  19. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

    In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform comprises four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. This tool has also been applied for simultaneous compression and encryption of an image. The system's performance, its sensitivity to the encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool, which can be used in various optical information processing applications, including image encryption and image compression. This tool can also be applied for securing color, multispectral, and three-dimensional images.

  20. Surface Irregularity Factor as a Parameter to Evaluate the Fatigue Damage State of CFRP

    PubMed Central

    Zuluaga-Ramírez, Pablo; Frövel, Malte; Belenguer, Tomás; Salazar, Félix

    2015-01-01

    This work presents an optical non-contact technique to evaluate the fatigue damage state of CFRP structures by measuring the irregularity factor of the surface. This factor includes information about surface topology and can be measured easily in the field, by techniques such as optical profilometers. The surface irregularity factor has been correlated with stiffness degradation, which is a well-accepted parameter for the evaluation of the fatigue damage state of composite materials. Constant amplitude fatigue loads (CAL) and realistic variable amplitude loads (VAL), representative of real in-flight conditions, have been applied to "dog bone" shaped tensile specimens. It has been shown that the measurement of the surface irregularity parameters can be applied to evaluate the damage state of a structure, and that it is independent of the type of fatigue load that has caused the damage. As a result, this measurement technique is applicable for a wide range of inspections of composite material structures, from pressurized tanks with constant amplitude loads, to variable amplitude loaded aeronautical structures such as wings and empennages, up to automotive and other industrial applications. PMID:28793655

  1. An Adaptive Moving Target Imaging Method for Bistatic Forward-Looking SAR Using Keystone Transform and Optimization NLCS.

    PubMed

    Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu

    2017-01-23

    Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial-variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying keystone transform over the whole received echo, and then, the relationships among the unknown high-order RCM, the nonlinear spatial-variances of the Doppler parameters, and the speed of the mover, are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but also the nonlinear spatial-variances of the Doppler parameters can be balanced. At last, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.

  2. A Designed Experiments Approach to Optimizing MALDI-TOF MS Spectrum Processing Parameters Enhances Detection of Antibiotic Resistance in Campylobacter jejuni

    PubMed Central

    Penny, Christian; Grothendick, Beau; Zhang, Lin; Borror, Connie M.; Barbano, Duane; Cornelius, Angela J.; Gilpin, Brent J.; Fagerquist, Clifton K.; Zaragoza, William J.; Jay-Russell, Michele T.; Lastovica, Albert J.; Ragimbeau, Catherine; Cauchie, Henry-Michel; Sandrin, Todd R.

    2016-01-01

    MALDI-TOF MS has been utilized as a reliable and rapid tool for microbial fingerprinting at the genus and species levels. Recently, there has been keen interest in using MALDI-TOF MS beyond the genus and species levels to rapidly identify antibiotic resistant strains of bacteria. The purpose of this study was to enhance strain level resolution for Campylobacter jejuni through the optimization of spectrum processing parameters using a series of designed experiments. A collection of 172 strains of C. jejuni were collected from Luxembourg, New Zealand, North America, and South Africa, consisting of four groups of antibiotic resistant isolates. The groups included: (1) 65 strains resistant to cefoperazone, (2) 26 resistant to cefoperazone and beta-lactams, (3) 5 strains resistant to cefoperazone, beta-lactams, and tetracycline, and (4) 76 strains resistant to cefoperazone, teicoplanin, amphotericin B, and cephalothin. Initially, a model set of 16 strains (three biological replicates and three technical replicates per isolate, yielding a total of 144 spectra) of C. jejuni was subjected to each designed experiment to enhance detection of antibiotic resistance. The optimal parameters were applied to the larger collection of 172 isolates (two biological replicates and three technical replicates per isolate, yielding a total of 1,031 spectra). We observed an increase in antibiotic resistance detection whenever either a curve-based similarity coefficient (Pearson or ranked Pearson) was applied rather than a peak-based one (Dice) and/or the optimized preprocessing parameters were applied. Increases in antimicrobial resistance detection were scored using the jackknife maximum similarity technique following cluster analysis. For the first four groups of antibiotic resistant isolates, the optimized preprocessing parameters increased detection for the respective groups by: (1) 5%, (2) 9%, (3) 10%, and (4) 2%. A second categorization was created from the collection, consisting of 31 strains resistant to beta-lactams and 141 strains sensitive to beta-lactams. Applying optimal preprocessing parameters, beta-lactam resistance detection was increased by 34%. These results suggest that spectrum processing parameters, which are rarely optimized or adjusted, affect the performance of MALDI-TOF MS-based detection of antibiotic resistance and can be fine-tuned to enhance screening performance. PMID:27303397
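
    A minimal sketch of the distinction between a curve-based similarity coefficient (Pearson, computed on the full intensity profiles) and a peak-based one (Dice, computed on matched peak lists) is given below. The synthetic spectra, the peak-detection threshold and the binning are illustrative assumptions, not the software settings used in the study.

```python
import numpy as np
from scipy.stats import pearsonr

def dice_similarity(peaks_a, peaks_b):
    """Peak-based Dice coefficient on two sets of binned peak positions."""
    a, b = set(peaks_a), set(peaks_b)
    return 2 * len(a & b) / (len(a) + len(b))

def peak_bins(spectrum, mz, threshold=0.2, bin_width=5.0):
    """Reduce a spectrum to binned peak positions above a relative threshold."""
    idx = spectrum > threshold * spectrum.max()
    return np.unique(np.round(mz[idx] / bin_width).astype(int))

# Two synthetic spectra sharing most peak positions but differing in intensities
mz = np.linspace(2000, 20000, 4000)
rng = np.random.default_rng(7)
centers = rng.uniform(2000, 20000, 30)
spec1 = sum(np.exp(-0.5 * ((mz - c) / 8) ** 2) for c in centers)
spec2 = sum(h * np.exp(-0.5 * ((mz - c) / 8) ** 2)
            for h, c in zip(rng.uniform(0.3, 1.0, 30), centers))

print("Pearson (curve-based):", pearsonr(spec1, spec2)[0])
print("Dice (peak-based):    ", dice_similarity(peak_bins(spec1, mz), peak_bins(spec2, mz)))
```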

  3. On-line implementation of nonlinear parameter estimation for the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Buckland, Julia H.; Musgrave, Jeffrey L.; Walker, Bruce K.

    1992-01-01

    We investigate the performance of a nonlinear estimation scheme applied to the estimation of several parameters in a performance model of the Space Shuttle Main Engine. The nonlinear estimator is based upon the extended Kalman filter which has been augmented to provide estimates of several key performance variables. The estimated parameters are directly related to the efficiency of both the low pressure and high pressure fuel turbopumps. Decreases in the parameter estimates may be interpreted as degradations in turbine and/or pump efficiencies which can be useful measures for an online health monitoring algorithm. This paper extends previous work which has focused on off-line parameter estimation by investigating the filter's on-line potential from a computational standpoint. In addition, we examine the robustness of the algorithm to unmodeled dynamics. The filter uses a reduced-order model of the engine that includes only fuel-side dynamics. The on-line results produced during this study are comparable to off-line results generated previously. The results show that the parameter estimates are sensitive to dynamics not included in the filter model. Off-line results using an extended Kalman filter with a full order engine model to address the robustness problems of the reduced-order model are also presented.

  4. Defining Coastal Storm and Quantifying Storms Applying Coastal Storm Impulse Parameter

    NASA Astrophysics Data System (ADS)

    Mahmoudpour, Nader

    2014-05-01

    What defines a storm condition and what would initiate a "storm" have not been uniquely defined among scientists and engineers. Parameters that have been used to define a storm condition include wind speed, beach erosion and storm hydrodynamic parameters such as wave height and water levels. Some of these parameters are consequences of the storm, such as beach erosion, and some are not directly related to the storm hydrodynamics, such as wind speed. For the purpose of the presentation, the different storm conditions based on wave height, water levels, wind speed and beach erosion will be discussed and assessed. However, it is more scientifically sound to base the storm definition on hydrodynamic parameters such as wave height, water level and storm duration. Once the storm condition is defined and the storm has initiated, the severity of the storm must be forecast in order to evaluate the hazard and analyze the risk so that the appropriate responses can be determined. The correlation of storm damage to the meteorological and hydrodynamic parameters can be expressed as a storm scale, storm index or storm parameter, which is needed to reduce the complexity of the variables involved in developing the scale for risk analysis and response management. A newly introduced Coastal Storm Impulse (COSI) parameter quantifies storms into one number for a specific location and storm event. The COSI parameter is based on the conservation of linear, horizontal momentum to combine storm surge, wave dynamics, and currents over the storm duration. The COSI parameter applies the principle of conservation of momentum to physically combine the hydrodynamic variables per unit width of shoreline. This total momentum is then integrated over the duration of the storm to determine the storm's impulse to the coast. The COSI parameter employs the mean, time-averaged nonlinear (Fourier) wave momentum flux over the wave period, added to the horizontal storm surge momentum above Mean High Water (MHW), integrated over the storm duration. The COSI parameter methodology has been applied to a 10-year data set from 1994 to 2003 at the US Army Corps of Engineers Field Research Facility (FRF) located on the Atlantic Ocean in Duck, North Carolina. The storm duration was taken as the length of time (hours) that the spectral significant wave heights were equal to or greater than 1.6 meters for at least a 12-hour, continuous period. Wave heights were measured in 8 meters water depth and water levels were measured at the NOAA/NOS tide gauge at the end of the FRF pier. The 10-year data set was analyzed applying the aforementioned storm criteria and produced 148 coastal events, including hurricanes and northeasters. The results of this analysis and the application of the COSI parameter to determine "Extra Ordinary" storms in Federal Projects for the Gulf of Mexico 2012 hurricane season will be discussed at the time of presentation.
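
    A minimal sketch of the storm criterion (significant wave height of at least 1.6 m sustained for at least 12 continuous hours) and of a time-integrated impulse proxy is given below. The momentum-flux expressions are deliberately simplified stand-ins, not the nonlinear Fourier formulation of COSI, and the synthetic wave and surge series are assumptions for illustration.

```python
import numpy as np

RHO, G = 1025.0, 9.81          # seawater density (kg/m3), gravity (m/s2)

def find_storms(hs, dt_hours=1.0, h_thresh=1.6, min_hours=12):
    """Return (start, end) index pairs where Hs >= h_thresh for >= min_hours."""
    above = hs >= h_thresh
    storms, start = [], None
    for i, flag in enumerate(np.append(above, False)):  # trailing False closes open runs
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * dt_hours >= min_hours:
                storms.append((start, i))
            start = None
    return storms

def storm_impulse(hs, surge, dt_hours):
    """Simplified impulse proxy: time integral of a wave momentum-flux term
    (~ rho*g*Hs^2/8) plus a surge momentum term (~ rho*g*surge^2/2)."""
    flux = RHO * G * hs**2 / 8.0 + RHO * G * np.maximum(surge, 0.0) ** 2 / 2.0
    return float(np.sum(flux) * dt_hours * 3600.0)

hours = np.arange(0, 24 * 30, 1.0)
hs = 1.0 + 1.2 * np.exp(-((hours - 200.0) / 30.0) ** 2)   # one synthetic storm peak
surge = 0.3 * np.exp(-((hours - 205.0) / 25.0) ** 2)       # associated surge (m)
for s, e in find_storms(hs):
    print(hours[s], hours[e - 1], storm_impulse(hs[s:e], surge[s:e], dt_hours=1.0))
```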

  5. The Model Parameter Estimation Experiment (MOPEX): Its structure, connection to other international initiatives and future directions

    USGS Publications Warehouse

    Wagener, T.; Hogue, T.; Schaake, J.; Duan, Q.; Gupta, H.; Andreassian, V.; Hall, A.; Leavesley, G.

    2006-01-01

    The Model Parameter Estimation Experiment (MOPEX) is an international project aimed at developing enhanced techniques for the a priori estimation of parameters in hydrological models and in land surface parameterization schemes connected to atmospheric models. The MOPEX science strategy involves: database creation, a priori parameter estimation methodology development, parameter refinement or calibration, and the demonstration of parameter transferability. A comprehensive MOPEX database has been developed that contains historical hydrometeorological data and land surface characteristics data for many hydrological basins in the United States (US) and in other countries. This database is being continuously expanded to include basins from various hydroclimatic regimes throughout the world. MOPEX research has largely been driven by a series of international workshops that have brought interested hydrologists and land surface modellers together to exchange knowledge and experience in developing and applying parameter estimation techniques. With its focus on parameter estimation, MOPEX plays an important role in the international context of other initiatives such as GEWEX, HEPEX, PUB and PILPS. This paper outlines the MOPEX initiative, discusses its role in the scientific community, and briefly states future directions.

  6. Dipole and nondipole photoionization of molecular hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmermann, B.; McKoy, V.; Southworth, S. H.

    2015-05-01

    We describe a theoretical approach to molecular photoionization that includes first-order corrections to the dipole approximation. The theoretical formalism is presented and applied to photoionization of H2 over the 20- to 180-eV photon energy range. The angle-integrated cross section sigma, the electric dipole anisotropy parameter beta(e), the molecular alignment anisotropy parameter beta(m), and the first-order nondipole asymmetry parameters gamma and delta were calculated within the single-channel, static-exchange approximation. The calculated parameters are compared with previous measurements of sigma and beta(m) and the present measurements of beta(e) and gamma + 3 delta. The dipole and nondipole angular distribution parameters were determined simultaneously using an efficient, multiangle measurement technique. Good overall agreement is observed between the magnitudes and spectral variations of the calculated and measured parameters. The nondipole asymmetries of He 1s and Ne 2p photoelectrons were also measured in the course of this work.

  7. Model misspecification detection by means of multiple generator errors, using the observed potential map.

    PubMed

    Zhang, Z; Jewett, D L

    1994-01-01

    Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.

  8. Tactile Imaging Markers to Characterize Female Pelvic Floor Conditions.

    PubMed

    van Raalte, Heather; Egorov, Vladimir

    2015-08-01

    The Vaginal Tactile Imager (VTI) records pressure patterns from vaginal walls under an applied tissue deformation and during pelvic floor muscle contractions. The objective of this study is to validate tactile imaging and muscle contraction parameters (markers) sensitive to the female pelvic floor conditions. Twenty-two women with normal and prolapse conditions were examined by a vaginal tactile imaging probe. We identified 9 parameters which were sensitive to prolapse conditions ( p < 0.05 for one-way ANOVA and/or p < 0.05 for t -test with correlation factor r from -0.73 to -0.56). The list of parameters includes pressure, pressure gradient and dynamic pressure response during muscle contraction at identified locations. These parameters may be used for biomechanical characterization of female pelvic floor conditions to support an effective management of pelvic floor prolapse.

  9. Tactile Imaging Markers to Characterize Female Pelvic Floor Conditions

    PubMed Central

    van Raalte, Heather; Egorov, Vladimir

    2015-01-01

    The Vaginal Tactile Imager (VTI) records pressure patterns from vaginal walls under an applied tissue deformation and during pelvic floor muscle contractions. The objective of this study is to validate tactile imaging and muscle contraction parameters (markers) sensitive to the female pelvic floor conditions. Twenty-two women with normal and prolapse conditions were examined by a vaginal tactile imaging probe. We identified 9 parameters which were sensitive to prolapse conditions (p < 0.05 for one-way ANOVA and/or p < 0.05 for t-test with correlation factor r from −0.73 to −0.56). The list of parameters includes pressure, pressure gradient and dynamic pressure response during muscle contraction at identified locations. These parameters may be used for biomechanical characterization of female pelvic floor conditions to support an effective management of pelvic floor prolapse. PMID:26389014

  10. Application of nonlinear least-squares regression to ground-water flow modeling, west-central Florida

    USGS Publications Warehouse

    Yobbi, D.K.

    2000-01-01

    A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.

  11. Thermal design of AOTV heatshields for a conical drag brake

    NASA Technical Reports Server (NTRS)

    Pitts, W. C.; Murbach, M. S.

    1985-01-01

    Results are presented from an on-going study of the thermal performance of thermal protection systems for a conical drag brake type AOTV. Three types of heatshield are considered: rigid ceramic insulation, flexible ceramic blankets, and ceramic cloths. The results for the rigid insulation apply to other types of AOTV as well. Charts are presented in parametric form so that they may be applied to a variety of missions and vehicle configurations. The parameters considered include: braking maneuver heat flux and total heat load, heatshield material and thickness, heatshield thermal mass and conductivity, absorptivity and emissivity of surfaces, thermal mass of support structure, and radiation transmission through thin heatshields. Results of temperature calculations presented show trends with and sensitivities to these parameters. The emphasis is on providing information that will be useful in estimating the minimum required mass of these heatshield materials.

  12. Modeling electron emission and surface effects from diamond cathodes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Smithe, D.; Cary, J. R.; Ben-Zvi, I.; Rao, T.; Smedley, J.; Wang, E.

    2015-02-01

    We developed modeling capabilities, within the Vorpal particle-in-cell code, for three-dimensional simulations of surface effects and electron emission from semiconductor photocathodes. They include calculation of emission probabilities using general, piece-wise continuous, space-time dependent surface potentials, effective mass, and band bending field effects. We applied these models, in combination with previously implemented capabilities for modeling charge generation and transport in diamond, to investigate the emission dependence on applied electric field in the range from approximately 2 MV/m to 17 MV/m along the [100] direction. The simulation results were compared to experimental data. For the considered parameter regime, conservation of transverse electron momentum (in the plane of the emission surface) allows direct emission from only two (parallel to [100]) of the six equivalent lowest conduction band valleys. When the electron affinity χ is the only parameter varied in the simulations, the value χ = 0.31 eV leads to overall qualitative agreement with the probability of emission deduced from experiments. Including band bending in the simulations improves the agreement with the experimental data, particularly at low applied fields, but not significantly. Using surface potentials with different profiles further allows us to investigate the emission as a function of potential barrier height, width, and vacuum level position. However, adding surface patches with different levels of hydrogenation, modeled with position-dependent electron affinity, leads to the closest agreement with the experimental data.

  13. Effects of Processing Parameters on the Forming Quality of C-Shaped Thermosetting Composite Laminates in Hot Diaphragm Forming Process

    NASA Astrophysics Data System (ADS)

    Bian, X. X.; Gu, Y. Z.; Sun, J.; Li, M.; Liu, W. P.; Zhang, Z. G.

    2013-10-01

    In this study, the effects of processing temperature and vacuum applying rate on the forming quality of C-shaped carbon fiber reinforced epoxy resin matrix composite laminates during hot diaphragm forming process were investigated. C-shaped prepreg preforms were produced using a home-made hot diaphragm forming equipment. The thickness variations of the preforms and the manufacturing defects after diaphragm forming process, including fiber wrinkling and voids, were evaluated to understand the forming mechanism. Furthermore, both interlaminar slipping friction and compaction behavior of the prepreg stacks were experimentally analyzed for showing the importance of the processing parameters. In addition, autoclave processing was used to cure the C-shaped preforms to investigate the changes of the defects before and after cure process. The results show that the C-shaped prepreg preforms with good forming quality can be achieved through increasing processing temperature and reducing vacuum applying rate, which obviously promote prepreg interlaminar slipping process. The process temperature and forming rate in hot diaphragm forming process strongly influence prepreg interply frictional force, and the maximum interlaminar frictional force can be taken as a key parameter for processing parameter optimization. Autoclave process is effective in eliminating voids in the preforms and can alleviate fiber wrinkles to a certain extent.

  14. Localised anodic oxidation of aluminium material using a continuous electrolyte jet

    NASA Astrophysics Data System (ADS)

    Kuhn, D.; Martin, A.; Eckart, C.; Sieber, M.; Morgenstern, R.; Hackert-Oschätzchen, M.; Lampke, T.; Schubert, A.

    2017-03-01

    Anodic oxidation of aluminium and its alloys is often used as protection against material wear and corrosion. Anodic oxidation of aluminium is therefore applied to produce functional oxide layers. The structure and properties of the oxide layers can be influenced by various factors. These factors include, for example, the properties of the substrate material, such as alloying elements and heat treatment, or process parameters such as operating temperature, electrical parameters, or the type of electrolyte used. In order to avoid damage to the workpiece surface caused by covering materials in masking applications, to minimize the use of resources, and to modify the surface in a targeted manner, the anodic oxidation has to be localised to partial areas. Within this study, an alternative that does not require masking the substrate is investigated for generating locally limited anodic oxidation using a continuous electrolyte jet. For this purpose, aluminium alloy EN AW 7075 is machined by applying a continuous electrolyte jet of oxalic acid. Experiments were carried out by varying process parameters such as voltage and processing time. The realised oxide spots on the aluminium surface were investigated by optical microscopy, SEM and EDX line scanning. Furthermore, the dependencies of the oxide layer properties on the process parameters are shown.

  15. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences.

    PubMed

    Rivolo, Simone; Asrress, Kaleab N; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø; Grøndal, Anne K; Hønge, Jesper L; Kim, Won Y; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P; Lee, Jack

    2014-09-01

    Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky-Golay filter, to reduce the high frequency acquisition noise. The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power-spectrum of the ensemble-averaged waveforms contains little high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%).
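
    The contrast between a tuned Savitzky-Golay differentiator and a parameter-free finite-difference derivative can be sketched as follows. The waveform is a synthetic stand-in for an ensemble-averaged pressure trace, and np.gradient provides only a second-order central difference, standing in for the higher-order scheme proposed in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 1000.0                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Stand-in for an ensemble-averaged pressure waveform over one cardiac cycle
p = 80 + 40 * np.sin(2 * np.pi * 1.0 * t) ** 2

# Savitzky-Golay as a differentiator: result depends on window/order choices
dp_sg = savgol_filter(p, window_length=51, polyorder=3, deriv=1, delta=1 / fs)

# Parameter-free alternative: central finite differences (second order here)
dp_cfd = np.gradient(p, 1 / fs)

# Largest discrepancy between the two derivative estimates
print(np.max(np.abs(dp_sg - dp_cfd)))
```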

  16. Determination of orbits of comets: P/Kearns-Kwee, P/Gunn, including nongravitational effects in the comets' motion

    NASA Technical Reports Server (NTRS)

    Todorovic-Juchniewicz, Bozenna; Sitarski, Grzegorz

    1992-01-01

    To improve the orbits, all the positional observations of the comets were collected. The observations were selected and weighted according to objective mathematical criteria and the mean residuals a priori were calculated for both comets. We took into account nongravitational effects in the comets' motion using Marsden's method applied in two ways: either determining the three constant parameters, A(sub 1), A(sub 2), A(sub 3) or the four parameters A, eta, I, phi connected with the rotating nucleus of the comet. To link successfully all the observations, we had to assume for both comets that A(t) = A(sub O)exp(-B x t) where B was an additional nongravitational parameter.
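
    For illustration, the sketch below evaluates nongravitational accelerations of the Marsden form a_i = A_i g(r), with the time-dependent scaling A(t) = A_0 exp(-B t) mentioned above. The g(r) constants are the standard water-ice sublimation values from the literature (Marsden, Sekanina and Yeomans 1973), quoted here from memory, and the A_i and B values are arbitrary placeholders, not the fitted parameters for P/Kearns-Kwee or P/Gunn.

```python
import numpy as np

# Marsden's standard g(r) for water-ice sublimation (constants quoted from memory)
ALPHA, R0, M, N, K = 0.1113, 2.808, 2.15, 5.093, 4.6142

def g(r_au):
    """Dimensionless sublimation function g(r) of heliocentric distance (AU)."""
    x = r_au / R0
    return ALPHA * x**(-M) * (1.0 + x**N) ** (-K)

def nongrav_accel(r_au, t_days, A1, A2, A3, B=0.0):
    """Radial/transverse/normal nongravitational accelerations, with the
    time-dependent scaling A(t) = A * exp(-B * t) described in the abstract."""
    scale = g(r_au) * np.exp(-B * t_days)
    return scale * np.array([A1, A2, A3])

# Placeholder parameter values, roughly of the usual 1e-8 AU/day^2 magnitude
print(nongrav_accel(r_au=1.5, t_days=100.0, A1=1e-8, A2=5e-10, A3=0.0, B=1e-4))
```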

  17. Effects of space environment on composites: An analytical study of critical experimental parameters

    NASA Technical Reports Server (NTRS)

    Gupta, A.; Carroll, W. F.; Moacanin, J.

    1979-01-01

    A generalized methodology, currently employed at JPL, was used to develop an analytical model for the effects of high-energy electrons and for interactions between electron and ultraviolet effects. Chemical kinetic concepts were applied in defining quantifiable parameters; the need for determining short-lived transient species and their concentrations was demonstrated. The results demonstrate a systematic and cost-effective means of addressing the issues and show qualitative and quantitative relationships between space radiation and simulation parameters. An equally important result is the identification of critical initial experiments necessary to further clarify these relationships. Topics discussed include facility and test design; rastered vs. diffuse continuous e-beam; valid acceleration level; simultaneous vs. sequential exposure to different types of radiation; and interruption of test continuity.

  18. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    NASA Astrophysics Data System (ADS)

    Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal

    Search for low-energy β contaminations in industrial environments requires using Liquid Scintillation Counting. This indirect measurement method supposes a fine control from sampling to measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for both smears and aqueous samples' characterization. That includes choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except those relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
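
    The figure-of-merit optimisation referred to above is commonly formulated as FOM = E^2/B, where E is the counting efficiency in a candidate energy window and B the background count in that window. The sketch below scans candidate windows over synthetic source and background spectra; the spectra, channel ranges and step sizes are illustrative assumptions, not Tri-Carb settings.

```python
import numpy as np

def figure_of_merit(eff_percent, background_counts):
    """Common liquid-scintillation figure of merit: FOM = E^2 / B."""
    return eff_percent**2 / background_counts

# Synthetic spectra (counts per channel): a low-energy beta source plus background
channels = np.arange(0, 200)
rng = np.random.default_rng(11)
source = 1000 * np.exp(-0.5 * ((channels - 40) / 15.0) ** 2)
background = 5 + 0.2 * channels + rng.poisson(2, channels.size)

best = None
for lo in range(0, 150, 5):
    for hi in range(lo + 10, 200, 5):
        eff = 100 * source[lo:hi].sum() / source.sum()    # counting efficiency (%)
        bkg = background[lo:hi].sum()                     # background in the window
        fom = figure_of_merit(eff, bkg)
        if best is None or fom > best[0]:
            best = (fom, lo, hi)
print("optimal window (channels):", best[1], "-", best[2], " FOM =", round(best[0], 1))
```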

  19. Identification of atypical flight patterns

    NASA Technical Reports Server (NTRS)

    Statler, Irving C. (Inventor); Ferryman, Thomas A. (Inventor); Amidan, Brett G. (Inventor); Whitney, Paul D. (Inventor); White, Amanda M. (Inventor); Willse, Alan R. (Inventor); Cooley, Scott K. (Inventor); Jay, Joseph Griffith (Inventor); Lawrence, Robert E. (Inventor); Mosbrucker, Chris (Inventor)

    2005-01-01

    Method and system for analyzing aircraft data, including multiple selected flight parameters for a selected phase of a selected flight, and for determining when the selected phase of the selected flight is atypical, when compared with corresponding data for the same phase for other similar flights. A flight signature is computed using continuous-valued and discrete-valued flight parameters for the selected flight parameters and is optionally compared with a statistical distribution of other observed flight signatures, yielding atypicality scores for the same phase for other similar flights. A cluster analysis is optionally applied to the flight signatures to define an optimal collection of clusters. A level of atypicality for a selected flight is estimated, based upon an index associated with the cluster analysis.
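
    A minimal sketch of the general idea, not the patented method, is shown below: each flight is summarised as a signature vector, the signatures are clustered, and an atypicality score is taken as the distance to the nearest cluster centre. The signature contents, the number of clusters and the scoring rule are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical signatures: one row per flight, columns are summary statistics
# of selected flight parameters during one phase (e.g. approach).
signatures = rng.normal(0, 1, size=(500, 12))
signatures[-3:] += 6.0                      # a few deliberately atypical flights

X = StandardScaler().fit_transform(signatures)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Atypicality score: distance from each flight to its nearest cluster centre
dist = np.min(km.transform(X), axis=1)
atypical = np.argsort(dist)[-5:]
print("most atypical flights:", atypical)
```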

  20. Sandwich Structure Risk Reduction in Support of the Payload Adapter Fitting

    NASA Technical Reports Server (NTRS)

    Nettles, A. T.; Jackson, J. R.; Guin, W. E.

    2018-01-01

    Reducing risk for utilizing honeycomb sandwich structure for the Space Launch System payload adapter fitting includes determining what parameters need to be tested for damage tolerance to ensure a safe structure. Specimen size and boundary conditions are the most practical parameters to use in damage tolerance inspection. The effect of impact over core splices and of foreign object debris between the facesheet and core is assessed. The effect of enhancing damage tolerance by applying an outer layer of carbon fiber woven cloth is examined. A simple repair technique for barely visible impact damage that restores all compression strength is presented.

  1. IAU MDC Photographic Meteor Orbits Database: Version 2013

    NASA Astrophysics Data System (ADS)

    Neslušan, L.; Porubčan, V.; Svoreň, J.

    2014-05-01

    A new 2013 version of the IAU MDC photographic meteor orbits database, an upgrade of the current 2003 version (Lindblad et al. 2003, EMP 93:249-260), is presented. An additional 292 orbits have been added to the 2003 version, so the new version of the database consists of 4,873 meteors with their geophysical and orbital parameters compiled in 41 catalogues. For storing the data, a new format is applied that enables simpler handling of the parameters, including the errors of their determination. The data can be downloaded from the IAU MDC web site: http://www.astro.sk/IAUMDC/Ph2013/

  2. VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)

    NASA Astrophysics Data System (ADS)

    Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.

    2016-02-01

    Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).

  3. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors

    PubMed Central

    2017-01-01

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest. PMID:28745040

  4. Evolution of a mini-scale biphasic dissolution model: Impact of model parameters on partitioning of dissolved API and modelling of in vivo-relevant kinetics.

    PubMed

    Locher, Kathrin; Borghardt, Jens M; Frank, Kerstin J; Kloft, Charlotte; Wagner, Karl G

    2016-08-01

    Biphasic dissolution models are proposed to have good predictive power for the in vivo absorption. The aim of this study was to improve our previously introduced mini-scale dissolution model to mimic in vivo situations more realistically and to increase the robustness of the experimental model. Six dissolved APIs (BCS II) were tested applying the improved mini-scale biphasic dissolution model (miBIdi-pH-II). The influence of experimental model parameters including various excipients, API concentrations, dual paddle and its rotation speed was investigated. The kinetics in the biphasic model was described applying a one- and four-compartment pharmacokinetic (PK) model. The improved biphasic dissolution model was robust related to differing APIs and excipient concentrations. The dual paddle guaranteed homogenous mixing in both phases; the optimal rotation speed was 25 and 75rpm for the aqueous and the octanol phase, respectively. A one-compartment PK model adequately characterised the data of fully dissolved APIs. A four-compartment PK model best quantified dissolution, precipitation, and partitioning also of undissolved amounts due to realistic pH profiles. The improved dissolution model is a powerful tool for investigating the interplay between dissolution, precipitation and partitioning of various poorly soluble APIs (BCS II). In vivo-relevant PK parameters could be estimated applying respective PK models. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Rough Electrode Creates Excess Capacitance in Thin-Film Capacitors.

    PubMed

    Torabi, Solmaz; Cherry, Megan; Duijnstee, Elisabeth A; Le Corre, Vincent M; Qiu, Li; Hummelen, Jan C; Palasantzas, George; Koster, L Jan Anton

    2017-08-16

    The parallel-plate capacitor equation is widely used in contemporary material research for nanoscale applications and nanoelectronics. To apply this equation, flat and smooth electrodes are assumed for a capacitor. This essential assumption is often violated for thin-film capacitors because the formation of nanoscale roughness at the electrode interface is very probable for thin films grown via common deposition methods. In this work, we experimentally and theoretically show that the electrical capacitance of thin-film capacitors with realistic interface roughness is significantly larger than the value predicted by the parallel-plate capacitor equation. The degree of the deviation depends on the strength of the roughness, which is described by three roughness parameters for a self-affine fractal surface. By applying an extended parallel-plate capacitor equation that includes the roughness parameters of the electrode, we are able to calculate the excess capacitance of the electrode with weak roughness. Moreover, we introduce the roughness parameter limits for which the simple parallel-plate capacitor equation is sufficiently accurate for capacitors with one rough electrode. Our results imply that the interface roughness beyond the proposed limits cannot be dismissed unless the independence of the capacitance from the interface roughness is experimentally demonstrated. The practical protocols suggested in our work for the reliable use of the parallel-plate capacitor equation can be applied as general guidelines in various fields of interest.

  6. Constraining Controls on the Emplacement of Long Lava Flows on Earth and Mars Through Modeling in ArcGIS

    NASA Astrophysics Data System (ADS)

    Golder, K.; Burr, D. M.; Tran, L.

    2017-12-01

    Regional volcanic processes shaped many planetary surfaces in the Solar System, often through the emplacement of long, voluminous lava flows. Terrestrial examples of this type of lava flow have been used as analogues for extensive martian flows, including those within the circum-Cerberus outflow channels. This analogy is based on similarities in morphology, extent, and inferred eruptive style between terrestrial and martian flows, which raises the question of how these lava flows appear comparable in size and morphology on different planets. The parameters that influence the areal extent of silicate lavas during emplacement may be categorized as either inherent or external to the lava. The inherent parameters include the lava yield strength, density, composition, water content, crystallinity, exsolved gas content, pressure, and temperature. Each inherent parameter affects the overall viscosity of the lava, and for this work can be considered a subset of the viscosity parameter. External parameters include the effusion rate, total erupted volume, regional slope, and gravity. To investigate which parameter(s) may control the development of long lava flows on Mars, we are applying a computational numerical-modelling approach to reproduce the observed lava flow morphologies. Using a matrix of boundary conditions in the model enables us to investigate the possible range of emplacement conditions that can yield the observed morphologies. We have constructed the basic model framework in Model Builder within ArcMap, including all governing equations and parameters that we seek to test, and initial implementation and calibration have been performed. The base model is currently capable of generating a lava flow that propagates along a pathway governed by the local topography. At AGU, the results of model calibration using the Eldgjá and Laki lava flows in Iceland will be presented, along with the application of the model to lava flows within the Cerberus plains on Mars. We then plan to convert the model into Python, for easy modification and portability within the community.
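
    As a very rough, hypothetical illustration of topography-driven flow propagation (related to the Python port mentioned above), the sketch below routes a flow path down the steepest-descent neighbour of a synthetic DEM. It ignores effusion rate, erupted volume and viscosity, the parameters the study actually varies, and is not the authors' ModelBuilder implementation.

```python
import numpy as np

def steepest_descent_path(dem, start, max_steps=10000):
    """Follow the steepest downhill neighbour from a vent cell on a DEM grid."""
    path, (i, j) = [start], start
    for _ in range(max_steps):
        window = dem[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
        di, dj = np.unravel_index(np.argmin(window), window.shape)
        ni, nj = max(i - 1, 0) + di, max(j - 1, 0) + dj
        if dem[ni, nj] >= dem[i, j]:      # local minimum: flow ponds here
            break
        i, j = ni, nj
        path.append((i, j))
    return path

# Synthetic DEM sloping to the east with small random roughness
rng = np.random.default_rng(4)
x = np.arange(200)
dem = np.tile(-0.5 * x, (100, 1)) + rng.normal(0, 0.5, (100, 200))

path = steepest_descent_path(dem, start=(50, 5))
print("flow length (cells):", len(path))
```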

  7. A robust BAO extractor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noda, Eugenio; Pietroni, Massimo; Peloso, Marco, E-mail: eugenio.noda@pr.infn.it, E-mail: peloso@physics.umn.edu, E-mail: massimo.pietroni@unipr.it

    2017-08-01

    We define a procedure to extract the oscillating part of a given nonlinear Power Spectrum, and derive an equation describing its evolution including the leading effects at all scales. The intermediate scales are taken into account by standard perturbation theory, the long range (IR) displacements are included by using consistency relations, and the effect of small (UV) scales is included via effective coefficients computed in simulations. We show that the UV effects are irrelevant in the evolution of the oscillating part, while they play a crucial role in reproducing the smooth component. Our 'extractor' operator can be applied to simulations and real data in order to extract the Baryonic Acoustic Oscillations (BAO) without any fitting function and nuisance parameter. We conclude that the nonlinear evolution of BAO can be accurately reproduced at all scales down to z = 0 by our fast analytical method, without any need of extra parameters fitted from simulations.
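
    The paper defines its own extractor operator; as a loose illustration of the general idea (separating an oscillating component from a smooth broadband spectrum), the hedged sketch below divides a toy power spectrum by a heavily smoothed version of itself. The toy spectrum and smoothing scale are invented for illustration and are not the operator defined in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Toy nonlinear-ish power spectrum: smooth broadband shape times small wiggles.
k = np.linspace(0.01, 0.4, 400)                        # h/Mpc, illustrative range
p_smooth_true = 1e4 * k**-1.5 / (1 + (k / 0.1)**2)
p = p_smooth_true * (1 + 0.05 * np.sin(k / 0.0105))    # BAO-like oscillations

# Estimate the smooth (no-wiggle) component by heavy smoothing in log P,
# then define the oscillating part as the ratio minus one.
p_smooth_est = np.exp(gaussian_filter1d(np.log(p), sigma=30))
oscillating_part = p / p_smooth_est - 1.0

print("peak-to-peak amplitude of extracted wiggles:",
      round(float(np.ptp(oscillating_part)), 3))
```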

  8. Cloud, Aerosol, and Volcanic Ash Retrievals Using ATSR and SLSTR with ORAC

    NASA Astrophysics Data System (ADS)

    McGarragh, Gregory; Poulsen, Caroline; Povey, Adam; Thomas, Gareth; Christensen, Matt; Sus, Oliver; Schlundt, Cornelia; Stapelberg, Stefan; Stengel, Martin; Grainger, Don

    2015-12-01

    The Optimal Retrieval of Aerosol and Cloud (ORAC) is a generalized optimal estimation system that retrieves cloud, aerosol and volcanic ash parameters using satellite imager measurements in the visible to infrared. Use of the same algorithm for different sensors and parameters leads to consistency that facilitates inter-comparison and interaction studies. ORAC currently supports ATSR, AVHRR, MODIS and SEVIRI. In this proceeding we discuss the ORAC retrieval algorithm applied to ATSR data including the retrieval methodology, the forward model, uncertainty characterization and discrimination/classification techniques. Application of ORAC to SLSTR data is discussed including the additional features that SLSTR provides relative to the ATSR heritage. The ORAC level 2 and level 3 results are discussed and an application of level 3 results to the study of cloud/aerosol interactions is presented.
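
    As background for the optimal estimation framing (not ORAC's actual forward model or code), a minimal Gauss-Newton update for a maximum a posteriori retrieval is sketched below; the toy forward model, covariances, and state vector are assumptions made purely for illustration.

```python
import numpy as np

def gauss_newton_step(x, y, forward, jacobian, x_a, S_a, S_e):
    """One Gauss-Newton iteration of an optimal-estimation (MAP) retrieval:
    minimises (y-F(x))^T Se^-1 (y-F(x)) + (x-xa)^T Sa^-1 (x-xa)."""
    K = jacobian(x)
    Sa_inv, Se_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
    hessian = Sa_inv + K.T @ Se_inv @ K
    gradient = K.T @ Se_inv @ (y - forward(x)) - Sa_inv @ (x - x_a)
    return x + np.linalg.solve(hessian, gradient)

# Toy 2-parameter "retrieval" with a linear forward model y = A x.
A = np.array([[1.0, 0.5], [0.2, 1.5], [0.8, 0.1]])
forward = lambda x: A @ x
jacobian = lambda x: A
x_true = np.array([2.0, -1.0])
y = forward(x_true) + 0.01 * np.random.default_rng(1).normal(size=3)
x_a = np.zeros(2)                          # prior (a priori) state
S_a = np.eye(2) * 10.0                     # loose prior covariance
S_e = np.eye(3) * 0.01**2                  # measurement-noise covariance

x = x_a.copy()
for _ in range(3):
    x = gauss_newton_step(x, y, forward, jacobian, x_a, S_a, S_e)
print("retrieved state:", np.round(x, 3))
```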

  9. Establishment of a center of excellence for applied mathematical and statistical research

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Gray, H. L.

    1983-01-01

    The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.

  10. Introduction to the problem

    NASA Technical Reports Server (NTRS)

    Ramohalli, Kumar

    1989-01-01

    Solid propellant rockets were used extensively in space missions ranging from large boosters to orbit-raising upper stages. The smaller motors find exclusive use in various earth-based applications. The advantages of solids include simplicity, readiness, volumetric efficiency, and storability. Important recent progress in related fields (combustion, rheology, micro-instrumentation/diagnostics, and chaos theory) can be applied to solid rockets to derive maximum advantage and avoid waste. The main objectives of research in solid propellants include identifying critical parameters, establishing specification rules, and developing quantitative criteria.

  11. Life cycle assessment and residue leaching: The importance of parameter, scenario and leaching data selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allegrini, E., E-mail: elia@env.dtu.dk; Butera, S.; Kosson, D.S.

    Highlights: • Relevance of metal leaching in waste management system LCAs was assessed. • Toxic impacts from leaching could not be disregarded. • Uncertainty of toxicity, due to background activities, determines LCA outcomes. • Parameters such as pH and L/S affect LCA results. • Data modelling consistency and coverage within an LCA are crucial. - Abstract: Residues from industrial processes and waste management systems (WMSs) have been increasingly reutilised, leading to landfilling rate reductions and the optimisation of mineral resource utilisation in society. Life cycle assessment (LCA) is a holistic methodology allowing for the analysis of systems and products and can be applied to waste management systems to identify environmental benefits and critical aspects thereof. From an LCA perspective, residue utilisation provides benefits such as avoiding the production and depletion of primary materials, but it can lead to environmental burdens, due to the potential leaching of toxic substances. In waste LCA studies where residue utilisation is included, leaching has generally been neglected. In this study, municipal solid waste incineration bottom ash (MSWI BA) was used as a case study into three LCA scenarios having different system boundaries. The importance of data quality and parameter selection in the overall LCA results was evaluated, and an innovative method to assess metal transport into the environment was applied, in order to determine emissions to the soil and water compartments for use in an LCA. It was found that toxic impacts as a result of leaching were dominant in systems including only MSWI BA utilisation, while leaching appeared negligible in larger scenarios including the entire waste system. However, leaching could not be disregarded a priori, due to large uncertainties characterising other activities in the scenario (e.g. electricity production). Based on the analysis of relevant parameters relative to leaching, and on general results of the study, recommendations are provided regarding the use of leaching data in LCA studies.

  12. Application of an automatic approach to calibrate the NEMURO nutrient-phytoplankton-zooplankton food web model in the Oyashio region

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi

    2010-10-01

    The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables. PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the use of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.

  13. The lunar semidiurnal tide at the polar summer mesopause observed by SOFIE

    NASA Astrophysics Data System (ADS)

    Hoffmann, C. G.; von Savigny, C.; Hervig, M. E.; Oberbremer, E.

    2018-01-01

    The polar summer mesopause, particularly the presence of noctilucent clouds (NLCs), exhibits pronounced temporal variability. Parts of this variability are thought to be caused by lunar tidal influences. We extract the semidiurnal lunar tide in various NLC related parameters by applying the superposed epoch analysis method to the dataset of the SOFIE satellite instrument. Analyzing the NLC seasons from 2007 to 2015 in the northern and southern hemispheres, we first confirm the influence of the lunar tide on ice water content (IWC) and temperature. For both parameters the lunar influence has recently been demonstrated in satellite measurements. Second, we apply the analysis to the variety of parameters observed by SOFIE, including trace gases (H2O, O3, CH4, and NO), NLC properties (e.g., NLC altitudes and ice mass density), microphysical properties (e.g., particle concentration and mean radius), and mesopause properties. In all of these parameters we find signatures of the semidiurnal lunar tide, which is the first demonstration of this effect for all of these parameters. We quantify the lunar influence in terms of amplitudes and phases. Whereas the focus of the present study is providing observational evidence for the existence of lunar tidal signatures in various parameters, we do not aim at investigating the underlying mechanisms in detail, which is only possible with comprehensive modeling approaches. Nevertheless, we briefly discuss the relations to known processes of the NLC evolution where appropriate, e.g., the relevance of the freeze-drying effect for the signature in H2O and the relation of IWC and NLC altitudes.
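
    A hedged sketch of the superposed epoch analysis step (folding a time series on the semidiurnal lunar period and averaging within phase bins) is given below; the synthetic time series, bin count, and signal amplitude are illustrative assumptions and do not represent the SOFIE data handling.

```python
import numpy as np

M2_LUNAR_PERIOD_H = 12.4206012   # semidiurnal lunar (M2) period in hours

def superposed_epoch(times_h, values, period_h=M2_LUNAR_PERIOD_H, n_bins=12):
    """Fold a time series on a fixed period and average within phase bins."""
    phase = (times_h % period_h) / period_h                 # phase in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    composite = np.array([values[bins == b].mean() for b in range(n_bins)])
    bin_centres = (np.arange(n_bins) + 0.5) / n_bins
    return bin_centres, composite

# Synthetic "IWC-like" series: background plus a small lunar semidiurnal signal.
rng = np.random.default_rng(2)
t = np.arange(0, 24 * 120, 1.0)                             # 120 days, hourly
signal = (100 + 3 * np.cos(2 * np.pi * t / M2_LUNAR_PERIOD_H)
          + rng.normal(0, 5, t.size))
phase, composite = superposed_epoch(t, signal)
amplitude = 0.5 * (composite.max() - composite.min())
print("recovered semidiurnal amplitude ~", round(float(amplitude), 2))
```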

  14. Effect of thermal radiation and chemical reaction on non-Newtonian fluid through a vertically stretching porous plate with uniform suction

    NASA Astrophysics Data System (ADS)

    Khan, Zeeshan; Khan, Ilyas; Ullah, Murad; Tlili, I.

    2018-06-01

    In this work, we discuss the unsteady flow of a non-Newtonian fluid with heat source/sink properties in the presence of thermal radiation, moving through a binary mixture embedded in a porous medium. The basic equations of motion, including continuity, momentum, energy and concentration, are simplified and solved analytically by using the Homotopy Analysis Method (HAM). The energy and concentration fields are coupled through the Damköhler and Schmidt numbers. By applying a suitable transformation, the coupled nonlinear partial differential equations are converted to coupled ordinary differential equations. The effects of the physical parameters involved in the solutions for the velocity, temperature and concentration profiles are discussed by assigning numerical values, and the results show that the velocity, temperature and concentration profiles are influenced appreciably by the radiation parameter, Prandtl number, suction/injection parameter, reaction order index, solutal Grashof number and thermal Grashof number. It is observed that the non-Newtonian parameter H leads to an increase in the boundary layer thickness. It was established that increasing the Prandtl number decreases the thermal boundary layer thickness, which helps in maintaining the system temperature of the fluid flow. It is observed that the temperature profiles are higher for the heat source parameter and lower for the heat sink parameter throughout the boundary layer. From this simulation it is found that an increase in the Schmidt number decreases the concentration boundary layer thickness. Additionally, for the sake of comparison, a numerical method (ND-Solve) and the Adomian Decomposition Method were also applied, and good agreement was found.

  15. Usage of K-cluster and factor analysis for grouping and evaluation the quality of olive oil in accordance with physico-chemical parameters

    NASA Astrophysics Data System (ADS)

    Milev, M.; Nikolova, Kr.; Ivanova, Ir.; Dobreva, M.

    2015-11-01

    25 olive oils, differing in origin and method of extraction, were studied with respect to 17 physico-chemical parameters: color parameters a and b, lightness, fluorescence peaks, pigments (chlorophyll and β-carotene), and fatty-acid content. The goals of the current study were: conducting correlation analysis to find the inner relations between the studied indices; applying factor analysis with the method of Principal Components (PCA) to reduce the large number of variables to a few factors that are of main importance for distinguishing the different types of olive oil; and using K-means clustering to compare and group the tested olive oils based on their similarity, as in the sketch below. The inner relations between the studied indices were found by applying correlation analysis. A factor analysis using PCA was applied on the basis of the resulting correlation matrix. Thus the number of the studied indices was reduced to 4 factors, which explained 79.3% of the entire variation. The first unified the color parameters, β-carotene and the oxidation-related fluorescence peak at about 520 nm. The second was determined mainly by the chlorophyll content and the related fluorescence peak at about 670 nm. The third and fourth factors were determined by the fatty-acid content of the samples. The third unified the fatty acids that allow olive oil to be distinguished from other plant oils - oleic, linoleic and stearic acids. The fourth factor included fatty acids with relatively much lower content in the studied samples. K-means cluster analysis requires the number of clusters to be determined in advance; the variant K = 3 was used because there were three types of olive oil. The first cluster unified all salad and pomace olive oils, the second unified the samples of extra virgin oils taken as controls from producers, which were bought from the trade network. The third cluster unified samples of pomace and extra virgin oils which differ in their parameters from the natural olive oils because of the presence of plant oil impurities.
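
    A minimal sketch of the same two-step workflow (PCA to reduce correlated physico-chemical indices to a few factors, then K-means with K = 3); the feature matrix here is random stand-in data, not the 25 olive-oil samples or their 17 measured parameters.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in data: 25 samples x 17 physico-chemical indices (random placeholder).
rng = np.random.default_rng(3)
X = rng.normal(size=(25, 17))

# Standardise, reduce to 4 principal components, then cluster with K = 3.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=4)
scores = pca.fit_transform(X_std)
print("variance explained by 4 PCs:",
      round(float(pca.explained_variance_ratio_.sum()), 3))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```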

  16. Application of the precipitation-runoff model in the Warrior coal field, Alabama

    USGS Publications Warehouse

    Kidd, Robert E.; Bossong, C.R.

    1987-01-01

    A deterministic precipitation-runoff model, the Precipitation-Runoff Modeling System, was applied in two small basins located in the Warrior coal field, Alabama. Each basin has distinct geologic, hydrologic, and land-use characteristics. Bear Creek basin (15.03 square miles) is undisturbed, is underlain almost entirely by consolidated coal-bearing rocks of Pennsylvanian age (Pottsville Formation), and is drained by an intermittent stream. Turkey Creek basin (6.08 square miles) contains a surface coal mine and is underlain by both the Pottsville Formation and unconsolidated clay, sand, and gravel deposits of Cretaceous age (Coker Formation). Aquifers in the Coker Formation sustain flow through extended rainless periods. Preliminary daily and storm calibrations were developed for each basin. Initial parameter and variable values were determined according to techniques recommended in the user's manual for the modeling system and through field reconnaissance. Parameters with meaningful sensitivity were identified and adjusted to match hydrograph shapes and to compute realistic water year budgets. When the developed calibrations were applied to data exclusive of the calibration period as a verification exercise, results were comparable to those for the calibration period. The model calibrations included preliminary parameter values for the various categories of geology and land use in each basin. The parameter values for areas underlain by the Pottsville Formation in the Bear Creek basin were transferred directly to similar areas in the Turkey Creek basin, and these parameter values were held constant throughout the model calibration. Parameter values for all geologic and land-use categories addressed in the two calibrations can probably be used in ungaged basins where similar conditions exist. The parameter transfer worked well, as a good calibration was obtained for Turkey Creek basin.

  17. Effects of extremely low frequency magnetic field on oxidative balance in brain of rats.

    PubMed

    Ciejka, Elzbieta; Kleniewska, P; Skibska, B; Goraca, A

    2011-12-01

    Extremely low frequency magnetic field (ELF-MF) may result in oxidative DNA damage and lipid peroxidation with an ultimate effect on a number of systemic disturbances and cell death. The aim of the study is to assess the effect of ELF-MF parameters most frequently used in magnetotherapy on reactive oxygen species (ROS) generation in brain tissue of experimental animals depending on the time of exposure to this field. The research material included adult male Sprague-Dawley rats, aged 3-4 months. The animals were divided into 3 groups: I - control (sham) group; II - exposed to the following parameters of the magnetic field: 7 mT, 40 Hz, 30 min/day, 10 days; III - exposed to the ELF-MF parameters of 7 mT, 40 Hz, 60 min/day, 10 days. The selected parameters of oxidative stress: thiobarbituric acid reactive substances (TBARS), hydrogen peroxide (H(2)O(2)), total free sulphydryl groups (-SH groups) and protein in brain homogenates were measured after the exposure of rats to the magnetic field. ELF-MF parameters of 7 mT, 40 Hz, 30 min/day for 10 days caused a significant increase in lipid peroxidation and an insignificant increase in H(2)O(2) and free -SH groups. The same ELF-MF parameters but applied for 60 min/day caused a significant increase in free -SH groups and protein concentration in the brain homogenates, indicating an adaptive mechanism. The study has shown that ELF-MF applied for 30 min/day for 10 days can affect free radical generation in the brain. Prolongation of the exposure to ELF-MF (60 min/day) caused adaptation to this field. The effect of ELF-MF irradiation on oxidative stress parameters depends on the time of animal exposure to the magnetic field.

  18. Quality of mass-reared codling moth (Lepidoptera: Tortricidae) after long distance transportation 1. Logistics of shipping procedures and quality parameters as measured in the laboratory.

    USDA-ARS?s Scientific Manuscript database

    The sterile insect technique is a proven effective control tactic against lepidopteran pests when applied in an area-wide integrated pest management programme. The construction of insect mass-rearing facilities requires considerable investment and moth control strategies that include the use of ster...

  19. Summary of research in applied mathematics, numerical analysis, and computer sciences

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  20. Control of small phased-array antennas

    NASA Technical Reports Server (NTRS)

    Doland, G. D.

    1978-01-01

    Series of reports, patent descriptions, calculator programs, and other literature describes antenna control and steering apparatus for seven-element phased array. Though series contains information specific to particular system, it illustrates methods that can be applied to antennas with greater or fewer numbers of elements. Included are programs for calculating beam parameters and design functions and information to interfacing digital controller to beam-steering apparatus.
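
    The beam-parameter programs referenced there are not reproduced; as a generic illustration of how element phase settings are computed for beam steering, the sketch below uses a hypothetical 7-element hexagonal layout and the standard plane-wave phase relation. The geometry, frequency, and steering angles are assumptions for illustration only.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def steering_phases(element_xy_m, freq_hz, az_deg, el_deg):
    """Per-element phase shifts (radians) steering a planar array toward a given
    azimuth/elevation, using the plane-wave relation phi_n = -k * (r_n . u)."""
    k = 2 * np.pi * freq_hz / C
    az, el = np.radians(az_deg), np.radians(el_deg)
    # Components of the beam unit vector projected onto the array plane.
    ux, uy = np.cos(el) * np.sin(az), np.cos(el) * np.cos(az)
    phases = -k * (element_xy_m[:, 0] * ux + element_xy_m[:, 1] * uy)
    return np.angle(np.exp(1j * phases))          # wrap to (-pi, pi]

# Hypothetical 7-element hex layout: one centre element plus a ring of six.
spacing = 0.075                                   # metres (~half wavelength at 2 GHz)
ring = np.array([[spacing * np.cos(a), spacing * np.sin(a)]
                 for a in np.linspace(0, 2 * np.pi, 6, endpoint=False)])
elements = np.vstack([[0.0, 0.0], ring])
print(np.round(np.degrees(steering_phases(elements, 2.0e9, az_deg=30, el_deg=60)), 1))
```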

  1. Computational study of Ca, Sr and Ba under pressure

    NASA Astrophysics Data System (ADS)

    Jona, F.; Marcus, P. M.

    2006-05-01

    A first-principles procedure for the calculation of equilibrium properties of crystals under hydrostatic pressure is applied to Ca, Sr and Ba. The procedure is based on minimizing the Gibbs free energy G (at zero temperature) with respect to the structure at a given pressure p, and hence does not require the equation of state to fix the pressure. The calculated lattice constants of Ca, Sr and Ba are shown to be generally closer to measured values than previous calculations using other procedures. In particular for Ba, where careful and extensive pressure data are available, the calculated lattice parameters fit measurements to about 1% in three different phases, both cubic and hexagonal. Rigid-lattice transition pressures between phases which come directly from the crossing of G(p) curves are not close to measured transition pressures. One reason is the need to include zero-point energy (ZPE) of vibration in G. The ZPE of cubic phases is calculated with a generalized Debye approximation and applied to Ca and Sr, where it produces significant shifts in transition pressures. An extensive tabulation is given of structural parameters and elastic constants from the literature, including both theoretical and experimental results.

  2. Optical properties of the Tietz-Hua quantum well under the applied external fields

    NASA Astrophysics Data System (ADS)

    Kasapoglu, E.; Sakiroglu, S.; Ungan, F.; Yesilgul, U.; Duque, C. A.; Sökmen, I.

    2017-12-01

    In this study, the effects of electric and magnetic fields, as well as the structure parameter γ, on the total absorption coefficient, including the linear and third-order nonlinear absorption coefficients, for optical transitions between any two subbands in the Tietz-Hua quantum well have been investigated. The optical transitions were investigated by using the density matrix formalism and the perturbation expansion method. The Tietz-Hua quantum well becomes narrower (wider) when the structure parameter γ increases (decreases), and so the energies of the bound states are functions of this parameter. Therefore, we can produce a red or blue shift in the peak position of the absorption coefficient by changing the strength of the electric and magnetic fields as well as the structure parameters, and these results can be used to adjust and control the optical properties of the Tietz-Hua quantum well.

  3. Control of power to an inductively heated part

    DOEpatents

    Adkins, Douglas R.; Frost, Charles A.; Kahle, Philip M.; Kelley, J. Bruce; Stanton, Suzanne L.

    1997-01-01

    A process for induction hardening a part to a desired depth with an AC signal applied to the part from a closely coupled induction coil includes measuring the voltage of the AC signal at the coil and the current passing through the coil; and controlling the depth of hardening of the part from the measured voltage and current. The control system determines parameters of the part that are functions of applied voltage and current to the induction coil, and uses a neural network to control the application of the AC signal based on the detected functions for each part.

  4. Control of power to an inductively heated part

    DOEpatents

    Adkins, D.R.; Frost, C.A.; Kahle, P.M.; Kelley, J.B.; Stanton, S.L.

    1997-05-20

    A process for induction hardening a part to a desired depth with an AC signal applied to the part from a closely coupled induction coil includes measuring the voltage of the AC signal at the coil and the current passing through the coil; and controlling the depth of hardening of the part from the measured voltage and current. The control system determines parameters of the part that are functions of applied voltage and current to the induction coil, and uses a neural network to control the application of the AC signal based on the detected functions for each part. 6 figs.

  5. Optimization of the reconstruction parameters in [123I]FP-CIT SPECT

    NASA Astrophysics Data System (ADS)

    Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec

    2018-04-01

    The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). Reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from Magnetic Resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
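
    A minimal sketch of the standardisation step described above (a single linear regression between reconstructed and true SUR values over a dataset, then applied to correct individual estimates); the SUR arrays, bias, and noise level are synthetic placeholders, not the Monte Carlo database used in the study.

```python
import numpy as np

# Synthetic stand-in: "true" SURs and biased, noisy reconstructed SURs.
rng = np.random.default_rng(4)
sur_true = rng.uniform(0.5, 8.0, size=200)
sur_recon = 0.75 * sur_true - 0.2 + rng.normal(0, 0.15, size=200)  # scale/offset bias

# Fit a single regression line over the whole dataset ...
slope, intercept = np.polyfit(sur_recon, sur_true, deg=1)

# ... and apply it to standardise reconstructed values toward the true scale.
sur_std = slope * sur_recon + intercept
errors = sur_std - sur_true
print("fraction of cases with |error| <= 0.5:",
      round(float(np.mean(np.abs(errors) <= 0.5)), 2))
```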

  6. Exploring parameter effects on the economic outcomes of groundwater-based developments in remote, low-resource settings

    NASA Astrophysics Data System (ADS)

    Abramson, Adam; Adar, Eilon; Lazarovitch, Naftali

    2014-06-01

    Groundwater is often the most or only feasible safe drinking water source in remote, low-resource areas, yet the economics of its development have not been systematically outlined. We applied AWARE (Assessing Water Alternatives in Remote Economies), a recently developed Decision Support System, to investigate the costs and benefits of groundwater access and abstraction for non-networked, rural supplies. Synthetic profiles of community water services (n = 17,962), defined across 13 parameters' values and ranges relevant to remote areas, were applied to the decision framework, and the parameter effects on economic outcomes were investigated. Regressions and analysis of output distributions indicate that the most important factors determining the cost of water improvements include the technological approach, the water service target, hydrological parameters, and population density. New source construction is less cost-effective than the use or improvement of existing wells, but necessary for expanding access to isolated households. We also explored three financing approaches - willingness-to-pay, -borrow, and -work - and found that they significantly impact the prospects of achieving demand-driven cost recovery. The net benefit under willingness to work, in which water infrastructure is coupled to community irrigation and cash payments replaced by labor commitments, is impacted most strongly by groundwater yield and managerial factors. These findings suggest that the cost-benefit dynamics of groundwater-based water supply improvements vary considerably by many parameters, and that the relative strengths of different development strategies may be leveraged for achieving optimal outcomes.

  7. Modeling Electric Field Influences on Plasmaspheric Refilling

    NASA Technical Reports Server (NTRS)

    Liemohn, M. W.; Kozyra, J. U.; Khazanov, G. V.; Craven, Paul D.

    1998-01-01

    We have developed a new model of ion transport and applied it to the problem of plasmaspheric flux tube refilling after a geomagnetic disturbance. This model solves the Fokker-Planck kinetic equation by applying discrete difference numerical schemes to the various operators. Features of the model include a time-varying ionospheric source, self-consistent Coulomb collisions, field-aligned electric field, hot plasma interactions, and ion cyclotron wave heating. We see refilling rates similar to those of earlier observations and models, except when the electric field is included. In this case, the refilling rates can be quite different from those previously predicted. Depending on the populations included and the values of relevant parameters, trap zone densities can increase or decrease. In particular, the inclusion of hot populations near the equatorial region (specifically warm pancake distributions and ring current ions) can dramatically alter the refilling rate. Results are compared with observations as well as previous hydrodynamic and kinetic particle model simulations.

  8. Fractional blood flow in oscillatory arteries with thermal radiation and magnetic field effects

    NASA Astrophysics Data System (ADS)

    Bansi, C. D. K.; Tabi, C. B.; Motsumi, T. G.; Mohamadou, A.

    2018-06-01

    A fractional model is proposed to study the effect of heat transfer and magnetic field on blood flowing inside oscillatory arteries. The flow is driven by a periodic pressure gradient, and the fractional model equations include body acceleration. The proposed velocity and temperature distribution equations are solved using the Laplace and Hankel transforms. The effect of the fluid parameters such as the Reynolds number (Re), the magnetic parameter (M) and the radiation parameter (N) is studied graphically while varying the fractional-order parameter. It is found that the fractional derivative is a valuable tool to control both the temperature and velocity of blood when flow parameters change, for example under treatment. Besides, this work highlights the fact that in the presence of a strong magnetic field, blood velocity and temperature are reduced. A reversed effect is observed when the applied thermal radiation increases: the velocity and temperature of the blood increase. However, the temperature remains high around the artery centerline, which is appropriate during treatment to avoid tissue damage.

  9. [Determination of solubility parameters of high density polyethylene by inverse gas chromatography].

    PubMed

    Wang, Qiang; Chen, Yali; Liu, Ruiting; Shi, Yuge; Zhang, Zhengfang; Tang, Jun

    2011-11-01

    Inverse gas chromatographic (IGC) technology was used to determine the solubility parameters of high density polyethylene (HDPE) at absolute temperatures from 303.15 to 343.15 K. Six solvents were applied as test probes, including hexane (n-C6), heptane (n-C7), octane (n-C8), nonane (n-C9), chloroform (CHCl3) and ethyl acetate (EtAc). Some thermodynamic parameters were obtained by IGC data analysis, such as the specific retention volumes of the solvents (V(0)(g)), the molar enthalpy of sorption (delta H(S)(1)), the partial molar enthalpy of mixing at infinite dilution (delta H(1)(infinity)), the molar enthalpy of vaporization (delta H(v)), the activity coefficients at infinite dilution (omega (1)(infinity)), and the Flory-Huggins interaction parameters (X(1,2)(infinity)) between HDPE and the probe solvents. The results showed that the above six probes are poor solvents for HDPE. The solubility parameter of HDPE at room temperature (298.15 K) was also derived as 19.00 (J/cm3)(0.5).

  10. Tuning of Kalman filter parameters via genetic algorithm for state-of-charge estimation in battery management system.

    PubMed

    Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
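
    The MATLAB source in the paper's appendix is not reproduced here; as a hedged Python illustration of the idea (a Kalman filter whose Q and R are selected by minimising RMS error, with a simple random search standing in for the genetic algorithm), see the sketch below. The battery model is reduced to a toy one-state random walk, so all values are illustrative assumptions.

```python
import numpy as np

def kalman_rmse(q, r, measurements, truth):
    """Run a scalar Kalman filter (random-walk state model) with noise
    variances Q and R, and return the RMS error against the known truth."""
    x, p = measurements[0], 1.0
    errs = []
    for z, x_true in zip(measurements, truth):
        p = p + q                       # predict: state assumed constant + noise Q
        k = p / (p + r)                 # Kalman gain
        x = x + k * (z - x)             # update with measurement z
        p = (1 - k) * p
        errs.append(x - x_true)
    return float(np.sqrt(np.mean(np.square(errs))))

# Toy "SoC-like" signal: slow discharge observed through noisy measurements.
rng = np.random.default_rng(5)
truth = 1.0 - 0.001 * np.arange(1000)
meas = truth + rng.normal(0, 0.02, truth.size)

# Random search over (Q, R) as a stand-in for the GA tuning step.
best = min(((kalman_rmse(q, r, meas, truth), q, r)
            for q, r in zip(10**rng.uniform(-9, -3, 300),
                            10**rng.uniform(-5, 0, 300))),
           key=lambda t: t[0])
print("best RMS %.4f at Q=%.2e, R=%.2e" % best)
```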

  11. The impact of higher-order ionospheric effects on estimated tropospheric parameters in Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Zus, F.; Deng, Z.; Wickert, J.

    2017-08-01

    The impact of higher-order ionospheric effects on the estimated station coordinates and clocks in Global Navigation Satellite System (GNSS) Precise Point Positioning (PPP) is well documented in literature. Simulation studies reveal that higher-order ionospheric effects have a significant impact on the estimated tropospheric parameters as well. In particular, the tropospheric north-gradient component is most affected for low-latitude and midlatitude stations around noon. In a practical example we select a few hundred stations randomly distributed over the globe, in March 2012 (medium solar activity), and apply/do not apply ionospheric corrections in PPP. We compare the two sets of tropospheric parameters (ionospheric corrections applied/not applied) and find an overall good agreement with the prediction from the simulation study. The comparison of the tropospheric parameters with the tropospheric parameters derived from the ERA-Interim global atmospheric reanalysis shows that ionospheric corrections must be consistently applied in PPP and the orbit and clock generation. The inconsistent application results in an artificial station displacement which is accompanied by an artificial "tilting" of the troposphere. This finding is relevant in particular for those who consider advanced GNSS tropospheric products for meteorological studies.

  12. Tuning of Kalman Filter Parameters via Genetic Algorithm for State-of-Charge Estimation in Battery Management System

    PubMed Central

    Ting, T. O.; Lim, Eng Gee

    2014-01-01

    In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction as the accumulated error, in terms of root-mean-square (RMS), is a very small value. From this work, it is found that different sets of Q and R values (KF's parameters) can be applied for better performance and hence lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time consuming GA as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area. PMID:25162041

  13. Improvements in clathrate modelling: I. The H2O-CO2 system with various salts

    NASA Astrophysics Data System (ADS)

    Bakker, Ronald J.; Dubessy, Jean; Cathelineau, Michel

    1996-05-01

    The formation of clathrates in fluid inclusions during microthermometric measurements is typical for most natural fluid systems, which include a mixture of H2O, gases, and electrolytes. A general model is proposed which gives a complete description of the CO2 clathrate stability field between 253-293 K and 0-200 MPa, and which can be applied to NaCl, KCl, and CaCl2 bearing systems. The basic concept of the model is the equality of the chemical potential of H2O in coexisting phases, after classical clathrate modelling. None of the original clathrate models had used a complete set of the most accurate values for the many parameters involved. The lack of well-defined standard conditions and of a thorough error analysis resulted in inaccurate estimation of clathrate stability conditions. According to our modifications, which include the use of the most accurate parameters available, the semi-empirical model for the binary H2O-CO2 system is improved by the estimation of numerically optimised Kihara parameters σ = 365.9 pm and ε/k = 174.44 K at low pressures, and σ = 363.92 pm and ε/k = 174.46 K at high pressures. Including the error indications of individual parameters involved in clathrate modelling, a range of 365.08-366.52 pm and 171.3-177.8 K allows a 2% accuracy in the modelled CO2 clathrate formation pressure at selected temperatures below Q2 conditions. A combination of the osmotic coefficient for binary salt-H2O systems and Henry's constant for gas-H2O systems is sufficiently accurate to estimate the activity of H2O in aqueous solutions and the stability conditions of clathrate in electrolyte-bearing systems. The available data on salt-bearing systems is inconsistent, but our improved clathrate stability model is able to reproduce average values. The proposed modifications in clathrate modelling can be used to perform more accurate estimations of bulk density and composition of individual fluid inclusions from clathrate melting temperatures. Our model is included in several computer programs which can be applied to fluid inclusion studies.

  14. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  15. Use of simulation tools to illustrate the effect of data management practices for low and negative plate counts on the estimated parameters of microbial reduction models.

    PubMed

    Garcés-Vega, Francisco; Marks, Bradley P

    2014-08-01

    In the last 20 years, the use of microbial reduction models has expanded significantly, including inactivation (linear and nonlinear), survival, and transfer models. However, a major constraint for model development is the impossibility to directly quantify the number of viable microorganisms below the limit of detection (LOD) for a given study. Different approaches have been used to manage this challenge, including ignoring negative plate counts, using statistical estimations, or applying data transformations. Our objective was to illustrate and quantify the effect of negative plate count data management approaches on parameter estimation for microbial reduction models. Because it is impossible to obtain accurate plate counts below the LOD, we performed simulated experiments to generate synthetic data for both log-linear and Weibull-type microbial reductions. We then applied five different, previously reported data management practices and fit log-linear and Weibull models to the resulting data. The results indicated a significant effect (α = 0.05) of the data management practices on the estimated model parameters and performance indicators. For example, when the negative plate counts were replaced by the LOD for log-linear data sets, the slope of the subsequent log-linear model was, on average, 22% smaller than for the original data, the resulting model underpredicted lethality by up to 2.0 log, and the Weibull model was erroneously selected as the most likely correct model for those data. The results demonstrate that it is important to explicitly report LODs and related data management protocols, which can significantly affect model results, interpretation, and utility. Ultimately, we recommend using only the positive plate counts to estimate model parameters for microbial reduction curves and avoiding any data value substitutions or transformations when managing negative plate counts to yield the most accurate model parameters.
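
    A hedged sketch of the kind of simulated experiment described (generate log-linear survivor data, censor counts below a detection limit, then compare the slope fitted after LOD substitution against the slope fitted from positive counts only); the decline rate, LOD, and noise level are invented for illustration and are not the study's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(6)
time = np.linspace(0, 10, 21)                  # treatment time (e.g. minutes)
true_slope = -0.8                              # log10 reduction per unit time
log_counts = 8.0 + true_slope * time + rng.normal(0, 0.15, time.size)

LOD = 2.0                                      # limit of detection, log10 CFU/g
observed = log_counts.copy()
below = observed < LOD

# Practice A: substitute censored observations with the LOD value.
subst = np.where(below, LOD, observed)
slope_subst = np.polyfit(time, subst, 1)[0]

# Practice B: fit only the positive (above-LOD) counts.
slope_pos = np.polyfit(time[~below], observed[~below], 1)[0]

print("true slope %.2f | LOD-substitution %.2f | positives only %.2f"
      % (true_slope, slope_subst, slope_pos))
```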

  16. Life cycle assessment and residue leaching: the importance of parameter, scenario and leaching data selection.

    PubMed

    Allegrini, E; Butera, S; Kosson, D S; Van Zomeren, A; Van der Sloot, H A; Astrup, T F

    2015-04-01

    Residues from industrial processes and waste management systems (WMSs) have been increasingly reutilised, leading to landfilling rate reductions and the optimisation of mineral resource utilisation in society. Life cycle assessment (LCA) is a holistic methodology allowing for the analysis of systems and products and can be applied to waste management systems to identify environmental benefits and critical aspects thereof. From an LCA perspective, residue utilisation provides benefits such as avoiding the production and depletion of primary materials, but it can lead to environmental burdens, due to the potential leaching of toxic substances. In waste LCA studies where residue utilisation is included, leaching has generally been neglected. In this study, municipal solid waste incineration bottom ash (MSWI BA) was used as a case study into three LCA scenarios having different system boundaries. The importance of data quality and parameter selection in the overall LCA results was evaluated, and an innovative method to assess metal transport into the environment was applied, in order to determine emissions to the soil and water compartments for use in an LCA. It was found that toxic impacts as a result of leaching were dominant in systems including only MSWI BA utilisation, while leaching appeared negligible in larger scenarios including the entire waste system. However, leaching could not be disregarded a priori, due to large uncertainties characterising other activities in the scenario (e.g. electricity production). Based on the analysis of relevant parameters relative to leaching, and on general results of the study, recommendations are provided regarding the use of leaching data in LCA studies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Predictability of malaria parameters in Sahel under the S4CAST Model.

    NASA Astrophysics Data System (ADS)

    Diouf, Ibrahima; Rodríguez-Fonseca, Belen; Deme, Abdoulaye; Cisse, Moustapha; Ndione, Jaques-Andre; Gaye, Amadou; Suárez-Moreno, Roberto

    2016-04-01

    An extensive literature exists documenting ENSO impacts on infectious diseases, including malaria. Other studies, however, have focused on cholera, dengue and Rift Valley Fever. This study explores the seasonal predictability of malaria outbreaks over the Sahel from previous SSTs of the Pacific and Atlantic basins. The SST may be considered a source of predictability due to its direct influence on rainfall and temperature, and thus also on related variables such as malaria parameters. In this work, the model has been applied to study the predictability of Sahelian malaria parameters from the leading MCA covariability mode, in the framework of climate and health issues. The results of this work will be useful for decision makers to better access climate forecasts and apply them to malaria transmission risk.

  18. Trans-dimensional joint inversion of seabed scattering and reflection data.

    PubMed

    Steininger, Gavin; Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2013-03-01

    This paper examines joint inversion of acoustic scattering and reflection data to resolve seabed interface roughness parameters (spectral strength, exponent, and cutoff) and geoacoustic profiles. Trans-dimensional (trans-D) Bayesian sampling is applied with both the number of sediment layers and the order (zeroth or first) of auto-regressive parameters in the error model treated as unknowns. A prior distribution that allows fluid sediment layers over an elastic basement in a trans-D inversion is derived and implemented. Three cases are considered: Scattering-only inversion, joint scattering and reflection inversion, and joint inversion with the trans-D auto-regressive error model. Including reflection data improves the resolution of scattering and geoacoustic parameters. The trans-D auto-regressive model further improves scattering resolution and correctly differentiates between strongly and weakly correlated residual errors.

  19. Performance analysis of wideband data and television channels. [space shuttle communications

    NASA Technical Reports Server (NTRS)

    Geist, J. M.

    1975-01-01

    Several aspects of space shuttle communications are discussed, including the return link (shuttle-to-ground) relayed through a satellite repeater (TDRS). The repeater exhibits nonlinear amplification and an amplitude-dependent phase shift. Models were developed for various link configurations, and computer simulation programs based on these models are described. Certain analytical results on system performance were also obtained. For the system parameters assumed, the results indicate approximately 1 dB degradation relative to a link employing a linear repeater. While this degradation is dependent upon the repeater, filter bandwidths, and modulation parameters used, the programs can accommodate changes to any of these quantities. Thus the programs can be applied to determine the performance with any given set of parameters, or used as an aid in link design.

  20. A hybrid model of cell cycle in mammals.

    PubMed

    Behaegel, Jonathan; Comet, Jean-Paul; Bernot, Gilles; Cornillon, Emilien; Delaunay, Franck

    2016-02-01

    Time plays an essential role in many biological systems, especially in the cell cycle. Many models of biological systems rely on differential equations, but parameter identification is an obstacle to using differential frameworks. In this paper, we present a new hybrid modeling framework that extends René Thomas' discrete modeling. The core idea is to associate with each qualitative state "celerities" allowing us to compute the time spent in each state. This hybrid framework is illustrated by building a 5-variable model of the mammalian cell cycle. Its parameters are determined by applying formal methods on the underlying discrete model and by constraining parameters using timing observations on the cell cycle. This first hybrid model reproduces the most important known behaviors of the cell cycle, including the quiescent phase and endoreplication.

  1. A fully-stochasticized, age-structured population model for population viability analysis of fish: Lower Missouri River endangered pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.

    2017-01-01

    We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.
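
    A much-simplified sketch of the variance structure described (parameter variance drawn once per iteration, temporal variance drawn again at every time step) applied to a generic age-structured projection; the three age classes, vital rates, starting abundances, and variance magnitudes are invented and do not represent pallid sturgeon demography or the published model.

```python
import numpy as np

rng = np.random.default_rng(7)
N_ITER, N_YEARS = 500, 50
mean_survival = np.array([0.05, 0.4, 0.9])     # age-0, age-1, age-2+ (illustrative)
mean_fecundity = 20.0                          # recruits per age-2+ adult (illustrative)

final_sizes = []
for _ in range(N_ITER):
    # Parameter variance: one draw of the mean vital rates per iteration.
    surv_i = np.clip(mean_survival + rng.normal(0, 0.05, 3), 0, 1)
    fec_i = max(mean_fecundity + rng.normal(0, 4.0), 0.0)
    n = np.array([200.0, 50.0, 20.0])          # starting abundances by age class
    for _ in range(N_YEARS):
        # Temporal variance: an additional draw at every time step.
        surv_t = np.clip(surv_i + rng.normal(0, 0.03, 3), 0, 1)
        births = fec_i * n[2]
        n = np.array([births * surv_t[0],          # new age-0 survivors
                      n[0] * surv_t[1],            # age-0 -> age-1
                      (n[1] + n[2]) * surv_t[2]])  # age-1 and adults -> age-2+
    final_sizes.append(n.sum())

quasi_ext = np.mean(np.array(final_sizes) < 50)
print("median final size %.0f, quasi-extinction fraction %.2f"
      % (np.median(final_sizes), quasi_ext))
```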

  2. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameters sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.

  3. The shear instability energy: a new parameter for materials design?

    NASA Astrophysics Data System (ADS)

    Kanani, M.; Hartmaier, A.; Janisch, R.

    2017-10-01

    Reliable and predictive relationships between fundamental microstructural material properties and observable macroscopic mechanical behaviour are needed for the successful design of new materials. In this study we establish a link between physical properties that are defined on the atomic level and the deformation mechanisms of slip planes and interfaces that govern the mechanical behaviour of a metallic material. To accomplish this, the shear instability energy Γ is introduced, which can be determined via quantum mechanical ab initio calculations or other atomistic methods. The concept is based on a multilayer generalised stacking fault energy calculation and can be applied to distinguish the different shear deformation mechanisms occurring at TiAl interfaces during finite-temperature molecular dynamics simulations. We use the new parameter Γ to construct a deformation mechanism map for different interfaces occurring in this intermetallic. Furthermore, Γ can be used to convert the results of ab initio density functional theory calculations into those obtained with an embedded atom method type potential for TiAl. We propose to include this new physical parameter into material databases to apply it for the design of materials and microstructures, which so far mainly relies on single-crystal values for the unstable and stable stacking fault energy.

  4. Theoretical relationship between elastic wave velocity and electrical resistivity

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Sub; Yoon, Hyung-Koo

    2015-05-01

    Elastic wave velocity and electrical resistivity have been commonly applied to estimate stratum structures and obtain subsurface soil design parameters. Both elastic wave velocity and electrical resistivity are related to the void ratio; the objective of this study is therefore to suggest a theoretical relationship between the two physical parameters. Gassmann theory and Archie's equation are applied to propose a new theoretical equation, which relates the compressional wave velocity to shear wave velocity and electrical resistivity. The piezo disk element (PDE) and bender element (BE) are used to measure the compressional and shear wave velocities, respectively. In addition, the electrical resistivity is obtained by using the electrical resistivity probe (ERP). The elastic wave velocity and electrical resistivity are recorded in several types of soils including sand, silty sand, silty clay, silt, and clay-sand mixture. The appropriate input parameters are determined based on the error norm in order to increase the reliability of the proposed relationship. The predicted compressional wave velocities from the shear wave velocity and electrical resistivity are similar to the measured compressional velocities. This study demonstrates that the new theoretical relationship may be effectively used to predict the unknown geophysical property from the measured values.
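
    For context, the two classical relations named above are commonly written as follows (standard textbook forms; the specific combined equation proposed in the paper is not reproduced here). Archie's equation relates bulk electrical resistivity to porosity, and the Gassmann-based velocity expression links compressional velocity to the saturated bulk modulus, shear modulus and bulk density.

```latex
% Archie's equation (fully saturated form): bulk resistivity from pore-water
% resistivity \rho_w, porosity \phi, tortuosity factor a and cementation exponent m.
\rho = a\,\rho_w\,\phi^{-m}
\qquad
% Compressional velocity from the Gassmann saturated bulk modulus K_{sat},
% shear modulus \mu and bulk density \rho_b.
V_p = \sqrt{\frac{K_{\mathrm{sat}} + \tfrac{4}{3}\,\mu}{\rho_b}}
```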

  5. Comparison of radiofrequency and transoral robotic surgery in obstructive sleep apnea syndrome treatment.

    PubMed

    Aynacı, Engin; Karaman, Murat; Kerşin, Burak; Fındık, Mahmut Ozan

    2018-05-01

    Radiofrequency tissue ablation (RFTA) and transoral robotic surgery (TORS) are two methods used in OSAS surgery. We aimed to compare the advantages and disadvantages of RFTA and TORS as treatment methods applied in OSAS patients in terms of many parameters, especially the apnea hypopnea index (AHI). Patients were classified after a detailed examination and evaluation before surgery. 20 patients treated with anterior palatoplasty and uvulectomy -/+ tonsillectomy + RFTA (17 males, 3 females) and 20 patients treated with anterior palatoplasty and uvulectomy -/+ tonsillectomy + TORS (16 males, 4 females) were included in the study. PSG was performed preoperatively and postoperatively in all patients, and the Epworth sleepiness questionnaire was applied. All operations were performed by the same surgeon, and the two surgical methods (RFTA and TORS) were compared in terms of many parameters. When the patients treated with RFTA and TORS were compared in terms of operation time, length of hospitalization and time to return to oral feeding, all parameters were significantly greater in the patients treated with TORS. The TORS technique was found to be more successful than RFTA in terms of reduction of the AHI value, improvement of the minimum arterial oxygen saturation value and reduction of the Epworth Sleepiness Scale score.

  6. Power oscillation suppression by robust SMES in power system with large wind power penetration

    NASA Astrophysics Data System (ADS)

    Ngamroo, Issarachai; Cuk Supriyadi, A. N.; Dechanupaprittha, Sanchai; Mitani, Yasunori

    2009-01-01

    The large penetration of wind farm into interconnected power systems may cause the severe problem of tie-line power oscillations. To suppress power oscillations, the superconducting magnetic energy storage (SMES) which is able to control active and reactive powers simultaneously, can be applied. On the other hand, several generating and loading conditions, variation of system parameters, etc., cause uncertainties in the system. The SMES controller designed without considering system uncertainties may fail to suppress power oscillations. To enhance the robustness of SMES controller against system uncertainties, this paper proposes a robust control design of SMES by taking system uncertainties into account. The inverse additive perturbation is applied to represent the unstructured system uncertainties and included in power system modeling. The configuration of active and reactive power controllers is the first-order lead-lag compensator with single input feedback. To tune the controller parameters, the optimization problem is formulated based on the enhancement of robust stability margin. The particle swarm optimization is used to solve the problem and achieve the controller parameters. Simulation studies in the six-area interconnected power system with wind farms confirm the robustness of the proposed SMES under various operating conditions.
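    The sketch below illustrates the tuning step with a standard global-best particle swarm optimization over the gain and time constants (K, T1, T2) of a first-order lead-lag compensator. The fitness function is a placeholder that rewards well-damped closed-loop poles of a toy oscillatory mode; the paper's actual objective is a robust stability margin under inverse additive perturbation, and all numbers here are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(params):
            """Placeholder objective: penalise poorly damped closed-loop poles of a toy
            oscillatory mode stabilised by a lead-lag feedback K*(1+T1*s)/(1+T2*s).
            The study itself maximises a robust stability margin instead."""
            K, T1, T2 = params
            w, zeta = 2 * np.pi * 0.5, 0.02                      # lightly damped toy mode
            # closed-loop characteristic polynomial (unity output feedback through the compensator):
            # (s^2 + 2*zeta*w*s + w^2)*(1 + T2*s) + K*(1 + T1*s) = 0
            poly = np.polyadd(np.polymul([1, 2 * zeta * w, w ** 2], [T2, 1]), [K * T1, K])
            roots = np.roots(poly)
            damping = -roots.real / np.abs(roots)                # damping ratio of each pole
            return -np.min(damping)                              # maximise the worst damping ratio

        # --- standard global-best PSO over (K, T1, T2) ---
        lo, hi = np.array([0.1, 0.01, 0.01]), np.array([50.0, 1.0, 1.0])
        n_particles, n_iter = 30, 100
        x = rng.uniform(lo, hi, size=(n_particles, 3))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
        gbest = pbest[pbest_f.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()

        print("tuned K, T1, T2:", gbest, " worst damping ratio:", -pbest_f.min())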

  7. Radiometric recalibration procedure for Landsat-5 Thematic Mapper data

    USGS Publications Warehouse

    Chander, G.; Micijevic, E.; Hayes, R.W.; Barsi, J.A.

    2008-01-01

    The Landsat-5 (L5) satellite was launched on March 01, 1984, with a design life of three years. Incredibly, the L5 Thematic Mapper (TM) has collected data for 23 years. Over this time, the detectors have aged, and its radiometric characteristics have changed since launch. The calibration procedures and parameters have also changed with time. Revised radiometric calibrations have improved the radiometric accuracy of recently processed data; however, users with data that were processed prior to the calibration update do not benefit from the revisions. A procedure has been developed to give users the ability to recalibrate their existing Level 1 (L1) products without having to purchase reprocessed data from the U.S. Geological Survey (USGS). The accuracy of the recalibration is dependent on the knowledge of the prior calibration applied to the data. The "Work Order" file, included with standard National Land Archive Production System (NLAPS) data products, gives the parameters that define the applied calibration. These are the Internal Calibrator (IC) calibration parameters or the default prelaunch calibration, if there were problems with the IC calibration. This paper details the recalibration procedure for data processed using IC, for which users have the Work Order file. © 2001 IEEE.
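    The recalibration amounts to undoing the linear calibration recorded in the Work Order file and forward-applying the revised one. The sketch below is only a generic illustration of that two-step idea; the function name, the gain/bias values and the single-number treatment are hypothetical, whereas the published procedure is band-, detector- and date-specific.

        def recalibrate_radiance(l1_radiance, gain_old, bias_old, gain_new, bias_new):
            """Generic linear recalibration sketch: undo the calibration recorded in the
            Work Order file, then forward-apply the revised gain/bias.
            (Illustrative only; the published procedure is band- and scene-specific.)"""
            raw_signal = (l1_radiance - bias_old) / gain_old   # back to uncalibrated units
            return gain_new * raw_signal + bias_new

        # hypothetical numbers for one band
        print(recalibrate_radiance(l1_radiance=120.0, gain_old=0.95, bias_old=2.5,
                                   gain_new=1.02, bias_new=2.3))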

  8. A Deeper Understanding of Stability in the Solar Wind: Applying Nyquist's Instability Criterion to Wind Faraday Cup Data

    NASA Astrophysics Data System (ADS)

    Alterman, B. L.; Klein, K. G.; Verscharen, D.; Stevens, M. L.; Kasper, J. C.

    2017-12-01

    Long duration, in situ data sets enable large-scale statistical analysis of free-energy-driven instabilities in the solar wind. The plasma beta and temperature anisotropy plane provides a well-defined parameter space in which a single-fluid plasma's stability can be represented. Because this reduced parameter space can only represent instability thresholds due to the free energy of one ion species - typically the bulk protons - the true impact of instabilities on the solar wind is underestimated. Nyquist's instability criterion allows us to systematically account for other sources of free energy including beams, drifts, and additional temperature anisotropies. Utilizing over 20 years of Wind Faraday cup and magnetic field observations, we have resolved the bulk parameters for three ion populations: the bulk protons, beam protons, and alpha particles. Applying Nyquist's criterion, we calculate the number of linearly growing modes supported by each spectrum and provide a more nuanced consideration of solar wind stability. Using collisional age measurements, we predict the stability of the solar wind close to the sun. Accounting for the free energy from the three most common ion populations in the solar wind, our approach provides a more complete characterization of solar wind stability.
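    A minimal sketch of the counting step: by the argument principle, the number of linearly growing modes equals the winding number of the dispersion function D(ω) around the origin as ω traverses a contour enclosing the upper half-plane. The dispersion function used here is a toy polynomial stand-in, not the kinetic dispersion relation evaluated in the study.

        import numpy as np

        def count_unstable_modes(D, radius=10.0, n_points=20001):
            """Count zeros of D(omega) inside the upper half of |omega| < radius using
            the argument principle: zeros = winding number of D along the boundary."""
            # boundary of the half-disc: real axis from -R to R, then a semicircle back
            segment = np.linspace(-radius, radius, n_points)
            theta = np.linspace(0.0, np.pi, n_points)
            contour = np.concatenate([segment, radius * np.exp(1j * theta)])
            values = D(contour)
            phase = np.unwrap(np.angle(values))
            return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

        # toy dispersion function with exactly one growing root (positive imaginary part)
        D_toy = lambda w: (w - (0.5 + 0.3j)) * (w - (-1.0 - 0.2j)) * (w + 2.0 + 0.1j)
        print(count_unstable_modes(D_toy))   # -> 1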

  9. Flutter and divergence instability of supported piezoelectric nanotubes conveying fluid

    NASA Astrophysics Data System (ADS)

    Bahaadini, Reza; Hosseini, Mohammad; Jamali, Behnam

    2018-01-01

    In this paper, divergence and flutter instabilities of supported piezoelectric nanotubes containing flowing fluid are investigated. To take the size effects into account, the nonlocal elasticity theory is implemented in conjunction with the Euler-Bernoulli beam theory incorporating surface stress effects. The Knudsen number is applied to investigate the slip boundary conditions between the flow and the nanotube wall. The nonlocal governing equations of the nanotube are obtained using the Newtonian method, including the influence of the piezoelectric voltage, surface effects, Knudsen number and nonlocal parameter. The Galerkin approach is applied to transform the resulting equations into a set of eigenvalue equations under simple-simple (S-S) and clamped-clamped (C-C) boundary conditions. The effects of the piezoelectric voltage, surface effects, Knudsen number, nonlocal parameter and boundary conditions on the divergence and flutter boundaries of nanotubes are discussed. It is observed that the fluid-conveying nanotubes with both ends supported lose their stability by divergence first and then by flutter with increase in fluid velocity. Results indicate the importance of the piezoelectric voltage, nonlocal parameter and Knudsen number in decreasing the critical flow velocities of the system. Moreover, the surface effects have a significant influence on the eigenfrequencies and critical fluid velocity.

  10. Automated Guided-Wave Scanning Developed to Characterize Materials and Detect Defects

    NASA Technical Reports Server (NTRS)

    Martin, Richard E.; Gyekenyesi, Andrew L.; Roth, Don J.

    2004-01-01

    The Nondestructive Evaluation (NDE) Group of the Optical Instrumentation Technology Branch at the NASA Glenn Research Center has developed a scanning system that uses guided waves to characterize materials and detect defects. The technique uses two ultrasonic transducers to interrogate the condition of a material. The sending transducer introduces an ultrasonic pulse at a point on the surface of the specimen, and the receiving transducer detects the signal after it has passed through the material. The aim of the method is to correlate certain parameters in both the time and frequency domains of the detected waveform to characteristics of the material between the two transducers. The scanning system is shown. The waveform parameters of interest include the attenuation due to internal damping, waveform shape parameters, and frequency shifts due to material changes. For the most part, guided waves are used to gauge the damage state and defect growth of materials subjected to various mechanical or environmental loads. The technique has been applied to polymer matrix composites, ceramic matrix composites, and metal matrix composites as well as metallic alloys. Historically, guided wave analysis has been a point-by-point, manual technique with waveforms collected at discrete locations and postprocessed. Data collection and analysis of this type limits the amount of detail that can be obtained. Also, the manual movement of the sensors is prone to user error and is time consuming. The development of an automated guided-wave scanning system has allowed the method to be applied to a wide variety of materials in a consistent, repeatable manner. Experimental studies have been conducted to determine the repeatability of the system as well as to compare the results with those obtained using more traditional NDE methods. The following screen capture shows guided-wave scan results for a ceramic matrix composite plate, including images for each of nine calculated parameters. The system can display up to 18 different wave parameters. Multiple scans of the test specimen demonstrated excellent repeatability in the measurement of all the guided-wave parameters, far exceeding the traditional point-by-point technique. In addition, the scan was able to detect a subsurface defect that was confirmed using flash thermography. This technology is being further refined to provide a more robust and efficient software environment. Future hardware upgrades will allow for multiple receiving transducers and the ability to scan more complex surfaces. This work supports composite materials development and testing under the Ultra-Efficient Engine Technology (UEET) Project, but it also will be applied to other material systems under development for a wide range of applications.
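    A minimal sketch of extracting two of the waveform parameters mentioned above, an amplitude ratio (a proxy for attenuation) and a spectral-centroid shift (a proxy for frequency shift), from a received signal relative to a reference; the signals and numbers are synthetic and the parameter definitions are illustrative rather than those implemented in the Glenn scanning software.

        import numpy as np

        def waveform_parameters(signal, reference, dt):
            """Toy extraction of two guided-wave metrics: amplitude ratio in dB (a proxy
            for attenuation) and spectral-centroid shift in Hz (a proxy for frequency shift)."""
            def centroid(x):
                spec = np.abs(np.fft.rfft(x))
                freqs = np.fft.rfftfreq(len(x), dt)
                return np.sum(freqs * spec) / np.sum(spec)
            amp_ratio_db = 20 * np.log10(np.max(np.abs(signal)) / np.max(np.abs(reference)))
            return amp_ratio_db, centroid(signal) - centroid(reference)

        # synthetic example: the received pulse is weaker and slightly down-shifted
        t = np.arange(0, 50e-6, 1e-8)
        ref = np.sin(2 * np.pi * 200e3 * t) * np.exp(-((t - 10e-6) / 4e-6) ** 2)
        rec = 0.4 * np.sin(2 * np.pi * 180e3 * t) * np.exp(-((t - 12e-6) / 5e-6) ** 2)
        print(waveform_parameters(rec, ref, dt=1e-8))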

  11. Estimating Colloidal Contact Model Parameters Using Quasi-Static Compression Simulations.

    PubMed

    Bürger, Vincent; Briesen, Heiko

    2016-10-05

    For colloidal particles interacting in suspensions, clusters, or gels, contact models should attempt to include all physical phenomena experimentally observed. One critical point when formulating a contact model is to ensure that the interaction parameters can be easily obtained from experiments. Experimental determinations of contact parameters for particles either are based on bulk measurements for simulations on the macroscopic scale or require elaborate setups for obtaining tangential parameters such as using atomic force microscopy. However, on the colloidal scale, a simple method is required to obtain all interaction parameters simultaneously. This work demonstrates that quasi-static compression of a fractal-like particle network provides all the necessary information to obtain particle interaction parameters using a simple spring-based contact model. These springs provide resistances against all degrees of freedom associated with two-particle interactions, and include critical forces or moments where such springs break, indicating a bond-breakage event. A position-based cost function is introduced to show the identifiability of the two-particle contact parameters, and a discrete, nonlinear, and non-gradient-based global optimization method (simplex with simulated annealing, SIMPSA) is used to minimize the cost function calculated from deviations of particle positions. Results show that, in principle, all necessary contact parameters for an arbitrary particle network can be identified, although numerical efficiency as well as experimental noise must be addressed when applying this method. Such an approach lays the groundwork for identifying particle-contact parameters from a position-based particle analysis for a colloidal system using just one experiment. Spring constants also directly influence the time step of the discrete-element method, and a detailed knowledge of all necessary interaction parameters will help to improve the efficiency of colloidal particle simulations.
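    A minimal sketch of the identification idea, assuming a stand-in forward model in place of the discrete-element compression simulation: a position-based cost function over two hypothetical contact stiffnesses is minimized with SciPy's dual annealing, which plays the role of the SIMPSA optimizer used in the paper.

        import numpy as np
        from scipy.optimize import dual_annealing

        rng = np.random.default_rng(1)

        def simulate_positions(params, n=12, load=1.0):
            """Hypothetical stand-in for the DEM compression run: maps two contact
            stiffnesses to the final positions of a small particle chain. A real
            study would run the full discrete-element simulation here."""
            k_n, k_t = params
            x = np.arange(n) * (1.0 - load / k_n)            # axial compression ~ 1/k_n
            y = 0.05 * np.sin(np.arange(n)) * (load / k_t)   # lateral deflection ~ 1/k_t
            return np.column_stack([x, y])

        # synthetic "experiment" with known true stiffnesses plus measurement noise
        true_params = (40.0, 15.0)
        observed = simulate_positions(true_params) + rng.normal(0, 1e-3, (12, 2))

        def position_cost(params):
            """Position-based cost: sum of squared deviations between simulated and
            observed particle positions (the identifiability criterion described above)."""
            return np.sum((simulate_positions(params) - observed) ** 2)

        result = dual_annealing(position_cost, bounds=[(1.0, 100.0), (1.0, 100.0)], seed=2)
        print("estimated k_n, k_t:", result.x)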

  12. Stochastic resonance algorithm applied to quantitative analysis for weak chromatographic signals of alkyl halides and alkyl benzenes in water samples.

    PubMed

    Xiang, Suyun; Wang, Wei; Xia, Jia; Xiang, Bingren; Ouyang, Pingkai

    2009-09-01

    The stochastic resonance algorithm is applied to the trace analysis of alkyl halides and alkyl benzenes in water samples. Compared with applying the algorithm to a single signal, the optimization of system parameters for a multicomponent sample is more complex. In this article, the resolution of adjacent chromatographic peaks is for the first time included in the optimization of the parameters. With the optimized parameters, the algorithm gave an ideal output with good resolution as well as an enhanced signal-to-noise ratio. Applying the enhanced signals, the method extended the limit of detection and exhibited good linearity, which ensures accurate determination of the multiple components.
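    A minimal sketch of the underlying bistable (overdamped double-well) stochastic-resonance system commonly used for weak-signal enhancement; the parameters a and b are the quantities that would be optimized per component, and the chromatographic-like signal here is synthetic.

        import numpy as np

        def stochastic_resonance(signal, a, b, dt):
            """Propagate the input through the overdamped bistable system
            dx/dt = a*x - b*x**3 + s(t); with tuned (a, b) weak features are
            amplified relative to broadband noise."""
            x, out = 0.0, np.empty_like(signal)
            for i, s in enumerate(signal):
                x += dt * (a * x - b * x ** 3 + s)   # explicit Euler step
                out[i] = x
            return out

        # synthetic weak chromatographic-like peak buried in noise (illustrative only)
        t = np.linspace(0, 10, 5000)
        peak = 0.05 * np.exp(-((t - 5.0) / 0.2) ** 2)
        noisy = peak + np.random.default_rng(3).normal(0, 0.05, t.size)
        enhanced = stochastic_resonance(noisy, a=1.0, b=1.0, dt=t[1] - t[0])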

  13. Hansen solubility parameters for polyethylene glycols by inverse gas chromatography.

    PubMed

    Adamska, Katarzyna; Voelkel, Adam

    2006-11-03

    Inverse gas chromatography (IGC) has been applied to determine the solubility parameter and its components for nonionic surfactants--polyethylene glycols (PEG) of different molecular weights. The Flory-Huggins interaction parameter (chi) and solubility parameter (delta(2)) were calculated according to the DiPaola-Baranyi and Guillet method from experimentally collected retention data for a series of carefully selected test solutes. Hansen's three-dimensional solubility parameter concept was applied to determine the components (delta(d), delta(p), delta(h)) of the corrected solubility parameter (delta(T)). The molecular weight and temperature of measurement influence the solubility parameter data, estimated from the slope, intercept and total solubility parameter. The solubility parameters calculated from the intercept are lower than those calculated from the slope. Temperature and structural dependences of the entropic factor (chi(S)) are presented and discussed.
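    The corrected (total) solubility parameter follows from its Hansen components as δ_T² = δ_d² + δ_p² + δ_h²; a one-line sketch with illustrative component values rather than the measured PEG data:

        import math

        def hansen_total(delta_d, delta_p, delta_h):
            """Total solubility parameter: delta_T**2 = delta_d**2 + delta_p**2 + delta_h**2."""
            return math.sqrt(delta_d ** 2 + delta_p ** 2 + delta_h ** 2)

        # illustrative component values in MPa**0.5 (not the measured PEG values)
        print(hansen_total(17.0, 10.0, 9.0))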

  14. Flutter analysis of swept-wing subsonic aircraft with parameter studies of composite wings

    NASA Technical Reports Server (NTRS)

    Housner, J. M.; Stein, M.

    1974-01-01

    A computer program is presented for the flutter analysis, including the effects of rigid-body roll, pitch, and plunge of swept-wing subsonic aircraft with a flexible fuselage and engines mounted on flexible pylons. The program utilizes a direct flutter solution in which the flutter determinant is derived by using finite differences, and the root locus branches of the determinant are searched for the lowest flutter speed. In addition, a preprocessing subroutine is included which evaluates the variable bending and twisting stiffness properties of the wing by using a laminated, balanced ply, filamentary composite plate theory. The program has been substantiated by comparisons with existing flutter solutions. The program has been applied to parameter studies which examine the effect of filament orientation upon the flutter behavior of wings belonging to the following three classes: wings having different angles of sweep, wings having different mass ratios, and wings having variable skin thicknesses. These studies demonstrated that the program can perform a complete parameter study in one computer run. The program is designed to detect abrupt changes in the lowest flutter speed and mode shape as the parameters are varied.

  15. Assessment of groundwater vulnerability to pollution: a combination of GIS, fuzzy logic and decision making techniques

    NASA Astrophysics Data System (ADS)

    Gemitzi, Alexandra; Petalas, Christos; Tsihrintzis, Vassilios A.; Pisinaras, Vassilios

    2006-03-01

    The assessment of groundwater vulnerability to pollution aims at highlighting areas at a high risk of being polluted. This study presents a methodology, to estimate the risk of an aquifer to be polluted from concentrated and/or dispersed sources, which applies an overlay and index method involving several parameters. The parameters are categorized into three factor groups: factor group 1 includes parameters relevant to the internal aquifer system’s properties, thus determining the intrinsic aquifer vulnerability to pollution; factor group 2 comprises parameters relevant to the external stresses to the system, such as human activities and rainfall effects; factor group 3 incorporates specific geological settings, such as the presence of geothermal fields or salt intrusion zones, into the computation process. Geographical information systems have been used for data acquisition and processing, coupled with a multicriteria evaluation technique enhanced with fuzzy factor standardization. Moreover, besides assigning weights to factors, a second set of weights, i.e., order weights, has been applied to factors on a pixel by pixel basis, thus allowing control of the level of risk in the vulnerability determination and the enhancement of local site characteristics. Individual analysis of each factor group resulted in three intermediate groundwater vulnerability to pollution maps, which were combined in order to produce the final composite groundwater vulnerability map for the study area. The method has been applied in the region of Eastern Macedonia and Thrace (Northern Greece), an area of approximately 14,000 km2. The methodology has been tested and calibrated against the measured nitrate concentration in wells, in the northwest part of the study area, providing results related to the aggregation and weighting procedure.

  16. Enhancing coronary Wave Intensity Analysis robustness by high order central finite differences

    PubMed Central

    Rivolo, Simone; Asrress, Kaleab N.; Chiribiri, Amedeo; Sammut, Eva; Wesolowski, Roman; Bloch, Lars Ø.; Grøndal, Anne K.; Hønge, Jesper L.; Kim, Won Y.; Marber, Michael; Redwood, Simon; Nagel, Eike; Smith, Nicolas P.; Lee, Jack

    2014-01-01

    Background Coronary Wave Intensity Analysis (cWIA) is a technique capable of separating the effects of proximal arterial haemodynamics from cardiac mechanics. Studies have identified WIA-derived indices that are closely correlated with several disease processes and predictive of functional recovery following myocardial infarction. The cWIA clinical application has, however, been limited by technical challenges including a lack of standardization across different studies and the derived indices' sensitivity to the processing parameters. Specifically, a critical step in WIA is the noise removal for evaluation of derivatives of the acquired signals, typically performed by applying a Savitzky–Golay filter, to reduce the high frequency acquisition noise. Methods The impact of the filter parameter selection on cWIA output, and on the derived clinical metrics (integral areas and peaks of the major waves), is first analysed. The sensitivity analysis is performed either by using the filter as a differentiator to calculate the signals' time derivative or by applying the filter to smooth the ensemble-averaged waveforms. Furthermore, the power-spectrum of the ensemble-averaged waveforms contains little high-frequency components, which motivated us to propose an alternative approach to compute the time derivatives of the acquired waveforms using a central finite difference scheme. Results and Conclusion The cWIA output and consequently the derived clinical metrics are significantly affected by the filter parameters, irrespective of its use as a smoothing filter or a differentiator. The proposed approach is parameter-free and, when applied to the 10 in-vivo human datasets and the 50 in-vivo animal datasets, enhances the cWIA robustness by significantly reducing the outcome variability (by 60%). PMID:25187852
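    A minimal sketch of the proposed parameter-free differentiation step, using a fourth-order central finite-difference stencil on the ensemble-averaged waveforms (the specific stencil order is an assumption; the paper's exact scheme is not restated here):

        import numpy as np

        def central_derivative_4th(y, dt):
            """Fourth-order accurate central finite difference for the time derivative of
            a (float) ensemble-averaged waveform; simple fallbacks near the boundaries."""
            dy = np.empty_like(y)
            dy[2:-2] = (-y[4:] + 8 * y[3:-1] - 8 * y[1:-3] + y[:-4]) / (12 * dt)
            dy[:2] = np.gradient(y[:4], dt)[:2]
            dy[-2:] = np.gradient(y[-4:], dt)[-2:]
            return dy

        def wave_intensity(p, u, dt):
            """Net wave intensity from pressure and velocity derivatives: dI = dP * dU."""
            return central_derivative_4th(p, dt) * central_derivative_4th(u, dt)

        # quick check against an analytic derivative
        t = np.linspace(0, 1, 1000)
        approx = central_derivative_4th(np.sin(2 * np.pi * t), t[1] - t[0])
        print(np.allclose(approx[2:-2], 2 * np.pi * np.cos(2 * np.pi * t)[2:-2], atol=1e-3))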

  17. Open web system of Virtual labs for nuclear and applied physics

    NASA Astrophysics Data System (ADS)

    Saldikov, I. S.; Afanasyev, V. V.; Petrov, V. I.; Ternovykh, M. Yu

    2017-01-01

    An example of virtual lab work on unique experimental equipment is presented. The virtual lab work is software based on a model of the real equipment. Virtual labs can be used in the educational process in the field of nuclear safety and analysis. As an example, the system includes the virtual lab called “Experimental determination of the material parameter depending on the pitch of a uranium-water lattice”. This paper includes a general description of this lab. A description of the database supporting laboratory work on unique experimental equipment, which is part of this work, and its concept and development are also presented.

  18. Macromolecular refinement by model morphing using non-atomic parameterizations.

    PubMed

    Cowtan, Kevin; Agirre, Jon

    2018-02-01

    Refinement is a critical step in the determination of a model which explains the crystallographic observations and thus best accounts for the missing phase components. The scattering density is usually described in terms of atomic parameters; however, in macromolecular crystallography the resolution of the data is generally insufficient to determine the values of these parameters for individual atoms. Stereochemical and geometric restraints are used to provide additional information, but produce interrelationships between parameters which slow convergence, resulting in longer refinement times. An alternative approach is proposed in which parameters are not attached to atoms, but to regions of the electron-density map. These parameters can move the density or change the local temperature factor to better explain the structure factors. Varying the size of the region which determines the parameters at a particular position in the map allows the method to be applied at different resolutions without the use of restraints. Potential applications include initial refinement of molecular-replacement models with domain motions, and potentially the use of electron density from other sources such as electron cryo-microscopy (cryo-EM) as the refinement model.

  19. Adaptive firefly algorithm: parameter analysis and its application.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm - adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa including (1) a distance-based light absorption coefficient; (2) a gray coefficient enhancing fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem - protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints with noise-free and 10% Gaussian white noise, respectively.
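    A minimal sketch of the core firefly move, in which each firefly steps toward every brighter one with attractiveness β0·exp(-γ·r²) plus a decaying random walk; the distance-based absorption coefficient, gray coefficient and adaptive randomization schedules of AdaFa are simplified to fixed illustrative values.

        import numpy as np

        rng = np.random.default_rng(4)

        def firefly_minimize(f, dim=2, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
            """Basic firefly algorithm: each firefly moves toward every brighter one with
            attractiveness beta0*exp(-gamma*r**2) plus a small random walk (alpha)."""
            x = rng.uniform(-5, 5, size=(n, dim))
            light = np.array([f(p) for p in x])
            for _ in range(iters):
                for i in range(n):
                    for j in range(n):
                        if light[j] < light[i]:                     # j is brighter (lower cost)
                            r2 = np.sum((x[i] - x[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)
                            x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                            light[i] = f(x[i])
                alpha *= 0.98                                        # simple randomization decay
            best = light.argmin()
            return x[best], light[best]

        print(firefly_minimize(lambda p: np.sum(p ** 2)))            # sphere benchmark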

  20. Adaptive Firefly Algorithm: Parameter Analysis and its Application

    PubMed Central

    Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm — adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa including (1) a distance-based light absorption coefficient; (2) a gray coefficient enhancing fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem — protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native constraints with noise-free and 10% Gaussian white noise, respectively. PMID:25397812

  1. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
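    A minimal sketch of the f(q) = 0 idea, assuming the commonly used single-body anomaly expression g(x) = A·z/(x² + z²)^q (this parameterization and all numbers are assumptions, not quoted from the paper): the depth is eliminated with one normalized anomaly value and the shape factor is then found from a second value by root finding.

        import numpy as np
        from scipy.optimize import brentq

        def anomaly(x, A, z, q):
            """Assumed simple-body gravity anomaly g(x) = A*z/(x**2 + z**2)**q."""
            return A * z / (x ** 2 + z ** 2) ** q

        def shape_factor_from_ratios(x1, x2, R1, R2):
            """Solve f(q) = 0 built from two normalized anomaly values R = g(x)/g(0).
            For the assumed form, R(x) = (z**2/(x**2 + z**2))**q, so z is eliminated
            with the first ratio and q is found from the second."""
            def z2_from(q):
                r = R1 ** (1.0 / q)
                return x1 ** 2 * r / (1.0 - r)
            def f(q):
                z2 = z2_from(q)
                return R2 - (z2 / (x2 ** 2 + z2)) ** q
            return brentq(f, 0.3, 3.0)

        # synthetic test: sphere-like body (q = 1.5) buried at depth z = 10
        A, z, q_true = 500.0, 10.0, 1.5
        g0 = anomaly(0.0, A, z, q_true)
        x1, x2 = 5.0, 12.0
        q_est = shape_factor_from_ratios(x1, x2,
                                         anomaly(x1, A, z, q_true) / g0,
                                         anomaly(x2, A, z, q_true) / g0)
        print(q_est)   # ~1.5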

  2. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values. PMID:25685472

  3. Modeling electron emission and surface effects from diamond cathodes

    DOE PAGES

    Dimitrov, D. A.; Smithe, D.; Cary, J. R.; ...

    2015-02-05

    We developed modeling capabilities, within the Vorpal particle-in-cell code, for three-dimensional (3D) simulations of surface effects and electron emission from semiconductor photocathodes. They include calculation of emission probabilities using general, piece-wise continuous, space-time dependent surface potentials, effective mass and band bending field effects. We applied these models, in combination with previously implemented capabilities for modeling charge generation and transport in diamond, to investigate the emission dependence on applied electric field in the range from approximately 2 MV/m to 17 MV/m along the [100] direction. The simulation results were compared to experimental data. For the considered parameter regime, conservation of transverse electron momentum (in the plane of the emission surface) allows direct emission from only two (parallel to [100]) of the six equivalent lowest conduction band valleys. When the electron affinity χ is the only parameter varied in the simulations, the value χ = 0.31 eV leads to overall qualitative agreement with the probability of emission deduced from experiments. Including band bending in the simulations improves the agreement with the experimental data, particularly at low applied fields, but not significantly. In this study, using surface potentials with different profiles further allows us to investigate the emission as a function of potential barrier height, width, and vacuum level position. However, adding surface patches with different levels of hydrogenation, modeled with position-dependent electron affinity, leads to the closest agreement with the experimental data.

  4. Pelvic floor muscle training protocol for stress urinary incontinence in women: A systematic review.

    PubMed

    Oliveira, Marlene; Ferreira, Margarida; Azevedo, Maria João; Firmino-Machado, João; Santos, Paula Clara

    2017-07-01

    Strengthening exercises for pelvic floor muscles (SEPFM) are considered the first approach in the treatment of stress urinary incontinence (SUI). Nevertheless, there is no evidence about training parameters. The aim was to identify the protocol and/or the most effective training parameters in the treatment of female SUI. A literature search was conducted in the PubMed, Cochrane Library, PEDro, Web of Science and Lilacs databases, with publishing dates ranging from January 1992 to March 2014. The articles included were English-language experimental studies in which SEPFM were compared with placebo treatment (usual or untreated). The sample had a diagnosis of SUI and their age ranged between 18 and 65 years. The assessment of methodological quality was performed based on the PEDro scale. Seven high methodological quality articles were included in this review. The sample consisted of 331 women, mean age 44.4±5.51 years, average duration of urinary loss of 64±5.66 months and severity of SUI ranging from mild to severe. SEPFM programs included different training parameters concerning the PFM. Some studies have applied abdominal training and adjuvant techniques. Urine leakage cure rates varied from 28.6 to 80%, while the strength increase of the PFM varied from 15.6 to 161.7%. The most effective training protocol consisted of SEPFM guided by digital palpation combined with biofeedback monitoring and vaginal cones, with a 12-week training period and ten repetitions per series in different positions, compared with SEPFM alone or no treatment.

  5. Efficient Moment-Based Inference of Admixture Parameters and Sources of Gene Flow

    PubMed Central

    Levin, Alex; Reich, David; Patterson, Nick; Berger, Bonnie

    2013-01-01

    The recent explosion in available genetic data has led to significant advances in understanding the demographic histories of and relationships among human populations. It is still a challenge, however, to infer reliable parameter values for complicated models involving many populations. Here, we present MixMapper, an efficient, interactive method for constructing phylogenetic trees including admixture events using single nucleotide polymorphism (SNP) genotype data. MixMapper implements a novel two-phase approach to admixture inference using moment statistics, first building an unadmixed scaffold tree and then adding admixed populations by solving systems of equations that express allele frequency divergences in terms of mixture parameters. Importantly, all features of the model, including topology, sources of gene flow, branch lengths, and mixture proportions, are optimized automatically from the data and include estimates of statistical uncertainty. MixMapper also uses a new method to express branch lengths in easily interpretable drift units. We apply MixMapper to recently published data for Human Genome Diversity Cell Line Panel individuals genotyped on a SNP array designed especially for use in population genetics studies, obtaining confident results for 30 populations, 20 of them admixed. Notably, we confirm a signal of ancient admixture in European populations—including previously undetected admixture in Sardinians and Basques—involving a proportion of 20–40% ancient northern Eurasian ancestry. PMID:23709261

  6. Deformation-Aware Log-Linear Models

    NASA Astrophysics Data System (ADS)

    Gass, Tobias; Deselaers, Thomas; Ney, Hermann

    In this paper, we present a novel deformation-aware discriminative model for handwritten digit recognition. Unlike previous approaches our model directly considers image deformations and allows discriminative training of all parameters, including those accounting for non-linear transformations of the image. This is achieved by extending a log-linear framework to incorporate a latent deformation variable. The resulting model has an order of magnitude less parameters than competing approaches to handling image deformations. We tune and evaluate our approach on the USPS task and show its generalization capabilities by applying the tuned model to the MNIST task. We gain interesting insights and achieve highly competitive results on both tasks.

  7. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  8. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    The application of parameter estimation to highly unstable aircraft is discussed. Included are a discussion of the problems in applying the output error method to such aircraft and a demonstration that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  9. Real-Time Aerodynamic Parameter Estimation without Air Flow Angle Measurements

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2010-01-01

    A technique for estimating aerodynamic parameters in real time from flight data without air flow angle measurements is described and demonstrated. The method is applied to simulated F-16 data, and to flight data from a subscale jet transport aircraft. Modeling results obtained with the new approach using flight data without air flow angle measurements were compared to modeling results computed conventionally using flight data that included air flow angle measurements. Comparisons demonstrated that the new technique can provide accurate aerodynamic modeling results without air flow angle measurements, which are often difficult and expensive to obtain. Implications for efficient flight testing and flight safety are discussed.

  10. Stochastic Modeling of Empirical Storm Loss in Germany

    NASA Astrophysics Data System (ADS)

    Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.

    2012-04-01

    Based on German insurance loss data for residential property, we derive storm damage functions that relate daily loss with maximum gust wind speed. Over a wide range of loss, steep power-law relationships are found with spatially varying exponents ranging between approximately 8 and 12. Global correlations between parameters and socio-demographic data are employed to reduce the number of local parameters to 3. We apply a Monte Carlo approach to calculate German loss estimates, including confidence bounds, at daily and annual resolution. Our model reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitudes.
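    A minimal sketch of the Monte Carlo step: propagating uncertainty in a power-law damage function through daily maximum gusts to obtain a loss estimate with confidence bounds. The exponents, coefficients and gust values are illustrative placeholders, not the fitted German parameters.

        import numpy as np

        rng = np.random.default_rng(5)

        def daily_loss(gusts, c, gamma, v_ref=25.0):
            """Power-law damage function applied district-wise and summed."""
            v = np.maximum(gusts, v_ref)          # no additional loss below the reference gust
            return np.sum(c * (v / v_ref) ** gamma)

        # hypothetical daily maximum gusts [m/s] for a handful of districts
        gusts = np.array([28.0, 31.5, 24.0, 35.2, 29.8])

        # Monte Carlo over parameter uncertainty (illustrative normal spreads)
        samples = [daily_loss(gusts,
                              c=rng.normal(1.0, 0.1, gusts.size),
                              gamma=rng.normal(10.0, 1.0, gusts.size))
                   for _ in range(10000)]
        print("median loss:", np.median(samples),
              "90% interval:", np.percentile(samples, [5, 95]))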

  11. The Relationship of Mean Platelet Volume/Platelet Distribution Width and Duodenal Ulcer Perforation.

    PubMed

    Fan, Zhe; Zhuang, Chengjun

    2017-03-01

    Duodenal ulcer perforation (DUP) is a severe acute abdominal disease. Mean platelet volume (MPV) and platelet distribution width (PDW) are two platelet parameters participating in many inflammatory processes. This study aims to investigate the relationship between MPV/PDW and DUP. A total of 165 patients were studied retrospectively, including 21 females and 144 males. The study included two groups: 87 normal patients (control group) and 78 duodenal ulcer perforation patients (DUP group). Routine blood parameters were collected for analysis, including white blood cell count (WBC), neutrophil ratio (NR), platelet count (PLT), MPV and PDW. Receiver operating characteristic (ROC) curve analysis was applied to evaluate the parameters' sensitivity. No significant differences were observed between the control group and DUP group in age and gender. WBC, NR and PDW were significantly increased in the DUP group (P<0.001 for each); PLT and MPV were significantly decreased in the DUP group (P<0.001 for each) compared to controls. MPV showed high sensitivity. Our results suggested a potential association between MPV/PDW and disease activity in DUP patients, and high sensitivity of MPV. © 2017 by the Association of Clinical Scientists, Inc.

  12. Adaptive Transcutaneous Power Transfer to Implantable Devices: A State of the Art Review

    PubMed Central

    Bocan, Kara N.; Sejdić, Ervin

    2016-01-01

    Wireless energy transfer is a broad research area that has recently become applicable to implantable medical devices. Wireless powering of and communication with implanted devices is possible through wireless transcutaneous energy transfer. However, designing wireless transcutaneous systems is complicated due to the variability of the environment. The focus of this review is on strategies to sense and adapt to environmental variations in wireless transcutaneous systems. Adaptive systems provide the ability to maintain performance in the face of both unpredictability (variation from expected parameters) and variability (changes over time). Current strategies in adaptive (or tunable) systems include sensing relevant metrics to evaluate the function of the system in its environment and adjusting control parameters according to sensed values through the use of tunable components. Some challenges of applying adaptive designs to implantable devices are challenges common to all implantable devices, including size and power reduction on the implant, efficiency of power transfer and safety related to energy absorption in tissue. Challenges specifically associated with adaptation include choosing relevant and accessible parameters to sense and adjust, minimizing the tuning time and complexity of control, utilizing feedback from the implanted device and coordinating adaptation at the transmitter and receiver. PMID:26999154

  13. Adaptive Transcutaneous Power Transfer to Implantable Devices: A State of the Art Review.

    PubMed

    Bocan, Kara N; Sejdić, Ervin

    2016-03-18

    Wireless energy transfer is a broad research area that has recently become applicable to implantable medical devices. Wireless powering of and communication with implanted devices is possible through wireless transcutaneous energy transfer. However, designing wireless transcutaneous systems is complicated due to the variability of the environment. The focus of this review is on strategies to sense and adapt to environmental variations in wireless transcutaneous systems. Adaptive systems provide the ability to maintain performance in the face of both unpredictability (variation from expected parameters) and variability (changes over time). Current strategies in adaptive (or tunable) systems include sensing relevant metrics to evaluate the function of the system in its environment and adjusting control parameters according to sensed values through the use of tunable components. Some challenges of applying adaptive designs to implantable devices are challenges common to all implantable devices, including size and power reduction on the implant, efficiency of power transfer and safety related to energy absorption in tissue. Challenges specifically associated with adaptation include choosing relevant and accessible parameters to sense and adjust, minimizing the tuning time and complexity of control, utilizing feedback from the implanted device and coordinating adaptation at the transmitter and receiver.

  14. Effects of processing parameters in the sonic assisted water extraction (SAWE) of 6-gingerol.

    PubMed

    Syed Jaapar, Syaripah Zaimah; Morad, Noor Azian; Iwai, Yoshio; Nordin, Mariam Firdhaus Mad

    2017-09-01

    The use of water in subcritical conditions for extraction has several drawbacks. These include the safety features, higher production costs and possible degradation of the bioactive compounds. To overcome these problems, sonic energy and an entrainer were used as external interventions to decrease the polarity of water at milder operating conditions. The effects of low (28 kHz) and high (800 kHz) frequencies of sonication on the extraction of the main ginger bioactive compound (6-gingerol) were compared. Six parameters were studied: mean particle size (MPS, mm), time of extraction, applied power, sample to solvent ratio (w/v), temperature of extraction, and the percentage of entrainer. The optimum conditions for the high-frequency SAWE prototype were MPS 0.89-1.77 mm, 45 min, 40 W applied power, 1:30 (w/v), 45°C, and 15% of ethanol as entrainer. Two-way analysis of variance (ANOVA) gave the most significant parameter, which was power, with F (1, 45.07) and p < 2.50×10⁻⁹. Although the effect of low frequency was stronger than high frequency, at the optimum conditions of the sample to solvent ratio 1:30 (w/v) with 700 mL solvent and temperature 45°C, the concentration and recovery of 6-gingerol from the high-frequency SAWE prototype were 2.69 times higher than at the low frequency of SAWE. It was found that although the effects of high frequency (800 kHz) were negligible in other studies, it could extract suitable compounds, such as 6-gingerol, at lower temperature. Therefore, the effects of sonication, which cause an enlargement in the cell wall of the ginger plant matrix, were observed using a Scanning Electron Microscope (SEM). It was found that the applied power of sonication was the most significant parameter compared to the other parameters. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart

    PubMed Central

    Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin

    2015-01-01

    Background Despite modern pharmacotherapy and advanced implantable cardiac devices, overall prognosis and quality of life of HF patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546

  16. Effect of new organic supplement (Panchgavya) on seed germination and soil quality.

    PubMed

    Jain, Paras; Sharma, Ravi Chandra; Bhattacharyya, Pradip; Banik, Pabitra

    2014-04-01

    We studied the suitability of Panchgavya (five products of the cow), a new organic amendment, for application on seed germination, plant growth, and soil health. After characterization, Panchgavya was mixed with water to form different concentrations and was tested for seed germination, germination index, and root and shoot growth of different seedlings. A four percent solution of Panchgavya was applied to different plants to test its efficacy. Panchgavya and two other organic amendments were incorporated in soil to test the change in soil chemical and microbiological parameters. Panchgavya contained higher nutrient levels as compared to farm yard manure (FYM) and vermicompost. Its application on different seeds positively influenced germination percentage, germination index, root and shoot length, and fresh and dry weight of the seedlings. Water-soluble macronutrients and pH were positively correlated with the growth parameters, whereas metals were negatively correlated. The four percent Panchgavya solution applied to some plants showed superiority in terms of plant height and chlorophyll content. Panchgavya-applied soil had higher values of macro- and micronutrients (zinc, copper, and manganese) and microbial activity as compared to FYM- and vermicompost-applied soils. Panchgavya can be gainfully used as an alternative organic supplement in agriculture.

  17. Accounting Artifacts in High-Throughput Toxicity Assays.

    PubMed

    Hsieh, Jui-Hua

    2016-01-01

    Compound activity identification is the primary goal in high-throughput screening (HTS) assays. However, assay artifacts including both systematic (e.g., compound auto-fluorescence) and nonsystematic (e.g., noise) complicate activity interpretation. In addition, other than the traditional potency parameter, half-maximal effect concentration (EC50), additional activity parameters (e.g., point-of-departure, POD) could be derived from HTS data for activity profiling. A data analysis pipeline has been developed to handle the artifacts and to provide compound activity characterization with either binary or continuous metrics. This chapter outlines the steps in the pipeline using Tox21 glucocorticoid receptor (GR) β-lactamase assays, including the formats to identify either agonists or antagonists, as well as the counter-screen assays for identifying artifacts as examples. The steps can be applied to other lower-throughput assays with concentration-response data.

  18. Kinematical Test Theories for Special Relativity

    NASA Astrophysics Data System (ADS)

    Lämmerzahl, Claus; Braxmaier, Claus; Dittus, Hansjörg; Müller, Holger; Peters, Achim; Schiller, Stephan

    A comparison of certain kinematical test theories for Special Relativity, including the Robertson and Mansouri-Sexl test theories, is presented, and the accuracy of the experimental results testing Special Relativity is expressed in terms of the parameters appearing in these test theories. The theoretical results are applied to the most precise experimental results obtained recently for the isotropy of light propagation and the constancy of the speed of light.

  19. Effects of the bilateral isokinetic strengthening training on functional parameters, gait, and the quality of life in patients with stroke.

    PubMed

    Büyükvural Şen, Sıdıka; Özbudak Demir, Sibel; Ekiz, Timur; Özgirgin, Neşe

    2015-01-01

    To evaluate the effects of the bilateral isokinetic strengthening training applied to knee and ankle muscles on balance, functional parameters, gait, and the quality of life in stroke patients. Fifty patients (33 M, 17 F) with subacute-chronic stroke and 30 healthy subjects were included. Stroke patients were allocated into isokinetic and control groups. Conventional rehabilitation program was applied to all cases; additionally maximal concentric isokinetic strengthening training was applied to the knee-ankle muscles bilaterally in the isokinetic group 5 days a week for 3 weeks. Biodex System 3 Pro Multijoint System isokinetic dynamometer was used for isokinetic evaluation. The groups were assessed by Functional Independence Measure, Stroke Specific Quality of Life Scale, Timed 10-Meter Walk Test, Six-Minute Walk Test, Stair-Climbing Test, Timed up&go Test, Berg Balance Scale, and Rivermead Mobility Index. Compared with baseline, the isokinetic PT values of the knee and ankle on both sides significantly increased in all cases. PT change values were significantly higher in the isokinetic group than the control group (P<0.025). Furthermore, the quality of life, gait, balance and mobility index values improved significantly in both groups, and the increases were significantly higher in the isokinetic group (P<0.025, P<0.05). Bilateral isokinetic strengthening training in addition to conventional rehabilitation program after stroke seems to be effective on strengthening muscles on both sides, improving functional parameters, gait, balance and life quality.

  20. Effects of the bilateral isokinetic strengthening training on functional parameters, gait, and the quality of life in patients with stroke

    PubMed Central

    Büyükvural Şen, Sıdıka; Özbudak Demir, Sibel; Ekiz, Timur; Özgirgin, Neşe

    2015-01-01

    Objective: To evaluate the effects of the bilateral isokinetic strengthening training applied to knee and ankle muscles on balance, functional parameters, gait, and the quality of life in stroke patients. Methods: Fifty patients (33 M, 17 F) with subacute-chronic stroke and 30 healthy subjects were included. Stroke patients were allocated into isokinetic and control groups. Conventional rehabilitation program was applied to all cases; additionally maximal concentric isokinetic strengthening training was applied to the knee-ankle muscles bilaterally in the isokinetic group 5 days a week for 3 weeks. Biodex System 3 Pro Multijoint System isokinetic dynamometer was used for isokinetic evaluation. The groups were assessed by Functional Independence Measure, Stroke Specific Quality of Life Scale, Timed 10-Meter Walk Test, Six-Minute Walk Test, Stair-Climbing Test, Timed up&go Test, Berg Balance Scale, and Rivermead Mobility Index. Results: Compared with baseline, the isokinetic PT values of the knee and ankle on both sides significantly increased in all cases. PT change values were significantly higher in the isokinetic group than the control group (P<0.025). Furthermore, the quality of life, gait, balance and mobility index values improved significantly in both groups, and the increases were significantly higher in the isokinetic group (P<0.025, P<0.05). Conclusion: Bilateral isokinetic strengthening training in addition to conventional rehabilitation program after stroke seems to be effective on strengthening muscles on both sides, improving functional parameters, gait, balance and life quality. PMID:26629238

  1. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.
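    A minimal sketch of the parameter-estimation idea for a single LEC: a Gaussian naturalness prior is combined with a likelihood whose error is inflated by an assumed truncation term, and the posterior is evaluated on a grid. The observable model and all numbers are toy assumptions, not the chiral EFT analysis in the thesis.

        import numpy as np

        # toy data generated from a "true" leading-order relation  y = a_true * x
        x = np.linspace(0.1, 0.5, 8)
        a_true = 1.3
        rng = np.random.default_rng(6)
        y = a_true * x + rng.normal(0, 0.01, x.size)

        sigma_exp = 0.01          # experimental error
        sigma_trunc = x ** 2      # assumed size of the first omitted EFT order (~ x^2)
        sigma2 = sigma_exp ** 2 + sigma_trunc ** 2   # truncation error as extra Gaussian noise

        # grid posterior: naturalness prior a ~ N(0, 5^2) times a Gaussian likelihood
        a_grid = np.linspace(-5, 5, 4001)
        log_prior = -0.5 * (a_grid / 5.0) ** 2
        log_like = np.array([-0.5 * np.sum((y - a * x) ** 2 / sigma2) for a in a_grid])
        post = np.exp(log_prior + log_like - np.max(log_prior + log_like))
        post /= np.trapz(post, a_grid)

        mean = np.trapz(a_grid * post, a_grid)
        std = np.sqrt(np.trapz((a_grid - mean) ** 2 * post, a_grid))
        print(f"a = {mean:.3f} +/- {std:.3f}")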

  2. Innovation Analysis Approach to Design Parameters of High Speed Train Carriage and Their Intrinsic Complexity Relationships

    NASA Astrophysics Data System (ADS)

    Xiao, Shou-Ne; Wang, Ming-Meng; Hu, Guang-Zhong; Yang, Guang-Wu

    2017-09-01

    It is difficult to accurately grasp the influence range and the transmission paths of top-level vehicle design requirements on the underlying design parameters. Applying a directed-weighted complex network to the product parameter model is an important method that can clarify the relationships between product parameters and establish the top-down design of a product. The relationships of the product parameters of each node are calculated via a simple path searching algorithm, and the main design parameters are extracted by analysis and comparison. A uniform definition of the out-in-degree index formula can be provided based on the analysis of out-in-degree width and depth and the control strength of train carriage body parameters. Vehicle gauge, axle load, crosswind and other parameters with higher values of the out-degree index are the most important boundary conditions; the most considerable performance indices are the parameters that have higher values of the out-in-degree index, including torsional stiffness, maximum testing speed, service life of the vehicle, and so on; the main design parameters contain train carriage body weight, train weight per extended metre, train height and other parameters with higher values of the in-degree index. The network not only provides theoretical guidance for exploring the relationships of design parameters, but also further enriches the application of the forward design method to high-speed trains.
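    A minimal sketch of the network construction and degree-based ranking, using invented placeholder parameters and weights for the train carriage body; the combined out-in index shown is a simple illustrative score, not the paper's exact formula.

        import networkx as nx

        # hypothetical parameter-dependency edges: (influencing, influenced, weight)
        edges = [
            ("vehicle_gauge", "car_body_width", 0.9),
            ("axle_load", "car_body_weight", 0.8),
            ("crosswind", "torsional_stiffness", 0.6),
            ("torsional_stiffness", "car_body_weight", 0.7),
            ("max_test_speed", "torsional_stiffness", 0.5),
            ("car_body_weight", "weight_per_metre", 0.9),
        ]

        G = nx.DiGraph()
        G.add_weighted_edges_from(edges)

        for node in G.nodes:
            w_out = G.out_degree(node, weight="weight")
            w_in = G.in_degree(node, weight="weight")
            # illustrative out-in index: net outgoing influence normalised by total coupling
            index = (w_out - w_in) / (w_out + w_in)
            print(f"{node:22s} out={w_out:.2f} in={w_in:.2f} out-in index={index:+.2f}")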

  3. Application of multivariable search techniques to the optimization of airfoils in a low speed nonlinear inviscid flow field

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Merz, A. W.

    1975-01-01

    Multivariable search techniques are applied to a particular class of airfoil optimization problems. These are the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. The airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiency of the techniques is compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free stream Mach numbers.
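
    As a rough illustration of the elementary random-perturbation searches compared in the study, the sketch below minimizes a stand-in quadratic objective over seven design variables; the objective, step size and iteration count are assumptions, not the AESOP flow-field problem.

```python
# Hedged sketch of an elementary random-perturbation search over seven design
# variables; the objective is a placeholder for e.g. disturbance pressure
# magnitude, not an inviscid flow solver.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for the true design objective evaluated by a flow solver.
    return np.sum((x - 0.3) ** 2)

x = np.zeros(7)          # seven airfoil profile perturbation variables
step = 0.1
best = objective(x)
for _ in range(2000):
    trial = x + step * rng.standard_normal(7)   # random perturbation
    f = objective(trial)
    if f < best:                                # accept only improvements
        x, best = trial, f
print(best, x)
```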

  4. Effect of epidural anaesthesia on clinician-applied force during vaginal delivery.

    PubMed

    Poggi, Sarah H; Allen, Robert H; Patel, Chirag; Deering, Shad H; Pezzullo, John C; Shin, Young; Spong, Catherine Y

    2004-09-01

    Epidural anesthesia (EA) is used in 80% of vaginal deliveries and is linked to neonatal and maternal trauma. Our objectives were to determine (1) whether EA affected clinician-applied force on the fetus and (2) whether this force influenced perineal trauma. After informed consent, multiparas with term, cephalic singleton pregnancies were delivered by one physician wearing a sensor-equipped glove to record the force exerted on the fetal head. Those with EA were compared with those without with respect to delivery force parameters. Regression analysis was used to identify predictors of vaginal laceration. The force required for delivery was greater in patients with EA (n = 27) than without (n = 5) (P < .01). Clinical parameters, including birth weight (P = .31), were similar between the groups. Clinician force was similar in those with no versus first- versus second-degree laceration (P = .5). Only birth weight was predictive of laceration (P = .02). Epidural use resulted in greater clinician force being required for vaginal delivery of the fetus in multiparas, but this force was not associated with perineal trauma.

  5. Impact of forest fires on particulate matter and ozone levels during the 2003, 2004 and 2005 fire seasons in Portugal.

    PubMed

    Martins, V; Miranda, A I; Carvalho, A; Schaap, M; Borrego, C; Sá, E

    2012-01-01

    The main purpose of this work is to estimate the impact of forest fires on air pollution by applying the LOTOS-EUROS air quality modeling system in Portugal for three consecutive years, 2003-2005. Forest fire emissions have been included in the modeling system through the development of a numerical module, which takes into account the parameters most suitable for Portuguese forest fire characteristics and the area burnt by large forest fires. To better evaluate the influence of forest fires on air quality, the LOTOS-EUROS system has been applied with and without forest fire emissions. Hourly concentration results have been compared to measured data at several monitoring locations, with better model quality parameters obtained when forest fire emissions were considered. Moreover, hourly estimates with and without fire emissions can reach differences of the order of 20%, showing the importance and the influence of this type of emissions on air quality. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. A variational approach to parameter estimation in ordinary differential equations.

    PubMed

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the fields of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady-state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire time courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither of which is constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. By the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.
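
    The idea of augmenting the state with an unconstrained input course can be illustrated with a minimal example. This is not the paper's variational derivation: the linear-ramp form of the input u(t), the toy reaction and all numbers are assumptions made only for the sketch.

```python
# Minimal illustration: treat an unknown input course u(t) as an extra ODE
# state of an augmented system and estimate its parameters together with the
# rate constant by conventional least squares.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 5.0, 30)

def augmented_rhs(t, y, k, slope):
    x, u = y
    return [u - k * x,      # original network component
            slope]          # augmented state: the input course u(t)

def simulate(params):
    k, u0, slope = params
    sol = solve_ivp(augmented_rhs, (0.0, 5.0), [0.0, u0],
                    args=(k, slope), t_eval=t_obs)
    return sol.y[0]

rng = np.random.default_rng(1)
data = simulate([0.8, 1.0, 0.2]) + 0.02 * rng.standard_normal(t_obs.size)

fit = least_squares(lambda p: simulate(p) - data, x0=[0.5, 0.5, 0.0])
print(fit.x)   # combined estimate of rate constant and input-course parameters
```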

  7. Modelling leaf photosynthetic and transpiration temperature-dependent responses in Vitis vinifera cv. Semillon grapevines growing in hot, irrigated vineyard conditions

    PubMed Central

    Greer, Dennis H.

    2012-01-01

    Background and aims: Grapevines growing in Australia are often exposed to very high temperatures and the question of how the gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature to quantify the impact of the growing conditions on vine performance. Methodology: Leaf gas exchange was measured along the grapevine shoots in accordance with their growth and development over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands and the model parameters and coefficients applied to independent datasets to validate the model. Principal results: Photosynthesis, transpiration and stomatal conductance varied along the shoot, with early emerging leaves having the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach applied to these data revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. The temperature-dependent coefficients for these parameters applied to other datasets gave a predicted rate of photosynthesis that was linearly related to the measured rates, with a 1:1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf-to-air vapour pressure deficit, and applying the coefficients also showed a highly linear relationship, with a 1:1 slope between measured and modelled rates, when applied to independent datasets. Conclusions: The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220

  8. Visual evaluation of kinetic characteristics of PET probe for neuroreceptors using a two-phase graphic plot analysis.

    PubMed

    Ito, Hiroshi; Ikoma, Yoko; Seki, Chie; Kimura, Yasuyuki; Kawaguchi, Hiroshi; Takuwa, Hiroyuki; Ichise, Masanori; Suhara, Tetsuya; Kanno, Iwao

    2017-05-01

    Objectives: In PET studies for neuroreceptors, tracer kinetics are described by the two-tissue compartment model (2-TCM), and binding parameters, including the total distribution volume (V_T), non-displaceable distribution volume (V_ND), and binding potential (BP_ND), can be determined from model parameters estimated by kinetic analysis. The stability of binding parameter estimates depends on the kinetic characteristics of radioligands. To describe these kinetic characteristics, we previously developed a two-phase graphic plot analysis in which V_ND and V_T can be estimated from the x-intercept of regression lines for the early and delayed phases, respectively. In this study, we applied this graphic plot analysis to visual evaluation of the kinetic characteristics of radioligands for neuroreceptors, and investigated the relationship between the shape of these graphic plots and the stability of binding parameters estimated by kinetic analysis with 2-TCM in simulated brain tissue time-activity curves (TACs) with various binding parameters. Methods: 90-min TACs were generated with the arterial input function and assumed kinetic parameters according to 2-TCM. Graphic plot analysis was applied to these simulated TACs, and the curvature of the plot for each TAC was evaluated visually. TACs with several noise levels were also generated with various kinetic parameters, and the bias and variation of binding parameters estimated by kinetic analysis were calculated for each TAC. These bias and variation were compared with the shape of the graphic plots. Results: The graphic plots showed larger curvature for TACs with higher specific binding and slower dissociation of specific binding. The quartile deviations of V_ND and BP_ND determined by kinetic analysis were smaller for radioligands with slow dissociation. Conclusions: The larger curvature of graphic plots for radioligands with slow dissociation might indicate a stable determination of V_ND and BP_ND by kinetic analysis. For investigation of the kinetics of radioligands, such kinetic characteristics should be considered.
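
    The 2-TCM simulation step can be sketched as below. The analytic input function and the kinetic rate constants are assumed values for illustration; only the standard compartment equations and the textbook relations V_ND = K1/k2, BP_ND = k3/k4, V_T = V_ND(1 + BP_ND) are taken as given.

```python
# Hedged sketch of a two-tissue compartment model (2-TCM) time-activity curve
# with an assumed arterial input function; parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def input_function(t):
    # Simple assumed arterial input: fast rise, slow clearance.
    return 10.0 * t * np.exp(-t / 2.0)

def two_tcm(t, C, K1, k2, k3, k4):
    C1, C2 = C
    Cp = input_function(t)
    dC1 = K1 * Cp - (k2 + k3) * C1 + k4 * C2   # non-displaceable compartment
    dC2 = k3 * C1 - k4 * C2                    # specific-binding compartment
    return [dC1, dC2]

K1, k2, k3, k4 = 0.1, 0.2, 0.05, 0.02
t = np.linspace(0.0, 90.0, 181)                # 90-min TAC
sol = solve_ivp(two_tcm, (0.0, 90.0), [0.0, 0.0], args=(K1, k2, k3, k4), t_eval=t)
tac = sol.y[0] + sol.y[1]                      # total tissue activity

V_ND = K1 / k2                 # non-displaceable distribution volume
BP_ND = k3 / k4                # binding potential
V_T = V_ND * (1.0 + BP_ND)     # total distribution volume
print(V_ND, BP_ND, V_T)
```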

  9. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters appearing on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.

  10. An investigation of desalination by nanofiltration, reverse osmosis and integrated (hybrid NF/RO) membranes employed in brackish water treatment.

    PubMed

    Talaeipour, M; Nouri, J; Hassani, A H; Mahvi, A H

    2017-01-01

    Membrane processes are an appropriate tool for desalination of brackish water in the production of drinking water. The present study investigates desalination of brackish water from Qom Province, Iran, and was carried out at the central laboratory of the Water and Wastewater Company of the studied area. To this end, nanofiltration (NF) and reverse osmosis (RO) were applied separately and as a hybrid process. Physical and chemical water parameters, including salinity, total dissolved solids (TDS), electric conductivity (EC), Na+ and Cl-, were also measured, and the rejection percentage of each parameter was investigated and compared between nanofiltration, reverse osmosis and the hybrid process. The treatment was performed with a Luna household-scale desalination pilot (Luna Water 100GPD), whose membrane was replaced by NF90-2540 and TW30-1821-100 membranes for the nanofiltration and reverse osmosis processes, respectively. To study the effect of applied pressure on permeate quality, the reverse osmosis system analysis (ROSA) simulation model was applied. Salinity rejection was 50.21%, 72.82% and 78.56% for the NF, RO and hybrid processes, respectively. The rejection percentages of the physico-chemical parameters and ions in the pilot plant were: in the NF system, salinity 50.21, TDS 43.41, EC 43.62, Cl 21.1, Na 36.15; in the RO membrane, salinity 72.02, TDS 60.26, EC 60.33, Cl 43.08, Na 54.41; and in the hybrid system, salinity 78.65, TDS 76.52, EC 76.42, Cl 63.95, Na 70.91. Comparing the rejection percentages of the three methods, ion and non-ion parameter rejection was better in the reverse osmosis process than in nanofiltration, and better again in the hybrid process than in reverse osmosis alone. The results reported in this paper indicate that integrating nanofiltration with reverse osmosis (hybrid NF/RO) can improve the removal of salinity, TDS, EC, Cl, and Na.

  11. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of these quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
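
    The core integration idea can be sketched numerically for a binary trait with a logit link: observed-scale quantities follow from integrating the inverse-link-transformed latent values over the latent Gaussian distribution. The latent mean and variance components below are assumed, and the average-derivative expression for the observed-scale additive variance is only a rough stand-in for the exact expressions implemented in QGglmm.

```python
# Sketch (not the QGglmm implementation): observed-scale mean, phenotypic
# variance and an approximate heritability for a Bernoulli trait with a logit
# link, given assumed latent-scale mean and variance components.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, var_a, var_e = 0.5, 0.3, 0.7        # latent mean, additive and residual variance (assumed)
var_tot = var_a + var_e

inv_logit = lambda l: 1.0 / (1.0 + np.exp(-l))
latent_pdf = lambda l: norm.pdf(l, loc=mu, scale=np.sqrt(var_tot))

# Observed-scale population mean: E[inv_logit(latent)]
p_mean, _ = quad(lambda l: inv_logit(l) * latent_pdf(l), -np.inf, np.inf)

# Observed-scale phenotypic variance of the binary trait
var_obs = p_mean * (1.0 - p_mean)

# Approximate observed-scale additive variance: latent var_a scaled by the
# squared average derivative of the inverse link over the latent distribution.
dmean, _ = quad(lambda l: inv_logit(l) * (1.0 - inv_logit(l)) * latent_pdf(l), -np.inf, np.inf)
var_a_obs = (dmean ** 2) * var_a

print(p_mean, var_a_obs / var_obs)      # observed-scale mean and heritability
```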

  12. Efficient hierarchical trans-dimensional Bayesian inversion of magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Xiang, Enming; Guo, Rongwen; Dosso, Stan E.; Liu, Jianxin; Dong, Hao; Ren, Zhengyong

    2018-06-01

    This paper develops an efficient hierarchical trans-dimensional (trans-D) Bayesian algorithm to invert magnetotelluric (MT) data for subsurface geoelectrical structure, with unknown geophysical model parameterization (the number of conductivity-layer interfaces) and data-error models parameterized by an auto-regressive (AR) process to account for potential error correlations. The reversible-jump Markov-chain Monte Carlo algorithm, which adds/removes interfaces and AR parameters in birth/death steps, is applied to sample the trans-D posterior probability density for model parameterization, model parameters, error variance and AR parameters, accounting for the uncertainties of model dimension and data-error statistics in the uncertainty estimates of the conductivity profile. To provide efficient sampling over the multiple subspaces of different dimensions, advanced proposal schemes are applied. Parameter perturbations are carried out in principal-component space, defined by eigen-decomposition of the unit-lag model covariance matrix, to minimize the effect of inter-parameter correlations and provide effective perturbation directions and length scales. Parameters of new layers in birth steps are proposed from the prior, instead of focused distributions centred at existing values, to improve birth acceptance rates. Parallel tempering, based on a series of parallel interacting Markov chains with successively relaxed likelihoods, is applied to improve chain mixing over model dimensions. The trans-D inversion is applied in a simulation study to examine the resolution of model structure according to the data information content. The inversion is also applied to a measured MT data set from south-central Australia.
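
    The principal-component proposal step mentioned above can be illustrated in a few lines: perturbations are drawn along eigenvectors of a model covariance estimate so that correlated parameters move together. The covariance values and step scaling below are assumptions, not those of the published inversion.

```python
# Illustrative sketch of proposing MCMC perturbations in principal-component
# space, defined by eigen-decomposition of a (stand-in) model covariance matrix.
import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.2],
                [0.1, 0.2, 0.5]])          # stand-in unit-lag model covariance
eigval, eigvec = np.linalg.eigh(cov)

m = np.array([2.0, -1.0, 0.5])             # current model parameters
step = 0.1
# Perturb along the principal components, scaled by sqrt of the eigenvalues.
dm_pc = step * np.sqrt(eigval) * rng.standard_normal(3)
m_proposed = m + eigvec @ dm_pc
print(m_proposed)
```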

  13. A systematic and critical review of the evolving methods and applications of value of information in academia and practice.

    PubMed

    Steuten, Lotte; van de Wetering, Gijs; Groothuis-Oudshoorn, Karin; Retèl, Valesca

    2013-01-01

    This article provides a systematic and critical review of the evolving methods and applications of value of information (VOI) in academia and practice and discusses where future research needs to be directed. Published VOI studies were identified by conducting a computerized search on Scopus and ISI Web of Science from 1980 until December 2011 using pre-specified search terms. Only full-text papers that outlined and discussed VOI methods for medical decision making, and studies that applied VOI and explicitly discussed the results with a view to informing healthcare decision makers, were included. The included papers were divided into methodological and applied papers, based on the aim of the study. A total of 118 papers were included, of which 50 % (n = 59) were methodological. A rapidly accumulating literature base on VOI is observed from 1999 onwards for methodological papers and from 2005 onwards for applied papers. Expected value of sample information (EVSI) is the preferred method of VOI to inform decision making regarding specific future studies, but real-life applications of EVSI remain scarce. Methodological challenges to VOI are numerous and include the high computational demands, dealing with non-linear models and interdependency between parameters, estimations of effective time horizons and patient populations, and structural uncertainties. VOI analysis receives increasing attention in both the methodological and the applied literature bases, but challenges to applying VOI in real-life decision making remain. For many technical and methodological challenges to VOI, analytic solutions have been proposed in the literature, including leaner methods for VOI. Further research should also focus on the needs of decision makers regarding VOI.

  14. Study of anyon condensation and topological phase transitions from a Z4 topological phase using the projected entangled pair states approach

    NASA Astrophysics Data System (ADS)

    Iqbal, Mohsin; Duivenvoorden, Kasper; Schuch, Norbert

    2018-05-01

    We use projected entangled pair states (PEPS) to study topological quantum phase transitions. The local description of topological order in the PEPS formalism allows us to set up order parameters which measure condensation and deconfinement of anyons and serve as substitutes for conventional order parameters. We apply these order parameters, together with anyon-anyon correlation functions and some further probes, to characterize topological phases and phase transitions within a family of models based on a Z4 symmetry, which contains Z4 quantum double, toric code, double semion, and trivial phases. We find a diverse phase diagram which exhibits a variety of different phase transitions of both first and second order which we comprehensively characterize, including direct transitions between the toric code and the double semion phase.

  15. Control system estimation and design for aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Stefani, R. T.; Williams, T. L.; Yakowitz, S. J.

    1972-01-01

    The selection of an estimator which is unbiased when applied to structural parameter estimation is discussed. The mathematical relationships for structural parameter estimation are defined. It is shown that a conventional weighted least squares (CWLS) estimate is biased when applied to structural parameter estimation. Two approaches to bias removal are suggested: (1) change the CWLS estimator or (2) change the objective function. The advantages of each approach are analyzed.

  16. Predicting nonstationary flood frequencies: Evidence supports an updated stationarity thesis in the United States

    NASA Astrophysics Data System (ADS)

    Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.

    2017-07-01

    Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
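
    A stationary LPIII fit can be sketched with standard tools as below. This is a maximum-likelihood fit on synthetic annual maxima, not the study's Bayesian estimation of the ST/NS/uST models; the data and quantile level are illustrative only.

```python
# Illustrative stationary log-Pearson Type III (LPIII) fit: Pearson III on the
# log of synthetic annual peak discharges, then a 100-year flood quantile.
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(42)
annual_peaks = rng.lognormal(mean=6.0, sigma=0.5, size=60)   # fake annual maxima

log_q = np.log10(annual_peaks)            # LPIII: Pearson III on log-discharge
skew, loc, scale = pearson3.fit(log_q)

# 100-year flood (1% annual exceedance probability) under the stationary fit.
q100 = 10 ** pearson3.ppf(0.99, skew, loc=loc, scale=scale)
print(skew, q100)
```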

  17. Applying data mining techniques to determine important parameters in chronic kidney disease and the relations of these parameters to each other.

    PubMed

    Tahmasebian, Shahram; Ghazisaeedi, Marjan; Langarizadeh, Mostafa; Mokhtaran, Mehrshad; Mahdavi-Mazdeh, Mitra; Javadian, Parisa

    2017-01-01

    Introduction: Chronic kidney disease (CKD) includes a wide range of pathophysiological processes observed along with abnormal kidney function and a progressive decrease in glomerular filtration rate (GFR). By definition, the decreased GFR must have been present for at least three months. CKD will eventually result in end-stage kidney disease. Different factors play a role in this process, and finding the relations between the effective parameters can help to prevent or slow progression of the disease. Large amounts of data are collected in patients' medical records, and this array of data is a valuable source for analysis, exploration and discovery of information. Objectives: Using data mining techniques, the present study aims to specify the effective parameters and to determine their relations with each other in Iranian patients with CKD. Material and Methods: The study population includes 31996 patients with CKD. First, all of the data were registered in the database; then data mining tools were used to find hidden rules and relationships between parameters in the collected data. Results: After data cleaning based on the CRISP-DM (Cross Industry Standard Process for Data Mining) methodology and running mining algorithms on the data in the database, the relationships between the effective parameters were specified. Conclusion: The study used data mining methods to identify the effective factors in patients with CKD.

  18. Applying data mining techniques to determine important parameters in chronic kidney disease and the relations of these parameters to each other

    PubMed Central

    Tahmasebian, Shahram; Ghazisaeedi, Marjan; Langarizadeh, Mostafa; Mokhtaran, Mehrshad; Mahdavi-Mazdeh, Mitra; Javadian, Parisa

    2017-01-01

    Introduction: Chronic kidney disease (CKD) includes a wide range of pathophysiological processes observed along with abnormal kidney function and a progressive decrease in glomerular filtration rate (GFR). By definition, the decreased GFR must have been present for at least three months. CKD will eventually result in end-stage kidney disease. Different factors play a role in this process, and finding the relations between the effective parameters can help to prevent or slow progression of the disease. Large amounts of data are collected in patients' medical records, and this array of data is a valuable source for analysis, exploration and discovery of information. Objectives: Using data mining techniques, the present study aims to specify the effective parameters and to determine their relations with each other in Iranian patients with CKD. Material and Methods: The study population includes 31996 patients with CKD. First, all of the data were registered in the database; then data mining tools were used to find hidden rules and relationships between parameters in the collected data. Results: After data cleaning based on the CRISP-DM (Cross Industry Standard Process for Data Mining) methodology and running mining algorithms on the data in the database, the relationships between the effective parameters were specified. Conclusion: The study used data mining methods to identify the effective factors in patients with CKD. PMID:28497080

  19. Air Pollution and Quality of Sperm: A Meta-Analysis

    PubMed Central

    Fathi Najafi, Tahereh; Latifnejad Roudsari, Robab; Namvar, Farideh; Ghavami Ghanbarabadi, Vahid; Hadizadeh Talasaz, Zahra; Esmaeli, Mahin

    2015-01-01

    Context: Air pollution is common in all countries and affects reproductive functions in men and women. It particularly impacts sperm parameters in men. This meta-analysis aimed to examine the impact of air pollution on the quality of sperm. Evidence Acquisition: The scientific databases Medline, PubMed, Scopus, Google Scholar, Cochrane Library, and Elsevier were searched to identify relevant articles published between 1978 and 2013. In the first step, 76 articles were selected. These were ecological correlation, cohort, retrospective, cross-sectional, and case-control studies found through electronic and hand searches of references on air pollution and male infertility. The outcome measurement was the change in sperm parameters. A total of 11 articles were ultimately included in a meta-analysis to examine the impact of air pollution on sperm parameters. The authors applied meta-analysis sheets from the Cochrane Library; the mean and standard deviation of the sperm parameters were then extracted, and finally their confidence intervals (CIs) were compared to the CIs of the standard parameters. Results: The CIs for the pooled means were as follows: 2.68 ± 0.32 for ejaculation volume (mL), 62.1 ± 15.88 for sperm concentration (million per milliliter), 39.4 ± 5.52 for sperm motility (%), 23.91 ± 13.43 for sperm morphology (%) and 49.53 ± 11.08 for sperm count. Conclusions: The results of this meta-analysis showed that air pollution reduces sperm motility, but has no impact on the other sperm parameters of the spermogram. PMID:26023349

  20. Air pollution and quality of sperm: a meta-analysis.

    PubMed

    Fathi Najafi, Tahereh; Latifnejad Roudsari, Robab; Namvar, Farideh; Ghavami Ghanbarabadi, Vahid; Hadizadeh Talasaz, Zahra; Esmaeli, Mahin

    2015-04-01

    Air pollution is common in all countries and affects reproductive functions in men and women. It particularly impacts sperm parameters in men. This meta-analysis aimed to examine the impact of air pollution on the quality of sperm. The scientific databases Medline, PubMed, Scopus, Google Scholar, Cochrane Library, and Elsevier were searched to identify relevant articles published between 1978 and 2013. In the first step, 76 articles were selected. These were ecological correlation, cohort, retrospective, cross-sectional, and case-control studies found through electronic and hand searches of references on air pollution and male infertility. The outcome measurement was the change in sperm parameters. A total of 11 articles were ultimately included in a meta-analysis to examine the impact of air pollution on sperm parameters. The authors applied meta-analysis sheets from the Cochrane Library; the mean and standard deviation of the sperm parameters were then extracted, and finally their confidence intervals (CIs) were compared to the CIs of the standard parameters. The CIs for the pooled means were as follows: 2.68 ± 0.32 for ejaculation volume (mL), 62.1 ± 15.88 for sperm concentration (million per milliliter), 39.4 ± 5.52 for sperm motility (%), 23.91 ± 13.43 for sperm morphology (%) and 49.53 ± 11.08 for sperm count. The results of this meta-analysis showed that air pollution reduces sperm motility, but has no impact on the other sperm parameters of the spermogram.

  1. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Zhou, Jianmei; Wang, Jianxun; Shang, Qinglong; Wang, Hongnian; Yin, Changchun

    2014-04-01

    We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. Popular inversion methods parameterize the medium into a large number of layers of fixed thickness and reconstruct only the conductivities (e.g., Occam's inversion), which does not enable recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive analytic expressions for the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion significantly improves the results: the algorithm not only reconstructs the sharp interfaces between layers, but also obtains conductivities close to the true values.
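
    For readers unfamiliar with regularized iterative inversion, a single damped least-squares (Tikhonov) Gauss-Newton update is sketched below. The Jacobian here is a random stand-in for the analytic Fréchet derivatives, and the parameter count and regularization weight are assumptions.

```python
# Generic sketch of one regularized Gauss-Newton model update of the kind used
# in iterative inversion; all matrices and numbers are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 40, 9          # e.g. 3 layers x (sigma_h, sigma_v, depth)

J = rng.standard_normal((n_data, n_model))        # stand-in Fréchet derivative matrix
residual = rng.standard_normal(n_data)            # observed minus predicted data
m = np.zeros(n_model)                             # current model parameters
lam = 1e-1                                        # regularization parameter

# Solve (J^T J + lam * I) dm = J^T residual  (Tikhonov / damped least squares)
lhs = J.T @ J + lam * np.eye(n_model)
rhs = J.T @ residual
dm = np.linalg.solve(lhs, rhs)
print(m + dm)                                     # updated model estimate
```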

  2. [Patient first - The impact of characteristics of target populations on decisions about therapy effectiveness of complex interventions: Psychological variables to assess effectiveness in interdisciplinary multimodal pain therapy].

    PubMed

    Kaiser, Ulrike; Sabatowski, Rainer; Balck, Friedrich

    2017-08-01

    The assessment of treatment effectiveness in public health settings is ensured by indicators that reflect the changes caused by specific interventions. These indicators are also applied in benchmarking systems. The selection of constructs should be guided by their relevance for affected patients (patient-reported outcomes). Interdisciplinary multimodal pain therapy (IMPT) is a complex intervention based on a biopsychosocial understanding of chronic pain. For quality assurance purposes, psychological parameters (depression, general anxiety, health-related quality of life) are included in the standardized therapy assessment in pain medicine (KEDOQ), which can also be used for comparative analyses in a benchmarking system. The aim of the present study was to investigate the relevance of depressive symptoms, general anxiety and mental quality of life in patients undergoing IMPT under real-life conditions. In this retrospective, one-armed, exploratory observational study we used secondary data from the routine documentation of IMPT in routine care, applying several variables of the German Pain Questionnaire and the facility's comprehensive basic documentation. 352 participants who underwent IMPT (from 2006 to 2010) were included, and follow-up was performed over two years with six assessments. Because of statistically heterogeneous characteristics, a complex analysis consisting of factor and cluster analyses was applied to build subgroups. These subgroups were explored to identify differences in depressive symptoms (HADS-D), general anxiety (HADS-A) and mental quality of life (SF-36 PSK) at the time of therapy admission, and their development was estimated by means of effect sizes. Analyses were performed using SPSS 21.0®. Six subgroups were derived and mainly proved to be clinically and psychologically normal, with the exception of one subgroup that consistently showed psychological impairment for all three parameters. The follow-up of the total study population revealed medium or large effects; changes in the subgroups were consistently driven by two subgroups, while the other four showed little or no change. In summary, only a small proportion of the target population (20 %) demonstrated clinically relevant scores in the psychological parameters applied. When selecting indicators for quality assurance, the heterogeneity of the target populations as well as conceptual and methodological aspects should be considered. The characteristics of the intended parameters, along with the clinical and personal relevance of indicators for patients, should be investigated by specific procedures such as patient surveys and statistical analyses. Copyright © 2017. Published by Elsevier GmbH.

  3. Diode Laser for Laryngeal Surgery: a Systematic Review.

    PubMed

    Arroyo, Helena Hotz; Neri, Larissa; Fussuma, Carina Yuri; Imamura, Rui

    2016-04-01

    Introduction: The diode laser has been frequently used in the management of laryngeal disorders. The portability and functional diversity of this tool make it a reasonable alternative to conventional lasers. However, when the diode laser is applied in transoral laser microsurgery, the ideal parameters, outcomes, and adverse effects remain unclear. Objective: The main objective of this systematic review is to provide a reliable evaluation of the use of the diode laser in laryngeal diseases, seeking to clarify its ideal parameters in the larynx, as well as its outcomes and complications. Data Synthesis: We included eleven studies in the final analysis. From the included articles, we collected data on patient and lesion characteristics, treatment (diode laser parameters used in surgery), and outcomes related to the laser surgery performed. Only two studies were prospective and there were no randomized controlled trials. Most of the evidence suggests that the diode laser can be a useful tool for the treatment of different pathologies in the larynx. In this sense, the parameters must be set depending on the goal (vaporization, section, or coagulation) and the clinical problem. Conclusion: The literature lacks studies on the ideal parameters of the diode laser in laryngeal surgery. The available data indicate that the diode laser is a useful tool that should be considered in laryngeal surgeries. Thus, large, well-designed studies comparing the diode laser with other lasers are needed to better estimate its effects.

  4. Diode Laser for Laryngeal Surgery: a Systematic Review

    PubMed Central

    Arroyo, Helena Hotz; Neri, Larissa; Fussuma, Carina Yuri; Imamura, Rui

    2016-01-01

    Introduction: The diode laser has been frequently used in the management of laryngeal disorders. The portability and functional diversity of this tool make it a reasonable alternative to conventional lasers. However, when the diode laser is applied in transoral laser microsurgery, the ideal parameters, outcomes, and adverse effects remain unclear. Objective: The main objective of this systematic review is to provide a reliable evaluation of the use of the diode laser in laryngeal diseases, seeking to clarify its ideal parameters in the larynx, as well as its outcomes and complications. Data Synthesis: We included eleven studies in the final analysis. From the included articles, we collected data on patient and lesion characteristics, treatment (diode laser parameters used in surgery), and outcomes related to the laser surgery performed. Only two studies were prospective and there were no randomized controlled trials. Most of the evidence suggests that the diode laser can be a useful tool for the treatment of different pathologies in the larynx. In this sense, the parameters must be set depending on the goal (vaporization, section, or coagulation) and the clinical problem. Conclusion: The literature lacks studies on the ideal parameters of the diode laser in laryngeal surgery. The available data indicate that the diode laser is a useful tool that should be considered in laryngeal surgeries. Thus, large, well-designed studies comparing the diode laser with other lasers are needed to better estimate its effects. PMID:27096024

  5. Exercise Capacity Assessment by the Modified Shuttle Walk Test and its Correlation with Biochemical Parameters in Obese Children and Adolescents.

    PubMed

    de Assumpção, Priscila Kurz; Heinzmann-Filho, João Paulo; Isaia, Heloisa Ataíde; Holzschuh, Flávia; Dalcul, Tiéle; Donadio, Márcio Vinícius Fagundes

    2018-03-23

    To evaluate the exercise capacity of obese children and adolescents compared with normal-weight individuals and to investigate possible correlations with blood biochemical parameters. In this study, children and adolescents between 6 and 18 years of age were included and divided into control (eutrophic) and obese groups according to body mass index (BMI). Data were collected on demographic and anthropometric characteristics, waist circumference and exercise capacity, assessed through the Modified Shuttle Walk Test (MSWT). In the obese group, biochemical parameters in the blood (total cholesterol, HDL, LDL, triglycerides and glucose) were evaluated, and a physical activity questionnaire was applied. Seventy-seven participants were included: 27 in the control group and 50 obese. There was no significant difference between the two groups regarding sample characteristics, except for body weight, BMI and waist circumference. Most obese children presented results of biochemical tests within the desirable limit, though none were considered active. There was a significant exercise capacity reduction (p < 0.001) in the obese group compared to control subjects. Positive correlations were identified for the MSWT with age and height, and a negative correlation with BMI. However, there were no correlations with the biochemical parameters analyzed. Obese children and adolescents have reduced exercise capacity when compared to normal-weight individuals. MSWT performance seems to have a negative association with BMI, but is not correlated with blood biochemical parameters.

  6. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet modellers often still try to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by freely adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? In other words, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, ...), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ for hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
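
    The treatment of a rainfall multiplier as one more uncertain parameter can be illustrated with a self-contained first-order Sobol' index estimate. The toy runoff model, parameter ranges and sample size below are assumptions made only to show the bookkeeping of the Saltelli-style pick-freeze estimator; they do not represent SWAT or NAM.

```python
# Self-contained Monte Carlo estimate of first-order Sobol' indices (Saltelli
# 2010 pick-freeze estimator) for a toy model with a rainfall multiplier plus
# two model parameters.
import numpy as np

rng = np.random.default_rng(7)
N, d = 20000, 3                                  # samples, parameters

def model(X):
    mult, k, s = X[:, 0], X[:, 1], X[:, 2]       # rainfall multiplier, recession, storage
    rain = 50.0 * mult                            # perturbed event rainfall (mm)
    return np.maximum(rain - s, 0.0) * k          # toy event runoff

low = np.array([0.7, 0.2, 5.0])                  # assumed parameter ranges
high = np.array([1.3, 0.9, 30.0])
A = low + (high - low) * rng.random((N, d))
B = low + (high - low) * rng.random((N, d))

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # replace column i with B's values
    S_i = np.mean(fB * (model(ABi) - fA)) / var   # first-order index estimator
    print(f"S_{i} = {S_i:.2f}")
```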

  7. Evaluation of experimental design and computational parameter choices affecting analyses of ChIP-seq and RNA-seq data in undomesticated poplar trees.

    Treesearch

    Lijun Liu; V. Missirian; Matthew S. Zinkgraf; Andrew Groover; V. Filkov

    2014-01-01

    Background: One of the great advantages of next generation sequencing is the ability to generate large genomic datasets for virtually all species, including non-model organisms. It should be possible, in turn, to apply advanced computational approaches to these datasets to develop models of biological processes. In a practical sense, working with non-model organisms...

  8. Data for methyl bromide decon testing

    EPA Pesticide Factsheets

    Spreadsheets containing data for recovery of spores from different materials. Data on the fumigation parameters are also included. This dataset is associated with the following publication: Wood, J., M. Wendling, W. Richter, A. Lastivka, and L. Mickelsen. Evaluation of the Efficacy of Methyl Bromide in the Decontamination of Building and Interior Materials Contaminated with Bacillus anthracis Spores. APPLIED AND ENVIRONMENTAL MICROBIOLOGY. American Society for Microbiology, Washington, DC, USA, 1-28, (2016).

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Churchill, R. Michael

    Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
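
    As a small stand-in for that workflow, the sketch below clusters flattened synthetic velocity-space "distribution functions" with scikit-learn rather than Spark MLlib on real simulation output; the data shapes and cluster count are assumptions for illustration.

```python
# Stand-in sketch: k-means clustering of flattened particle distribution
# function images in velocity space, using synthetic data and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_nodes, nv_par, nv_perp = 500, 16, 16
vpar = np.linspace(-3, 3, nv_par)[:, None]
vperp = np.linspace(0, 3, nv_perp)[None, :]
base = np.exp(-(vpar ** 2 + vperp ** 2))          # Maxwellian-like background
data = np.stack([base * (1 + 0.3 * rng.standard_normal()) +
                 0.05 * rng.standard_normal((nv_par, nv_perp))
                 for _ in range(n_nodes)])
X = data.reshape(n_nodes, -1)                     # one flattened f(v) per spatial node

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))                        # cluster sizes across spatial nodes
```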

  10. Visibility of quantum graph spectrum from the vertices

    NASA Astrophysics Data System (ADS)

    Kühn, Christian; Rohleder, Jonathan

    2018-03-01

    We investigate the relation between the eigenvalues of the Laplacian with Kirchhoff vertex conditions on a finite metric graph and a corresponding Titchmarsh-Weyl function (a parameter-dependent Neumann-to-Dirichlet map). We give a complete description of all real resonances, including multiplicities, in terms of the edge lengths and the connectivity of the graph, and apply it to characterize all eigenvalues which are visible for the Titchmarsh-Weyl function.

  11. Development of weight/sizing design synthesis computer program. Volume 3: User Manual

    NASA Technical Reports Server (NTRS)

    Garrison, J. M.

    1973-01-01

    The user manual for the weight/sizing design synthesis program is presented. The program is applied to an analysis of the basic weight relationships for the space shuttle which contribute significant portions of the inert weight. The relationships measure the parameters of load, geometry, material, and environment. A verbal description of the processes simulated, data input procedures, output data, and values present in the program is included.

  12. Copula Multivariate analysis of Gross primary production and its hydro-environmental driver; A BIOME-BGC model applied to the Antisana páramos

    NASA Astrophysics Data System (ADS)

    Minaya, Veronica; Corzo, Gerald; van der Kwast, Johannes; Galarraga, Remigio; Mynett, Arthur

    2014-05-01

    Simulations of carbon cycling are prone to uncertainties from different sources, which in general are related to input data, parameters and the representation capacity of the model itself. The gross carbon uptake in the cycle is represented by the gross primary production (GPP), which reflects the spatio-temporal variability of precipitation and soil moisture dynamics. This variability, together with parameter uncertainty, can be modelled by multivariate probability distributions. Our study presents a novel methodology that uses multivariate copula analysis to assess the GPP. Multi-species and elevation variables are included in a first scenario of the analysis; hydro-meteorological conditions that might change over the next 50 or more years are included in a second scenario. The biogeochemical model BIOME-BGC was applied in the Ecuadorian Andean region at elevations greater than 4000 m a.s.l., where typical páramo vegetation is present. The change of GPP over time is crucial for climate scenarios of carbon cycling in this type of ecosystem. The results help to improve our understanding of ecosystem function and clarify the dynamics and their relationship with changing climate variables. Keywords: multivariate analysis, Copula, BIOME-BGC, NPP, páramos
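
    The copula construction itself can be sketched for two hydro-environmental drivers. The marginals, correlation and sample size below are assumed values chosen only to illustrate how a Gaussian copula couples arbitrary marginal distributions; they are not the study's fitted quantities.

```python
# Sketch of a bivariate Gaussian copula linking two assumed GPP drivers
# (e.g. precipitation and soil moisture) with non-Gaussian marginals.
import numpy as np
from scipy.stats import norm, gamma, beta

rng = np.random.default_rng(4)
rho, n = 0.6, 5000

# 1) Sample correlated standard normals.
L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
z = rng.standard_normal((n, 2)) @ L.T
# 2) Map to uniforms (the copula) and then through chosen marginal inverses.
u = norm.cdf(z)
precip = gamma.ppf(u[:, 0], a=2.0, scale=5.0)     # mm/day, assumed marginal
soil_moisture = beta.ppf(u[:, 1], a=2.0, b=3.0)   # fraction, assumed marginal

print(np.corrcoef(precip, soil_moisture)[0, 1])   # induced dependence
```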

  13. Total Arsenic, Cadmium, and Lead Determination in Brazilian Rice Samples Using ICP-MS

    PubMed Central

    Buzzo, Márcia Liane; de Arauz, Luciana Juncioni; Carvalho, Maria de Fátima Henriques; Arakaki, Edna Emy Kumagai; Matsuzaki, Richard; Tiglea, Paulo

    2016-01-01

    This study aimed to investigate a suitable method for rice sample preparation and to validate and apply the method for monitoring the concentrations of total arsenic, cadmium, and lead in rice using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Various rice sample preparation procedures were evaluated. The analytical method was validated by measuring several parameters, including the limit of detection (LOD), limit of quantification (LOQ), linearity, relative bias, and repeatability. Regarding sample preparation, recoveries of spiked samples were within the acceptable range: 89.3 to 98.2% for the muffle furnace, 94.2 to 103.3% for the heating block, 81.0 to 115.0% for the hot plate, and 92.8 to 108.2% for the microwave. The validation parameters showed that the method is fit for its purpose, with total arsenic, cadmium, and lead within the Brazilian legislation limits. The method was applied to analyze 37 rice samples (including polished, brown, and parboiled) consumed by the Brazilian population. Total arsenic, cadmium, and lead contents were lower than the established legislative values, except for total arsenic in one brown rice sample. This study indicated the need to establish monitoring programs emphasizing this type of cereal, with the aim of promoting public health. PMID:27766178

  14. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in recent reservoir characterization workflows ensures consistency between micro- and macro-scale information, represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were produced. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
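
    The feed-forward network workflow (log-transform of the target, 80/20 split, error scoring) can be sketched as below. The synthetic inputs and the toy relation generating the target stand in for the published Arab-D dataset; only the general procedure is illustrated.

```python
# Hedged sketch of a feed-forward network predicting log-permeability from
# porosity, grain density and Thomeer-like parameters, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 400
phi = rng.uniform(0.05, 0.30, n)                 # air porosity
rho = rng.uniform(2.60, 2.75, n)                 # grain density
pd_ = rng.uniform(10, 500, n)                    # assumed Thomeer entry-pressure proxy
g = rng.uniform(0.1, 2.0, n)                     # assumed Thomeer pore-geometry factor
X = np.column_stack([phi, rho, pd_, g])
log_k = 3 + 8 * phi - 0.004 * pd_ - 0.5 * g + 0.1 * rng.standard_normal(n)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
print("R^2 on held-out 20%:", model.score(scaler.transform(X_te), y_te))
```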

  15. Decorrelation Times of Photospheric Fields and Flows

    NASA Technical Reports Server (NTRS)

    Welsch, B. T.; Kusano, K.; Yamamoto, T. T.; Muglach, K.

    2012-01-01

    We use autocorrelation to investigate evolution in flow fields inferred by applying Fourier Local Correlation Tracking (FLCT) to a sequence of high-resolution (0.3"), high-cadence (approx. 2 min) line-of-sight magnetograms of NOAA active region (AR) 10930 recorded by the Narrowband Filter Imager (NFI) of the Solar Optical Telescope (SOT) aboard the Hinode satellite over 12-13 December 2006. To baseline the timescales of flow evolution, we also autocorrelated the magnetograms, at several spatial binnings, to characterize the lifetimes of active region magnetic structures versus spatial scale. Autocorrelation of flow maps can be used to optimize tracking parameters, to understand tracking algorithms' susceptibility to noise, and to estimate flow lifetimes. The tracking parameters varied include the time interval Delta t between the magnetogram pairs tracked, the spatial binning applied to the magnetograms, and the windowing parameter sigma used in FLCT. Flow structures vary over a range of spatial and temporal scales (including unresolved scales), so tracked flows represent a local average of the flow over a particular range of space and time. We define the flow lifetime to be the flow decorrelation time, tau. For Delta t > tau, tracking results represent the average velocity over one or more flow lifetimes. We analyze lifetimes of flow components, divergences, and curls as functions of magnetic field strength and spatial scale. We find a significant trend of increasing lifetimes of flow components, divergences, and curls with field strength, consistent with Lorentz forces partially governing flows in the active photosphere, as well as strong trends of increasing flow lifetime and decreasing magnitudes with increases in both spatial scale and Delta t.
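
    The decorrelation-time estimate can be sketched on a synthetic map sequence: autocorrelate the sequence and take the lag at which the mean autocorrelation first drops below 1/e. The AR(1) data, cadence and sizes below are assumed values, not the Hinode observations.

```python
# Sketch of a decorrelation-time estimate from a sequence of synthetic maps.
import numpy as np

rng = np.random.default_rng(11)
n_t, ny, nx = 200, 32, 32
maps = np.empty((n_t, ny, nx))
maps[0] = rng.standard_normal((ny, nx))
for t in range(1, n_t):                           # AR(1) evolution with known memory
    maps[t] = 0.9 * maps[t - 1] + np.sqrt(1 - 0.9 ** 2) * rng.standard_normal((ny, nx))

flat = maps.reshape(n_t, -1)
flat -= flat.mean(axis=0)
denom = np.sum(flat ** 2, axis=0)
lags = np.arange(1, 60)
acf = np.array([np.sum(flat[l:] * flat[:-l], axis=0).mean() / denom.mean() for l in lags])

cadence_min = 2.0                                 # assumed ~2 min magnetogram cadence
tau_idx = np.argmax(acf < 1 / np.e)               # first lag below 1/e
print("decorrelation time ~", lags[tau_idx] * cadence_min, "min")
```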

  16. Logarithmic and power law input-output relations in sensory systems with fold-change detection.

    PubMed

    Adler, Miri; Mayo, Avi; Alon, Uri

    2014-08-01

    Two central biophysical laws describe sensory responses to input signals. One is a logarithmic relationship between input and output, and the other is a power law relationship. These laws are sometimes called the Weber-Fechner law and the Stevens power law, respectively. The two laws are found in a wide variety of human sensory systems including hearing, vision, taste, and weight perception; they also occur in the responses of cells to stimuli. However the mechanistic origin of these laws is not fully understood. To address this, we consider a class of biological circuits exhibiting a property called fold-change detection (FCD). In these circuits the response dynamics depend only on the relative change in input signal and not its absolute level, a property which applies to many physiological and cellular sensory systems. We show analytically that by changing a single parameter in the FCD circuits, both logarithmic and power-law relationships emerge; these laws are modified versions of the Weber-Fechner and Stevens laws. The parameter that determines which law is found is the steepness (effective Hill coefficient) of the effect of the internal variable on the output. This finding applies to major circuit architectures found in biological systems, including the incoherent feed-forward loop and nonlinear integral feedback loops. Therefore, if one measures the response to different fold changes in input signal and observes a logarithmic or power law, the present theory can be used to rule out certain FCD mechanisms, and to predict their cooperativity parameter. We demonstrate this approach using data from eukaryotic chemotaxis signaling.

  17. VizieR Online Data Catalog: Vela Junior (RX J0852.0-4622) HESS image (HESS+, 2018)

    NASA Astrophysics Data System (ADS)

    H. E. S. S. Collaboration; Abdalla, H.; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Akhperjanian, A. G.; Andersson, T.; Anguener, E. O.; Arakawa, M.; Arrieta, M.; Aubert, P.; Backes, M.; Balzer, A.; Barnard, M.; Becherini, Y.; Becker Tjus, J.; Berge, D.; Bernhard, S.; Bernloehr, K.; Blackwell, R.; Boettcher, M.; Boisson, C.; Bolmont, J.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Buechele, M.; Bulik, T.; Capasso, M.; Carr, J.; Casanova, S.; Cerruti, M.; Chakraborty, N.; Chalme-Calvet, R.; Chaves, R. C. G.; Chen, A.; Chevalier, J.; Chretien, M.; Coffaro, M.; Colafrancesco, S.; Cologna, G.; Condon, B.; Conrad, J.; Cui, Y.; Davids, I. D.; Decock, J.; Degrange, B.; Deil, C.; Devin, J.; Dewilt, P.; Dirson, L.; Djannati-Atai, A.; Domainko, W.; Donath, A.; Drury, L. O'c.; Dutson, K.; Dyks, J.; Edwards, T.; Egberts, K.; Eger, P.; Ernenwein, J.-P.; Eschbach, S.; Farnier, C.; Fegan, S.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Foerster, A.; Funk, S.; Fuessling, M.; Gabici, S.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Giavitto, G.; Giebels, B.; Glicenstein, J. F.; Gottschall, D.; Goyal, A.; Grondin, M.-H.; Hahn, J.; Haupt, M.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hervet, O.; Hinton, J. A.; Hofmann, W.; Hoischen, C.; Holler, M.; Horns, D.; Ivascenko, A.; Iwasaki, H.; Jacholkowska, A.; Jamrozy, M.; Janiak, M.; Jankowsky, D.; Jankowsky, F.; Jingo, M.; Jogler, T.; Jouvin, L.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzynski, K.; Katsuragawa, M.; Katz, U.; Kerszberg, D.; Khangulyan, D.; Khelifi, B.; Kieffer, M.; King, J.; Klepser, S.; Klochkov, D.; Kluzniak, W.; Kolitzus, D.; Komin, Nu.; Kosack, K.; Krakau, S.; Kraus, M.; Krueger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lees, J.-P.; Lefaucheur, J.; Lefranc, V.; Lemiere, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Leser, E.; Lohse, T.; Lorentz, M.; Liu, R.; Lopez-Coto, R.; Lypova, I.; Marandon, V.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. J.; Meyer, M.; Mitchell, A. M. W.; Moderski, R.; Mohamed, M.; Mohrmann, L.; Mora, K.; Moulin, E.; Murach, T.; Nakashima, S.; de Naurois, M.; Niederwanger, F.; Niemiec J.; Oakes, L.; O'Brien, P.; Odaka, H.; Oettl, S.; Ohm, S.; Ostrowski, M.; Oya, I.; Padovani, M.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perennes, C.; Petrucci, P.-O.; Peyaud, B.; Piel, Q.; Pita, S.; Poon, H.; Prokhorov, D.; Prokoph, H.; Puehlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Reimer, A.; Reimer, O.; Renaud, M.; de Los Reyes, R.; Richter, S.; Rieger, F.; Romoli, C.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Saito, S.; Salek, D.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schlickeiser, R.; Schuessler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seglar-Arroyo, M.; Settimo, M.; Seyffert, A. S.; Shafi, N.; Shilon, I.; Simoni, R.; Sol, H.; Spanier, F.; Spengler, G.; Spies, F.; Stawarz, L.; Steenkamp, R.; Stegmann, C.; Stycz, K.; Sushch, I.; Takahashi, T.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tibaldo, L.; Tiziani, D.; Tluczykont, M.; Trichard, C.; Tsuji, N.; Tuffs, R.; Uchiyama, Y.; van der, Walt D. J.; van Eldik, C.; van Rensburg, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Voelk, H. J.; Vuillaume, T.; Wadiasingh, Z.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; White, R.; Wierzcholska, A.; Willmann, P.; Woernlein, A.; Wouters, D.; Yang, R.; Zabalza, V.; Zaborov, D.; Zacharias, M.; Zanin, R.; Zdziarski, A. 
A.; Zech, A.; Zefi, F.; Ziegler, A.; Zywucka, N.

    2018-03-01

    skymap.fit: H.E.S.S. excess skymap in FITS format of the region comprising Vela Junior and its surroundings. The excess map has been corrected for the gradient of exposure and smoothed with a Gaussian function of width 0.08° to match the analysis point spread function, following the procedure applied to derive the maps in Fig. 1. sp_stat.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent statistical uncertainties at the 1-sigma confidence level. The covariance matrix of the fit is also included in the format: c_11 c_12 c_13 c_21 c_22 c_23 c_31 c_32 c_33, where the subindices represent the following parameters of the power-law with exponential cut-off (ECPL) formula in Tab. 2: 1: flux normalization (Phi0); 2: spectral index (Gamma); 3: inverse of the cutoff energy (lambda = 1/Ecut). The units for the covariance matrix are the same as for the fit parameters. Note that, while the fit parameters section of the file lists E_cut as a parameter, the fit was done in lambda = 1/Ecut; hence the covariance matrix shows the values for lambda in TeV^-1. sp_syst.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent systematic uncertainties at the 1-sigma confidence level. The integral fluxes for several energy ranges are also included. (4 data files).
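    As an illustration of how the covariance matrix described above might be used, the sketch below propagates the 1-sigma uncertainty on lambda = 1/Ecut to an uncertainty on Ecut by first-order error propagation. The matrix entries and the best-fit lambda are placeholders, not values from the H.E.S.S. data release.

```python
import numpy as np

# Hypothetical 3x3 covariance matrix for the ECPL parameters
# (Phi0, Gamma, lambda = 1/Ecut); the numbers are placeholders,
# not the published H.E.S.S. values.
cov = np.array([
    [2.5e-26, -1.1e-14, 3.0e-15],
    [-1.1e-14, 4.0e-3,  -9.0e-4],
    [3.0e-15, -9.0e-4,   6.0e-4],   # var(lambda) in TeV^-2
])

lam = 0.15                      # assumed best-fit lambda in TeV^-1
sigma_lam = np.sqrt(cov[2, 2])  # 1-sigma error on lambda

# First-order error propagation: Ecut = 1/lambda,
# so sigma_Ecut ~= sigma_lambda / lambda^2.
ecut = 1.0 / lam
sigma_ecut = sigma_lam / lam**2
print(f"Ecut = {ecut:.2f} +/- {sigma_ecut:.2f} TeV")
```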

  18. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate over the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
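    A minimal sketch of the inter-group variance idea mentioned above: attribute the spread of one simulated quantity across ensemble members to the choice of one scheme family using a one-way, ANOVA-style variance decomposition. The scheme names, the flux values and the 120-member layout are synthetic placeholders, not the WRF ensemble of the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical ensemble summary: one row per WRF member, with the chosen
# PBL scheme and a simulated latent heat flux (W m^-2); values are synthetic.
df = pd.DataFrame({
    "pbl_scheme": rng.choice(["YSU", "MYJ", "ACM2"], size=120),
    "latent_heat_flux": rng.normal(110.0, 15.0, size=120),
})

grand_mean = df["latent_heat_flux"].mean()
groups = df.groupby("pbl_scheme")["latent_heat_flux"]

# Between-group (inter-scheme) sum of squares versus total sum of squares.
ss_between = (groups.size() * (groups.mean() - grand_mean) ** 2).sum()
ss_total = ((df["latent_heat_flux"] - grand_mean) ** 2).sum()

print(f"Fraction of variance explained by PBL scheme: {ss_between / ss_total:.2%}")
```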

  19. User's Guide for Monthly Vector Wind Profile Model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1999-01-01

    The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.

  20. A Computer Program for Drip Irrigation System Design for Small Plots

    NASA Astrophysics Data System (ADS)

    Philipova, Nina; Nicheva, Olga; Kazandjiev, Valentin; Chilikova-Lubomirova, Mila

    2012-12-01

    A computer program has been developed for the design of surface drip irrigation systems. It can be applied to small-scale fields with an area of up to 10 ha. The program includes two main parts: crop water requirements and hydraulic calculations of the system. It has been developed with a graphical user interface in MATLAB and allows some parameters to be selected from tables, such as agro-physical soil properties, characteristics of the corresponding crop, and climatic data. It also allows the user to assume and set definite values, for example the emitter discharge and plot parameters. Eight cases of system layout, according to the water source position and the number of plots in operation, are built into the hydraulic section of the program. This section includes the design of laterals, manifolds and the main line, as well as pump calculations. The program has been compiled to work in Windows.

  1. Thermofluid Analysis of Magnetocaloric Refrigeration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan

    While there have been extensive studies on the thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters that can be applied to a generic system has not been reported yet. In this study, a numerical model has been developed for optimization of an active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study on a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve the performance of competing parameters, including COP, cooling capacity and temperature span, a new parameter called AMR performance index-1 has been introduced in order to perform multi-objective optimization and simultaneously exploit all these parameters. The multi-objective optimization is carried out for a wide range of the non-dimensional parameters. The results of this study will provide general guidelines for designing high-performance AMR systems.

  2. A genesis potential index for Western North Pacific tropical cyclones by using oceanic parameters

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Lei; Chen, Dake; Wang, Chunzai

    2016-09-01

    This study attempts to create a tropical cyclone (TC) genesis potential index (GPI) by considering oceanic parameters and necessary atmospheric parameters. Based on the general understanding of the oceanic impacts on TC genesis, many candidate factors are evaluated and discriminated, resulting in a new index, which is referred to as GPIocean. GPIocean includes the parameters of (1) absolute vorticity at 1000 hPa, (2) net sea surface longwave radiation, (3) mean ocean temperature in the upper mixed layer, and (4) depth of the 26°C isotherm. GPIocean is comparable to existing GPIs in representing the seasonal and interannual variations of TC genesis over the western North Pacific. The same procedure can be applied to create a similar GPI for other ocean basins. In the context of climate change, this new index is expected to be useful for evaluating the oceanic influences on TC genesis by using ocean reanalysis products and climate model outputs.

  3. Variation of Supergranule Parameters with Solar Cycles: Results from Century-long Kodaikanal Digitized Ca ii K Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Subhamoy; Mandal, Sudip; Banerjee, Dipankar, E-mail: dipu@iiap.res.in

    The Ca ii K spectroheliograms spanning over a century (1907–2007) from Kodaikanal Solar Observatory, India, have recently been digitized and calibrated. Applying a fully automated algorithm (which includes contrast enhancement and the “Watershed method”) to these data, we have identified the supergranules and calculated the associated parameters, such as scale, circularity, and fractal dimension. We have segregated the quiet and active regions and obtained the supergranule parameters separately for these two domains. In this way, we have isolated the effect of large-scale and small-scale magnetic fields on these structures and find a significantly different behavior of the supergranule parameters over solar cycles. These differences indicate intrinsic changes in the physical mechanism behind the generation and evolution of supergranules in the presence of small-scale and large-scale magnetic fields. This also highlights the need for further studies using solar dynamo theory along with magneto-convection models.
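    A minimal sketch, assuming a recent scikit-image, of the contrast-enhancement plus watershed segmentation step described above, applied to a synthetic stand-in for a Ca ii K image; the smoothing scale, marker spacing and image are assumptions, not the Kodaikanal pipeline settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import exposure, feature, segmentation

# Synthetic stand-in for a calibrated Ca ii K intensity image.
rng = np.random.default_rng(1)
image = ndi.gaussian_filter(rng.random((256, 256)), sigma=8)

# Contrast enhancement (here: adaptive histogram equalization).
enhanced = exposure.equalize_adapthist(image)

# Markers at local intensity maxima, then watershed on the inverted image
# so bright cell interiors grow out to the darker lane boundaries.
coords = feature.peak_local_max(enhanced, min_distance=10)
markers = np.zeros(enhanced.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = segmentation.watershed(-enhanced, markers)

# Per-cell parameters, e.g. area (a crude proxy for the supergranule scale).
areas = np.bincount(labels.ravel())[1:]
print(f"{labels.max()} cells, mean area {areas.mean():.1f} px")
```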

  4. Uncovering the effective interval of resolution parameter across multiple community optimization measures

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Cheng, Qing; Mao, He-Jin; Wang, Huanian; Chen, Junhua

    2017-03-01

    The study of community structure is a primary focus of network analysis, which has attracted a large amount of attention. In this paper, we focus on two famous functions, i.e., the Hamiltonian function H and the modularity density measure D, and intend to uncover the effective thresholds of their corresponding resolution parameter γ without the resolution limit problem. Two widely used example networks are employed, including the ring network of lumps as well as the ad hoc network. In these two networks, we use discrete convex analysis to study the interval of the resolution parameter of H and D that will not cause misidentification. By comparison, we find that in both examples, for the Hamiltonian function H, the larger the value of the resolution parameter γ, the less the network suffers from the resolution limit; while for the modularity density D, the network suffers less from the resolution limit when we decrease the value of γ. Our framework is mathematically strict and efficient and can be applied in many scientific fields.
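    The resolution-limit behaviour discussed above can be illustrated with modularity-based community detection on a ring of cliques. The sketch below, assuming networkx 2.8 or later, simply scans the resolution parameter γ; it is not the discrete convex analysis of the paper.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Ring of 16 cliques of 5 nodes each: the classic resolution-limit test case.
G = nx.ring_of_cliques(16, 5)

# Scan the resolution parameter gamma; at small gamma adjacent cliques tend
# to be merged (resolution limit), at larger gamma each clique is recovered.
for gamma in (0.1, 0.5, 1.0, 2.0):
    communities = louvain_communities(G, resolution=gamma, seed=0)
    print(f"gamma = {gamma:4.1f}: {len(communities)} communities")
```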

  5. Model-Based Analysis of Biopharmaceutic Experiments To Improve Mechanistic Oral Absorption Modeling: An Integrated in Vitro in Vivo Extrapolation Perspective Using Ketoconazole as a Model Drug.

    PubMed

    Pathak, Shriram M; Ruff, Aaron; Kostewicz, Edmund S; Patel, Nikunjkumar; Turner, David B; Jamei, Masoud

    2017-12-04

    Mechanistic modeling of in vitro data generated from metabolic enzyme systems (viz., liver microsomes, hepatocytes, rCYP enzymes, etc.) facilitates in vitro-in vivo extrapolation (IVIV_E) of metabolic clearance which plays a key role in the successful prediction of clearance in vivo within physiologically-based pharmacokinetic (PBPK) modeling. A similar concept can be applied to solubility and dissolution experiments whereby mechanistic modeling can be used to estimate intrinsic parameters required for mechanistic oral absorption simulation in vivo. However, this approach has not widely been applied within an integrated workflow. We present a stepwise modeling approach where relevant biopharmaceutics parameters for ketoconazole (KTZ) are determined and/or confirmed from the modeling of in vitro experiments before being directly used within a PBPK model. Modeling was applied to various in vitro experiments, namely: (a) aqueous solubility profiles to determine intrinsic solubility, salt limiting solubility factors and to verify pKa; (b) biorelevant solubility measurements to estimate bile-micelle partition coefficients; (c) fasted state simulated gastric fluid (FaSSGF) dissolution for formulation disintegration profiling; and (d) transfer experiments to estimate supersaturation and precipitation parameters. These parameters were then used within a PBPK model to predict the dissolved and total (i.e., including the precipitated fraction) concentrations of KTZ in the duodenum of a virtual population and compared against observed clinical data. The developed model well characterized the intraluminal dissolution, supersaturation, and precipitation behavior of KTZ. The mean simulated AUC(0-t) of the total and dissolved concentrations of KTZ were comparable to (within 2-fold of) the corresponding observed profile. Moreover, the developed PBPK model of KTZ successfully described the impact of supersaturation and precipitation on the systemic plasma concentration profiles of KTZ for 200, 300, and 400 mg doses. These results demonstrate that IVIV_E applied to biopharmaceutical experiments can be used to understand and build confidence in the quality of the input parameters and mechanistic models used for mechanistic oral absorption simulations in vivo, thereby improving the prediction performance of PBPK models. Moreover, this approach can inform the selection and design of in vitro experiments, potentially eliminating redundant experiments and thus helping to reduce the cost and time of drug product development.

  6. A model for gravity-wave spectra observed by Doppler sounding systems

    NASA Technical Reports Server (NTRS)

    Vanzandt, T. E.

    1986-01-01

    A model for Mesosphere - Stratosphere - Troposphere (MST) radar spectra is developed following the formalism presented by Pinkel (1981). Expressions for the one-dimensional spectra of radial velocity versus frequency and versus radial wave number are presented. Their dependence on the parameters of the gravity-wave spectrum and on the experimental parameters, radar zenith angle and averaging time are described and the conditions for critical tests of the gravity-wave hypothesis are discussed. The model spectra is compared with spectra observed in the Arctic summer mesosphere by the Poker Flat radar. This model applies to any monostatic Doppler sounding system, including MST radar, Doppler lidar and Doppler sonar in the atmosphere, and Doppler sonar in the ocean.

  7. Interplanetary mission design handbook. Volume 1, part 4: Earth to Saturn ballistic mission opportunities, 1985-2005

    NASA Technical Reports Server (NTRS)

    Sergeyevsky, A. B.; Snyder, G. C.

    1981-01-01

    Graphical data necessary for the preliminary design of ballistic missions to Saturn are provided. Contours of launch energy requirements, as well as many other launch and Saturn arrival parameters, are presented in launch date/arrival date space for all launch opportunities from 1985 through 2005. In addition, an extensive text is included which explains mission design methods, from launch window development to Saturn probe and orbiter arrival design, utilizing the graphical data in this volume as well as numerous equations relating various parameters. This is the first of a planned series of mission design documents which will apply to all planets and some other bodies in the solar system.

  8. Uncertainty analysis of signal deconvolution using a measured instrument response function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartouni, E. P.; Beeman, B.; Caggiano, J. A.

    2016-10-05

    A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here, we investigate the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, for which the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.

  9. Modeling Hubble Space Telescope flight data by Q-Markov cover identification

    NASA Technical Reports Server (NTRS)

    Liu, K.; Skelton, R. E.; Sharkey, J. P.

    1992-01-01

    A state space model for the Hubble Space Telescope under the influence of unknown disturbances in orbit is presented. This model was obtained from flight data by applying the Q-Markov covariance equivalent realization identification algorithm. This state space model guarantees the match of the first Q-Markov parameters and covariance parameters of the Hubble system. The flight data were partitioned into high- and low-frequency components for more efficient Q-Markov cover modeling, to reduce some computational difficulties of the Q-Markov cover algorithm. This identification revealed more than 20 lightly damped modes within the bandwidth of the attitude control system. Comparisons with the analytical (TREETOPS) model are also included.

  10. Kinetics of Accumulation of Damage in Surface Layers of Lithium-Containing Aluminum Alloys in Fatigue Tests with Rigid Loading Cycle and Corrosive Effect of Environment

    NASA Astrophysics Data System (ADS)

    Morozova, L. V.; Zhegina, I. P.; Grigorenko, V. B.; Fomina, M. A.

    2017-07-01

    High-resolution methods of metal physics research including electron, laser and optical microscopy are used to study the kinetics of the accumulation of slip lines and bands and the corrosion damage in the plastic zone of specimens of aluminum-lithium alloys 1441 and B-1469 in rigid-cycle fatigue tests under the joint action of applied stresses and corrosive environment. The strain parameters (the density of slip bands, the sizes of plastic zones near fracture, the surface roughness in singled-out zones) and the damage parameters (the sizes of pits and the pitting area) are evaluated.

  11. Assessing the Impact of Model Parameter Uncertainty in Simulating Grass Biomass Using a Hybrid Carbon Allocation Strategy

    NASA Astrophysics Data System (ADS)

    Reyes, J. J.; Adam, J. C.; Tague, C.

    2016-12-01

    Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
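    A minimal sketch of the Latin hypercube sampling step used to generate parameter sets, via scipy.stats.qmc; the parameter names and ranges are illustrative guesses, not the RHESSys configuration of the study.

```python
from scipy.stats import qmc

# Hypothetical parameters and plausible ranges (illustrative only).
names = ["specific_leaf_area", "max_stomatal_conductance", "soil_drainage_rate"]
lower = [5.0, 0.002, 0.1]
upper = [40.0, 0.010, 5.0]

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit_sample = sampler.random(n=200)                # 200 parameter sets in [0, 1)^3
param_sets = qmc.scale(unit_sample, lower, upper)  # rescale to physical ranges

for row in param_sets[:3]:
    print(dict(zip(names, row.round(4))))
```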

  12. Comparison between two different platelet-rich plasma preparations and control applied during anterior cruciate ligament reconstruction. Is there any evidence to support their use?

    PubMed

    Valentí Azcárate, Andrés; Lamo-Espinosa, Jose; Aquerreta Beola, Jesús Dámaso; Hernandez Gonzalez, Milagros; Mora Gasque, Gonzalo; Valentí Nin, Juan Ramón

    2014-10-01

    To compare the clinical, analytical and graft maturation effects of two different platelet-rich plasma (PRP) preparations applied during anterior cruciate ligament (ACL) reconstruction. A total of 150 patients with ACL disruption were included in the study. Arthroscopic ACL reconstruction with patellar tendon allograft was conducted on all knees using the same protocol. One hundred patients were prospectively randomised either to a group receiving double-spinning platelet-enriched gel (PRP) with leukocytes (n=50) or to a non-gel group (n=50). Finally, we included 50 patients treated with a platelet-rich preparation from a single-spinning procedure (PRGF Endoret(®) Technology) without leukocytes. Inflammatory parameters, including C-reactive protein (CRP) and knee perimeters (PER), were measured 24 hours and 10 days after surgery. Postoperative pain score (visual analogue scale [VAS]) was recorded the day after surgery. Follow-up visits occurred postoperatively at 3, 6, and 12 months. The International Knee Documentation Committee scale (IKDC) was included to compare functional state, and MRI was conducted 6 months after surgery. The PRGF group showed a statistically significant improvement in swelling and inflammatory parameters compared with the other two groups at 24 hours after surgery (p<0.05). The results did not show any significant differences between groups for MRI and clinical scores. PRGF used in ACL allograft reconstruction was associated with reduced swelling; however, the intensity and uniformity of the graft on MRI were similar in the three groups, and there was no clinical or pain improvement compared with the control group. Level of evidence: II.

  13. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    PubMed

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were considered as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and a lower mean squared error than the traditional SEM.

  14. Estimation of adsorption isotherm and mass transfer parameters in protein chromatography using artificial neural networks.

    PubMed

    Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen

    2017-03-03

    Mechanistic modeling has repeatedly been applied successfully in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve-fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches have to be repeated. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability of parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
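    A minimal sketch of the inverse-mapping idea: train a neural network on simulated chromatograms labelled with the parameters that generated them, then ask it for the parameters of a "measured" chromatogram. The single-Gaussian forward model and all numbers are stand-ins, not the TDM/SDM calibrated in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)

def forward_model(retention, width):
    """Toy chromatogram: a single Gaussian peak (stand-in for the mechanistic model)."""
    return np.exp(-0.5 * ((t - retention) / width) ** 2)

# In-silico screening: simulate chromatograms over many random parameter sets.
params = rng.uniform([2.0, 0.2], [8.0, 1.0], size=(2000, 2))
chromatograms = np.array([forward_model(r, w) for r, w in params])

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(chromatograms, params)

# "Measured" chromatogram with noise; the ANN returns the parameter estimate.
true = (5.3, 0.45)
measured = forward_model(*true) + rng.normal(0, 0.01, size=t.size)
print("estimated (retention, width):", ann.predict(measured[None, :])[0].round(3))
```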

  15. Highly defined 3D printed chitosan scaffolds featuring improved cell growth.

    PubMed

    Elviri, Lisa; Foresti, Ruben; Bergonzi, Carlo; Zimetti, Francesca; Marchi, Cinzia; Bianchera, Annalisa; Bernini, Franco; Silvestri, Marco; Bettini, Ruggero

    2017-07-12

    The growing demand for medical devices devoted to tissue regeneration and possessing a controlled micro-architecture creates a need for industrial scale-up in the production of hydrogels. A new 3D printing technique was applied to the automation of a freeze-gelation method for the preparation of chitosan scaffolds with controlled porosity. For this aim, a dedicated 3D printer was built in-house; a preliminary effort was necessary to explore the printing parameter space and optimize the printing results in terms of geometry, tolerances and mechanical properties of the product. Analysed parameters included viscosity of the starting chitosan solution, which was measured with a Brookfield viscometer, and temperature of deposition, which was determined by filming the process with a cryocooled sensor thermal camera. Optimized parameters were applied to the production of scaffolds from solutions of chitosan alone or with the addition of raffinose as a viscosity modifier. Resulting hydrogels were characterized in terms of morphology and porosity. In vitro cell culture studies comparing 3D printed scaffolds with their homologues produced by solution casting showed an improvement in biocompatibility deriving from the production technique as well as from the solid-state modification of chitosan stemming from the addition of the viscosity modifier.

  16. Global optimization and reflectivity data fitting for x-ray multilayer mirrors by means of genetic algorithms

    NASA Astrophysics Data System (ADS)

    Sanchez del Rio, Manuel; Pareschi, Giovanni

    2001-01-01

    The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thicknesses, densities, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum. This is a difficult task when the space topology of the variables is highly structured, as in our case. The application of global optimization methods to fit multilayer reflectivity data is presented. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying operators (e.g. selection, crossover, mutation) to its members. The pressure of selection drives the population to include 'good' individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C multilayers recorded at the ESRF BM5 are presented. This method could also be applied to help design multilayers optimized for a target application, such as astronomical grazing-incidence hard X-ray telescopes.
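    A hedged sketch of the global-fitting idea on a toy reflectivity-like curve; scipy's differential evolution is used here as a readily available evolutionary-style optimizer rather than the specific genetic algorithm of the paper, and the model and data are synthetic.

```python
import numpy as np
from scipy.optimize import differential_evolution

q = np.linspace(0.05, 0.5, 200)

def model(q, thickness, roughness):
    """Toy 'reflectivity': damped oscillation, a stand-in for the multilayer model."""
    return np.exp(-(q * roughness) ** 2) * (1 + 0.5 * np.cos(q * thickness)) / q**4

rng = np.random.default_rng(3)
true = (120.0, 4.0)
data = model(q, *true) * rng.normal(1.0, 0.03, size=q.size)

def cost(p):
    # Fit in log space, as is common for reflectivity data spanning many decades.
    return np.sum((np.log(model(q, *p)) - np.log(data)) ** 2)

result = differential_evolution(cost, bounds=[(50, 300), (1, 10)], seed=0)
print("best-fit (thickness, roughness):", result.x.round(2))
```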

  17. At-line monitoring of key parameters of nisin fermentation by near infrared spectroscopy, chemometric modeling and model improvement.

    PubMed

    Guo, Wei-Liang; Du, Yi-Ping; Zhou, Yong-Can; Yang, Shuang; Lu, Jia-Hui; Zhao, Hong-Yu; Wang, Yao; Teng, Li-Rong

    2012-03-01

    An analytical procedure has been developed for at-line (fast off-line) monitoring of 4 key parameters, including nisin titer (NT), the concentration of reducing sugars, cell concentration and pH, during a nisin fermentation process. This procedure is based on near infrared (NIR) spectroscopy and Partial Least Squares (PLS). Samples without any preprocessing were collected at intervals of 1 h during fifteen batches of fermentation. These fermentation processes were implemented in 3 different 5 l fermentors under various conditions. NIR spectra of the samples were collected within 10 min. PLS was then used to model the relationship between the NIR spectra and the key parameters, which were determined by reference methods. Monte Carlo Partial Least Squares (MCPLS) was applied to identify outliers and to select the most effective spectral preprocessing methods, the wavelengths and the suitable number of latent variables (n_LV). Then, the optimum models for determining NT, concentration of reducing sugars, cell concentration and pH were established. The correlation coefficients of the calibration set (R_c) were 0.8255, 0.9000, 0.9883 and 0.9581, respectively. These results demonstrated that this method can be successfully applied to at-line monitoring of NT, concentration of reducing sugars, cell concentration and pH during nisin fermentation processes.
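    A minimal sketch of the spectra-to-parameter calibration step with scikit-learn's PLS regression; the "spectra" and "titer" values are synthetic placeholders, and the number of latent variables is chosen arbitrarily rather than by the MCPLS procedure described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic "NIR spectra" (80 samples x 500 wavelengths) whose shape depends
# on a hidden concentration, plus noise; a stand-in for real at-line spectra.
concentration = rng.uniform(100, 4000, size=80)        # e.g. a nisin titer proxy
wavelengths = np.linspace(0, 1, 500)
spectra = (np.outer(concentration, np.exp(-((wavelengths - 0.4) / 0.1) ** 2))
           + rng.normal(0, 50, size=(80, 500)))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, concentration, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5)   # n_components plays the role of n_LV
pls.fit(X_train, y_train)
print(f"R^2 on held-out samples: {pls.score(X_test, y_test):.3f}")
```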

  18. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras

    PubMed Central

    Morris, Mark; Sellers, William I.

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints. PMID:25780778

  19. Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.

    PubMed

    Peyer, Kathrin E; Morris, Mark; Sellers, William I

    2015-01-01

    Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, which involve lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras and 3D point cloud data generated using structure from motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies including populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
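    A minimal sketch of the convex-hulling step on one segmented point cloud using scipy.spatial.ConvexHull, with a uniform-density mass estimate from the hull volume; the point cloud, density value and segment size are made up, not scanner output.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)

# Stand-in for one manually separated body segment: points roughly filling
# a 0.30 m x 0.10 m x 0.10 m block (units: metres).
segment_points = rng.uniform([0, 0, 0], [0.30, 0.10, 0.10], size=(5000, 3))

hull = ConvexHull(segment_points)
volume = hull.volume                         # m^3
centroid = segment_points.mean(axis=0)       # crude centre-of-mass estimate

density = 1050.0                             # assumed uniform tissue density, kg/m^3
print(f"segment volume {volume * 1e3:.2f} L, mass {density * volume:.2f} kg")
print("approximate centre of mass:", centroid.round(3))
```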

  20. Optimization of operating parameters in polysilicon chemical vapor deposition reactor with response surface methodology

    NASA Astrophysics Data System (ADS)

    An, Li-sha; Liu, Chun-jiao; Liu, Ying-wen

    2018-05-01

    In a polysilicon chemical vapor deposition reactor, the operating parameters interact in complex ways to affect the polysilicon output. Therefore, it is very important to address the coupling problem of multiple parameters and solve the optimization in a computationally efficient manner. Here, we adopted Response Surface Methodology (RSM) to analyze the complex coupling effects of different operating parameters on the silicon deposition rate (R) and further achieve effective optimization of the silicon CVD system. Based on a finite set of numerical experiments, an accurate RSM regression model is obtained and applied to predict R for different operating parameters, including temperature (T), pressure (P), inlet velocity (V), and inlet mole fraction of H2 (M). An analysis of variance is conducted to assess the adequacy of the regression model and examine the statistical significance of each factor. Consequently, the optimum combination of operating parameters for the silicon CVD reactor is: T = 1400 K, P = 3.82 atm, V = 3.41 m/s, M = 0.91. The validation tests and the optimum solution show that the results are in good agreement with those from the CFD model, and the deviations of the predicted values are less than 4.19%. This work provides theoretical guidance for operating the polysilicon CVD process.
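    A minimal sketch of the response-surface idea: fit a second-order polynomial surface to a handful of "numerical experiments" and locate its optimum over candidate operating points. The factor names echo the abstract, but the data, ranges and response function are synthetic, not the CVD simulations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(11)

# Hypothetical designed experiments: columns are T (K), P (atm), V (m/s), M (-).
X = rng.uniform([1300, 1, 1, 0.7], [1500, 5, 5, 0.95], size=(60, 4))
# Synthetic deposition rate with a quadratic optimum (placeholder response).
R = (-((X[:, 0] - 1400) / 100) ** 2 - ((X[:, 1] - 3.8) / 2) ** 2
     + 0.5 * X[:, 2] / 5 + X[:, 3] + rng.normal(0, 0.02, size=60))

# Second-order (quadratic plus interaction) response surface.
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, R)

# Locate the optimum of the fitted surface over a cloud of candidate points.
grid = rng.uniform([1300, 1, 1, 0.7], [1500, 5, 5, 0.95], size=(20000, 4))
best = grid[np.argmax(rsm.predict(grid))]
print("predicted optimum (T, P, V, M):", best.round(2))
```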

  1. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
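    A minimal sketch of the AIC/BIC comparison used for model selection, applied to two hypothetical candidate models with different numbers of parameters; the residuals are synthetic and the Gaussian-likelihood AIC/BIC formulas are the standard ones, not code from the paper.

```python
import numpy as np

def aic_bic(residuals, k):
    """AIC and BIC for a least-squares fit with k parameters (Gaussian errors)."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    ll_term = n * np.log(rss / n)
    return ll_term + 2 * k, ll_term + k * np.log(n)

rng = np.random.default_rng(2)
# Hypothetical observation residuals of two candidate indoor models:
# model A (6 parameters) fits slightly worse than model B (10 parameters).
res_a = rng.normal(0, 0.12, size=40)
res_b = rng.normal(0, 0.10, size=40)

for name, res, k in [("A", res_a, 6), ("B", res_b, 10)]:
    aic, bic = aic_bic(res, k)
    print(f"model {name}: AIC = {aic:7.2f}, BIC = {bic:7.2f}")
# The model with the lower AIC/BIC is preferred; BIC penalizes the extra
# parameters of model B more strongly than AIC does.
```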

  2. 40 CFR 1039.235 - What testing requirements apply for certification?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Select the engine configuration with the highest volume of fuel injected per cylinder per combustion... within normal production tolerances for anything we do not consider an adjustable parameter. For example, this would apply for an engine parameter that is subject to production variability because it is...

  3. 40 CFR 1039.235 - What testing requirements apply for certification?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Select the engine configuration with the highest volume of fuel injected per cylinder per combustion... within normal production tolerances for anything we do not consider an adjustable parameter. For example, this would apply for an engine parameter that is subject to production variability because it is...

  4. 40 CFR 1039.235 - What testing requirements apply for certification?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Select the engine configuration with the highest volume of fuel injected per cylinder per combustion... within normal production tolerances for anything we do not consider an adjustable parameter. For example, this would apply for an engine parameter that is subject to production variability because it is...

  5. 40 CFR 1039.235 - What testing requirements apply for certification?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Select the engine configuration with the highest volume of fuel injected per cylinder per combustion... within normal production tolerances for anything we do not consider an adjustable parameter. For example, this would apply for an engine parameter that is subject to production variability because it is...

  6. 40 CFR 1039.235 - What testing requirements apply for certification?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Select the engine configuration with the highest volume of fuel injected per cylinder per combustion... within normal production tolerances for anything we do not consider an adjustable parameter. For example, this would apply for an engine parameter that is subject to production variability because it is...

  7. Quenching or Bursting: Star Formation Acceleration—A New Methodology for Tracing Galaxy Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, D. Christopher; Darvish, Behnam; Seibert, Mark

    We introduce a new methodology for the direct extraction of galaxy physical parameters from multiwavelength photometry and spectroscopy. We use semianalytic models that describe galaxy evolution in the context of large-scale cosmological simulation to provide a catalog of galaxies, star formation histories, and physical parameters. We then apply models of stellar population synthesis and a simple extinction model to calculate the observable broadband fluxes and spectral indices for these galaxies. We use a linear regression analysis to relate physical parameters to observed colors and spectral indices. The result is a set of coefficients that can be used to translate observed colors and indices into stellar mass, star formation rate, and many other parameters, including the instantaneous time derivative of the star formation rate, which we denote the Star Formation Acceleration (SFA). We apply the method to a test sample of galaxies with GALEX photometry and SDSS spectroscopy, deriving relationships between stellar mass, specific star formation rate, and SFA. We find evidence for a mass-dependent SFA in the green valley, with low-mass galaxies showing greater quenching and higher-mass galaxies greater bursting. We also find evidence for an increase in average quenching in galaxies hosting an active galactic nucleus. A simple scenario in which lower-mass galaxies accrete and become satellite galaxies, having their star-forming gas tidally and/or ram-pressure stripped, while higher-mass galaxies receive this gas and react with new star formation, can qualitatively explain our results.

  8. Quenching or Bursting: Star Formation Acceleration—A New Methodology for Tracing Galaxy Evolution

    NASA Astrophysics Data System (ADS)

    Martin, D. Christopher; Gonçalves, Thiago S.; Darvish, Behnam; Seibert, Mark; Schiminovich, David

    2017-06-01

    We introduce a new methodology for the direct extraction of galaxy physical parameters from multiwavelength photometry and spectroscopy. We use semianalytic models that describe galaxy evolution in the context of large-scale cosmological simulation to provide a catalog of galaxies, star formation histories, and physical parameters. We then apply models of stellar population synthesis and a simple extinction model to calculate the observable broadband fluxes and spectral indices for these galaxies. We use a linear regression analysis to relate physical parameters to observed colors and spectral indices. The result is a set of coefficients that can be used to translate observed colors and indices into stellar mass, star formation rate, and many other parameters, including the instantaneous time derivative of the star formation rate, which we denote the Star Formation Acceleration (SFA). We apply the method to a test sample of galaxies with GALEX photometry and SDSS spectroscopy, deriving relationships between stellar mass, specific star formation rate, and SFA. We find evidence for a mass-dependent SFA in the green valley, with low-mass galaxies showing greater quenching and higher-mass galaxies greater bursting. We also find evidence for an increase in average quenching in galaxies hosting an active galactic nucleus. A simple scenario in which lower-mass galaxies accrete and become satellite galaxies, having their star-forming gas tidally and/or ram-pressure stripped, while higher-mass galaxies receive this gas and react with new star formation, can qualitatively explain our results.
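    A minimal sketch of the regression step: fit multi-output linear coefficients mapping colors and spectral indices to physical parameters on a mock catalog, then apply them to new "observed" galaxies. The mock data and dimensionality are placeholders, not the semianalytic catalog or GALEX/SDSS measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Mock catalog: 5000 galaxies with 7 observables (colors and spectral indices)
# and 3 physical parameters (e.g. log stellar mass, log sSFR, SFA); all synthetic.
observables = rng.normal(size=(5000, 7))
true_coeffs = rng.normal(size=(7, 3))
physical = observables @ true_coeffs + rng.normal(0, 0.1, size=(5000, 3))

# Linear regression yields one coefficient vector per physical parameter.
reg = LinearRegression().fit(observables, physical)

# "Observed" galaxies: translate measured colors/indices into parameter estimates.
new_obs = rng.normal(size=(3, 7))
print(reg.predict(new_obs).round(2))
```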

  9. Combined magnetic and kinetic control of advanced tokamak steady state scenarios based on semi-empirical modelling

    NASA Astrophysics Data System (ADS)

    Moreau, D.; Artaud, J. F.; Ferron, J. R.; Holcomb, C. T.; Humphreys, D. A.; Liu, F.; Luce, T. C.; Park, J. M.; Prater, R.; Turco, F.; Walker, M. L.

    2015-06-01

    This paper shows that semi-empirical data-driven models based on a two-time-scale approximation for the magnetic and kinetic control of advanced tokamak (AT) scenarios can be advantageously identified from simulated rather than real data, and used for control design. The method is applied to the combined control of the safety factor profile, q(x), and normalized pressure parameter, βN, using DIII-D parameters and actuators (on-axis co-current neutral beam injection (NBI) power, off-axis co-current NBI power, electron cyclotron current drive power, and ohmic coil). The approximate plasma response model was identified from simulated open-loop data obtained using a rapidly converging plasma transport code, METIS, which includes an MHD equilibrium and current diffusion solver, and combines plasma transport nonlinearity with 0D scaling laws and 1.5D ordinary differential equations. The paper discusses the results of closed-loop METIS simulations, using the near-optimal ARTAEMIS control algorithm (Moreau D et al 2013 Nucl. Fusion 53 063020) for steady state AT operation. With feedforward plus feedback control, the steady state target q-profile and βN are satisfactorily tracked with a time scale of about 10 s, despite large disturbances applied to the feedforward powers and plasma parameters. The robustness of the control algorithm with respect to disturbances of the H&CD actuators and of plasma parameters such as the H-factor, plasma density and effective charge, is also shown.

  10. Constitutive model for porous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weston, A.M.; Lee, E.L.

    1982-01-01

    A simple pressure versus porosity compaction model is developed to calculate the response of granular porous bed materials to shock impact. The model provides a scheme for calculating compaction behavior when relatively limited material data are available. While the model was developed to study porous explosives and propellants, it has been applied to a much wider range of materials. The early development of porous material models, such as that of Herrmann, required empirical dynamic compaction data. Erkman and Edwards successfully applied the early theory to unreacted porous high explosives using a Gruneisen equation of state without yield behavior and without trapped gas in the pores. Butcher included viscoelastic rate dependence in pore collapse. The theoretical treatment of Carroll and Holt is centered on the collapse of a circular pore and includes radial inertia terms and a complex set of stress, strain and strain rate constitutive parameters. Unfortunately, data required for these parameters are generally not available. The model described here is also centered on the collapse of a circular pore, but utilizes a simpler elastic-plastic static equilibrium pore collapse mechanism without strain rate dependence or radial inertia terms. It does include trapped gas inside the pore and a solid material flow stress that creates both a yield point and a variation in solid material pressure with radius. The solid is described by a Mie-Gruneisen type EOS. Comparisons show that this model will accurately estimate major mechanical features which have been observed in compaction experiments.

  11. Neonatal non-contact respiratory monitoring based on real-time infrared thermography

    PubMed Central

    2011-01-01

    Background Monitoring of vital parameters is an important topic in neonatal daily care. Progress in computational intelligence and medical sensors has facilitated the development of smart bedside monitors that can integrate multiple parameters into a single monitoring system. This paper describes non-contact monitoring of neonatal vital signals based on infrared thermography as a new biomedical engineering application. One signal of clinical interest is the spontaneous respiration rate of the neonate. It will be shown that the respiration rate of neonates can be monitored based on analysis of the anterior naris (nostrils) temperature profile associated with the successive inspiration and expiration phases. Objective The aim of this study is to develop and investigate a new non-contact respiration monitoring modality for the neonatal intensive care unit (NICU) using infrared thermography imaging. This development includes subsequent image processing (region of interest (ROI) detection) and optimization. Moreover, it includes further optimization of this non-contact respiration monitoring so that it can be considered a physiological measurement inside NICU wards. Results Continuous wavelet transformation based on the Daubechies wavelet function was applied to detect the breathing signal within an image stream. Respiration was successfully monitored based on a 0.3°C to 0.5°C temperature difference between the inspiration and expiration phases. Conclusions Although this method has been applied to adults before, this is the first time it was used in a newborn infant population inside the neonatal intensive care unit (NICU). The promising results suggest including this technology in advanced NICU monitors. PMID:22243660

  12. A Self-Organizing State-Space-Model Approach for Parameter Estimation in Hodgkin-Huxley-Type Models of Single Neurons

    PubMed Central

    Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng

    2012-01-01

    Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632

  13. Modelling decremental ramps using 2- and 3-parameter "critical power" models.

    PubMed

    Morton, R Hugh; Billat, Veronique

    2013-01-01

    The "Critical Power" (CP) model of human bioenergetics provides a valuable way to identify both limits of tolerance to exercise and mechanisms that underpin that tolerance. It applies principally to cycling-based exercise, but with suitable adjustments for analogous units it can be applied to other exercise modalities; in particular to incremental ramp exercise. It has not yet been applied to decremental ramps which put heavy early demand on the anaerobic energy supply system. This paper details cycling-based bioenergetics of decremental ramps using 2- and 3-parameter CP models. It derives equations that, for an individual of known CP model parameters, define those combinations of starting intensity and decremental gradient which will or will not lead to exhaustion before ramping to zero; and equations that predict time to exhaustion on those decremental ramps that will. These are further detailed with suitably chosen numerical and graphical illustrations. These equations can be used for parameter estimation from collected data, or to make predictions when parameters are known.

  14. Supergravity, complex parameters and the Janis-Newman algorithm

    NASA Astrophysics Data System (ADS)

    Erbin, Harold; Heurtier, Lucien

    2015-08-01

    The Demiański-Janis-Newman (DJN) algorithm is an original solution-generating technique. For a long time it has been limited to producing rotating solutions, restricted to the case of a metric and real scalar fields, despite the fact that Demiański extended it to include more parameters such as a NUT charge. Recently two independent prescriptions have been given for extending the algorithm to gauge fields and thus electrically charged configurations. In this paper we aim to complete the algorithm by providing a missing but important piece: how the transformation is applied to complex scalar fields. We illustrate our proposal through several examples taken from N = 2 supergravity, including the stationary BPS solutions from Behrndt et al and Sen's axion-dilaton rotating black hole. Moreover we discuss solutions that include pairs of complex parameters, such as the mass and the NUT charge, or the electric and magnetic charges, and we explain how to apply the algorithm in this context (with the example of Kerr-Newman-Taub-NUT and dyonic Kerr-Newman black holes). The final formulation of the DJN algorithm can possibly handle solutions with five of the six Plebański-Demiański parameters along with any type of bosonic fields with spin less than two (exemplified with the stationary Israel-Wilson-Perjes solutions). This provides all the necessary tools for applications to general matter-coupled gravity and to (gauged) supergravity.

  15. Validation of Cloud Parameters Derived from Geostationary Satellites, AVHRR, MODIS, and VIIRS Using SatCORPS Algorithms

    NASA Technical Reports Server (NTRS)

    Minnis, P.; Sun-Mack, S.; Bedka, K. M.; Yost, C. R.; Trepte, Q. Z.; Smith, W. L., Jr.; Painemal, D.; Chen, Y.; Palikonda, R.; Dong, X.; hide

    2016-01-01

    Validation is a key component of remote sensing that can take many different forms. The NASA LaRC Satellite ClOud and Radiative Property retrieval System (SatCORPS) is applied to many different imager datasets including those from the geostationary satellites, Meteosat, Himawari-8, INSAT-3D, GOES, and MTSAT, as well as from the low-Earth orbiting satellite imagers, MODIS, AVHRR, and VIIRS. While each of these imagers has similar sets of channels with wavelengths near 0.65, 3.7, 11, and 12 micrometers, many differences among them can lead to discrepancies in the retrievals. These differences include spatial resolution, spectral response functions, viewing conditions, and calibrations, among others. Even when analyzed with nearly identical algorithms, it is necessary, because of those discrepancies, to validate the results from each imager separately in order to assess the uncertainties in the individual parameters. This paper presents comparisons of various SatCORPS-retrieved cloud parameters with independent measurements and retrievals from a variety of instruments. These include surface and space-based lidar and radar data from CALIPSO and CloudSat, respectively, to assess the cloud fraction, height, base, optical depth, and ice water path; satellite and surface microwave radiometers to evaluate cloud liquid water path; surface-based radiometers to evaluate optical depth and effective particle size; and airborne in-situ data to evaluate ice water content, effective particle size, and other parameters. The comparison results are contrasted across imagers, and the factors influencing the differences are discussed.

  16. Systematic effects on dark energy from 3D weak shear

    NASA Astrophysics Data System (ADS)

    Kitching, T. D.; Taylor, A. N.; Heavens, A. F.

    2008-09-01

    We present an investigation into the potential effect of systematics inherent in multiband wide-field surveys on the dark energy equation-of-state determination for two 3D weak lensing methods. The weak lensing methods are a geometric shear-ratio method and 3D cosmic shear. The analysis here uses an extension of the Fisher matrix framework to include jointly photometric redshift systematics, shear distortion systematics and intrinsic alignments. Using analytic parametrizations of these three primary systematic effects allows an isolation of systematic parameters of particular importance. We show that assuming systematic parameters are fixed, but possibly biased, results in potentially large biases in dark energy parameters. We quantify any potential bias by defining a Bias Figure of Merit. By marginalizing over extra systematic parameters, such biases are negated at the expense of an increase in the cosmological parameter errors. We show the effect on the dark energy Figure of Merit of marginalizing over each systematic parameter individually. We also show the overall reduction in the Figure of Merit due to all three types of systematic effects. Based on some assumption of the likely level of systematic errors, we find that the largest effect on the Figure of Merit comes from uncertainty in the photometric redshift systematic parameters. These can reduce the Figure of Merit by up to a factor of 2 to 4 in both 3D weak lensing methods, if no informative prior on the systematic parameters is applied. Shear distortion systematics have a smaller overall effect. Intrinsic alignment effects can reduce the Figure of Merit by up to a further factor of 2. This, however, is a worst-case scenario, within the assumptions of the parametrizations used. By including prior information on systematic parameters, the Figure of Merit can be recovered to a large extent, and combined constraints from 3D cosmic shear and shear ratio are robust to systematics. We conclude that, as a rule of thumb, given a realistic current understanding of intrinsic alignments and photometric redshifts, then including all three primary systematic effects reduces the Figure of Merit by at most a factor of 2.

  17. Joint inversion of seismic refraction and resistivity data using layered models - applications to hydrogeology

    NASA Astrophysics Data System (ADS)

    Juhojuntti, N. G.; Kamm, J.

    2010-12-01

    We present a layered-model approach to joint inversion of shallow seismic refraction and resistivity (DC) data, which we believe is a seldom tested method of addressing the problem. This method has been developed as we believe that for shallow sedimentary environments (roughly <100 m depth) a model with a few layers and sharp layer boundaries better represents the subsurface than a smooth minimum-structure (grid) model. Due to the strong assumption our model parameterization imposes on the subsurface, only a small number of well resolved model parameters has to be estimated, and provided that this assumption holds, our method can also be applied to other environments. We are using a least-squares inversion, with lateral smoothness constraints, allowing lateral variations in the seismic velocity and the resistivity but no vertical variations. One exception is a positive gradient in the seismic velocity in the uppermost layer in order to get diving rays (the refractions in the deeper layers are modeled as head waves). We assume no connection between seismic velocity and resistivity, and these parameters are allowed to vary individually within the layers. The layer boundaries are, however, common for both parameters. During the inversion lateral smoothing can be applied to the layer boundaries as well as to the seismic velocity and the resistivity. The number of layers is specified before the inversion, and typically we use models with three layers. Depending on the type of environment it is possible to apply smoothing either to the depth of the layer boundaries or to the thickness of the layers, although normally the former is used for shallow sedimentary environments. The smoothing parameters can be chosen independently for each layer. For the DC data we use a finite-difference algorithm to perform the forward modeling and to calculate the Jacobian matrix, while for the seismic data the corresponding entities are retrieved via ray-tracing, using components from the RAYINVR package. The modular layout of the code makes it straightforward to include other types of geophysical data, e.g. gravity. The code has been tested using synthetic examples with fairly simple 2D geometries, mainly for checking the validity of the calculations. The inversion generally converges towards the correct solution, although there could be stability problems if the starting model is too inaccurate. We have also applied the code to field data from seismic refraction and multi-electrode resistivity measurements at typical sand-gravel groundwater reservoirs. The tests are promising, as the calculated depths agree fairly well with information from drilling and the velocity and resistivity values appear reasonable. Current work includes better regularization of the inversion as well as defining individual weight factors for the different datasets, as the present algorithm tends to constrain the depths mainly by using the seismic data. More complex synthetic examples will also be tested, including models addressing the seismic hidden-layer problem.

  18. Application of Natural Mineral Additives in Construction

    NASA Astrophysics Data System (ADS)

    Linek, Malgorzata; Nita, Piotr; Wolka, Paweł; Zebrowski, Wojciech

    2017-12-01

    The article concerns the use of selected mineral additives in the composition of pavement quality concrete. The basis of the research was the modification of cement concrete intended for airfield pavements. The application of two additives, metakaolinite and natural zeolite, was proposed. Analyses included the assessment of the basic physical properties of the modifiers; screening analysis, assessment of the microstructure and chemical microanalysis were conducted for these materials. The influence of the applied additives on the concrete mix parameters was also presented, and the impact of zeolite and metakaolinite on the mix density, oxygen content and consistency class was analysed. The influence of the modifiers on the physical and mechanical properties of the hardened cement concrete (density, compressive strength and bending strength during fracturing) was discussed for the different test periods, as was the impact of the applied additives on the internal structure of the cement concrete. Observation of the concrete microstructure was conducted using a scanning electron microscope. According to the laboratory results, the parameters of the applied modifiers and their influence on the internal structure of the cement concrete are reflected in improved mechanical properties of the pavement quality concrete; an increase in compressive and bending strength was demonstrated for all analysed test periods.

  19. Evaluation of the parameters affecting bone temperature during drilling using a three-dimensional dynamic elastoplastic finite element model.

    PubMed

    Chen, Yung-Chuan; Tu, Yuan-Kun; Zhuang, Jun-Yan; Tsai, Yi-Jung; Yen, Cheng-Yo; Hsiao, Chih-Kun

    2017-11-01

    A three-dimensional dynamic elastoplastic finite element model was constructed, experimentally validated, and used to investigate the parameters that influence bone temperature during drilling, including drill speed, feeding force, drill bit diameter, and bone density. Results showed that the proposed model can effectively simulate the temperature elevation during bone drilling. The bone temperature rise decreased with an increase in feeding force and drill speed, but increased with drill bit diameter and bone density. The temperature distribution is significantly affected by the drilling duration; a lower drilling speed reduced the exposure duration and decreased the extent of the thermally affected zone. The constructed model could be applied to analyze the influential parameters during bone drilling to reduce the risk of thermal necrosis. It may provide important information for the design of drill bits and surgical drilling power tools.

  20. Potential for Remotely Sensed Soil Moisture Data in Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Engman, Edwin T.

    1997-01-01

    Many hydrologic processes display a unique signature that is detectable with microwave remote sensing. These signatures are in the form of the spatial and temporal distributions of surface soil moisture and portray the spatial heterogeneity of hydrologic processes and properties that one encounters in drainage basins. The hydrologic processes that may be detected include ground water recharge and discharge zones, storm runoff contributing areas, regions of potential and less than potential ET, and information about the hydrologic properties of soils and heterogeneity of hydrologic parameters. Microwave remote sensing has the potential to detect these signatures within a basin in the form of volumetric soil moisture measurements in the top few cm. These signatures should provide information on how and where to apply soil physical parameters in distributed and lumped parameter models and how to subdivide drainage basins into hydrologically similar sub-basins.

  1. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman Filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal value of these parameters differs considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, thereby providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
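
    The sketch below illustrates the general idea of adapting a formerly fixed noise variance from the frequency content of the signal; it is a minimal, hypothetical example (the 1-D filter, signal names, window length, and scaling constants are assumptions, not the authors' implementation).

      # Illustrative sketch only: a 1-D orientation Kalman filter whose measurement-noise
      # variance is inflated when the spectral energy of recent accelerometer samples
      # indicates high motion intensity. Constants are invented for demonstration.
      import numpy as np

      def motion_intensity(window, fs, band=(0.5, 15.0)):
          """Fraction of signal power inside a 'motion' frequency band."""
          spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
          freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          return spec[in_band].sum() / (spec.sum() + 1e-12)

      def adaptive_kalman(gyro_rate, accel_angle, accel_raw, fs, q=1e-4, r0=1e-2, win=64):
          """Estimate an orientation angle, adapting R to the estimated motion intensity."""
          dt, angle, p = 1.0 / fs, 0.0, 1.0
          estimates = []
          for k in range(len(gyro_rate)):
              # Predict: integrate the gyroscope rate.
              angle += gyro_rate[k] * dt
              p += q
              # Adapt the measurement noise from recent accelerometer activity.
              start = max(0, k - win)
              r = r0 * (1.0 + 50.0 * motion_intensity(accel_raw[start:k + 1], fs))
              # Update with the accelerometer-derived angle.
              gain = p / (p + r)
              angle += gain * (accel_angle[k] - angle)
              p *= (1.0 - gain)
              estimates.append(angle)
          return np.array(estimates)

      # Tiny synthetic demo.
      fs = 100.0
      t = np.arange(0, 5, 1 / fs)
      true_angle = 0.2 * np.sin(2 * np.pi * 0.5 * t)
      gyro = np.gradient(true_angle, 1 / fs) + np.random.normal(0, 0.05, t.size)
      accel_angle = true_angle + np.random.normal(0, 0.05, t.size)
      accel_raw = 0.1 * np.sin(2 * np.pi * 3.0 * t) + np.random.normal(0, 0.02, t.size)
      est = adaptive_kalman(gyro, accel_angle, accel_raw, fs)
      print(float(np.mean(np.abs(est - true_angle))))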

  2. SEE rate estimation based on diffusion approximation of charge collection

    NASA Astrophysics Data System (ADS)

    Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.

    2018-03-01

    The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on diffusion approximation of the charge collection by an IC element and geometrical interpretation of SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters which are uniquely determined from the experimental data for normal incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and, thus, greatly simplifies calculation procedure and increases the robustness of the forecast.

  3. Microbubble Sizing and Shell Characterization Using Flow Cytometry

    PubMed Central

    Tu, Juan; Swalwell, Jarred E.; Giraud, David; Cui, Weicheng; Chen, Weizhong; Matula, Thomas J.

    2015-01-01

    Experiments were performed to size, count, and obtain shell parameters for individual ultrasound contrast microbubbles using a modified flow cytometer. Light scattering was modeled using Mie theory, and applied to calibration beads to calibrate the system. The size distribution and population were measured directly from the flow cytometer. The shell parameters (shear modulus and shear viscosity) were quantified at different acoustic pressures (from 95 to 333 kPa) by fitting microbubble response data to a bubble dynamics model. The size distribution of the contrast agent microbubbles is consistent with manufacturer specifications. The shell shear viscosity increases with increasing equilibrium microbubble size, and decreases with increasing shear rate. The observed trends are independent of driving pressure amplitude. The shell elasticity does not vary with microbubble size. The results suggest that a modified flow cytometer can be an effective tool to characterize the physical properties of microbubbles, including size distribution, population, and shell parameters. PMID:21622051

  4. Binding energy of donor impurity states and optical absorption in the Tietz-Hua quantum well under an applied electric field

    NASA Astrophysics Data System (ADS)

    Al, E. B.; Kasapoglu, E.; Sakiroglu, S.; Duque, C. A.; Sökmen, I.

    2018-04-01

    For a quantum well with the Tietz-Hua potential, the binding energies of the ground and some excited donor impurity states, and the total absorption coefficients (including linear and third-order nonlinear terms) for transitions between these impurity states, are investigated as functions of the structure parameters, the impurity position and the electric field strength. The binding energies were obtained using the effective-mass approximation within a variational scheme, and the optical transitions between any two impurity states were calculated using the density matrix formalism and the perturbation expansion method. Our results show that the electric field and the structure parameters have pronounced effects on the optical transitions, so the red or blue shift in the peak position of the absorption coefficient can be tuned by changing the electric field strength as well as the structure parameters.

  5. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
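
    For readers unfamiliar with the statistics named above, the following is a minimal sketch of how composite scaled sensitivities and parameter correlation coefficients can be computed from a finite-difference Jacobian; the toy model, parameter values, and weights are placeholders, not the TOPKAPI configuration of the paper.

      # Illustrative sketch: composite scaled sensitivities (CSS) and parameter
      # correlation coefficients from a finite-difference Jacobian.
      import numpy as np

      def jacobian(model, params, eps=1e-3):
          """Forward-difference sensitivities dy_i/dp_j."""
          y0 = model(params)
          jac = np.empty((y0.size, params.size))
          for j in range(params.size):
              p = params.copy()
              p[j] += eps * max(abs(p[j]), 1.0)
              jac[:, j] = (model(p) - y0) / (p[j] - params[j])
          return jac

      def composite_scaled_sensitivities(jac, params, weights):
          scaled = jac * params[None, :] * np.sqrt(weights)[:, None]
          return np.sqrt((scaled ** 2).mean(axis=0))

      def parameter_correlations(jac, weights):
          cov = np.linalg.inv(jac.T @ (weights[:, None] * jac))  # linearised covariance (up to a scale)
          d = np.sqrt(np.diag(cov))
          return cov / np.outer(d, d)

      # Toy usage with a 2-parameter model of 50 "observations".
      model = lambda p: p[0] * np.exp(-p[1] * np.linspace(0, 5, 50))
      p, w = np.array([2.0, 0.7]), np.ones(50)
      J = jacobian(model, p)
      print(composite_scaled_sensitivities(J, p, w))
      print(parameter_correlations(J, w))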

  6. Automatic Sleep Stage Determination by Multi-Valued Decision Making Based on Conditional Probability with Optimal Parameters

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Wang, Xingyu; Nakamura, Masatoshi

    Data for human sleep studies may be affected by internal and external influences. The recorded sleep data contain complex and stochastic factors, which make it difficult to apply computerized sleep stage determination techniques in clinical practice. The aim of this study is to develop an automatic sleep stage determination system which is optimized for variable sleep data. The main methodology includes two modules: expert knowledge database construction and automatic sleep stage determination. Visual inspection by a qualified clinician is utilized to obtain the probability density function of parameters during the learning process of expert knowledge database construction. Parameter selection is introduced in order to make the algorithm flexible. Automatic sleep stage determination is performed based on conditional probability. The results showed close agreement with visual inspection by a clinician. The developed system can meet the customized requirements in hospitals and institutions.

  7. GW quasiparticle bandgaps of anatase TiO2 starting from DFT + U.

    PubMed

    Patrick, Christopher E; Giustino, Feliciano

    2012-05-23

    We investigate the quasiparticle band structure of anatase TiO(2), a wide gap semiconductor widely employed in photovoltaics and photocatalysis. We obtain GW quasiparticle energies starting from density-functional theory (DFT) calculations including Hubbard U corrections. Using a simple iterative procedure we determine the value of the Hubbard parameter yielding a vanishing quasiparticle correction to the fundamental bandgap of anatase TiO(2). The bandgap (3.3 eV) calculated using this optimal Hubbard parameter is smaller than the value obtained by applying many-body perturbation theory to standard DFT eigenstates and eigenvalues (3.7 eV). We extend our analysis to the rutile polymorph of TiO(2) and reach similar conclusions. Our work highlights the role of the starting non-interacting Hamiltonian in the calculation of GW quasiparticle energies in TiO(2) and suggests an optimal Hubbard parameter for future calculations.
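
    As an aside on the iterative procedure mentioned above, the following sketch shows one way such a self-consistency loop over the Hubbard parameter could be organised. It is purely schematic: gw_gap_correction is a hypothetical placeholder for a full DFT+U followed by GW calculation (which this snippet does not perform), and all numbers are invented.

      # Illustrative sketch of the iterative idea only: adjust U until the GW correction
      # to the DFT+U gap vanishes, here via simple bisection on a placeholder function.
      def find_self_consistent_u(gw_gap_correction, u_lo=0.0, u_hi=10.0, tol=0.01):
          """Bisection on U (eV) for the root of the quasiparticle gap correction."""
          f_lo = gw_gap_correction(u_lo)
          for _ in range(60):
              u_mid = 0.5 * (u_lo + u_hi)
              f_mid = gw_gap_correction(u_mid)
              if abs(f_mid) < tol:
                  return u_mid
              if f_lo * f_mid > 0:      # root lies in the upper half of the bracket
                  u_lo, f_lo = u_mid, f_mid
              else:
                  u_hi = u_mid
          return 0.5 * (u_lo + u_hi)

      # Toy stand-in: the correction decreases linearly with U and vanishes near 7.5 eV
      # (an arbitrary illustrative number, not the value obtained in the paper).
      print(find_self_consistent_u(lambda u: 0.4 - 0.053 * u))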

  8. Incorporating economies of scale in the cost estimation in economic evaluation of PCV and HPV vaccination programmes in the Philippines: a game changer?

    PubMed

    Suwanthawornkul, Thanthima; Praditsitthikorn, Naiyana; Kulpeng, Wantanee; Haasis, Manuel Alexander; Guerrero, Anna Melissa; Teerawattananon, Yot

    2018-01-01

    Many economic evaluations ignore economies of scale in their cost estimation, which means that cost parameters are assumed to have a linear relationship with the level of production. Economies of scale refer to the situation in which the average total cost of producing a product decreases with increasing volume, as variable costs fall due to more efficient operation. This study investigates the significance of applying the economies of scale concept (the saving in costs gained by an increased level of production) in the economic evaluation of pneumococcal conjugate vaccine (PCV) and human papillomavirus (HPV) vaccinations. The fixed and variable costs of providing partial (20% coverage) and universal (100% coverage) vaccination programs in the Philippines were estimated using various methods, including questionnaire surveys, focus-group discussions, and analysis of secondary data. Costing parameters were utilised as inputs for the two economic evaluation models for PCV and HPV. Incremental cost-effectiveness ratios (ICERs) and 5-year budget impacts with and without applying economies of scale to the costing parameters for partial and universal coverage were compared in order to determine the effect of these different costing approaches. The program costs of the partial coverage for the two immunisation programs were not very different when applying and not applying the economies of scale concept. Nevertheless, the program costs for universal coverage were 0.26 and 0.32 times lower when applying economies of scale compared to not applying economies of scale for the pneumococcal and human papillomavirus vaccinations, respectively. ICERs varied by up to 98% for pneumococcal vaccinations, whereas the change in ICERs in the human papillomavirus vaccination depended on both the costs of cervical cancer screening and the vaccination program. This results in a significant difference in the 5-year budget impact, amounting to reductions of 30% and 40% for the pneumococcal and human papillomavirus vaccination programs, respectively. This study demonstrated the feasibility and importance of applying economies of scale in the cost estimation in economic evaluation, which would lead to different conclusions in terms of value for money regarding the interventions, particularly with population-wide interventions such as vaccination programs. The economies of scale approach to costing is recommended for the creation of methodological guidelines for conducting economic evaluations.
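
    The arithmetic behind economies of scale can be illustrated with a toy calculation; the figures below are invented purely for demonstration and are not the Philippine costing data used in the study.

      # Illustrative arithmetic only: the average cost per vaccinated child falls as
      # coverage scales up when part of the program cost is fixed.
      fixed_cost = 5_000_000        # program overhead independent of volume (currency units)
      unit_variable_cost = 12.0     # cost per dose delivered

      for coverage, children in [("20% coverage", 400_000), ("100% coverage", 2_000_000)]:
          total = fixed_cost + unit_variable_cost * children
          print(f"{coverage}: total={total:,.0f}, average per child={total / children:.2f}")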

  9. Methods for resistive switching of memristors

    DOEpatents

    Mickel, Patrick R.; James, Conrad D.; Lohn, Andrew; Marinella, Matthew; Hsia, Alexander H.

    2016-05-10

    The present invention is directed generally to resistive random-access memory (RRAM or ReRAM) devices and systems, as well as methods of employing a thermal resistive model to understand and determine switching of such devices. In a particular example, the method includes generating a power-resistance measurement for the memristor device and applying an isothermal model to the power-resistance measurement in order to determine one or more parameters of the device (e.g., filament state).

  10. Dental wax decreases calculus accumulation in small dogs.

    PubMed

    Smith, Mark M; Smithson, Christopher W

    2014-01-01

    A dental wax was evaluated after unilateral application in 20 client-owned, mixed and purebred small dogs using a clean, split-mouth study model. All dogs had clinical signs of periodontal disease including plaque, calculus, and/or gingivitis. The wax was randomly applied to the teeth of one side of the mouth daily for 30 days while the contralateral side received no treatment. Owner parameters evaluated included compliance and a subjective assessment of ease of wax application. Gingivitis, plaque and calculus accumulation were scored at the end of the study period. Owners considered the wax easy to apply in all dogs. Compliance with no missed application days was achieved in 8 dogs. The number of missed application days had no effect on wax efficacy. There was no significant difference in gingivitis or plaque accumulation scores when comparing treated and untreated sides. Calculus accumulation scores were significantly lower (by 22.1%) for teeth receiving the dental wax.

  11. Adaptive support ventilation: State of the art review

    PubMed Central

    Fernández, Jaime; Miguelena, Dayra; Mulett, Hernando; Godoy, Javier; Martinón-Torres, Federico

    2013-01-01

    Mechanical ventilation is one of the most commonly applied interventions in intensive care units. Despite its life-saving role, it can be a risky procedure for the patient if not applied appropriately. To decrease risks, new ventilator modes continue to be developed in an attempt to improve patient outcomes. Advances in ventilator modes include closed-loop systems that facilitate ventilator manipulation of variables based on measured respiratory parameters. Adaptive support ventilation (ASV) is a positive pressure mode of mechanical ventilation that is closed-loop controlled and adjusts automatically based on the patient's requirements. In order to deliver safe and appropriate patient care, clinicians need to achieve a thorough understanding of this mode, including its effects on underlying respiratory mechanics. This article will discuss ASV while emphasizing appropriate ventilator settings, their advantages and disadvantages, their particular effects on oxygenation and ventilation, and the monitoring priorities for clinicians. PMID:23833471

  12. Optimization and validation of Folin-Ciocalteu method for the determination of total polyphenol content of Pu-erh tea.

    PubMed

    Musci, Marilena; Yao, Shicong

    2017-12-01

    Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea, aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time and allowed us to analyse a large number of samples, to discriminate among the different teas, and to assess the effect of the post-fermentation process on polyphenol content.

  13. Smart storage technologies applied to fresh foods: A review.

    PubMed

    Wang, Jingyu; Zhang, Min; Gao, Zhongxue; Adhikari, Benu

    2017-06-30

    Fresh foods are perishable, seasonal and regional in nature and their storage, transportation, and preservation of freshness are quite challenging. Smart storage technologies can detect and monitor, online, changes in the quality parameters and storage environment of fresh foods during storage, so that operators can make timely adjustments to reduce losses. This article reviews the smart storage technologies from two aspects: online detection technologies and smart monitoring technologies for fresh foods. Online detection technologies include electronic nose, nuclear magnetic resonance (NMR), near infrared spectroscopy (NIRS), hyperspectral imaging and computer vision. Smart monitoring technologies mainly include intelligent indicators for monitoring changes in the storage environment. Smart storage technologies applied to fresh foods need to be highly efficient and nondestructive and need to be competitively priced. In this work, we have critically reviewed the principles, applications, and development trends of smart storage technologies.

  14. Electron Impact Ionization of Heavier Ions including relativistic effects

    NASA Astrophysics Data System (ADS)

    Saha, B. C.; Haque, A. K. F.; Uddin, M. A.; Basak, A. K.

    2006-11-01

    The demand for electron impact ionization cross sections in diverse fields is enormous and is hard to meet by either experiment or ab initio calculation, so various analytical and semi-classical models are applied to generate ionization cross sections rapidly and accurately. We have applied a modified version [1] of the Bell et al. equations [2] including both ionic and relativistic corrections. In this report we show how to generalize the MBELL parameters to treat the dependence on the orbital quantum numbers nl; the accuracy of the procedure is tested by evaluating cross sections for various species and energies. Detailed results will be presented at the meeting. [1] A. K. F. Haque, M. A. Uddin, A. K. Basak, K. R. Karim and B. C. Saha, Phys. Rev. A73, 052703 (2006). [2] K. L. Bell, H. B. Gilbody, J. G. Hughes, A. E. Kingston, and F. J. Smith, J. Phys. Chem. Ref. Data 12, 891 (1983).

  15. The impact of changes in the rheological parameters of fine-grained hydromixtures on the efficiency of a selected industrial gravitational hydraulic transport system

    NASA Astrophysics Data System (ADS)

    Popczyk, Marcin

    2017-11-01

    Polish hard coal mines commonly use hydromixtures in their fire prevention practices. The mixtures are usually prepared from mass-produced power-generation wastes, namely ashes from power production [1]. Such hydromixtures are introduced to the caving area which is formed due to the advancement of a longwall. The first part of the article presents theoretical fundamentals of determining the parameters of gravitational hydraulic transport of water and ash hydromixtures used in mining pipeline systems. Each hydromixture produced from fine-grained wastes is characterized by specific rheological parameters that have a direct impact on the future flow parameters of a given pipeline system. Additionally, the gravitational character of the hydraulic transport generates certain limitations concerning the so-called correct hydraulic profile of the system in relation to the applied hydromixture, whose rheological parameters should ensure safe flow at a correct efficiency [2]. The paper includes an example of a gravitational hydraulic transport system, an assessment of the correctness of its hydraulic profile, and an assessment, based on laboratory tests, of the impact of the rheological parameters of fine-grained (water and ash) hydromixtures on the specified flow parameters (efficiency) of the hydromixture in the analyzed system.

  16. Determination of Solubility Parameters of Ibuprofen and Ibuprofen Lysinate.

    PubMed

    Kitak, Teja; Dumičić, Aleksandra; Planinšek, Odon; Šibanc, Rok; Srčič, Stanko

    2015-12-03

    In recent years there has been a growing interest in formulating solid dispersions, whose purposes mainly include solubility enhancement, sustained drug release and taste masking. The most notable problem with these dispersions is drug-carrier (in)solubility. Here we focus on solubility parameters as a tool for predicting the solubility of a drug in certain carriers. Solubility parameters were determined in two different ways: solely by using calculation methods, and by experimental approaches. Six different calculation methods were applied in order to calculate the solubility parameters of the drug ibuprofen and several excipients. However, we were not able to do so in the case of ibuprofen lysinate, as calculation models for salts are still not defined. Therefore, the extended Hansen approach and inverse gas chromatography (IGC) were used to evaluate the solubility parameters of ibuprofen lysinate. The obtained values of the total solubility parameter did not differ much between the two methods: by the extended Hansen approach it was δt = 31.15 MPa^0.5 and with IGC it was δt = 35.17 MPa^0.5. However, the values of the partial solubility parameters, i.e., δd, δp and δh, did differ from each other, which might be due to the complex behaviour of a salt in the presence of various solvents.

  17. SU-G-IeP3-12: Preliminary Report On the Experience of Patient Radiation Dose Monitoring and Tracking Systems; PEMNET, Radimetrics and DoseWatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P; Corwin, F; Ghita, M

    Purpose: Three patient radiation dose monitoring and tracking (PRDMT) systems have been in operation at this institution for the past 6 months. There is useful information that should be disseminated to those who are considering installation of PRDMT programs. In addition, there are “problems” uncovered in the process of estimating fluoroscopic “peak” skin dose (PSD), especially for patients who received interventional angiographic studies in conjunction with surgical procedures. Methods: Upon exporting the PRDMT data to a Microsoft Excel program, the peak skin dose can be estimated by applying various correction factors including attenuation due to the tabletop and examination mattress, table height, tabletop translation, backscatter, etc. A procedure was established to screen and divide the PRDMT-reported radiation dose and estimated PSD into three different threshold levels to assess potential skin injuries, to assist patient follow-up and risk management, and to provide radiation dosimetry information in case of a “Sentinel Event”. Results: The Radiation Dose Structured Report (RDSR) was found to be the prerequisite for the PRDMT systems to work seamlessly. Moreover, the geometrical parameters (gantry and table orientation) displayed by the equipment are not necessarily implemented in a “patient centric” manner, which could result in a large error in the PSD estimation. Since the PRDMT systems obtain their pertinent data from the DICOM tags, including the polarity (+ and − signs), the geometrical parameters need to be verified. Conclusion: PRDMT systems provide a more accurate PSD estimation than previously possible as air-kerma-area dose meters become widely implemented. However, care should be exercised to correctly apply the geometrical parameters in estimating the patient dose. In addition, further refinement is necessary for these software programs to account for all geometrical parameters, such as the tabletop translation in the z-direction in particular.

  18. Wet cupping therapy restores sympathovagal imbalances in cardiac rhythm.

    PubMed

    Arslan, Müzeyyen; Yeşilçam, Nesibe; Aydin, Duygu; Yüksel, Ramazan; Dane, Senol

    2014-04-01

    A recent study showed that cupping had therapeutic effects in rats with myocardial infarction and cardiac arrhythmias. The current study aimed to investigate the possible useful effects of cupping therapy on cardiac rhythm in terms of heart rate variability (HRV). Forty healthy participants were included. Classic wet cupping therapy was applied on five points of the back. Electrocardiography (to determine HRV) was recorded 1 hour before and 1 hour after cupping therapy. All HRV parameters increased after cupping therapy compared with before cupping therapy in healthy persons. These results indicate for the first time in humans that cupping might be cardioprotective. In this study, cupping therapy restored sympathovagal imbalances by stimulating the peripheral nervous system.

  19. Influence of Commercial Saturated Monoglyceride, Mono-/Diglycerides Mixtures, Vegetable Oil, Stirring Speed, and Temperature on the Physical Properties of Organogels

    PubMed Central

    Rocha-Amador, Omar Gerardo; Huang, Qingrong; Rocha-Guzman, Nuria Elizabeth; Moreno-Jimenez, Martha Rocio; Gonzalez-Laredo, Ruben F.

    2014-01-01

    The objective of this study was to evaluate the influence of gelator, vegetable oil, stirring speed, and temperature on the physical properties of the obtained organogels. The organogels were prepared under varying independent conditions, applying a fractional experimental design, and a rheological characterization was then developed. The physical characterization also included polarized light microscopy and calorimetric analysis. Once these data were obtained, X-ray diffraction was applied to selected samples and a microstructure lattice was confirmed. Previous work has commonly analyzed only the conditions known to affect crystallization (temperature, solvent, gelator, and cooling rate). We found that stirring speed is the most important parameter in organogel preparation. PMID:26904637

  20. Objective measurement of the optical image quality in the human eye

    NASA Astrophysics Data System (ADS)

    Navarro, Rafael M.

    2001-05-01

    This communication reviews some recent studies on the optical performance of the human eye. Although the retinal image cannot be recorded directly, different objective methods have been developed which permit the determination of optical quality parameters, such as the Point Spread Function (PSF), the Modulation Transfer Function (MTF), the geometrical ray aberrations or the wavefront distortions, in the living human eye. These methods have been applied in both basic and applied research. This includes the measurement of the optical performance of the eye across the visual field, the optical quality of eyes with intraocular lens implants, the aberrations induced by LASIK refractive surgery, and the manufacture of customized phase plates to compensate for the wavefront aberration of the eye.

  1. Donor impurity-related photoionization cross section in GaAs cone-like quantum dots under applied electric field

    NASA Astrophysics Data System (ADS)

    Iqraoun, E.; Sali, A.; Rezzouk, A.; Feddi, E.; Dujardin, F.; Mora-Ramos, M. E.; Duque, C. A.

    2017-06-01

    The donor impurity-related electron states in GaAs cone-like quantum dots under the influence of an externally applied static electric field are theoretically investigated. Calculations are performed within the effective mass and parabolic band approximations, using the variational procedure to include the electron-impurity correlation effects. The uncorrelated Schrödinger-like electron states are obtained in quasi-analytical form and the entire electron-impurity correlated states are used to calculate the photoionisation cross section. Results for the electron state energies and the photoionisation cross section are reported as functions of the main geometrical parameters of the cone-like structures as well as of the electric field strength.

  2. A Bayesian Retrieval of Greenland Ice Sheet Internal Temperature from Ultra-wideband Software-defined Microwave Radiometer (UWBRAD) Measurements

    NASA Astrophysics Data System (ADS)

    Duan, Y.; Durand, M. T.; Jezek, K. C.; Yardim, C.; Bringer, A.; Aksoy, M.; Johnson, J. T.

    2017-12-01

    The ultra-wideband software-defined microwave radiometer (UWBRAD) is designed to provide an ice sheet internal temperature product by measuring low-frequency microwave emission. Twelve channels ranging from 0.5 to 2.0 GHz are covered by the instrument. An airborne demonstration over Greenland was carried out in September 2016, providing the first ultra-wideband radiometer observations of geophysical scenes, including ice sheets. Another flight is planned for September 2017 to acquire measurements over the central ice sheet. A Bayesian framework is designed to retrieve the ice sheet internal temperature from simulated UWBRAD brightness temperature (Tb) measurements over the Greenland flight path with limited prior information on the ground. A 1-D heat-flow model, the Robin model, was used to model the ice sheet internal temperature profile from ground information. Synthetic UWBRAD Tb observations were generated via the partially coherent radiation transfer model, which utilizes the Robin model temperature profile and an exponential fit of ice density from borehole measurements as input, and were corrupted with noise. The effective surface temperature, the geothermal heat flux, the variance of the upper-layer ice density, and the variance of fine-scale density variations in the deeper ice sheet were treated as unknown variables within the retrieval framework. Each parameter is defined with its possible range and set to be uniformly distributed. The Markov Chain Monte Carlo (MCMC) approach is applied to make the unknown parameters randomly walk in the parameter space. We investigate whether the variables can be improved over their priors using the MCMC approach and thereby contribute to the temperature retrieval. UWBRAD measurements acquired near Camp Century in 2016 were also processed with the MCMC approach to examine the framework in the presence of scattering effects. The fine-scale density fluctuation is an important parameter: it is the most sensitive yet most poorly known parameter in the estimation framework, and including it greatly improved the retrieval results. The ice sheet vertical temperature profile, especially the 10 m temperature, can be well retrieved via the MCMC process. Future retrieval work will apply the Bayesian approach to UWBRAD airborne measurements.
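
    A random-walk Metropolis sampler of the kind described above can be sketched as follows. This is not the UWBRAD retrieval code: the forward model, prior ranges, noise level, and parameter names below are invented placeholders standing in for the radiative-transfer model and Greenland priors used in the study.

      # Illustrative sketch: random-walk Metropolis over two unknowns with uniform priors.
      import numpy as np

      rng = np.random.default_rng(0)

      def forward_model(theta, freqs):
          """Hypothetical stand-in for the Tb simulator (surface temperature, heat flux)."""
          t_surf, q_geo = theta
          return t_surf + 20.0 * q_geo * np.sqrt(freqs)   # arbitrary smooth dependence

      def log_posterior(theta, tb_obs, freqs, bounds, sigma=1.0):
          for value, (lo, hi) in zip(theta, bounds):
              if not lo <= value <= hi:
                  return -np.inf                            # uniform prior
          resid = tb_obs - forward_model(theta, freqs)
          return -0.5 * np.sum((resid / sigma) ** 2)

      def metropolis(tb_obs, freqs, bounds, steps=5000, scale=(0.5, 0.01)):
          theta = np.array([np.mean(b) for b in bounds])
          logp = log_posterior(theta, tb_obs, freqs, bounds)
          chain = []
          for _ in range(steps):
              proposal = theta + rng.normal(0.0, scale)
              logp_new = log_posterior(proposal, tb_obs, freqs, bounds)
              if np.log(rng.uniform()) < logp_new - logp:   # accept/reject
                  theta, logp = proposal, logp_new
              chain.append(theta.copy())
          return np.array(chain)

      freqs = np.linspace(0.5, 2.0, 12)                      # twelve channels, GHz
      truth = np.array([245.0, 0.06])
      tb_obs = forward_model(truth, freqs) + rng.normal(0, 1.0, freqs.size)
      chain = metropolis(tb_obs, freqs, bounds=[(230.0, 260.0), (0.03, 0.09)])
      print(chain[2500:].mean(axis=0))                       # posterior mean after burn-in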

  3. The modified extended Hansen method to determine partial solubility parameters of drugs containing a single hydrogen bonding group and their sodium derivatives: benzoic acid/Na and ibuprofen/Na.

    PubMed

    Bustamante, P; Pena, M A; Barra, J

    2000-01-20

    Sodium salts are often used in drug formulation but their partial solubility parameters are not available. Sodium alters the physical properties of the drug and the knowledge of these parameters would help to predict adhesion properties that cannot be estimated using the solubility parameters of the parent acid. This work tests the applicability of the modified extended Hansen method to determine partial solubility parameters of sodium salts of acidic drugs containing a single hydrogen bonding group (ibuprofen, sodium ibuprofen, benzoic acid and sodium benzoate). The method uses a regression analysis of the logarithm of the experimental mole fraction solubility of the drug against the partial solubility parameters of the solvents, using models with three and four parameters. The solubility of the drugs was determined in a set of solvents representative of several chemical classes, ranging from low to high solubility parameter values. The best results were obtained with the four parameter model for the acidic drugs and with the three parameter model for the sodium derivatives. The four parameter model includes both a Lewis-acid and a Lewis-base term. Since the Lewis acid properties of the sodium derivatives are blocked by sodium, the three parameter model is recommended for this kind of compound. Comparison of the parameters obtained shows that sodium greatly changes the polar parameters whereas the dispersion parameter is not much affected. Consequently, the total solubility parameters of the salts are larger than those of the parent acids, in good agreement with the larger hydrophilicity expected from the introduction of sodium. The results indicate that the modified extended Hansen method can be applied to determine the partial solubility parameters of acidic drugs and their sodium salts.
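
    The regression step described above can be sketched as a simple least-squares fit of log solubility against solvent partial solubility parameters, comparing a three-parameter and a four-parameter (acid/base split) design. The solvent values and solubilities below are rough illustrative numbers, not the experimental data of the paper.

      # Illustrative sketch of the regression step only (invented data).
      import numpy as np

      # columns: delta_d, delta_p, delta_h, delta_a (Lewis acid), delta_b (Lewis base)
      solvents = np.array([
          [15.5, 16.0, 42.3, 34.2, 37.4],   # water-like
          [15.8,  8.8, 19.4, 17.0, 14.5],   # ethanol-like
          [15.3, 12.3, 22.3, 17.4, 16.0],   # methanol-like
          [17.8,  0.6,  2.0,  0.5,  0.5],   # toluene-like
          [15.8,  5.3,  7.2,  4.3,  6.0],   # ethyl-acetate-like
          [18.4,  3.1,  5.7,  2.0,  0.6],   # chloroform-like
      ])
      ln_x = np.array([-7.9, -2.1, -2.8, -4.5, -2.4, -1.5])   # invented log mole-fraction solubilities

      def fit(columns):
          design = np.column_stack([np.ones(len(ln_x)), solvents[:, columns]])
          coef, *_ = np.linalg.lstsq(design, ln_x, rcond=None)
          pred = design @ coef
          r2 = 1.0 - np.sum((ln_x - pred) ** 2) / np.sum((ln_x - ln_x.mean()) ** 2)
          return coef, r2

      print("3-parameter model:", fit([0, 1, 2]))       # dispersion, polar, hydrogen bonding
      print("4-parameter model:", fit([0, 1, 3, 4]))    # dispersion, polar, acid, base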

  4. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
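
    A minimal sketch of the combination described above (blind source separation followed by a PSD-based transmissibility) is given below. It is not the authors' algorithm: FastICA is used as a generic BSS method, and the two-mode mixing model is invented purely for demonstration.

      # Illustrative sketch: BSS on measured responses, then a PSD-based transmissibility.
      import numpy as np
      from scipy.signal import welch
      from sklearn.decomposition import FastICA

      fs = 512.0
      t = np.arange(0, 20, 1 / fs)
      rng = np.random.default_rng(1)

      # Two lightly damped "modal" responses with closely spaced frequencies,
      # mixed into three measurement channels plus noise.
      sources = np.column_stack([
          np.exp(-0.3 * t) * np.sin(2 * np.pi * 12.0 * t),
          np.exp(-0.4 * t) * np.sin(2 * np.pi * 12.8 * t),
      ])
      mixing = np.array([[1.0, 0.6], [0.4, 1.0], [0.9, 0.9]])
      responses = sources @ mixing.T + 0.05 * rng.standard_normal((t.size, 3))

      # Blind source separation on the measured responses.
      recovered = FastICA(n_components=2, random_state=0).fit_transform(responses)

      # Transmissibility in the spirit of the paper: PSD of a mixed (measured) channel
      # over the PSD of a single recovered source.
      f, psd_mixed = welch(responses[:, 0], fs=fs, nperseg=1024)
      _, psd_source = welch(recovered[:, 0], fs=fs, nperseg=1024)
      transmissibility = psd_mixed / (psd_source + 1e-15)
      band = (f > 5.0) & (f < 20.0)
      print("peak in band:", f[band][np.argmax(transmissibility[band])], "Hz")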

  5. Response of Velocity Anisotropy of Shale Under Isotropic and Anisotropic Stress Fields

    NASA Astrophysics Data System (ADS)

    Li, Xiaying; Lei, Xinglin; Li, Qi

    2018-03-01

    We investigated the responses of P-wave velocity and associated anisotropy in terms of Thomsen's parameters to isotropic and anisotropic stress fields on Longmaxi shales cored along different directions. An array of piezoelectric ceramic transducers allows us to measure P-wave velocities along numerous different propagation directions. Anisotropic parameters, including the P-wave velocity α along a symmetry axis, Thomsen's parameters ɛ and δ, and the orientation of the symmetry axis, could then be extracted by fitting Thomsen's weak anisotropy model to the experimental data. The results indicate that Longmaxi shale displays weakly intrinsic velocity anisotropy with Thomsen's parameters ɛ and δ being approximately 0.05 and 0.15, respectively. The isotropic stress field has only a slight effect on velocity and associated anisotropy in terms of Thomsen's parameters. In contrast, both the magnitude and orientation of the anisotropic stress field with respect to the shale fabric are important in controlling the evolution of velocity and associated anisotropy in a changing stress field. For shale with bedding-parallel loading, velocity anisotropy is enhanced because velocities with smaller angles relative to the maximum stress increase significantly during the entire loading process, whereas those with larger angles increase slightly before the yield stress and afterwards decrease with the increasing differential stress. For shale with bedding-normal loading, anisotropy reversal is observed, and the anisotropy is progressively modified by the applied differential stress. Before reaching the yield stress, velocities with smaller angles relative to the maximum stress increase more significantly and even exceed the level of those with larger angles. After reaching the yield stress, velocities with larger angles decrease more significantly. Microstructural features such as the closure and generation of microcracks can explain the modification of the velocity anisotropy due to the applied stress anisotropy.
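
    Thomsen's weak-anisotropy expression for the P-wave velocity can be fitted to direction-dependent velocity measurements with a few lines of code, as sketched below. The sketch assumes the symmetry axis is known (the paper also fits its orientation), and the synthetic data merely mimic the reported magnitudes (alpha = 4.2 km/s, epsilon = 0.05, delta = 0.15), which are illustrative here.

      # Illustrative sketch: fit V(theta) = alpha*(1 + delta*sin^2*cos^2 + eps*sin^4)
      # to P-wave velocities measured along several propagation directions.
      import numpy as np
      from scipy.optimize import curve_fit

      def thomsen_vp(theta, alpha, eps, delta):
          s, c = np.sin(theta), np.cos(theta)
          return alpha * (1.0 + delta * s**2 * c**2 + eps * s**4)

      rng = np.random.default_rng(2)
      angles = np.deg2rad(np.linspace(0, 90, 13))             # angle from the symmetry axis
      v_obs = thomsen_vp(angles, 4.2, 0.05, 0.15) + rng.normal(0, 0.01, angles.size)

      (alpha, eps, delta), _ = curve_fit(thomsen_vp, angles, v_obs, p0=[4.0, 0.0, 0.0])
      print(f"alpha={alpha:.3f} km/s, epsilon={eps:.3f}, delta={delta:.3f}")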

  6. Optimal trace inequality constants for interior penalty discontinuous Galerkin discretisations of elliptic operators using arbitrary elements with non-constant Jacobians

    NASA Astrophysics Data System (ADS)

    Owens, A. R.; Kópházi, J.; Eaton, M. D.

    2017-12-01

    In this paper, a new method to numerically calculate the trace inequality constants, which arise in the calculation of penalty parameters for interior penalty discretisations of elliptic operators, is presented. These constants are provably optimal for the inequality of interest. As their calculation is based on the solution of a generalised eigenvalue problem involving the volumetric and face stiffness matrices, the method is applicable to any element type for which these matrices can be calculated, including standard finite elements and the non-uniform rational B-splines of isogeometric analysis. In particular, the presented method does not require the Jacobian of the element to be constant, and so can be applied to a much wider variety of element shapes than are currently available in the literature. Numerical results are presented for a variety of finite element and isogeometric cases. When the Jacobian is constant, it is demonstrated that the new method produces lower penalty parameters than existing methods in the literature in all cases, which translates directly into savings in the solution time of the resulting linear system. When the Jacobian is not constant, it is shown that the naive application of existing approaches can result in penalty parameters that do not guarantee coercivity of the bilinear form, and by extension, the stability of the solution. The method of manufactured solutions is applied to a model reaction-diffusion equation with a range of parameters, and it is found that using penalty parameters based on the new trace inequality constants result in better conditioned linear systems, which can be solved approximately 11% faster than those produced by the methods from the literature.
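
    The linear-algebra step at the core of the method can be sketched as follows: given a face matrix and a volumetric stiffness matrix for one element (assembled elsewhere), the optimal constant is the largest finite eigenvalue of the generalised eigenvalue problem between them. The small matrices below are invented placeholders, not real element matrices, and the kernel of the stiffness matrix (constant functions) is projected out before solving, which is one possible way to handle the singularity.

      # Illustrative sketch of the generalised eigenvalue computation only.
      import numpy as np
      from scipy.linalg import eigh, null_space

      def optimal_trace_constant(face_mat, stiff_mat):
          kernel = null_space(stiff_mat)                    # e.g. the constant mode
          basis = null_space(kernel.T) if kernel.size else np.eye(stiff_mat.shape[0])
          f_red = basis.T @ face_mat @ basis                # restrict both forms to the
          k_red = basis.T @ stiff_mat @ basis               # complement of the kernel
          eigvals = eigh(f_red, k_red, eigvals_only=True)
          return eigvals[-1]                                # largest generalised eigenvalue

      # Toy 3x3 placeholder matrices with a one-dimensional kernel in K.
      grad = np.array([[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]])
      K = grad.T @ grad                                     # singular: constants in the kernel
      F = np.diag([1.0, 0.0, 1.0])                          # "boundary" contributions
      print(optimal_trace_constant(F, K))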

  7. Uncertainty analysis of an inflow forecasting model: extension of the UNEEC machine learning-based method

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri

    2010-05-01

    This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, first we use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method that encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide more realistic estimates of model predictions.
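
    The workflow described above (Monte Carlo sampling of parameters, pooled residual quantiles, then a machine-learning encapsulation) can be sketched on a toy model. This is not the authors' code: the "hydrologic" model is a one-parameter linear toy, and k-means clustering plus a nearest-neighbour regressor stand in for whichever clustering and machine-learning model is actually used.

      # Illustrative sketch of the UNEEC-P idea on a toy heteroscedastic problem.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(3)
      x = rng.uniform(0, 10, 300)                       # model input (e.g., a rainfall index)
      y_obs = 2.0 * x + rng.normal(0, 1 + 0.3 * x)      # heteroscedastic "observations"

      # Monte Carlo over the model parameter to include parameter uncertainty.
      slopes = rng.normal(2.0, 0.1, 100)
      residuals = np.concatenate([y_obs - s * x for s in slopes])
      inputs = np.tile(x, slopes.size).reshape(-1, 1)

      # Cluster the input space and compute empirical residual quantiles per cluster.
      km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(inputs)
      q05 = np.array([np.quantile(residuals[km.labels_ == c], 0.05) for c in range(5)])
      q95 = np.array([np.quantile(residuals[km.labels_ == c], 0.95) for c in range(5)])

      # Encapsulate the quantiles in a machine-learning model of the inputs.
      lower = KNeighborsRegressor(n_neighbors=1).fit(km.cluster_centers_, q05)
      upper = KNeighborsRegressor(n_neighbors=1).fit(km.cluster_centers_, q95)
      x_new = np.array([[2.0], [8.0]])
      print(np.c_[lower.predict(x_new), upper.predict(x_new)])   # wider bounds at larger x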

  8. Joint models for longitudinal and time-to-event data: a review of reporting quality with a view to meta-analysis.

    PubMed

    Sudell, Maria; Kolamunnage-Dona, Ruwanthi; Tudur-Smith, Catrin

    2016-12-05

    Joint models for longitudinal and time-to-event data are commonly used to simultaneously analyse correlated data in single study cases. Synthesis of evidence from multiple studies using meta-analysis is a natural next step but its feasibility depends heavily on the standard of reporting of joint models in the medical literature. During this review we aim to assess the current standard of reporting of joint models applied in the literature, and to determine whether current reporting standards would allow or hinder future aggregate data meta-analyses of model results. We undertook a literature review of non-methodological studies that involved joint modelling of longitudinal and time-to-event medical data. Study characteristics were extracted and an assessment of whether separate meta-analyses for longitudinal, time-to-event and association parameters were possible was made. The 65 studies identified used a wide range of joint modelling methods in a selection of software. Identified studies concerned a variety of disease areas. The majority of studies reported adequate information to conduct a meta-analysis (67.7% for longitudinal parameter aggregate data meta-analysis, 69.2% for time-to-event parameter aggregate data meta-analysis, 76.9% for association parameter aggregate data meta-analysis). In some cases model structure was difficult to ascertain from the published reports. Whilst extraction of sufficient information to permit meta-analyses was possible in a majority of cases, the standard of reporting of joint models should be maintained and improved. Recommendations for future practice include clear statement of model structure, of values of estimated parameters, of software used and of statistical methods applied.

  9. Modular Spectral Inference Framework Applied to Young Stars and Brown Dwarfs

    NASA Technical Reports Server (NTRS)

    Gully-Santiago, Michael A.; Marley, Mark S.

    2017-01-01

    In practice, synthetic spectral models are imperfect, causing inaccurate estimates of stellar parameters. Using forward modeling and statistical inference, we derive accurate stellar parameters for a given observed spectrum by emulating a grid of precomputed spectra to track uncertainties. For brown dwarfs, the spectral inference builds on synthetic spectral models (Marley et al. 1996, 2014); the newest grid spans a massive multi-dimensional parameter space and is applied to IGRINS spectra, improving atmospheric models ahead of JWST. When the framework is applied to young stars (~10 Myr) with large starspots, the spot properties can be measured spectroscopically, especially in the near-IR with IGRINS.

  10. Influence of classical and rock music on red blood cell rheological properties in rats.

    PubMed

    Erken, Gulten; Bor Kucukatay, Melek; Erken, Haydar Ali; Kursunluoglu, Raziye; Genc, Osman

    2008-01-01

    A number of studies have reported physiological effects of music. Different types of music have been found to induce different alterations. Although some physiological and psychological parameters have been demonstrated to be influenced by music, the effect of music on hemorheological parameters such as red blood cell (RBC) deformability and aggregation is unknown. This study aimed at investigating the effects of classical and rock music on hemorheological parameters in rats. Twenty-eight rats were divided into four groups: the control, noise-applied, and the classical music- and rock music-applied groups. Taped classical or rock music was played repeatedly for 1 hour a day for 2 weeks, and a 95-dB machine sound was applied to the noise-applied rats during the same period. RBC deformability and aggregation were measured using an ektacytometer. RBC deformability was found to be increased in the classical music group. Exposure to both classical and rock music resulted in a decrement in erythrocyte aggregation, but the decline in RBC aggregation was of a higher degree of significance in the classical music group. Exposure to noise did not have any effect on the parameters studied. The results of this study indicate that the alterations in hemorheological parameters were more pronounced in the classical music group compared with the rock music group.

  11. White LED compared with other light sources: age-dependent photobiological effects and parameters for evaluation.

    PubMed

    Rebec, Katja Malovrh; Klanjšek-Gunde, Marta; Bizjak, Grega; Kobav, Matej B

    2015-01-01

    Ergonomic science for work and living places should appraise human factors concerning the photobiological effects of lighting. Thorough knowledge on this subject has been gained in the past; however, few attempts have been made to propose suitable evaluation parameters. The blue light hazard and its influence on melatonin secretion in age-dependent observers are considered in this paper, and parameters for their evaluation are proposed. New parameters were applied to analyse the effects of white light-emitting diode (LED) light sources and to compare them with the currently applied light sources. The photobiological effects of light sources with the same illuminance but different spectral power distribution were determined for healthy 4-76-year-old observers. The suitability of the new parameters is discussed. Correlated colour temperature, the only parameter currently used to assess photobiological effects, is evaluated and compared to the new parameters.

  12. Parameter regionalisation methods for a semi-distributed rainfall-runoff model: application to a Northern Apennine region

    NASA Astrophysics Data System (ADS)

    Neri, Mattia; Toth, Elena

    2017-04-01

    The study presents the implementation of different regionalisation approaches for the transfer of model parameters from similar and/or neighbouring gauged basins to an ungauged catchment, and in particular it uses a semi-distributed, continuously simulating conceptual rainfall-runoff model for simulating daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternatively, as donors and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into zones of different altitude that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has been, so far, very limited, even if snow accumulation and melting phenomena do have an important role in the study basins. Two methods, both widely applied in the recent literature, are applied for regionalising the model: i) "parameters averaging", where each parameter is obtained as a weighted mean of the parameters obtained, through calibration, on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire set of parameters of each donor basin and the simulated outputs are then averaged. In the first approach, the parameters are regionalised independently of each other; in the second, the correlation among the parameters is maintained. Since the model is a semi-distributed one, where each elevation zone contributes separately, the study also tests a modified version of the second approach ("output averaging"), where each zone is considered as an autonomous entity, whose parameters are transposed to the ungauged sub-basin corresponding to the same elevation zone. The study also explores the choice of the weights to be used for averaging the parameters (in the "parameters averaging" approach) or for averaging the simulated streamflow (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance of the ungauged basin/zone to the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
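
    The contrast between the two regionalisation schemes can be sketched with a toy one-parameter nonlinear "model" standing in for the semi-distributed HBV/TUW model; the donor parameters, descriptor distances, and inverse-distance weights below are invented for demonstration.

      # Illustrative sketch of "parameters averaging" vs "output averaging".
      import numpy as np

      def model(param, forcing):
          """Toy nonlinear rainfall-runoff stand-in (threshold plus exponent)."""
          return np.maximum(forcing - param, 0.0) ** 1.5

      forcing = np.array([1.0, 3.0, 0.5, 2.0])              # e.g., daily rainfall
      donor_params = np.array([0.55, 0.70, 0.40])           # calibrated on gauged donors
      distances = np.array([10.0, 25.0, 40.0])              # descriptor-space distance to target
      weights = (1.0 / distances) / np.sum(1.0 / distances) # inverse-distance similarity weights

      # (i) Parameter averaging: average the donor parameters, then run the model once.
      q_param_avg = model(np.sum(weights * donor_params), forcing)

      # (ii) Output averaging: run the model with each donor's parameters, average the outputs.
      donor_runs = np.array([model(p, forcing) for p in donor_params])
      q_output_avg = np.sum(weights[:, None] * donor_runs, axis=0)

      print(q_param_avg)
      print(q_output_avg)   # differs from (i) because the model is nonlinear in its parameter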

  13. Estimating representative background PM2.5 concentration in heavily polluted areas using baseline separation technique and chemical mass balance model

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng

    2018-02-01

    The determination of the background concentration of PM2.5 is important to understand the contribution of local emission sources to the total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques for estimating the PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, the one-parameter algorithm, and the Boughton two-parameter algorithm), sliding interval and smoothed minima, were applied to one-year PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of the background PM concentration to ambient pollution was at a level comparable to the contribution obtained in the previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background level of other air pollutants.
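
    For readers unfamiliar with recursive digital filters, the following sketch shows the widely used Lyne-Hollick one-parameter filter written for a generic concentration or streamflow series; the filter parameter alpha, the number of passes and the constraint handling are assumptions to be set from the regression analysis described above, not the paper's exact settings.

    ```python
    import numpy as np

    def lyne_hollick_baseline(y, alpha=0.925, passes=3):
        """Separate a slowly varying 'baseline' component from a time series y
        using the Lyne-Hollick one-parameter recursive digital filter."""
        y = np.asarray(y, dtype=float)
        baseline = y.copy()
        for p in range(passes):
            x = baseline if p % 2 == 0 else baseline[::-1]   # alternate filter direction
            quick = np.zeros_like(x)
            for t in range(1, len(x)):
                quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (x[t] - x[t - 1])
                quick[t] = min(max(quick[t], 0.0), x[t])      # keep 0 <= baseline <= y
            base = x - quick
            baseline = base if p % 2 == 0 else base[::-1]
        return baseline
    ```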

  14. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as the linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the dose-averaged LET (LET_d) distribution of all protons and deuterons to that of the primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated into a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.
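
    A minimal sketch of how such a correction factor could be formed from Monte Carlo scoring output is given below: the dose-averaged LET is the energy-deposit-weighted mean of the track LETs, and L_sec is the ratio of the all-particle value to the primary-proton-only value at a given depth. The per-track arrays are hypothetical inputs; the paper's precise scoring scheme and its polynomial parameterization of L_sec are not reproduced here.

    ```python
    import numpy as np

    def dose_averaged_let(edep, let):
        """Dose-averaged LET: energy-deposit-weighted mean of the track LET values."""
        edep, let = np.asarray(edep, float), np.asarray(let, float)
        return np.sum(edep * let) / np.sum(edep)

    def l_sec(edep_all, let_all, edep_primary, let_primary):
        """Correction factor at one depth: LET_d of all protons and deuterons
        divided by LET_d of the primary protons only."""
        return dose_averaged_let(edep_all, let_all) / dose_averaged_let(edep_primary, let_primary)
    ```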

  15. The estimation of soil water fluxes using lysimeter data

    NASA Astrophysics Data System (ADS)

    Wegehenkel, M.

    2009-04-01

    The validation of soil water balance models regarding soil water fluxes in the field is still a problem. This requires time series of measured model outputs. In our study, a soil water balance model was validated using lysimeter time series of measured model outputs. The soil water balance model used in our study was the Hydrus-1D-model. This model was tested by a comparison of simulated with measured daily rates of actual evapotranspiration, soil water storage, groundwater recharge and capillary rise. These rates were obtained from twelve weighable lysimeters with three different soils and two different lower boundary conditions for the time period from January 1, 1996 to December 31, 1998. In that period, grass vegetation was grown on all lysimeters. These lysimeters are located in Berlin, Germany. One potential source of error in lysimeter experiments is preferential flow caused by an artificial channeling of water due to the occurrence of air space between the soil monolith and the inside wall of the lysimeters. To analyse such sources of errors, Hydrus-1D was applied with different modelling procedures. The first procedure consists of a general uncalibrated application of Hydrus-1D. The second one includes a calibration of soil hydraulic parameters via inverse modelling of different percolation events with Hydrus-1D. In the third procedure, the model DUALP_1D was applied with the optimized hydraulic parameter set to test the hypothesis of the existence of preferential flow paths in the lysimeters. The results of the different modelling procedures indicated that, in addition to a precise determination of the soil water retention functions, vegetation parameters such as rooting depth should also be taken into account. Without such information, the rooting depth is a calibration parameter. However, in some cases, the uncalibrated application of both models also led to an acceptable fit between measured and simulated model outputs.

  16. Characterization of dielectric materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Danny J.; Babinec, Susan; Hagans, Patrick L.

    2017-06-27

    A system and a method for characterizing a dielectric material are provided. The system and method generally include applying an excitation signal to electrodes on opposing sides of the dielectric material to evaluate a property of the dielectric material. The method can further include measuring the capacitive impedance across the dielectric material, and determining a variation in the capacitive impedance with respect to either or both of a time domain and a frequency domain. The measured property can include pore size and surface imperfections. The method can still further include modifying a processing parameter as the dielectric material is formed in response to the detected variations in the capacitive impedance, which can correspond to a non-uniformity in the dielectric material.

  17. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  18. Coal liquefaction process streams characterization and evaluation: Analysis of Black Thunder coal and liquefaction products from HRI Bench Unit Run CC-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, R.J.; Solum, M.S.

    This study was designed to apply 13C-nuclear magnetic resonance (NMR) spectrometry to the analysis of direct coal liquefaction process-stream materials. 13C-NMR was shown to have a high potential for application to direct coal liquefaction-derived samples in Phase II of this program. In this Phase III project, 13C-NMR was applied to a set of samples derived from the HRI Inc. bench-scale liquefaction Run CC-15. The samples include the feed coal, net products and intermediate streams from three operating periods of the run. High-resolution 13C-NMR data were obtained for the liquid samples and solid-state CP/MAS 13C-NMR data were obtained for the coal and filter-cake samples. The 13C-NMR technique is used to derive a set of twelve carbon structural parameters for each sample (CONSOL Table A). Average molecular structural descriptors can then be derived from these parameters (CONSOL Table B).

  19. The study on dynamic properties of monolithic ball end mills with various slenderness

    NASA Astrophysics Data System (ADS)

    Wojciechowski, Szymon; Tabaszewski, Maciej; Krolczyk, Grzegorz M.; Maruda, Radosław W.

    2017-10-01

    The reliable determination of the modal mass, damping and stiffness coefficients (modal parameters) for a particular machine-toolholder-tool system is essential for the accurate estimation of vibrations, stability and thus the machined surface finish formed during the milling process. Therefore, this paper focuses on the analysis of the dynamical properties of ball end mills. The tools investigated during this study are monolithic ball end mills with different slenderness values, made of coated cemented carbide. These kinds of tools are very often applied during the precise milling of curvilinear surfaces. The research program included an impulse test carried out for the investigated tools clamped in a hydraulic toolholder. The obtained modal parameters were further applied in the developed model of the tool's instantaneous deflection, in order to estimate the vibrations of the tool's working part during precise milling. The application of the proposed dynamics model also involved the determination of instantaneous cutting forces on the basis of the mechanistic approach. The research revealed that the ball end mill's slenderness can be considered an important indicator of milling dynamics and machined surface quality.

  20. Dealing with the time-varying parameter problem of robot manipulators performing path tracking tasks

    NASA Technical Reports Server (NTRS)

    Song, Y. D.; Middleton, R. H.

    1992-01-01

    Many robotic applications involve time-varying payloads during the operation of the robot. It is therefore of interest to consider control schemes that deal with time-varying parameters. Using the properties of the element-by-element (or Hadamard) product of matrices, we obtain the robot dynamics in parameter-isolated form, from which a new control scheme is developed. The proposed controller yields zero asymptotic tracking errors when applied to robotic systems with time-varying parameters by using a switching-type control law. The results obtained are global in the initial state of the robot, and can be applied to rapidly varying systems.

  1. Modeling the X-Ray Process, and X-Ray Flaw Size Parameter for POD Studies

    NASA Technical Reports Server (NTRS)

    Khoshti, Ajay

    2014-01-01

    Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw detection study called probability of detection (POD) study. In many instances the NDE flaw detectability is given as a flaw size such as crack length. The flaw is either a crack or behaving like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter, given here, takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors such as set-up parameters including X-ray energy, exposure, detector sensitivity, and material type that are not accounted for in the flaw size parameter may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conduct the POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of effect of interaction between various geometric parameters on the flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.

  2. Modeling the X-ray Process, and X-ray Flaw Size Parameter for POD Studies

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2014-01-01

    Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw detection study called probability of detection (POD) study. In many instances, the NDE flaw detectability is given as a flaw size such as crack length. The flaw is either a crack or behaving like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter, given here, takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors such as set-up parameters, including X-ray energy, exposure, detector sensitivity, and material type that are not accounted for in the flaw size parameter may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conduct the POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of effect of interaction between various geometric parameters on the flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.

  3. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to including uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.

  4. Automated model optimisation using the Cylc workflow engine (Cyclops v1.0)

    NASA Astrophysics Data System (ADS)

    Gorman, Richard M.; Oliver, Hilary J.

    2018-06-01

    Most geophysical models include many parameters that are not fully determined by theory and can be tuned to improve the model's agreement with available data. We might attempt to automate this tuning process in an objective way by employing an optimisation algorithm to find the set of parameters that minimises a cost function derived from comparing model outputs with measurements. A number of algorithms are available for solving optimisation problems, in various programming languages, but interfacing such software to a complex geophysical model simulation presents certain challenges. To tackle this problem, we have developed an optimisation suite (Cyclops) based on the Cylc workflow engine that implements a wide selection of optimisation algorithms from the NLopt Python toolbox (Johnson, 2014). The Cyclops optimisation suite can be used to calibrate any modelling system that has itself been implemented as a (separate) Cylc model suite, provided it includes computation and output of the desired scalar cost function. A growing number of institutions are using Cylc to orchestrate complex distributed suites of interdependent cycling tasks within their operational forecast systems, and in such cases application of the optimisation suite is particularly straightforward. As a test case, we applied Cyclops to calibrate a global implementation of the WAVEWATCH III (v4.18) third-generation spectral wave model, forced by ERA-Interim input fields. This was calibrated over a 1-year period (1997), before applying the calibrated model to a full (1979-2016) wave hindcast. The chosen error metric was the spatial average of the root mean square error of hindcast significant wave height compared with collocated altimeter records. We describe the results of a calibration in which up to 19 parameters were optimised.
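
    The standalone sketch below illustrates the kind of NLopt-driven loop that such a suite automates: a derivative-free algorithm repeatedly evaluates a black-box cost function that runs the model and returns the scalar error metric. The command run_wave_hindcast, the file cost.txt, the bounds and the parameter count are all placeholders; in Cyclops itself the model runs are orchestrated as Cylc tasks rather than called from a Python callback.

    ```python
    import subprocess
    import nlopt  # NLopt Python bindings, the optimisation toolbox used by Cyclops

    def cost(params, grad):
        """Run one (hypothetical) model evaluation and return its scalar error metric,
        e.g. the spatially averaged RMSE of hindcast significant wave height."""
        subprocess.run(["run_wave_hindcast", *map(str, params)], check=True)
        with open("cost.txt") as f:
            return float(f.read())

    n_params = 3
    opt = nlopt.opt(nlopt.LN_SBPLX, n_params)      # a derivative-free local algorithm
    opt.set_min_objective(cost)
    opt.set_lower_bounds([0.0] * n_params)
    opt.set_upper_bounds([1.0] * n_params)
    opt.set_maxeval(200)
    best = opt.optimize([0.5] * n_params)          # initial guess
    print("optimised parameters:", best, "cost:", opt.last_optimum_value())
    ```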

  5. Acousto-ultrasonics to Assess Material and Structural Properties

    NASA Technical Reports Server (NTRS)

    Kautz, Harold E.

    2002-01-01

    This report was created to serve as a manual for applying the acousto-ultrasonic NDE method, as practiced at NASA Glenn, to the study of materials and structures for a wide range of applications. Three state-of-the-art acousto-ultrasonic (A-U) analysis parameters, the ultrasonic decay (UD) rate, the mean time (or skewing factor, "s"), and the centroid of the power spectrum, f_c, have been studied and applied at GRC for NDE interrogation of various materials and structures of aerospace interest. In addition, a unique application of Lamb wave analysis is shown; an appendix gives a brief overview of Lamb wave analysis. This paper presents the analysis employed to calculate these parameters and the development and reasoning behind their use. It also discusses the planning of A-U measurements for materials and structures to be studied. Types of transducer coupling are discussed, including contact and non-contact via laser and air. Experimental planning includes matching the transducer frequency range to the material and geometry of the specimen to be studied. The effect on results of initially zeroing the DC component of the ultrasonic waveform is compared with not doing so. The application of these analysis parameters to real specimens, addressing a wide range of interrogation problems, is shown for five cases: Case 1: Differences in density in [0] SiC/RBSN ceramic matrix composite. Case 2: Effect of tensile fatigue cycling in [+/-45] SiC/SiC ceramic matrix composite. Case 3: Detecting creep life, and failure, in Udimet 520 nickel-based superalloy. Case 4: Detecting surface layer formation in T-650-35/PMR-15 polymer matrix composite panels due to thermal aging. Case 5: Detecting spin test degradation in PMC flywheels. Among these cases a wide range of materials and geometries are studied.
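
    As a rough illustration of two of these parameters, the sketch below computes a spectral centroid f_c and a simple exponential-decay estimate from a digitized waveform, including the DC-zeroing step mentioned in the report. The exact NASA Glenn definitions of the UD rate and the skew factor may differ, and the envelope window length and threshold used here are arbitrary assumptions.

    ```python
    import numpy as np

    def au_parameters(waveform, dt):
        """Spectral centroid f_c and a crude ultrasonic-decay estimate for one
        acousto-ultrasonic record sampled at interval dt (seconds)."""
        waveform = waveform - waveform.mean()               # zero the DC component
        spectrum = np.abs(np.fft.rfft(waveform)) ** 2       # power spectrum
        freqs = np.fft.rfftfreq(len(waveform), dt)
        f_c = np.sum(freqs * spectrum) / np.sum(spectrum)   # centroid of the power spectrum

        # Decay estimate: fit log(envelope) versus time over the above-threshold part.
        envelope = np.convolve(np.abs(waveform), np.ones(32) / 32, mode="same")
        t = np.arange(len(waveform)) * dt
        mask = envelope > envelope.max() * 1e-3
        slope, _ = np.polyfit(t[mask], np.log(envelope[mask]), 1)
        ud_rate = -slope                                    # larger value => faster decay
        return f_c, ud_rate
    ```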

  6. Extreme storm surge modelling in the North Sea. The role of the sea state, forcing frequency and spatial forcing resolution

    NASA Astrophysics Data System (ADS)

    Ridder, Nina; de Vries, Hylke; Drijfhout, Sybren; van den Brink, Henk; van Meijgaard, Erik; de Vries, Hans

    2018-02-01

    This study shows that storm surge model performance in the North Sea is mostly unaffected by the application of temporal variations of surface drag due to changes in sea state, provided that a suitable constant Charnock parameter is chosen for the sea-state-independent case. Including essential meteorological features on smaller scales and minimising interpolation errors by increasing forcing data resolution are shown to be more important for the improvement of model performance, particularly at the high tail of the probability distribution. This is found in a modelling study using WAQUA/DCSMv5 by evaluating the influence of a realistic air-sea momentum transfer parameterization and comparing it to the influence of changes in the spatial and temporal resolution of the applied forcing fields, in an effort to support the improvement of impact and climate analysis studies. Particular attention is given to the representation of extreme water levels over the past decades, based on the example of the Netherlands. For this, WAQUA/DCSMv5 is forced with ERA-Interim reanalysis data. Model results are obtained from a set of different forcing fields, which either (i) include a wave-state-dependent Charnock parameter or (ii) apply a constant Charnock parameter (α_Ch = 0.032) tuned for young sea states in the North Sea, but differ in their spatial and/or temporal resolution. Increasing the forcing field resolution from roughly 79 to 12 km through dynamical downscaling can reduce the modelled low bias, depending on the coastal station, by up to 0.25 m for the modelled extreme water levels with a 1-year return period and between 0.1 m and 0.5 m for extreme surge heights.
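
    For reference, the constant-Charnock case mentioned above corresponds to the relation z0 = α_Ch u*²/g; the sketch below evaluates that roughness length and the implied neutral 10 m drag coefficient. A wave-state-dependent parameterization would instead make α_Ch a function of the sea state (e.g. wave age); the numbers here are illustrative only.

    ```python
    import numpy as np

    def charnock_roughness(u_star, alpha_ch=0.032, g=9.81):
        """Aerodynamic roughness length from the Charnock relation z0 = a * u*^2 / g."""
        return alpha_ch * u_star**2 / g

    def neutral_drag_coefficient(u_star, z=10.0, kappa=0.4):
        """Neutral drag coefficient at height z implied by that roughness length."""
        z0 = charnock_roughness(u_star)
        return (kappa / np.log(z / z0)) ** 2
    ```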

  7. Integrated design course of applied optics focusing on operating and maintaining abilities

    NASA Astrophysics Data System (ADS)

    Xu, Zhongjie; Ning, Yu; Jiang, Tian; Cheng, Xiangai

    2017-08-01

    The abilities to operate and maintain optical instruments are crucial in modern society. Besides the basic knowledge of optics, the optics courses at the National University of Defense Technology also focus on training in handling typical optical equipment. As the link between the classroom courses on applied optics and the field trips, the integrated design course of applied optics aims to give the students a better understanding of several commonly used optical instruments, such as the hand-held telescope and the periscope. The basic concepts of optical system design are also emphasized. The course is arranged immediately after the classroom course on applied optics and is composed of experimental and design tasks. The experimental tasks include the measurement of aberrations and major parameters of a primitive telescope, while in the design part the students are asked to design a Keplerian telescope. The whole course gives a deeper understanding of the concepts, assembly, and operation of telescopes. The students are also encouraged to extend their interests to other typical optical instruments.

  8. Hubble Space Telescope: Faint object camera instrument handbook. Version 2.0

    NASA Technical Reports Server (NTRS)

    Paresce, Francesco (Editor)

    1990-01-01

    The Faint Object Camera (FOC) is a long focal ratio, photon counting device designed to take high resolution two dimensional images of areas of the sky up to 44 by 44 arcseconds squared in size, with pixel dimensions as small as 0.0007 by 0.0007 arcseconds squared in the 1150 to 6500 A wavelength range. The basic aim of the handbook is to make relevant information about the FOC available to a wide range of astronomers, many of whom may wish to apply for HST observing time. The FOC, as presently configured, is briefly described, and some basic performance parameters are summarized. Also included are detailed performance parameters and instructions on how to derive approximate FOC exposure times for the proposed targets.

  9. A new statistical method for characterizing the atmospheres of extrasolar planets

    NASA Astrophysics Data System (ADS)

    Henderson, Cassandra S.; Skemer, Andrew J.; Morley, Caroline V.; Fortney, Jonathan J.

    2017-10-01

    By detecting light from extrasolar planets, we can measure their compositions and bulk physical properties. The technologies used to make these measurements are still in their infancy, and a lack of self-consistency suggests that previous observations have underestimated their systematic errors. We demonstrate a statistical method, newly applied to exoplanet characterization, which uses a Bayesian formalism to account for underestimated error bars. We use this method to compare photometry of a substellar companion, GJ 758b, with custom atmospheric models. Our method produces a probability distribution of atmospheric model parameters, including temperature, gravity, cloud model (f_sed) and chemical abundance, for GJ 758b. This distribution is less sensitive to highly variant data and appropriately reflects a greater uncertainty on parameter fits.
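
    One common Bayesian formalism for underestimated error bars, sketched below, inflates the quoted photometric variances with an extra term that is sampled alongside the atmospheric parameters; this is an illustration of the general idea, not necessarily the exact likelihood used by the authors.

    ```python
    import numpy as np

    def log_likelihood(model_flux, obs_flux, obs_err, ln_b):
        """Gaussian log-likelihood in which the quoted uncertainties are inflated by an
        additional term b (sampled with the model parameters), so that underestimated
        error bars are absorbed statistically rather than biasing the fit."""
        var = obs_err**2 + np.exp(2.0 * ln_b)       # inflated variance
        resid = obs_flux - model_flux
        return -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))
    ```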

  10. Flight test maneuvers for closed loop lateral-directional modeling of the F-18 High Alpha Research Vehicle (HARV) using forebody strakes

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.

    1996-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for closed loop parameter identification purposes, specifically for lateral linear model parameter estimation at 30, 45, and 60 degrees angle of attack, using the Actuated Nose Strakes for Enhanced Rolling (ANSER) control law in Strake (S) mode and Strake/Thrust Vectoring (STV) mode. Each maneuver is to be realized by applying square wave inputs to specific pilot station controls using the On-Board Excitation System (OBES). Maneuver descriptions and complete specification of the time/amplitude points defining each input are included, along with plots of the input time histories.

  11. Performance tests with a 4.75 inch bore tapered-roller bearings at high speeds

    NASA Technical Reports Server (NTRS)

    Signer, H. R.; Pinel, S. I.

    1977-01-01

    The tapered-roller bearings were tested at speeds up to 15,000 rpm, which results in a cone-rib tangential velocity of 130 m/sec (25,500 ft/min). Lubrication was applied either by jets or directly to the cone-rib, augmented with jets. Additional test parameters included thrust loads to 53,400 N (12,000 lb), radial loads to 26,700 N (6,000 lb), lubricant flow rates from 1.9 x 10^-3 to 15.1 x 10^-3 cubic meters/min (0.5 to 4.0 gpm), and lubricant inlet temperatures of 350 K and 364 K (170 F and 195 F). Temperature distribution, separator speed, and drive-motor power demand were determined as functions of these test parameters.

  12. Accuracy of gravitational physics tests using ranges to the inner planets

    NASA Technical Reports Server (NTRS)

    Ashby, N.; Bender, P.

    1981-01-01

    A number of different types of deviations from Kepler's laws for planetary orbits can occur in non-Newtonian metric gravitational theories. These include secular changes in all of the orbital elements and in the mean motion, plus additional periodic perturbations in the coordinates. The first-order corrections to the Keplerian motion of a single planet around the Sun due to the parameterized post-Newtonian (PPN) theory parameters were calculated, as well as the corrections due to the solar quadrupole moment and a possible secular change in the gravitational constant. The results were applied to the case of proposed high-accuracy ranging experiments from the Earth to a Mercury-orbiting spacecraft in order to see how well the various parameters can be determined.

  13. BCS superconductors: The out-of-equilibrium response to a laser pulse

    NASA Astrophysics Data System (ADS)

    Avella, Adolfo

    2018-05-01

    The dynamics of a 2D d-wave BCS superconductor driven out-of-equilibrium by a perpendicularly-impinging polarized laser pulse is analyzed on varying the laser pulse characteristics. The observed effects include: oscillations both in the amplitude and in the phase of the superconducting order parameter, suppression of the superconductivity, but also its enhancement with a strong dependence on all varying parameters and, in particular, on the polarization in plane of the applied vector potential and on the value of its frequency. This study opens up the possibility to distinguish very clearly the behavior of the nodal and anti-nodal non-thermal excitations and to tackle some of the puzzling results of the current experimental scenario in the field.

  14. Atomistic Modeling of Surface and Bulk Properties of Cu, Pd and the Cu-Pd System

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Abel, Phillip; Mosca, Hugo O.; Gray, Hugh R. (Technical Monitor)

    2002-01-01

    The BFS (Bozzolo-Ferrante-Smith) method for alloys is applied to the study of the Cu-Pd system. A variety of issues are analyzed and discussed, including the properties of pure Cu or Pd crystals (surface energies, surface relaxations), Pd/Cu and Cu/Pd surface alloys, segregation of Pd (or Cu) in Cu (or Pd), concentration dependence of the lattice parameter of the high temperature fcc CuPd solid solution, the formation and properties of low temperature ordered phases, and order-disorder transition temperatures. Emphasis is made on the ability of the method to describe these properties on the basis of a minimum set of BFS universal parameters that uniquely characterize the Cu-Pd system.

  15. Predictive simulations and optimization of nanowire field-effect PSA sensors including screening

    NASA Astrophysics Data System (ADS)

    Baumgartner, Stefan; Heitzinger, Clemens; Vacic, Aleksandar; Reed, Mark A.

    2013-06-01

    We apply our self-consistent PDE model for the electrical response of field-effect sensors to the 3D simulation of nanowire PSA (prostate-specific antigen) sensors. The charge concentration in the biofunctionalized boundary layer at the semiconductor-electrolyte interface is calculated using the propka algorithm, and the screening of the biomolecules by the free ions in the liquid is modeled by a sensitivity factor. This comprehensive approach yields excellent agreement with experimental current-voltage characteristics without any fitting parameters. Having verified the numerical model in this manner, we study the sensitivity of nanowire PSA sensors by changing device parameters, making it possible to optimize the devices and revealing the attributes of the optimal field-effect sensor.

  16. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    PubMed Central

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
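
    The gist of the procedure, for the Gaussian-response LASSO, can be sketched as follows: permute the response, record the smallest penalty that selects no variables for the permuted data (the usual lambda_max of the LASSO path for standardized predictors), and take a high quantile of those values across permutations. The quantile level, scaling conventions and number of permutations below are assumptions, not the authors' exact recipe.

    ```python
    import numpy as np

    def permutation_lambda(X, y, n_perm=100, quantile=0.95, seed=None):
        """Choose the LASSO penalty as a high quantile of the smallest penalty that
        zeroes out all coefficients when the response is randomly permuted."""
        rng = np.random.default_rng(seed)
        n = len(y)
        lam_null = []
        for _ in range(n_perm):
            y_perm = rng.permutation(y)
            # For standardized X, the smallest lambda selecting nothing is
            # max_j |x_j' y_perm| / n (the lambda_max of the LASSO path).
            lam_null.append(np.max(np.abs(X.T @ y_perm)) / n)
        return np.quantile(lam_null, quantile)
    ```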

  17. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  18. Visual Basic, Excel-based fish population modeling tool - The pallid sturgeon example

    USGS Publications Warehouse

    Moran, Edward H.; Wildhaber, Mark L.; Green, Nicholas S.; Albers, Janice L.

    2016-02-10

    The model presented in this report is a spreadsheet-based model using Visual Basic for Applications within Microsoft Excel (http://dx.doi.org/10.5066/F7057D0Z) prepared in cooperation with the U.S. Army Corps of Engineers and U.S. Fish and Wildlife Service. It uses the same model structure and, initially, parameters as used by Wildhaber and others (2015) for pallid sturgeon. The difference between the model structure used for this report and that used by Wildhaber and others (2015) is that variance is not partitioned. For the model of this report, all variance is applied at the iteration and time-step levels of the model. Wildhaber and others (2015) partition variance into parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level and temporal variance (uncertainty caused by random environmental fluctuations with time) applied at the time-step level. They included implicit individual variance (uncertainty caused by differences between individuals) within the time-step level. The interface developed for the model of this report is designed to allow the user the flexibility to change population model structure and parameter values and uncertainty separately for every component of the model. This flexibility makes the modeling tool potentially applicable to any fish species; however, the flexibility inherent in this modeling tool makes it possible for the user to obtain spurious outputs. The value and reliability of the model outputs are only as good as the model inputs. Using this modeling tool with improper or inaccurate parameter values, or for species for which the structure of the model is inappropriate, could lead to untenable management decisions. By facilitating fish population modeling, this modeling tool allows the user to evaluate a range of management options and implications. The goal of this modeling tool is to be a user-friendly modeling tool for developing fish population models useful to natural resource managers to inform their decision-making processes; however, as with all population models, caution is needed, and a full understanding of the limitations of a model and the veracity of user-supplied parameters should always be considered when using such model output in the management of any species.

  19. Experimental Design of a UCAV-Based High-Energy Laser Weapon

    DTIC Science & Technology

    2016-12-01

    …propagation. The Design of Experiments (DOE) methodology is then applied to determine the significance of the UCAV-HEL design parameters and their effect on the…

  20. Corrosion protection of galvanized steels by silane-based treatments

    NASA Astrophysics Data System (ADS)

    Yuan, Wei

    The possibility of using silane coupling agents as replacements for chromate treatments was investigated on galvanized steel substrates. In order to understand the influence of deposition parameters on silane film formation, pure zinc substrates were first used as a model for galvanized steel to study the interaction between silane coupling agents and zinc surfaces. The silane films formed on pure zinc substrates from aqueous solutions were characterized by ellipsometry, contact angle measurements, reflection absorption infrared spectroscopy, x-ray photoelectron spectroscopy, and atomic force microscopy. The deposition parameters studied include solution concentration, solution dipping time and pH value of the applied solution. It appears that silane film formation involved a true equilibrium of hydrolysis and condensation reactions in aqueous solutions. It has been found that the silane film thickness obtained depends primarily on the solution concentration and is almost independent of the solution dipping time. The molecular orientation of applied silane films is determined by the pH value of the applied silane solutions and the isoelectric point of the metal substrate. The deposition window in terms of pH value for zinc substrates is between 6.0 and 9.0. The total surface energy of the silane-coated pure zinc substrates decreases with film aging time; the decrease rate, however, is determined by the nature of the silane coupling agents. Selected silane coupling agents were applied as prepaint or passivation treatments onto galvanized steel substrates. The corrosion protection provided by these silane-based treatments was evaluated by salt spray test, cyclic corrosion test, electrochemical impedance spectroscopy, and stack test. The results showed that silane coupling agents can possibly be used to replace chromates for corrosion control of galvanized steel substrates. Silane coatings provided by these silane treatments serve mainly as physical barriers. Factors that affect the performance of a silane coupling agent in the application of corrosion control include chemical reactivity, hydrophobic character, siloxane crosslinker network, and film thickness. The good protection afforded by the silane treatments is a synergistic effect of all these factors.

  1. Joint time/frequency-domain inversion of reflection data for seabed geoacoustic profiles and uncertainties.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2008-03-01

    This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.

  2. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
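
    For the simple averaging branch, a weighted ensemble mean and spread can be formed as sketched below; how the aggregate misfit score is converted into non-negative weights (e.g. via its inverse or an exponential transform) is a user choice and is not specified here.

    ```python
    import numpy as np

    def weighted_ensemble_stats(slr_curves, weights):
        """Weighted ensemble mean and spread of equivalent sea-level-rise series.
        slr_curves: array of shape (n_runs, n_times); weights: one non-negative
        weight per run, e.g. derived from the aggregate model-data score."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        mean = w @ slr_curves
        spread = np.sqrt(w @ (slr_curves - mean) ** 2)   # weighted std about the mean
        return mean, spread
    ```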

  3. Maximum life spiral bevel reduction design

    NASA Technical Reports Server (NTRS)

    Savage, M.; Prasanna, M. G.; Coe, H. H.

    1992-01-01

    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  4. Parameter-Space Survey of Linear G-mode and Interchange in Extended Magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, E. C.; Sovinec, C. R.

    The extended magnetohydrodynamic stability of interchange modes is studied in two configurations. In slab geometry, a local dispersion relation for the gravitational interchange mode (g-mode) with three different extensions of the MHD model [P. Zhu, et al., Phys. Rev. Lett. 101, 085005 (2008)] is analyzed. Our results delineate where drifts stabilize the g-mode with gyroviscosity alone and with a two-fluid Ohm’s law alone. Including the two-fluid Ohm’s law produces an ion drift wave that interacts with the g-mode. This interaction then gives rise to a second instability at finite k_y. A second instability is also observed in numerical extended MHD computations of linear interchange in cylindrical screw-pinch equilibria, the second configuration. Particularly with incomplete models, this mode limits the regions of stability for physically realistic conditions. But applying a consistent two-temperature extended MHD model that includes the diamagnetic heat flux density ($\vec{q}_*$) makes the onset of the second mode occur at larger Hall parameter. For conditions relevant to the SSPX experiment [E.B. Hooper, Plasma Phys. Controlled Fusion 54, 113001 (2012)], significant stabilization is observed for Suydam parameters as large as unity (D_s ≲ 1).

  5. Parameter-Space Survey of Linear G-mode and Interchange in Extended Magnetohydrodynamics

    DOE PAGES

    Howell, E. C.; Sovinec, C. R.

    2017-09-11

    The extended magnetohydrodynamic stability of interchange modes is studied in two configurations. In slab geometry, a local dispersion relation for the gravitational interchange mode (g-mode) with three different extensions of the MHD model [P. Zhu, et al., Phys. Rev. Lett. 101, 085005 (2008)] is analyzed. Our results delineate where drifts stabilize the g-mode with gyroviscosity alone and with a two-fluid Ohm’s law alone. Including the two-fluid Ohm’s law produces an ion drift wave that interacts with the g-mode. This interaction then gives rise to a second instability at finite k_y. A second instability is also observed in numerical extended MHD computations of linear interchange in cylindrical screw-pinch equilibria, the second configuration. Particularly with incomplete models, this mode limits the regions of stability for physically realistic conditions. But applying a consistent two-temperature extended MHD model that includes the diamagnetic heat flux density ($\vec{q}_*$) makes the onset of the second mode occur at larger Hall parameter. For conditions relevant to the SSPX experiment [E.B. Hooper, Plasma Phys. Controlled Fusion 54, 113001 (2012)], significant stabilization is observed for Suydam parameters as large as unity (D_s ≲ 1).

  6. Safety assessment of a shallow foundation using the random finite element method

    NASA Astrophysics Data System (ADS)

    Zaskórski, Łukasz; Puła, Wojciech

    2015-04-01

    The complex structure of soil and its random character are the reasons why soil modelling is a cumbersome task. Heterogeneity of soil has to be considered even within a homogeneous layer of soil. Therefore, the estimation of shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. The applicable standards (Eurocode 7) do not present any explicit method for evaluating characteristic values of soil parameters; only general guidelines on how these values should be estimated can be found. Hence many approaches to the assessment of characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Several approaches to the estimation of characteristic values of soil properties were therefore compared by evaluating the values of the reliability index β achievable with each of them. The method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for the influence of fluctuation scales, and the method included in Eurocode 7 were examined. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of the foundation in conjunction with the design approaches (DA) defined in Eurocode. RFEM was presented by Griffiths and Fenton (1993). It combines the deterministic finite element method, random field theory and Monte Carlo simulations. Random field theory allows the random character of soil parameters to be considered within a homogeneous layer of soil; for this purpose a soil property is considered as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of a given area. RFEM was applied to estimate which theoretical probability distribution best fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The assessed probability distribution was then applied to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence the friction angle and the cohesion were defined as random parameters and characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied in the case of the cohesion. Other properties - Young's modulus, Poisson's ratio and unit weight - were assumed to be deterministic values because they have negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.
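
    For orientation, a reliability index can be obtained from Monte Carlo output as sketched below: estimate the probability of failure as the fraction of realizations whose bearing capacity falls below the design value, then map it through the inverse standard normal CDF, β = −Φ⁻¹(P_f). The paper instead fits a theoretical distribution to the 3000 realizations before computing β; the direct empirical version here is a simplified stand-in.

    ```python
    import numpy as np
    from scipy.stats import norm

    def reliability_index(bearing_capacity_samples, design_load):
        """Reliability index beta from Monte Carlo realizations of bearing capacity
        (e.g. RFEM output) and a deterministic design load."""
        q = np.asarray(bearing_capacity_samples, dtype=float)
        p_f = np.mean(q < design_load)                       # empirical failure probability
        p_f = np.clip(p_f, 1.0 / (10 * len(q)), 1 - 1e-12)   # avoid an infinite beta
        return -norm.ppf(p_f)
    ```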

  7. Bayesian parameter estimation for nonlinear modelling of biological pathways.

    PubMed

    Ghasemi, Omid; Lindsey, Merry L; Yang, Tianyi; Nguyen, Nguyen; Huang, Yufei; Jin, Yu-Fang

    2011-01-01

    The availability of temporal measurements on biological experiments has significantly promoted research areas in systems biology. To gain insight into the interaction and regulation of biological systems, mathematical frameworks such as ordinary differential equations have been widely applied to model biological pathways and interpret the temporal data. Hill equations are the preferred formats to represent the reaction rate in differential equation frameworks, due to their simple structures and their capabilities for easy fitting to saturated experimental measurements. However, Hill equations are highly nonlinearly parameterized functions, and parameters in these functions cannot be measured easily. Additionally, because of its high nonlinearity, adaptive parameter estimation algorithms developed for linear parameterized differential equations cannot be applied. Therefore, parameter estimation in nonlinearly parameterized differential equation models for biological pathways is both challenging and rewarding. In this study, we propose a Bayesian parameter estimation algorithm to estimate parameters in nonlinear mathematical models for biological pathways using time series data. We used the Runge-Kutta method to transform differential equations to difference equations assuming a known structure of the differential equations. This transformation allowed us to generate predictions dependent on previous states and to apply a Bayesian approach, namely, the Markov chain Monte Carlo (MCMC) method. We applied this approach to the biological pathways involved in the left ventricle (LV) response to myocardial infarction (MI) and verified our algorithm by estimating two parameters in a Hill equation embedded in the nonlinear model. We further evaluated our estimation performance with different parameter settings and signal to noise ratios. Our results demonstrated the effectiveness of the algorithm for both linearly and nonlinearly parameterized dynamic systems. Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models for biological pathways. This method can be further extended to high order systems and thus provides a useful tool to analyze biological dynamics and extract information using temporal data.
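
    A stripped-down illustration of the sampling step is given below: random-walk Metropolis applied to the two parameters of a standalone Hill curve with Gaussian observation noise. In the study itself the Hill term is embedded in an ODE system discretized by the Runge-Kutta method, so this sketch shows only the MCMC mechanics; all tuning values (step size, noise level, priors) are assumptions.

    ```python
    import numpy as np

    def hill(x, k, n, vmax=1.0):
        """Hill equation reaction rate for positive inputs x."""
        return vmax * x**n / (k**n + x**n)

    def metropolis_hill(x, y, sigma=0.05, n_iter=20000, step=0.05, seed=None):
        """Random-walk Metropolis sampling of the Hill parameters (K, n) given noisy
        measurements y at inputs x, assuming Gaussian observation noise sigma."""
        rng = np.random.default_rng(seed)
        theta = np.array([1.0, 1.0])                      # initial (K, n)

        def log_post(t):
            if np.any(t <= 0):
                return -np.inf                            # positivity prior on K and n
            return -0.5 * np.sum((y - hill(x, *t)) ** 2) / sigma**2

        lp = log_post(theta)
        samples = []
        for _ in range(n_iter):
            prop = theta + step * rng.standard_normal(2)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
                theta, lp = prop, lp_prop
            samples.append(theta.copy())
        return np.array(samples)
    ```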

  8. Numerical simulation of hydrogen fluorine overtone chemical lasers

    NASA Astrophysics Data System (ADS)

    Chen, Jinbao; Jiang, Zhongfu; Hua, Weihong; Liu, Zejin; Shu, Baihong

    1998-08-01

    A two-dimensional program was applied to simulate the chemical dynamic process, gas dynamic process and lasing process of a combustion-driven CW HF overtone chemical laser. Some important parameters in the cavity were obtained. The calculated results included the HF molecule concentration in each vibrational energy level while lasing, the averaged pressure and temperature, the zero-power gain coefficient of each spectral line, the laser spectrum, the averaged laser intensity, output power, chemical efficiency and the length of the lasing zone.

  9. O-Ring-Testing Fixture

    NASA Technical Reports Server (NTRS)

    Turner, James E.; Mccluney, D. Scott

    1991-01-01

    Fixture tests O-rings for sealing ability under dynamic conditions after extended periods of compression. Hydraulic cylinder moves plug in housing. Taper of 15 degrees on plug and cavity of housing ensures that gap is created between O-ring under test and wall of cavity. Secondary O-rings above and below test ring maintain pressure applied to test ring. Fixture evaluates effects of variety of parameters, including temperature, pressure, rate of pressurization, rate and magnitude of radial gap movement, and pretest compression time.

  10. Method to Predict Tempering of Steels Under Non-isothermal Conditions

    NASA Astrophysics Data System (ADS)

    Poirier, D. R.; Kohli, A.

    2017-05-01

    A common way of representing the tempering responses of steels is with a "tempering parameter" that includes the effect of temperature and time on hardness after hardening. Such functions, usually in graphical form, are available for many steels and have been applied for isothermal tempering. In this article, we demonstrate that the method can be extended to non-isothermal conditions. Controlled heating experiments were done on three grades in order to verify the method.
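
    A common concrete form of this idea is the Hollomon-Jaffe parameter P = T(C + log10 t); the sketch below extends it to a temperature history by converting each time step into an equivalent time at the current temperature before re-evaluating P. The constant C (a typical value near 20 for time in hours) and the incremental equivalent-time scheme are assumptions and may differ in detail from the method verified in the article.

    ```python
    import numpy as np

    def hollomon_jaffe(T_kelvin, t_hours, C=20.0):
        """Isothermal tempering parameter P = T (C + log10 t)."""
        return T_kelvin * (C + np.log10(t_hours))

    def nonisothermal_parameter(T_profile, dt_hours, C=20.0):
        """Accumulate the tempering parameter over a temperature history T(t):
        each step is converted into an equivalent time at the current temperature
        that reproduces the parameter reached so far, then extended by dt."""
        P = None
        t_eq = 0.0
        for T in T_profile:
            if P is not None:
                t_eq = 10.0 ** (P / T - C)    # equivalent prior time at temperature T
            t_eq += dt_hours
            P = T * (C + np.log10(t_eq))
        return P
    ```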

  11. Systematic Calibration for a Backpacked Spherical Photogrammetry Imaging System

    NASA Astrophysics Data System (ADS)

    Rau, J. Y.; Su, B. W.; Hsiao, K. W.; Jhan, J. P.

    2016-06-01

    A spherical camera can observe the environment with almost 720 degrees' field of view in one shot, which is useful for augmented reality, environment documentation, or mobile mapping applications. This paper aims to develop a spherical photogrammetry imaging system for 3D measurement through a backpacked mobile mapping system (MMS). The equipment used includes a Ladybug-5 spherical camera, a tactical-grade positioning and orientation system (POS), i.e. SPAN-CPT, and an odometer, etc. This research aims to directly apply the photogrammetric space intersection technique for 3D mapping from a spherical image stereo-pair. For this purpose, several systematic calibration procedures are required, including lens distortion calibration, relative orientation calibration, boresight calibration for direct georeferencing, and spherical image calibration. The lens distortion is severe in the Ladybug-5 camera's six original images. For spherical image mosaicking from these six original images, we propose the use of their relative orientation and correct their lens distortion at the same time. However, the constructed spherical image still contains systematic error, which will reduce the 3D measurement accuracy. Later, for direct georeferencing purposes, we need to establish a ground control field for boresight/lever-arm calibration. Then, we can apply the calibrated parameters to obtain the exterior orientation parameters (EOPs) of all spherical images. In the end, the 3D positioning accuracy after space intersection will be evaluated, including EOPs obtained by the structure-from-motion method.

  12. Making and breaking bridges in a Pickering emulsion.

    PubMed

    French, David J; Taylor, Phil; Fowler, Jeff; Clegg, Paul S

    2015-03-01

    Particle bridges form in Pickering emulsions when the oil-water interfacial area generated by an applied shear is greater than that which can be stabilised by the available particles and the particles have a slight preference for the continuous phase. They can subsequently be broken by low shear or by modifying the particle wettability. We have developed a model oil-in-water system for studying particle bridging in Pickering emulsions stabilised by fluorescent Stöber silica. A mixture of dodecane and isopropyl myristate was used as the oil phase. We have used light scattering and microscopy to study the degree to which emulsions are bridged, and how this is affected by parameters including particle volume fraction, particle wettability and shear rate. We have looked for direct evidence of droplets sharing particles using freeze fracture scanning electron microscopy. We have created strongly aggregating Pickering emulsions using our model system. This aggregating state can be accessed by varying several different parameters, including particle wettability and particle volume fraction. Particles with a slight preference for the continuous phase are required for bridging to occur, and the degree of bridging increases with increasing shear rate but decreases with increasing particle volume fraction. Particle bridges can subsequently be removed by applying low shear or by modifying the particle wettability. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Bicellar systems for in vitro percutaneous absorption of diclofenac.

    PubMed

    Rubio, L; Alonso, C; Rodríguez, G; Barbosa-Barros, L; Coderch, L; De la Maza, A; Parra, J L; López, O

    2010-02-15

    This work evaluates the effect of different bicellar systems on the percutaneous absorption of diclofenac diethylamine (DDEA) using two different approaches. In the first case, the drug was included in bicellar systems, which were then applied to the skin; in the second case, the skin was pretreated with bicellar systems without drug prior to the application of a DDEA aqueous solution. The characterization of the bicellar systems showed that the particle size decreased when DDEA was encapsulated. Percutaneous absorption studies demonstrated a lower penetration of DDEA when the drug was included in bicellar systems than when the drug was applied in an aqueous solution. This effect was possibly due to a certain rigidity of the bicellar systems caused by the incorporation of DDEA. The absorption of DDEA on skin pretreated with bicelles increased compared to the absorption of DDEA on intact skin. Bicelles without DDEA could cause some disorganization of the SC barrier function, thereby facilitating the percutaneous penetration of DDEA applied subsequently. Thus, depending on their physicochemical parameters and on the application conditions, these systems can either enhance or retard percutaneous absorption, which makes them an interesting strategy for future drug delivery applications. Copyright 2009 Elsevier B.V. All rights reserved.

  14. Application of artificial neural networks to assess pesticide contamination in shallow groundwater

    USGS Publications Warehouse

    Sahoo, G.B.; Ray, C.; Mehnert, E.; Keefer, D.A.

    2006-01-01

    In this study, a feed-forward back-propagation neural network (BPNN) was developed and applied to predict pesticide concentrations in groundwater monitoring wells. Pesticide concentration data are challenging to analyze because they tend to be highly censored. Input data to the neural network included the categorical indices of depth to aquifer material, pesticide leaching class, aquifer sensitivity to pesticide contamination, time (month) of sample collection, well depth, depth to water from land surface, and additional travel distance in the saturated zone (i.e., distance from land surface to midpoint of well screen). The output of the neural network was the total pesticide concentration detected in the well. The model predictions showed good agreement with observed data in terms of correlation coefficient (R = 0.87) and pesticide detection efficiency (E = 89%), as well as a good match between the observed and predicted "class" groups. The relative importance of input parameters to pesticide occurrence in groundwater was examined in terms of R, E, mean error (ME), root mean square error (RMSE), and pesticide occurrence "class" groups by eliminating some key input parameters to the model. Well depth and time of sample collection were the most sensitive input parameters for predicting the pesticide contamination potential of a well. This suggests that wells tapping shallow aquifers are more vulnerable to pesticide contamination than wells tapping deeper aquifers. Pesticide occurrences during post-application months (June through October) were found to be 2.5 to 3 times higher than pesticide occurrences during other months (November through April). The BPNN was used to rank the input parameters with the highest potential to contaminate groundwater, including two original and five ancillary parameters. The two original parameters are depth to aquifer material and pesticide leaching class. When these two parameters were the only input parameters for the BPNN, they were not able to predict contamination potential. However, when they were used with other parameters, the predictive performance efficiency of the BPNN in terms of R, E, ME, RMSE, and pesticide occurrence "class" groups increased. Ancillary data include data collected during the study such as well depth and time of sample collection. The BPNN indicated that the ancillary data had more predictive power than the original data. The BPNN results will help researchers identify parameters to improve maps of aquifer sensitivity to pesticide contamination. © 2006 Elsevier B.V. All rights reserved.
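
    As a rough illustration of the kind of feed-forward network described here, the sketch below trains a small multilayer perceptron regressor on synthetic stand-ins for the seven inputs named in the abstract. The feature ranges, the synthetic target, and the use of scikit-learn's MLPRegressor are assumptions for illustration; they are not the study's data or code.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the seven inputs named in the abstract
n = 500
X = np.column_stack([
    rng.integers(1, 5, n),      # depth-to-aquifer-material class
    rng.integers(1, 5, n),      # pesticide leaching class
    rng.integers(1, 6, n),      # aquifer sensitivity class
    rng.integers(1, 13, n),     # month of sample collection
    rng.uniform(5, 60, n),      # well depth (m)
    rng.uniform(1, 20, n),      # depth to water (m)
    rng.uniform(0, 40, n),      # additional saturated travel distance (m)
])
# Hypothetical target: shallow wells and post-application months score higher
y = 0.5 / (1 + X[:, 4]) + 0.02 * np.isin(X[:, 3], [6, 7, 8, 9, 10]) \
    + rng.normal(0, 0.005, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("correlation R on held-out data:",
      np.corrcoef(model.predict(X_te), y_te)[0, 1])
```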

  15. Computation of physiological human vocal fold parameters by mathematical optimization of a biomechanical model

    PubMed Central

    Yang, Anxiong; Stingl, Michael; Berry, David A.; Lohscheller, Jörg; Voigt, Daniel; Eysholdt, Ulrich; Döllinger, Michael

    2011-01-01

    With the use of an endoscopic, high-speed camera, vocal fold dynamics may be observed clinically during phonation. However, observation and subjective judgment alone may be insufficient for clinical diagnosis and documentation of improved vocal function, especially when the laryngeal disease lacks any clear morphological presentation. In this study, biomechanical parameters of the vocal folds are computed by adjusting the corresponding parameters of a three-dimensional model until the dynamics of both systems are similar. First, a mathematical optimization method is presented. Next, model parameters (such as pressure, tension and masses) are adjusted to reproduce vocal fold dynamics, and the deduced parameters are physiologically interpreted. Various combinations of global and local optimization techniques are attempted. Evaluation of the optimization procedure is performed using 50 synthetically generated data sets. The results show sufficient reliability, including 0.07 normalized error, 96% correlation, and 91% accuracy. The technique is also demonstrated on data from human hemilarynx experiments, in which a low normalized error (0.16) and high correlation (84%) values were achieved. In the future, this technique may be applied to clinical high-speed images, yielding objective measures with which to document improved vocal function of patients with voice disorders. PMID:21877808

  16. An insight on correlations between kinematic rupture parameters from dynamic ruptures on rough faults

    NASA Astrophysics Data System (ADS)

    Thingbijam, Kiran Kumar; Galis, Martin; Vyas, Jagdish; Mai, P. Martin

    2017-04-01

    We examine the spatial interdependence between kinematic parameters of earthquake rupture, which include slip, rise-time (total duration of slip), acceleration time (time-to-peak slip velocity), peak slip velocity, and rupture velocity. These parameters were inferred from dynamic rupture models obtained by simulating spontaneous rupture on faults with varying degrees of surface roughness. We observe that the correlations between these parameters are better described by non-linear correlations (that is, on a logarithm-logarithm scale) than by linear correlations. Slip and rise-time are positively correlated, while these two parameters do not correlate with acceleration time, peak slip velocity, and rupture velocity. On the other hand, peak slip velocity correlates positively with rupture velocity but negatively with acceleration time. Acceleration time correlates negatively with rupture velocity. However, the observed correlations could be due to the weak heterogeneity of the slip distributions given by the dynamic models. Therefore, the observed correlations may apply only to those parts of the rupture plane with weak slip heterogeneity if earthquake ruptures involve highly heterogeneous slip distributions. Our findings will help to improve pseudo-dynamic rupture generators for efficient broadband ground-motion simulations for seismic hazard studies.

  17. Heat and mass transfer in MHD free convection from a moving permeable vertical surface by a perturbation technique

    NASA Astrophysics Data System (ADS)

    Abdelkhalek, M. M.

    2009-05-01

    Numerical results are presented for the heat and mass transfer effects on hydromagnetic flow over a moving permeable vertical surface. An analysis is performed to study the momentum, heat and mass transfer characteristics of MHD natural convection flow over a moving permeable surface. The surface is maintained at linear temperature and concentration variations. The non-linear coupled boundary layer equations were transformed and the resulting ordinary differential equations were solved by a perturbation technique [Aziz A, Na TY. Perturbation methods in heat transfer. Berlin: Springer-Verlag; 1984. p. 1-184; Kennet Cramer R, Shih-I Pai. Magneto fluid dynamics for engineers and applied physicists 1973;166-7]. The solution is found to depend on several governing parameters, including the magnetic field strength parameter, Prandtl number, Schmidt number, buoyancy ratio and suction/blowing parameter. A parametric study of all the governing parameters is carried out, and representative results are illustrated to reveal typical tendencies of the solutions. Numerical results for the dimensionless velocity profiles, the temperature profiles, the concentration profiles, the local friction coefficient and the local Nusselt number are presented for various combinations of parameters.

  18. PMMA/PS coaxial electrospinning: a statistical analysis on processing parameters

    NASA Astrophysics Data System (ADS)

    Rahmani, Shahrzad; Arefazar, Ahmad; Latifi, Masoud

    2017-08-01

    Coaxial electrospinning, as a versatile method for producing core-shell fibers, is known to be very sensitive to two classes of influential factors: material and processing parameters. Although coaxial electrospinning has been the focus of many studies, the effects of processing parameters on the outcomes of this method have not yet been well investigated. A good knowledge of the impacts of processing parameters and their interactions on coaxial electrospinning can make it possible to better control and optimize this process. Hence, in this study, the statistical technique of response surface method (RSM) was applied, using a design of experiments on four processing factors: voltage, distance, and core and shell flow rates. Transmission electron microscopy (TEM), scanning electron microscopy (SEM), oil-immersion microscopy and fluorescence microscopy were used to characterize fiber morphology. The core and shell diameters of the fibers were measured and the effects of all factors and their interactions were discussed. Two polynomial models with acceptable R-squares were proposed to describe the core and shell diameters as functions of the processing parameters. Voltage and distance were recognized as the most influential factors for shell diameter, while core diameter was influenced mainly by the core and shell flow rates, in addition to the voltage.
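
    The following sketch shows, in outline, what fitting a second-order response-surface model to such data can look like: a full quadratic polynomial in the four processing factors fitted by least squares. The factor ranges and the synthetic shell-diameter response are invented for illustration and do not reproduce the paper's models.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Factor settings: voltage (kV), distance (cm), core and shell flow rates (mL/h)
X = np.column_stack([
    rng.uniform(10, 25, 60),
    rng.uniform(10, 20, 60),
    rng.uniform(0.2, 1.0, 60),
    rng.uniform(1.0, 3.0, 60),
])
# Hypothetical shell-diameter response (nm) with curvature and one interaction
y = 900 - 25 * X[:, 0] + 0.6 * X[:, 0] ** 2 + 8 * X[:, 1] + 40 * X[:, 3] \
    - 1.5 * X[:, 0] * X[:, 3] + rng.normal(0, 10, 60)

# Full quadratic (second-order) response-surface model, as used in RSM
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression())
rsm.fit(X, y)
print("R^2 of the quadratic response-surface model:", rsm.score(X, y))
```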

  19. Estimation of parameter uncertainty for an activated sludge model using Bayesian inference: a comparison with the frequentist method.

    PubMed

    Zonta, Zivko J; Flotats, Xavier; Magrí, Albert

    2014-08-01

    The procedure commonly used for the assessment of the parameters included in activated sludge models (ASMs) relies on the estimation of their optimal value within a confidence region (i.e. frequentist inference). Once optimal values are estimated, parameter uncertainty is computed through the covariance matrix. However, alternative approaches based on the consideration of the model parameters as probability distributions (i.e. Bayesian inference), may be of interest. The aim of this work is to apply (and compare) both Bayesian and frequentist inference methods when assessing uncertainty for an ASM-type model, which considers intracellular storage and biomass growth, simultaneously. Practical identifiability was addressed exclusively considering respirometric profiles based on the oxygen uptake rate and with the aid of probabilistic global sensitivity analysis. Parameter uncertainty was thus estimated according to both the Bayesian and frequentist inferential procedures. Results were compared in order to evidence the strengths and weaknesses of both approaches. Since it was demonstrated that Bayesian inference could be reduced to a frequentist approach under particular hypotheses, the former can be considered as a more generalist methodology. Hence, the use of Bayesian inference is encouraged for tackling inferential issues in ASM environments.
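
    A minimal sketch of the two inferential routes being compared is given below, on a deliberately simple stand-in model (a single-exponential oxygen-uptake-rate profile rather than an ASM): curve_fit supplies optimal values with covariance-based uncertainty (frequentist), while a short random-walk Metropolis loop samples the posterior under flat priors (Bayesian). The model, priors, noise level and step sizes are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def our_model(t, k, a):
    """Toy oxygen-uptake-rate profile: OUR(t) = a * exp(-k t)."""
    return a * np.exp(-k * t)

t = np.linspace(0, 5, 60)
sigma = 0.3
y = our_model(t, 0.8, 10.0) + rng.normal(0, sigma, t.size)

# Frequentist: optimal values plus covariance-based uncertainty
popt, pcov = curve_fit(our_model, t, y, p0=[1.0, 8.0])
print("curve_fit estimates:", popt, "std:", np.sqrt(np.diag(pcov)))

# Bayesian: random-walk Metropolis sampling of the posterior (flat priors)
def log_post(theta):
    k, a = theta
    if k <= 0 or a <= 0:
        return -np.inf
    resid = y - our_model(t, k, a)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([1.0, 8.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.03, 0.15])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])          # drop burn-in
print("posterior mean:", samples.mean(axis=0), "std:", samples.std(axis=0))
```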

  20. Generalized Scaling and the Master Variable for Brownian Magnetic Nanoparticle Dynamics

    PubMed Central

    Reeves, Daniel B.; Shi, Yipeng; Weaver, John B.

    2016-01-01

    Understanding the dynamics of magnetic particles can help to advance several biomedical nanotechnologies. Previously, scaling relationships have been used in magnetic spectroscopy of nanoparticle Brownian motion (MSB) to measure biologically relevant properties (e.g., temperature, viscosity, bound state) surrounding nanoparticles in vivo. Those scaling relationships can be generalized with the introduction of a master variable found from non-dimensionalizing the dynamical Langevin equation. The variable encapsulates the dynamical variables of the surroundings and additionally includes the particles’ size distribution and moment and the applied field’s amplitude and frequency. From an applied perspective, the master variable allows tuning to an optimal MSB biosensing sensitivity range by manipulating both frequency and field amplitude. Calculation of magnetization harmonics in an oscillating applied field is also possible with an approximate closed-form solution in terms of the master variable and a single free parameter. PMID:26959493

  1. Estimation of genetic parameters and response to selection for a continuous trait subject to culling before testing.

    PubMed

    Arnason, T; Albertsdóttir, E; Fikse, W F; Eriksson, S; Sigurdsson, A

    2012-02-01

    The consequences of assuming a zero environmental covariance between a binary trait 'test-status' and a continuous trait on the estimates of genetic parameters by restricted maximum likelihood and Gibbs sampling and on response from genetic selection when the true environmental covariance deviates from zero were studied. Data were simulated for two traits (one that culling was based on and a continuous trait) using the following true parameters, on the underlying scale: h² = 0.4; r(A) = 0.5; r(E) = 0.5, 0.0 or -0.5. The selection on the continuous trait was applied to five subsequent generations where 25 sires and 500 dams produced 1500 offspring per generation. Mass selection was applied in the analysis of the effect on estimation of genetic parameters. Estimated breeding values were used in the study of the effect of genetic selection on response and accuracy. The culling frequency was either 0.5 or 0.8 within each generation. Each of 10 replicates included 7500 records on 'test-status' and 9600 animals in the pedigree file. Results from bivariate analysis showed unbiased estimates of variance components and genetic parameters when true r(E) = 0.0. For r(E) = 0.5, variance components (13-19% bias) and especially (50-80%) were underestimated for the continuous trait, while heritability estimates were unbiased. For r(E) = -0.5, heritability estimates of test-status were unbiased, while genetic variance and heritability of the continuous trait together with were overestimated (25-50%). The bias was larger for the higher culling frequency. Culling always reduced genetic progress from selection, but the genetic progress was found to be robust to the use of wrong parameter values of the true environmental correlation between test-status and the continuous trait. Use of a bivariate linear-linear model reduced bias in genetic evaluations, when data were subject to culling. © 2011 Blackwell Verlag GmbH.

  2. Can intermuscular cleavage planes provide proper transverse screw angle? Comparison of two paraspinal approaches.

    PubMed

    Cheng, Xiaofei; Ni, Bin; Liu, Qi; Chen, Jinshui; Guan, Huapeng

    2013-01-01

    The goal of this study was to determine which paraspinal approach provided a better transverse screw angle (TSA) for each vertebral level in lower lumbar surgery. Axial computed tomography (CT) images of 100 patients, from L3 to S1, were used to measure the angulation parameters, including transverse pedicle angle (TPA) and transverse cleavage plane angle (TCPA) of entry from the two approaches. The difference value between TCPA and TPA, defined as difference angle (DA), was calculated. Statistical differences of DA obtained by the two approaches and the angulation parameters between sexes, and the correlation between each angulation parameter and age or body mass index (BMI) were analyzed. TPA ranged from about 16° at L3 to 30° at S1. TCPA through the Wiltse's and Weaver's approach ranged from about -10° and 25° at L3 to 12° and 32° at S1, respectively. The absolute values of DA through the Weaver's approach were significantly lower than those through the Wiltse's approach at each level. The angulation parameters showed no significant difference with sex and no significant correlation with age or BMI. In the lower lumbar vertebrae (L3-L5) and S1, pedicle screw placement through the Weaver's approach may more easily yield the preferred TSA consistent with TPA than that through the Wiltse's approach. The reference values obtained in this paper may be applied regardless of sex, age or BMI and the descriptive statistical results may be used as references for applying the two paraspinal approaches.

  3. The ecological niche of Dermacentor marginatus in Germany.

    PubMed

    Walter, Melanie; Brugger, Katharina; Rubel, Franz

    2016-06-01

    The ixodid tick Dermacentor marginatus (Sulzer, 1776) is endemic throughout southern Europe in the range of 33-51° N latitude. In Germany, however, D. marginatus was exclusively reported in the Rhine valley and adjacent areas. Its northern distribution limit near Giessen is located at the coordinates 8.32° E/50.65° N. Particularly with regard to the causative agents of rickettsioses, tularemia, and Q fever, the observed locations as well as the potential distribution of the vector D. marginatus in Germany are of special interest. Applying a dataset of 118 georeferenced tick locations, the ecological niche for D. marginatus was calculated. It is described by six climate parameters based on temperature and relative humidity and another six environmental parameters including land cover classes and altitude. The final ecological niche is determined by the frequency distributions of these 12 parameters at the tick locations. Main parameters are the mean annual temperature (frequency distribution characterized by the minimum, median, and maximum of 6.1, 9.9, and 12.2 °C), the mean annual relative humidity (73.7, 76.7, and 80.9 %), as well as the altitude (87, 240, 1108 m). The climate and environmental niche is used to estimate the habitat suitability of D. marginatus in Germany by applying the BIOCLIM model. Finally, the potential spatial distribution of D. marginatus was calculated and mapped by determining an optimal threshold value of the suitability index, i.e., the maximum of sensitivity and specificity (Youden index). The model performance is expressed by AUC = 0.91.

  4. Objective calibration of numerical weather prediction models

    NASA Astrophysics Data System (ADS)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multi-variate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach for implementing the methodology in an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for calibrating an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were initially selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the calibration is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
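
    The core of the objective calibration idea can be sketched as follows: sample the free-parameter space, evaluate a forecast-error score at each sample, fit a quadratic meta-model to those scores, and minimize the cheap meta-model instead of the expensive NWP model. In the sketch the "model run" is replaced by a synthetic error function, and the parameter bounds and sample size are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def forecast_error(p):
    """Stand-in for an expensive NWP skill score over three free
    turbulence parameters (synthetic, for illustration only)."""
    opt = np.array([0.3, 1.2, 0.7])
    return float(np.sum((p - opt) ** 2) + 0.05 * rng.normal())

# 1) Sample the parameter space and "run the model" at each sample point
bounds = np.array([[0.0, 1.0], [0.5, 2.0], [0.1, 1.5]])
P = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
E = np.array([forecast_error(p) for p in P])

# 2) Fit the quadratic meta-model (MM) to the sampled error scores
mm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
mm.fit(P, E)

# 3) Minimize the cheap meta-model instead of the full NWP model
res = minimize(lambda p: mm.predict(p.reshape(1, -1))[0],
               x0=bounds.mean(axis=1), bounds=bounds)
print("calibrated parameters:", res.x)
```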

  5. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.

  6. Discrete Sparse Coding.

    PubMed

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  7. Reconstructing the Sky Location of Gravitational-Wave Detected Compact Binary Systems: Methodology for Testing and Comparison

    NASA Technical Reports Server (NTRS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; et al.

    2014-01-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of 20 solar masses or less and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor of approximately 20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor of approximately 1000 longer processing time.

  8. Reconstructing the sky location of gravitational-wave detected compact binary systems: Methodology for testing and comparison

    NASA Astrophysics Data System (ADS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.

    2014-04-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided in two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.

  9. Classifying and mapping wetlands and peat resources using digital cartography

    USGS Publications Warehouse

    Cameron, Cornelia C.; Emery, David A.

    1992-01-01

    Digital cartography allows the portrayal of spatial associations among diverse data types and is ideally suited for land use and resource analysis. We have developed methodology that uses digital cartography for the classification of wetlands and their associated peat resources and applied it to a 1:24 000 scale map area in New Hampshire. Classifying and mapping wetlands involves integrating the spatial distribution of wetlands types with depth variations in associated peat quality and character. A hierarchically structured classification that integrates the spatial distribution of variations in (1) vegetation, (2) soil type, (3) hydrology, (4) geologic aspects, and (5) peat characteristics has been developed and can be used to build digital cartographic files for resource and land use analysis. The first three parameters are the bases used by the National Wetlands Inventory to classify wetlands and deepwater habitats of the United States. The fourth parameter, geological aspects, includes slope, relief, depth of wetland (from surface to underlying rock or substrate), wetland stratigraphy, and the type and structure of solid and unconsolidated rock surrounding and underlying the wetland. The fifth parameter, peat characteristics, includes the subsurface variation in ash, acidity, moisture, heating value (Btu), sulfur content, and other chemical properties as shown in specimens obtained from core holes. These parameters can be shown as a series of map data overlays with tables that can be integrated for resource or land use analysis.

  10. Dynamics of Female Pelvic Floor Function Using Urodynamics, Ultrasound and Magnetic Resonance Imaging (MRI)

    PubMed Central

    Constantinou, Christos E.

    2009-01-01

    In this review the diagnostic potential of evaluating female pelvic floor muscle (PFM) function using magnetic and ultrasound imaging in the context of urodynamic observations is considered in terms of determining the mechanisms of urinary continence. A new approach is used to consider the dynamics of PFM activity by introducing new parameters derived from imaging. Novel image processing techniques are applied to illustrate the static anatomy and dynamic PFM function of stress-incontinent women pre- and post-operatively as compared to asymptomatic subjects. Function was evaluated from the dynamics of organ displacement produced during voluntary and reflex activation. Technical innovations include the use of ultrasound analysis of the movement of structures during maneuvers that are associated with external stimuli. Enabling this approach is the development of criteria and new parameters that define the kinematics of PFM function. Principal among these parameters are displacement, velocity, acceleration and the trajectory of pelvic floor landmarks. To accomplish this objective, movement detection methods, including motion tracking and segmentation algorithms, were developed to derive new parameters of trajectory, displacement, velocity and acceleration, and strain of pelvic structures during different maneuvers. Results highlight the importance of timing the movement and deformation to fast and stressful maneuvers, which are important for understanding the neuromuscular control and function of PFM. Furthermore, observations suggest that timing of responses is a significant factor separating the continent from the incontinent subjects. PMID:19303690
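
    The kinematic quantities mentioned here (displacement, velocity, acceleration, trajectory) can be derived from a tracked landmark position series by straightforward numerical differentiation, as in the sketch below. The frame rate, the synthetic trajectory and the units are assumptions for illustration; the review's motion-tracking and segmentation algorithms are not reproduced.

```python
import numpy as np

fps = 30.0                       # assumed ultrasound frame rate (frames/s)
t = np.arange(0, 2, 1 / fps)

# Synthetic trajectory of one pelvic-floor landmark (x, y in mm) during a maneuver
xy = np.column_stack([2.0 * np.sin(2 * np.pi * 1.0 * t),
                      1.0 * (1 - np.cos(2 * np.pi * 1.0 * t))])

displacement = xy - xy[0]                              # mm, relative to rest
velocity = np.gradient(xy, 1 / fps, axis=0)            # mm/s
acceleration = np.gradient(velocity, 1 / fps, axis=0)  # mm/s^2
path_length = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))

print("peak displacement (mm):", np.abs(displacement).max())
print("peak speed (mm/s):", np.linalg.norm(velocity, axis=1).max())
print("trajectory length (mm):", path_length)
```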

  11. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  12. The Effects of Molecular Properties on Ready Biodegradation of Aromatic Compounds in the OECD 301B CO2 Evolution Test.

    PubMed

    He, Mei; Mei, Cheng-Fang; Sun, Guo-Ping; Li, Hai-Bei; Liu, Lei; Xu, Mei-Ying

    2016-07-01

    Ready biodegradation is the primary biodegradability of a compound, which is used for discriminating whether a compound could be rapidly and readily biodegraded in natural ecosystems in a short period, and it has been applied extensively in the environmental risk assessment of many chemicals. In this study, the effects of 24 molecular properties (including 2 physicochemical parameters, 10 geometrical parameters, 6 topological parameters, and 6 electronic parameters) on the ready biodegradation of 24 kinds of synthetic aromatic compounds were investigated using the OECD 301B CO2 Evolution test. The relationship between molecular properties and ready biodegradation of these aromatic compounds varied with molecular properties. A significant inverse correlation was found for the topological parameter TD, five geometrical parameters (Rad, CAA, CMA, CSEV, and Nc), and the physicochemical parameter Kow, and a positive correlation for two topological parameters TC and TVC, whereas no significant correlation was observed for any of the electronic parameters. Based on the correlations between molecular properties and ready biodegradation of these aromatic compounds, the importance of molecular properties was demonstrated as follows: geometrical properties > topological properties > physicochemical properties > electronic properties. Our study is the first to demonstrate the effects of molecular properties on ready biodegradation using a large set of experimental data obtained under the same experimental conditions, which should be taken into account to better guide ready biodegradation tests and understand the mechanisms of the ready biodegradation of aromatic compounds.

  13. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    PubMed

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
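
    The reformulation the abstract describes, separating the linear from the nonlinear parameters, is a variable-projection (separable least-squares) idea. The sketch below applies it to a toy two-exponential time-activity curve rather than an actual multi-tracer compartment model: for any trial set of nonlinear rate constants, the best linear amplitudes follow from an ordinary least-squares solve, so the outer nonlinear search runs over half as many dimensions. All model details and values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

t = np.linspace(0.1, 60.0, 120)                    # minutes
true_rates = np.array([0.05, 0.6])                 # nonlinear parameters
true_amps = np.array([3.0, 1.5])                   # linear parameters
y = np.exp(-np.outer(t, true_rates)) @ true_amps + rng.normal(0, 0.02, t.size)

def residual_norm(rates):
    """Variable projection: for fixed nonlinear rates, the best linear
    amplitudes come from an ordinary least-squares solve."""
    basis = np.exp(-np.outer(t, rates))            # (n_times, n_terms)
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return np.sum((y - basis @ amps) ** 2)

# The nonlinear search now runs over 2 parameters instead of 4
res = minimize(residual_norm, x0=[0.1, 1.0], method="Nelder-Mead")
best_basis = np.exp(-np.outer(t, res.x))
best_amps, *_ = np.linalg.lstsq(best_basis, y, rcond=None)
print("estimated rates:", res.x, "estimated amplitudes:", best_amps)
```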

  14. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
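
    The colour-classification and Hough-transform steps of such a pipeline can be sketched with OpenCV as below, on a synthetic frame containing two white lines on a green court. The colour thresholds and Hough settings are arbitrary illustrative values, and the subsequent court-model matching and camera-parameter optimization stages of the algorithm are not shown.

```python
import cv2
import numpy as np

# Synthetic frame: dark green court with two white court lines
frame = np.zeros((360, 640, 3), np.uint8)
frame[:] = (40, 110, 40)
cv2.line(frame, (50, 300), (600, 280), (255, 255, 255), 3)
cv2.line(frame, (120, 40), (160, 340), (255, 255, 255), 3)

# 1) Classify candidate court-line pixels by colour (bright, low saturation)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))

# 2) Hough transform to extract line segments from the candidate pixels
segments = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=100, maxLineGap=10)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print("line segment:", (x1, y1), "->", (x2, y2))
# A later step would match these segments against the known court model
# to solve for the camera calibration parameters (not shown here).
```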

  15. Needleless Electrospinning Experimental Study and Nanofiber Application in Semiconductor Packaging

    NASA Astrophysics Data System (ADS)

    Sun, Tianwei

    Electronics especially mobile electronics such as smart phones, tablet PCs, notebooks and digital cameras are undergoing rapid development nowadays and have thoroughly changed our lives. With the requirement of more transistors, higher power, smaller size, lighter weight and even bendability, thermal management of these devices became one of the key challenges. Compared to active heat management system, heat pipe, which is a passive fluidic system, is considered promising to solve this problem. However, traditional heat pipes have size, weight and capillary limitation. Thus new type of heat pipe with smaller size, lighter weight and higher capillary pressure is needed. Nanofiber has been proved with superior properties and has been applied in multiple areas. This study discussed the possibility of applying nanofiber in heat pipe as new wick structure. In this study, a needleless electrospinning device with high productivity rate was built onsite to systematically investigate the effect of processing parameters on fiber properties as well as to generate nanofiber mat to evaluate its capability in electronics cooling. Polyethylene oxide (PEO) and Polyvinyl Alcohol (PVA) nanofibers were generated. Tensiometer was used for wettability measurement. The results show that independent parameters including spinneret type, working distance, solution concentration and polymer type are strongly correlated with fiber morphology compared to other parameters. The results also show that the fabricated nanofiber mat has high capillary pressure.

  16. An Empirical Mass Function Distribution

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Robotham, A. S. G.; Power, C.

    2018-03-01

    The halo mass function, encoding the comoving number density of dark matter halos of a given mass, plays a key role in understanding the formation and evolution of galaxies. As such, it is a key goal of current and future deep optical surveys to constrain the mass function down to mass scales that typically host {L}\\star galaxies. Motivated by the proven accuracy of Press–Schechter-type mass functions, we introduce a related but purely empirical form consistent with standard formulae to better than 4% in the medium-mass regime, {10}10{--}{10}13 {h}-1 {M}ȯ . In particular, our form consists of four parameters, each of which has a simple interpretation, and can be directly related to parameters of the galaxy distribution, such as {L}\\star . Using this form within a hierarchical Bayesian likelihood model, we show how individual mass-measurement errors can be successfully included in a typical analysis, while accounting for Eddington bias. We apply our form to a question of survey design in the context of a semi-realistic data model, illustrating how it can be used to obtain optimal balance between survey depth and angular coverage for constraints on mass function parameters. Open-source Python and R codes to apply our new form are provided at http://mrpy.readthedocs.org and https://cran.r-project.org/web/packages/tggd/index.html respectively.

  17. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  18. Reaction time as an indicator of insufficient effort: Development and validation of an embedded performance validity parameter.

    PubMed

    Stevens, Andreas; Bahlo, Simone; Licha, Christina; Liske, Benjamin; Vossler-Thies, Elisabeth

    2016-11-30

    Subnormal performance in attention tasks may result from various sources, including lack of effort. In this report, the derivation and validation of a performance validity parameter for reaction time is described, using a set of malingering indices (the "Slick criteria") and 3 independent samples of participants (total n = 893). The Slick criteria yield an estimate of the probability of malingering based on the presence of an external incentive and on evidence from neuropsychological testing, self-report and clinical data. In study (1), a validity parameter is derived using reaction time data from a sample composed of inpatients with recent severe brain lesions who were not involved in litigation and of litigants with and without brain lesions. In study (2), the validity parameter is tested in an independent sample of litigants. In study (3), the parameter is applied to an independent sample comprising cooperative and non-cooperative testees. Logistic regression analysis led to a derived validity parameter based on median reaction time and standard deviation. It performed satisfactorily in studies (2) and (3) (study 2 sensitivity=0.94, specificity=1.00; study 3 sensitivity=0.79, specificity=0.87). The findings suggest that median reaction time and standard deviation may be used as indicators of negative response bias. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
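
    A minimal sketch of how a logistic-regression-based validity parameter of this kind can be derived is given below, with synthetic reaction-time data standing in for the clinical samples: the two predictors are the median reaction time and its standard deviation, and a probability cut-off flags protocols suggestive of insufficient effort. The group means, the cut-off and the use of scikit-learn are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Synthetic stand-ins: cooperative patients vs. non-cooperative testees
n = 200
median_rt = np.concatenate([rng.normal(450, 60, n),    # cooperative (ms)
                            rng.normal(700, 150, n)])  # insufficient effort
rt_sd = np.concatenate([rng.normal(80, 20, n),
                        rng.normal(220, 60, n)])
X = np.column_stack([median_rt, rt_sd])
y = np.concatenate([np.zeros(n), np.ones(n)])          # 1 = negative response bias

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Embedded validity parameter: flag a protocol when the predicted
# probability of insufficient effort exceeds a chosen cut-off
new_case = np.array([[650.0, 210.0]])
p = clf.predict_proba(new_case)[0, 1]
print("probability of insufficient effort:", round(p, 2), "flagged:", p > 0.5)
```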

  19. The construction of support vector machine classifier using the firefly algorithm.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because the SVM together with feature selection is not suitable for application to multiclass classification, especially for the one-against-all multiclass SVM. In experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten of the benchmark data sets of the University of California, Irvine (UCI), machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method associated with the grid search method and the particle swarm optimization based SVM (PSO-SVM). The experimental results support the use of firefly-SVM for pattern classification with maximum accuracy.
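
    A simplified sketch of firefly-style hyper-parameter search for an SVM is shown below: each firefly encodes (log2 C, log2 gamma), brightness is cross-validated accuracy, and dimmer fireflies move toward brighter ones with a distance-dependent attractiveness plus a shrinking random step. Unlike the firefly-SVM of the paper, this sketch tunes only the penalty and kernel parameters and does not train the Lagrangian multipliers; the data set, search ranges and firefly constants are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X, y = load_breast_cancer(return_X_y=True)

def brightness(p):
    """Cross-validated accuracy of an RBF SVM at (log2 C, log2 gamma)."""
    clf = make_pipeline(StandardScaler(), SVC(C=2.0 ** p[0], gamma=2.0 ** p[1]))
    return cross_val_score(clf, X, y, cv=3).mean()

n_fireflies, n_iter = 6, 10
beta0, absorb, alpha = 1.0, 0.5, 0.4
pos = rng.uniform([-5.0, -10.0], [10.0, 2.0], size=(n_fireflies, 2))
light = np.array([brightness(p) for p in pos])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] > light[i]:                  # move i toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-absorb * r2)
                pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(0, 1, 2)
                light[i] = brightness(pos[i])
    alpha *= 0.95                                    # shrink the random step

best = pos[np.argmax(light)]
print("best C:", 2.0 ** best[0], "best gamma:", 2.0 ** best[1],
      "CV accuracy:", light.max())
```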

  20. The Construction of Support Vector Machine Classifier Using the Firefly Algorithm

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because the SVM together with feature selection is not suitable for application to multiclass classification, especially for the one-against-all multiclass SVM. In experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten of the benchmark data sets of the University of California, Irvine (UCI), machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method associated with the grid search method and the particle swarm optimization based SVM (PSO-SVM). The experimental results support the use of firefly-SVM for pattern classification with maximum accuracy. PMID:25802511

  1. SAR target recognition using behaviour library of different shapes in different incidence angles and polarisations

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas

    2018-05-01

    Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in SAR can provide a better understanding of SAR imaging systems, and therefore facilitates the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories of radar, radar platform, channel, imaging region, and processing section, each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the finally formed images. In this paper, for the first time, a behaviour library that includes the effects of polarisation, incidence angle, and shape of targets, as radar and imaging region sub-parameters, on SAR images is extracted. This library shows that the created pattern for each of the cylindrical, conical, and cubic shapes is unique, and due to their unique properties these types of shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT-1 satellite.

  2. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    PubMed Central

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197

  3. Ab initio 27Al NMR chemical shifts and quadrupolar parameters for Al2O3 phases and their precursors

    NASA Astrophysics Data System (ADS)

    Ferreira, Ary R.; Küçükbenli, Emine; Leitão, Alexandre A.; de Gironcoli, Stefano

    2011-12-01

    The gauge-including projector augmented wave (GIPAW) method, within the density functional theory (DFT) generalized gradient approximation (GGA) framework, is applied to compute solid state NMR parameters for 27Al in the α, θ, and κ aluminium oxide phases and their gibbsite and boehmite precursors. The results for well established crystalline phases compare very well with available experimental data and provide confidence in the accuracy of the method. For γ-alumina, four structural models proposed in the literature are discussed in terms of their ability to reproduce the experimental spectra also reported in the literature. Among the considered models, the Fd-3m structure proposed by Paglia [Phys. Rev. B 71, 224115 (2005)] shows the best agreement. We attempt to link the theoretical NMR parameters to the local geometry. Chemical shifts depend on coordination number but no further correlation is found with geometrical parameters. Instead, our calculations reveal that, within a given coordination number, a linear correlation exists between chemical shifts and Born effective charges.

  4. Pedestrian Detection by Laser Scanning and Depth Imagery

    NASA Astrophysics Data System (ADS)

    Barsi, A.; Lovas, T.; Molnar, B.; Somogyi, A.; Igazvolgyi, Z.

    2016-06-01

    Pedestrian flow is much less regulated and controlled compared to vehicle traffic. Estimating flow parameters would support many safety, security or commercial applications. The current paper discusses a method that enables acquiring information on pedestrian movements without disturbing or changing their motion. A profile laser scanner and a depth camera were applied to capture the geometry of the moving people as time series. Procedures have been developed to derive complex flow parameters, such as count, volume, walking direction and velocity, from laser-scanned point clouds. Since no images are captured of the pedestrians' faces, no privacy issues are raised. The paper includes an accuracy analysis of the estimated parameters based on video footage as reference. Due to the dense point clouds, detailed geometry analysis has been conducted to obtain the height and shoulder width of pedestrians and to detect whether luggage is being carried or not. The derived parameters support safety (e.g. detecting critical pedestrian density in mass events), security (e.g. detecting prohibited baggage in endangered areas) and commercial applications (e.g. counting pedestrians at all entrances/exits of a shopping mall).

  5. Theoretical research of the spin-Hamiltonian parameters for two rhombic W5+ centers in KTiOPO4 (KTP) crystal through a two-mechanism model

    NASA Astrophysics Data System (ADS)

    Mei, Yang; Chen, Bo-Wei; Wei, Chen-Fu; Zheng, Wen-Chen

    2016-09-01

    The high-order perturbation formulas based on the two-mechanism model are employed to calculate the spin-Hamiltonian parameters (g factors gi and hyperfine structure constants Ai, where i=x, y, z) for two approximately rhombic W5+ centers in KTiOPO4 (KTP) crystal. In the model, both the widely-applied crystal-field (CF) mechanism concerning the interactions of CF excited states with the ground state and the generally-neglected charge-transfer (CT) mechanism concerning the interactions of CT excited states with the ground state are included. The calculated results agree with the experimental values, and the signs of constants Ai are suggested. The calculations indicate that (i) for the high valence state dn ions in crystals, the contributions to spin-Hamiltonian parameters should take into account both the CF and CT mechanisms and (ii) the large g-shifts |Δgi | (=|gi-ge |, where ge≈ 2.0023) for W5+ centers in crystals are due to the large spin-orbit parameter of free W5+ ion.

  6. Practical implications of some recent studies in electrospray ionization fundamentals.

    PubMed

    Cech, N B; Enke, C G

    2001-01-01

    In accomplishing successful electrospray ionization analyses, it is imperative to have an understanding of the effects of variables such as analyte structure, instrumental parameters, and solution composition. Here, we review some fundamental studies of the ESI process that are relevant to these issues. We discuss how analyte chargeability and surface activity are related to ESI response, and how accessible parameters such as nonpolar surface area and reversed-phase HPLC retention time can be used to predict relative ESI response. Also presented is a description of how derivatizing agents can be used to maximize or enable ESI response by improving the chargeability or hydrophobicity of ESI analytes. Limiting factors in the ESI calibration curve are discussed. At high concentrations, these factors include droplet surface area and excess charge concentration, whereas at low concentrations ion transmission becomes an issue, and chemical interference can also be limiting. Stable and reproducible non-pneumatic ESI operation depends on the ability to balance a number of parameters, including applied voltage and solution surface tension, flow rate, and conductivity. We discuss how changing these parameters can shift the mode of ESI operation from stable to unstable, and how current-voltage curves can be used to characterize the mode of ESI operation. Finally, the characteristics of the ideal ESI solvent, including surface tension and conductivity requirements, are discussed. Analysis in the positive ion mode can be accomplished with acidified methanol/water solutions, but negative ion mode analysis necessitates special constituents that suppress corona discharge and facilitate the production of stable negative ions. Copyright 2002 Wiley Periodicals, Inc.

  7. Effect of Nutrient Management Planning on Crop Yield, Nitrate Leaching and Sediment Loading in Thomas Brook Watershed

    NASA Astrophysics Data System (ADS)

    Amon-Armah, Frederick; Yiridoe, Emmanuel K.; Ahmad, Nafees H. M.; Hebb, Dale; Jamieson, Rob; Burton, David; Madani, Ali

    2013-11-01

    Government priorities on provincial Nutrient Management Planning (NMP) programs include improving the program effectiveness for environmental quality protection, and promoting more widespread adoption. Understanding the effect of NMP on both crop yield and key water-quality parameters in agricultural watersheds requires a comprehensive evaluation that takes into consideration important NMP attributes and location-specific farming conditions. This study applied the Soil and Water Assessment Tool (SWAT) to investigate the effects of crop and rotation sequence, tillage type, and nutrient N application rate on crop yield and the associated groundwater leaching and sediment loss. The SWAT model was applied to the Thomas Brook Watershed, located in the most intensively managed agricultural region of Nova Scotia, Canada. Cropping systems evaluated included seven fertilizer application rates and two tillage systems (i.e., conventional tillage and no-till). The analysis reflected cropping systems commonly managed by farmers in the Annapolis Valley region, including grain corn-based and potato-based cropping systems, and a vegetable-horticulture system. ANOVA models were developed and used to assess the effects of crop management choices on crop yield and two water-quality parameters (i.e., leaching and sediment loading). Results suggest that existing recommended N-fertilizer rate can be reduced by 10-25 %, for grain crop production, to significantly lower leaching ( P > 0.05) while optimizing the crop yield. The analysis identified the nutrient N rates in combination with specific crops and rotation systems that can be used to manage leaching while balancing impacts on crop yields within the watershed.

  8. Teaching-learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

    NASA Astrophysics Data System (ADS)

    Singh, R.; Verma, H. K.

    2013-12-01

    This paper presents a teaching-learning-based optimization (TLBO) algorithm to solve parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied to calculate the parameters of an unknown plant in simulations. Unlike other heuristic search algorithms, the TLBO algorithm is free of algorithm-specific parameters. In this paper, big bang-big crunch (BB-BC) optimization and PSO algorithms are also applied to the filter design for comparison. The unknown filter parameters are treated as a vector to be optimized by these algorithms. MATLAB programming is used to implement the proposed algorithms. Experimental results show that TLBO estimates the filter parameters more accurately than the BB-BC optimization algorithm and converges faster than the PSO algorithm. TLBO is therefore suited to cases where accuracy is more essential than convergence speed.
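
    To make the two TLBO phases concrete, below is a minimal sketch of the teacher and learner phases applied to a generic least-squares identification objective. The population size, bounds, and the toy second-order plant are illustrative assumptions, not details taken from the paper.

    ```python
    # Minimal TLBO sketch (illustrative; not the paper's MATLAB implementation).
    import numpy as np
    from scipy.signal import lfilter

    def tlbo_minimize(objective, bounds, pop_size=20, iterations=200, seed=0):
        """Teaching-learning-based optimization on a box-bounded parameter space."""
        rng = np.random.default_rng(seed)
        lo, hi = (np.asarray(b, dtype=float) for b in bounds)
        pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
        cost = np.array([objective(x) for x in pop])

        for _ in range(iterations):
            # Teacher phase: move learners toward the best solution, away from the class mean.
            teacher = pop[cost.argmin()]
            mean = pop.mean(axis=0)
            tf = rng.integers(1, 3)  # teaching factor, 1 or 2
            new = np.clip(pop + rng.random(pop.shape) * (teacher - tf * mean), lo, hi)
            new_cost = np.array([objective(x) for x in new])
            better = new_cost < cost
            pop[better], cost[better] = new[better], new_cost[better]

            # Learner phase: each learner moves relative to a randomly chosen peer.
            peers = rng.permutation(pop_size)
            step = np.where((cost < cost[peers])[:, None], pop - pop[peers], pop[peers] - pop)
            new = np.clip(pop + rng.random(pop.shape) * step, lo, hi)
            new_cost = np.array([objective(x) for x in new])
            better = new_cost < cost
            pop[better], cost[better] = new[better], new_cost[better]

        best = cost.argmin()
        return pop[best], cost[best]

    # Toy identification example: recover the coefficients of a hypothetical
    # second-order IIR plant from its impulse response.
    impulse = np.eye(1, 64).ravel()
    target = lfilter([0.3, 0.2], [1.0, -0.5, 0.25], impulse)

    def mse(p):
        return np.mean((lfilter(p[:2], np.r_[1.0, p[2:]], impulse) - target) ** 2)

    best_params, best_err = tlbo_minimize(mse, (np.full(4, -1.0), np.full(4, 1.0)))
    ```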

  9. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
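
    The conventional SA loop summarized above (random start, random proposals, temperature-dependent acceptance of worse configurations, and a shrinking selection region) can be written compactly as below. This is a generic sketch rather than the RBSA implementation; the cooling schedule, shrink factor, and test objective are assumptions.

    ```python
    # Generic simulated-annealing sketch (illustrative; not the RBSA implementation).
    import math
    import random

    def simulated_annealing(objective, lower, upper, iterations=5000,
                            t0=1.0, cooling=0.999, shrink=0.999, seed=0):
        """Conventional SA over a box-bounded continuous parameter space."""
        rng = random.Random(seed)
        dim = len(lower)
        x = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
        fx = objective(x)
        best, fbest = list(x), fx
        temp = t0
        radius = [u - l for l, u in zip(lower, upper)]  # size of the selectable region

        for _ in range(iterations):
            # Propose a configuration inside the (shrinking) region around the current point.
            cand = [min(upper[i], max(lower[i], x[i] + rng.uniform(-radius[i], radius[i])))
                    for i in range(dim)]
            fc = objective(cand)
            # Always accept improvements; accept worse moves with a temperature-dependent probability.
            if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            temp *= cooling                          # lower the annealing temperature
            radius = [r * shrink for r in radius]    # shrink the selectable region

        return best, fbest

    # Example: a two-parameter objective with a shallow ripple on top of a bowl.
    f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2 + 0.1 * math.sin(5.0 * p[0])
    print(simulated_annealing(f, [-5.0, -5.0], [5.0, 5.0]))
    ```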

  10. Method and system for determining precursors of health abnormalities from processing medical records

    DOEpatents

    None, None

    2013-06-25

    Medical reports are converted to document vectors in computing apparatus and sampled by applying a maximum variation sampling function including a fitness function to the document vectors to reduce a number of medical records being processed and to increase the diversity of the medical records being processed. Linguistic phrases are extracted from the medical records and converted to s-grams. A Haar wavelet function is applied to the s-grams over the preselected time interval; and the coefficient results of the Haar wavelet function are examined for patterns representing the likelihood of health abnormalities. This confirms certain s-grams as precursors of the health abnormality and a parameter can be calculated in relation to the occurrence of such a health abnormality.
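
    To illustrate the wavelet step, a minimal single-level Haar transform applied to a hypothetical series of s-gram counts is sketched below; the input series and the threshold used to flag candidate precursors are illustrative and are not taken from the patent.

    ```python
    # Single-level Haar transform sketch (illustrative; not the patented system's code).
    import numpy as np

    def haar_level(x):
        """One level of the orthonormal Haar wavelet transform."""
        x = np.asarray(x, dtype=float)
        if x.size % 2:              # pad to an even length by repeating the last value
            x = np.append(x, x[-1])
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return approx, detail

    # Hypothetical weekly counts of one s-gram over a preselected time interval.
    counts = [0, 1, 0, 2, 1, 5, 9, 14]
    approx, detail = haar_level(counts)

    # Large-magnitude detail coefficients flag abrupt changes that could be
    # examined as candidate precursors of a health abnormality.
    candidates = np.flatnonzero(np.abs(detail) > 2.0)
    print(detail, candidates)
    ```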

  11. Plug nozzles - The ultimate customer driven propulsion system. [applied to manned lunar and Martian landers

    NASA Technical Reports Server (NTRS)

    Aukerman, Carl A.

    1991-01-01

    This paper presents the results of a study applying the plug cluster nozzle concept to the propulsion system for a typical lunar excursion vehicle. Primary attention for the design criteria is given to user defined factors such as reliability, low volume, and ease of propulsion system development. Total thrust and specific impulse are held constant in the study while other parameters are explored to minimize the design chamber pressure. A brief history of the plug nozzle concept is included to point out the advanced level of technology of the concept and the feasibility of exploiting the variables considered in the study. The plug cluster concept looks very promising as a candidate for consideration for the ultimate customer driven propulsion system.

  12. Systems biology as a conceptual framework for research in family medicine; use in predicting response to influenza vaccination.

    PubMed

    Majnarić-Trtica, Ljiljana; Vitale, Branko

    2011-10-01

    To introduce systems biology as a conceptual framework for research in family medicine, based on empirical data from a case study on the prediction of influenza vaccination outcomes. This concept is primarily oriented towards planning preventive interventions and includes systematic data recording, a multi-step research protocol and predictive modelling. Factors known to affect responses to influenza vaccination include older age, past exposure to influenza viruses, and chronic diseases; however, constructing useful prediction models remains a challenge, because of the need to identify health parameters that are appropriate for general use in modelling patients' responses. The sample consisted of 93 patients aged 50-89 years (median 69), with multiple medical conditions, who were vaccinated against influenza. Literature searches identified potentially predictive health-related parameters, including age, gender, diagnoses of the main chronic ageing diseases, anthropometric measures, and haematological and biochemical tests. By applying data mining algorithms, patterns were identified in the data set. Candidate health parameters, selected in this way, were then combined with information on past influenza virus exposure to build the prediction model using logistic regression. A highly significant prediction model was obtained, indicating that by using a systems biology approach it is possible to answer unresolved complex medical uncertainties. Adopting this systems biology approach can be expected to be useful in identifying the most appropriate target groups for other preventive programmes.

  13. A non-classical Mindlin plate model incorporating microstructure, surface energy and foundation effects.

    PubMed

    Gao, X-L; Zhang, G Y

    2016-07-01

    A non-classical model for a Mindlin plate resting on an elastic foundation is developed in a general form using a modified couple stress theory, a surface elasticity theory and a two-parameter Winkler-Pasternak foundation model. It includes all five kinematic variables possible for a Mindlin plate. The equations of motion and the complete boundary conditions are obtained simultaneously through a variational formulation based on Hamilton's principle, and the microstructure, surface energy and foundation effects are treated in a unified manner. The newly developed model contains one material length-scale parameter to describe the microstructure effect, three surface elastic constants to account for the surface energy effect, and two foundation parameters to capture the foundation effect. The current non-classical plate model reduces to its classical elasticity-based counterpart when the microstructure, surface energy and foundation effects are all suppressed. In addition, the new model includes the Mindlin plate models considering the microstructure dependence or the surface energy effect or the foundation influence alone as special cases, recovers the Kirchhoff plate model incorporating the microstructure, surface energy and foundation effects, and degenerates to the Timoshenko beam model including the microstructure effect. To illustrate the new Mindlin plate model, the static bending and free vibration problems of a simply supported rectangular plate are analytically solved by directly applying the general formulae derived.

  14. A non-classical Mindlin plate model incorporating microstructure, surface energy and foundation effects

    PubMed Central

    Zhang, G. Y.

    2016-01-01

    A non-classical model for a Mindlin plate resting on an elastic foundation is developed in a general form using a modified couple stress theory, a surface elasticity theory and a two-parameter Winkler–Pasternak foundation model. It includes all five kinematic variables possible for a Mindlin plate. The equations of motion and the complete boundary conditions are obtained simultaneously through a variational formulation based on Hamilton's principle, and the microstructure, surface energy and foundation effects are treated in a unified manner. The newly developed model contains one material length-scale parameter to describe the microstructure effect, three surface elastic constants to account for the surface energy effect, and two foundation parameters to capture the foundation effect. The current non-classical plate model reduces to its classical elasticity-based counterpart when the microstructure, surface energy and foundation effects are all suppressed. In addition, the new model includes the Mindlin plate models considering the microstructure dependence or the surface energy effect or the foundation influence alone as special cases, recovers the Kirchhoff plate model incorporating the microstructure, surface energy and foundation effects, and degenerates to the Timoshenko beam model including the microstructure effect. To illustrate the new Mindlin plate model, the static bending and free vibration problems of a simply supported rectangular plate are analytically solved by directly applying the general formulae derived. PMID:27493578

  15. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
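
    A minimal random-walk Metropolis sketch of the stochastic (MCMC-Bayesian) calibration idea described above: propose parameter values, score them against observations through a Gaussian likelihood, and accumulate posterior samples whose spread narrows as data accumulate. The forward model, priors, and noise level are placeholders and stand in for CLM4 only schematically.

    ```python
    # Random-walk Metropolis sketch (illustrative; the forward model is a placeholder, not CLM4).
    import numpy as np

    def metropolis(log_post, x0, steps=20000, prop_sd=0.05, seed=0):
        """Sample a posterior over a parameter vector with a Gaussian random-walk proposal."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        lp = log_post(x)
        chain = np.empty((steps, x.size))
        for i in range(steps):
            cand = x + rng.normal(scale=prop_sd, size=x.size)
            lp_cand = log_post(cand)
            if np.log(rng.random()) < lp_cand - lp:   # Metropolis accept/reject
                x, lp = cand, lp_cand
            chain[i] = x
        return chain

    # Placeholder forward model: runoff-like response from two hypothetical hydrologic parameters.
    def forward(theta, rain):
        decay, frac = theta
        return frac * rain * np.exp(-decay * np.arange(rain.size))

    rng = np.random.default_rng(1)
    rain = rng.gamma(2.0, 2.0, size=50)
    obs = forward([0.05, 0.3], rain) + rng.normal(scale=0.2, size=50)

    def log_post(theta):
        if not (0.0 < theta[0] < 1.0 and 0.0 < theta[1] < 1.0):  # flat prior on (0, 1)^2
            return -np.inf
        resid = obs - forward(theta, rain)
        return -0.5 * np.sum(resid ** 2) / 0.2 ** 2               # Gaussian likelihood

    chain = metropolis(log_post, [0.5, 0.5])
    print(chain[10000:].mean(axis=0))   # posterior means after burn-in
    ```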

  16. Effect of a multifactorial fall-and-fracture risk assessment and management program on gait and balance performances and disability in hospitalized older adults: a controlled study.

    PubMed

    Trombetti, A; Hars, M; Herrmann, F; Rizzoli, R; Ferrari, S

    2013-03-01

    This controlled intervention study in hospitalized oldest old adults showed that a multifactorial fall-and-fracture risk assessment and management program, applied in a dedicated geriatric hospital unit, was effective in improving fall-related physical and functional performances and the level of independence in activities of daily living in high-risk patients. Hospitalization affords a major opportunity for interdisciplinary cooperation to manage fall-and-fracture risk factors in older adults. This study aimed at assessing the effects on physical performances and the level of independence in activities of daily living (ADL) of a multifactorial fall-and-fracture risk assessment and management program applied in a geriatric hospital setting. A controlled intervention study was conducted among 122 geriatric inpatients (mean ± SD age, 84 ± 7 years) admitted with a fall-related diagnosis. Among them, 92 were admitted to a dedicated unit and enrolled into a multifactorial intervention program, including intensive targeted exercise. Thirty patients who received standard usual care in a general geriatric unit formed the control group. Primary outcomes included gait and balance performances and the level of independence in ADL measured 12 ± 6 days apart. Secondary outcomes included length of stay, incidence of in-hospital falls, hospital readmission, and mortality rates. Compared to the usual care group, the intervention group had significant improvements in Timed Up and Go (adjusted mean difference [AMD] = -3.7s; 95 % CI = -6.8 to -0.7; P = 0.017), Tinetti (AMD = -1.4; 95 % CI = -2.1 to -0.8; P < 0.001), and Functional Independence Measure (AMD = 6.5; 95 %CI = 0.7-12.3; P = 0.027) test performances, as well as in several gait parameters (P < 0.05). Furthermore, this program favorably impacted adverse outcomes including hospital readmission (hazard ratio = 0.3; 95 % CI = 0.1-0.9; P = 0.02). A multifactorial fall-and-fracture risk-based intervention program, applied in a dedicated geriatric hospital unit, was effective and more beneficial than usual care in improving physical parameters related to the risk of fall and disability among high-risk oldest old patients.

  17. Resolving the Effects of Maternal and Offspring Genotype on Dyadic Outcomes in Genome Wide Complex Trait Analysis (“M-GCTA”)

    PubMed Central

    Pourcain, Beate St.; Smith, George Davey; York, Timothy P.; Evans, David M.

    2014-01-01

    Genome wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype (“maternal effects”, M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ∼4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered. PMID:25060210
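
    The abstract names three genetic parameters (direct, maternal, and their covariance) plus a residual. One way such a decomposition of the phenotypic covariance is commonly written, assuming relatedness matrices built from offspring genotypes (A_o), maternal genotypes (A_m), and their cross terms (A_om), is sketched below; the exact matrix definitions are those of the M-GCTA paper rather than anything stated in this summary.

    ```latex
    % Hedged sketch of the implied variance decomposition (notation assumed, not quoted from the paper)
    \operatorname{Var}(\mathbf{y}) =
      \mathbf{A}_{o}\,\sigma^{2}_{o}
      + \mathbf{A}_{m}\,\sigma^{2}_{m}
      + \bigl(\mathbf{A}_{om} + \mathbf{A}_{om}^{\top}\bigr)\,\sigma_{om}
      + \mathbf{I}\,\sigma^{2}_{e}
    ```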

  18. An approach to and web-based tool for infectious disease outbreak intervention analysis

    NASA Astrophysics Data System (ADS)

    Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; Deshpande, Alina

    2017-04-01

    Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we show a Susceptible-Infectious-Recovered (SIR) model modified to include control measures that allows parameter ranges, rather than parameter point estimates, and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.
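
    A minimal SIR sketch with a simple control-measure term, in the spirit of the tool described (the parameter-range handling and web interface are omitted); β, γ, and the control effectiveness are illustrative values, not the authors' estimates.

    ```python
    # Forward-Euler SIR with a simple control term (illustrative parameters, not the authors' estimates).
    import numpy as np

    def sir_with_control(beta, gamma, control=0.0, s0=0.99, i0=0.01, days=160, dt=0.1):
        """SIR where a control measure scales transmission by (1 - control)."""
        steps = int(days / dt)
        s, i, r = s0, i0, 1.0 - s0 - i0
        traj = np.empty((steps, 3))
        for k in range(steps):
            beta_eff = beta * (1.0 - control)   # e.g. distancing or school closure
            new_inf = beta_eff * s * i * dt
            new_rec = gamma * i * dt
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            traj[k] = (s, i, r)
        return traj

    # Compare an uncontrolled outbreak with a 40%-effective intervention
    # (parameter values are only roughly influenza-like and purely illustrative).
    baseline = sir_with_control(beta=0.4, gamma=0.2)
    controlled = sir_with_control(beta=0.4, gamma=0.2, control=0.4)
    print(baseline[:, 1].max(), controlled[:, 1].max())   # peak infectious fractions
    ```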

  19. Optimal design of earth-moving machine elements with cusp catastrophe theory application

    NASA Astrophysics Data System (ADS)

    Pitukhin, A. V.; Skobtsov, I. G.

    2017-10-01

    This paper deals with solving the optimal design problem for the operator of an earth-moving machine with a roll-over protective structure (ROPS) in terms of catastrophe theory. The first part of the paper gives a brief description of catastrophe theory, considers the cusp catastrophe, and treats the control parameters as Gaussian stochastic quantities. The second part states the optimal design problem, including the choice of the objective function and independent design variables and the establishment of system limits. The objective function is defined as the mean total cost, which includes the initial cost and the cost of failure according to the cusp catastrophe probability. The last part of the paper gives an algorithm for a random search method with interval reduction subject to side and functional constraints. The proposed solution approach can be applied to choose rational ROPS parameters, which will increase safety and reduce production and operating expenses.
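
    For reference, the standard cusp catastrophe is governed by a quartic potential in the state variable x with two control parameters a and b; the bifurcation set below marks where the number of equilibria changes. How a and b map onto the ROPS design variables is specific to the paper and is not reproduced here.

    ```latex
    % Standard cusp catastrophe; the mapping of (a, b) to design variables is not reproduced here
    V(x; a, b) = \tfrac{1}{4}x^{4} + \tfrac{1}{2}a\,x^{2} + b\,x, \qquad
    \frac{\partial V}{\partial x} = x^{3} + a\,x + b = 0, \qquad
    4a^{3} + 27b^{2} = 0 \;\;\text{(bifurcation set)}
    ```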

  20. Manual of Scaling Methods

    NASA Technical Reports Server (NTRS)

    Bond, Thomas H. (Technical Monitor); Anderson, David N.

    2004-01-01

    This manual reviews the derivation of the similitude relationships believed to be important to ice accretion and examines ice-accretion data to evaluate their importance. Both size scaling and test-condition scaling methods employing the resulting similarity parameters are described, and experimental icing tests performed to evaluate scaling methods are reviewed with results. The material included applies primarily to unprotected, unswept geometries, but some discussion of how to approach other situations is included as well. The studies given here and scaling methods considered are applicable only to Appendix-C icing conditions. Nearly all of the experimental results presented have been obtained in sea-level tunnels. Recommendations are given regarding which scaling methods to use for both size scaling and test-condition scaling, and icing test results are described to support those recommendations. Facility limitations and size-scaling restrictions are discussed. Finally, appendices summarize the air, water and ice properties used in NASA scaling studies, give expressions for each of the similarity parameters used and provide sample calculations for the size-scaling and test-condition scaling methods advocated.

  1. Utilization of ground waste seashells in cement mortars for masonry and plastering.

    PubMed

    Lertwattanaruk, Pusit; Makul, Natt; Siripattarapravat, Chalothorn

    2012-11-30

    In this research, four types of waste seashells, including short-necked clam, green mussel, oyster, and cockle, were investigated experimentally to develop a cement product for masonry and plastering. The parameters studied included water demand, setting time, compressive strength, drying shrinkage and thermal conductivity of the mortars. These properties were compared with those of a control mortar that was made of a conventional Portland cement. The main parameter of this study was the proportion of ground seashells used as cement replacement (5%, 10%, 15%, or 20% by weight). Incorporation of ground seashells resulted in reduced water demand and extended setting times of the mortars, which are advantages for rendering and plastering in hot climates. All mortars containing ground seashells yielded adequate strength, less shrinkage with drying and lower thermal conductivity compared to the conventional cement. The results indicate that ground seashells can be applied as a cement replacement in mortar mixes and may improve the workability of rendering and plastering mortar. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. [Determination of solubility parameters for asymmetrical dicationic ionic liquids by inverse gas chromatography].

    PubMed

    Wang, Jun; Yang, Xuzhao; Wu, Jinchao; Song, Hao; Zou, Wenyuan

    2015-12-01

    Inverse gas chromatography (IGC) was used to determine the solubility parameters of three asymmetrical dicationic ionic liquids ([PyC5Pi][NTf2]2, [MpC5Pi][NTf2]2 and [PyC6Pi][NTf2]2) at 343.15-363.15 K. Five alkanes were applied as test probes: octane (n-C8), decane (n-C10), dodecane (n-C12), tetradecane (n-C14) and hexadecane (n-C16). Thermodynamic parameters were obtained from the IGC data, including the specific retention volumes of the solvents (Vg0), the molar enthalpies of sorption (ΔHs(1)), the partial molar enthalpies of mixing at infinite dilution (ΔH1∞), the molar enthalpies of vaporization (ΔHv), the activity coefficients at infinite dilution (Ω1∞), and the Flory-Huggins interaction parameters (χ12∞) between the ionic liquids and the probes. The solubility parameters (δ2) of the three dicationic ionic liquids at room temperature (298.15 K) were 28.52-32.66 (J·cm-3)1/2. The solubility parameters (δ2) of the cations containing 4-methylmorpholine are larger than those of the cations containing pyridine, and δ2 increases with the number of carbon atoms in the linking group of the ionic liquid. The results are of great importance to the study of the solution behavior and the applications of ionic liquids.

  3. Comparing methods for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin

    2017-04-01

    There are plenty of methods available for locating small magnitude point source earthquakes. However, it is known that these different approaches produce different results. For each approach, results also depend on a number of parameters which can be separated into two main branches: (1) parameters related to observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location etc.). Currently, the results obtained from most of the location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain knowledge of fault geometry and seismotectonic processes and, ultimately, to improve seismic hazard assessment. In this work, realized in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake location (e.g. type of phase, max. hypocentral separation etc.). We compare several codes available (Hypo71, HypoDD, NonLinLoc etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).

  4. Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2012-01-01

    This software retrieves the surface and atmosphere parameters of multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model, and a single-scattering dominated atmosphere model, to surface reflectance data from the Multiangle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales. The surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to surface properties. The software also takes advantage of an intelligent initial condition given by the solutions of neighboring pixels.

  5. Self-adaptive multi-objective harmony search for optimal design of water distribution networks

    NASA Astrophysics Data System (ADS)

    Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    2017-11-01

    In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.

  6. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way. PMID:24977175
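
    A compact cuckoo-search loop with Mantegna-style Lévy steps, reflecting the two-parameter character noted above (population size and abandonment probability, besides a step scale); the curve-fitting objective, bounds, and settings are illustrative and this is not the authors' code.

    ```python
    # Cuckoo-search sketch with Mantegna-style Levy steps (illustrative; not the authors' fitting code).
    import math
    import numpy as np

    def levy_step(rng, shape, beta=1.5):
        """Levy-distributed step lengths via Mantegna's algorithm."""
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                 / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, shape)
        v = rng.normal(0.0, 1.0, shape)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(objective, lo, hi, n=15, pa=0.25, iters=500, alpha=0.01, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        nests = rng.uniform(lo, hi, size=(n, lo.size))
        fit = np.array([objective(x) for x in nests])
        best = nests[fit.argmin()].copy()

        for _ in range(iters):
            # Generate new solutions by Levy flights around the current nests.
            new = np.clip(nests + alpha * levy_step(rng, nests.shape) * (nests - best), lo, hi)
            new_fit = np.array([objective(x) for x in new])
            better = new_fit < fit
            nests[better], fit[better] = new[better], new_fit[better]

            # Abandon a fraction pa of nests and rebuild them at random locations.
            abandon = rng.random(n) < pa
            nests[abandon] = rng.uniform(lo, hi, size=(int(abandon.sum()), lo.size))
            fit[abandon] = np.array([objective(x) for x in nests[abandon]])
            best = nests[fit.argmin()].copy()

        return best, fit.min()

    # Toy curve-fitting example: least-squares fit of a cubic to noisy samples.
    xs = np.linspace(-1.0, 1.0, 40)
    ys = 0.5 * xs ** 3 - xs + 0.05 * np.random.default_rng(1).normal(size=xs.size)
    sse = lambda c: np.sum((np.polyval(c, xs) - ys) ** 2)
    coeffs, err = cuckoo_search(sse, lo=[-2.0] * 4, hi=[2.0] * 4)
    ```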

  7. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  8. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  9. Instrumental biosensors: new perspectives for the analysis of biomolecular interactions.

    PubMed

    Nice, E C; Catimel, B

    1999-04-01

    The use of instrumental biosensors in basic research to measure biomolecular interactions in real time is increasing exponentially. Applications include protein-protein, protein-peptide, DNA-protein, DNA-DNA, and lipid-protein interactions. Such techniques have been applied to, for example, antibody-antigen, receptor-ligand, signal transduction, and nuclear receptor studies. This review outlines the principles of two of the most commonly used instruments and highlights specific operating parameters that will assist in optimising experimental design, data generation, and analysis.

  10. An epidemiological model with vaccination strategies

    NASA Astrophysics Data System (ADS)

    Prates, Dérek B.; Silva, Jaqueline M.; Gomes, Jessica L.; Kritz, Maurício V.

    2016-06-01

    Mathematical models describing epidemics are widely found in the literature. Epidemic models that use differential equations to represent such dynamics are especially sensitive to their parameters. This work analyzes a variation of the SIR model applied to an epidemic scenario that includes several aspects, such as constant vaccination, pulse vaccination, seasonality, a cross-immunity factor, and birth and death rates. The analysis and results are obtained through numerical solutions of the model, and special attention is given to the behaviour produced by variation of the parameters.
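
    As a sketch of the kind of SIR variant such a study builds on, the constant-vaccination case with birth/death rate μ and a fraction p of susceptibles vaccinated per unit time can be written as below; the seasonality, pulse-vaccination, and cross-immunity terms mentioned in the abstract are omitted, and the notation is assumed rather than taken from the paper.

    ```latex
    % Constant-vaccination SIR sketch (pulse vaccination, seasonality and cross-immunity omitted)
    \frac{dS}{dt} = \mu N - \beta\,\frac{S I}{N} - (\mu + p)\,S, \qquad
    \frac{dI}{dt} = \beta\,\frac{S I}{N} - (\gamma + \mu)\,I, \qquad
    \frac{dR}{dt} = \gamma I + p S - \mu R
    ```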

  11. Equations for the design of two-dimensional supersonic nozzles

    NASA Technical Reports Server (NTRS)

    Pinkel, I Irving

    1948-01-01

    Equations are presented for obtaining the wall coordinates of two-dimensional supersonic nozzles. The equations are based on the application of the method of characteristics to irrotational flow of perfect gases in channels. Curves and tables are included for obtaining the parameters required by the equations for the wall coordinates. A brief discussion of characteristics as applied to nozzle design is given to assist in understanding and using the nozzle-design method of this report. A sample design is shown.

  12. Commissioning of the Electron-Positron Collider VEPP-2000 after the Upgrade

    NASA Astrophysics Data System (ADS)

    Shatunov, Yu.; Belikov, O.; Berkaev, D.; Gorchakov, K.; Zharinov, Yu.; Zemlyanskii, I.; Kasaev, A.; Kirpotin, A.; Koop, I.; Lysenko, A.; Motygin, S.; Perevedentsev, E.; Prosvetov, V.; Rabusov, D.; Rogovskii, Yu.; Senchenko, A.; Timoshenko, M.; Shatilov, D.; Shatunov, P.; Shvarts, D.

    2018-05-01

    The VEPP-2000 electron-positron collider has been operating at BINP since 2010. Applying the concept of round colliding beams allows us to reach the record value of the beam-beam parameter, ξ ≈ 0.12. The VEPP-2000 upgrade, including the connection to the new BINP Injection Complex, the improvement of the BEP booster, and the BEP-VEPP-2000 transfer channels for operation at 1 GeV, substantially increases the installation luminosity. Data collection is in progress.

  13. Digital holographic microscopy for toxicity testing and cell culture quality control

    NASA Astrophysics Data System (ADS)

    Kemper, Björn

    2018-02-01

    For the example of digital holographic microscopy (DHM), it is illustrated how label-free biophysical parameter sets can be extracted from quantitative phase images of adherent and suspended cells, and how the retrieved data can be applied for in-vitro toxicity testing and cell culture quality assessment. This includes results from the quantification of the reactions of cells to toxic substances as well as data from sophisticated monitoring of cell alterations that are related to changes of cell culture conditions.

  14. Automated surface photometry for the Coma Cluster galaxies: The catalog

    NASA Technical Reports Server (NTRS)

    Doi, M.; Fukugita, M.; Okamura, S.; Tarusawa, K.

    1995-01-01

    A homogeneous photometry catalog is presented for 450 galaxies with B(sub 25.5) less than or equal to 16 mag located in the 9.8 deg x 9.8 deg region centered on the Coma Cluster. The catalog is based on photographic photometry using automated surface photometry software for data reduction applied to B-band Schmidt plates. The catalog provides accurate positions, isophotal and total magnitudes, major and minor axes, and a few other photometric parameters including rudimentary morphology (early or late type).

  15. Tool Efficiency Analysis model research in SEMI industry

    NASA Astrophysics Data System (ADS)

    Lei, Ma; Nana, Zhang; Zhongqiu, Zhang

    2018-06-01

    One of the key goals in the semiconductor (SEMI) industry is to improve equipment throughput and to maximize equipment production efficiency. This paper builds on SEMI standards for semiconductor equipment control, defines the transaction rules between different tool states, and presents a Tool Efficiency Analysis (TEA) system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified; it produced the parameter values used to measure equipment performance, along with suggestions for improvement.
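
    A minimal finite-state-machine sketch of tool-state transaction rules in the spirit described above; the state names follow common SEMI E10-style categories, and the transition table and event log are illustrative, not the paper's actual rule set.

    ```python
    # Tool-state FSM sketch (state names are SEMI E10-style but the rule set is illustrative).
    TRANSITIONS = {
        "STANDBY":              {"PRODUCTIVE", "ENGINEERING", "SCHEDULED_DOWNTIME"},
        "PRODUCTIVE":           {"STANDBY", "UNSCHEDULED_DOWNTIME"},
        "ENGINEERING":          {"STANDBY", "UNSCHEDULED_DOWNTIME"},
        "SCHEDULED_DOWNTIME":   {"STANDBY"},
        "UNSCHEDULED_DOWNTIME": {"SCHEDULED_DOWNTIME", "STANDBY"},
    }

    def accumulate_state_time(events):
        """Walk timestamped state-change events, reject illegal transitions, and
        accumulate the time spent in each state (the raw input to efficiency metrics)."""
        totals = {}
        (t_prev, state_prev), rest = events[0], events[1:]
        for t, state in rest:
            if state not in TRANSITIONS[state_prev]:
                raise ValueError(f"illegal transition {state_prev} -> {state}")
            totals[state_prev] = totals.get(state_prev, 0.0) + (t - t_prev)
            t_prev, state_prev = t, state
        return totals

    # Hypothetical event log: (hour, new state).
    log = [(0, "STANDBY"), (1, "PRODUCTIVE"), (9, "UNSCHEDULED_DOWNTIME"),
           (11, "STANDBY"), (12, "PRODUCTIVE")]
    times = accumulate_state_time(log)
    utilization = times.get("PRODUCTIVE", 0.0) / sum(times.values())
    print(times, utilization)
    ```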

  16. Microeconomics of 300-mm process module control

    NASA Astrophysics Data System (ADS)

    Monahan, Kevin M.; Chatterjee, Arun K.; Falessi, Georges; Levy, Ady; Stoller, Meryl D.

    2001-08-01

    Simple microeconomic models that directly link metrology, yield, and profitability are rare or non-existent. In this work, we validate and apply such a model. Using a small number of input parameters, we explain current yield management practices in 200 mm factories. The model is then used to extrapolate requirements for 300 mm factories, including the impact of simultaneous technology transitions to 130nm lithography and integrated metrology. To support our conclusions, we use examples relevant to factory-wide photo module control.

  17. The Snowmelt-Runoff Model (SRM) user's manual

    NASA Technical Reports Server (NTRS)

    Martinec, J.; Rango, A.; Major, E.

    1983-01-01

    A manual to provide a means by which a user may apply the snowmelt runoff model (SRM) unaided is presented. Model structure, conditions of application, and data requirements, including remote sensing, are described. Guidance is given for determining various model variables and parameters. Possible sources of error are discussed, and conversion of the snowmelt runoff model (SRM) from the simulation mode to the operational forecasting mode is explained. A computer program for running SRM is presented and is easily adaptable to most systems used by water resources agencies.

  18. ITO-based evolutionary algorithm to solve traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Dong, Wenyong; Sheng, Kang; Yang, Chuanhua; Yi, Yunfei

    2014-03-01

    In this paper, an ITO algorithm inspired by the ITO stochastic process is proposed for the Traveling Salesman Problem (TSP). Many meta-heuristic methods have so far been successfully applied to the TSP, but ITO, as one of them, still needs to be demonstrated on this problem. Starting from the design of the key operators, which include the move operator, the wave operator, etc., an ITO-based method for the TSP is presented; the performance of the ITO algorithm under different parameter sets and the maintenance of population diversity are also studied.

  19. Stability and Hopf bifurcation for a delayed SLBRS computer virus model.

    PubMed

    Zhang, Zizhen; Yang, Huizhong

    2014-01-01

    By incorporating into the SLBRS model the time delay due to the period during which computers use antivirus software to clean the virus, a delayed SLBRS computer virus model is proposed in this paper. The dynamical behaviors, which include local stability and Hopf bifurcation, are investigated by regarding the delay as the bifurcation parameter. In particular, the direction and stability of the Hopf bifurcation are derived by applying the normal form method and center manifold theory. Finally, an illustrative example is presented to verify the analytical results.

  20. A Novel Degradation Identification Method for Wind Turbine Pitch System

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Dong

    2018-04-01

    It is difficult for traditional threshold-value methods to identify degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a condition-based maintenance strategy for wind turbines is proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and the monitoring parameters of the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and power generation as input parameters and wheel rotation speed, pitch angle and the motor driving currents of the three blades as output parameters. The difference between the on-line measurements and the values calculated by the MIMO regression model, built with the least-squares support vector machine (LSSVM) method, was defined as the Observed Vector of the system. A Gaussian mixture model (GMM) was then applied to fit the distribution of the multi-dimensional Observed Vectors. Using the established model, the Degradation Index was calculated from the SCADA data of a wind turbine whose pitch bearing retainer and rolling elements had been damaged, which illustrates the feasibility of the proposed method.
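
    A sketch of the residual-plus-GMM idea described above, using scikit-learn's kernel ridge regression as a stand-in for LSSVM (scikit-learn has no LSSVM implementation; the two are closely related kernel methods) and a Gaussian-mixture log-likelihood as the degradation index. The signal names, synthetic data, and model settings are illustrative, not the paper's.

    ```python
    # Residual-plus-GMM degradation-index sketch; kernel ridge regression stands in for LSSVM
    # and all signals, settings and the fault injection are synthetic placeholders.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    def healthy_response(X, noise=0.05):
        """Placeholder 'healthy' pitch-system outputs for inputs (wind speed, power)."""
        wind, power = X[:, 0], X[:, 1]
        out = np.column_stack([0.8 * wind, 2.0 * power,
                               1.5 * power, 1.5 * power, 1.5 * power])
        return out + rng.normal(scale=noise, size=out.shape)

    # Train a MIMO regression model of normal behaviour on healthy-operation data.
    X_train = rng.uniform([3.0, 0.1], [15.0, 2.0], size=(500, 2))
    Y_train = healthy_response(X_train)
    model = MultiOutputRegressor(KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1))
    model.fit(X_train, Y_train)

    # Observed Vector = measured outputs minus model prediction; fit its healthy distribution.
    resid_train = Y_train - model.predict(X_train)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(resid_train)

    # Degradation index: negative log-likelihood of new residuals under the healthy
    # distribution (higher values indicate behaviour further from normal).
    X_new = rng.uniform([3.0, 0.1], [15.0, 2.0], size=(50, 2))
    Y_new = healthy_response(X_new) + np.array([0.0, 0.5, 0.0, 0.0, 0.0])  # drifted channel
    degradation_index = -gmm.score_samples(Y_new - model.predict(X_new)).mean()
    print(degradation_index)
    ```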

  1. Topological data analysis (TDA) applied to reveal pedogenetic principles of European topsoil system.

    PubMed

    Savic, Aleksandar; Toth, Gergely; Duponchel, Ludovic

    2017-05-15

    Recent developments in applied mathematics are bringing new tools that are capable to synthesize knowledge in various disciplines, and help in finding hidden relationships between variables. One such technique is topological data analysis (TDA), a fusion of classical exploration techniques such as principal component analysis (PCA), and a topological point of view applied to clustering of results. Various phenomena have already received new interpretations thanks to TDA, from the proper choice of sport teams to cancer treatments. For the first time, this technique has been applied in soil science, to show the interaction between physical and chemical soil attributes and main soil-forming factors, such as climate and land use. The topsoil data set of the Land Use/Land Cover Area Frame survey (LUCAS) was used as a comprehensive database that consists of approximately 20,000 samples, each described by 12 physical and chemical parameters. After the application of TDA, results obtained were cross-checked against known grouping parameters including five types of land cover, nine types of climate and the organic carbon content of soil. Some of the grouping characteristics observed using standard approaches were confirmed by TDA (e.g., organic carbon content) but novel subtle relationships (e.g., magnitude of anthropogenic effect in soil formation), were discovered as well. The importance of this finding is that TDA is a unique mathematical technique capable of extracting complex relations hidden in soil science data sets, giving the opportunity to see the influence of physicochemical, biotic and abiotic factors on topsoil formation through fresh eyes. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Runoff Observations in the Community Land Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yu; Hou, Zhangshuan; Huang, Maoyi

    2013-12-10

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, the deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) - Bayesian inversion approaches, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the least-square fitting provides little improvements in the model simulations but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to the different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.

  3. Parameter estimation in nonlinear distributed systems - Approximation theory and convergence results

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

    An abstract approximation framework and convergence theory is described for Galerkin approximations applied to inverse problems involving nonlinear distributed parameter systems. Parameter estimation problems are considered and formulated as the minimization of a least-squares-like performance index over a compact admissible parameter set subject to state constraints given by an inhomogeneous nonlinear distributed system. The theory applies to systems whose dynamics can be described by either time-independent or nonstationary strongly maximal monotonic operators defined on a reflexive Banach space which is densely and continuously embedded in a Hilbert space. It is demonstrated that if readily verifiable conditions on the system's dependence on the unknown parameters are satisfied, and the usual Galerkin approximation assumption holds, then solutions to the approximating problems exist and approximate a solution to the original infinite-dimensional identification problem.

  4. Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters

    NASA Astrophysics Data System (ADS)

    Kim, A. G.

    2011-02-01

    I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
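
    One common way to write a likelihood in which the intrinsic dispersion σ_int is fitted simultaneously with the cosmological parameters θ is sketched below; the normalization term is what penalizes arbitrarily large dispersions. This is a generic form for illustration and is not necessarily the exact estimator used in the paper.

    ```latex
    % Generic simultaneous fit of cosmology and intrinsic dispersion (illustrative form)
    -2\ln\mathcal{L}(\theta, \sigma_{\mathrm{int}}) =
    \sum_{i}\left[
    \frac{\bigl(\mu_{i}^{\mathrm{obs}} - \mu(z_{i};\theta)\bigr)^{2}}{\sigma_{i}^{2} + \sigma_{\mathrm{int}}^{2}}
    + \ln\!\bigl(2\pi\,(\sigma_{i}^{2} + \sigma_{\mathrm{int}}^{2})\bigr)\right]
    ```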

  5. Correlation of Electric Field and Critical Design Parameters for Ferroelectric Tunable Microwave Filters

    NASA Technical Reports Server (NTRS)

    Subramanyam, Guru; VanKeuls, Fred W.; Miranda, Felix A.; Canedy, Chadwick L.; Aggarwal, Sanjeev; Venkatesan, Thirumalai; Ramesh, Ramamoorthy

    2000-01-01

    The correlation of electric field and critical design parameters such as the insertion loss, frequency tunability, return loss, and bandwidth of conductor/ferroelectric/dielectric microstrip tunable K-band microwave filters is discussed in this work. This work is based primarily on barium strontium titanate (BSTO) ferroelectric thin film based tunable microstrip filters for room temperature applications. Two new parameters, which we believe will simplify the evaluation of ferroelectric thin films for tunable microwave filters, are defined. The first of these, called the sensitivity parameter, is defined as the incremental change in center frequency with incremental change in maximum applied electric field (EPEAK) in the filter. The other, the loss parameter, is defined as the incremental or decremental change in insertion loss of the filter with incremental change in maximum applied electric field. At room temperature, the Au/BSTO/LAO microstrip filters exhibited a sensitivity parameter value between 15 and 5 MHz/cm/kV. The loss parameter varied for different bias configurations used for electrically tuning the filter. The loss parameter varied from 0.05 to 0.01 dB/cm/kV at room temperature.

  6. A parameters optimization method for planar joint clearance model and its application for dynamics simulation of reciprocating compressor

    NASA Astrophysics Data System (ADS)

    Hai-yang, Zhao; Min-qiang, Xu; Jin-dong, Wang; Yong-bo, Li

    2015-05-01

    In order to improve the accuracy of dynamic response simulation for mechanisms with joint clearance, a parameter optimization method for the planar joint clearance contact force model is presented in this paper, and the optimized parameters are applied to the dynamic response simulation of a mechanism with an oversized-joint-clearance fault. By studying the effect of increased clearance on the parameters of the joint clearance contact force model, the relation between the model parameters at different clearances was derived. The dynamic equation of a two-stage reciprocating compressor with four joint clearances was then developed using the Lagrange method, and a multi-body dynamic model built in ADAMS software was used to solve this equation. To obtain a simulated dynamic response much closer to that of the experimental tests, the parameters of the joint clearance model were optimized by a genetic algorithm approach instead of using the designed values. Finally, the optimized parameters were applied, according to the derived parameter relation, to simulate the dynamic response of the model with the oversized-joint-clearance fault. The dynamic response of the experimental test verified the effectiveness of this application.

  7. Comparing Consider-Covariance Analysis with Sigma-Point Consider Filter and Linear-Theory Consider Filter Formulations

    NASA Technical Reports Server (NTRS)

    Lisano, Michael E.

    2007-01-01

    Recent literature in applied estimation theory reflects growing interest in the sigma-point (also called "unscented") formulation for optimal sequential state estimation, often describing performance comparisons with extended Kalman filters as applied to specific dynamical problems [cf. 1, 2, 3]. Favorable attributes of sigma-point filters are described as including a lower expected error for nonlinear, even non-differentiable, dynamical systems, and a straightforward formulation not requiring derivation or implementation of any partial derivative Jacobian matrices. These attributes are particularly attractive, e.g. in terms of enabling simplified code architecture and streamlined testing, in the formulation of estimators for nonlinear spaceflight mechanics systems, such as filter software onboard deep-space robotic spacecraft. As presented in [4], the Sigma-Point Consider Filter (SPCF) algorithm extends the sigma-point filter algorithm to the problem of consider covariance analysis. Considering parameters in a dynamical system, while estimating its state, provides an upper bound on the estimated state covariance, which is viewed as a conservative approach to designing estimators for problems of general guidance, navigation and control. This is because, whether a parameter in the system model is observable or not, error in the knowledge of the value of a non-estimated parameter will increase the actual uncertainty of the estimated state of the system beyond the level formally indicated by the covariance of an estimator that neglects errors or uncertainty in that parameter. The equations for SPCF covariance evolution are obtained in a fashion similar to the derivation approach taken with standard (i.e. linearized or extended) consider parameterized Kalman filters (cf. [5]). While in [4] the SPCF and linear-theory consider filter (LTCF) were applied to an illustrative linear dynamics/linear measurement problem, the present work examines the SPCF as applied to nonlinear sequential consider covariance analysis, i.e. in the presence of nonlinear dynamics and nonlinear measurements. A simple SPCF for orbit determination, exemplifying an algorithm hosted in the guidance, navigation and control (GN&C) computer processor of a hypothetical robotic spacecraft, was implemented, and compared with an identically-parameterized (standard) extended, consider-parameterized Kalman filter. The onboard filtering scenario examined is a hypothetical spacecraft orbit about a small natural body with imperfectly-known mass. The formulations, relative complexities, and performances of the filters are compared and discussed.

  8. Surveillance and Control of Malaria Transmission in Thailand using Remotely Sensed Meteorological and Environmental Parameters

    NASA Technical Reports Server (NTRS)

    Kiang, Richard K.; Adimi, Farida; Soika, Valerii; Nigro, Joseph

    2007-01-01

    These slides address the use of remote sensing in a public health application. Specifically, this discussion focuses on the use of remote sensing to detect larval habitats, to predict current and future endemicity, and to identify key factors that sustain or promote transmission of malaria in a targeted geographic area (Thailand). In the Malaria Modeling and Surveillance Project, which is part of the NASA Applied Sciences Public Health Applications Program, we have been developing techniques to enhance public health's decision capability for malaria risk assessments and controls. The main objectives are: 1) identification of the potential breeding sites for major vector species; 2) implementation of a risk algorithm to predict the occurrence of malaria and its transmission intensity; 3) implementation of a dynamic transmission model to identify the key factors that sustain or intensify malaria transmission. The potential benefits are: 1) increased warning time for public health organizations to respond to malaria outbreaks; 2) optimized utilization of pesticide and chemoprophylaxis; 3) reduced likelihood of pesticide and drug resistance; and 4) reduced damage to the environment. Environmental parameters important to malaria transmission include temperature, relative humidity, precipitation, and vegetation conditions. The NASA Earth science data sets that have been used for malaria surveillance and risk assessment include AVHRR Pathfinder, TRMM, MODIS, NSIPP, and SIESIP. Textural-contextual classifications are used to identify small larval habitats. Neural network methods are used to model malaria cases as a function of the remotely sensed parameters. Hindcasts based on these environmental parameters have shown good agreement with epidemiological records. Discrete event simulations are used for modeling the detailed interactions among the vector life cycle, sporogonic cycle and human infection cycle, under the explicit influences of selected extrinsic and intrinsic factors. The output of the model includes the individual infection status and the quantities normally observed in field studies, such as mosquito biting rates, sporozoite infection rates, gametocyte prevalence and incidence. Results are in good agreement with mosquito vector and human malaria data acquired by Coleman et al. over 4.5 years in Kong Mong Tha, a remote village in western Thailand. Application of our models is not restricted to the Greater Mekong Subregion. Our models have been applied to malaria in Indonesia, Korea, and other regions in the world with similar success.
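
    As a minimal sketch of the neural-network step mentioned above (case counts modeled as a function of remotely sensed predictors), the snippet below fits a small multilayer perceptron to synthetic data. The predictor names, the synthetic series, and the use of scikit-learn's MLPRegressor are assumptions for illustration, not the project's actual model or data.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Synthetic monthly predictors: temperature, humidity, rainfall, NDVI (placeholders).
        X = rng.normal(size=(120, 4))
        # Synthetic malaria case counts loosely driven by two of the predictors.
        y = 50 + 10 * X[:, 0] + 5 * X[:, 2] + rng.normal(scale=3, size=120)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
        model.fit(X[:96], y[:96])                              # train on the first eight "years"
        print("hindcast R^2:", model.score(X[96:], y[96:]))    # evaluate on the held-out remainder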

  9. Aquifer Hydrogeologic Layer Zonation at the Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savelieva-Trofimova, Elena A.; Kanevski, Mikhail; Timonin, V.

    2003-09-10

    Sedimentary aquifer layers are characterized by spatial variability of hydraulic properties. Nevertheless, zones with similar values of hydraulic parameters (parameter zones) can be distinguished. This parameter zonation approach is an alternative to the analysis of spatial variation of the continuous hydraulic parameters. The parameter zonation approach is primarily motivated by the lack of measurements that would be needed for direct spatial modeling of the hydraulic properties. The current work is devoted to the problem of zonation of the Hanford formation, the uppermost sedimentary aquifer unit (U1) included in hydrogeologic models at the Hanford site. U1 is characterized by 5 zones with different hydraulic properties. Each sampled location is ascribed to a parameter zone by an expert. This initial classification is accompanied by a measure of quality (also indicated by an expert) that addresses the level of classification confidence. In the current study, the conceptual zonation map developed by an expert geologist was used as an a priori model. The parameter zonation problem was formulated as a multiclass classification task. Different geostatistical and machine learning algorithms were adapted and applied to solve this problem, including: indicator kriging, conditional simulations, neural networks of different architectures, and support vector machines. All methods were trained using additional soft information based on expert estimates. Regularization methods were used to overcome possible overfitting. The zonation problem was complicated by the scarcity of samples for some zones (classes) and by the spatial non-stationarity of the data. Special approaches were developed to overcome these complications. The comparison of different methods was performed using qualitative and quantitative statistical methods and image analysis. We examined the correspondence of the results with the geologically based interpretation, including the reproduction of the spatial orientation of the different classes and the spatial correlation structure of the classes. The uncertainty of the classification task was examined using both a probabilistic interpretation of the estimators and the results of a set of stochastic realizations. Characterization of the classification uncertainty is the main advantage of the proposed methods.
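
    The multiclass classification framing described above can be sketched with one of the named method families (a support vector machine) applied to synthetic sampled locations. The coordinates, the placeholder zone labels, and the scikit-learn SVC classifier are illustrative assumptions, not the Hanford data or the study's geostatistical workflow.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)

        # Hypothetical sampled locations (x, y) with expert-assigned zone labels 0-4.
        coords = rng.uniform(0, 10_000, size=(200, 2))
        zones = (coords[:, 0] // 2_000).astype(int)     # placeholder "expert" zonation

        # Support vector machine, one of the classifiers listed in the abstract.
        clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True).fit(coords, zones)

        # Classify a regular grid to produce a zonation map; class probabilities give one
        # (simplified) view of classification uncertainty.
        gx, gy = np.meshgrid(np.linspace(0, 10_000, 50), np.linspace(0, 10_000, 50))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        zone_map = clf.predict(grid).reshape(gx.shape)
        uncertainty = 1.0 - clf.predict_proba(grid).max(axis=1).reshape(gx.shape)
        print(zone_map.shape, float(uncertainty.mean()))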

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
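
    The central result above, that parameter and prediction uncertainty shrink as the constraining data record lengthens, can be illustrated with a toy model-data fusion sketch: a single flux parameter is fit to noisy synthetic daily observations and its uncertainty is reported for records of different lengths. The toy model, noise level, and seasonal driver are assumptions and bear no relation to LoTEC.

        import numpy as np

        rng = np.random.default_rng(2)

        def fit_uncertainty(n_years, obs_per_year=365, noise=2.0):
            """Fit one flux parameter to noisy daily observations by linear least squares
            and return its estimate and approximate 1-sigma uncertainty (illustrative)."""
            t = np.arange(n_years * obs_per_year)
            driver = 1.0 + 0.5 * np.sin(2 * np.pi * t / obs_per_year)   # seasonal driver
            obs = 5.0 * driver + rng.normal(scale=noise, size=t.size)   # "true" parameter is 5.0
            estimate = driver @ obs / (driver @ driver)
            sigma = noise / np.sqrt(driver @ driver)
            return estimate, sigma

        for years in (1, 3, 5, 10, 20):
            estimate, sigma = fit_uncertainty(years)
            print(f"{years:2d}-year record: estimate {estimate:.3f} +/- {sigma:.4f}")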

  11. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.
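
    The within-reach and inter-reach sensitivities discussed above are, in essence, derivatives of the simulated concentration curve with respect to each parameter. A minimal finite-difference sketch for a toy one-reach response follows; the surrogate response function and parameter values are assumptions, not the transient storage model used in the study.

        import numpy as np

        def stream_concentration(t, params):
            """Toy one-reach tracer response: exponential decay controlled by the
            cross-sectional area, storage exchange rate, and decay rate (illustrative)."""
            area, alpha, k = params
            return (1.0 / area) * np.exp(-(alpha + k) * t)

        def normalized_sensitivities(t, params, rel_step=1e-3):
            """Central-difference sensitivities of the simulated curve to each parameter,
            scaled by the parameter value to give dimensionless coefficients."""
            coefficients = []
            for i, p in enumerate(params):
                hi, lo = list(params), list(params)
                hi[i] = p * (1 + rel_step)
                lo[i] = p * (1 - rel_step)
                dc_dp = (stream_concentration(t, hi) - stream_concentration(t, lo)) / (2 * p * rel_step)
                coefficients.append(p * dc_dp)
            return np.array(coefficients)

        t = np.linspace(0.1, 10.0, 100)
        S = normalized_sensitivities(t, (1.5, 0.2, 0.05))
        print("time-averaged |sensitivity| per parameter:", np.abs(S).mean(axis=1))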

  12. Idealized Experiments for Optimizing Model Parameters Using a 4D-Variational Method in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, Chuan; Zhang, Rong-Hua; Wu, Xinrong; Sun, Jichang

    2018-04-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a 4D variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation. The strength of the thermocline effect on SST (referred to simply as "the thermocline effect") is represented by an introduced parameter, αTe. A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial conditions are optimized are compared with experiments in which both the initial conditions and this additional model parameter are optimized. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.
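
    A twin-experiment parameter recovery of the kind described above can be sketched in a few lines: a toy damped-oscillator stand-in for the coupled model generates "observations" with a known thermocline-strength parameter, and the parameter is recovered by minimizing the SST misfit. The toy dynamics, noise level, and the use of a scalar minimizer in place of a full 4D-Var adjoint are all assumptions.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def run_model(alpha_te, n_steps=240, dt=1.0):
            """Toy recharge-oscillator stand-in for the ICM: an SST anomaly forced by a
            thermocline term whose strength is set by alpha_te (illustrative only)."""
            sst, h = 0.5, 0.0
            out = np.empty(n_steps)
            for i in range(n_steps):
                d_sst = -0.1 * sst + alpha_te * h
                d_h = -0.02 * h - 0.05 * sst
                sst, h = sst + dt * d_sst, h + dt * d_h
                out[i] = sst
            return out

        # Twin experiment: a "truth" run with a known parameter plus observation noise.
        obs = run_model(0.08) + np.random.default_rng(0).normal(scale=0.02, size=240)

        # Variational-style estimation: minimize the SST misfit cost over the parameter.
        cost = lambda a: np.sum((run_model(a) - obs) ** 2)
        result = minimize_scalar(cost, bounds=(0.0, 0.2), method="bounded")
        print("recovered alpha_Te:", result.x)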

  13. Using a 4D-Variational Method to Optimize Model Parameters in an Intermediate Coupled Model of ENSO

    NASA Astrophysics Data System (ADS)

    Gao, C.; Zhang, R. H.

    2017-12-01

    Large biases exist in real-time ENSO prediction, which can be attributed to uncertainties in initial conditions and model parameters. Previously, a four-dimensional variational (4D-Var) data assimilation system was developed for an intermediate coupled model (ICM) and used to improve ENSO modeling through optimized initial conditions. In this paper, this system is further applied to optimize model parameters. In the ICM used, one important process for ENSO is related to the anomalous temperature of subsurface water entrained into the mixed layer (Te), which is empirically and explicitly related to sea level (SL) variation, written as Te = αTe × FTe(SL). The introduced parameter, αTe, represents the strength of the thermocline effect on sea surface temperature (SST; referred to as the thermocline effect). A numerical procedure is developed to optimize this model parameter through the 4D-Var assimilation of SST data in a twin experiment context with an idealized setting. Experiments in which only the initial conditions are optimized are compared with experiments in which both the initial conditions and this additional model parameter are optimized. It is shown that ENSO evolution can be more effectively recovered by including the additional optimization of this parameter in ENSO modeling. The demonstrated feasibility of optimizing model parameters and initial conditions together through the 4D-Var method provides a modeling platform for ENSO studies. Further applications of the 4D-Var data assimilation system implemented in the ICM are also discussed.

  14. Inference of reactive transport model parameters using a Bayesian multivariate approach

    NASA Astrophysics Data System (ADS)

    Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

    2014-08-01

    Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
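
    The distinction above between fixed weights (WLS), estimated weights (WLS(we)), and a fuller multivariate treatment can be sketched with a toy two-species fit in which the per-species variances are concentrated out of a Gaussian likelihood. The forward model, data, and grid search below are assumptions for illustration and ignore the residual correlations that the multivariate approach in the abstract is designed to capture.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy "reactive transport" forward model: two species' concentrations along a
        # column as a function of a single rate parameter k (purely illustrative).
        x = np.linspace(0.0, 10.0, 40)

        def forward(k):
            return np.vstack([np.exp(-k * x), 1.0 - np.exp(-k * x)])

        obs = forward(0.3) + rng.normal(scale=[[0.01], [0.05]], size=(2, 40))

        def neg_log_likelihood(k):
            """Gaussian likelihood with each species' variance replaced by its conditional
            maximum-likelihood value, i.e. weights estimated from the data (WLS(we)-like)."""
            resid = obs - forward(k)
            sigma2 = (resid ** 2).mean(axis=1)
            return 0.5 * obs.shape[1] * np.log(sigma2).sum()

        ks = np.linspace(0.05, 0.6, 200)
        best_k = ks[np.argmin([neg_log_likelihood(k) for k in ks])]
        print("estimated rate constant:", round(float(best_k), 3))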

  15. Estimating Unsaturated Zone N Fluxes and Travel Times to Groundwater at Watershed Scales

    NASA Astrophysics Data System (ADS)

    Liao, L.; Green, C. T.; Harter, T.; Nolan, B. T.; Juckem, P. F.; Shope, C. L.

    2016-12-01

    Nitrate concentrations in groundwater vary across spatial and temporal scales. Local variability depends on soil properties, unsaturated zone properties, hydrology, reactivity, and other factors. For example, the travel time in the unsaturated zone can cause contaminant responses in aquifers to lag behind changes in N inputs at the land surface, and variable leaching fractions of applied N fertilizer to groundwater can elevate (or reduce) concentrations in groundwater. In this study, we apply the vertical flux model (VFM) (Liao et al., 2012) to address the importance of the travel time of N in the unsaturated zone and of its fraction leached from the unsaturated zone to groundwater. The Fox-Wolf-Peshtigo basins, including 34 out of 72 counties in Wisconsin, were selected as the study area. Simulated concentrations of NO3-, N2 from denitrification, O2, and environmental tracers of groundwater age were matched to observations by adjusting parameters for recharge rate, unsaturated zone travel time, fractions of N inputs leached to groundwater, O2 reduction rate, O2 threshold for denitrification, denitrification rate, and dispersivity. Correlations between calibrated parameters and GIS parameters (land use, drainage class, soil properties, etc.) were evaluated. Model results revealed a median recharge rate of 0.11 m/yr, which is comparable with results from three independent estimates of recharge rates in the study area. The unsaturated travel times ranged from 0.2 yr to 25 yr with a median of 6.8 yr. The correlation analysis revealed that relationships between VFM parameters and landscape characteristics (GIS parameters) were consistent with expectations. The fraction of N leached was lower in the vicinity of wetlands and greater in the vicinity of crop lands. Faster unsaturated zone transport in forested areas was consistent with results of studies showing rapid vertical transport in forested soils. Reaction rate coefficients correlated with chemical indicators such as Fe and P concentrations. Overall, the results demonstrate the applicability of the VFM at a regional scale, as well as the potential to generate N transport estimates continuously across regions based on statistical relationships between VFM model parameters and GIS parameters.

  16. Matching Pion-Nucleon Roy-Steiner Equations to Chiral Perturbation Theory.

    PubMed

    Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meissner, Ulf-G

    2015-11-06

    We match the results for the subthreshold parameters of pion-nucleon scattering obtained from a solution of Roy-Steiner equations to chiral perturbation theory up to next-to-next-to-next-to-leading order, to extract the pertinent low-energy constants including a comprehensive analysis of systematic uncertainties and correlations. We study the convergence of the chiral series by investigating the chiral expansion of threshold parameters up to the same order and discuss the role of the Δ(1232) resonance in this context. Results for the low-energy constants are also presented in the counting scheme usually applied in chiral nuclear effective field theory, where they serve as crucial input to determine the long-range part of the nucleon-nucleon potential as well as three-nucleon forces.

  17. Ageing airplane repair assessment program for Airbus A300

    NASA Technical Reports Server (NTRS)

    Gaillardon, J. M.; Schmidt, Hans-J.; Brandecker, B.

    1992-01-01

    This paper describes the current status of the repair categorization activities and details the methodologies developed for determining the inspection program for the skin of pressurized fuselages. For inspection threshold determination, two methods based on a fatigue life approach are defined: a simplified method and a detailed method. The detailed method considers 15 different parameters to assess the influences of material, geometry, size, location, aircraft usage, and workmanship on the fatigue life of the repair and the original structure. For the definition of inspection intervals, a general method applicable to all concerned repairs is developed. It uses the initial flaw concept, considering 6 parameters and the detectable flaw sizes associated with the proposed nondestructive inspection methods. An alternative method is provided for small repairs, allowing visual inspection at shorter intervals.

  18. Plant growth modeling at the JSC variable pressure growth chamber - An application of experimental design

    NASA Technical Reports Server (NTRS)

    Miller, Adam M.; Edeen, Marybeth; Sirko, Robert J.

    1992-01-01

    This paper describes the approach and results of an effort to characterize plant growth under various environmental conditions at the Johnson Space Center variable pressure growth chamber. Using a field of applied mathematics and statistics known as design of experiments (DOE), we developed a test plan for varying environmental parameters during a lettuce growth experiment. The test plan was developed using a Box-Behnken approach to DOE. As a result of the experimental runs, we have developed empirical models of both the transpiration process and carbon dioxide assimilation for Waldman's Green lettuce over specified ranges of environmental parameters including carbon dioxide concentration, light intensity, dew-point temperature, and air velocity. This model also predicts transpiration and carbon dioxide assimilation for different ages of the plant canopy.
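
    A minimal sketch of a Box-Behnken design and the response-surface fit it supports follows. The factor names, the synthetic "measured" responses, and the quadratic model are illustrative assumptions, not the chamber data or the empirical models reported in the abstract.

        import itertools
        import numpy as np

        def box_behnken(n_factors, n_center=3):
            """Coded (-1, 0, +1) Box-Behnken design: every pair of factors runs through all
            four +/-1 combinations while the remaining factors sit at their midpoints."""
            runs = []
            for i, j in itertools.combinations(range(n_factors), 2):
                for a, b in itertools.product((-1, 1), repeat=2):
                    row = [0] * n_factors
                    row[i], row[j] = a, b
                    runs.append(row)
            runs += [[0] * n_factors] * n_center
            return np.array(runs, dtype=float)

        # Coded design for CO2, light intensity, dew point, and air velocity (4 factors, 27 runs).
        X = box_behnken(4)

        # Hypothetical transpiration responses for each run (placeholder data).
        rng = np.random.default_rng(4)
        y = 2.0 + 0.5 * X[:, 0] + 0.8 * X[:, 1] - 0.3 * X[:, 0] * X[:, 1] + rng.normal(scale=0.05, size=len(X))

        # Fit a second-order (quadratic) response-surface model by least squares.
        terms = ([np.ones(len(X))] + [X[:, k] for k in range(4)]
                 + [X[:, a] * X[:, b] for a, b in itertools.combinations(range(4), 2)]
                 + [X[:, k] ** 2 for k in range(4)])
        coef, *_ = np.linalg.lstsq(np.column_stack(terms), y, rcond=None)
        print("fitted response-surface coefficients:", np.round(coef, 3))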

  19. Impact of the Parameter Identification of Plastic Potentials on the Finite Element Simulation of Sheet Metal Forming

    NASA Astrophysics Data System (ADS)

    Rabahallah, M.; Bouvier, S.; Balan, T.; Bacroix, B.; Teodosiu, C.

    2007-04-01

    In this work, an implicit, backward Euler time integration scheme is developed for an anisotropic, elastic-plastic model based on strain-rate potentials. The constitutive algorithm includes a sub-stepping procedure to deal with the strong nonlinearity of the plastic potentials when applied to FCC materials. The algorithm is implemented in the static implicit version of the Abaqus finite element code. Several recent plastic potentials have been implemented in this framework. The most accurate potentials require the identification of about twenty material parameters. Both mechanical tests and micromechanical simulations have been used for their identification, for a number of BCC and FCC materials. The impact of the identification procedure on the prediction of ears in cup drawing is investigated.

  20. Matching Pion-Nucleon Roy-Steiner Equations to Chiral Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G.

    2015-11-01

    We match the results for the subthreshold parameters of pion-nucleon scattering obtained from a solution of Roy-Steiner equations to chiral perturbation theory up to next-to-next-to-next-to-leading order, to extract the pertinent low-energy constants including a comprehensive analysis of systematic uncertainties and correlations. We study the convergence of the chiral series by investigating the chiral expansion of threshold parameters up to the same order and discuss the role of the Δ(1232) resonance in this context. Results for the low-energy constants are also presented in the counting scheme usually applied in chiral nuclear effective field theory, where they serve as crucial input to determine the long-range part of the nucleon-nucleon potential as well as three-nucleon forces.

  1. NOSS/ALDCS analysis and system requirements definition. [national oceanic satellite system data collection

    NASA Technical Reports Server (NTRS)

    Reed, D. L.; Wallace, R. G.

    1981-01-01

    The results of system analyses and implementation studies of an advanced location and data collection system (ALDCS), proposed for inclusion on the National Oceanic Satellite System (NOSS) spacecraft, are reported. The system applies Doppler processing and radiofrequency interferometer position location techniques both alone and in combination. Aspects analyzed include: the constraints imposed by random access to the system by platforms, the RF link parameters, geometric concepts of position and velocity estimation by the two techniques considered, and the effects of electrical measurement errors, spacecraft attitude errors, and geometric parameters on estimation accuracy. Hardware techniques and trade-offs for interferometric phase measurement, ambiguity resolution and calibration are considered. A combined Doppler-interferometer ALDCS intended to fulfill the NOSS data validation and oceanic research support mission is also described.

  2. Dynamic analysis of Free-Piston Stirling Engine/Linear Alternator-load system-experimentally validated

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Rauch, Jeffrey S.; Santiago, Walter

    1992-01-01

    This paper discusses the effects of variations in system parameters on the dynamic behavior of the Free-Piston Stirling Engine/Linear Alternator (FPSE/LA)-load system. The mathematical formulations incorporate both the mechanical and thermodynamic properties of the FPSE, as well as the electrical equations of the connected load. A state-space technique in the frequency domain is applied to the resulting system of equations to facilitate the evaluation of parametric impacts on the system dynamic stability. Also included is a discussion on the system transient stability as affected by sudden changes in some key operating conditions. Some representative results are correlated with experimental data to verify the model and analytic formulation accuracies. Guidelines are given for ranges of the system parameters which will ensure an overall stable operation.
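
    The eigenvalue-based view of parametric stability described above can be sketched with a toy linearized state matrix whose entries depend on one operating parameter. The matrix values and the load-resistance sweep below are illustrative assumptions, not the FPSE/LA model developed in the paper.

        import numpy as np

        def system_matrix(load_resistance):
            """Toy linearized engine/alternator-load state matrix; the damping and load
            terms depend on the electrical load resistance (values are illustrative)."""
            damping = 5.0 + 20.0 / load_resistance   # alternator back-EMF damping
            return np.array([
                [0.0,      1.0,      0.0],
                [-400.0, -damping,  30.0],
                [0.0,     -2.0,     -1.0 / load_resistance],
            ])

        # Sweep the parameter and judge stability from the state-matrix eigenvalues.
        for r_load in (0.5, 1.0, 2.0, 5.0, 10.0):
            eigvals = np.linalg.eigvals(system_matrix(r_load))
            print(f"R_load = {r_load:4.1f}: max Re(lambda) = {eigvals.real.max():8.3f}, "
                  f"stable = {bool(np.all(eigvals.real < 0))}")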

  3. Dynamic analysis of free-piston Stirling engine/linear alternator-load system - Experimentally validated

    NASA Technical Reports Server (NTRS)

    Kankam, M. D.; Rauch, Jeffrey S.; Santiago, Walter

    1992-01-01

    This paper discusses the effects of variations in system parameters on the dynamic behavior of a Free-Piston Stirling Engine/Linear Alternator (FPSE/LA)-load system. The mathematical formulations incorporate both the mechanical and thermodynamic properties of the FPSE, as well as the electrical equations of the connected load. A state-space technique in the frequency domain is applied to the resulting system of equations to facilitate the evaluation of parametric impacts on the system dynamic stability. Also included is a discussion on the system transient stability as affected by sudden changes in some key operating conditions. Some representative results are correlated with experimental data to verify the model and analytic formulation accuracies. Guidelines are given for ranges of the system parameters which will ensure an overall stable operation.

  4. Variability Search in GALFACTS

    NASA Astrophysics Data System (ADS)

    Kania, Joseph; Wenger, Trey; Ghosh, Tapasi; Salter, Christopher J.

    2015-01-01

    The Galactic ALFA Continuum Transit Survey (GALFACTS) is an all-Arecibo-sky survey using the seven-beam Arecibo L-band Feed Array (ALFA). The survey is centered at 1.375 GHz with 300-MHz bandwidth, and measures all four Stokes parameters. We are looking for compact sources that vary in intensity or polarization on timescales of about a month via intra-survey comparisons, and for long-term variations through comparisons with the NRAO VLA Sky Survey. Data processing includes locating and rejecting radio frequency interference, recognizing sources, two-dimensional Gaussian fitting to multiple cuts through the same source, and gain corrections. Our Python code is being used on the calibration sources observed in conjunction with the survey measurements to determine the calibration parameters that will then be applied to data for the main field.

  5. Constraints on the pre-impact orbits of Solar system giant impactors

    NASA Astrophysics Data System (ADS)

    Jackson, Alan P.; Gabriel, Travis S. J.; Asphaug, Erik I.

    2018-03-01

    We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar system. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar system, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.

  6. The Landau-de Gennes approach revisited: A minimal self-consistent microscopic theory for spatially inhomogeneous nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Gârlea, Ioana C.; Mulder, Bela M.

    2017-12-01

    We design a novel microscopic mean-field theory of inhomogeneous nematic liquid crystals formulated entirely in terms of the tensor order parameter field. It combines the virtues of the Landau-de Gennes approach in allowing both the direction and magnitude of the local order to vary, with a self-consistent treatment of the local free-energy valid beyond the small order parameter limit. As a proof of principle, we apply this theory to the well-studied problem of a colloid dispersed in a nematic liquid crystal by including a tunable wall coupling term. For the two-dimensional case, we investigate the organization of the liquid crystal and the position of the point defects as a function of the strength of the coupling constant.

  7. Equivalent Circuit Parameter Calculation of Interior Permanent Magnet Motor Involving Iron Loss Resistance Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Yamazaki, Katsumi

    In this paper, we propose a method to calculate the equivalent circuit parameters of interior permanent magnet motors, including the iron loss resistance, using the finite element method. First, a finite element analysis considering harmonics and magnetic saturation is carried out to obtain the time variations of the magnetic fields in the stator and the rotor core. Second, the iron losses of the stator and the rotor are calculated from the results of the finite element analysis, taking into account the harmonic eddy current losses and the minor hysteresis losses of the core. As a result, we obtain the equivalent circuit parameters, i.e., the d-q axis inductances and the iron loss resistance, as functions of the operating condition of the motor. The proposed method is applied to an interior permanent magnet motor, and its characteristics are calculated from the resulting equivalent circuit. The calculated results are compared with experimental results to verify the accuracy.

  8. Can arsenic occurrence rate in bedrock aquifers be predicted?

    USGS Publications Warehouse

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 μg L–1 of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 μg L–1) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology.
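
    A minimal sketch of the logistic-regression idea above, predicting the probability that a well exceeds 10 ug/L arsenic from geochemical and geological predictors, follows. The synthetic well data, coefficient values, and the scikit-learn LogisticRegression estimator are assumptions for illustration, not the Maine data set or the published models.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        n = 300

        # Hypothetical well predictors: pH, dissolved oxygen (mg/L), nitrate (mg/L),
        # and a flag for the mapped bedrock unit (placeholders for the study's predictors).
        X = np.column_stack([
            rng.normal(7.5, 0.8, n),
            rng.exponential(2.0, n),
            rng.exponential(1.0, n),
            rng.integers(0, 2, n),
        ])
        # Synthetic exceedance labels (As > 10 ug/L) skewed toward high-pH, low-DO wells.
        logit = -12 + 1.6 * X[:, 0] - 0.6 * X[:, 1] - 0.4 * X[:, 2] + 0.8 * X[:, 3]
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression(max_iter=1000).fit(X, y)
        # Predicted exceedance probability for a new well: pH 8.2, low DO, low nitrate, unit 1.
        print("predicted exceedance probability:", model.predict_proba([[8.2, 0.5, 0.2, 1]])[0, 1])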

  9. Manufacturing a Porous Structure According to the Process Parameters of Functional 3D Porous Polymer Printing Technology Based on a Chemical Blowing Agent

    NASA Astrophysics Data System (ADS)

    Yoo, C. J.; Shin, B. S.; Kang, B. S.; Yun, D. H.; You, D. B.; Hong, S. M.

    2017-09-01

    In this paper, we propose a new porous polymer printing technology based on a chemical blowing agent (CBA) and describe the optimization process with respect to the process parameters. By mixing polypropylene (PP) and CBA, a hybrid CBA filament was manufactured; the diameter of the filament ranged between 1.60 mm and 1.75 mm. A porous polymer structure was manufactured based on the traditional fused deposition modelling (FDM) method. The process parameters of the three-dimensional (3D) porous polymer printing (PPP) process included nozzle temperature, printing speed, and CBA density. Porosity increases with increasing nozzle temperature and CBA density; conversely, porosity increases as the printing speed decreases. The resulting porous structures have excellent mechanical properties. We manufactured a simple 3D shape using the 3D PPP technology. In the future, we will further study the mechanical properties achievable with 3D PPP technology and apply them to various safety fields.

  10. Quantum corrections for the phase diagram of systems with competing order.

    PubMed

    Silva, N L; Continentino, Mucio A; Barci, Daniel G

    2018-06-06

    We use the effective potential method of quantum field theory to obtain the quantum corrections to the zero temperature phase diagram of systems with competing order parameters. We are particularly interested in two different scenarios: regions of the phase diagram where there is a bicritical point, at which both phases vanish continuously, and the case where both phases coexist homogeneously. We consider different types of couplings between the order parameters, including a bilinear one. This kind of coupling breaks time-reversal symmetry and it is only allowed if both order parameters transform according to the same irreducible representation. This occurs in many physical systems of actual interest like competing spin density waves, different types of orbital antiferromagnetism, elastic instabilities of crystal lattices, vortices in a multigap SC and also applies to describe the unusual magnetism of the heavy fermion compound URu2Si2. Our results show that quantum corrections have an important effect on the phase diagram of systems with competing orders.

  11. Resonant frequency calculations using a hybrid perturbation-Galerkin technique

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1991-01-01

    A two-step hybrid perturbation-Galerkin technique is applied to the problem of determining the resonant frequencies of nonlinear systems with one or several degrees of freedom involving a parameter. In the first step, the Lindstedt-Poincare method is used to determine perturbation solutions that are formally valid about one or more special values of the parameter (e.g., for large or small values of the parameter). In the second step, a subset of the perturbation coordinate functions determined in step one is used in a Galerkin-type approximation. The technique is illustrated for several one-degree-of-freedom systems, including the Duffing and van der Pol oscillators, as well as for the compound pendulum. For all of the examples considered, it is shown that the frequencies obtained by the hybrid technique using only a few terms from the perturbation solutions are significantly more accurate than the perturbation results on which they are based, and they compare very well with frequencies obtained by purely numerical methods.

  12. Technical Note: Artificial coral reef mesocosms for ocean acidification investigations

    NASA Astrophysics Data System (ADS)

    Leblud, J.; Moulin, L.; Batigny, A.; Dubois, P.; Grosjean, P.

    2014-11-01

    The design and evaluation of replicated artificial mesocosms are presented in the context of a thirteen-month experiment on the effects of ocean acidification on tropical coral reefs. They are defined here as (semi-)closed (i.e. with or without water exchange with the reef) mesocosms in the laboratory with a more realistic physico-chemical environment than microcosms. Important physico-chemical parameters (i.e. pH, pO2, pCO2, total alkalinity, temperature, salinity, total alkaline earth metals and nutrient availability) were successfully monitored and controlled. Daily variations of irradiance and pH were applied to approach field conditions. Results highlighted that it was possible to maintain realistic physico-chemical parameters, including daily changes, in artificial mesocosms. On the other hand, the two identical artificial mesocosms evolved differently in terms of global community oxygen budgets although the initial biological communities and physico-chemical parameters were comparable. Artificial reef mesocosms seem to leave enough degrees of freedom to the enclosed community of living organisms to organize and change along possibly diverging pathways.

  13. Quantum corrections for the phase diagram of systems with competing order

    NASA Astrophysics Data System (ADS)

    Silva, N. L., Jr.; Continentino, Mucio A.; Barci, Daniel G.

    2018-06-01

    We use the effective potential method of quantum field theory to obtain the quantum corrections to the zero temperature phase diagram of systems with competing order parameters. We are particularly interested in two different scenarios: regions of the phase diagram where there is a bicritical point, at which both phases vanish continuously, and the case where both phases coexist homogeneously. We consider different types of couplings between the order parameters, including a bilinear one. This kind of coupling breaks time-reversal symmetry and it is only allowed if both order parameters transform according to the same irreducible representation. This occurs in many physical systems of actual interest like competing spin density waves, different types of orbital antiferromagnetism, elastic instabilities of crystal lattices, vortices in a multigap SC and also applies to describe the unusual magnetism of the heavy fermion compound URu2Si2. Our results show that quantum corrections have an important effect on the phase diagram of systems with competing orders.

  14. MRS proof-of-concept on atmospheric corrections. Atmospheric corrections using an orbital pointable imaging system

    NASA Technical Reports Server (NTRS)

    Slater, P. N. (Principal Investigator)

    1980-01-01

    The feasibility of using a pointable imager to determine atmospheric parameters was studied. In particular the determination of the atmospheric extinction coefficient and the path radiance, the two quantities that have to be known in order to correct spectral signatures for atmospheric effects, was simulated. The study included the consideration of the geometry of ground irradiance and observation conditions for a pointable imager in a LANDSAT orbit as a function of time of year. A simulation study was conducted on the sensitivity of scene classification accuracy to changes in atmospheric condition. A two wavelength and a nonlinear regression method for determining the required atmospheric parameters were investigated. The results indicate the feasibility of using a pointable imaging system (1) for the determination of the atmospheric parameters required to improve classification accuracies in urban-rural transition zones and to apply in studies of bi-directional reflectance distribution function data and polarization effects; and (2) for the determination of the spectral reflectances of ground features.

  15. Biodiesel production from Spirulina microalgae feedstock using direct transesterification near supercritical methanol condition.

    PubMed

    Mohamadzadeh Shirazi, Hamed; Karimi-Sabet, Javad; Ghotbi, Cyrus

    2017-09-01

    Microalgae, a candidate feedstock for biodiesel production, possess a hard cell wall that prevents intracellular lipids from leaving the cells. Direct or in situ supercritical transesterification has the potential to break down this hard cell wall and convert the extracted lipids to biodiesel, consequently reducing the total energy consumption. Response surface methodology combined with a central composite design was applied to investigate the process parameters, including temperature, time, methanol-to-dry-algae ratio, hexane-to-dry-algae ratio, and moisture content. Thirty-two experiments were designed and performed in a batch reactor, and biodiesel efficiencies between 0.44% and 99.32% were obtained. Based on the fatty acid methyl ester yields, a quadratic experimental model was fitted and the significance of the parameters was evaluated using analysis of variance (ANOVA). The effects of single parameters and their interactions were also interpreted. In addition, the effect of the supercritical process on the ultrastructure of the microalgae cell wall was examined using scanning electron microscopy (SEM). Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. An experimental and modeling study of isothermal charge/discharge behavior of commercial Ni-MH cells

    NASA Astrophysics Data System (ADS)

    Pan, Y. H.; Srinivasan, V.; Wang, C. Y.

    In this study, a previously developed nickel-metal hydride (Ni-MH) battery model is applied in conjunction with experimental characterization. Important geometric parameters, including the active surface area and micro-diffusion length for both electrodes, are measured and incorporated in the model. The kinetic parameters of the oxygen evolution reaction are also characterized using constant potential experiments. Two separate equilibrium equations for the Ni electrode, one for charge and the other for discharge, are determined to provide a better description of the electrode hysteresis effect, and their use results in better agreement of simulation results with experimental data on both charge and discharge. The Ni electrode kinetic parameters are re-calibrated for the battery studied. The Ni-MH cell model coupled with the updated electrochemical properties is then used to simulate a wide range of experimental discharge and charge curves with satisfactory agreement. The experimentally validated model is used to predict and compare various charge algorithms so as to provide guidelines for application-specific optimization.

  17. Why are para-hydrogen clusters superfluid? A quantum theorem of corresponding states study.

    PubMed

    Sevryuk, Mikhail B; Toennies, J Peter; Ceperley, David M

    2010-08-14

    The quantum theorem of corresponding states is applied to N=13 and N=26 cold quantum fluid clusters to establish where para-hydrogen clusters lie in relation to more and less quantum delocalized systems. Path integral Monte Carlo calculations of the energies, densities, radial and pair distributions, and superfluid fractions are reported at T=0.5 K for a Lennard-Jones (LJ) (12,6) potential using six different de Boer parameters including the accepted value for hydrogen. The results indicate that the hydrogen clusters are on the borderline to being a nonsuperfluid solid but that the molecules are sufficiently delocalized to be superfluid. A general phase diagram for the total and kinetic energies of LJ (12,6) clusters encompassing all sizes from N=2 to N=infinity and for the entire range of de Boer parameters is presented. Finally the limiting de Boer parameters for quantum delocalization induced unbinding ("quantum unbinding") are estimated and the new results are found to agree with previous calculations for the bulk and smaller clusters.

  18. Can arsenic occurrence rates in bedrock aquifers be predicted?

    PubMed Central

    Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan

    2012-01-01

    A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 µg L−1 of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 µg L−1) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology. PMID:22260208

  19. Satellite-derived ice data sets no. 2: Arctic monthly average microwave brightness temperatures and sea ice concentrations, 1973-1976

    NASA Technical Reports Server (NTRS)

    Parkinson, C. L.; Comiso, J. C.; Zwally, H. J.

    1987-01-01

    A summary data set for four years (mid 70's) of Arctic sea ice conditions is available on magnetic tape. The data include monthly and yearly averaged Nimbus 5 electrically scanning microwave radiometer (ESMR) brightness temperatures, an ice concentration parameter derived from the brightness temperatures, monthly climatological surface air temperatures, and monthly climatological sea level pressures. All data matrices are applied to 293 by 293 grids that cover a polar stereographic map enclosing the 50 deg N latitude circle. The grid size varies from about 32 X 32 km at the poles to about 28 X 28 km at 50 deg N. The ice concentration parameter is calculated assuming that the field of view contains only open water and first-year ice with an ice emissivity of 0.92. To account for the presence of multiyear ice, a nomogram is provided relating the ice concentration parameter, the total ice concentration, and the fraction of the ice cover which is multiyear ice.
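
    The ice concentration parameter described above amounts to a linear mixing calculation between an open-water brightness temperature and a first-year-ice brightness temperature derived from the assumed emissivity of 0.92. The sketch below illustrates that calculation; the open-water brightness temperature, the surface temperature, and the example values are assumptions, not the data set's actual processing constants.

        import numpy as np

        def ice_concentration(tb, t_surface, tb_open_water=135.0, emissivity=0.92):
            """First-year-ice concentration from a single-channel brightness temperature,
            assuming the field of view holds only open water and first-year ice."""
            tb_ice = emissivity * t_surface          # brightness temperature of 100% first-year ice
            conc = (tb - tb_open_water) / (tb_ice - tb_open_water)
            return np.clip(conc, 0.0, 1.0)

        # Example: observed brightness temperatures (K) with a climatological surface temperature.
        tb_obs = np.array([140.0, 180.0, 230.0, 245.0])
        print(ice_concentration(tb_obs, t_surface=260.0))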

  20. Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2017-04-01

    Hydrological models are typically used to study and represent (part of) the hydrological cycle. In general, the output of these models depends mostly on the input rainfall and the parameter values. Both model parameters and input precipitation, however, are characterized by uncertainties and therefore lead to uncertainty in the model output. Sensitivity analysis (SA) allows the importance of the different factors for this output uncertainty to be assessed and compared. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of factors included in the SA to represent the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty affects the model output most, because of the different weight the two types carry within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different numbers of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios with different numbers of rainfall multipliers. To overcome the issue of the different numbers of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter, as sketched below. This results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear influence of the weights in the different SA scenarios. However, working with grouped factors resolves this issue and leads to clear importance results.
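
    A minimal sketch of grouped Sobol' indices for a toy rainfall-runoff problem follows. It assumes the SALib package (not necessarily what the authors used); the toy model, parameter names, bounds, and the ordering of the reported groups are all illustrative assumptions.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Toy setup: two "model parameters" and three storm-event rainfall multipliers,
        # grouped so each uncertainty type is treated as a single Sobol' factor.
        problem = {
            "num_vars": 5,
            "names": ["k_storage", "k_routing", "mult_storm1", "mult_storm2", "mult_storm3"],
            "bounds": [[0.1, 1.0], [0.5, 5.0], [0.7, 1.3], [0.7, 1.3], [0.7, 1.3]],
            "groups": ["parameters", "parameters", "rainfall", "rainfall", "rainfall"],
        }

        X = saltelli.sample(problem, 1024)

        def toy_discharge(row):
            k1, k2, m1, m2, m3 = row
            return k1 * (10 * m1 + 20 * m2) + 5 * k2 * m3    # placeholder model output

        Y = np.apply_along_axis(toy_discharge, 1, X)
        Si = sobol.analyze(problem, Y, print_to_console=False)
        print("total-order indices per group:", Si["ST"])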
