Comparison of two methods for calculating the P sorption capacity parameter in soils
USDA-ARS's Scientific Manuscript database
Phosphorus (P) cycling in soils is an important process affecting P movement through the landscape. The P cycling routines in many computer models are based on the relationships developed for the EPIC model. An important parameter required for this model is the P sorption capacity parameter (PSP). I...
[Design and experiment of micro biochemical detector based on micro spectrometer].
Yu, Qing-hua; Wen, Zhi-yu; Chen, Gang; Dai, Wei-wei; Liu, Nian-ci; Wu, Xin
2012-03-01
According to the requirements for rapid detection of important life parameters in sick and wounded patients, a new micro biochemical detection configuration was proposed, utilizing continuous-spectrum analysis and founded on MOEMS and embedded technology. The configuration was developed after substantial research on the detection targets and methods. Important instrument parameters such as stray light, absorbance linearity, absorbance repeatability, stability and temperature accuracy were tested, and all were in good agreement with the design requirements. Clinical tests show that the instrument can quickly detect multiple life parameters (e.g., Na+, GLU, Hb).
STUDY TO IDENTIFY IMPORTANT PARAMETERS FOR CHARACTERIZING PESTICIDE RESIDUE TRANSFER EFFICIENCIES
To reduce the uncertainty associated with current estimates of children's exposure to pesticides by dermal contact and non-dietary ingestion, residue transfer data are required. Prior to conducting exhaustive studies, a screening study to identify the important parameters for...
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
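The DELSA first-order measure lends itself to a compact numerical sketch: at each sampled parameter set, local derivatives are combined with prior parameter variances and normalized by the local total variance. The toy model, the uniform sampling (a crude stand-in for Latin hypercube sampling), and the prior variances below are illustrative assumptions, not the paper's hydrologic test cases.

```python
import numpy as np

def model(theta):
    # toy stand-in for a hydrologic model output (e.g., peak runoff)
    k, s = theta
    return s * (1.0 - np.exp(-k)) + 0.1 * k * s**2

def delsa_first_order(model, samples, prior_var, h=1e-4):
    """DELSA first-order index S_L1 for each parameter at each sample point."""
    n, p = samples.shape
    S = np.empty((n, p))
    for i, theta in enumerate(samples):
        grad = np.empty(p)
        for j in range(p):
            tp = theta.copy(); tp[j] += h
            tm = theta.copy(); tm[j] -= h
            grad[j] = (model(tp) - model(tm)) / (2 * h)  # central difference
        contrib = grad**2 * prior_var       # per-parameter variance share
        S[i] = contrib / contrib.sum()      # normalize by local total variance
    return S

rng = np.random.default_rng(0)
samples = rng.uniform([0.1, 0.1], [2.0, 2.0], size=(500, 2))
S = delsa_first_order(model, samples, prior_var=np.array([0.25, 0.25]))
print("median sensitivity per parameter:", np.median(S, axis=0))
```

The distribution of S across the 500 points is what distinguishes DELSA from a single global index: a parameter can be dominant in one region of parameter space and irrelevant in another, as in the time-delay example above.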
Technical parameters for specifying imagery requirements
NASA Technical Reports Server (NTRS)
Coan, Paul P.; Dunnette, Sheri J.
1994-01-01
Providing visual information acquired from remote events to various operators, researchers, and practitioners has become progressively more important as the application of special skills in alien or hazardous situations increases. To provide an understanding of the technical parameters required to specify imagery, we have identified, defined, and discussed seven salient characteristics of images: spatial resolution, linearity, luminance resolution, spectral discrimination, temporal discrimination, edge definition, and signal-to-noise ratio. We then describe a generalized imaging system and identify how various parts of the system affect the image data. To emphasize the different applications of imagery, we have contrasted the common television system with the significant parameters of a televisual imaging system for technical applications. Finally, we have established a method by which the required visual information can be specified by describing certain technical parameters that are directly related to the information content of the imagery. This method requires the user to complete a form listing all pertinent data requirements for the imagery.
The determination of some requirements for a helicopter flight research simulation facility
NASA Technical Reports Server (NTRS)
Sinacori, J. B.
1977-01-01
Important requirements were defined for a flight simulation facility to support Army helicopter development. In particular, requirements associated with the visual and motion subsystems of the planned simulator were studied. The method used in the motion requirements study is presented together with the underlying assumptions and a description of the supporting data. Results are given in a form suitable for use in a preliminary design. Visual requirements associated with a television camera/model concept are also reported. The important parameters are described together with substantiating data and assumptions. Research recommendations are given.
Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.
We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters, and their usefulness is demonstrated.
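The abstract does not define the variance reduction factor itself, but quantities of this kind build on the standard first-order ("sandwich") propagation of a nuclear-data covariance matrix M through relative sensitivity coefficients S of an integral parameter R; a minimal statement of that relation:

```latex
\[
  \left(\frac{\Delta R}{R}\right)^{2} \;=\; S^{\mathsf{T}} M\, S ,
  \qquad
  S_i \;=\; \frac{\sigma_i}{R}\,\frac{\partial R}{\partial \sigma_i},
\]
% where sigma_i are the nuclear-data values (e.g., group cross sections) and
% M holds their relative covariances; energy groups where the summand
% S_i M_{ij} S_j is large are candidates for further data improvement.
```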
Spectral embedding finds meaningful (relevant) structure in image and microarray data
Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L
2006-01-01
Background Accurate methods for extraction of meaningful patterns in high dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, that minimizes the requirements for such parameter optimization for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning parameter fitting is simple and is a general, rather than data type or experiment specific approach, for the two datasets analyzed here. Tuning parameter optimization is minimized in the DR step to each subsequent classification method, enabling the possibility of valid cross-experiment comparisons. Conclusion Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359
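A minimal sketch of a graph-Laplacian embedding in the spirit of the spectral method discussed, assuming a Gaussian affinity with a single bandwidth parameter sigma; the noisy-circle data and the particular normalization are illustrative, not the authors' exact diffusion-map construction.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def spectral_embedding(X, sigma=1.0, n_components=2):
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma**2))  # Gaussian affinity
    d = W.sum(axis=1)
    L = np.diag(d) - W                         # unnormalized graph Laplacian
    # generalized eigenproblem L v = lambda D v; drop the trivial eigenvector
    vals, vecs = eigh(L, np.diag(d))
    return vecs[:, 1:n_components + 1]

# noisy circle: a nonlinear structure that linear PCA cannot unfold
t = np.linspace(0, 2*np.pi, 200, endpoint=False)
X = np.c_[np.cos(t), np.sin(t)] + 0.05*np.random.default_rng(1).normal(size=(200, 2))
Y = spectral_embedding(X, sigma=0.5)
print(Y.shape)  # (200, 2) low-dimensional coordinates for visualization
```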
Monolithic microwave integrated circuits: Interconnections and packaging considerations
NASA Technical Reports Server (NTRS)
Bhasin, K. B.; Downey, A. N.; Ponchak, G. E.; Romanofsky, R. R.; Anzic, G.; Connolly, D. J.
1984-01-01
Monolithic microwave integrated circuits (MMIC's) above 18 GHz were developed because of important potential system benefits in cost reliability, reproducibility, and control of circuit parameters. The importance of interconnection and packaging techniques that do not compromise these MMIC virtues is emphasized. Currently available microwave transmission media are evaluated to determine their suitability for MMIC interconnections. An antipodal finline type of microstrip waveguide transition's performance is presented. Packaging requirements for MMIC's are discussed for thermal, mechanical, and electrical parameters for optimum desired performance.
Selection of the battery pack parameters for an electric vehicle based on performance requirements
NASA Astrophysics Data System (ADS)
Koniak, M.; Czerepicki, A.
2017-06-01
Each type of vehicle has specific power requirements. Some require rapid charging, others must cover long distances between charges, but a common goal is the longest possible battery life. Additionally, the battery is influenced by factors such as temperature, depth of discharge and operating current. The article presents the parameters of chemical cells that should be taken into account during the design of a battery for a specific application. This is particularly important because improperly matched batteries can wear out prematurely and cause additional costs. The method of selecting the correct cell type should take the previously discussed features and the operating characteristics of the vehicle into account. The authors present methods of obtaining such characteristics along with their assessment and examples. An example of battery parameter selection based on the design assumptions of the vehicle and the expected performance characteristics is also described. Selecting proper battery operating parameters is important due to its impact on the economic result of investments in electric vehicles. For example, for some Li-ion technologies, premature wear-out of batteries in a fleet of cruise boats or buses with an estimated lifetime of 10 years is not acceptable, because it would cause substantial financial losses for the owner of the rolling stock. The presented method of choosing the right cell technology for a selected application can be the basis for decisions on future battery technical parameters.
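As a complement to the cell-selection discussion, pack sizing from top-level performance requirements reduces to simple arithmetic once a cell is chosen. The sketch below uses a hypothetical 600 V / 120 kWh bus pack and invented LFP cell ratings, and ignores the temperature, depth-of-discharge and current-rating checks the article emphasizes.

```python
import math

def pack_layout(pack_voltage, pack_energy_kwh, cell_voltage, cell_capacity_ah):
    """Series/parallel cell counts meeting voltage and energy targets."""
    n_series = math.ceil(pack_voltage / cell_voltage)
    cell_wh = cell_voltage * cell_capacity_ah
    n_parallel = math.ceil(pack_energy_kwh * 1000.0 / (n_series * cell_wh))
    return n_series, n_parallel

# hypothetical 600 V / 120 kWh pack built from 3.2 V / 100 Ah LFP cells
s, p = pack_layout(600.0, 120.0, 3.2, 100.0)
print(f"{s}s{p}p -> {s*p} cells, {s*3.2:.0f} V, {s*p*3.2*100/1000:.0f} kWh")
```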
Predictors of early survival in Soay sheep: cohort-, maternal- and individual-level variation
Jones, Owen R; Crawley, Michael J; Pilkington, Jill G; Pemberton, Josephine M
2005-01-01
A demographic understanding of population dynamics requires an appreciation of the processes influencing survival—a demographic rate influenced by parameters varying at the individual, maternal and cohort level. There have been few attempts to partition the variance in demography contributed by each of these parameter types. Here, we use data from a feral population of Soay sheep (Ovis aries), from the island of St Kilda, to explore the relative importance of these parameter types on early survival. We demonstrate that the importance of variation occurring at the level of the individual, and maternally, far outweighs that occurring at the cohort level. The most important variables within the individual and maternal levels were birth weight and maternal age class, respectively. This work underlines the importance of using individual based models in ecological demography and we, therefore, caution against studies that focus solely on population processes. PMID:16321784
ERIC Educational Resources Information Center
Goel, Sanjay
2006-01-01
Fifty-four engineers and managers working with Indian and multinational IT companies, with an average experience of 7.5 years, have responded to a survey about engineering education. Respondents have assessed the importance of 49 parameters. Twenty-three of these parameters correspond to core engineering and general professional competencies for…
Development of system design information for carbon dioxide using an amine type sorber
NASA Technical Reports Server (NTRS)
Rankin, R. L.; Roehlich, F.; Vancheri, F.
1971-01-01
Development work on system design information for amine type carbon dioxide sorber is reported. Amberlite IR-45, an aminated styrene divinyl benzene matrix, was investigated to determine the influence of design parameters of sorber particle size, process flow rate, CO2 partial pressure, total pressure, and bed designs. CO2 capacity and energy requirements for a 4-man size system were related mathematically to important operational parameters. Some fundamental studies in CO2 sorber capacity, energy requirements, and process operation were also performed.
NASA Astrophysics Data System (ADS)
Reyes, J. J.; Adam, J. C.; Tague, C.
2016-12-01
Grasslands play an important role in agricultural production as forage for livestock; they also provide a diverse set of ecosystem services including soil carbon (C) storage. The partitioning of C between above and belowground plant compartments (i.e. allocation) is influenced by both plant characteristics and environmental conditions. The objectives of this study are to 1) develop and evaluate a hybrid C allocation strategy suitable for grasslands, and 2) apply this strategy to examine the importance of various parameters related to biogeochemical cycling, photosynthesis, allocation, and soil water drainage on above and belowground biomass. We include allocation as an important process in quantifying the model parameter uncertainty, which identifies the most influential parameters and what processes may require further refinement. For this, we use the Regional Hydro-ecologic Simulation System, a mechanistic model that simulates coupled water and biogeochemical processes. A Latin hypercube sampling scheme was used to develop parameter sets for calibration and evaluation of allocation strategies, as well as parameter uncertainty analysis. We developed the hybrid allocation strategy to integrate both growth-based and resource-limited allocation mechanisms. When evaluating the new strategy simultaneously for above and belowground biomass, it produced a larger number of less biased parameter sets: 16% more compared to resource-limited and 9% more compared to growth-based. This also demonstrates its flexible application across diverse plant types and environmental conditions. We found that higher parameter importance corresponded to sub- or supra-optimal resource availability (i.e. water, nutrients) and temperature ranges (i.e. too hot or cold). For example, photosynthesis-related parameters were more important at sites warmer than the theoretical optimal growth temperature. Therefore, larger values of parameter importance indicate greater relative sensitivity in adequately representing the relevant process to capture limiting resources or manage atypical environmental conditions. These results may inform future experimental work by focusing efforts on quantifying specific parameters under various environmental conditions or across diverse plant functional types.
Requirements on high resolution detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, A.
For a number of microtomography applications, X-ray detectors with a spatial resolution of 1 μm are required. This high spatial resolution will influence and degrade other parameters of secondary importance like detective quantum efficiency (DQE), dynamic range, linearity and frame rate. This note summarizes the most important arguments for and against the detector systems that could be considered. This article discusses the mutual dependencies between the various figures which characterize a detector, and tries to give some ideas on how to proceed in order to improve present technology.
Knopman, Debra S.; Voss, Clifford I.
1988-01-01
Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because minimum information required in regression on chemical data for the estimation of model parameters by regression is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
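A minimal sketch of the kind of sensitivity computation described, using the Ogata-Banks constant-source solution as one standard one-dimensional analytical transport model (the paper's exact models and boundary conditions may differ) and central finite differences in place of analytical derivatives:

```python
import numpy as np
from scipy.special import erfc

def concentration(x, t, v, D, c0=1.0):
    """1-D advection-dispersion with a constant upstream source (Ogata-Banks)."""
    a = (x - v*t) / (2*np.sqrt(D*t))
    b = (x + v*t) / (2*np.sqrt(D*t))
    return 0.5*c0*(erfc(a) + np.exp(v*x/D)*erfc(b))

def sensitivity(x, t, v, D, h=1e-6):
    """dC/dv and dC/dD by central differences."""
    dv = (concentration(x, t, v+h, D) - concentration(x, t, v-h, D)) / (2*h)
    dD = (concentration(x, t, v, D+h) - concentration(x, t, v, D-h)) / (2*h)
    return dv, dD

x, v, D = 10.0, 0.5, 0.1
for t in (5.0, 20.0, 40.0, 80.0):
    print(t, sensitivity(x, t, v, D))  # sensitivities peak near front passage
```

Plotting such sensitivities against time is what motivates the sampling-design statements above: observations carry the most information where the sensitivity magnitude is largest.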
Metrological reliability of optical coherence tomography in biomedical applications
NASA Astrophysics Data System (ADS)
Goloni, C. M.; Temporão, G. P.; Monteiro, E. C.
2013-09-01
Optical coherence tomography (OCT) has proven to be an efficient diagnostic technique for imaging in vivo tissues, an optical biopsy with important prospects as a diagnostic tool for quantitative characterization of tissue structures. Despite its established clinical use, there is no international standard that addresses the specific requirements for basic safety and essential performance of OCT devices for biomedical imaging. The present work studies the parameters necessary for conformity assessment of optoelectronic equipment used in biomedical applications, such as lasers, Intense Pulsed Light (IPL), and OCT, in order to identify the potential requirements to be considered in a future development of a particular standard for OCT equipment. In addition to some of the particular requirements in the standards for laser and IPL equipment, which are also applicable to metrological reliability analysis of OCT equipment, specific parameters for OCT evaluation have been identified, considering its biomedical application. For each parameter identified, its reporting in the accompanying documents and/or its measurement has been recommended. Among the parameters for which the measurement requirement was recommended, including uncertainty evaluation, the following are highlighted: optical radiation output, axial and transverse resolution, pulse duration and interval, and beam divergence.
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of developing and implementing an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification, to identify and document the major features and assumptions of each conceptual model; the process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation, to identify which conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, carried out only for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) model complexity optimization, to identify the important or relevant parameters for the uncertainty analysis; b) characterization of parameter uncertainty, to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) propagation of uncertainty, to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable, or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions: those made during conceptual model development, those required by the mathematical model, those required by the numerical model, those made during the spatial and temporal discretization process, those needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
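Step c above (propagation of uncertainty by Monte Carlo) can be illustrated in a few lines; the parameter pdfs, the travel-time model and every number below are invented stand-ins, not Hanford values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
# assumed pdfs for two "important" uncertain parameters (illustrative only)
logK = rng.normal(-4.0, 0.5, n)            # log10 hydraulic conductivity [m/s]
phi = rng.triangular(0.25, 0.30, 0.40, n)  # porosity

def travel_time_years(K, phi, gradient=1e-3, length=1000.0):
    v = K * gradient / phi                 # average linear velocity
    return length / v / (86400.0 * 365.25)

t = travel_time_years(10.0**logK, phi)
# one empirical CCDF of the family described in step 5
print(f"median {np.median(t):.0f} yr; P(t > 1000 yr) = {(t > 1000).mean():.2f}")
```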
Trends of microwave dielectric materials for antenna application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sulong, T. A. T., E-mail: tuanamirahtuansulong@gmail.com; Osman, R. A. M., E-mail: rozana@unimap.edu.my; Idris, M. S., E-mail: sobri@unimap.edu.my
Rapid development of modern microwave communication systems requires high-quality microwave dielectric ceramic materials for mobile and satellite communication. High permittivity of dielectric ceramics enables the fabrication of compact electronic components. Dielectric ceramics used for microwave applications require three important parameters: high or appropriate relative permittivity (ε_r), high quality factor (Q×f ≥ 5000 GHz) and a good temperature coefficient of resonant frequency (τ_f). This paper reviews various dielectric ceramic materials used as microwave dielectric materials and the related parameters for antenna applications.
Porous silicon for drug delivery systems
NASA Astrophysics Data System (ADS)
Abramova, E. N.; Khort, A. M.; Yakovenko, A. G.; Kornilova, D. S.; Slipchenko, E. A.; Prokhorov, D. I.; Shvets, V. I.
2018-01-01
The article deals with the main principles of the formation of porous silicon (por-Si) to produce containers for drug delivery systems. The most important por-Si characteristics for producing nanocontainers with the required parameters are determined.
Color separation in forensic image processing using interactive differential evolution.
Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb
2015-01-01
Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of an interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eyes. © 2014 American Academy of Forensic Sciences.
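A simplified sketch of the DE/rand/1/bin loop with the fitness evaluation replaced by a placeholder for interactive human judgment; the hue-band parameterization, the synthetic score and all control settings (F, CR, population size) are assumptions for illustration.

```python
import numpy as np

def separate(img_hsv, hue_center, hue_width):
    """Keep pixels whose hue lies in the selected band."""
    return np.abs(img_hsv[..., 0] - hue_center) < hue_width

def human_score(mask):
    # placeholder for interactive judgment: in the real IDE loop a person
    # rates the displayed separation; here a synthetic target stands in
    return -np.abs(mask.mean() - 0.2)   # pretend ~20% coverage looks right

rng = np.random.default_rng(3)
img = rng.random((64, 64, 3))           # stand-in HSV image
pop = rng.uniform([0, 0.01], [1, 0.3], size=(10, 2))  # (hue_center, hue_width)
F, CR = 0.7, 0.9
for gen in range(30):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.where(rng.random(2) < CR, a + F*(b - c), pop[i])
        trial = np.clip(trial, [0, 0.01], [1, 0.3])
        if human_score(separate(img, *trial)) >= human_score(separate(img, *pop[i])):
            pop[i] = trial              # greedy selection, keep the better band
best = max(pop, key=lambda p: human_score(separate(img, *p)))
print("selected hue band:", best)
```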
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boozer, Allen H., E-mail: ahb17@columbia.edu
2015-03-15
The plasma current in ITER cannot be allowed to transfer from thermal to relativistic electron carriers. The potential for damage is too great. Before the final design is chosen for the mitigation system to prevent such a transfer, it is important that the parameters that control the physics be understood. Equations that determine these parameters and their characteristic values are derived. The mitigation benefits of the injection of impurities with the highest possible atomic number Z, and of slowing plasma cooling during halo current mitigation to ≳40 ms in ITER, are discussed. The highest possible Z increases the poloidal flux consumption required for each e-fold in the number of relativistic electrons and reduces the number of high energy seed electrons from which exponentiation builds. Slow cooling of the plasma during halo current mitigation also reduces the electron seed. Existing experiments could test physics elements required for mitigation but cannot carry out an integrated demonstration. ITER itself cannot carry out an integrated demonstration without excessive danger of damage unless the probability of successful mitigation is extremely high. The probability of success depends on the reliability of the theory. Equations required for a reliable Monte Carlo simulation are derived.
Transport Properties for Combustion Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, N.J.; Bastein, L.; Price, P.N.
This review examines current approximations and approaches that underlie the evaluation of transport properties for combustion modeling applications. Discussed in the review are: the intermolecular potential and its descriptive molecular parameters; various approaches to evaluating collision integrals; supporting data required for the evaluation of transport properties; commonly used computer programs for predicting transport properties; the quality of experimental measurements and their importance for validating or rejecting approximations to property estimation; the interpretation of corresponding states; combination rules that yield pair molecular potential parameters for unlike species from like-species parameters; and mixture approximations. The insensitivity of transport properties to intermolecular forces is noted, especially the non-uniqueness of the supporting potential parameters. Viscosity experiments on pure substances and binary mixtures measured after 1970 are used to evaluate a number of approximations; the intermediate temperature range 1 < T* < 10, where T* = kT/ε, is emphasized since this is where rich data sets are available. When suitable potential parameters are used, errors in transport property predictions for pure substances and binary mixtures are less than 5% when they are calculated using the approaches of Kee et al.; Mason, Kestin, and Uribe; Paul and Warnatz; or Ern and Giovangigli. Recommendations stemming from the review include (1) revisiting the supporting data required by the various computational approaches, and updating the data sets with accurate potential parameters, dipole moments, and polarizabilities; (2) characterizing the range of parameter space over which the fit to experimental data is good, rather than the current practice of reporting only the parameter set that best fits the data; (3) looking for improved combining rules, since existing rules were found to under-predict the viscosity in most cases; (4) performing more transport property measurements for mixtures that include radical species, an important but neglected area; (5) using the TRANLIB approach for treating polar molecules; and (6) performing more accurate measurements of the molecular parameters used to evaluate the molecular heat capacity, since it affects thermal conductivity, which is important in predicting flame development.
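One concrete instance of the evaluation route the review discusses is the first-order Chapman-Enskog viscosity for a Lennard-Jones gas, with the reduced collision integral Ω(2,2)*(T*) taken from the widely used Neufeld et al. correlation; the N2 parameters below are commonly tabulated values quoted here only for illustration.

```python
import numpy as np

def omega22(T_star):
    """Neufeld et al. correlation for the (2,2)* reduced collision integral."""
    return (1.16145 / T_star**0.14874
            + 0.52487 * np.exp(-0.77320 * T_star)
            + 2.16178 * np.exp(-2.43787 * T_star))

def lj_viscosity(T, M, sigma, eps_over_k):
    """Chapman-Enskog dilute-gas viscosity for a Lennard-Jones potential.
    T in K, M in g/mol, sigma in Angstrom, eps/k in K; returns Pa.s."""
    T_star = T / eps_over_k          # reduced temperature kT/eps
    return 2.6693e-6 * np.sqrt(M * T) / (sigma**2 * omega22(T_star))

# nitrogen with commonly tabulated LJ parameters (illustrative values)
print(f"mu(N2, 300 K) = {lj_viscosity(300.0, 28.013, 3.667, 99.8)*1e6:.1f} uPa.s")
```

The result lands within a few percent of measured N2 viscosity near room temperature, which illustrates the review's point: predictions of this quality are possible despite the non-uniqueness of the underlying potential parameters.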
USCS and the USDA Soil Classification System: Development of a Mapping Scheme
2015-03-01
…important to human daily living. A variety of disciplines (geology, agriculture, engineering, etc.) require a systematic categorization of soil, detailing… it is often important to also consider parameters that indicate soil strength. Two important properties used for engineering-related problems are… many textural classification systems were developed to meet specific needs. In agriculture, textural classification is used to determine crop…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, W.C.; Barrett, D.M.; Sampayan, S.E.
1990-08-06
In this paper we discuss system issues and modeling requirements within the context of energy sweep in an electron linear induction accelerator. When needed, particular parameter values are taken from the ETA-II linear induction accelerator at Lawrence Livermore National Laboratory. For this paper, the most important parameter is energy sweep during a pulse. It is important to have low energy sweep to satisfy the FEL resonance condition and to limit the beam corkscrew motion. It is desired to achieve ΔE/E = ±1% for a 50-ns flattop, whereas the present level of performance is ΔE/E = ±1% in 10 ns. To improve this situation we identify a number of areas in which modeling could help increase understanding and improve our ability to design linear induction accelerators.
Robust H∞ control of active vehicle suspension under non-stationary running
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Zhang, Li-Ping
2012-12-01
Due to the complexity of the controlled objects, the selection of control strategies and algorithms in vehicle control system design is an important task. Moreover, the control of automobile active suspensions has become an important research problem due to the constraints and parameter uncertainties of the mathematical models. In this study, after establishing a non-stationary road surface excitation model, the active suspension control for non-stationary running conditions was investigated using robust H∞ control and linear matrix inequality optimization. The dynamic equation of a two-degree-of-freedom quarter-car model with parameter uncertainty was derived. An H∞ state feedback control strategy with time-domain hard constraints was proposed and used to design the active suspension control system of the quarter-car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the stability of the proposed controller. Simulation results show that the proposed control strategy provides high system stability under non-stationary running conditions and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy achieves a promising improvement in ride comfort and satisfies the requirements on dynamic suspension deflection, dynamic tire loads and required control forces within given constraints, as well as the non-stationary running condition.
1989-10-30
…assembled pair is tumble lapped. Tumble lapping is a process in which a weighted lapping element and slurry of… Mechanically, the 1.5-inch diameter rotors… …parameter met AGTF requirements at that site. Weighting factors were then assigned to each parameter as an indication of the importance of the parameter to AGTF. The weighted score was determined by multiplying the score by the weighting factor. The weighted scores were then totaled for each site.
Overview of Icing Physics Relevant to Scaling
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2005-01-01
An understanding of icing physics is required for the development of both scaling methods and ice-accretion prediction codes. This paper gives an overview of our present understanding of the important physical processes and the associated similarity parameters that determine the shape of Appendix C ice accretions. For many years it has been recognized that ice accretion processes depend on flow effects over the model, on droplet trajectories, on the rate of water collection and time of exposure, and, for glaze ice, on a heat balance. For scaling applications, equations describing these events have been based on analyses at the stagnation line of the model and have resulted in the identification of several non-dimensional similarity parameters. The parameters include the modified inertia parameter of the water drop, the accumulation parameter and the freezing fraction. Other parameters dealing with the leading edge heat balance have also been used for convenience. By equating scale expressions for these parameters to the values to be simulated a set of equations is produced which can be solved for the scale test conditions. Studies in the past few years have shown that at least one parameter in addition to those mentioned above is needed to describe surface-water effects, and some of the traditional parameters may not be as significant as once thought. Insight into the importance of each parameter, and the physical processes it represents, can be made by viewing whether ice shapes change, and the extent of the change, when each parameter is varied. Experimental evidence is presented to establish the importance of each of the traditionally used parameters and to identify the possible form of a new similarity parameter to be used for scaling.
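Two of the traditionally used similarity parameters named above have simple published forms, sketched below; the flight and spray conditions are invented, the characteristic model size is taken as twice the leading-edge radius by convention, and a full scaling match would also include the droplet-trajectory and heat-balance parameters.

```python
RHO_W, RHO_ICE = 1000.0, 917.0      # water and ice densities, kg/m^3
MU_AIR = 1.74e-5                    # dynamic viscosity of cold air, Pa.s

def inertia_parameter(drop_diam, velocity, model_size):
    """Droplet inertia parameter K (dimensionless)."""
    return RHO_W * drop_diam**2 * velocity / (18.0 * model_size * MU_AIR)

def accumulation_parameter(lwc, velocity, spray_time, model_size):
    """Accumulation parameter Ac: potential ice growth / model size."""
    return lwc * velocity * spray_time / (model_size * RHO_ICE)

# reference condition vs. a half-size model: matching Ac sets the spray time
d_ref, d_half = 0.0533, 0.0533 / 2          # characteristic sizes, m
K = inertia_parameter(20e-6, 77.0, d_ref)   # 20 um MVD drops at 77 m/s
Ac = accumulation_parameter(0.5e-3, 77.0, 600.0, d_ref)
t_half = Ac * d_half * RHO_ICE / (0.5e-3 * 77.0)
print(f"K = {K:.2f}, Ac = {Ac:.2f}, half-scale spray time = {t_half:.0f} s")
```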
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
In orbit adiabatic demagnetization refrigeration for bolometric and microcalorimetric detectors
NASA Astrophysics Data System (ADS)
Hepburn, I. D.; Ade, P. A. R.; Davenport, I.; Smith, A.; Sumner, T. J.
1992-12-01
The new generation of photon detectors for satellite-based mm/submm and X-ray astronomical observations requires cooling to temperatures in the range 60 to 300 mK. At present, Adiabatic Demagnetization Refrigeration (ADR) is the best proposed technique for producing these temperatures in orbit due to its inherent simplicity and gravity-independent operation. For the efficient utilization of an ADR it is important to realize long operational times at base temperature with short recycle times. These criteria depend on several parameters: the required operating temperature, the cryogen bath temperature, the amount of heat leakage to the paramagnetic salt, the volume and type of salt, and the maximum obtainable magnetic field. For space application these parameters are restricted by the limitations imposed on the physical size, the mass, the available electrical power and the cooling power available. The design considerations required to match these parameters are described and test data from a working laboratory system are presented.
NASA Astrophysics Data System (ADS)
Xiao, Shou-Ne; Wang, Ming-Meng; Hu, Guang-Zhong; Yang, Guang-Wu
2017-09-01
It is difficult to accurately grasp the influence range and transmission paths from top-level vehicle design requirements to the underlying design parameters. Applying a directed-weighted complex network to the product parameter model is an important method that can clarify the relationships between product parameters and support the top-down design of a product. The relationships of the product parameters at each node are calculated via a simple path-searching algorithm, and the main design parameters are extracted by analysis and comparison. A uniform definition of the index formula for out-in degree can be provided based on the analysis of out-in-degree width, depth and control strength of train carriage body parameters. Vehicle gauge, axle load, crosswind and other parameters with higher values of the out-degree index are the most important boundary conditions; the most significant performance indices are the parameters with higher values of the out-in-degree index, including torsional stiffness, maximum testing speed and service life of the vehicle; the main design parameters include train carriage body weight, train weight per extended metre, train height and other parameters with higher values of the in-degree index. The network not only provides theoretical guidance for exploring the relationships of design parameters, but also further enriches the application of forward design methods to high-speed trains.
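A small illustration of weighted out-/in-degree extraction from a directed parameter network with networkx; the node names echo parameters mentioned in the abstract, but the edges and weights are invented for demonstration.

```python
import networkx as nx

# toy slice of a parameter network; edge weights encode influence strength
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("vehicle gauge", "body width", 0.9),
    ("axle load", "body weight", 0.8),
    ("crosswind", "torsional stiffness", 0.6),
    ("torsional stiffness", "body weight", 0.7),
    ("max testing speed", "torsional stiffness", 0.5),
    ("body weight", "weight per metre", 1.0),
])
out_deg = dict(G.out_degree(weight="weight"))  # high: boundary conditions
in_deg = dict(G.in_degree(weight="weight"))    # high: downstream design params
for node in G:
    print(f"{node:22s} out={out_deg[node]:.1f} in={in_deg[node]:.1f}")
```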
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines
Teodoro, George; Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.
2017-01-01
Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445
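The Dice and Jaccard metrics quoted above have simple closed forms over binary masks; a minimal sketch (the masks below are synthetic stand-ins for segmentation output).

```python
import numpy as np

def dice_jaccard(a, b):
    """Overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

rng = np.random.default_rng(0)
ref = rng.random((128, 128)) > 0.5            # stand-in "ground truth" mask
pred = ref ^ (rng.random((128, 128)) > 0.9)   # prediction with ~10% flips
print("Dice = %.3f, Jaccard = %.3f" % dice_jaccard(ref, pred))
```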
Analysis of carbon dioxide bands near 2.2 micrometers
NASA Technical Reports Server (NTRS)
Abubaker, M. S.; Shaw, J. H.
1984-01-01
Carbon dioxide is one of the more important atmospheric infrared-absorbing gases due to its relatively high, and increasing, concentration. The spectral parameters of its bands are required for understanding radiative heat transfer in the atmosphere. The line intensities, positions, line half-widths, rotational constants, and band centers of three overlapping bands of CO2 near 2.2 microns are presented. Non-linear least squares (NLLS) regression procedures were employed to determine these parameters.
CHARACTERIZING RESIDUE TRANSFER EFFICIENCIES USING A FLUORESCENT IMAGING TECHNIQUE
To reduce the uncertainty associated with current estimates of children's exposure to pesticides by dermal contact and indirect ingestion, residue transfer data are required. Prior to conducting exhaustive studies, a screening study to identify the important parameters for chara...
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
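A toy version of the SISR (particle filter) scheme with kernel jitter described above; the exponential-growth state model, the observation error, and all numeric settings are invented for illustration, not the authors' population models.

```python
import numpy as np

rng = np.random.default_rng(7)

# simulate 25 years of counts from a known exponential-growth state-space model
T, r_true, obs_cv = 25, 1.04, 0.15
N_true = 100 * r_true ** np.arange(T)
y = rng.normal(N_true, obs_cv * N_true)

# SIR particle filter jointly tracking state N and static growth rate r
n_part = 20_000
r = rng.normal(1.0, 0.1, n_part)          # informative prior on growth rate
N = rng.uniform(50, 200, n_part)
for t in range(T):
    if t > 0:
        N = N * r * rng.lognormal(0.0, 0.02, n_part)        # process noise
    w = np.exp(-0.5 * ((y[t] - N) / (obs_cv * N)) ** 2) / N  # normal obs lik.
    w /= w.sum()
    idx = rng.choice(n_part, n_part, p=w)  # multinomial resampling
    N, r = N[idx], r[idx]
    r += rng.normal(0.0, 0.005, n_part)    # kernel jitter vs particle depletion
print(f"posterior r: {r.mean():.3f} +/- {r.std():.3f} (true {r_true})")
```

With the informative prior on r the filter recovers the growth rate; replacing it with a diffuse prior reproduces the paper's observation that state estimates degrade little while the demographic parameter becomes poorly identified.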
NASA Astrophysics Data System (ADS)
de Santana, Felipe Bachion; de Souza, André Marcelo; Poppi, Ronei Jesus
2018-02-01
This study evaluates the use of visible and near infrared spectroscopy (Vis-NIRS) combined with multivariate regression based on random forest to quantify some quality soil parameters. The parameters analyzed were soil cation exchange capacity (CEC), sum of exchange bases (SB), organic matter (OM), clay and sand present in the soils of several regions of Brazil. Current methods for evaluating these parameters are laborious, timely and require various wet analytical methods that are not adequate for use in precision agriculture, where faster and automatic responses are required. The random forest regression models were statistically better than PLS regression models for CEC, OM, clay and sand, demonstrating resistance to overfitting, attenuating the effect of outlier samples and indicating the most important variables for the model. The methodology demonstrates the potential of the Vis-NIR as an alternative for determination of CEC, SB, OM, sand and clay, making possible to develop a fast and automatic analytical procedure.
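A minimal scikit-learn sketch of the modeling route described (random-forest regression on spectra, with the variable importances the authors use to flag the most informative channels); the spectra and the "organic matter" response below are synthetic stand-ins for real Vis-NIR data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# stand-in for Vis-NIR spectra: 300 soil samples x 500 wavelength channels
X = rng.random((300, 500))
y = X[:, 120] * 40 + X[:, 300] * 25 + rng.normal(0, 1, 300)  # synthetic "OM"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("R^2 on held-out samples:", rf.score(X_te, y_te))
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("most informative channels:", top)  # cf. 'most important variables'
```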
Optimization of single photon detection model based on GM-APD
NASA Astrophysics Data System (ADS)
Chen, Yu; Yang, Yi; Hao, Peiyu
2017-11-01
High-precision laser ranging over one hundred kilometres requires a detector with very strong detection capability for extremely weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance to the improvement of photon detection efficiency, and design optimization requires a good model. In this paper, we study the existing Poisson-distribution model and consider the important detector parameters of dark count rate, dead time, quantum efficiency and so on. We improve and optimize the detection model and select the appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results, and the rationality of the model is verified. It has reference value in engineering applications.
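The Poisson detection model mentioned above has a compact closed form for the per-gate detection and false-alarm probabilities; the efficiency, dark count rate, and gate width below are assumed values for illustration (dead-time effects are omitted).

```python
import numpy as np

def detection_probability(n_signal, eta=0.3, dark_rate=5e3, gate=100e-9):
    """Poisson single-photon detection model for a GM-APD.
    n_signal: mean signal photons per gate; eta: detection efficiency;
    dark_rate: dark counts per second; gate: range-gate width in seconds."""
    mean_counts = eta * n_signal + dark_rate * gate
    return 1.0 - np.exp(-mean_counts)        # P(at least one avalanche)

def false_alarm_probability(dark_rate=5e3, gate=100e-9):
    return 1.0 - np.exp(-dark_rate * gate)   # avalanche with no signal photon

for n in (0.1, 1.0, 5.0):
    print(f"n_s={n}: P_det={detection_probability(n):.3f}, "
          f"P_fa={false_alarm_probability():.5f}")
```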
High-performance radial AMTEC cell design for ultra-high-power solar AMTEC systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendricks, T.J.; Huang, C.
1999-07-01
Alkali Metal Thermal to Electric Conversion (AMTEC) technology is rapidly maturing for potential application in ultra-high-power solar AMTEC systems required by potential future US Air Force (USAF) spacecraft missions in medium-earth and geosynchronous orbits (MEO and GEO). Solar thermal AMTEC power systems potentially have several important advantages over current solar photovoltaic power systems in ultra-high-power spacecraft applications for USAF MEO and GEO missions. This work presents key aspects of radial AMTEC cell design to achieve high cell performance in solar AMTEC systems delivering more than 50 kW(e) to support high-power USAF missions. These missions typically require AMTEC cell conversion efficiency greater than 25%. A sophisticated design parameter methodology is described and demonstrated which establishes optimum design parameters in any radial cell design to satisfy high-power mission requirements. Specific relationships, which are distinct functions of cell temperatures and pressures, define critical dependencies between key cell design parameters, particularly the impact of parasitic thermal losses on Beta Alumina Solid Electrolyte (BASE) area requirements, voltage, number of BASE tubes, and system power production for both maximum power-per-BASE-area and optimum efficiency conditions. Finally, some high-level system tradeoffs are demonstrated using the design parameter methodology to establish high-power radial cell design requirements and philosophy. The discussion highlights how to incorporate this methodology with sophisticated SINDA/FLUINT AMTEC cell modeling capabilities to determine optimum radial AMTEC cell designs.
FIELD QUALITY CONTROL STRATEGIES ASSESSING SOLIDIFICATION/STABILIZATION
Existing regulatory mobility reduction (leaching) tests are not amenable to real-time quality control because of the time required to perform sample extraction and chemical analysis. This is of concern because the leaching test is the most important parameter used to relate trea...
The role of impulse parameters in force variability
NASA Technical Reports Server (NTRS)
Carlton, L. G.; Newell, K. M.
1986-01-01
One of the principal limitations of the human motor system is the ability to produce consistent motor responses. When asked to repeatedly make the same movement, performance outcomes are characterized by a considerable amount of variability. This occurs whether variability is expressed in terms of kinetics or kinematics. Variability in performance is of considerable importance because, for tasks requiring accuracy, it is a critical variable in determining the skill of the performer. What has long been sought is a description of the parameter or parameters that determine the degree of variability. Two general experimental protocols were used. One protocol is to use dynamic actions and record variability in kinematic parameters such as spatial or temporal error. A second strategy was to use isometric actions and record kinetic variables such as peak force produced. The important force-related factors affecting variability are examined, and an experimental approach to examine the influence of each of these variables is provided.
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index α1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy E_k to a steeper spectral index α2 > α1 above E_k. The maximum likelihood procedure is developed for estimating these three spectral parameters of the broken power law energy spectrum from simulated detector responses. These estimates and their surrounding statistical uncertainty are being used to derive the requirements in energy resolution, calorimeter size, and energy response of a proposed sampling calorimeter for the Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS). This study thereby permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
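A self-contained sketch of maximum-likelihood estimation of the three spectral parameters (α1, α2, E_k) on synthetic events drawn from a broken power law; the detector response effects (energy resolution, calorimeter acceptance) that the actual ACCESS study folds in are omitted, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

EMIN = 1.0  # spectrum analysed above a threshold energy (arbitrary units)

def neg_log_like(params, E):
    a1, a2, log_Ek = params
    Ek = np.exp(log_Ek)
    if Ek <= EMIN or a2 <= 1.0:
        return np.inf
    # normalization of the two-segment pdf on [EMIN, inf)
    I1 = (EMIN**(1 - a1) - Ek**(1 - a1)) / (a1 - 1)
    I2 = Ek**(1 - a1) / (a2 - 1)
    below = E < Ek
    ll = -a1 * np.log(E[below]).sum()
    ll += (a2 - a1) * np.log(Ek) * (~below).sum() - a2 * np.log(E[~below]).sum()
    return -(ll - E.size * np.log(I1 + I2))

# draw synthetic events from a known broken power law, then re-fit
rng = np.random.default_rng(5)
a1, a2, Ek, n = 2.7, 3.1, 100.0, 50_000
I1 = (EMIN**(1-a1) - Ek**(1-a1)) / (a1-1); I2 = Ek**(1-a1) / (a2-1)
u, v = rng.random(n), rng.random(n)
low = u < I1 / (I1 + I2)                     # choose segment, then invert CDF
E = np.where(low,
             (EMIN**(1-a1) - v*(EMIN**(1-a1) - Ek**(1-a1)))**(1/(1-a1)),
             Ek * (1 - v)**(-1/(a2-1)))
fit = minimize(neg_log_like, x0=[2.5, 3.5, np.log(50.0)], args=(E,),
               method="Nelder-Mead")
print("alpha1, alpha2, Ek =", fit.x[0], fit.x[1], np.exp(fit.x[2]))
```

Repeating the fit over many synthetic data sets gives the spread of the estimates, which is the kind of statistical uncertainty the study uses to trade off calorimeter size against energy resolution.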
Design and Analysis of a Forging Die for Manufacturing of Multiple Connecting Rods
NASA Astrophysics Data System (ADS)
Megharaj, C. E.; Nagaraj, P. M.; Jeelan Pasha, K.
2016-09-01
This paper demonstrates how to utilize hammer capacity by modifying the die design so that a forging hammer can manufacture more than one connecting rod in a given forging cycle time. To modify the die design, a study is carried out to understand the parameters required for forging die design. Considering these parameters, the forging die is designed using the design modelling tool Solid Edge. The new design can produce two connecting rods on the same capacity hammer. The new design must be validated by verifying complete filling of metal in the die cavities without any defects. For this verification, the analysis tool DEFORM 3D is used in this project. Before the validation process starts, the generated 3D models must be converted into .STL file format to import them into DEFORM 3D. After import, the designs are analysed for material flow into the cavities and the energy required to produce two connecting rods with the new forging die design. It is found that the forging die design is sound, without any defects, and the energy graph shows that the forging energy required to produce two connecting rods is within the limit of the hammer capacity. Implementation of this project increases the production of connecting rods by 200% in less than the previous cycle time.
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
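The composite scaled sensitivity statistic named above has a standard form; a sketch with a finite-difference Jacobian on an invented three-parameter response model (the weighting by 1/sigma_i stands in for the error-based weighting described).

```python
import numpy as np

def composite_scaled_sensitivities(model, b, sigma, h=1e-6):
    """CSS_j = sqrt( (1/ND) * sum_i [ (dy_i/db_j) * b_j / sigma_i ]^2 )."""
    y0 = model(b)
    css = np.empty(b.size)
    for j in range(b.size):
        bp = b.copy(); bp[j] *= (1 + h)
        dydb = (model(bp) - y0) / (b[j] * h)   # forward difference
        css[j] = np.sqrt(np.mean((dydb * b[j] / sigma) ** 2))
    return css

# toy "rainfall-runoff" response with 3 parameters and 20 observations
t = np.linspace(0.1, 10, 20)
model = lambda b: b[0] * np.exp(-t / b[1]) + b[2] * t
css = composite_scaled_sensitivities(model, np.array([5.0, 2.0, 0.3]),
                                     sigma=np.full(20, 0.5))
print("CSS:", css)  # small CSS flags parameters the data cannot constrain
```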
Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.
2006-01-01
As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.
Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan
2015-01-01
Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to or in many cases superior to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
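For context, the conventional f2 statistic that the proposed F2 parameter extends compares mean reference and test dissolution profiles $R_t$ and $T_t$ at $n$ time points:

$$f_2 = 50\,\log_{10}\!\left\{100\left[1+\frac{1}{n}\sum_{t=1}^{n}(R_t-T_t)^2\right]^{-1/2}\right\},$$

with $f_2 \ge 50$ commonly taken to indicate similarity of the two profiles.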
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially, like the general Boltzmann weighting method, but also reduces the effect of fitting errors that lead to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties over a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than to partial atomic charge parameters in these systems, although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any type of force field parameter. Copyright © 2013 Wiley Periodicals, Inc.
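As background, the general Boltzmann weighting that the partition function-based scheme builds on assigns configuration $i$ on the PES a weight (with $k_BT$ an effective temperature chosen by the modeler, an assumption here):

$$w_i = \frac{e^{-E_i/k_BT}}{\sum_j e^{-E_j/k_BT}},$$

so that low-energy regions of the surface dominate the fit; the authors' scheme modifies this idea to damp the influence of fitting errors.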
NASA Technical Reports Server (NTRS)
Dayman, B., Jr.; Fiore, A. W.
1974-01-01
The present work discusses in general terms the various kinds of ground facilities, in particular, wind tunnels, which support aerodynamic testing. Since not all flight parameters can be simulated simultaneously, an important problem consists in matching parameters. It is pointed out that there is a lack of wind tunnels for a complete Reynolds-number simulation. Using a computer to simulate flow fields can result in considerable reduction of wind-tunnel hours required to develop a given flight vehicle.
DOT National Transportation Integrated Search
2008-09-01
The Resilient Modulus (Mr) of pavement materials and subgrades is an important input parameter for the design of pavement structures. The Repeated Loading Triaxial (RLT) test typically determines Mr. However, the RLT test requires well trained pe...
Developing Accurate Spatial Maps of Cotton Fiber Quality Parameters
USDA-ARS?s Scientific Manuscript database
Awareness of the importance of cotton fiber quality (Gossypium, L. sps.) has increased as advances in spinning technology require better quality cotton fiber. Recent advances in geospatial information sciences allow an improved ability to study the extent and causes of spatial variability in fiber p...
Identification of milling and baking quality QTL in multiple soft wheat mapping populations
USDA-ARS?s Scientific Manuscript database
Wheat derived food products require a range of characteristics. Identification and understanding of the genetic components controlling end-use quality of wheat is important for crop improvement. We assessed the underlying genetics controlling specific milling and baking quality parameters of soft wh...
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
Successful assessment of the size of the Ciwidey-Patuha geothermal field required integrated analysis of all available data to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for the field. Therefore, this paper applies an experimental design approach to the geothermal numerical simulation of Ciwidey-Patuha to generate a probabilistic resource assessment. This process assesses the impact of the evaluated parameters affecting resources and the interactions between these parameters. The methodology has successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
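A minimal sketch of the experimental design workflow just described: run the expensive simulator at a modest number of design points, fit a polynomial proxy, and Monte Carlo the proxy for probabilistic estimates. The function `run_simulator`, the two parameters, and all numbers are hypothetical placeholders, not values from the Ciwidey-Patuha study.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 2))  # 30 design points in 2 scaled reservoir parameters

def run_simulator(x):
    """Placeholder for an expensive reservoir simulation returning resource (MWe)."""
    return 50 + 12 * x[0] + 7 * x[1] + 4 * x[0] * x[1] + rng.normal(0, 0.5)

y = np.array([run_simulator(x) for x in X])

# Second-order polynomial response surface as a cheap proxy for the simulator.
proxy = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

# Monte Carlo over the proxy gives probabilistic (e.g., P10/P50/P90) resource estimates.
samples = proxy.predict(rng.uniform(-1.0, 1.0, size=(100_000, 2)))
print(np.percentile(samples, [10, 50, 90]))
```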
Machine learning action parameters in lattice quantum chromodynamics
NASA Astrophysics Data System (ADS)
Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William
2018-05-01
Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri
2013-01-01
This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high level nuclear waste repository of the United States of America, and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing old and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (10s to 1000s of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.
[Methods for measuring skin aging].
Zieger, M; Kaatz, M
2016-02-01
Aging affects human skin and is becoming increasingly important with regard to medical, social and aesthetic issues. Detection of intrinsic and extrinsic components of skin aging requires reliable measurement methods. Modern techniques, e.g., based on direct imaging, spectroscopy or skin physiological measurements, provide a broad spectrum of parameters for different applications.
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1982-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
USDA-ARS?s Scientific Manuscript database
The study of health impacts, emission estimation of particulate matter (PM), and development of new control technologies require knowledge of PM characteristics. Among these PM characteristics, the particle size distribution (PSD) is perhaps the most important physical parameter governing particle b...
Nonlinear system analysis in bipolar integrated circuits
NASA Astrophysics Data System (ADS)
Fang, T. F.; Whalen, J. J.
1980-01-01
Since analog bipolar integrated circuits (IC's) have become important components in modern communication systems, the study of Radio Frequency Interference (RFI) effects in bipolar IC amplifiers is an important subject for electromagnetic compatibility (EMC) engineering. The investigation focused on using the nonlinear circuit analysis program (NCAP) to predict RF demodulation effects in broadband bipolar IC amplifiers. The audio frequency (AF) voltage at the IC amplifier output terminal caused by an amplitude modulated (AM) RF signal at the IC amplifier input terminal was calculated and compared to measured values. Two broadband IC amplifiers were investigated: (1) a cascode circuit using a CA3026 dual differential pair; (2) a unity gain voltage follower circuit using a micro A741 operational amplifier (op amp). Before using NCAP for RFI analysis, the model parameters for each bipolar junction transistor (BJT) in the integrated circuit were determined. Probe measurement techniques, manufacturer's data, and other researchers' data were used to obtain the required NCAP BJT model parameter values. An important contribution of this effort is a complete set of NCAP BJT model parameters for most of the transistor types used in linear IC's.
Mielnicki, Wojciech; Dyla, Agnieszka; Zawada, Tomasz
2016-12-05
Transthoracic echocardiography (TTE) has become one of the most important diagnostic tools in the treatment of critically ill patients. It allows clinicians to recognise potentially reversible life-threatening situations and is also very effective in monitoring the fluid status of patients, gradually replacing invasive methods in the intensive care unit. Hemodynamic assessment is based on a few static and dynamic parameters. Dynamic parameters change during the respiratory cycle in mechanical ventilation, and the magnitude of this change corresponds directly to fluid responsiveness. Most of the parameters cannot be used in spontaneously breathing patients. For these patients the most important test is passive leg raising, which is a good substitute for a fluid bolus. Although TTE is very useful in the critical care setting, we should not forget its important limitations, not only technical ones but also those caused by the critical illness itself. Unfortunately, this method does not allow continuous monitoring, and every change in the patient's condition requires a repeated examination.
Kišonaitė, Miglė; Zubrienė, Asta; Čapkauskaitė, Edita; Smirnov, Alexey; Smirnovienė, Joana; Kairys, Visvaldas; Michailovienė, Vilma; Manakova, Elena; Gražulis, Saulius; Matulis, Daumantas
2014-01-01
The early stage of drug discovery is often based on selecting the highest affinity lead compound. To this end the structural and energetic characterization of the binding reaction is important. The binding energetics can be resolved into enthalpic and entropic contributions to the binding Gibbs free energy. Most compound binding reactions are coupled to the absorption or release of protons by the protein or the compound. A distinction between the observed and intrinsic parameters of the binding energetics requires the dissection of the protonation/deprotonation processes. Since only the intrinsic parameters can be correlated with molecular structural perturbations associated with complex formation, it is these parameters that are required for rational drug design. Carbonic anhydrase (CA) isoforms are important therapeutic targets to treat a range of disorders including glaucoma, obesity, epilepsy, and cancer. For effective treatment isoform-specific inhibitors are needed. In this work we investigated the binding and protonation energetics of sixteen [(2-pyrimidinylthio)acetyl]benzenesulfonamide CA inhibitors using isothermal titration calorimetry and fluorescent thermal shift assay. The compounds were built by combining four sulfonamide headgroups with four tailgroups yielding 16 compounds. Their intrinsic binding thermodynamics showed the limitations of the functional group energetic additivity approach used in fragment-based drug design, especially at the level of enthalpies and entropies of binding. Combined with high resolution crystal structural data correlations were drawn between the chemical functional groups on selected inhibitors and intrinsic thermodynamic parameters of CA-inhibitor complex formation. PMID:25493428
Design Change Model for Effective Scheduling Change Propagation Paths
NASA Astrophysics Data System (ADS)
Zhang, Hai-Zhu; Ding, Guo-Fu; Li, Rong; Qin, Sheng-Feng; Yan, Kai-Yin
2017-09-01
Changes in requirements may increase product development cost and lead time; therefore, it is important to understand how requirement changes propagate in the design of complex product systems and to be able to select the best options to guide design. Currently, most approaches to design change fail to take the multi-disciplinary coupling relationships and the number of parameters into account in an integrated way. A new design change model is presented to systematically analyze and search change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe how requirement changes cause design change propagation in the behavior and structure domains. Secondly, a multi-disciplinary oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed design change model and its associated tools have been demonstrated on the scheduling of change propagation paths for a high speed train's bogie to show their feasibility and effectiveness. The model not only supports responding quickly to diversified market requirements, but also helps to satisfy customer requirements and reduce product development lead time. The proposed design change model can be applied to a wide range of engineering systems design with improved efficiency.
Determination of Destress Blasting Effectiveness Using Seismic Source Parameters
NASA Astrophysics Data System (ADS)
Wojtecki, Łukasz; Mendecki, Maciej J.; Zuberek, Wacaław M.
2017-12-01
Underground mining of coal seams in the Upper Silesian Coal Basin is currently performed under difficult geological and mining conditions. The mining depth, dislocations (faults and folds), and mining remnants are responsible for the rockburst hazard to the greatest degree. This hazard can be minimized by active rockburst prevention, in which destress blastings play an important role. Destress blastings in coal seams aim to relieve local stress concentrations. These blastings are usually performed from the longwall face to decrease the stress level ahead of the longwall. An accurate estimation of the effectiveness of active rockburst prevention is important when mining under disadvantageous geological and mining conditions that raise the risk of rockburst. Seismic source parameters characterize the focus of a tremor and may be useful in estimating the destress blasting effects. The investigated destress blastings were performed in coal seam no. 507 during its longwall mining in one of the coal mines in the Upper Silesian Coal Basin under difficult geological and mining conditions. The seismic source parameters of the provoked tremors were calculated. The presented preliminary investigations enable a rapid estimation of destress blasting effectiveness using seismic source parameters, but further analysis under other geological and mining conditions with other blasting parameters is required.
On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2018-06-01
Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.
Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR
van de Geijn, J; Fraass, B A
1984-01-01
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
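One plausible way to write the inverse-square correction described above (the exact form and the reference depth are assumptions here, not taken from the paper) is

$$\mathrm{NFD}(d) = \mathrm{FDD}(d)\left(\frac{f+d}{f+d_m}\right)^{2},$$

with $f$ the source-surface distance and $d_m$ the reference depth of maximum dose, so that NFD isolates attenuation and scatter behavior from purely geometric falloff.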
System study of the utilization of space for carbon dioxide research
NASA Technical Reports Server (NTRS)
Glaser, P. E.; Vranka, R.
1985-01-01
The objectives included: compiling and selecting the Scientific Data Requirements (SDRs) pertinent to the CO2 Research Program that have the potential to be more successfully achieved by utilizing space-based sensor systems; assessing the potential of space technology for monitoring those parameters which may be important first indicators of climate change due to increasing atmospheric CO2, including the behavior of the West Antarctic ice sheet; and determining the potential of space technology for monitoring those parameters to improve understanding of the coupling between CO2 and cloud cover.
The bulk composition of Titan's atmosphere.
NASA Technical Reports Server (NTRS)
Trafton, L.
1972-01-01
Consideration of the physical constraints for Titan's atmosphere leads to a model which describes the bulk composition of the atmosphere in terms of observable parameters. Intermediate-resolution photometric scans of both Saturn and Titan, including scans of the Q branch of Titan's methane band, constrain these parameters in such a way that the model indicates the presence of another important atmospheric gas, namely, another bulk constituent or a significant thermal opacity. Further progress in determining the composition and state of Titan's atmosphere requires additional observations to eliminate present ambiguities. For this purpose, particular observational targets are suggested.
A New Calibration Method for Commercial RGB-D Sensors.
Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu
2017-05-24
Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter‑level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.
An approach for cooling by solar energy
NASA Astrophysics Data System (ADS)
Rabeih, S. M.; Wahhab, M. A.; Asfour, H. M.
The present investigation is concerned with the possibility of basing the operation of a household refrigerator on solar energy instead of gas fuel. The currently employed heating system is to be replaced by a solar collector with an absorption area of two square meters. Attention is given to the required changes in the generator design, the solar parameters at the location of the refrigerator installation, the mathematical approach for the thermal analysis of the solar collector, the development of a computer program for the evaluation of the important parameters, the experimental test rig, and the measurement of the experimental parameters. A description is given of the optimum operating conditions obtained for the considered system.
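For the thermal analysis step mentioned above, the standard flat-plate collector energy balance (Hottel-Whillier form) is the usual starting point; its use here is an assumption, since the paper's exact model is not stated:

$$Q_u = A_c F_R\left[G(\tau\alpha) - U_L(T_i - T_a)\right],$$

where $A_c$ is the collector area, $F_R$ the heat removal factor, $G$ the incident irradiance, $(\tau\alpha)$ the transmittance-absorptance product, $U_L$ the overall loss coefficient, and $T_i$, $T_a$ the inlet and ambient temperatures.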
Fatigue reassessment for lifetime extension of offshore wind monopile substructures
NASA Astrophysics Data System (ADS)
Ziegler, Lisa; Muskulus, Michael
2016-09-01
Fatigue reassessment is required to decide about lifetime extension of aging offshore wind farms. This paper presents a methodology to identify important parameters to monitor during the operational phase of offshore wind turbines. An elementary effects method is applied to analyze the global sensitivity of residual fatigue lifetimes to environmental, structural, and operational parameters. To this end, renewed lifetime simulations are performed for a case study consisting of a 5 MW turbine on a monopile substructure in 20 m water depth. Results show that corrosion, turbine availability, and turbulence intensity are the most influential parameters. This can vary strongly for other settings (water depth, turbine size, etc.), making case-specific assessments necessary.
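For reference, the elementary effect of parameter $x_i$ on a lifetime output $y$ is (standard Morris screening notation, assumed here):

$$EE_i = \frac{y(x_1,\ldots,x_i+\Delta,\ldots,x_k) - y(x_1,\ldots,x_k)}{\Delta},$$

computed over many random trajectories through the parameter space; the mean of $|EE_i|$ ranks parameter influence, while the standard deviation of $EE_i$ flags nonlinearity and interactions.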
Stafeev, A A; Zinov'ev, G I; Drozdov, D D
2015-01-01
Orthopedic restoration and its related clinical stages (preparation, gingival retraction, impression) are often associated with complications arising in the marginal gingiva. The technology of indirect ceramic restoration requires an assessment of the clinical and morphological parameters of periodontal tissues. The study outlines the correlation between the type of periodontal histology and inflammatory and degenerative complications, established after analysis of the morphofunctional state of periodontal tissue. Results of clinical studies and correlation analysis of the clinical and morphological parameters of the marginal gingiva have shown that important parameters influencing the choice of manufacturing technology are the position of the restoration margin relative to the marginal gingiva and the periodontal morphotype.
Magnetorheological finishing: a perfect solution to nanofinishing requirements
NASA Astrophysics Data System (ADS)
Sidpara, Ajay
2014-09-01
Finishing of optics for different applications is the most important as well as the most difficult step in meeting the specification of optics. Conventional grinding or other polishing processes are not able to reduce surface roughness beyond a certain limit due to the high forces acting on the workpiece, embedded abrasive particles, limited control over the process, etc. The magnetorheological finishing (MRF) process provides a new, efficient, and innovative way to finish optical materials as well as many metals to their desired level of accuracy. This paper provides an overview of the MRF process for different applications, important process parameters, the requirements on the magnetorheological fluid with respect to the workpiece material, and some areas that need to be explored for extending the application of the MRF process.
USDA-ARS?s Scientific Manuscript database
Extreme hydrological processes are often very dynamic and destructive.A better understanding of these processes requires an accurate mapping of key variables that control them. In this regard, soil moisture is perhaps the most important parameter that impacts the magnitude of flooding events as it c...
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
40 CFR 80.65 - General requirements for refiners and importers.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., 1995 through December 31, 1997, either as being subject to the simple model standards, or to the complex model standards; (v) For each of the following parameters, either gasoline or RBOB which meets the...; (B) NOX emissions performance in the case of gasoline certified using the complex model. (C) Benzene...
Geometric model for softwood transverse thermal conductivity. Part I
Hong-mei Gu; Audrey Zink-Sharp
2005-01-01
Thermal conductivity is a very important parameter in determining heat transfer rate and is required for the development of drying models and in industrial operations such as adhesive cure rate. Geometric models for predicting softwood thermal conductivity in the radial and tangential directions were generated in this study based on observation and measurements of wood...
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
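A minimal sketch of the idea, under stated assumptions: a toy stand-in for the population model, uniform parameter samples, a scenario indicator, and random-forest importances as the regression technique. None of this is the authors' actual PVA code or Snowy Plover data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
survival = rng.uniform(0.4, 0.8, n)      # uncertain demographic parameters
fecundity = rng.uniform(0.5, 2.0, n)
impact = rng.integers(0, 2, n)           # 0 = baseline, 1 = impact scenario

# Toy population-growth response: the impact interacts with survival.
y = survival + 0.3 * fecundity - 0.15 * impact * survival + rng.normal(0.0, 0.02, n)

X = np.column_stack([survival, fecundity, impact])
forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
# These importances confound parameter uncertainty with the scenario effect:
print(dict(zip(["survival", "fecundity", "impact"], forest.feature_importances_)))

# Fitting separate regressions per scenario (the separation the method above aims at)
# isolates the contribution of parameter uncertainty within each scenario:
for s in (0, 1):
    m = impact == s
    f_s = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[m][:, :2], y[m])
    print(s, dict(zip(["survival", "fecundity"], f_s.feature_importances_)))
```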
Regularities And Irregularities Of The Stark Parameters For Single Ionized Noble Gases
NASA Astrophysics Data System (ADS)
Peláez, R. J.; Djurovic, S.; Cirišan, M.; Aparicio, J. A.; Mar S.
2010-07-01
Spectroscopy of ionized noble gases has great importance for laboratory and astrophysical plasmas. Generally, spectra of inert gases are important for many areas of physics, for example laser physics, fusion diagnostics, photoelectron spectroscopy, collision physics, astrophysics, etc. Stark halfwidths as well as shifts of spectral lines are usually employed for plasma diagnostic purposes. For example, atomic data for argon, krypton, and xenon will be useful for the spectral diagnostics of ITER. In addition, software used for stellar atmosphere simulation, like TMAP and SMART, requires a large amount of atomic and spectroscopic data. Availability of these parameters will be useful for further development of stellar atmosphere and evolution models. Stark parameter data of spectral lines can also be useful for verification of theoretical calculations and investigation of regularities and systematic trends of these parameters within a multiplet, supermultiplet, or transition array. In recent years, different trends and regularities of Stark parameters (halfwidths and shifts of spectral lines) have been analyzed. The conditions related to the atomic structure of the element as well as the plasma conditions are responsible for regular or irregular behavior of the Stark parameters. The absence of very close perturbing levels makes Ne II a good candidate for analysis of the regularities. The other two considered elements, Kr II and Xe II, have complex spectra with strong perturbations, and in some cases irregularities in the Stark parameters appear. In this work we analyze the influence of the perturbations on the Stark parameters within the multiplets.
The conception of fashion products for children: reflections on safety parameters.
Prete, Lígia Gomes Pereira; Emidio, Lucimar de Fátima Bilmaia; Martins, Suzana Barreto
2012-01-01
The purpose of this study is to reflect on safety requirements for children's clothing, based on the standardization proposed by the ABNT (Technical Standardization Brazilian Association). Bibliographic research and case studies were considered in writing this work. We also discuss the importance of adding other safety requirements to the current standardization, as well as extending the age range currently specified by the ABNT, following the children's clothing safety standards of Portugal and the United States, also discussed here.
The feasibility of inflight measurement of lightning strike parameters
NASA Technical Reports Server (NTRS)
Crouch, K. E.; Plumer, J. A.
1978-01-01
The appearance of nonmetallic structural materials and microelectronics in aircraft design has resulted in a need for better knowledge of hazardous environments such as lightning and the effects these environments have on the aircraft. This feasibility study was performed to determine the lightning parameters in the greatest need of clarification and the performance requirements of equipment necessary to sense and record these parameters on an instrumented flight research aircraft. It was found that electric field rate of change, lightning currents, and induced voltages in aircraft wiring are the parameters of greatest importance. Flat-plate electric field sensors and resistive current shunts are proposed for electric field and current sensors, to provide direct measurements of these parameters. Six-bit analog-to-digital signal conversion at a 5 nanosecond sampling interval, short-term storage of 85,000 bits, and long-term storage of 5 × 10^7 bits of electric field, current, and induced voltage data on the airplane are proposed, with readout and further analysis to be accomplished on the ground. A NASA F-106B was found to be suitable for use as the research aircraft because it has a minimum number of possible lightning attachment points, space for the necessary instrumentation, and appears to meet operational requirements. Safety considerations are also presented.
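To put the proposed digitizer numbers in perspective, a back-of-envelope check (my arithmetic, not figures from the study): a 5 ns sampling interval corresponds to $2\times10^{8}$ samples/s, which at 6 bits per sample is a raw rate of $1.2\times10^{9}$ bits/s; the 85,000-bit short-term store therefore holds about 14,000 samples, or roughly 71 µs of continuous record around a strike.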
Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.
2001-03-08
The Enantiomeric Ratio (E) of an enzyme, acting as a specific catalyst in the resolution of enantiomers, is an important parameter in the quantitative description of these chiral resolution processes. In the present work, two novel methods, hereby called Methods I and II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed by using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
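For reference, E as used in chiral resolutions is the ratio of the specificity constants of the two enantiomers (the R/S labels here are notational assumptions):

$$E = \frac{(V_m/K_m)_R}{(V_m/K_m)_S},$$

which is why estimating Vm and Km for each enantiomer, as Methods I and II do, suffices to determine E.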
Data quality system using reference dictionaries and edit distance algorithms
NASA Astrophysics Data System (ADS)
Karbarz, Radosław; Mulawka, Jan
2015-09-01
In the real art of management it is important to make smart decisions, which in most cases is not a trivial task. Those decisions may lead to determination of production level, funds allocation for investments, etc. Most of the parameters in the decision-making process, such as interest rate, goods value, or exchange rate, may change. It is well known that these parameters in decision-making are based on the data contained in datamarts or a data warehouse. However, if the information derived from the processed data sets is the basis for the most important management decisions, it is required that the data be accurate, complete, and current. In order to achieve high quality data and to gain measurable business benefits from them, a data quality system should be used. The article describes the approach to the problem, shows the algorithms in detail and their usage. Finally, the test results are provided. The test results show the best algorithms (in terms of quality and quantity) for different parameters and data distributions.
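A minimal sketch of the core mechanism described above — matching a dirty value against a reference dictionary by Levenshtein edit distance. The dictionary entries and threshold are illustrative, not taken from the article.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute, cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if equal)
        prev = curr
    return prev[-1]

def clean(value: str, dictionary: list[str], max_dist: int = 2) -> str:
    """Replace value by the nearest reference-dictionary entry within max_dist."""
    best = min(dictionary, key=lambda ref: levenshtein(value.lower(), ref.lower()))
    return best if levenshtein(value.lower(), best.lower()) <= max_dist else value

cities = ["Warszawa", "Krakow", "Gdansk"]      # illustrative reference dictionary
print(clean("Warsawa", cities))                # -> "Warszawa" (distance 1)
```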
NASA Astrophysics Data System (ADS)
Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.
2017-05-01
The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^γ source model with γ > 2 to be well fit, and small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface changes with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that in one case suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in fluid pressure diffusion.
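The ω^γ source model referred to above generalizes the Brune spectrum; a common parameterization (notation assumed here) is

$$\Omega(f) = \frac{\Omega_0}{1+(f/f_c)^{\gamma}},$$

with long-period plateau $\Omega_0$ and corner frequency $f_c$; $\gamma = 2$ recovers the standard $\omega^2$ model, so the finding $\gamma > 2$ implies spectra falling off faster than $\omega^2$ at high frequency.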
CBM Resources/reserves classification and evaluation based on PRMS rules
NASA Astrophysics Data System (ADS)
Fa, Guifang; Yuan, Ruie; Wang, Zuoqian; Lan, Jun; Zhao, Jian; Xia, Mingjun; Cai, Dechao; Yi, Yanjing
2018-02-01
This paper introduces a set of definitions and classification requirements for coalbed methane (CBM) resources/reserves, based on the Petroleum Resources Management System (PRMS). The basic CBM classification criteria for 1P, 2P, 3P and contingent resources are put forward from the following aspects: ownership, project maturity, drilling requirements, testing requirements, economic requirements, infrastructure and market, timing of production and development, and so on. The volumetric method is used to evaluate the OGIP, with a focus on analysis of the key parameters and the principles of parameter selection, such as net thickness, ash and water content, coal rank and composition, coal density, cleat volume and saturation, and adsorbed gas content. A dynamic method is used to assess the reserves and recovery efficiency. Since differences in rock and fluid properties, displacement mechanism, completion and operating practices, and wellbore type result in different production curve characteristics, the factors affecting production behavior, the dewatering period, pressure build-up, and interference effects were analyzed. The conclusions and results of the paper can be used as important references for reasonable assessment of CBM resources/reserves.
Practical aspects of modeling aircraft dynamics from flight data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1984-01-01
The purpose of parameter estimation, a subset of system identification, is to estimate the coefficients (such as stability and control derivatives) of the aircraft differential equations of motion from sampled measured dynamic responses. In the past, the primary reason for estimating stability and control derivatives from flight tests was to make comparisons with wind tunnel estimates. As aircraft became more complex, and as flight envelopes were expanded to include flight regimes that were not well understood, new requirements for the derivative estimates evolved. For many years, the flight determined derivatives were used in simulations to aid in flight planning and in pilot training. The simulations were particularly important in research flight test programs in which an envelope expansion into new flight regimes was required. Parameter estimation techniques for estimating stability and control derivatives from flight data became more sophisticated to support the flight test programs. As knowledge of these new flight regimes increased, more complex aircraft were flown. Much of this increased complexity was in sophisticated flight control systems. The design and refinement of the control system required higher fidelity simulations than were previously required.
Schumacher, Carsten; Eismann, Hendrik; Sieg, Lion; Friedrich, Lars; Scheinichen, Dirk; Vondran, Florian W R; Johanning, Kai
2018-01-01
Liver transplantation is a complex intervention, and early anticipation of personnel and logistic requirements is of great importance. Early identification of high-risk patients could prove useful. We therefore evaluated prognostic values of recipient parameters commonly available in the early preoperative stage regarding postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. All adult patients undergoing first liver transplantation at Hannover Medical School between January 2005 and December 2010 were included in this retrospective study. Demographic, clinical, and laboratory data as well as clinical courses were recorded. Prognostic values regarding 30- and 90-day outcomes were evaluated by uni- and multivariate statistical tests. Identified risk parameters were used to calculate risk scores. There were 426 patients (40.4% female) included, with a mean age of 48.6 (11.9) years. The absolute 30-day mortality rate was 9.9%, and the absolute 90-day mortality rate was 13.4%. Preoperative leukocyte count >5200/μL, platelet count <91 000/μL, and creatinine values ≥77 μmol/L were relevant risk factors for both observation periods (P < .05, respectively). A score based on these factors significantly differentiated between groups of varying postoperative outcomes and intraoperative transfusion requirements (P < .05, respectively). A score based on preoperative creatinine, leukocyte, and platelet values allowed early estimation of postoperative 30- and 90-day outcomes and intraoperative transfusion requirements in liver transplantation. Results might help to improve timely logistic and personnel strategies.
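A minimal sketch of how such a score could be computed from the three thresholds reported above; treating the score as a simple count of risk factors present is my reading, not a formula given in the abstract.

```python
def transplant_risk_score(leukocytes_per_ul: float,
                          platelets_per_ul: float,
                          creatinine_umol_l: float) -> int:
    """Count of the three preoperative risk factors reported in the study.

    Thresholds come from the abstract; summing the factors present is an
    assumed interpretation of "a score based on these factors".
    """
    score = 0
    score += leukocytes_per_ul > 5200      # leukocyte count > 5200/uL
    score += platelets_per_ul < 91_000     # platelet count < 91 000/uL
    score += creatinine_umol_l >= 77       # creatinine >= 77 umol/L
    return score

print(transplant_risk_score(6100, 85_000, 80))  # -> 3 (all three risk factors present)
```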
Airbreathing engine selection criteria for SSTO propulsion system
NASA Astrophysics Data System (ADS)
Ohkami, Yoshiaki; Maita, Masataka
1995-02-01
This paper presents airbreathing engine selection criteria to be applied to the propulsion system of a Single Stage To Orbit (SSTO) vehicle. To establish the criteria, a relation among three major parameters, i.e., delta-V capability, weight penalty, and effective specific impulse of the engine subsystem, is derived and compared to these parameters for the LH2/LOX rocket engine. The effective specific impulse is a function of the engine I_sp and the vehicle thrust-to-drag ratio, which is approximated by a function of the vehicle velocity. The weight penalty includes the engine dry weight and the cooling subsystem weight. The delta-V capability is defined by the velocity region starting from the minimum operating velocity up to the maximum velocity. The vehicle feasibility is investigated in terms of the structural and propellant weights, which requires an iteration process adjusting the system parameters. The system parameters are computed by iteration based on the Newton-Raphson method. It has been concluded that performance in the higher velocity region is extremely important, so that the airbreathing engines are required to operate beyond the velocity equivalent to the rocket engine exhaust velocity (approximately 4500 m/s).
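A common way to fold airframe drag into engine performance, consistent with the description above (the paper's exact functional form is not given, so this is an assumption), is

$$I_{sp,\mathrm{eff}} = I_{sp}\left(1-\frac{D}{T}\right),$$

where $T/D$ is the vehicle thrust-to-drag ratio; as $T/D \to 1$ the effective specific impulse collapses to zero regardless of engine $I_{sp}$, which is why performance in the high-velocity region, where drag is large, dominates the selection criteria.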
NASA Astrophysics Data System (ADS)
Wibowo, Y. T.; Baskoro, S. Y.; Manurung, V. A. T.
2018-02-01
Plastic-based products are spread all over the world in many aspects of life. Their ability to substitute for other materials is growing stronger and wider, and the use of plastic materials is increasing and has become unavoidable. Plastic-based mass production requires an injection process as well as a mold. The milling of plastic mold steel was done using HSS end mill cutting tools, which are widely used in small and medium enterprises because they can be resharpened and are relatively inexpensive. Studies on the effect of tool geometry state that it has an important effect on quality improvement. Cutting speed, feed rate, depth of cut, and tool radius are input parameters, in addition to the tool path strategy. This paper aims to investigate input parameters and cutting tool behavior under several different tool path strategies. For experimental efficiency, the Taguchi method and ANOVA were used. The responses studied are surface roughness and cutting behavior. By achieving the expected quality, no additional process is required. Finally, the optimal combination of machining parameters delivers the expected roughness and a totally reduced cutting time. However, SMEs do not currently make optimal use of these data for cost reduction.
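A minimal sketch of the Taguchi response analysis mentioned above, using the smaller-the-better signal-to-noise ratio appropriate for surface roughness. The run labels and roughness values are illustrative placeholders, not the paper's measurements.

```python
import numpy as np

# Surface roughness Ra (um) from three replicate cuts per run of a small
# two-factor design (cutting speed v, feed rate f) -- illustrative numbers.
runs = {
    "v=low,  f=low":  [1.82, 1.76, 1.90],
    "v=low,  f=high": [2.45, 2.51, 2.38],
    "v=high, f=low":  [1.21, 1.18, 1.27],
    "v=high, f=high": [1.65, 1.72, 1.60],
}

# Smaller-the-better S/N ratio used for roughness: SN = -10 log10(mean(y^2)).
for name, y in runs.items():
    y = np.asarray(y)
    sn = -10 * np.log10(np.mean(y ** 2))
    print(f"{name}: S/N = {sn:5.2f} dB")  # highest S/N = best (lowest) roughness
```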
A Fast Evaluation Method for Energy Building Consumption Based on the Design of Experiments
NASA Astrophysics Data System (ADS)
Belahya, Hocine; Boubekri, Abdelghani; Kriker, Abdelouahed
2017-08-01
The building sector is a major energy consumer, accounting for 42% of consumption in Algeria. The need for energy has continued to grow in an inordinate way, due to the lack of legislation on energy performance in this large consuming sector. Another reason is the simultaneous change in users' requirements to maintain their comfort, especially in summer in the dry lands of southern Algeria, where the town of Ouargla is a typical example; this leads to a large amount of electricity consumption through the use of air conditioning. To achieve a high-performance building envelope, optimization of the major envelope parameters is required; design of experiments (DOE) can determine the most influential parameters and eliminate the less important ones. Building studies are often complex and time-consuming due to the large number of parameters to consider. This study focuses on reducing the computing time and determining the major parameters of building energy consumption, such as building area, shape factor, orientation, and wall-to-window ratio, in order to propose models that minimize the seasonal energy consumption due to air-conditioning needs.
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and levels of robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied are able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
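One similarity metric of the kind used above to compare automatic with manual segmentations is the Dice coefficient; a minimal sketch (Dice is a standard choice, though the paper may use additional metrics):

```python
import numpy as np

def dice(auto: np.ndarray, manual: np.ndarray) -> float:
    """Dice similarity between binary automatic and manual segmentation masks."""
    auto, manual = auto.astype(bool), manual.astype(bool)
    denom = auto.sum() + manual.sum()
    return 2.0 * np.logical_and(auto, manual).sum() / denom if denom else 1.0

# Illustrative masks: a manual delineation and a slightly shifted automatic result.
manual = np.zeros((64, 64), bool); manual[16:48, 16:48] = True
auto = np.zeros((64, 64), bool);   auto[20:52, 16:48] = True
print(f"Dice = {dice(auto, manual):.3f}")  # 1.0 = perfect overlap, 0.0 = none
```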
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large amounts of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially for all three model outputs, to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, e.g., are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
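The screening-then-Sobol workflow described above can be sketched with the SALib package; the three parameter names, bounds, and the surrogate model below are placeholders for the ~25 screened NOAH-MP parameters, not the actual model.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for one NOAH-MP output (e.g., latent heat) over 3 screened parameters.
problem = {
    "num_vars": 3,
    "names": ["soil_b", "leaf_area", "drainage_coef"],   # illustrative names
    "bounds": [[2.0, 12.0], [0.5, 6.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)        # Saltelli sampling scheme for Sobol indices
Y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2]     # surrogate "model" evaluations

Si = sobol.analyze(problem, Y)            # first-order and total-order indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")
```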
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understanding the dynamic behavior and regulation of the biochemical metabolisms and pathways found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved. Computational approaches are required to estimate these parameters. The estimation is converted into a multimodal optimization problem that requires a global optimization algorithm able to avoid local solutions, which can lead to a bad fit when calibrating a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of the search for the global optimum (the best set of kinetic parameter values). The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated on two aspartate pathways obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in small-scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
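The paper's IPSOGSA hybridizes particle swarm optimization with a gravitational search step; the sketch below shows only a plain PSO backbone applied to a toy kinetic-parameter fit. All coefficients (inertia, acceleration constants) and the toy model are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(0.0, 10.0), rng=None):
    """Plain particle swarm optimization (the gravitational search step of
    IPSOGSA is deliberately omitted in this sketch)."""
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # candidate kinetic parameters
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy usage: recover two "rate constants" from noiseless decay data.
true = np.array([2.0, 0.5])
t = np.linspace(0, 5, 50)
data = true[0] * np.exp(-true[1] * t)
sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)
print(pso_minimize(sse, dim=2))
```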
Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.
2013-01-01
Cytochromes P450 (CYP for isoforms) play a central role in biological processes, especially the metabolism of chiral molecules; thus, the development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing the distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among the different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and the inclusion of hydrogens yielded the best artificial neural networks. Their training also required resolution of the poorly distributed output catalytic parameters using a Box-Cox transformation. End-point leave-one-out cross correlations of the best neural networks revealed that predictions for the individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, the neural networks correctly predicted the enantioselectivity and catalytic parameters comparable to those measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
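The Box-Cox normalization step mentioned above can be reproduced with standard tools; below is a minimal sketch on synthetic, skewed data (not the study's actual kcat values).

```python
import numpy as np
from scipy import stats

# Skewed, strictly positive "catalytic parameters" (synthetic, for illustration only).
kcat = np.random.default_rng(0).lognormal(mean=1.0, sigma=1.2, size=200)

# Box-Cox finds the power transform that best normalizes the distribution,
# which is how poorly distributed network targets can be resolved before training.
kcat_bc, lam = stats.boxcox(kcat)
print(f"lambda = {lam:.3f}, skewness before/after: "
      f"{stats.skew(kcat):.2f} / {stats.skew(kcat_bc):.2f}")
```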
Lindqvist, R
2006-07-01
Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.
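One common time-to-detection estimator, of the general kind described above, fits detection time against the logarithm of the initial count: under exponential growth the slope is -1/mu_max. The numbers below are made up for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical detection times (h) for serial 10-fold dilutions; with exponential
# growth, detection time is linear in ln(initial count) with slope -1/mu_max.
ln_N0 = np.log(10.0 ** np.array([6, 5, 4, 3, 2]))   # initial counts per ml
t_detect = np.array([4.1, 6.0, 7.8, 9.9, 11.7])     # made-up Bioscreen readings

slope, intercept = np.polyfit(ln_N0, t_detect, 1)
print(f"mu_max ~ {-1.0 / slope:.2f} per hour")
```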
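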
Modelling of industrial robot in LabView Robotics
NASA Astrophysics Data System (ADS)
Banas, W.; Cwikła, G.; Foit, K.; Gwiazda, A.; Monica, Z.; Sekala, A.
2017-08-01
Currently, many models of industrial systems, including robots, can be found. These models differ from each other not only in the accuracy of the representation of parameters but also in the scope of the representation. For example, CAD models describe the geometry of the robot, and some also provide mass parameters such as mass, center of gravity, and moment of inertia. These models are used in the design of robotic lines and cells. Systems for off-line programming also use such models, and many of them can be exported to CAD. It is important to note that models for off-line programming describe not only the geometry but also contain the information necessary to create a program for the robot; export from CAD to an off-line programming system therefore requires additional information. These models are used for static determination of the reachability of points and for collision testing. This is enough to generate a program for the robot, and even to check the interaction of elements of the production line or robotic cell. Mathematical models allow the kinematic and dynamic properties of robot motion to be studied. In these models the geometry is not so important, so only selected parameters are used, such as the length of the robot arm, the center of gravity, and the moment of inertia. These parameters are introduced into the equations of motion of the robot, and the motion parameters are determined.
Stochastic Inversion of 2D Magnetotelluric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we consider the locations of interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structures. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation; Software requirements: C and Fortran; Operating systems/versions: Linux/Unix or Windows
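A generic random-walk Metropolis sampler of the kind used to explore such posteriors is sketched below; the Gaussian toy posterior stands in for the actual MT forward model and geostatistical prior, which are not reproduced here.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=20000, step=0.1, rng=None):
    """Random-walk Metropolis sampler (generic sketch; the paper's MCMC explores
    interface locations and resistivities with a geostatistical prior)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, float)
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: the posterior samples also yield per-parameter uncertainty (std dev),
# which is the key benefit highlighted in the abstract.
log_post = lambda x: -0.5 * np.sum((x - np.array([1.0, -2.0])) ** 2)
chain = metropolis(log_post, x0=[0.0, 0.0])
print(chain.mean(axis=0), chain.std(axis=0))
```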
A New Calibration Method for Commercial RGB-D Sensors
Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu
2017-01-01
Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D data are required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is needed. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges. PMID:28538695
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
... shape. Such fruit, if it is wider than it is tall, is considered to be badly misshapen. Identification... the current parameters for misshapen fruit; from ``fruit that is not wider than tall'' to fruit that is a certain percentage wider than it is tall. This alternative would allow for flatter/wider fruit...
Molybdenum disulfide and water interaction parameters
NASA Astrophysics Data System (ADS)
Heiranian, Mohammad; Wu, Yanbin; Aluru, Narayana R.
2017-09-01
Understanding the interaction between water and molybdenum disulfide (MoS2) is of crucial importance for investigating the physics of various applications involving MoS2-water interfaces. An accurate force field is required to describe water-MoS2 interactions. In this work, water-MoS2 force field parameters are derived using the high-accuracy random phase approximation (RPA) method and validated by comparison with experiments. The parameters obtained from the RPA method yield water-MoS2 interface properties (solid-liquid work of adhesion) in good agreement with experimental measurements. An accurate description of the MoS2-water interaction will facilitate the study of MoS2 in applications such as DNA sequencing, sea water desalination, and power generation.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
François, Clément; Tanasescu, Adrian; Lamy, François-Xavier; Despiegel, Nicolas; Falissard, Bruno; Chalem, Ylana; Lançon, Christophe; Llorca, Pierre-Michel; Saragoussi, Delphine; Verpillat, Patrice; Wade, Alan G; Zighed, Djamel A
2017-01-01
Background and objective: Automated healthcare databases (AHDBs) are an important data source for real-life drug and healthcare use. In the field of depression, the lack of detailed clinical data requires the use of binary proxies with important limitations. The study objective was to create a Depressive Health State Index (DHSI) as a continuous health state measure for depressed patients using the data available in an AHDB. Methods: The study was based on a historical cohort design using the UK Clinical Practice Research Datalink (CPRD). Depressive episodes (a depression diagnosis with an antidepressant prescription) were used to create the DHSI through 6 successive steps: (1) defining the study design; (2) identifying constituent parameters; (3) assigning relative weights to the parameters; (4) ranking based on the presence of parameters; (5) standardizing the rank of the DHSI; (6) developing a regression model to derive the DHSI in any other sample. Results: The DHSI ranged from 0 (worst) to 100 (best health state) and comprised 29 parameters. The proportion of depressive episodes with a remission proxy increased with DHSI quartiles. Conclusion: A continuous outcome measure for depressed patients treated with antidepressants was created in an AHDB using several different variables, allowing more granularity than the currently used proxies.
NASA Astrophysics Data System (ADS)
Huang, Guoqin; Zhang, Meiqin; Huang, Hui; Guo, Hua; Xu, Xipeng
2018-04-01
Circular sawing is an important method for the processing of natural stone. The ability to predict sawing power is important in the optimisation, monitoring and control of the sawing process. In this paper, a predictive model (PFD) of sawing power, based on the tangential force distribution at the sawing contact zone, was proposed, experimentally validated and modified. With regard to the influence of sawing speed on tangential force distribution, the modified PFD (MPFD) performed with high predictive accuracy across a wide range of sawing parameters, including sawing speed. The mean maximum absolute error rate was within 6.78%, and the maximum absolute error rate was within 11.7%. The practicability of predicting sawing power with the MPFD from few initial experimental samples was proved in case studies. On the premise of high sample measurement accuracy, only two samples are required for a fixed sawing speed. The feasibility of applying the MPFD to optimise sawing parameters while lowering the energy consumption of the sawing system was also validated. The case study shows that energy use was reduced by 28% by optimising the sawing parameters. The MPFD model can be used to predict sawing power, optimise sawing parameters and control energy use.
What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.
2012-12-01
A common feature of model inter-comparison efforts is that the base-year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for the different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the differences in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources, and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.
The use of Landsat for monitoring water parameters in the coastal zone
NASA Technical Reports Server (NTRS)
Bowker, D. E.; Witte, W. G.
1977-01-01
Landsats 1 and 2 have been successful in detecting and quantifying suspended sediment and several other important parameters in the coastal zone, including chlorophyll, particles, alpha (light transmission), tidal conditions, acid and sewage dumps, and in some instances oil spills. When chlorophyll a is present in detectable quantities, however, it is shown to interfere with the measurement of sediment. The Landsat banding problem impairs the instrument resolution and places a requirement on the sampling program to collect surface data from a sufficiently large area. A sampling method which satisfies this condition is demonstrated.
The State-of-the-Art of Materials Technology Used for Fossil and Nuclear Power Plants in China
NASA Astrophysics Data System (ADS)
Weng, Yuqing
Drawing on the development of energy in China during the past 30 years, this paper clarifies that high-steam-parameter ultra-supercritical (USC) coal-fired power plants and 1000 MW nuclear power plants are the most important means of optimizing the energy structure and achieving national goals for energy saving and CO2 emission reduction in China. Additionally, the requirements on materials technology in high-steam-parameter USC coal-fired power plants and 1000 MW nuclear power plants, current research, and major developments in relevant materials technology in China are briefly described.
Liwarska-Bizukojc, Ewa; Biernacki, Rafal
2010-10-01
In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model implemented in the BioWin software and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S(i,j)) and the mean square sensitivity measure (delta(j)(msqr)). It turns out that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S(i,j) calculations. Half of the influential parameters are associated with the growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for those parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
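A minimal sketch of the normalised sensitivity coefficient S(i,j) via central differences follows; the `model` function is a generic stand-in, since the BioWin AS model itself is not reproduced here.

```python
import numpy as np

def normalized_sensitivity(model, params, i, j, rel_step=1e-3):
    """Finite-difference estimate of S(i,j) = (dy_i/dp_j) * (p_j / y_i),
    the normalised sensitivity coefficient used to rank parameters."""
    p = np.asarray(params, float)
    h = rel_step * p[j]
    p_up, p_dn = p.copy(), p.copy()
    p_up[j] += h
    p_dn[j] -= h
    dy = (model(p_up)[i] - model(p_dn)[i]) / (2 * h)   # central difference
    return dy * p[j] / model(p)[i]

# Toy usage: y0 = p0^2 * p1 gives S(0,0) = 2 and S(0,1) = 1 exactly.
model = lambda p: np.array([p[0] ** 2 * p[1]])
print(normalized_sensitivity(model, [3.0, 5.0], i=0, j=0),
      normalized_sensitivity(model, [3.0, 5.0], i=0, j=1))
```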
Adjoint-Based Climate Model Tuning: Application to the Planet Simulator
NASA Astrophysics Data System (ADS)
Lyu, Guokun; Köhl, Armin; Matei, Ion; Stammer, Detlef
2018-01-01
The adjoint method is used to calibrate the medium complexity climate model "Planet Simulator" through parameter estimation. Identical twin experiments demonstrate that this method can retrieve default values of the control parameters when using a long assimilation window of the order of 2 months. Chaos synchronization through nudging, required to overcome limits in the temporal assimilation window in the adjoint method, is employed successfully to reach this assimilation window length. When assimilating ERA-Interim reanalysis data, the observations of air temperature and the radiative fluxes are the most important data for adjusting the control parameters. The global mean net longwave fluxes at the surface and at the top of the atmosphere are significantly improved by tuning two model parameters controlling the absorption of clouds and water vapor. The global mean net shortwave radiation at the surface is improved by optimizing three model parameters controlling cloud optical properties. The optimized parameters improve the free model (without nudging terms) simulation in a way similar to that in the assimilation experiments. Results suggest a promising way for tuning uncertain parameters in nonlinear coupled climate models.
NASA Astrophysics Data System (ADS)
Deepu, M. J.; Farivar, H.; Prahl, U.; Phanikumar, G.
2017-04-01
Dual-phase steels are versatile advanced high-strength steels used for sheet metal applications in the automotive industry. They also have potential for application in bulk components such as gears. Inter-critical annealing of dual-phase steels is one of the crucial steps that determine the mechanical properties of the material. Selection of the process parameters for inter-critical annealing, in particular the inter-critical annealing temperature and time, is important, as it plays a major role in determining the volume fractions of ferrite and martensite, which in turn determine the mechanical properties. Selecting these process parameters to obtain a particular required mechanical property requires a large number of experimental trials. Simulation of microstructure evolution and virtual compression/tensile testing can help reduce the number of such trials. In the present work, phase field modeling implemented in the commercial software Micress® is used to predict the microstructure evolution during inter-critical annealing. Virtual compression tests are performed on the simulated microstructure using the finite element method implemented in commercial software to obtain the effective flow curve of the macroscopic material. The flow curves obtained by simulation are validated experimentally with physical simulation in Gleeble® and compared with those obtained using the linear rule of mixtures, written out below. The methodology could be used to determine the inter-critical annealing process parameters required for achieving a particular flow curve.
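The linear rule of mixtures used above as the comparison baseline is commonly written as follows (the symbols are generic, not the paper's notation):

```latex
% Linear rule of mixtures for the two-phase (ferrite + martensite) flow stress,
% with f_m the martensite volume fraction set by the inter-critical annealing:
\[
  \sigma_{\mathrm{DP}}(\varepsilon) \;=\; f_m\,\sigma_m(\varepsilon)
  \;+\; \bigl(1 - f_m\bigr)\,\sigma_f(\varepsilon)
\]
```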
NASA Astrophysics Data System (ADS)
Wong, T. E.; Noone, D. C.; Kleiber, W.
2014-12-01
The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance, but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods to maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models for further constraining ecosystem processes. We present results from a stable water isotopically-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov Chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters as well as inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and specific challenges which arise in calibrating models with a large number of parameters.
Improved Strength and Damage Modeling of Geologic Materials
NASA Astrophysics Data System (ADS)
Stewart, Sarah; Senft, Laurel
2007-06-01
Collisions and impact cratering events are important processes in the evolution of planetary bodies. The time and length scales of planetary collisions, however, are inaccessible in the laboratory and require the use of shock physics codes. We present the results from a new rheological model for geological materials implemented in the CTH code [1]. The `ROCK' model includes pressure, temperature, and damage effects on strength, as well as acoustic fluidization during impact crater collapse. We demonstrate that the model accurately reproduces final crater shapes, tensile cracking, and damaged zones from laboratory to planetary scales. The strength model requires basic material properties; hence, the input parameters may be benchmarked to laboratory results and extended to planetary collision events. We show the effects of varying material strength parameters, which are dependent on both scale and strain rate, and discuss choosing appropriate parameters for laboratory and planetary situations. The results are a significant improvement in models of continuum rock deformation during large scale impact events. [1] Senft, L. E., Stewart, S. T. Modeling Impact Cratering in Layered Surfaces, J. Geophys. Res., submitted.
Brady, Oliver J.; Godfray, H. Charles J.; Tatem, Andrew J.; Gething, Peter W.; Cohen, Justin M.; McKenzie, F. Ellis; Perkins, T. Alex; Reiner, Robert C.; Tusting, Lucy S.; Sinka, Marianne E.; Moyes, Catherine L.; Eckhoff, Philip A.; Scott, Thomas W.; Lindsay, Steven W.; Hay, Simon I.; Smith, David L.
2016-01-01
Background Major gains have been made in reducing malaria transmission in many parts of the world, principally by scaling-up coverage with long-lasting insecticidal nets and indoor residual spraying. Historically, choice of vector control intervention has been largely guided by a parameter sensitivity analysis of George Macdonald's theory of vectorial capacity that suggested prioritizing methods that kill adult mosquitoes. While this advice has been highly successful for transmission suppression, there is a need to revisit these arguments as policymakers in certain areas consider which combinations of interventions are required to eliminate malaria. Methods and Results Using analytical solutions to updated equations for vectorial capacity we build on previous work to show that, while adult killing methods can be highly effective under many circumstances, other vector control methods are frequently required to fill effective coverage gaps. These can arise due to pre-existing or developing mosquito physiological and behavioral refractoriness but also due to additive changes in the relative importance of different vector species for transmission. Furthermore, the optimal combination of interventions will depend on the operational constraints and costs associated with reaching high coverage levels with each intervention. Conclusions Reaching specific policy goals, such as elimination, in defined contexts requires increasingly non-generic advice from modelling. Our results emphasize the importance of measuring baseline epidemiology, intervention coverage, vector ecology and program operational constraints in predicting expected outcomes with different combinations of interventions. PMID:26822603
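The vectorial capacity expression behind Macdonald's argument, referenced above, is classically written as follows (symbols per the standard Macdonald/Garrett-Jones formulation):

```latex
% Classical vectorial capacity: m mosquitoes per human, a human-biting rate,
% p daily survival probability, n extrinsic incubation period in days.
\[
  C \;=\; \frac{m\,a^{2}\,p^{n}}{-\ln p}
\]
% The p^n / (-ln p) factor is why adult-killing methods rank so highly in the
% sensitivity analysis: daily survival p enters both exponentially and through
% the expected infectious lifespan.
```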
A review of parameters and heuristics for guiding metabolic pathfinding.
Kim, Sarah M; Peña, Matthew I; Moll, Mark; Bennett, George N; Kavraki, Lydia E
2017-09-15
Recent developments in metabolic engineering have led to the successful biosynthesis of valuable products, such as the precursor of the antimalarial compound, artemisinin, and opioid precursor, thebaine. Synthesizing these traditionally plant-derived compounds in genetically modified yeast cells introduces the possibility of significantly reducing the total time and resources required for their production, and in turn, allows these valuable compounds to become cheaper and more readily available. Most biosynthesis pathways used in metabolic engineering applications have been discovered manually, requiring a tedious search of existing literature and metabolic databases. However, the recent rapid development of available metabolic information has enabled the development of automated approaches for identifying novel pathways. Computer-assisted pathfinding has the potential to save biochemists time in the initial discovery steps of metabolic engineering. In this paper, we review the parameters and heuristics used to guide the search in recent pathfinding algorithms. These parameters and heuristics capture information on the metabolic network structure, compound structures, reaction features, and organism-specificity of pathways. No one metabolic pathfinding algorithm or search parameter stands out as the best to use broadly for solving the pathfinding problem, as each method and parameter has its own strengths and shortcomings. As assisted pathfinding approaches continue to become more sophisticated, the development of better methods for visualizing pathway results and integrating these results into existing metabolic engineering practices is also important for encouraging wider use of these pathfinding methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yuanyuan; Diao, Ruisheng; Huang, Renke
Maintaining good quality of power plant stability models is of critical importance for ensuring the secure and economic operation and planning of today's power grid, with its increasingly stochastic and dynamic behavior. According to North American Electric Reliability Corporation (NERC) standards, all generators in North America with capacities larger than 10 MVA are required to have their models validated every five years. Validation is quite costly and can significantly affect the revenue of generator owners, because traditional staged testing requires generators to be taken offline. Over the past few years, validating and calibrating parameters using online measurements, including phasor measurement units (PMUs) and digital fault recorders (DFRs), has been proven to be a cost-effective approach. In this paper, an innovative open-source tool suite is presented for validating power plant models using the PPMV tool, identifying bad parameters with trajectory sensitivity analysis, and finally calibrating parameters using an ensemble Kalman filter (EnKF) based algorithm. The architectural design and the detailed procedures to run the tool suite are presented, with results of tests on a realistic hydro power plant using PMU measurements for 12 different events. The calibrated parameters of the machine, exciter, governor and PSS models demonstrate much better performance than the original models for all the events and show the robustness of the proposed calibration algorithm.
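A single EnKF analysis step of the general kind used for such parameter calibration can be sketched as follows. This is a generic perturbed-observation update, not the tool suite's implementation; the toy ensemble and observation operator are assumptions for illustration.

```python
import numpy as np

def enkf_update(X, d, H, R, rng=None):
    """One ensemble Kalman filter analysis step (generic sketch).
    X: (n_params, n_ens) parameter ensemble, d: (n_obs,) measurements,
    H: (n_obs, n_params) observation operator, R: (n_obs, n_obs) obs covariance."""
    rng = np.random.default_rng(rng)
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                        # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    D = d[:, None] + rng.multivariate_normal(np.zeros(len(d)), R, n_ens).T
    return X + K @ (D - H @ X)                       # shift ensemble toward data

# Toy usage: calibrate two "machine parameters" observed directly through H.
rng = np.random.default_rng(1)
X = rng.normal([[1.0], [4.0]], 1.0, size=(2, 100))   # prior parameter ensemble
H, R = np.eye(2), 0.05 * np.eye(2)
print(enkf_update(X, d=np.array([2.0, 3.0]), H=H, R=R).mean(axis=1))
```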
Robust Online Hamiltonian Learning
NASA Astrophysics Data System (ADS)
Granade, Christopher; Ferrie, Christopher; Wiebe, Nathan; Cory, David
2013-05-01
In this talk, we introduce a machine-learning algorithm for the problem of inferring the dynamical parameters of a quantum system, and discuss this algorithm in the example of estimating the precession frequency of a single qubit in a static field. Our algorithm is designed with practicality in mind by including parameters that control trade-offs between the requirements on computational and experimental resources. The algorithm can be implemented online, during experimental data collection, or can be used as a tool for post-processing. Most importantly, our algorithm is capable of learning Hamiltonian parameters even when the parameters change from experiment to experiment, and also when additional noise processes are present and unknown. Finally, we discuss the performance of our algorithm by appeal to the Cramer-Rao bound. This work was financially supported by the Canadian government through NSERC and CERC and by the United States government through DARPA. NW would like to acknowledge funding from USARO-DTO.
Research of human kidney thermal properties for the purpose of cryosurgery
NASA Astrophysics Data System (ADS)
Ponomarev, D. E.; Pushkarev, A. V.
2017-11-01
Calculation of heat transfer is required to correctly predict the results of cryosurgery, cryopreservation, etc. Among the important initial parameters are the thermophysical properties of biological tissues. In the present study, the values of the heat capacity, cryoscopic temperature and enthalpy of the phase transition of kidney samples in vitro were obtained by differential scanning calorimetry.
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
A Procedure to Measure the in-Situ Hygrothermal Behavior of Earth Walls
Chabriac, Pierre-Antoine; Fabbri, Antonin; Morel, Jean-Claude; Laurent, Jean-Paul; Blanc-Gonnet, Joachim
2014-01-01
Rammed earth is a sustainable material with low embodied energy. However, its development as a building material requires a better evaluation of its moisture-thermal buffering abilities and its mechanical behavior. Both of these properties are known to depend strongly on the amount of water contained in the wall pores and its evolution. Thus, the aim of this paper is to present a procedure to measure this key parameter in rammed earth or cob walls by using two types of probes operating on the Time Domain Reflectometry (TDR) principle. A calibration procedure for the probes, requiring only four parameters, is described. This calibration procedure is then used to monitor the hygrothermal behavior of a rammed earth wall (1.5 m × 1 m × 0.5 m), instrumented with six probes during its manufacture and subjected to insulated, natural convection and forced convection conditions. These measurements underline the robustness of the calibration procedure over a large range of water content, even when the wall is subjected to quite large temperature variations. They also emphasize the importance of gravity on water content heterogeneity when the saturation is high, as well as the role of liquid-to-vapor phase change in the thermal behavior. PMID:28788603
NASA Astrophysics Data System (ADS)
Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia
2017-04-01
The accurate determination of stress drop, seismic efficiency and of how source parameters scale with earthquake size is important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear, sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We therefore investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). For most of the events we find non-self-similar behavior, empirical source spectra that require an ωγ source model with γ > 2 to be well fitted, and small radiation efficiency ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics, and in the other support the idea of an important role of steeply dipping faults in fluid pressure diffusion.
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve on the time and space requirements of the classical single-population algorithm. Using the underlying random graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge, this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate that SimRA produces the ARG in compact form without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at: https://github.com/ComputationalGenomics/SimRA CONTACT: parida@us.ibm.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Wei, Zhengying; Wei, Pei; Chen, Shenggui; Lu, Bingheng; Du, Jun; Li, Junfeng; Zhang, Shuzhe
2017-12-01
In this work, a set of experiments was designed to investigate the effect of process parameters on the relative density of AlSi10Mg parts manufactured by SLM. The influence of the laser scan speed v, laser power P and hatch spacing H, which were considered the dominant parameters, on the powder melting and densification behavior was also studied experimentally. In addition, the laser energy density was introduced to evaluate the combined effect of the above dominant parameters, so as to control the SLM process integrally. As a result, a high relative density (> 97%) was obtained by SLM at an optimized laser energy density of 3.5-5.5 J/mm2. Moreover, a parameter-densification map was established to visually select the optimum process parameters for SLM-processed AlSi10Mg parts with elevated density and the required mechanical properties. The results provide important experimental guidance for obtaining AlSi10Mg components with full density and gradient functional porosity by SLM.
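The laser energy density quoted above in J/mm2 is presumably the common areal definition combining the three dominant parameters (the abstract does not spell out the formula, so this is an assumption):

```latex
% Areal laser energy density from laser power P (W), scan speed v (mm/s)
% and hatch spacing H (mm):
\[
  E \;=\; \frac{P}{v\,H} \qquad [\mathrm{J/mm^{2}}]
\]
% e.g. P = 200 W, v = 500 mm/s, H = 0.08 mm gives E = 5 J/mm^2, inside the
% optimized 3.5--5.5 J/mm^2 window reported above.
```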
Rendering of HDR content on LDR displays: an objective approach
NASA Astrophysics Data System (ADS)
Krasula, Lukáš; Narwaria, Manish; Fliegel, Karel; Le Callet, Patrick
2015-09-01
Dynamic range compression (or tone mapping) of HDR content is an essential step towards rendering it on traditional LDR displays in a meaningful way. This is, however, non-trivial, and one of the reasons is that tone mapping operators (TMOs) usually need content-specific parameters to achieve the said goal. While subjective TMO parameter adjustment is the most accurate, it may not be easily deployable in many practical applications. Its subjective nature can also influence the comparison of different operators. Thus, there is a need for objective TMO parameter selection to automate the rendering process. To that end, we investigate a new objective method for TMO parameter optimization. Our method is based on the quantification of contrast reversal and naturalness. As an important advantage, it does not require any prior knowledge about the input HDR image and works independently of the TMO used. Experimental results using a variety of HDR images and several popular TMOs demonstrate the value of our method in comparison to default TMO parameter settings.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
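The S-system form with a simple power-law efflux term referred to above can be written generically as follows (indices and symbols are generic, not the paper's exact notation):

```latex
% Generic S-system equation with a simple power-law efflux for metabolite X_i:
\[
  \frac{dX_i}{dt} \;=\; \alpha_i \prod_{j=1}^{n} X_j^{\,g_{ij}}
  \;-\; \beta_i\, X_i^{\,h_i}, \qquad \alpha_i,\,\beta_i > 0
\]
% The rate constants (alpha, beta) and kinetic orders (g, h) are estimated
% stepwise so that the smoothing curves satisfy the network's mass balances.
```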
qPIPSA: Relating enzymatic kinetic parameters and interaction fields
Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C
2007-01-01
Background The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319
Optimally designing games for behavioural research
Rafferty, Anna N.; Zaharia, Matei; Griffiths, Thomas L.
2014-01-01
Computer games can be motivating and engaging experiences that facilitate learning, leading to their increasing use in education and behavioural experiments. For these applications, it is often important to make inferences about the knowledge and cognitive processes of players based on their behaviour. However, designing games that provide useful behavioural data is a difficult task that typically requires significant trial and error. We address this issue by creating a new formal framework that extends optimal experiment design, used in statistics, to apply to game design. In this framework, we use Markov decision processes to model players' actions within a game, and then make inferences about the parameters of a cognitive model from these actions. Using a variety of concept learning games, we show that in practice, this method can predict which games will result in better estimates of the parameters of interest. The best games require only half as many players to attain the same level of precision. PMID:25002821
A method to measure internal contact angle in opaque systems by magnetic resonance imaging.
Zhu, Weiqin; Tian, Ye; Gao, Xuefeng; Jiang, Lei
2013-07-23
The internal contact angle is an important parameter for characterizing internal wettability. However, due to the limitations of optical imaging, the methods available for contact angle measurement are only suitable for transparent or open systems. For most practical situations that require contact angle measurement in opaque or enclosed systems, the traditional methods are not effective. To meet this requirement, a method suitable for contact angle measurement in non-transparent systems was developed employing MRI technology. In this article, the method is demonstrated by measuring internal contact angles in opaque cylindrical tubes. The method also proves highly feasible in transparent situations and in opaque capillary systems. Using this method, contact angles in opaque systems could be measured successfully, which is significant for understanding wetting behaviors in non-transparent systems and calculating interfacial parameters in enclosed systems.
Scholz, Norman; Behnke, Thomas; Resch-Genger, Ute
2018-01-01
Micelles are of increasing importance as versatile carriers for hydrophobic substances and nanoprobes for a wide range of pharmaceutical, diagnostic, medical, and therapeutic applications. A key parameter indicating the formation and stability of micelles is the critical micelle concentration (CMC). In this respect, we determined the CMC of common anionic, cationic, and non-ionic surfactants fluorometrically using different fluorescent probes and fluorescence parameters for signal detection, and compared the results with conductometric and surface tension measurements. Based upon these results, the requirements, advantages, and pitfalls of each method are discussed. Our study underlines the versatility of fluorometric methods, which do not impose specific requirements on surfactants and are especially suited for the quantification of very low CMC values. Conductivity and surface tension measurements yield smaller uncertainties, particularly for high CMC values, yet are more time- and substance-consuming and not suitable for every surfactant.
NASA Technical Reports Server (NTRS)
Korram, S.
1977-01-01
The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow areal extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in the quantification of all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use the information obtained from aerial and ground data through an appropriate statistical sampling design.
Importance biasing scheme implemented in the PRIZMA code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. It can calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems that require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of the problem.
NASA Astrophysics Data System (ADS)
Zhou, Qianxiang; Liu, Zhongqi
With the development of manned space technology, space rendezvous and docking (RVD) technology will play an increasingly important role. Astronaut participation in the final close-range period, with combined man-machine control, is an important mode of RVD technology. Spacecraft RVD control involves a control problem with a total of 12 degrees of freedom: the position and attitude of the spacecraft relative to inertial space and the orbit. Therefore, in order to reduce the astronauts' operational load, reduce the safety requirements placed on the ground station, and achieve optimal performance of the whole man-machine system, it is necessary to study how many control parameters should be assigned to the astronaut and how many to the spacecraft's automatic control system. In this study, under laboratory conditions on the ground, an experimental system was developed in which the performance of integrated man-machine RVD control could be evaluated. After the RVD precision requirements were determined, 26 male volunteers aged 20-40 took part in the performance evaluation experiments. The RVD success rate and total thruster ignition time were chosen as evaluation indices. The results show that if fewer than three RVD parameter control tasks were performed by the subject, with the remaining parameter control tasks completed automatically, the RVD success rate was larger than 88% and the fuel consumption was optimized. In addition, two subjects completed all six RVD parameter control tasks after sufficient training. In conclusion, if the astronauts' role is to be integrated into RVD control, it is suitable for them to perform the heading, pitch and roll control in order to assure high performance of the man-machine system. If astronauts are needed to perform all parameter control, two points should be taken into consideration: one is sufficient fuel, and the other is a sufficiently long operation time.
Cai, Wenli; Lee, June-Goo; Fikry, Karim; Yoshida, Hiroyuki; Novelline, Robert; de Moya, Marc
2013-01-01
It is commonly believed that the size of a pneumothorax is an important determinant of treatment decisions, in particular regarding whether chest tube drainage (CTD) is required. However, volumetric quantification of pneumothoraces has not routinely been performed in clinics. In this paper, we introduce an automated computer-aided volumetry (CAV) scheme for quantification of the volume of pneumothoraces in chest multi-detector CT (MDCT) images. Moreover, we investigated the impact of accurate pneumothorax volume on improving the performance of decision-making regarding CTD in the management of traumatic pneumothoraces. For this purpose, an occurrence frequency map was calculated for quantitative analysis of the importance of each clinical parameter in decision-making regarding CTD, by computer simulation of decision-making using a genetic algorithm (GA) and a support vector machine (SVM). A total of 14 clinical parameters, including the volume of pneumothorax calculated by our CAV scheme, were collected as parameters available for decision-making. The results showed that volume was the dominant parameter in decision-making regarding CTD, with an occurrence frequency value of 1.00. The results also indicated that the inclusion of volume provided the best performance, statistically significantly better than that of the tests in which volume was excluded from the clinical parameters. This study provides scientific evidence for the application of a CAV scheme to MDCT volumetric quantification of pneumothoraces in the management of clinically stable chest trauma patients with traumatic pneumothorax. PMID:22560899
NASA Astrophysics Data System (ADS)
Kozawa, Takahiro; Oizumi, Hiroaki; Itani, Toshiro; Tagawa, Seiichi
2010-11-01
The development of extreme ultraviolet (EUV) lithography has progressed owing to worldwide effort. As the development status of EUV lithography approaches the requirements for the high-volume production of semiconductor devices with a minimum line width of 22 nm, the extraction of resist parameters becomes increasingly important from the viewpoints of accurate evaluation of resist materials for resist screening and accurate process simulation for process and mask design. In this study, we demonstrated that resist parameters (namely, quencher concentration, acid diffusion constant, proportionality constant of line edge roughness, and dissolution point) can be extracted from scanning electron microscopy (SEM) images of patterned resists without detailed knowledge of the resist composition, using two types of the latest EUV resists.
The HelCat dual-source plasma device.
Lynn, Alan G; Gilmore, Mark; Watts, Christopher; Herrea, Janis; Kelly, Ralph; Will, Steve; Xie, Shuangwei; Yan, Lincan; Zhang, Yue
2009-10-01
The HelCat (Helicon-Cathode) device has been constructed to support a broad range of basic plasma science experiments relevant to the areas of solar physics, laboratory astrophysics, plasma nonlinear dynamics, and turbulence. These research topics require a relatively large plasma source capable of operating over a broad region of parameter space with a plasma duration up to at least several milliseconds. To achieve these parameters a novel dual-source system was developed utilizing both helicon and thermionic cathode sources. Plasma parameters of n(e) approximately 0.5-50 x 10(18) m(-3) and T(e) approximately 3-12 eV allow access to a wide range of collisionalities important to the research. The HelCat device and initial characterization of plasma behavior during dual-source operation are described.
Numerical framework for the modeling of electrokinetic flows
NASA Astrophysics Data System (ADS)
Deshpande, Manish; Ghaddar, Chahid; Gilbert, John R.; St. John, Pamela M.; Woudenberg, Timothy M.; Connell, Charles R.; Molho, Joshua; Herr, Amy; Mungal, Godfrey; Kenny, Thomas W.
1998-09-01
This paper presents a numerical framework for design-based analyses of electrokinetic flow in interconnects. Electrokinetic effects, which can be broadly divided into electrophoresis and electroosmosis, are important as a transport mechanism in microfluidic devices for both pumping and separation. Models for the electrokinetic effects can be derived and coupled to the fluid dynamic equations through appropriate source terms. In the design of practical microdevices, however, accurate coupling of the electrokinetic effects requires knowledge of several material and physical parameters, such as the diffusivity and the mobility of the solute in the solvent. Additionally, wall-based effects, such as chemical binding sites, might exist that affect the flow patterns. In this paper, we address some of these issues by describing a synergistic numerical/experimental process to extract the required parameters. Experiments were conducted to provide the numerical simulations with a mechanism for extracting these parameters through quantitative comparison. These parameters were then applied in predicting further experiments to validate the process. As part of this research, we have created NetFlow, a tool for microfluidic analyses. The tool can be validated and applied in existing technologies by first creating test structures to extract representations of the physical phenomena in the device, and then applying them in the design analyses to predict correct behavior.
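The two effects named above have standard leading-order closures; as an illustration (a textbook formulation, not necessarily the exact model implemented in NetFlow), the electroosmotic slip and electrophoretic drift velocities are

```latex
u_{\mathrm{eo}} = -\frac{\varepsilon\,\zeta}{\mu}\,E,
\qquad
u_{\mathrm{ep}} = \mu_{\mathrm{ep}}\,E
```

where ε is the solvent permittivity, ζ the wall zeta potential, μ the dynamic viscosity, μ_ep the electrophoretic mobility of the solute, and E the applied electric field. The mobility and diffusivity appearing here are exactly the parameters the synergistic numerical/experimental process described above is designed to extract.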
Influence of Van der Waals interaction on the thermodynamics properties of NaCl
NASA Astrophysics Data System (ADS)
Marcondes, M. L.; Wentzcovitch, R. M.; Assali, L. V. C.
2016-12-01
Equations of state (EoS) are extremely important in several scientific domains. However, many applications require EoS parameters at high pressures and temperatures. Experimental determination of these parameters is limited under such conditions, and ab initio calculations have become important for computing them. Density functional theory (DFT), with its various approximations for the exchange and correlation energy, is the method of choice, but the lack of a good description of the exchange-correlation energy results in large errors in EoS parameters. It is well known that the alkali halides have been problematic from the onset, and the quest for DFT functionals appropriate for such ionic and relatively weakly bonded systems has remained an active topic of research. Here we use DFT + van der Waals functionals to calculate the thermal equation of state and thermodynamic properties of the B1 NaCl phase. Our results show a remarkable improvement over the performance of the standard LDA and GGA functionals. This is hardly surprising given that ions in this system have nearly closed-shell configurations.
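The abstract does not state which EoS form is fitted; a common choice for B1-structured solids is the third-order Birch-Murnaghan isotherm, shown here only as a representative example:

```latex
P(V) = \frac{3K_0}{2}\left[\left(\frac{V_0}{V}\right)^{7/3} - \left(\frac{V_0}{V}\right)^{5/3}\right]
\left\{1 + \frac{3}{4}\left(K_0' - 4\right)\left[\left(\frac{V_0}{V}\right)^{2/3} - 1\right]\right\}
```

where V_0 is the equilibrium volume, K_0 the bulk modulus, and K_0' its pressure derivative; these are the EoS parameters whose errors under standard LDA/GGA functionals motivate the van der Waals corrections.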
Product design for energy reduction in concurrent engineering: An Inverted Pyramid Approach
NASA Astrophysics Data System (ADS)
Alkadi, Nasr M.
Energy factors in product design in concurrent engineering (CE) are becoming an emerging dimension for several reasons: (a) the rising interest in "green design and manufacturing"; (b) national energy security concerns and the dramatic increase in energy prices; (c) global competition in the marketplace and global climate change commitments, including carbon taxes and emission trading systems; and (d) the widespread recognition of the need for sustainable development. This research presents a methodology for introducing energy factors into the concurrent engineering product development process to significantly reduce the manufacturing energy requirement. The work presented here is the first attempt at integrating design for energy into a concurrent engineering framework. It adds an important tool to the DFX toolbox for evaluating the impact of design decisions on a product's manufacturing energy requirement early in the design phase. The research hypothesis states that "product manufacturing energy requirement is a function of design parameters". The hypothesis was tested by conducting experimental work in machining and heat treating, carried out at the manufacturing lab of the Industrial and Management Systems Engineering Department (IMSE) at West Virginia University (WVU) and at a major U.S. steel manufacturing plant, respectively. The objective of the machining experiment was to study the effect of changing specific product design parameters (material type and diameter) and process design parameters (metal removal rate) on the input power requirement of a gear-head lathe through defined sets of machining experiments. The objective of the heat treating experiment was to study the effect of varying product charging temperature on the fuel consumption of a walking-beam reheat furnace. The experimental work in both directions revealed important insights into energy utilization in machining and heat-treating processes and its variance based on product, process, and system design parameters. An in-depth evaluation of how design and manufacturing normally happen in concurrent engineering provided a framework for developing energy system levels in machining within the concurrent engineering environment using the "Inverted Pyramid Approach" (IPA). The IPA features varying levels of output energy-based information depending on the input design parameters that are available during each stage (level) of product design. The experimental work, the in-depth evaluation of design and manufacturing in CE, and the developed energy system levels in machining provided a solid base for developing the model for design for energy reduction in CE. The model was used to analyze an example part in which 12 evolving designs were thoroughly reviewed to investigate the sensitivity of energy to design parameters in machining. The model allows product design teams to address manufacturing energy concerns early in the design stage. As a result, ranges for energy-sensitive design parameters impacting product manufacturing energy consumption were found at earlier levels. As the designer proceeds to deeper levels in the model, this range tightens, resulting in significant energy reductions.
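A first-order model frequently used in the machining-energy literature makes the hypothesis concrete (an illustrative form, not the dissertation's own model):

```latex
P_{\mathrm{in}} = P_{\mathrm{idle}} + k\,\dot{V},
\qquad
E = \int_{0}^{t_{\mathrm{cut}}} P_{\mathrm{in}}\,dt
```

where the metal removal rate \(\dot{V}\) is set through speed, feed, and depth of cut (which in turn depend on design parameters such as material and diameter), P_idle is the machine's baseline power draw, and k is the specific cutting energy of the material. Under a model of this kind, design choices enter the energy requirement directly through \(\dot{V}\) and k.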
NASA Astrophysics Data System (ADS)
Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.
2014-03-01
To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small-field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test whether these parameters affect dose perturbations in general, which is important for detector design and for calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode, and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large: for example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators, and Monte Carlo modelling of each individual machine is not required.
NASA Technical Reports Server (NTRS)
Skoog, Richard B
1951-01-01
A theoretical analysis of the effects of aeroelasticity on the stick-fixed static longitudinal stability and elevator angle required for balance of an airplane is presented together with calculated effects for a swept-wing bomber of relatively high flexibility. Although large changes in stability due to certain parameters are indicated for the example airplane, the over-all stability change after considering all parameters was quite small, compared to the individual effects, due to the counterbalancing of wing and tail contributions. The effect of flexibility on longitudinal control for the example airplane was found to be of little real importance.
Growth-rate dependent global effects on gene expression in bacteria
Klumpp, Stefan; Zhang, Zhongge; Hwa, Terence
2010-01-01
Bacterial gene expression depends not only on specific regulation but also directly on bacterial growth, because important global parameters such as the abundance of RNA polymerases and ribosomes are all growth-rate dependent. Understanding these global effects is necessary for a quantitative understanding of gene regulation and for the robust design of synthetic genetic circuits. The observed growth-rate dependence of constitutive gene expression can be explained by a simple model using the measured growth-rate dependence of the relevant cellular parameters. More complex growth dependences for genetic circuits involving activators, repressors, and feedback control were analyzed, and salient features were verified experimentally using synthetic circuits. The results suggest a novel feedback mechanism mediated by general growth-dependent effects that does not require explicit gene regulation if the expressed protein affects cell growth. This mechanism can lead to growth bistability and promote the acquisition of important physiological functions such as antibiotic resistance and tolerance (persistence). PMID:20064380
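A minimal sketch of the growth-mediated dilution effect for a stable, constitutively expressed protein (an illustration consistent with, but not quoted from, the abstract): synthesis at rate α is balanced by dilution at the growth rate λ,

```latex
\frac{dc}{dt} = \alpha - \lambda c
\quad\Rightarrow\quad
c^{*} = \frac{\alpha}{\lambda}
```

Since α (through RNA polymerase and ribosome abundance) and λ are both growth-rate dependent, even an unregulated gene shows growth-dependent expression; if c in turn feeds back on λ, the closed loop can generate the bistability described above.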
NASA Astrophysics Data System (ADS)
Joshi, R. H.; Thakore, B. Y.; Bhatt, N. K.; Vyas, P. R.; Jani, A. R.
2018-02-01
Density functional theory, including the electronic contribution, is used to compute the quasiharmonic total energy of silver, while the explicit phonon anharmonic contribution is added through a perturbative term in temperature. Within the Mie-Grüneisen approach, we propose a consistent computational scheme for calculating various thermophysical properties of a substance, in which the required Grüneisen parameter γth is calculated from knowledge of the binding energy. The present study demonstrates that no separate relation for the volume dependence of γth is needed, and that complete thermodynamics under simultaneous high-temperature and high-pressure conditions can be derived in a consistent manner. We have calculated static and dynamic equations of state and some important thermodynamic properties along the shock Hugoniot. A careful examination of the temperature dependence of the Grüneisen parameter reveals the importance of temperature effects on various thermal properties.
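Schematically, the Mie-Grüneisen approach referenced above ties the thermal pressure to the thermal energy through γth:

```latex
P(V,T) = P_{c}(V) + \frac{\gamma_{\mathrm{th}}}{V}\,\bigl[E(V,T) - E_{c}(V)\bigr],
\qquad
\gamma_{\mathrm{th}} = V\left(\frac{\partial P}{\partial E}\right)_{V}
```

where P_c and E_c are the cold (static) pressure and energy. The point of the scheme is that γth follows from the binding energy, so no separate volume-dependence ansatz for γth is required.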
Growth assessment in diagnosis of Fetal Growth Restriction. Review.
Albu, A R; Horhoianu, I A; Dumitrascu, M C; Horhoianu, V
2014-06-15
The assessment of fetal growth represents a fundamental step towards the identification of the truly growth-restricted fetus, which is associated with important perinatal morbidity and mortality. The possible ways of detecting abnormal fetal growth are considered in this review, and their strong and weak points are discussed. An important debate remains about how to discriminate between the physiologically small fetus that does not require special surveillance and the truly growth-restricted fetus that is predisposed to perinatal complications, even if its parameters are above the established cut-off limits. In this article, we present the clinical tools of fetal growth assessment: Symphyseal-Fundal Height (SFH) measurement; the fetal ultrasound parameters widely considered when discussing fetal growth, Abdominal Circumference (AC) and Estimated Fetal Weight (EFW); and several types of growth charts and their characteristics: populational growth charts, standard growth charts, individualized growth charts, customized growth charts, and growth trajectories.
Opieliński, Krzysztof J; Gudra, Tadeusz
2002-05-01
Effective radiation of ultrasonic energy into air from piezoelectric transducers requires multilayer matching systems with accurately selected acoustic impedances and layer thicknesses. This problem is of particular importance for ultrasonic transducers working at frequencies above 1 MHz. Because the possibilities for choosing a material with the required acoustic impedance are limited (the calculated values cannot always be realised and applied in practice), it is necessary to correct the differences between the theoretical values and the acoustic impedances practically available. Such a correction can be made by manipulating other parameters of the matching layers (e.g., by changing their thickness). The efficiency of energy transmission from the piezoceramic transducer through layers of different thicknesses, enabling compensation of non-ideal real impedance values by thickness changes, was analysed by computer. The conclusion of this analysis is that, from a technological point of view, producing a layer of defined thickness is easier and faster than developing a new material with the required acoustic parameters.
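For the single-layer case, the classical design rules pair a geometric-mean impedance with a quarter-wave thickness (a standard result, shown for orientation; the paper treats the multilayer generalization):

```latex
Z_{m} = \sqrt{Z_{t}\,Z_{\mathrm{air}}},
\qquad
d = \frac{\lambda}{4} = \frac{c_{m}}{4f}
```

where Z_t and Z_air are the transducer and air impedances, c_m the sound speed in the layer, and f the operating frequency. Correcting a non-ideal Z_m by adjusting d, as described above, amounts to detuning the thickness away from exactly λ/4.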
NASA Technical Reports Server (NTRS)
Parmar, Devendra S.; Shams, Qamar A.
2002-01-01
NASA's strategy to explore space objects in the vicinity of Earth and other planets of the solar system includes robotic and human missions. This strategy requires a road map for technology development that will support robotic exploration and provide safety for humans traveling to other celestial bodies. Aeroassist is one of the key elements of technology planning for the success of future robotic and human exploration missions to other celestial bodies. Measurement of aerothermodynamic parameters such as temperature, pressure, and acceleration is of prime importance for aeroassist technology implementation and for the safety and affordability of the mission. Instrumentation and methods to measure such parameters are reviewed in this report in view of past practices, current commercial availability of instrumentation technology, and the prospects for improvement and upgrade according to the requirements. An analysis of the usability of each identified instrument in terms of cost, weight-volume ratio, power requirement, accuracy, sample rate, and other appropriate metrics such as harsh-environment survivability is also reported.
Nucleus segmentation in histology images with hierarchical multilevel thresholding
NASA Astrophysics Data System (ADS)
Ahmady Phoulady, Hady; Goldgof, Dmitry B.; Hall, Lawrence O.; Mouton, Peter R.
2016-03-01
Automatic segmentation of histological images is an important step for increasing throughput while maintaining high accuracy, avoiding variation from subjective bias, and reducing the costs for diagnosing human illnesses such as cancer and Alzheimer's disease. In this paper, we present a novel method for unsupervised segmentation of cell nuclei in stained histology tissue. Following an initial preprocessing step involving color deconvolution and image reconstruction, the segmentation step consists of multilevel thresholding and a series of morphological operations. The only parameter required for the method is the minimum region size, which is set according to the resolution of the image. Hence, the proposed method requires no training sets or parameter learning. Because the algorithm requires no assumptions or a priori information with regard to cell morphology, the automatic approach is generalizable across a wide range of tissues. Evaluation across a dataset consisting of diverse tissues, including breast, liver, gastric mucosa and bone marrow, shows superior performance over four other recent methods on the same dataset in terms of F-measure with precision and recall of 0.929 and 0.886, respectively.
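A minimal sketch of the pipeline's core steps (multilevel thresholding followed by morphological clean-up), using scikit-image; this illustrates the general technique, not the authors' implementation, and `min_region_size` stands in for the single resolution-dependent parameter the abstract mentions:

```python
import numpy as np
from skimage import color, filters, morphology

def segment_nuclei(rgb_image, min_region_size=64):
    """Rough nucleus mask via multilevel (multi-Otsu) thresholding."""
    gray = color.rgb2gray(rgb_image)
    # Three-class multilevel thresholding; in most stains, nuclei fall
    # in the darkest class of the grayscale image
    thresholds = filters.threshold_multiotsu(gray, classes=3)
    regions = np.digitize(gray, bins=thresholds)
    mask = regions == 0
    # Morphological operations: remove speckle and smooth boundaries,
    # then discard regions below the minimum region size
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.remove_small_objects(mask, min_size=min_region_size)
    return mask
```

The published method additionally applies color deconvolution and image reconstruction before thresholding, which this sketch omits.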
On a fast calculation of structure factors at a subatomic resolution.
Afonine, P V; Urzhumtsev, A
2004-01-01
In the last decade, the progress of protein crystallography has allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows relatively simple accuracy control. For macromolecular crystals, diffraction data sets at subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes computationally very expensive when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii, etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, choosing parameters within safety margins that largely ensure sufficient accuracy may result in a significant loss of CPU time, making it close to the time required for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
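The two routes compared above can be written compactly. The direct formula evaluates, for each reflection h,

```latex
F(\mathbf{h}) = \sum_{j=1}^{N_{\mathrm{at}}} f_j(\mathbf{h})\,
e^{2\pi i\,\mathbf{h}\cdot\mathbf{x}_j}
```

at a cost proportional to N_refl × N_at, whereas the density route samples ρ(r) on a grid and obtains all structure factors at once by FFT, at a cost near N_grid log N_grid. The grid step and 'effective' atom radii are precisely the accuracy-controlling parameters whose optimal choice is studied here.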
Characterization of Softmagnetic Thin Layers Using Barkhausen Noise Microscopy
2001-04-01
In magnetoresistive (MR) sensors, soft-magnetic thin-layer systems are used. Optimal performance of these layers requires homogeneous magnetic properties, especially a... Sendust, used in inductive sensors, and nanocrystalline NiFe, used in MR sensors. In quality correlations to Barkhausen noise parameters were found... Brillouin scattering are frequently used. An important issue is the influence of mechanical properties, e.g. residual stress, on the magnetic performance
François, Clément; Tanasescu, Adrian; Lamy, François-Xavier; Despiegel, Nicolas; Falissard, Bruno; Chalem, Ylana; Lançon, Christophe; Llorca, Pierre-Michel; Saragoussi, Delphine; Verpillat, Patrice; Wade, Alan G.; Zighed, Djamel A.
2017-01-01
Background and objective: Automated healthcare databases (AHDB) are an important data source for real-life drug and healthcare use. In the field of depression, the lack of detailed clinical data requires the use of binary proxies with important limitations. The study objective was to create a Depressive Health State Index (DHSI) as a continuous health state measure for depressed patients using available data in an AHDB. Methods: The study was based on a historical cohort design using the UK Clinical Practice Research Datalink (CPRD). Depressive episodes (depression diagnosis with an antidepressant prescription) were used to create the DHSI through 6 successive steps: (1) defining the study design; (2) identifying constituent parameters; (3) assigning relative weights to the parameters; (4) ranking based on the presence of parameters; (5) standardizing the rank of the DHSI; (6) developing a regression model to derive the DHSI in any other sample. Results: The DHSI ranged from 0 (worst) to 100 (best health state), comprising 29 parameters. The proportion of depressive episodes with a remission proxy increased with DHSI quartiles. Conclusion: A continuous outcome for depressed patients treated with antidepressants was created in an AHDB using several different variables, allowing more granularity than currently used proxies. PMID:29081921
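A hedged sketch of steps (3)-(5) above, with hypothetical column names and weights (the study's 29 parameters and their fitted weights are not reproduced here):

```python
import pandas as pd

def dhsi_scores(episodes: pd.DataFrame, weights: dict) -> pd.Series:
    """Toy version of the DHSI construction: weighted sum of binary
    severity parameters, ranked, then standardized to 0 (worst) -
    100 (best). Column names and weights are illustrative only."""
    severity = sum(w * episodes[p] for p, w in weights.items())
    # Percentile rank handles ties; invert so higher DHSI = better state
    return 100.0 * (1.0 - severity.rank(pct=True))
```

In the study itself, a regression model is then fitted (step 6) so the index can be reproduced in other samples without repeating the ranking.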
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
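One widely used NTCP model of the kind referred to above is the Lyman-Kutcher-Burman form, shown here only as a representative example (the paper compares three published models, which are not reproduced):

```latex
\mathrm{NTCP} = \Phi\!\left(\frac{\mathrm{gEUD} - TD_{50}}{m \cdot TD_{50}}\right)
```

where Φ is the standard normal CDF, TD50 the uniform dose giving a 50% complication probability, and m a slope parameter. Fitted values of (TD50, m) are exactly the kind of parameters shown here to shift when the dose calculation algorithm changes.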
Sensitivity of NTCP parameter values against a change of dose calculation algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-15
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
A review of pharmaceutical extrusion: critical process parameters and scaling-up.
Thiry, J; Krier, F; Evrard, B
2015-02-01
Hot melt extrusion has been widely used in the pharmaceutical area for three decades. In this field, it is important to optimize the formulation in order to meet specific requirements. However, the process parameters of the extruder should be investigated as much as the formulation, since they have a major impact on the final product characteristics. Moreover, a design space should be defined in order to obtain the expected product within the defined limits. This gives some freedom to operate as long as the processing parameters stay within the limits of the design space. Those limits can be investigated by randomly varying the process parameters, but it is recommended to use design of experiments. This review examines the literature to summarize the impact of variations in the process parameters on the final product properties. Indeed, the homogeneity of the mixing, the state of the drug (crystalline or amorphous), the dissolution rate, and the residence time can all be influenced by variations in the process parameters. In particular, the impact of the following process parameters on the final product has been reviewed: temperature, screw design, screw speed, and feeding. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVH estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
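A minimal sketch of the two numerical steps named above, dose rate by convolution of the activity map with a voxel S kernel and a cumulative DVH; array names are hypothetical and this is an illustration, not the authors' code:

```python
import numpy as np
from scipy.signal import fftconvolve

def dose_rate(activity, s_kernel):
    """Voxel S-value dosimetry: dose rate = activity convolved with
    the voxel S kernel (both 3D arrays on the same grid)."""
    return fftconvolve(activity, s_kernel, mode="same")

def cumulative_dvh(dose, organ_mask, n_bins=200):
    """Fraction of organ volume receiving at least each dose level."""
    d = dose[organ_mask]
    levels = np.linspace(0.0, d.max(), n_bins)
    volume_fraction = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_fraction
```

Noise and partial volume effects in the reconstructed `activity` propagate directly through this chain, which is why the reconstruction and regularization parameters matter for the histogram estimates.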
Atmospheric corrections for satellite water quality studies
NASA Technical Reports Server (NTRS)
Piech, K. R.; Schott, J. R.
1975-01-01
Variations in the relative value of the blue and green reflectances of a lake can be correlated with important optical and biological parameters measured from surface vessels. Measurement of the relative reflectance values from color film imagery requires removal of atmospheric effects. Data processing is particularly crucial because: (1) lakes are the darkest objects in a scene; (2) minor reflectance changes can correspond to important physical changes; (3) lake systems extend over broad areas in which atmospheric conditions may fluctuate; (4) seasonal changes are of importance; and, (5) effects of weather are important, precluding flights under only ideal weather conditions. Data processing can be accomplished through microdensitometry of scene shadow areas. Measurements of reflectance ratios can be made to an accuracy of plus or minus 12%, sufficient to permit monitoring of important eutrophication indices.
The role of updraft velocity in temporal variability of cloud hydrometeor number
NASA Astrophysics Data System (ADS)
Sullivan, Sylvia; Nenes, Athanasios; Lee, Dong Min; Oreopoulos, Lazaros
2016-04-01
Significant effort has been dedicated to incorporating direct aerosol-cloud links, through parameterization of liquid droplet activation and ice crystal nucleation, within climate models. This significant accomplishment has generated the need to understand which of the parameters affecting hydrometeor formation drive its variability in coupled climate simulations, as this provides the basis for optimal parameter estimation as well as robust comparison with data and other models. Sensitivity analysis alone does not address this issue, given that the importance of each parameter for hydrometeor formation depends on both its variance and its sensitivity. To address this issue, we develop and use a series of attribution metrics defined with adjoint sensitivities to attribute the temporal variability in droplet and crystal number to important aerosol and dynamical parameters. This attribution analysis is done both for the NASA Global Modeling and Assimilation Office Goddard Earth Observing System Model, Version 5 (GEOS) and the National Center for Atmospheric Research Community Atmosphere Model, Version 5.1 (CAM). Within the GEOS simulation, up to 48% of the temporal variability in output ice crystal number and 61% in droplet number can be attributed to input updraft velocity fluctuations, while for the CAM simulation, updraft fluctuations explain as much as 89% of the ice crystal number variability. These results suggest that vertical velocity is a very important, if not dominant, driver of hydrometeor variability in both model frameworks. Yet observations of vertical velocity are seldom available (or used) to evaluate the vertical velocities in simulations; this strikingly contrasts with the amount and quality of data available for aerosol-related parameters. Consequently, there is a strong need for retrievals or measurements of vertical velocity to address this important knowledge gap, which requires a significant investment and effort by the atmospheric community. The attribution metrics, as a tool for understanding hydrometeor variability, can be instrumental in understanding the source of differences between models used for aerosol-cloud-climate interaction studies.
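The abstract does not give the exact form of the attribution metrics; one plausible first-order form, built from the adjoint sensitivities ∂N/∂x_i and the input variances σ_i², would be

```latex
A_i = \frac{\left(\partial N/\partial x_i\right)^{2} \sigma_i^{2}}
           {\sum_j \left(\partial N/\partial x_j\right)^{2} \sigma_j^{2}}
```

so that an attribution of, say, 89% to updraft velocity means that term dominates the variance sum. This is offered only as a hypothetical reading of "attribution metrics defined with adjoint sensitivities", not as the authors' definition.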
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear ones by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
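A minimal Monte Carlo sketch of the reparameterization effect described above: a Gaussian estimate of a linear parameter a becomes biased and asymmetric under b = 1/a, negligibly so when the relative SE is small. This is an illustration in the spirit of the paper, not its actual simulations:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true = 2.0
for rel_se in (0.02, 0.20):              # 2% vs 20% parametric SE
    # Gaussian sampling distribution of the linear estimate a_hat
    a_hat = a_true + rel_se * a_true * rng.standard_normal(200_000)
    b_hat = 1.0 / a_hat                  # nonlinear reparameterization
    bias = b_hat.mean() - 1.0 / a_true
    print(f"rel SE {rel_se:.0%}: bias {bias:+.5f}, sd {b_hat.std():.5f}")
```

At a 2% relative SE the bias is negligible, consistent with the <10% rule of thumb quoted above; at 20% the distribution of b is visibly skewed and biased.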
Brady, Oliver J; Godfray, H Charles J; Tatem, Andrew J; Gething, Peter W; Cohen, Justin M; McKenzie, F Ellis; Perkins, T Alex; Reiner, Robert C; Tusting, Lucy S; Sinka, Marianne E; Moyes, Catherine L; Eckhoff, Philip A; Scott, Thomas W; Lindsay, Steven W; Hay, Simon I; Smith, David L
2016-02-01
Major gains have been made in reducing malaria transmission in many parts of the world, principally by scaling up coverage with long-lasting insecticidal nets and indoor residual spraying. Historically, the choice of vector control intervention has been largely guided by a parameter sensitivity analysis of George Macdonald's theory of vectorial capacity, which suggested prioritizing methods that kill adult mosquitoes. While this advice has been highly successful for transmission suppression, there is a need to revisit these arguments as policymakers in certain areas consider which combinations of interventions are required to eliminate malaria. Using analytical solutions to updated equations for vectorial capacity, we build on previous work to show that, while adult killing methods can be highly effective under many circumstances, other vector control methods are frequently required to fill effective coverage gaps. These can arise due to pre-existing or developing mosquito physiological and behavioral refractoriness but also due to additive changes in the relative importance of different vector species for transmission. Furthermore, the optimal combination of interventions will depend on the operational constraints and costs associated with reaching high coverage levels with each intervention. Reaching specific policy goals, such as elimination, in defined contexts requires increasingly non-generic advice from modelling. Our results emphasize the importance of measuring baseline epidemiology, intervention coverage, vector ecology and program operational constraints in predicting expected outcomes with different combinations of interventions. © The Author 2016. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.
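For reference, the Macdonald-style vectorial capacity whose sensitivity structure underlies the classic advice is commonly written as

```latex
C = \frac{m\, a^{2}\, p^{n}}{-\ln p}
```

with m the mosquito density per human, a the human-biting rate, p the daily survival probability, and n the extrinsic incubation period in days. The strong dependence on p (through p^n and −ln p) is why adult-killing interventions are so effective; the coverage-gap argument above concerns the factors these interventions leave untouched.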
Dallmann, André; Ince, Ibrahim; Meyer, Michaela; Willmann, Stefan; Eissing, Thomas; Hempel, Georg
2017-11-01
In past years, several repositories of the anatomical and physiological parameters required for physiologically based pharmacokinetic (PBPK) modeling in pregnant women have been published. While these provide a good basis, some important aspects can be further detailed. For example, they did not account for the variability associated with parameters, or they lacked key parameters necessary for developing more detailed mechanistic pregnancy PBPK models, such as the composition of pregnancy-specific tissues. The aim of this meta-analysis was to provide an updated and extended database of anatomical and physiological parameters in healthy pregnant women that also accounts for changes in the variability of a parameter throughout gestation and for the composition of pregnancy-specific tissues. A systematic literature search was carried out to collect study data on pregnancy-related changes of anatomical and physiological parameters. For each parameter, a set of mathematical functions was fitted to the data and to the standard deviation observed among the data. The best-performing functions were selected based on numerical and visual diagnostics as well as on physiological plausibility. The literature search yielded 473 studies, 302 of which met the criteria to be further analyzed and compiled in a database. In total, the database encompassed 7729 data points. Although the availability of quantitative data for some parameters remained limited, mathematical functions could be generated for many important parameters. Gaps were filled based on qualitative knowledge and physiologically plausible assumptions. The presented results facilitate the integration of pregnancy-dependent changes in anatomy and physiology into mechanistic population PBPK models. Such models can ultimately provide a valuable tool to investigate pharmacokinetics during pregnancy in silico and support informed decision-making regarding optimal dosing regimens in this vulnerable special population.
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as the development and maintenance of expertise, the maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in import and export testing of food require management of such changes in a context that includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of a change or modification to a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change that may affect method scope or any performance parameter will require re-validation. Some typical situations involving changes in methods are discussed, and a decision process is proposed for selecting appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
AAA gunnermodel based on observer theory. [predicting a gunner's tracking response
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller, and a remnant element. An important feature of the model is its simple structure, so a computer simulation requires only a short execution time. A parameter identification program based on the least-squares curve-fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
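For reference, a generic Luenberger observer (the standard structure named above; the paper's specific gunner dynamics are not reproduced here) is

```latex
\dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x}),
\qquad
\dot{e} = (A - LC)\,e, \quad e = x - \hat{x}
```

where L is the observer gain, chosen so that A − LC is stable and the estimation error e decays. In the gunner model, the feedback controller then acts on the estimate x̂, and the remnant element supplies the stochastic component of the human response.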
The Impact Of Surface Shape Of Chip-Breaker On Machined Surface
NASA Astrophysics Data System (ADS)
Šajgalík, Michal; Czán, Andrej; Martinček, Juraj; Varga, Daniel; Hemžský, Pavel; Pitela, David
2015-12-01
The machined surface is one of the most frequently used indicators of workpiece quality. The machined surface is influenced by several factors, such as cutting parameters, cutting material, the shape of the cutting tool or cutting insert, and the microstructure of the machined material, collectively known as technological parameters. By improving these parameters, we can improve the machined surface. In machining, it is important to identify the characteristics of the main product of the process, the workpiece, but also of the byproduct, the chip. The size and shape of the chip affect the lifetime of cutting tools, and an inappropriate chip form can affect machine functionality and lifetime, too. This article deals with the elimination of the long chips created when machining shafts in the automotive industry, and with the impact of the chip-breaker shape on chip shape under various cutting conditions based on production requirements.
Cernuda, Carlos; Lughofer, Edwin; Klein, Helmut; Forster, Clemens; Pawliczek, Marcin; Brandstetter, Markus
2017-01-01
During the production process of beer, it is of utmost importance to guarantee a high consistency of the beer quality. For instance, bitterness is an essential quality parameter which has to be controlled within specifications at the beginning of the production process in the unfermented beer (wort) as well as in final products such as beer and beer mix beverages. Nowadays, analytical techniques for quality control in beer production are mainly based on manual supervision, i.e., samples are taken from the process and analyzed in the laboratory. This typically requires significant lab-technician effort for only a small fraction of the samples to be analyzed, which leads to significant costs for beer breweries and companies. Fourier transform mid-infrared (FT-MIR) spectroscopy was used in combination with nonlinear multivariate calibration techniques to overcome (i) the time-consuming off-line analyses in beer production and (ii) the known limitations of standard linear chemometric methods, such as partial least squares (PLS), for important quality parameters [Speers et al. (J I Brewing. 2003;109(3):229-235); Zhang et al. (J I Brewing. 2012;118(4):361-367)] such as bitterness, citric acid, total acids, free amino nitrogen, final attenuation, and foam stability. The calibration models are established with enhanced nonlinear techniques based (i) on a new piecewise-linear version of PLS that employs fuzzy rules for locally partitioning the latent variable space, and (ii) on extensions of support vector regression variants (ε-PLSSVR and ν-PLSSVR) for overcoming high computation times in high-dimensional problems and time-intensive, inappropriate settings of the kernel parameters. Furthermore, we introduce a new model selection scheme based on bagged ensembles in order to improve robustness and thus the predictive quality of the final models. The approaches are tested on real-world calibration data sets for wort and beer mix beverages, and successfully compared to linear methods, showing a clear out-performance in most cases and meeting the model quality requirements defined by the experts at the beer company. Figure: Workflow for calibration of nonlinear model ensembles from FT-MIR spectra in beer production.
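A hedged sketch of the bagged-ensemble idea with a plain linear PLS base learner (scikit-learn); the paper's actual base models are the fuzzy piecewise-linear PLS and PLS-SVR variants, which this sketch does not reproduce:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.utils import resample

def bagged_pls(X_train, y_train, X_test, n_models=25, n_components=8):
    """Average the predictions of PLS models fit on bootstrap resamples
    of the calibration spectra (a simple bagging ensemble)."""
    preds = []
    for seed in range(n_models):
        Xb, yb = resample(X_train, y_train, random_state=seed)
        model = PLSRegression(n_components=n_components).fit(Xb, yb)
        preds.append(model.predict(X_test).ravel())
    return np.mean(preds, axis=0)
```

Averaging over bootstrap resamples is what gives the ensemble its robustness against individual calibration samples, the property the model selection scheme above exploits.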
On the interpretation of weight vectors of linear models in multivariate neuroimaging.
Haufe, Stefan; Meinecke, Frank; Görgen, Kai; Dähne, Sven; Haynes, John-Dylan; Blankertz, Benjamin; Bießmann, Felix
2014-02-15
The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods. Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data was generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data generating process. Examples are multivariate classifiers. Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels the activity of which is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels the activity of which is statistically independent of the brain process under study. As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
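For the linear case, the transformation proposed here (as published in Haufe et al., 2014) recovers the forward-model activation pattern A from a backward model with weight matrix W via

```latex
A = \Sigma_x\, W\, \Sigma_{\hat{s}}^{-1}
```

where Σ_x is the covariance of the measured data x and Σ_ŝ the covariance of the extracted sources ŝ = Wᵀx. It is A, not W, that admits the neurophysiological interpretation discussed above.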
Advanced Integrated Display System V/STOL Program Performance Specification. Volume I.
1980-06-01
...sensor inputs required before the sensor can be designated acceptable. The reactivation count of each sensor parameter which satisfies its veri... 3.5.2 AIDS Configuration Parameters; 3.5.3 AIDS Throughput Requirements; 4 Quality Assurance... lists the adaptation parameters of the AIDS software; these parameters include the throughput and memory requirements of the software. 3.2 System
Manifold learning of brain MRIs by deep learning.
Brosch, Tom; Tam, Roger
2013-01-01
Manifold learning of medical images plays a potentially important role in modeling anatomical variability within a population, with applications that include segmentation, registration, and prediction of clinical parameters. This paper describes a novel method for learning the manifold of 3D brain images that, unlike most existing manifold learning methods, does not require the manifold space to be locally linear and does not require a predefined similarity measure or a prebuilt proximity graph. Our manifold learning method is based on deep learning, a machine learning approach that uses layered networks (called deep belief networks, or DBNs) and has recently received much attention in the computer vision field due to its success in object recognition tasks. DBNs have traditionally been too computationally expensive for application to 3D images due to the large number of trainable parameters. Our primary contributions are (1) a much more computationally efficient training method for DBNs that makes training on 3D medical images with a resolution of up to 128 × 128 × 128 practical, and (2) the demonstration that DBNs can learn a low-dimensional manifold of brain volumes that detects modes of variation that correlate with demographic and disease parameters.
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
The Mira-Titan universe. Precision predictions for dark energy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Bingham, Derek; Lawrence, Earl
2016-03-28
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
Development of Overarm Throwing Technique Reflects Throwing Ability during Childhood
KASUYAMA, Tatsuya; MUTOU, Ikuo; SASAMOTO, Hitoshi
2016-01-01
Background: It is important to acquire fundamental movement skills during childhood. Throwing is a representative manipulative skill that depends on various intrinsic factors. However, the relationship between intrinsic factors and throwing ability in childhood is unclear. The purpose of this study was to investigate intrinsic factors related to the ball-throwing distance of Japanese elementary school children. Methods: Japanese elementary school children from grades 1-6 (aged 6-12 years; n=112) participated in this study. The main outcome was throwing ability, measured as the ball-throwing distance. We measured five general anthropometric parameters, seven physical fitness parameters, and the Roberton's developmental sequence for all subjects. The relationships between throwing ability and the 13 parameters were analysed. Results: The Roberton's developmental sequence was the best predictor of ball-throwing distance (r=0.80, p≤0.01). The best multiple regression model, which included sex, handgrip strength, shuttle run test, and the Roberton's developmental sequence, accounted for 81% of the total variance. Conclusions: The development of correct throwing technique reflects throwing ability in childhood. In addition to the throwing sequence, enhancement of grip strength and aerobic capacity is also required for children's throwing ability. PMID:28289578
NASA Technical Reports Server (NTRS)
Vajingortin, L. D.; Roisman, W. P.
1991-01-01
The problem of ensuring the required quality of products and/or technological processes often becomes more difficult because there is no general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time taken to finish complex, vital articles. To create this theory, one has to overcome a number of difficulties and solve the following tasks: the creation of reliable and stable mathematical models showing the influence of the primary factors on the output parameters; the development of a new technique for assigning tolerances to primary factors with regard to economic, technological, and other criteria, based on the solution of the main problem; and the well-reasoned assignment of nominal values for the primary factors, which serve as the basis for creating tolerances. Each of the above-listed tasks is of independent importance. An attempt is made to give solutions for this problem.
NMR methods for metabolomics of mammalian cell culture bioreactors.
Aranibar, Nelly; Reily, Michael D
2014-01-01
Metabolomics has become an important tool for measuring pools of small molecules in mammalian cell cultures expressing therapeutic proteins. NMR spectroscopy has played an important role, largely because it requires minimal sample preparation, does not require chromatographic separation, and is quantitative. The concentrations of large numbers of small molecules in the extracellular media or within the cells themselves can be measured directly on the culture supernatant and on the supernatant of the lysed cells, respectively, and correlated with endpoints such as titer, cell viability, or glycosylation patterns. The observed changes can be used to generate hypotheses by which these parameters can be optimized. This chapter focuses on the sample preparation, data acquisition, and analysis to get the most out of NMR metabolomics data from CHO cell cultures but could easily be extended to other in vitro culture systems.
Cai, Wenli; Lee, June-Goo; Fikry, Karim; Yoshida, Hiroyuki; Novelline, Robert; de Moya, Marc
2012-07-01
It is commonly believed that the size of a pneumothorax is an important determinant of treatment decisions, in particular regarding whether chest tube drainage (CTD) is required. However, volumetric quantification of pneumothoraces has not routinely been performed in clinical practice. In this paper, we introduce an automated computer-aided volumetry (CAV) scheme for quantifying the volume of pneumothoraces in chest multi-detector CT (MDCT) images. Moreover, we investigated the impact of accurate pneumothorax volume on the performance of decision-making regarding CTD in the management of traumatic pneumothoraces. For this purpose, an occurrence frequency map was calculated for quantitative analysis of the importance of each clinical parameter in the decision regarding CTD, by a computer simulation of decision-making using a genetic algorithm (GA) and a support vector machine (SVM). A total of 14 clinical parameters, including the volume of the pneumothorax calculated by our CAV scheme, were collected as parameters available for decision-making. The results showed that volume was the dominant parameter in decision-making regarding CTD, with an occurrence frequency value of 1.00. The results also indicated that including volume provided the best performance, a statistically significant improvement over the tests in which volume was excluded from the clinical parameters. This study provides scientific evidence for the application of a CAV scheme for MDCT volumetric quantification of pneumothoraces in the management of clinically stable chest trauma patients with traumatic pneumothorax. Copyright © 2012 Elsevier Ltd. All rights reserved.
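A hedged sketch of how an occurrence frequency map can be estimated by repeated wrapper-style feature selection with an SVM. This is an illustration only: it substitutes a simple random-restart hill climb for the paper's genetic algorithm, and all names are hypothetical:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def occurrence_frequency(X, y, n_runs=20, n_iter=100, seed=0):
    """Fraction of selection runs in which each feature survives.
    Each run greedily flips feature bits to improve CV accuracy."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    counts = np.zeros(n_features)
    for _ in range(n_runs):
        mask = rng.random(n_features) < 0.5       # random initial subset
        if not mask.any():
            mask[rng.integers(n_features)] = True
        best = cross_val_score(SVC(), X[:, mask], y, cv=5).mean()
        for _ in range(n_iter):
            cand = mask.copy()
            j = rng.integers(n_features)
            cand[j] = ~cand[j]                    # flip one feature bit
            if cand.any():
                score = cross_val_score(SVC(), X[:, cand], y, cv=5).mean()
                if score >= best:
                    best, mask = score, cand
        counts += mask
    return counts / n_runs
```

A parameter selected in nearly every run (frequency near 1.00, as reported for volume) is the dominant one for the decision task.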
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Title 40, Protection of Environment, § 761.389 Testing parameter requirements (Environmental Protection Agency (continued), Toxic...). Under § 761.79(d)(4): There are no restrictions on the...
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
Title 40, Protection of Environment (2011): § 761.389 Testing parameter requirements. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), TOXIC... Under § 761.79(d)(4). There are no restrictions on the...
Major Design Drivers for LEO Space Surveillance in Europe and Solution Concepts
NASA Astrophysics Data System (ADS)
Krag, Holger; Flohrer, Tim; Klinkrad, Heiner
Europe is preparing for the development of an autonomous system for space situational awareness. One important segment of this new system will be dedicated to surveillance and tracking of space objects in Earth orbits. First concept and capability analysis studies have led to a draft system proposal. This proposal foresees, in a first deployment step, a groundbased system consisting of radar sensors and a network of optical telescopes. These sensors will be designed to have the capability of building up and maintaining a catalogue of space objects. A number of related services will be provided, including collision avoidance and the prediction of uncontrolled reentry events. Currently, the user requirements are consolidated, defining the different services, and the related accuracy and timeliness of the derived products. In this consolidation process parameters like the lower diameter limit above which catalogue coverage is to be achieved, the degree of population coverage in various orbital regions and the accuracy of the orbit data maintained in the catalogue are important design drivers for the selection of number and location of the sensors, and the definition of the required sensor performance. Further, the required minimum time for the detection of a manoeuvre, a newly launched object or a fragmentation event, significantly determines the required surveillance performance. In the requirement consolidation process the performance to be specified has to be based on a careful analysis which takes into account accuracy constraints of the services to be provided, the technical feasibility, complexity and costs. User requirements can thus not be defined without understanding the consequences they would pose on the system design. This paper will outline the design definition process for the surveillance and tracking segment of the European space situational awareness system. The paper will focus on the low-Earth orbits (LEO). It will present the core user requirements and the definition of the derived services. The desired performance parameters will be explained together with presenting their rationale and justification. This will be followed by an identification of the resulting major design drivers. The influence of these drivers on the system design will be analysed, including limiting object diameter, population coverage, orbit maintenance accuracy, and the minimum time to detect events like manoeuvres or breakups. The underlying simulation and verification concept will be explained. Finally, a first compilation of performance parameters for the surveillance and tracking segment will be presented and discussed.
NASA Technical Reports Server (NTRS)
Russell, E. E.; Chandos, R. A.; Kodak, J. C.; Pellicori, S. F.; Tomasko, M. G.
1974-01-01
The constraints that are imposed on the Outer Planet Missions (OPM) imager design are of critical importance. Imager system modeling analyses define important parameters and systematic means for trade-offs applied to specific Jupiter orbiter missions. Possible image sequence plans for Jupiter missions are discussed in detail. Considered is a series of orbits that allow repeated near encounters with three of the Jovian satellites. The data handling involved in the image processing is discussed, and it is shown that only minimal processing is required for the majority of images for a Jupiter orbiter mission.
Pelvic incidence variation among individuals: functional influence versus genetic determinism.
Chen, Hong-Fang; Zhao, Chang-Qing
2018-03-20
Pelvic incidence has become one of the most important sagittal parameters in spinal surgery. Despite its great importance, pelvic incidence can vary from 33° to 85° in the normal population. The reasons for this great variability in pelvic incidence remain unexplored. The objective of this article is to present some possible interpretations for the great variability in pelvic incidence under both normal and pathological conditions and to further understand the determinants of pelvic incidence from the perspective of the functional requirements for bipedalism and genetic backgrounds via a literature review. We postulate that both pelvic incidence and pelvic morphology may be genetically predetermined, and a great variability in pelvic incidence may already exist even before birth. This great variability may also serve as a further reminder that the sagittal profile, bipedal locomotion mode, and genetic background of every individual are unique and specific, and clinicians should avoid making broad, universally applied generalizations about pelvic incidence. Although pelvic incidence is an important parameter and there are many theories behind its variability, we still do not have clear mechanistic answers.
Takemoto, Kazuhiro; Aie, Kazuki
2017-05-25
Host-pathogen interactions are important in a wide range of research fields. Given the importance of metabolic crosstalk between hosts and pathogens, a metabolic network-based reverse ecology method was proposed to infer these interactions. However, the validity of this method remains unclear because of the various explanations presented and the influence of potentially confounding factors that have thus far been neglected. We re-evaluated the importance of the reverse ecology method for evaluating host-pathogen interactions while statistically controlling for confounding effects using oxygen requirement, genome, metabolic network, and phylogeny data. Our data analyses showed that host-pathogen interactions were more strongly influenced by genome size, primary network parameters (e.g., number of edges), oxygen requirement, and phylogeny than the reverse ecology-based measures. These results indicate the limitations of the reverse ecology method; however, they do not discount the importance of adopting reverse ecology approaches altogether. Rather, we highlight the need for developing more suitable methods for inferring host-pathogen interactions and conducting more careful examinations of the relationships between metabolic networks and host-pathogen interactions.
Chapter A6. Section 6.1. Temperature
Revised by Wilde, Franceska D.
2006-01-01
Accurate temperature measurements are required for accurate determinations of important environmental parameters such as pH, specific electrical conductance, and dissolved oxygen, and for the determination of chemical reaction rates and equilibria, biological activity, and physical fluid properties. This section of the National Field Manual (NFM) describes U.S. Geological Survey (USGS) guidance and protocols for the measurement of temperature in air, ground water, and surface water and for calibration of the equipment used.
NASA Technical Reports Server (NTRS)
Green, R. O.; Roberts, D. A.
1994-01-01
Plant species composition and plant architectural attributes are critical parameters required for the measuring, monitoring and modeling of terrestrial ecosystems. Remote sensing is commonly cited as an important tool for deriving vegetation properties at an appropriate scale for ecosystem studies, ranging from local, to regional and even synoptic scales (e.g. Wessman 1992).
Electrical Characterization of Semiconductor Materials and Devices
NASA Astrophysics Data System (ADS)
Deen, M.; Pascal, Fabien
Semiconductor materials and devices continue to occupy a preeminent technological position due to their importance when building integrated electronic systems used in a wide range of applications from computers, cell-phones, personal digital assistants, digital cameras and electronic entertainment systems, to electronic instrumentation for medical diagnostics and environmental monitoring. Key ingredients of this technological dominance have been the rapid advances made in the quality and processing of materials - semiconductors, conductors and dielectrics - which have given metal oxide semiconductor device technology its important characteristics of negligible standby power dissipation, good input-output isolation, surface potential control and reliable operation. However, when assessing material quality and device reliability, it is important to have fast, nondestructive, accurate and easy-to-use electrical characterization techniques available, so that important parameters such as carrier doping density, type and mobility of carriers, interface quality, oxide trap density, semiconductor bulk defect density, contact and other parasitic resistances and oxide electrical integrity can be determined. This chapter describes some of the more widely employed and popular techniques that are used to determine these important parameters. The techniques presented in this chapter range in both complexity and test structure requirements from simple current-voltage measurements to more sophisticated low-frequency noise, charge pumping and deep-level transient spectroscopy techniques.
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances will be discussed which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters of calibration model, analytical limits, accuracy (bias and precision), matrix effects, and recovery have to be approached differently. Highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 62... What data must I collect with my continuous parameter monitoring systems and is this requirement enforceable? (a) Where continuous parameter monitoring systems are used, obtain 1-hour arithmetic averages for three parameters: (1...
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 62... What data must I collect with my continuous parameter monitoring systems and is this requirement enforceable? (a) Where continuous parameter monitoring systems are used, obtain 1-hour arithmetic averages for three parameters: (1...
System identification for modeling for control of flexible structures
NASA Technical Reports Server (NTRS)
Mettler, Edward; Milman, Mark
1986-01-01
The major components of a design and operational flight strategy for flexible structure control systems are presented. In this strategy an initial distributed parameter control design is developed and implemented from available ground test data and on-orbit identification using sophisticated modeling and synthesis techniques. The reliability of this high performance controller is directly linked to the accuracy of the parameters on which the design is based. Because uncertainties inevitably grow without system monitoring, maintaining the control system requires an active on-line system identification function to supply parameter updates and covariance information. Control laws can then be modified to improve performance when the error envelopes are decreased. In terms of system safety and stability, the covariance information is as important as the parameter values themselves. If the on-line system ID function detects an increase in parameter error covariances, then corresponding adjustments must be made in the control laws to increase robustness. If the error covariances exceed some threshold, an autonomous calibration sequence could be initiated to restore the error envelopes to an acceptable level.
Use of the Kalman Filter for Aortic Pressure Waveform Noise Reduction
Lu, Hsiang-Wei; Wu, Chung-Che; Aliyazicioglu, Zekeriya; Kang, James S.
2017-01-01
Clinical applications that require extraction and interpretation of physiological signals or waveforms are susceptible to corruption by noise or artifacts. Real-time hemodynamic monitoring systems are important for clinicians to assess the hemodynamic stability of surgical or intensive care patients by interpreting hemodynamic parameters generated by an analysis of aortic blood pressure (ABP) waveform measurements. Since hemodynamic parameter estimation algorithms often detect events and features from measured ABP waveforms to generate hemodynamic parameters, noise and artifacts integrated into ABP waveforms can severely distort the interpretation of hemodynamic parameters by hemodynamic algorithms. In this article, we propose the use of the Kalman filter and the 4-element Windkessel model with static parameters, arterial compliance C, peripheral resistance R, aortic impedance r, and the inertia of blood L, to represent aortic circulation for generating accurate estimations of ABP waveforms through noise and artifact reduction. Results show the Kalman filter could very effectively eliminate noise and generate a good estimation from the noisy ABP waveform based on the past state history. The power spectrum of the measured ABP waveform and the synthesized ABP waveform shows two similar harmonic frequencies. PMID:28611850
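As a rough illustration of the filtering step, the sketch below applies a generic linear Kalman filter to a sampled pressure signal. It uses a simple constant-velocity state model rather than the authors' 4-element Windkessel dynamics, and the sampling interval and noise intensities (dt, q, r) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_denoise(z, dt=0.005, q=50.0, r=4.0):
    """Denoise a sampled pressure waveform z with a linear Kalman filter.
    State x = [pressure, pressure rate]; constant-velocity dynamics.
    q (process-noise intensity) and r (measurement variance) are assumed."""
    z = np.asarray(z, dtype=float)
    F = np.array([[1.0, dt], [0.0, 1.0]])              # state transition
    H = np.array([[1.0, 0.0]])                         # observe pressure only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])                # discretized process noise
    R = np.array([[r]])
    x, P = np.array([z[0], 0.0]), np.eye(2)
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        x, P = F @ x, F @ P @ F.T + Q                  # predict from past history
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.atleast_1d(zi) - H @ x)        # update with measurement
        P = (np.eye(2) - K @ H) @ P
        out[i] = x[0]
    return out
```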
NASA Astrophysics Data System (ADS)
Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.
2014-12-01
Seismic moment tensor is one of the most important source parameters, defining the earthquake dimension and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impacts of moment magnitude uncertainties upon their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is reported in the literature with moment magnitude (Mw) values spanning between 5.63 and 6.12. An uncertainty of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination, and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, the epicentral distance, and the azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and must be reported with their related uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.
Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement
NASA Astrophysics Data System (ADS)
Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.
2013-09-01
Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to the pole placement for the synthesis of a linear controller has been presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computation of a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require only the Gauss-Jordan elimination of a small matrix and the computation of the roots of a single-variable polynomial. The maximum degree of this polynomial is not greater than six in general, and for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
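For readers unfamiliar with the technique, the sketch below shows the eliminate-then-root structure on a hypothetical two-equation system (the pole-placement equations themselves are not reproduced in the abstract): a lexicographic Groebner basis triangularizes the system so that the last basis element is a polynomial in one variable, whose roots back-substitute into the remaining unknowns.

```python
from sympy import symbols, groebner

# Hypothetical small polynomial system standing in for the equations that
# remain after the free parameters have been fixed (illustrative only).
x, y = symbols('x y')
eqs = [x**2 + y**2 - 5, x*y - 2]

# A lexicographic Groebner basis "triangularizes" the system: the final
# basis element here is y**4 - 5*y**2 + 4, a single-variable polynomial.
G = groebner(eqs, x, y, order='lex')
for g in G.exprs:
    print(g)
```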
NASA Astrophysics Data System (ADS)
Khidhir, Basim A.; Mohamed, Bashir
2011-02-01
Machining parameters have an important effect on tool wear and surface finish, so manufacturers need to obtain optimal operating parameters with a minimum number of experiments, as well as minimizing simulations, in order to reduce machining set-up costs. The cutting speed is one of the most important cutting parameters to evaluate: on the one hand it most clearly influences tool life, tool stability, and cutting process quality, and on the other hand it controls production flow. Due to more demanding manufacturing systems, the requirements for reliable technological information have increased. A reliable analysis must consider the cutting zone (tip insert-workpiece-chip system), where the mechanics of cutting are very complicated: the chip is formed in the shear plane (at the entrance to the shear zone) and shaped in the sliding plane. The temperature contributions in the primary shear, chamfer, sticking, and sliding zones are expressed as functions of the unknown shear angle on the rake face and the temperature-modified flow stress in each zone. The experiments were carried out on a CNC lathe, with surface finish and tool tip wear measured in process. Machining experiments were conducted, and reasonable agreement is observed for turning with a high depth of cut. The results of this research help to guide the design of new cutting tool materials and studies on the evaluation of machining parameters to further advance the productivity of machining the nickel-based alloy Hastelloy C-276.
Chen, Siyuan; Epps, Julien
2014-12-01
Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images; hence, methods to date have required manual intervention for fine-tuning of parameters. In this paper, a novel self-tuning threshold method, applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy of the proposed methods is higher than that of widely used manually tuned or fixed-parameter methods. Importantly, the method is convenient and robust for accurate, fast estimation of eye activity in the presence of variations due to different users, task types, loads, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.
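A minimal sketch of such a segmentation pipeline is given below, using OpenCV (version 4 assumed). Otsu's parameter-free threshold stands in for the authors' self-tuning method, and the dual-ellipse eyelid-occlusion step is omitted; all names and the single-ellipse simplification are illustrative.

```python
import cv2
import numpy as np

def segment_pupil(gray):
    """Sketch: segment the dark pupil in an IR eye image and fit an ellipse.
    Otsu thresholding is a stand-in for the paper's self-tuning threshold."""
    blur = cv2.GaussianBlur(gray, (7, 7), 0)
    # Pupil is dark under IR illumination: invert so it becomes foreground.
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return None
    pupil = max(cnts, key=cv2.contourArea)        # largest dark blob
    hull = cv2.convexHull(pupil)                  # smooth eyelash notches
    if len(hull) < 5:                             # fitEllipse needs >= 5 points
        return None
    (cx, cy), (MA, ma), angle = cv2.fitEllipse(hull)
    return cx, cy, np.pi * (MA / 2) * (ma / 2)    # centre and pupil area
```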
NASA Astrophysics Data System (ADS)
Vitelaru, Catalin; Aijaz, Asim; Constantina Parau, Anca; Kiss, Adrian Emil; Sobetkii, Arcadie; Kubart, Tomas
2018-04-01
Pressure- and target-voltage-driven discharge runaway from low to high discharge current density regimes in high power impulse magnetron sputtering of carbon is investigated. The main purpose is to provide meaningful insight into the discharge dynamics, with the ultimate goal of establishing a correlation between discharge properties and process parameters to control film growth. This is achieved by examining a wide range of pressures (2–20 mTorr) and target voltages (700–850 V) and measuring the ion saturation current density at the substrate position. We show that the minimum plasma impedance is an important parameter identifying the discharge transition as well as establishing a stable operating condition. Using the formalism of the generalized recycling model, we introduce a new parameter, the 'recycling ratio', to quantify process gas recycling under specific process conditions. The model takes into account the ion flux to the target, the amount of gas available, and the amount of gas required for sustaining the discharge. We show that this parameter describes the relation between gas recycling and the discharge current density. As a test case, we discuss the pressure- and voltage-driven transitions when the gas composition is changed by adding Ne to the discharge. We propose that standard Ar HiPIMS discharges operated with significant gas recycling do not require Ne to increase the carbon ionization.
Comprehensive non-dimensional normalization of gait data.
Pinzone, Ornella; Schwartz, Michael H; Baker, Richard
2016-02-01
Normalizing clinical gait analysis data is required to remove variability due to physical characteristics such as leg length and weight. This is particularly important for children, where both are associated with age. In most clinical centres conventional normalization (by mass only) is used, whereas there is a stronger biomechanical argument for non-dimensional normalization. This study used data from 82 typically developing children to compare how the two schemes performed over a wide range of temporal-spatial and kinetic parameters, by calculating the coefficients of determination with leg length, weight and height. 81% of the conventionally normalized parameters had a coefficient of determination above the threshold for a statistical association (p<0.05), compared to 23% of those normalized non-dimensionally. All the conventionally normalized parameters exceeding this threshold showed a reduced association with non-dimensional normalization. In conclusion, non-dimensional normalization is more effective than conventional normalization in reducing the effects of height, weight and age across a comprehensive range of temporal-spatial and kinetic parameters. Copyright © 2015 Elsevier B.V. All rights reserved.
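As a sketch of the non-dimensional scheme, the helper below assumes the divisors of Hof's (1996) proposal (time by sqrt(l/g), speed by sqrt(g·l), force by m·g, moment by m·g·l, power by m·g^(3/2)·l^(1/2)); the exact divisors used in the study may differ.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def nondimensional(value, kind, mass, leg_len):
    """Divide each gait quantity by the combination of body mass m,
    leg length l and g that carries its units (Hof 1996 convention)."""
    m, l = mass, leg_len
    divisors = {
        'time':   math.sqrt(l / G),
        'speed':  math.sqrt(G * l),
        'force':  m * G,              # conventional (mass-only) scheme stops here
        'moment': m * G * l,
        'power':  m * G ** 1.5 * math.sqrt(l),
    }
    return value / divisors[kind]

# e.g. walking speed 1.2 m/s for a child with 0.75 m legs:
print(nondimensional(1.2, 'speed', mass=30.0, leg_len=0.75))  # ~0.44, dimensionless
```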
Sensitivity analysis for best-estimate thermal models of vertical dry cask storage systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeVoe, Remy R.; Robb, Kevin R.; Skutnik, Steven E.
Loading requirements for dry cask storage of spent nuclear fuel are driven primarily by decay heat capacity limitations, which themselves are determined through recommended limits on peak cladding temperature within the cask. This study examines the relative sensitivity of peak material temperatures within the cask to parameters that influence both the stored fuel residual decay heat as well as heat removal mechanisms. Here, these parameters include the detailed reactor operating history parameters (e.g., soluble boron concentrations and the presence of burnable poisons) as well as factors that influence heat removal, including non-dominant processes (such as conduction from the fuel basket to the canister and radiation within the canister) and ambient environmental conditions. By examining the factors that drive heat removal from the cask alongside well-understood factors that drive decay heat, it is therefore possible to make a contextual analysis of the most important parameters to evaluation of peak material temperatures within the cask.
Selgrade, J F; Harris, L A; Pasteur, R D
2009-10-21
This study presents a 13-dimensional system of delayed differential equations which predicts serum concentrations of five hormones important for regulation of the menstrual cycle. Parameters for the system are fit to two different data sets for normally cycling women. For these best fit parameter sets, model simulations agree well with the two different data sets but one model also has an abnormal stable periodic solution, which may represent polycystic ovarian syndrome. This abnormal cycle occurs for the model in which the normal cycle has estradiol levels at the high end of the normal range. Differences in model behavior are explained by studying hysteresis curves in bifurcation diagrams with respect to sensitive model parameters. For instance, one sensitive parameter is indicative of the estradiol concentration that promotes pituitary synthesis of a large amount of luteinizing hormone, which is required for ovulation. Also, it is observed that models with greater early follicular growth rates may have a greater risk of cycling abnormally.
Determination of the QCD Λ Parameter and the Accuracy of Perturbation Theory at High Energies.
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2016-10-28
We discuss the determination of the strong coupling α_MSbar(m_Z) or, equivalently, the QCD Λ parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme s and at some energy scale μ. The higher the scale μ, the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ parameter in three-flavor QCD, we perform lattice computations in a scheme that allows us to nonperturbatively reach very high energies, corresponding to α_s = 0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a 3% error in the Λ parameter, while data around α_s ≈ 0.2 are clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
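At one loop, the connection between the coupling and the Λ parameter has the closed form α_s(μ) = 1/(2·b0·ln(μ/Λ)) with b0 = (33 − 2nf)/(12π), so Λ = μ·exp(−1/(2·b0·α_s(μ))). The toy calculation below uses only this one-loop formula with illustrative inputs; the 3% determination quoted above relies on nonperturbative running and higher-order matching.

```python
import math

def lambda_one_loop(alpha_s, mu_gev, nf=3):
    """One-loop estimate of the QCD Lambda parameter (illustrative only)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)      # one-loop beta-function coefficient
    return mu_gev * math.exp(-1.0 / (2 * b0 * alpha_s))

# At alpha_s = 0.1 the perturbative truncation error is small (example scale):
print(lambda_one_loop(0.1, mu_gev=70.0))     # Lambda in GeV, one-loop accuracy
```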
NASA Astrophysics Data System (ADS)
Rossinskyi, Volodymyr
2018-02-01
Biological wastewater treatment technologies in anoxic and aerobic bioreactors with recycling of the sludge mixture are used for the effective removal of organic compounds from wastewater. Changing the rate of sludge mixture recirculation between bioreactors changes and redistributes the concentrations of organic compounds in the sludge mixture and alters the hydrodynamic regimes in the bioreactors. Determining the coefficient of internal recirculation of the sludge mixture between bioreactors is important for the choice of technological parameters of biological treatment (wastewater treatment duration in the anoxic and aerobic bioreactors, flow capacity of the recirculation pumps). Determining this coefficient requires integrated consideration of a hydrodynamic parameter (flow rate), a kinetic parameter (rate of oxidation of organic compounds), and a physical-chemical parameter of the wastewater (concentration of organic compounds). A numerical experiment based on the proposed mathematical equations yielded analytical dependences of the coefficient of internal recirculation of the sludge mixture between bioreactors on the concentration of organic compounds in the wastewater and the duration of wastewater treatment in the bioreactors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest; Hadgu, Teklu; Greenberg, Harris
This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters, and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity) the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).
Optimal Draft requirement for vibratory tillage equipment using Genetic Algorithm Technique
NASA Astrophysics Data System (ADS)
Rao, Gowripathi; Chaudhary, Himanshu; Singh, Prem
2018-03-01
Agriculture is an important sector of the Indian economy. Primary and secondary tillage operations are required for any land preparation process. Conventionally, different tractor-drawn implements such as the mouldboard plough, disc plough, subsoiler, cultivator, and disc harrow are used for primary and secondary manipulation of soils. Among them, oscillatory tillage equipment uses vibratory motion for the tillage operation. Several investigators have reported that the draft requirement of conventional primary tillage implements is higher than that of oscillating ones because conventional implements remain in continuous contact with the soil. Therefore, in this paper an attempt is made to find, from the experimental data available in the literature, the optimal parameters that give minimum draft consumption, using the genetic algorithm technique.
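As a sketch of how a genetic algorithm can search for minimum-draft settings, the code below evolves a small population of (amplitude, frequency, speed) triples against a hypothetical draft model; the objective function, bounds, and GA settings are all illustrative stand-ins for the regression and data used in the paper.

```python
import random

def draft(params):
    """Hypothetical draft model D(a, f, v); form and coefficients are
    illustrative, not the authors' fitted relation."""
    a, f, v = params      # oscillation amplitude [m], frequency [Hz], speed [m/s]
    return 4.2 * v ** 1.5 + 8.0 / (1.0 + a * f) + 0.3 * f

BOUNDS = [(0.005, 0.035), (2.0, 12.0), (0.5, 2.5)]   # assumed search ranges

def ga_minimize(fitness, bounds, pop=40, gens=100, mut=0.15):
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)                         # lower draft = fitter
        elite = P[: pop // 4]                       # keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(p1, p2)]   # uniform crossover
            for i, (lo, hi) in enumerate(bounds):             # random mutation
                if random.random() < mut:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        P = elite + children
    return min(P, key=fitness)

best = ga_minimize(draft, BOUNDS)
print('optimal (a, f, v):', best, 'draft:', draft(best))
```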
FAST TRACK COMMUNICATION: Phenomenology of the equivalence principle with light scalars
NASA Astrophysics Data System (ADS)
Damour, Thibault; Donoghue, John F.
2010-10-01
Light scalar particles with couplings of sub-gravitational strength, which can generically be called 'dilatons', can produce violations of the equivalence principle. However, in order to understand experimental sensitivities one must know the coupling of these scalars to atomic systems. We report here on a study of the required couplings. We give a general Lagrangian with five independent dilaton parameters and calculate the 'dilaton charge' of atomic systems for each of these. Two combinations are particularly important. One is due to the variations in the nuclear binding energy, with a sensitivity scaling with the atomic number as A^(-1/3). The other is due to electromagnetism. We compare limits on the dilaton parameters from existing experiments.
NASA Technical Reports Server (NTRS)
Wissinger, A.; Scott, R. M.; Peters, W.; Augustyn, W., Jr.; Arnold, R.; Offner, A.; Damast, M.; Boyce, B.; Kinnaird, R.; Mangus, J. D.
1971-01-01
A means is presented whereby the effect of various changes in the most important parameters of a three-meter-aperture space astronomy telescope can be evaluated to determine design trends and to optimize the optical design configuration. Methods are defined for evaluating the theoretical optical performance of axisymmetric, centrally obscured telescopes based upon the intended astronomy research usage. A series of design parameter variations is presented to determine the optimum telescope configuration. The design optimum requires very fast primary mirrors, so the study also examines the current state of the art in fabricating large, fast primary mirrors. The conclusion is that a 3-meter primary mirror having a focal ratio as low as f/2 is feasible using currently established techniques.
Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis
NASA Astrophysics Data System (ADS)
Perttu, A. B.; Taisne, B.
2016-12-01
Infrasound is sound below the threshold of human hearing, approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, this method produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.
Potential efficiencies of open- and closed-cycle CO, supersonic, electric-discharge lasers
NASA Technical Reports Server (NTRS)
Monson, D. J.
1976-01-01
Computed open- and closed-cycle system efficiencies (laser power output divided by electrical power input) are presented for a CW carbon monoxide, supersonic, electric-discharge laser. Closed-system results include the compressor power required to overcome stagnation pressure losses due to supersonic heat addition and a supersonic diffuser. The paper shows the effect on the system efficiencies of varying several important parameters. These parameters include: gas mixture, gas temperature, gas total temperature, gas density, total discharge energy loading, discharge efficiency, saturated gain coefficient, optical cavity size and location with respect to the discharge, and supersonic diffuser efficiency. Maximum open-cycle efficiency of 80-90% is predicted; the best closed-cycle result is 60-70%.
Attitude determination and parameter estimation using vector observations - Theory
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Procedures for attitude determination based on Wahba's loss function are generalized to include the estimation of parameters other than the attitude, such as sensor biases. Optimization with respect to the attitude is carried out using the q-method, which does not require an a priori estimate of the attitude. Optimization with respect to the other parameters employs an iterative approach, which does require an a priori estimate of these parameters. Conventional state estimation methods require a priori estimates of both the parameters and the attitude, while the algorithm presented in this paper always computes the exact optimal attitude for given values of the parameters. Expressions for the covariance of the attitude and parameter estimates are derived.
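A compact sketch of the q-method step is shown below: Davenport's K matrix is assembled from weighted vector observations, and the optimal quaternion is the eigenvector belonging to the largest eigenvalue, requiring no a priori attitude. The iterative outer loop over the other parameters described above is omitted, and all names are illustrative.

```python
import numpy as np

def q_method(b_vecs, r_vecs, weights):
    """Davenport q-method for Wahba's problem: returns the quaternion
    (vector part first, scalar last) maximizing the gain function."""
    B = sum(w * np.outer(b, r) for b, r, w in zip(b_vecs, r_vecs, weights))
    sigma = np.trace(B)
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1],        # z = sum of w * (b x r)
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))                    # Davenport's K matrix
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]         # exact optimum, no a priori attitude

# Example: two noiseless observations of reference directions -> identity attitude
r1, r2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
print(q_method([r1, r2], [r1, r2], [0.5, 0.5]))   # [0, 0, 0, 1]
```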
Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R
2007-09-01
A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.
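For reference, a CNR figure of merit can be computed from two regions of interest roughly as below; the ROI placement and noise convention (background standard deviation assumed here) follow the embedded test tools in the phantom and may differ from this sketch.

```python
import numpy as np

def cnr(image, signal_roi, background_roi):
    """Contrast-to-noise ratio from two rectangular ROIs, each given as
    (row0, row1, col0, col1). CNR = |mean_s - mean_b| / std_b is one
    common convention (assumed here, not necessarily the paper's)."""
    s = image[signal_roi[0]:signal_roi[1], signal_roi[2]:signal_roi[3]]
    b = image[background_roi[0]:background_roi[1],
              background_roi[2]:background_roi[3]]
    return abs(s.mean() - b.mean()) / b.std()
```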
Inverse estimation of parameters for an estuarine eutrophication model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, using the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degrading of speed of convergence may occur. Two major factors which cause degradation of speed of convergence are cross effects among parameters and the multiple scales involved in the parameter system.
Scaling, Similarity, and the Fourth Paradigm for Hydrology
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Clark, Martyn; Samaniego, Luis; Verhoest, Niko E. C.; van Emmerik, Tim; Uijlenhoet, Remko; Achieng, Kevin; Franz, Trenton E.; Woods, Ross
2017-01-01
In this synthesis paper addressing hydrologic scaling and similarity, we posit that roadblocks in the search for universal laws of hydrology are hindered by our focus on computational simulation (the third paradigm), and assert that it is time for hydrology to embrace a fourth paradigm of data-intensive science. Advances in information-based hydrologic science, coupled with an explosion of hydrologic data and advances in parameter estimation and modelling, have laid the foundation for a data-driven framework for scrutinizing hydrological scaling and similarity hypotheses. We summarize important scaling and similarity concepts (hypotheses) that require testing, describe a mutual information framework for testing these hypotheses, describe boundary condition, state, flux, and parameter data requirements across scales to support testing these hypotheses, and discuss some challenges to overcome while pursuing the fourth hydrological paradigm. We call upon the hydrologic sciences community to develop a focused effort towards adopting the fourth paradigm and apply this to outstanding challenges in scaling and similarity.
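A minimal sketch of the mutual information computation underlying such a framework, using a simple histogram estimator (the bin count is an assumed tuning choice; the paper's framework is more elaborate):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of I(X;Y) in nats: one way to score how much a
    catchment attribute (x) tells us about a hydrologic signature (y)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```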
NASA Technical Reports Server (NTRS)
Ebeling, Charles; Beasley, Kenneth D.
1992-01-01
The first year of research to provide NASA support in predicting operational and support parameters and costs of proposed space systems is reported. Some of the specific research objectives were (1) to develop a methodology for deriving reliability and maintainability parameters and, based upon their estimates, determine the operational capability and support costs, and (2) to identify data sources and establish an initial data base to implement the methodology. Implementation of the methodology is accomplished through the development of a comprehensive computer model. While the model appears to work reasonably well when applied to aircraft systems, it was not accurate when used for space systems. The model is dynamic and should be updated as new data become available. It is particularly important to integrate the current aircraft data base with data obtained from the Space Shuttle and other space systems since subsystems unique to a space vehicle require data not available from aircraft. This research only addressed the major subsystems on the vehicle.
Technical variables in high-throughput miRNA expression profiling: much work remains to be done.
Nelson, Peter T; Wang, Wang-Xia; Wilfred, Bernard R; Tang, Guiliang
2008-11-01
MicroRNA (miRNA) gene expression profiling has provided important insights into plant and animal biology. However, there has not been ample published work about pitfalls associated with technical parameters in miRNA gene expression profiling. One source of pertinent information about technical variables in gene expression profiling is the separate and more well-established literature regarding mRNA expression profiling. However, many aspects of miRNA biochemistry are unique. For example, the cellular processing and compartmentation of miRNAs, the differential stability of specific miRNAs, and aspects of global miRNA expression regulation require specific consideration. Additional possible sources of systematic bias in miRNA expression studies include the differential impact of pre-analytical variables, substrate specificity of nucleic acid processing enzymes used in labeling and amplification, and issues regarding new miRNA discovery and annotation. We conclude that greater focus on technical parameters is required to bolster the validity, reliability, and cultural credibility of miRNA gene expression profiling studies.
Design, fabrication, and operation of a test rig for high-speed tapered-roller bearings
NASA Technical Reports Server (NTRS)
Signer, H. R.
1974-01-01
A tapered-roller bearing test machine was designed, fabricated and successfully operated at speeds to 20,000 rpm. Infinitely variable radial loads to 26,690 N (6,000 lbs.) and thrust loads to 53,380 N (12,000 lbs.) can be applied to test bearings. The machine instrumentation proved to have the accuracy and reliability required for parametric bearing performance testing and has the capability of monitoring all programmed test parameters at continuous operation during life testing. This system automatically shuts down a test if any important test parameter deviates from the programmed conditions, or if a bearing failure occurs. A lubrication system was developed as an integral part of the machine, capable of lubricating test bearings by external jets and by means of passages feeding through the spindle and bearing rings into the critical internal bearing surfaces. In addition, provisions were made for controlled oil cooling of inner and outer rings to effect the type of bearing thermal management that is required when testing at high speeds.
Investigation of chemical vapor deposition of garnet films for bubble domain memories
NASA Technical Reports Server (NTRS)
Besser, P. J.; Hamilton, T. N.
1973-01-01
The important process parameters and control required to grow reproducible device quality ferrimagnetic films by chemical vapor deposition (CVD) were studied. The investigation of the critical parameters in the CVD growth process led to the conclusion that the required reproducibility of film properties cannot be achieved with individually controlled separate metal halide sources. Therefore, the CVD growth effort was directed toward replacement of the halide sources with metallic sources with the ultimate goal being the reproducible growth of complex garnet compositions utilizing a single metal alloy source. The characterization of the YGdGaIG films showed that certain characteristics of this material, primarily the low domain wall energy and the large temperature sensitivity, severely limited its potential as a useful material for bubble domain devices. Consequently, at the time of the change from halide to metallic sources, the target film compositions were shifted to more useful materials such as YGdTmGaIG, YEuGaIG and YSmGaIG.
Scaling and Systems Considerations in Pulsed Inductive Thrusters
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.
2007-01-01
Performance scaling in pulsed inductive thrusters is discussed in the context of previous experimental studies and modeling results. Two processes, propellant ionization and acceleration, are interconnected where overall thruster performance and operation are concerned, but they are separated here to gain physical insight into each process and arrive at quantitative criteria that should be met to address or mitigate inherent inductive thruster difficulties. The effects of preionization in lowering the discharge energy requirements relative to a case where no preionization is employed, and in influencing the location of the initial current sheet, are described. The relevant performance scaling parameters for the acceleration stage are reviewed, emphasizing their physical importance and the numerical values required for efficient acceleration. The scaling parameters are then related to the design of the pulsed power train providing current to the acceleration stage. The impact of various choices in pulsed power train and circuit topology selection are reviewed, paying special attention to how these choices mitigate or exacerbate switching, lifetime, and power consumption issues.
Jäger, B
1983-09-01
The technology of composting must guarantee the material-chemical, biological, and physical-technical reaction conditions essential for the rotting process. In this, the constituents of the input material and the C/N ratio play an important role. Maintaining optimum decomposition conditions is rendered difficult by the fact that the physical-technical reaction parameters partly exclude each other. These are: optimum humidity, adequate air/oxygen supply, large active surface, and a loose structure with sufficient decomposition volume. The processing of the raw refuse required to maintain the physical-technical reaction parameters can be carried out either by the conventional method of preliminary fragmentizing, sieving, and mixing, or in conjunction with separation for recycling in adapted systems. The latter procedure obviates some drawbacks which mainly result from the high expenditure required for preliminary fragmentation of the raw refuse. Moreover, presorting affords the possibility of reducing the heavy-metal content of the organic composting fraction, and this comes closer to solving the pollutant disposal problem which at present stands in the way of composting being accepted as an ecological waste disposal method.
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
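As an illustration of why sensitivities are central to the computationally frugal inferential methods discussed above, the sketch below computes first-order parameter uncertainties for a model calibrated by weighted least squares, Cov(p) ≈ s²(JᵀWJ)⁻¹, where the Jacobian J holds exactly the sensitivities in question. The function names and the diagonal-weight assumption are illustrative, not tied to UCODE-2005, PEST, or MMA.

```python
import numpy as np

def linear_confidence(jacobian, weights, residuals):
    """First-order (inferential) parameter uncertainty for a weighted
    least-squares calibration: Cov(p) ~ s^2 * (J^T W J)^-1."""
    J, W, r = jacobian, np.diag(weights), residuals
    n, k = J.shape                       # n observations, k parameters
    s2 = (r @ W @ r) / (n - k)           # error variance estimate
    cov = s2 * np.linalg.inv(J.T @ W @ J)
    return np.sqrt(np.diag(cov))         # one-sigma parameter uncertainties
```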
Pulsed Electromagnetic Acceleration of Plasmas
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Cassibry, Jason T.; Markusic, Tom E.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
A major shift in paradigm in driving pulsed plasma thrusters is necessary if the original goal of accelerating a plasma sheet efficiently to high velocities as a plasma "slug" is to be realized. Firstly, the plasma interior needs to be highly collisional so that it can be dammed by the plasma edge layer (upstream) adjacent to the driving 'vacuum' magnetic field. Secondly, the plasma edge layer needs to be strongly magnetized, with a Hall parameter of the order of unity in this region, to ensure excellent coupling of the Lorentz force to the plasma. Thirdly, to prevent and/or suppress the occurrence of secondary arcs or restrike behind the plasma, the region behind the plasma needs to be collisionless and strongly magnetized with a sufficiently large Hall parameter. This places a vacuum requirement on the bore conditions prior to the shot. These requirements are quantified in the paper and lead to the introduction of three new design parameters corresponding to these three plasma requirements. The first parameter, labeled in the paper as γ1, pertains to the permissible ratio of the diffusive excursion of the plasma during the course of the acceleration to the plasma longitudinal dimension. The second parameter is the required Hall parameter of the edge plasma region, and the third parameter is the required Hall parameter of the region behind the plasma. Experimental research is required to quantify the values of these design parameters. Based upon fundamental theory of the transport processes in plasma, some theoretical guidance on the choice of these parameters is provided to help design the experiments needed to acquire these data.
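A back-of-envelope helper for the second and third design parameters, assuming the electron Hall parameter β = ω_ce/ν_c with SI constants; the field strength and collision frequency in the example are illustrative, not values from the paper.

```python
E_CHARGE = 1.602e-19     # elementary charge, C
M_ELECTRON = 9.109e-31   # electron mass, kg

def hall_parameter(b_tesla, collision_freq):
    """Electron Hall parameter beta = omega_ce / nu_c: beta ~ 1 is the target
    in the current-carrying edge layer, beta >> 1 behind the sheet to
    suppress restrike."""
    omega_ce = E_CHARGE * b_tesla / M_ELECTRON   # electron cyclotron frequency
    return omega_ce / collision_freq

# e.g. a 0.5 T driving field and a 1e9 1/s collision frequency:
print(hall_parameter(0.5, 1e9))   # ~88: strongly magnetized
```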
Model-experiment interaction to improve representation of phosphorus limitation in land models
NASA Astrophysics Data System (ADS)
Norby, R. J.; Yang, X.; Cabugao, K. G. M.; Childs, J.; Gu, L.; Haworth, I.; Mayes, M. A.; Porter, W. S.; Walker, A. P.; Weston, D. J.; Wright, S. J.
2015-12-01
Carbon-nutrient interactions play important roles in regulating terrestrial carbon cycle responses to atmospheric and climatic change. None of the CMIP5 models has included routines to represent the phosphorus (P) cycle, although P is commonly considered to be the most limiting nutrient in highly productive, lowland tropical forests. Model simulations with the Community Land Model (CLM-CNP) show that inclusion of P coupling leads to a smaller CO2 fertilization effect and warming-induced CO2 release from tropical ecosystems, but there are important uncertainties in the P model, and improvements are limited by a dearth of data. Sensitivity analysis identifies the relative importance of P cycle parameters in determining P availability and P limitation, and thereby helps to define the critical measurements to make in field campaigns and manipulative experiments. To improve estimates of P supply, parameters that describe maximum amount of labile P in soil and sorption-desorption processes are necessary for modeling the amount of P available for plant uptake. Biochemical mineralization is poorly constrained in the model and will be improved through field observations that link root traits to mycorrhizal activity, phosphatase activity, and root depth distribution. Model representation of P demand by vegetation, which currently is set by fixed stoichiometry and allometric constants, requires a different set of data. Accurate carbon cycle modeling requires accurate parameterization of the photosynthetic machinery: Vc,max and Jmax. Relationships between the photosynthesis parameters and foliar nutrient (N and P) content are being developed, and by including analysis of covariation with other plant traits (e.g., specific leaf area, wood density), we can provide a basis for more dynamic, trait-enabled modeling. With this strong guidance from model sensitivity and uncertainty analysis, field studies are underway in Puerto Rico and Panama to collect model-relevant data on P supply and demand functions. New FACE and soil warming experiments in P-limited ecosystems in subtropical Australia, and tropical Brazil, Puerto Rico, and Panama will provide important benchmarks for the performance of P-enabled models under future conditions.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
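A minimal sketch of the EM iteration for a two-subpopulation Weibull mixture on complete failure times is given below (the censored, nonpostmortem case adds survival-function terms to the likelihood). The starting values, and the use of a bounded scalar search for each shape parameter with the scale profiled out analytically, are implementation choices, not the authors'.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def weibull_logpdf(t, k, lam):
    """Log density of Weibull(shape k, scale lam) at times t > 0."""
    return np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k

def em_mixed_weibull(t, k=(0.8, 3.0), lam=None, w=(0.5, 0.5), iters=200):
    """EM for a 2-subpopulation mixed-Weibull fit to complete failure times."""
    t = np.asarray(t, float)
    k, w = list(k), list(w)
    lam = list(lam) if lam else [t.mean(), 2 * t.mean()]
    for _ in range(iters):
        # E-step: responsibility of each subpopulation for each failure time
        dens = np.array([wj * np.exp(weibull_logpdf(t, kj, lj))
                         for wj, kj, lj in zip(w, k, lam)])
        resp = dens / dens.sum(axis=0)
        # M-step: weighted Weibull MLE per subpopulation; the scale is
        # profiled out analytically given the shape
        for j in range(2):
            r = resp[j]
            w[j] = r.mean()
            scale = lambda s: (np.sum(r * t ** s) / r.sum()) ** (1.0 / s)
            nll = lambda s: -np.sum(r * weibull_logpdf(t, s, scale(s)))
            k[j] = minimize_scalar(nll, bounds=(0.05, 20), method='bounded').x
            lam[j] = scale(k[j])
    return w, k, lam
```

As the abstract notes, each EM sweep cannot decrease the log-likelihood, so running the routine from several initial guesses and keeping the best likelihood guards against the multiple local maxima.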
Experimental analysis of green roof substrate detention characteristics.
Yio, Marcus H N; Stovin, Virginia; Werdin, Jörg; Vesuviano, Gianni
2013-01-01
Green roofs may make an important contribution to urban stormwater management. Rainfall-runoff models are required to evaluate green roof responses to specific rainfall inputs. The roof's hydrological response is a function of its configuration, with the substrate - or growing media - providing both retention and detention of rainfall. The objective of the research described here is to quantify the detention effects due to green roof substrates, and to propose a suitable hydrological modelling approach. Laboratory results from experimental detention tests on green roof substrates are presented. It is shown that detention increases with substrate depth and as a result of increasing substrate organic content. Model structures based on reservoir routing are evaluated, and it is found that a one-parameter reservoir routing model coupled with a parameter that describes the delay to start of runoff best fits the observed data. Preliminary findings support the hypothesis that the reservoir routing parameter values can be defined from the substrate's physical characteristics.
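A minimal sketch of such a model structure, with illustrative parameter values rather than the study's calibrated ones:

```python
import numpy as np

def reservoir_route(net_rain_mm, k=0.2, delay_steps=3):
    """One-parameter reservoir routing (outflow proportional to storage),
    preceded by a pure delay describing the delay to start of runoff."""
    inflow = np.concatenate([np.zeros(delay_steps), net_rain_mm])
    storage, runoff = 0.0, np.zeros_like(inflow)
    for i, p in enumerate(inflow):
        storage += p                 # rainfall enters substrate storage (mm)
        runoff[i] = k * storage      # routed outflow this time step (mm)
        storage -= runoff[i]
    return runoff

print(reservoir_route(np.array([0.0, 2.0, 5.0, 8.0, 3.0, 1.0, 0.0, 0.0])))
```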
AST Combustion Workshop: Diagnostics Working Group Report
NASA Technical Reports Server (NTRS)
Locke, Randy J.; Hicks, Yolanda R.; Hanson, Ronald K.
1996-01-01
A workshop was convened under NASA's Advanced Subsonics Technologies (AST) Program. Many of the principal combustion diagnosticians from industry, academia, and government laboratories were assembled in the Diagnostics/Testing Subsection of this workshop to discuss the requirements and obstacles to the successful implementation of advanced diagnostic techniques to the test environment of the proposed AST combustor. The participants, who represented the major relevant areas of advanced diagnostic methods currently applied to combustion and related fields, first established the anticipated AST combustor flowfield conditions. Critical flow parameters were then examined and prioritized as to their importance to combustor/fuel injector design and manufacture, environmental concerns, and computational interests. Diagnostic techniques were then evaluated in terms of current status, merits and obstacles for each flow parameter. All evaluations are presented in tabular form and recommendations are made on the best-suited diagnostic method to implement for each flow parameter in order of applicability and intrinsic value.
Use of an expansion tube to examine scramjet combustion at hypersonic velocities
NASA Technical Reports Server (NTRS)
Rizkalla, O.; Bakos, R. J.; Pulsonetti, M.; Chinitz, Wallace; Erdos, John I.
1989-01-01
Combustion testing at total enthalpy conditions corresponding to flight Mach numbers in excess of 12 requires the use of impulse facilities. The expansion tube is the only operational facility of its size which can provide these conditions without excessive oxygen dissociation or driver gas contamination. Expansion tube operation is described herein and the operational parameters having the largest impact on its performance are determined. These are: driver-to-intermediate chamber pressure ratio, driver gas molecular weight and specific heat ratio, and driver gas temperature. Increases in the last-named parameter will markedly affect the test section static pressure. Preliminary calibration tests are discussed and test gas conditions which have been achieved are presented. Calculated and experimental test times are compared and the parameters affecting test time are discussed. The direction of future work using this important experimental tool is indicated.
Shah, Neha; Mehta, Tejal; Aware, Rahul; Shetty, Vasant
2017-12-01
The present work aims at studying the process parameters affecting the coating of minitablets (3 mm in diameter) through the Wurster coating process. Minitablets of Naproxen with high drug loading were manufactured using 3 mm multi-tip punches. The release profile of the core pellets (published) and minitablets was compared with that of the marketed formulation. The core formulation of the minitablets was found to show similarity in dissolution profile with the marketed formulation and hence was carried forward for functional coating. Wurster processing was implemented to apply functional coating over the core formulation. Different process parameters were screened and a control strategy was applied for factors significantly affecting the process. A modified Plackett-Burman design was applied for studying the important factors. Based on the significant factors and the minimum level of coating required for functionalization, an optimized process was executed. The final coated batch was evaluated for coating thickness, surface morphology, and drug release.
Generalized Sheet Transition Conditions for a Metascreen—A Fishnet Metasurface
NASA Astrophysics Data System (ADS)
Holloway, Christopher L.; Kuester, Edward F.
2018-05-01
We used a multiple-scale homogenization method to derive generalized sheet transition conditions (GSTCs) for electromagnetic fields at the surface of a metascreen, a metasurface with a "fishnet" structure. These surfaces are characterized by periodically spaced, arbitrarily shaped apertures in an otherwise relatively impenetrable surface. The parameters in these GSTCs are interpreted as effective surface susceptibilities and surface porosities, which are related to the geometry of the apertures that constitute the metascreen. Finally, we emphasize the subtle but important difference between the GSTCs required for metascreens and those required for metafilms (a metasurface with a "cermet" structure, i.e., an array of isolated, non-touching scatterers).
NASA Technical Reports Server (NTRS)
Gray, C. E., Jr.; Snyder, R. E.; Taylor, J. T.; Cires, A.; Fitzgerald, A. L.; Armistead, M. F.
1980-01-01
Preliminary design studies are presented which consider the important parameters in providing 250-knot test velocities at the Aircraft Landing Dynamics Facility. The four major components of this facility are: the hydraulic jet catapult, the test carriage structure, the reaction turning bucket, and the wheels. Using the hydraulic-jet catapult characteristics, a target design point was selected and a carriage structure was sized to meet the strength requirements. The preliminary design results indicate that to attain 250-knot test velocities for a given hydraulic jet catapult system, a carriage mass of 25,424 kg (56,000 lbm) cannot be exceeded.
Ocean observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1998-01-01
Significant accomplishments made during the present reporting period: (1) We expanded our "spectral-matching" algorithm (SMA), for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction and derivation of the ocean's bio-optical parameters, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) A modification to the SMA that does not require detailed aerosol models has been developed. This is important as the requirement for realistic aerosol models has been a weakness of the SMA; and (3) We successfully acquired micro pulse lidar data in a Saharan dust outbreak during ACE-2 in the Canary Islands.
PID Tuning Using Extremum Seeking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Killingsworth, N; Krstic, M
2005-11-15
Although proportional-integral-derivative (PID) controllers are widely used in the process industry, their effectiveness is often limited due to poor tuning. Manual tuning of PID controllers, which requires optimization of three parameters, is a time-consuming task. To remedy this difficulty, much effort has been invested in developing systematic tuning methods. Many of these methods rely on knowledge of the plant model or require special experiments to identify a suitable plant model. Reviews of these methods are given in [1] and the survey paper [2]. However, in many situations a plant model is not known, and it is not desirable to open the process loop for system identification. Thus a method for tuning PID parameters within a closed-loop setting is advantageous. In relay feedback tuning [3]-[5], the feedback controller is temporarily replaced by a relay. Relay feedback causes most systems to oscillate, thus determining one point on the Nyquist diagram. Based on the location of this point, PID parameters can be chosen to give the closed-loop system a desired phase and gain margin. An alternative tuning method, which does not require either a modification of the system or a system model, is unfalsified control [6], [7]. This method uses input-output data to determine whether a set of PID parameters meets performance specifications. An adaptive algorithm is used to update the PID controller based on whether or not the controller falsifies a given criterion. The method requires a finite set of candidate PID controllers that must be initially specified [6]. Unfalsified control for an infinite set of PID controllers has been developed in [7]; this approach requires a carefully chosen input signal [8]. Yet another model-free PID tuning method that does not require opening of the loop is iterative feedback tuning (IFT). IFT iteratively optimizes the controller parameters with respect to a cost function derived from the output signal of the closed-loop system, see [9]. This method is based on the performance of the closed-loop system during a step response experiment [10], [11]. In this article we present a method for optimizing the step response of a closed-loop system consisting of a PID controller and an unknown plant with a discrete version of extremum seeking (ES). Specifically, ES is used to minimize a cost function similar to that used in [10], [11], which quantifies the performance of the PID controller. ES, a non-model-based method, iteratively modifies the arguments (in this application the PID parameters) of a cost function so that the output of the cost function reaches a local minimum or local maximum. In the next section we apply ES to PID controller tuning. We illustrate this technique through simulations comparing the effectiveness of ES to other PID tuning methods. Next, we address the importance of the choice of cost function and consider the effect of controller saturation. Furthermore, we discuss the choice of ES tuning parameters. Finally, we offer some conclusions.
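As a rough illustration of the discrete ES loop described above, the toy example below tunes PID gains against a hypothetical first-order plant using an integral-squared-error cost; the plant, cost function, and ES constants are stand-ins, not the article's:

```python
import numpy as np

def step_cost(gains, dt=0.01, n=400):
    """ISE cost of the closed-loop unit-step response for a first-order plant."""
    Kp, Ki, Kd = gains
    y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(n):
        e = 1.0 - y
        integ += e * dt
        u = Kp*e + Ki*integ + Kd*(e - e_prev)/dt
        e_prev = e
        y += dt * (u - y)          # plant: dy/dt = u - y
        cost += e*e * dt
    return cost

theta = np.array([1.0, 0.5, 0.0])                 # initial PID gains
a, gamma, J_bar = 0.05, 0.5, 0.0                  # dither size, ES gain, cost filter
for k in range(400):
    d = a * np.sin(k + 2*np.pi*np.arange(3)/3)    # phase-shifted sinusoidal dithers
    J = step_cost(theta + d)
    J_bar = 0.9*J_bar + 0.1*J                     # washout: track the average cost
    theta = np.maximum(theta - gamma*(J - J_bar)*d, 0.0)  # demodulated ES update
print(theta, step_cost(theta))
```

The demodulation step (cost deviation times dither) acts, on average, like a gradient-descent step on the cost surface, which is the essence of extremum seeking.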
Quantitative model validation of manipulative robot systems
NASA Astrophysics Data System (ADS)
Kartowisastro, Iman Herwidiana
This thesis is concerned with applying the distortion quantitative validation technique to a robot manipulative system with revolute joints. Using the distortion technique to validate a model quantitatively, the model parameter uncertainties are taken into account in assessing the faithfulness of the model, and this approach is relatively more objective than the commonly used visual comparison method. The industrial robot is represented by the TQ MA2000 robot arm. Details of the mathematical derivation of the distortion technique are given, which explain the required distortion of the constant parameters within the model and the assessment of model adequacy. Due to the complexity of a robot model, only the first three degrees of freedom are considered, where all links are assumed rigid. The modelling involves the Newton-Euler approach to obtain the dynamics model, and the Denavit-Hartenberg convention is used throughout the work. The conventional feedback control system is used in developing the model. The system's behavior under parameter changes is investigated because some parameters are redundant; this analysis is important so that the most important parameters to be distorted can be selected, and it leads to a new term, the fundamental parameters. The transfer function approach has been chosen to validate an industrial robot quantitatively against the measured data due to its practicality. Initially, the assessment of the model fidelity criterion indicated that the model was not capable of explaining the transient record in terms of the model parameter uncertainties. Further investigations led to significant improvements of the model and better understanding of the model properties. After several improvements in the model, the fidelity criterion obtained was almost satisfied. Although the fidelity criterion is slightly less than unity, it has been shown that the distortion technique can be applied to a robot manipulative system. Using the validated model, the importance of friction terms in the model was highlighted with the aid of the partition control technique. It was also shown that the conventional feedback control scheme was insufficient for a robot manipulative system due to the high nonlinearity inherent in the robot manipulator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavgorodnya, Oleksandra; Shamshina, Julia L.; Bonner, Jonathan R.
2017-04-27
Here, we report the correlation between key solution properties and spinability of chitin from the ionic liquid (IL) 1-ethyl-3-methylimidazolium acetate ([C2mim][OAc]), and the similarities and differences to electrospinning solutions of non-ionic polymers in volatile organic compounds (VOCs). We found that when electrospinning is conducted from ILs, conductivity and surface tension are not the key parameters regulating spinability, while solution viscosity and polymer concentration are. Contrarily, for electrospinning of polymers from VOCs, solution conductivity and viscosity have been reported to be among some of the most important factors controlling fiber formation. For chitin electrospun from [C2mim][OAc], we found both a critical chitin concentration required for continuous fiber formation (>0.20 wt%) and a required viscosity for the spinning solution (between ca. 450 and 1500 cP). The high viscosities of the biopolymer-IL solutions made it possible to electrospin solutions with low (less than 1 wt%) polymer concentration and produce thin fibers without the need to adjust the electrospinning parameters. These results suggest new prospects for the control of fiber architecture in non-woven mats, which is crucial for materials performance.
Effect of Common Cryoprotectants on Critical Warming Rates and Ice Formation in Aqueous Solutions
Hopkins, Jesse B.; Badeau, Ryan; Warkentin, Matthew; Thorne, Robert E.
2012-01-01
Ice formation on warming is of comparable or greater importance to ice formation on cooling in determining survival of cryopreserved samples. Critical warming rates required for ice-free warming of vitrified aqueous solutions of glycerol, dimethyl sulfoxide, ethylene glycol, polyethylene glycol 200 and sucrose have been measured for warming rates of order 10 to 10⁴ K/s. Critical warming rates are typically one to three orders of magnitude larger than critical cooling rates. Warming rates vary strongly with cooling rates, perhaps due to the presence of small ice fractions in nominally vitrified samples. Critical warming and cooling rate data spanning orders of magnitude in rates provide rigorous tests of ice nucleation and growth models and their assumed input parameters. Current models with current best estimates for input parameters provide a reasonable account of critical warming rates for glycerol solutions at high concentrations/low rates, but overestimate both critical warming and cooling rates by orders of magnitude at lower concentrations and larger rates. In vitrification protocols, minimizing concentrations of potentially damaging cryoprotectants while minimizing ice formation will require ultrafast warming rates, as well as fast cooling rates to minimize the required warming rates. PMID:22728046
Psychosocial issues in space: future challenges.
Sandal, G M
2001-06-01
As the duration of space flights increases and crews become more heterogeneous, psychosocial factors are likely to play an increasingly important role in determining mission success. The operations of the International Space Station and the planning of interplanetary missions represent important future challenges for how to select, train and monitor crews. So far, empirical evidence about psychological factors in space is based on simulations and personnel in analog environments (i.e. polar expeditions, submarines). It is apparent that attempts to transfer from these environments to space require a thorough analysis of the human behavior specific to the fields. Recommendations for research include the effects of multi-nationality on crew interaction, the development of tension within crews and between crews and Mission Control, and the prediction of critical phases in adaptation over time. Selection of interpersonally compatible crews, pre-mission team training, and implementation of tools for self-monitoring of psychological parameters can help ensure that crew performance is maximized as mission requirements change.
Tethered Satellites as an Enabling Platform for Operational Space Weather Monitoring Systems
NASA Technical Reports Server (NTRS)
Gilchrist, Brian E.; Krause, Linda Habash; Gallagher, Dennis Lee; Bilen, Sven Gunnar; Fuhrhop, Keith; Hoegy, Walt R.; Inderesan, Rohini; Johnson, Charles; Owens, Jerry Keith; Powers, Joseph;
2013-01-01
Tethered satellites offer the potential to be an important enabling technology to support operational space weather monitoring systems. Space weather "nowcasting" and forecasting models rely on assimilation of near-real-time (NRT) space environment data to provide warnings for storm events and deleterious effects on the global societal infrastructure. Typically, these models are initialized by a climatological model to provide "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model closer toward the observed state (e.g., via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative semi-empirical physics-based forward-prediction calculations. Many challenges are associated with the development of an operational system, from the top-level architecture (e.g., the required space weather observatories to meet the spatial and temporal requirements of these models) down to the individual instruments capable of making the NRT measurements. This study focuses on the latter challenge: we present some examples of how tethered satellites (from 100s of m to 20 km) are uniquely suited to address certain shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include long baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements are presented for two examples of space environment observables.
Telerobotic control of a mobile coordinated robotic server. M.S. Thesis Annual Technical Report
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
The annual report on telerobotic control of a mobile coordinated robotic server is presented. The goal of this effort is to develop advanced control methods for flexible space manipulator systems. As such, an adaptive fuzzy logic controller was developed in which model structure as well as parameter constraints are not required for compensation. The work builds upon previous work on fuzzy logic controllers. Fuzzy logic controllers have been growing in importance in the field of automatic feedback control. Hardware controllers using fuzzy logic have become available as an alternative to traditional PID controllers. Software has also been introduced to aid in the development of fuzzy logic rule-bases. The advantages of using fuzzy logic controllers include the ability to merge the experience and intuition of expert operators into the rule-base and the fact that a model of the system is not required to construct the controller. A drawback of the classical fuzzy logic controller, however, is the many parameters that need to be tuned off-line prior to application in the closed loop. In this report, an adaptive fuzzy logic controller is developed requiring no system model or model structure. The rule-base is defined to approximate a state-feedback controller while a second fuzzy logic algorithm varies, on-line, the parameters of the defining controller. Results indicate the approach is viable for on-line adaptive control of systems when the model is too complex or uncertain for application of other more classical control techniques.
Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.
2003-01-01
Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.
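For concreteness, a minimal sketch of the PSS idea follows, using one common scaling (sensitivity times the parameter's standard deviation, as a percentage of the prediction); the VOII computation, which also needs the full parameter correlation structure, is omitted, and all values are illustrative:

```python
import numpy as np

def prediction_scaled_sensitivity(dz_db, b_sd, z):
    """PSS: prediction sensitivity to each parameter, scaled by the parameter's
    standard deviation and expressed as a percentage of the prediction z."""
    return dz_db * b_sd * 100.0 / z

dz_db = np.array([2.0e-3, -4.0e-1, 7.0e-2])  # d(prediction)/d(parameter)
b_sd = np.array([50.0, 0.3, 2.0])            # parameter standard deviations
print(prediction_scaled_sensitivity(dz_db, b_sd, z=1.0))
# the largest |PSS| values flag the parameters most important to the prediction
```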
NASA Astrophysics Data System (ADS)
Yang, Geer; Zhang, Aili; Xu, Lisa X.; He, Xiaoming
2009-06-01
In this study, a set of models for predicting the diffusion-limited ice nucleation and growth inside biological cells were established. Both the heterogeneous and homogeneous nucleation mechanisms were considered in the models. Molecular mobility, including viscosity and the mutual diffusion coefficient of aqueous cryoprotectant (i.e., glycerol here) solutions, was estimated using models derived from the free volume theory for glass transition, which makes it possible to predict the two most important physical properties (i.e., viscosity and mutual diffusion coefficient) over the wide ranges of temperature and concentration encountered in cryopreservation. After being verified using experimental data, the models were used to predict the critical cooling rate (defined as the cooling rate required so that the crystallized volume is less than 0.1% of the cell volume) as a function of the initial glycerol concentration in a number of cell types with different sizes. For slow freezing, it was found that the required critical cooling rate is cell-type dependent, with influences from cell size and the ice nucleation and water transport parameters. In general, the critical cooling rate does not change significantly with the initial glycerol concentration used and tends to be higher for smaller cells. For vitrification, the required critical cooling rate does change significantly with the initial glycerol concentration used and tends to decrease with the decrease in cell size. However, the required critical cooling rate can be similar for cells with very different sizes. It was further found that the thermodynamic and kinetic parameters for intracellular ice formation associated with different cells, rather than the cell size per se, significantly affect the critical cooling rates required for vitrification. For all cell types, it was found that homogeneous nucleation dominates at ultrafast cooling rates and/or high glycerol concentrations, whereas heterogeneous nucleation becomes important only during slow freezing with a low initial glycerol concentration (<1.5-2 M), particularly for large cells such as mouse oocytes.
Tomblin Murphy, Gail; Birch, Stephen; MacKenzie, Adrian; Rigby, Janet
2016-12-12
As part of efforts to inform the development of a global human resources for health (HRH) strategy, a comprehensive methodology for estimating HRH supply and requirements was described in a companion paper. The purpose of this paper is to demonstrate the application of that methodology, using data publicly available online, to simulate the supply of and requirements for midwives, nurses, and physicians in the 32 high-income member countries of the Organisation for Economic Co-operation and Development (OECD) up to 2030. A model combining a stock-and-flow approach to simulate the future supply of each profession in each country, adjusted according to levels of HRH participation and activity, and a needs-based approach to simulate future HRH requirements was used. Most of the data to populate the model were obtained from the OECD's online indicator database. Other data were obtained from targeted internet searches and documents gathered as part of the companion paper. Relevant recent measures for each model parameter were found for at least one of the included countries. In total, 35% of the desired current data elements were found; assumed values were used for the other current data elements. Multiple scenarios were used to demonstrate the sensitivity of the simulations to different assumed future values of model parameters. Depending on the assumed future values of each model parameter, the simulated HRH gaps across the included countries could range from shortfalls of 74 000 midwives, 3.2 million nurses, and 1.2 million physicians to surpluses of 67 000 midwives, 2.9 million nurses, and 1.0 million physicians by 2030. Despite important gaps in the data publicly available online and the short time available to implement it, this paper demonstrates the basic feasibility of a more comprehensive, population needs-based approach to estimating HRH supply and requirements than most of those currently being used. HRH planners in individual countries, working with their respective stakeholder groups, would have more direct access to data on the relevant planning parameters and would thus be in an even better position to implement such an approach.
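The basic shape of such a simulation is simple; the sketch below combines a stock-and-flow supply projection with a needs-based requirement, using invented numbers in place of the OECD data:

```python
def hrh_gap(stock, entrants, exit_rate, years,
            participation, activity,
            population, services_needed_per_capita, services_per_fte):
    """Stock-and-flow supply vs needs-based requirement for one profession."""
    for _ in range(years):
        stock += entrants - exit_rate * stock       # annual inflows and outflows
    supply_fte = stock * participation * activity   # headcount -> effective FTEs
    required_fte = population * services_needed_per_capita / services_per_fte
    return supply_fte - required_fte                # >0 surplus, <0 shortfall

print(hrh_gap(stock=300_000, entrants=9_000, exit_rate=0.03, years=14,
              participation=0.80, activity=0.90,
              population=60e6, services_needed_per_capita=4.0,
              services_per_fte=900.0))
```

The sensitivity of the simulated gap to each assumed parameter, the focus of the paper's multiple scenarios, can be explored simply by re-running this function over ranges of inputs.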
Biomarkers for optimal requirements of amino acids by animals and humans.
Lin, Gang; Liu, Chuang; Wang, Taiji; Wu, Guoyao; Qiao, Shiyan; Li, Defa; Wang, Junjun
2011-06-01
Amino acids are building blocks of proteins and key regulators of nutrient metabolism in cells. However, excessive intake of amino acids can be toxic to the body. Therefore, it is important to precisely determine amino acid requirements by organisms. To date, none of the methods is completely satisfactory to generate comprehensive data on amino acid requirements of animals or humans. Because of many influencing factors, amino acid requirements remain a complex and controversial issue in nutrition that warrants further investigations. Benefiting from the rapid advances in the emerging omics technologies and bioinformatics, biomarker discovery shows great potential in obtaining in-depth understanding of regulatory networks in protein metabolism. This review summarizes the current approaches to assess amino acid requirements of animals and humans, as well as the recent development of biomarkers as potentially functional parameters for recommending requirements of individual amino acids in health and disease. Identification of biomarkers in plasma or serum, which is a noninvasive approach, holds great promise in rapidly advancing the field of protein nutrition.
Folmsbee, Martha; Lentine, Kerry Roche; Wright, Christine; Haake, Gerhard; Mcburnie, Leesa; Ashtekar, Dilip; Beck, Brian; Hutchison, Nick; Okhio-Seaman, Laura; Potts, Barbara; Pawar, Vinayak; Windsor, Helena
2014-01-01
Mycoplasma are bacteria that can penetrate 0.2 and 0.22 μm rated sterilizing-grade filters and even some 0.1 μm rated filters. Primary applications for mycoplasma filtration include large scale mammalian and bacterial cell culture media and serum filtration. The Parenteral Drug Association recognized the absence of standard industry test parameters for testing and classifying 0.1 μm rated filters for mycoplasma clearance and formed a task force to formulate consensus test parameters. The task force established some test parameters by common agreement, based upon general industry practices, without the need for additional testing. However, the culture medium and incubation conditions for generating test mycoplasma cells varied from filter company to filter company and were recognized as a serious gap by the task force. Standardization of the culture medium and incubation conditions required collaborative testing in both commercial filter company laboratories and in an independent laboratory (Table I). The use of consensus test parameters will facilitate the ultimate cross-industry goal of standardization of 0.1 μm filter claims for mycoplasma clearance. However, it is still important to recognize that filter performance will depend on the actual conditions of use. Therefore end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. Mycoplasma are small bacteria that have the ability to penetrate sterilizing-grade filters. Filtration of large-scale mammalian and bacterial cell culture media is an example of an industry process where effective filtration of mycoplasma is required. The Parenteral Drug Association recognized the absence of industry standard test parameters for evaluating mycoplasma clearance filters by filter manufacturers and formed a task force to formulate such a consensus among manufacturers. The use of standardized test parameters by filter manufacturers, including the preparation of the culture broth, will facilitate the end user's evaluation of the mycoplasma clearance claims provided by filter vendors. However, it is still important to recognize that filter performance will depend on the actual conditions of use; therefore end users should consider, using a risk-based approach, whether process-specific evaluation of filter performance may be warranted for their application. © PDA, Inc. 2014.
Coaxial twin-shaft magnetic fluid seals applied in vacuum wafer-handling robot
NASA Astrophysics Data System (ADS)
Cong, Ming; Wen, Haiying; Du, Yu; Dai, Penglei
2012-07-01
Compared with traditional mechanical seals, magnetic fluid seals have the unique characteristics of high airtightness, minimal friction torque requirements, freedom from pollution and long life-span, and are widely used in vacuum robots. With the rapid development of Integrated Circuit (IC) manufacturing, there is a stringent requirement for sealing wafer-handling robots working in a vacuum environment. The structural parameters of magnetic fluid seals are very important in vacuum robot design. This paper presents a magnetic fluid seal device for such a robot. Firstly, the seal differential pressure formulas of the magnetic fluid seal are deduced according to the theory of ferrohydrodynamics, which indicate that the magnetic field gradient in the sealing gap determines the seal capacity of the magnetic fluid seal. Secondly, the magnetic analysis model of the twin-shaft magnetic fluid seal structure is established. By analyzing the magnetic field distribution of the dual magnetic fluid seal, the optimal value ranges of the important parameters affecting the seal differential pressure are obtained, including the parameters of the permanent magnetic ring, the magnetic pole tooth, the outer shaft, the outer shaft sleeve, and the axial relative position of the two permanent magnetic rings. A wafer-handling robot equipped with coaxial twin-shaft magnetic fluid rotary seals and a bellows seal is devised, and an optimized twin-shaft magnetic fluid seals experimental platform is built. Test results show that when the speed of the two rotational shafts ranges from 0-500 r/min, the maximum burst pressure is about 0.24 MPa. Magnetic fluid rotary seals can provide satisfactory performance in the application of wafer-handling robots. The proposed coaxial twin-shaft magnetic fluid rotary seal provides guidance for the design of high-speed vacuum robots.
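The per-stage capacity implied by ferrohydrodynamic theory is often approximated as Δp ≈ μ₀MsΔH per pole tooth; the back-of-envelope sketch below uses illustrative values, not the paper's design figures:

```python
import math

MU0 = 4e-7 * math.pi      # vacuum permeability (T*m/A)
Ms = 3.2e4                # ferrofluid saturation magnetization (A/m), assumed
dH = 8e5                  # field drop across the gap under one tooth (A/m), assumed
teeth = 10                # number of pole teeth (seal stages), assumed

dp_total = teeth * MU0 * Ms * dH   # approximate total seal capacity (Pa)
print(f"{dp_total/1e6:.2f} MPa")   # same order as the measured ~0.24 MPa
```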
Samsudin, Hayati; Auras, Rafael; Mishra, Dharmendra; Dolan, Kirk; Burgess, Gary; Rubino, Maria; Selke, Susan; Soto-Valdez, Herlinda
2018-01-01
Migration studies of chemicals from contact materials have been widely conducted due to their importance in determining the safety and shelf life of a food product in its package. The US Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) require this safety assessment for food contact materials. So, migration experiments are theoretically designed and experimentally conducted to obtain data that can be used to assess the kinetics of chemical release. In this work, a parameter estimation approach was used to review and to determine the mass transfer partition and diffusion coefficients governing the migration process of eight antioxidants from poly(lactic acid), PLA, based films into water/ethanol solutions at temperatures between 20 and 50°C. Scaled sensitivity coefficients were calculated to assess simultaneous estimation of a number of mass transfer parameters. An optimal experimental design approach was performed to show the importance of properly designing a migration experiment. Additional parameters also provide better insights on migration of the antioxidants. For example, the partition coefficients could be better estimated using data from the early part of the experiment instead of at the end. Experiments could be conducted for shorter periods of time, saving time and resources. Diffusion coefficients of the eight antioxidants from PLA films were between 0.2 and 19×10⁻¹⁴ m²/s at ~40°C. The use of the parameter estimation approach provided additional and useful insights about the migration of antioxidants from PLA films. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.
Baranwal, Vipul K; Pandey, Ram K; Singh, Om P
2014-01-01
We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ₀, γ₁, γ₂, … and auxiliary functions H₀(x), H₁(x), H₂(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.
NASA Technical Reports Server (NTRS)
Steinfeld, J. I.; Foy, B.; Hetzler, J.; Flannery, C.; Klaassen, J.; Mizugai, Y.; Coy, S.
1990-01-01
The spectroscopy of small to medium-size polyatomic molecules can be extremely complex, especially in higher-lying overtone and combination vibrational levels. The high density of levels also complicates the understanding of inelastic collision processes, which is required to model energy transfer and collision broadening of spectral lines. Both of these problems can be addressed by double-resonance spectroscopy, i.e., time-resolved pump-probe measurements using microwave, infrared, near-infrared, and visible-wavelength sources. Information on excited-state spectroscopy, transition moments, inelastic energy transfer rates and propensity rules, and pressure-broadening parameters may be obtained from such experiments. Examples are given for several species of importance in planetary atmospheres, including ozone, silane, ethane, and ammonia.
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
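A minimal numeric sketch of CSS and PCC from a weighted Jacobian (synthetic numbers, not the RZWQM application) may make the complementarity concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(50, 3))          # d(sim)/d(param): 50 observations x 3 params
J[:, 2] = 0.95 * J[:, 1]              # make params 2 and 3 nearly redundant
b = np.array([0.30, 1.4, 1.2])        # parameter values used for scaling
w = np.ones(50)                       # observation weights (1/variance)

Js = J * b                            # dimensionless scaled sensitivities
css = np.sqrt(np.mean(w[:, None] * Js**2, axis=0))   # composite scaled sensitivity

cov = np.linalg.inv(J.T @ (w[:, None] * J))          # linearized parameter covariance
pcc = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
print(css)    # individual importance of each parameter
print(pcc)    # |PCC| near 1.00 flags interdependent parameter pairs
```

Here parameters 2 and 3 show similar CSS values, and only the PCC matrix reveals that they cannot be estimated independently, which is the distinction the abstract draws.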
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou Fengji; Hogg, David W.; Goodman, Jonathan
Markov chain Monte Carlo (MCMC) proves to be powerful for Bayesian inference and in particular for exoplanet radial velocity fitting because MCMC provides more statistical information and makes better use of data than common approaches like chi-square fitting. However, the nonlinear density functions encountered in these problems can make MCMC time-consuming. In this paper, we apply an ensemble sampler respecting affine invariance to orbital parameter extraction from radial velocity data. This new sampler has only one free parameter, and does not require much tuning for good performance, which is important for automatization. The autocorrelation time of this sampler is approximately the same for all parameters and far smaller than Metropolis-Hastings, which means it requires many fewer function calls to produce the same number of independent samples. The affine-invariant sampler speeds up MCMC by hundreds of times compared with Metropolis-Hastings in the same computing situation. This novel sampler would be ideal for projects involving large data sets such as statistical investigations of planet distribution. The biggest obstacle to ensemble samplers is the existence of multiple local optima; we present a clustering technique to deal with local optima by clustering based on the likelihood of the walkers in the ensemble. We demonstrate the effectiveness of the sampler on real radial velocity data.
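An affine-invariant ensemble sampler of this kind is implemented in the emcee package; a minimal sketch for a toy circular-orbit radial-velocity model (synthetic data, flat box priors, all parameter values invented) might look like the following:

```python
import numpy as np
import emcee

# Synthetic radial-velocity data: semi-amplitude 30 m/s, period 17 days.
t = np.linspace(0, 100, 40)
rv_obs = 30*np.sin(2*np.pi*t/17.0) + np.random.default_rng(3).normal(0, 2, t.size)

def log_prob(theta):
    K, P, sigma = theta
    if not (0 < K < 100 and 1 < P < 50 and 0 < sigma < 10):
        return -np.inf                      # flat priors inside the box
    resid = rv_obs - K*np.sin(2*np.pi*t/P)
    return -0.5*np.sum(resid**2/sigma**2 + np.log(2*np.pi*sigma**2))

nwalkers, ndim = 32, 3
p0 = np.array([28.0, 17.2, 2.5]) + 1e-3*np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000)
samples = sampler.get_chain(discard=500, flat=True)
print(samples.mean(axis=0))
```

Note the single free algorithmic parameter mentioned in the abstract (the stretch-move scale) is left at its default here; no per-parameter proposal tuning is needed.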
Electric dipole moments in natural supersymmetry
NASA Astrophysics Data System (ADS)
Nakai, Yuichiro; Reece, Matthew
2017-08-01
We discuss electric dipole moments (EDMs) in the framework of CP-violating natural supersymmetry (SUSY). Recent experimental results have significantly tightened constraints on the EDMs of electrons and of mercury, and substantial further progress is expected in the near future. We assess how these results constrain the parameter space of natural SUSY. In addition to our discussion of SUSY, we provide a set of general formulas for two-loop fermion EDMs, which can be applied to a wide range of models of new physics. In the SUSY context, the two-loop effects of stops and charginos respectively constrain the phases of A_t μ and M_2 μ to be small in the natural part of parameter space. If the Higgs mass is lifted to 125 GeV by a new tree-level superpotential interaction and soft term with CP-violating phases, significant EDMs can arise from the two-loop effects of W bosons and tops. We compare the bounds arising from EDMs to those from other probes of new physics including colliders, b → sγ, and dark matter searches. Importantly, improvements in reach not only constrain higher masses, but require the phases to be significantly smaller in the natural parameter space at low mass. The required smallness of phases sharpens the CP problem of natural SUSY model building.
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, Steven M.
2001-01-01
Since most advanced material systems (for example metallic-, polymer-, and ceramic-based systems) being currently researched and evaluated are for high-temperature airframe and propulsion system applications, the required constitutive models must account for both reversible and irreversible time-dependent deformations. Furthermore, since an integral part of continuum-based computational methodologies (be they microscale- or macroscale-based) is an accurate and computationally efficient constitutive model to describe the deformation behavior of the materials of interest, extensive research efforts have been made over the years on the phenomenological representations of constitutive material behavior in the inelastic analysis of structures. From a more recent and comprehensive perspective, the NASA Glenn Research Center in conjunction with the University of Akron has emphasized concurrently addressing three important and related areas: that is, 1) Mathematical formulation; 2) Algorithmic developments for updating (integrating) the external (e.g., stress) and internal state variables; 3) Parameter estimation for characterizing the model. This concurrent perspective to constitutive modeling has enabled the overcoming of the two major obstacles to fully utilizing these sophisticated time-dependent (hereditary) constitutive models in practical engineering analysis. These obstacles are: 1) Lack of efficient and robust integration algorithms; 2) Difficulties associated with characterizing the large number of required material parameters, particularly when many of these parameters lack obvious or direct physical interpretations.
Image processing for IMRT QA dosimetry.
Zaini, Mehran R; Forest, Gary J; Loshek, David D
2005-01-01
We have automated the determination of the placement location of the dosimetry ion chamber within intensity-modulated radiotherapy (IMRT) fields, as part of streamlining the entire IMRT quality assurance process. This paper describes the mathematical image-processing techniques to arrive at the appropriate measurement locations within the planar dose maps of the IMRT fields. A specific spot within the found region is identified based on its flatness, radiation magnitude, location, area, and the avoidance of the interleaf spaces. The techniques used include applying a Laplacian, dilation, erosion, region identification, and measurement point selection based on three parameters: the size of the erosion operator, the gradient, and the importance of the area of a region versus its magnitude. These three parameters are adjustable by the user. However, the first one requires tweaking on extremely rare occasions, the gradient requires rare adjustments, and the last parameter needs occasional fine-tuning. This algorithm has been tested in over 50 cases. In about 5% of cases, the algorithm does not find a measurement point due to the extremely steep and narrow regions within the fluence maps. In such cases, our code allows manual selection of a point, which is also difficult, since the fluence map does not lend itself to an appropriate measurement point selection.
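A rough sketch of this style of pipeline with scipy.ndimage follows (synthetic dose map; the curvature threshold and the region score are placeholders for the paper's three user-adjustable parameters):

```python
import numpy as np
from scipy import ndimage

# Synthetic smooth "dose map" standing in for a planar IMRT dose distribution.
dose = ndimage.gaussian_filter(np.random.default_rng(4).random((64, 64)), 6)

lap = np.abs(ndimage.laplace(dose))                # curvature of the dose map
flat = lap < np.percentile(lap, 30)                # low curvature -> "flat"
flat = ndimage.binary_erosion(flat, iterations=2)  # drop thin, steep fringes
flat = ndimage.binary_dilation(flat, iterations=1) # restore interior area
labels, n = ndimage.label(flat)                    # identify candidate regions

# Score each region by area times mean dose (a stand-in for the paper's
# area-versus-magnitude trade-off), then take its highest-dose pixel.
best = max(range(1, n + 1),
           key=lambda r: (labels == r).sum() * dose[labels == r].mean())
point = np.unravel_index(np.argmax(np.where(labels == best, dose, -np.inf)),
                         dose.shape)
print(point)
```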
Surface Pre-treatment for Thermally Sprayed ZnAl15 Coatings
NASA Astrophysics Data System (ADS)
Bobzin, K.; Öte, M.; Knoch, M. A.
2017-02-01
Pre-treatment of substrates is an important step in thermal spraying. It is widely accepted that mechanical interlocking is the dominant adhesion mechanism for most substrate-coating combinations. To prevent premature failure, minimum coating adhesion strength, surface preparation grades, and roughness parameters are often specified. For corrosion-protection coatings for offshore wind turbines, an adhesion strength ≥ 5 MPa is commonly assumed to ensure adhesion over service lifetime. In order to fulfill this requirement, Rz > 80 µm and a preparation grade of Sa3 are common specifications. In this study, the necessity of these requirements is investigated using the widely used combination of twin-wire arc-sprayed ZnAl15 on S355J2 + N as a test case. By using different blasting media and parameters, the correlation between coating adhesion and roughness parameters is analyzed. The adhesion strength of these systems is measured using a test method allowing measurements on real parts. The results are compared to DIN EN 582:1993, the European equivalent of ASTM-C633. In another series of experiments, the influence of surface pre-treatment grades Sa2.5 and Sa3 is considered. By combining the results of these three sets of experiments, a guideline for surface pre-treatment and adhesion testing on real parts is proposed for the considered system.
NASA Astrophysics Data System (ADS)
Uslu, Faruk Sukru
2017-07-01
Oil spills on the ocean surface cause serious environmental, political, and economic problems. Therefore, these catastrophic threats to marine ecosystems require detection and monitoring. Hyperspectral sensors are powerful optical sensors used for oil spill detection with the help of detailed spectral information of materials. However, the huge amounts of data in hyperspectral imaging (HSI) require fast and accurate computation methods for detection problems. Support vector data description (SVDD) is one of the most suitable methods for detection, especially for large data sets. Nevertheless, the selection of kernel parameters is one of the main problems in SVDD. This paper presents a method, inspired by ensemble learning, for improving the performance of SVDD without tuning its kernel parameters. Additionally, a classifier selection technique is proposed to obtain further gains. The proposed approach also aims to solve the small sample size problem, which is very important for processing high-dimensional data in HSI. The algorithm is applied to two HSI data sets for detection problems. In the first HSI data set, various targets are detected; in the second HSI data set, oil spill detection in situ is realized. The experimental results demonstrate the feasibility and performance improvement of the proposed algorithm for oil spill detection problems.
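The idea of averaging detectors across kernel settings can be sketched with the closely related one-class SVM (SVDD with an RBF kernel is equivalent to it); the data below are synthetic stand-ins for pixel spectra, not the HSI sets used in the paper:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
background = rng.normal(0, 1, (300, 20))          # training spectra (background)
test = np.vstack([rng.normal(0, 1, (50, 20)),     # background-like test pixels
                  rng.normal(2.5, 1, (5, 20))])   # anomalous (target-like) pixels

# Ensemble over kernel widths: no single gamma needs to be tuned, since the
# decision scores are simply averaged across detectors.
scores = np.zeros(test.shape[0])
for gamma in [0.005, 0.02, 0.08, 0.3]:
    clf = OneClassSVM(kernel='rbf', gamma=gamma, nu=0.1).fit(background)
    scores += clf.decision_function(test)         # higher = more background-like
print(np.argsort(scores)[:5])                     # lowest scores flag anomalies
```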
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method for making inferences about different versions of the models that assume different parameters cause the observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
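The core estimator, the marginal likelihood as a prior-weighted Monte-Carlo average of the likelihood, is easy to state; the sketch below uses a toy Gaussian likelihood in place of the LBA and omits the GPU acceleration:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(0.3, 1.0, size=50)        # observed data (synthetic)

def log_like(mu):
    return -0.5*np.sum((data - mu)**2 + np.log(2*np.pi))

def log_marginal(prior_draws):
    """log p(D|M) via brute-force Monte-Carlo integration over the prior."""
    ll = np.array([log_like(m) for m in prior_draws])
    return np.logaddexp.reduce(ll) - np.log(ll.size)   # stable log-mean-exp

m0 = log_marginal(rng.normal(0.0, 0.1, 100_000))   # model 0: mu tightly near 0
m1 = log_marginal(rng.normal(0.0, 1.0, 100_000))   # model 1: mu diffuse
print("log Bayes factor (M1 vs M0):", m1 - m0)
```

No parameter estimation is needed: each model is scored directly by averaging its likelihood over its own prior, which is the property of Bayes factors the abstract highlights.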
Combining cluster number counts and galaxy clustering
NASA Astrophysics Data System (ADS)
Lacasa, Fabien; Rosenfeld, Rogerio
2016-08-01
The abundance of clusters and the clustering of galaxies are two of the important cosmological probes for current and future large scale surveys of galaxies, such as the Dark Energy Survey. In order to combine them one has to account for the fact that they are not independent quantities, since they probe the same density field. It is important to develop a good understanding of their correlation in order to extract parameter constraints. We present a detailed modelling of the joint covariance matrix between cluster number counts and the galaxy angular power spectrum. We employ the framework of the halo model complemented by a Halo Occupation Distribution model (HOD). We demonstrate the importance of accounting for non-Gaussianity to produce accurate covariance predictions. Indeed, we show that the non-Gaussian covariance becomes dominant at small scales, low redshifts or high cluster masses. We discuss in particular the case of the super-sample covariance (SSC), including the effects of galaxy shot-noise, halo second order bias and non-local bias. We demonstrate that the SSC obeys mathematical inequalities and positivity. Using the joint covariance matrix and a Fisher matrix methodology, we examine the prospects of combining these two probes to constrain cosmological and HOD parameters. We find that the combination indeed results in noticeably better constraints, with improvements of order 20% on cosmological parameters compared to the best single probe, and even greater improvement on HOD parameters, with reduction of error bars by a factor 1.4-4.8. This happens in particular because the cross-covariance introduces a synergy between the probes on small scales. We conclude that accounting for non-Gaussian effects is required for the joint analysis of these observables in galaxy surveys.
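Schematically, once the joint covariance (including the cross-covariance block between the probes) is built, the combined Fisher forecast takes only a few lines; all numbers below are illustrative:

```python
import numpy as np

# Rows: stacked observables from both probes (e.g. counts bins + power-spectrum
# bins); columns: parameters. All entries are invented for illustration.
dmu = np.array([[1.0, 0.2],
                [0.5, 1.0],
                [0.3, 0.8]])
C = np.array([[1.0, 0.1, 0.3],   # joint covariance; the off-diagonal entries
              [0.1, 0.8, 0.2],   # involving row/col 3 couple the two probes
              [0.3, 0.2, 1.5]])

F = dmu.T @ np.linalg.inv(C) @ dmu           # joint Fisher matrix
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma forecasts
print(errors)
```

Zeroing the cross-covariance entries and recomputing shows directly how much the probe correlation, which the paper models in detail, changes the forecast errors.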
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2017-05-01
An important source of uncertainty, which causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. Therefore, identifying, among the numerous physical parameters in atmospheric and oceanic models, a subset of relatively more sensitive and important parameters, and reducing the errors in that subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach in China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in arid and semi-arid regions of China compared to those in northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
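A minimal sketch of the CNOP-P optimization on a toy model (a logistic growth map standing in for LPJ; the bound, cost, and starting points are illustrative): find the parameter perturbation, within a prescribed bound, that maximizes the change in the simulation.

```python
import numpy as np
from scipy.optimize import minimize

def model(p, steps=50):
    """Toy logistic growth trajectory; stands in for the dynamical model."""
    r, K = p
    x, traj = 0.1, []
    for _ in range(steps):
        x = x + r*x*(1 - x/K)
        traj.append(x)
    return np.array(traj)

p0 = np.array([0.3, 1.0])        # reference parameter values
base = model(p0)
delta = 0.05                     # bound on the parameter perturbation

def neg_change(dp):              # maximize simulation change = minimize its negative
    return -np.linalg.norm(model(p0 + dp) - base)

# Several starting points guard against local optima in the nonlinear problem.
starts = [np.array([0.04, 0.0]), np.array([0.0, 0.04]), np.array([-0.03, 0.03])]
best = min((minimize(neg_change, d0, method='SLSQP',
                     constraints={'type': 'ineq',
                                  'fun': lambda dp: delta - np.linalg.norm(dp)})
            for d0 in starts), key=lambda res: res.fun)
print(best.x, -best.fun)         # the CNOP-P and the maximal simulation change
```

Parameters whose perturbations dominate the resulting CNOP-P are the candidates for the "relatively more sensitive and important" subset.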
2015-06-01
CIPs) We have drafted policy language that Defense Acquisition, Technology, and Logistics is now coordinating that will make it a requirement for at least Acquisition Category I programs to identify CIPs early and for the intelligence community to monitor those and report breaches throughout the... are coming. Two important ones are the Critical Intelligence Parameters (CIPs) policy and the change to the System Threat Assessment Report (STAR
Hohm, Tim; Demarsy, Emilie; Quan, Clément; Allenbach Petrolati, Laure; Preuten, Tobias; Vernoux, Teva; Bergmann, Sven; Fankhauser, Christian
2014-01-01
Phototropism is a growth response allowing plants to align their photosynthetic organs toward incoming light and thereby to optimize photosynthetic activity. Formation of a lateral gradient of the phytohormone auxin is a key step to trigger asymmetric growth of the shoot leading to phototropic reorientation. To identify important regulators of auxin gradient formation, we developed an auxin flux model that enabled us to test in silico the impact of different morphological and biophysical parameters on gradient formation, including the contribution of the extracellular space (cell wall) or apoplast. Our model indicates that cell size, cell distributions, and apoplast thickness are all important factors affecting gradient formation. Among all tested variables, regulation of apoplastic pH was the most important to enable the formation of a lateral auxin gradient. To test this prediction, we interfered with the activity of plasma membrane H+-ATPases that are required to control apoplastic pH. Our results show that H+-ATPases are indeed important for the establishment of a lateral auxin gradient and phototropism. Moreover, we show that during phototropism, H+-ATPase activity is regulated by the phototropin photoreceptors, providing a mechanism by which light influences apoplastic pH. PMID:25261457
Myokit: A simple interface to cardiac cellular electrophysiology.
Clerx, Michael; Collins, Pieter; de Lange, Enno; Volders, Paul G A
2016-01-01
Myokit is a new, powerful and versatile software tool for modeling and simulation of cardiac cellular electrophysiology. Myokit consists of an easy-to-read modeling language, a graphical user interface, single and multi-cell simulation engines and a library of advanced analysis tools accessible through a Python interface. Models can be loaded from Myokit's native file format or imported from CellML. Model export is provided to C, MATLAB, CellML, CUDA and OpenCL. Patch-clamp data can be imported and used to estimate model parameters. In this paper, we review existing tools for simulating the cardiac cellular action potential and find that current tools do not cater specifically to model development and that there is a gap between easy-to-use but limited software and powerful tools that require strong programming skills from their users. We then describe Myokit's capabilities, focusing in detail on its model description language, simulation engines and import/export facilities. Using three examples, we show how Myokit can be used for clinically relevant investigations, multi-model testing and parameter estimation in Markov models, all with minimal programming effort from the user. In this way, Myokit bridges a gap between performance, versatility and user-friendliness. Copyright © 2015 Elsevier Ltd. All rights reserved.
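A minimal usage sketch of the Python interface, following the quickstart pattern described in Myokit's documentation; the file name is a placeholder and the logged variable names depend on the loaded model.

import myokit

# Load a model, pacing protocol and (optional) embedded script from an .mmt
# file; 'model.mmt' is a placeholder for any model in Myokit's native format.
model, protocol, script = myokit.load('model.mmt')

# Run a single-cell simulation for 1000 ms.
sim = myokit.Simulation(model, protocol)
log = sim.run(1000)

# The result is a log of time series; a name like 'membrane.V' is only an
# example and depends on how the loaded model names its components.
print(log['engine.time'][:5])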
NASA Astrophysics Data System (ADS)
Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl
2018-06-01
In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values of the method parameters at which the average computational cost is minimized for a desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo and with a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
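For reference, the classical double-loop estimator that the paper improves on can be written in a few lines. The sketch below uses a toy linear-Gaussian model of my own choosing, not one of the paper's examples; note that log(inner.mean()) is precisely the term that underflows when the inner sample size is too small.

import numpy as np

rng = np.random.default_rng(0)

def likelihood(y, theta, xi, sigma=0.1):
    # Gaussian likelihood for the toy model y = xi * theta + noise.
    return np.exp(-0.5 * ((y - xi * theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def eig_dlmc(xi, n_outer=500, n_inner=500):
    # EIG = E_y[ log p(y|theta) - log p(y) ], with the evidence p(y)
    # estimated by the inner Monte Carlo loop over fresh prior samples.
    thetas = rng.normal(size=n_outer)                  # prior samples
    ys = xi * thetas + 0.1 * rng.normal(size=n_outer)  # simulated data
    total = 0.0
    for y, th in zip(ys, thetas):
        inner = likelihood(y, rng.normal(size=n_inner), xi)
        total += np.log(likelihood(y, th, xi)) - np.log(inner.mean())
    return total / n_outer

print(eig_dlmc(xi=1.0))  # a larger |xi| makes the design more informative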
An investigation of using an RQP based method to calculate parameter sensitivity derivatives
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern-day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al. 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second-order information about the Lagrangian at the current point, and (2) the assumption that the active set of constraints does not change. The first of these two problems is addressed here, and a new algorithm is proposed that does not require explicit calculation of second-order information.
Zieliński, Tomasz G
2015-04-01
This paper proposes and discusses an approach for the design and quality inspection of the morphology of sound absorbing foams, using a relatively simple technique for the random generation of periodic microstructures representative of open-cell foams with spherical pores. The design is controlled by a few parameters, namely the total open porosity, the average pore size, and the standard deviation of pore size. These design parameters are set exactly and independently; however, setting the standard deviation of pore sizes requires some number of pores in the representative volume element (RVE), and this number is a procedure parameter. Another pore structure parameter which may be indirectly affected is the average size of the windows linking the pores; it is in fact only weakly controlled by the maximal pore-penetration factor, and moreover, it depends on the porosity and pore size. The proposed methodology for testing microstructure designs of sound absorbing porous media applies multi-scale modeling in which some important transport parameters (responsible for sound propagation in a porous medium) are calculated from the microstructure using the generated RVE, in order to estimate the sound velocity and absorption of the designed material.
Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis
NASA Technical Reports Server (NTRS)
Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony
2009-01-01
Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space and the complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, while still exploring important interactions in the parameter space in a systematic fashion. The data generated are automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours is used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
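The case-reduction idea behind n-factor combinatorial variation can be illustrated with a greedy 2-factor (pairwise) generator. The parameters, levels, and greedy strategy below are illustrative assumptions, not the tooling used in the study.

import itertools

# Hypothetical simulation parameters, each with a few discrete levels.
params = {"thrust": [0.8, 1.0, 1.2], "mass": [90, 100], "wind": [0, 5, 10]}
names = list(params)

# All (parameter-pair, value-pair) combinations that must be covered.
uncovered = set()
for (i, a), (j, b) in itertools.combinations(enumerate(names), 2):
    for va, vb in itertools.product(params[a], params[b]):
        uncovered.add((i, j, va, vb))

def pairs(case):
    # The pairwise combinations exercised by one test case.
    return {(i, j, case[i], case[j])
            for i, j in itertools.combinations(range(len(case)), 2)}

# Greedily pick the case that covers the most still-uncovered pairs.
suite = []
all_cases = list(itertools.product(*params.values()))
while uncovered:
    best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)
print(len(suite), "cases instead of", len(all_cases))

For these three parameters the full factorial has 18 cases, while pairwise coverage typically needs roughly half that; the savings grow rapidly with the number of parameters.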
Comprehensive monitoring of drinking well water quality in Seoul metropolitan city, Korea.
Kim, Ki-Hyun; Susaya, Janice P; Park, Chan Goo; Uhm, Jung-Hoon; Hur, Jin
2013-08-01
In this research, the quality of drinking well waters from 14 districts around Seoul metropolitan city, Korea was assessed by measuring a number of parameters with established guidelines (e.g., arsenic, fluoride, nitrate nitrogen, benzene, 1,2-dichloroethene, dichloromethane, copper, and lead) and without such criteria (e.g., hardness, chloride ion, sulfate ion, ammonia nitrogen, aluminum, iron, manganese, and zinc). Physical parameters such as evaporation residue (or total dissolved solids) and turbidity were also measured. The importance of each parameter in well waters was examined in terms of the magnitude and exceedance frequency of guideline values established by international (and national) health agencies. The results of this study indicate that, among the eight parameters with well-established guidelines (e.g., WHO), arsenic and lead (guideline value of 0.01 mg L(-1) for both) recorded the highest exceedance frequencies of 18 and 16 well samples, ranging over 0.06-136 and 2-9 mg L(-1), respectively. As such, a number of water quality parameters measured from many well waters in this urban area were at critical levels, which require immediate attention for treatment and continuous monitoring.
Muramoto, Akiko; Matsushita, Madoka; Kato, Ayako; Yamamoto, Naoki; Koike, George; Nakamura, Masakazu; Numata, Takeyuki; Tamakoshi, Akiko; Tsushita, Kazuyo
2014-01-01
Adequate goal-setting is important in health counselling and treatment for obesity and overweight. We tried to determine the minimum weight reduction required for improvement of obesity-related risk factors and conditions in obese and overweight Japanese people, using a nationwide intervention programme database. Japanese men and women (n=3480; mean age±standard deviation [SD], 48.3±5.9 years; mean body mass index±SD, 27.7±2.5kgm(-2)) with "Obesity Disease" or "Metabolic Syndrome" participated in a 6-month lifestyle modification programme (specific health guidance) and underwent follow-up for 6 months thereafter. The relationship between percent weight reduction and changes in 11 parameters of obesity-related diseases was examined. Significant weight reduction was observed 6 months after the beginning of the programme, and it was maintained for 1 year. Concomitant improvements in parameters of obesity-related diseases were also observed. One-third of the subjects reduced their body weight by ≥3%. In the group exhibiting 1% to <3% weight reduction, plasma triglycerides (TG), low-density lipoprotein cholesterol (LDL-C), haemoglobin A1c (HbA1c), aspartate aminotransferase (AST), alanine aminotransferase (ALT) and γ-glutamyl transpeptidase (γ-GTP) decreased significantly, and high-density lipoprotein cholesterol (HDL-C) increased significantly compared to the control group (±1% weight change group). In addition to the improvements in these 7 parameters (out of 11), significant reductions in systolic blood pressure (SBP), diastolic blood pressure (DBP), fasting plasma glucose (FPG) and uric acid (UA) (in total, 11 of 11 parameters) were observed in the group with 3% to <5% weight reduction. In the group with ≥5% weight reduction, the same 11 parameters improved as in the group with 3% to <5% weight reduction. The 6-month lifestyle modification programme induced significant weight reduction and significant improvement in parameters of obesity-related diseases. All the measured obesity-related parameters were significantly improved in the groups with 3% to <5% and ≥5% weight reduction. Based on these findings, the minimum weight reduction required for improvement of obesity-related risk factors or conditions is 3% in obese and overweight (by WHO classification) Japanese people. Copyright © 2013 Asian Oceanian Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.
Automated palpation for breast tissue discrimination based on viscoelastic biomechanical properties.
Tsukune, Mariko; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, G Masakatsu
2015-05-01
Accurate, noninvasive methods are sought for breast tumor detection and diagnosis. In particular, a need for noninvasive techniques that measure both the nonlinear elastic and viscoelastic properties of breast tissue has been identified. For diagnostic purposes, it is important to select a nonlinear viscoelastic model with a small number of parameters that correlate highly with histological structure. However, the combination of conventional viscoelastic models with nonlinear elastic models requires a large number of parameters. A nonlinear viscoelastic model of breast tissue based on a simple equation with few parameters was developed and tested. The nonlinear viscoelastic properties of soft tissues in porcine breast were measured experimentally using fresh ex vivo samples. Robotic palpation was used for measurements employed in a finite element model. These measurements were used to calculate nonlinear viscoelastic parameters for fat, fibroglandular breast parenchyma and muscle. The ability of these parameters to distinguish the tissue types was evaluated in a two-step statistical analysis that included Holm's pairwise test. The discrimination error rate of a set of parameters was evaluated by the Mahalanobis distance. Ex vivo testing in porcine breast revealed significant differences in the nonlinear viscoelastic parameters among combinations of the three tissue types. The discrimination error rate was low among all tested combinations of the three tissue types. Although tissue discrimination was not achieved using only a single nonlinear viscoelastic parameter, a set of four nonlinear viscoelastic parameters was able to reliably and accurately discriminate fat, breast fibroglandular tissue and muscle.
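A minimal sketch of the Mahalanobis-distance discrimination step, assuming synthetic four-parameter feature vectors for two tissue classes; the numbers are placeholders, not the paper's measurements.

import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
# Synthetic 4-parameter feature vectors for two tissue classes (fat, muscle).
fat = rng.normal([1.0, 0.5, 2.0, 0.1], 0.1, size=(30, 4))
muscle = rng.normal([1.4, 0.8, 1.5, 0.3], 0.1, size=(30, 4))

# Pooled inverse covariance, as used in discriminant-style analyses.
pooled = np.cov(np.vstack([fat - fat.mean(0), muscle - muscle.mean(0)]).T)
vi = np.linalg.inv(pooled)

sample = muscle[0]
d_fat = mahalanobis(sample, fat.mean(0), vi)
d_muscle = mahalanobis(sample, muscle.mean(0), vi)
print("classified as:", "muscle" if d_muscle < d_fat else "fat")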
Tomasic, Ivan; Tomasic, Nikica; Trobec, Roman; Krpan, Miroslav; Kelava, Tomislav
2018-04-01
Remote patient monitoring should reduce mortality rates, improve care, and reduce costs. We present an overview of the available technologies for the remote monitoring of chronic obstructive pulmonary disease (COPD) patients, together with the most important medical information regarding COPD in a language that is adapted for engineers. Our aim is to bridge the gap between the technical and medical worlds and to facilitate and motivate future research in the field. We also present a justification, motivation, and explanation of how to monitor the most important parameters for COPD patients, together with pointers for the challenges that remain. Additionally, we propose and justify the importance of electrocardiograms (ECGs) and the arterial carbon dioxide partial pressure (PaCO2) as two crucial physiological parameters that have not been used so far to any great extent in the monitoring of COPD patients. We cover four possibilities for the remote monitoring of COPD patients: continuous monitoring during normal daily activities for the prediction and early detection of exacerbations and life-threatening events, monitoring during the home treatment of mild exacerbations, monitoring oxygen therapy applications, and monitoring exercise. We also present and discuss the current approaches to decision support at remote locations and list the normal and pathological values/ranges for all the relevant physiological parameters. The paper concludes with our insights into the future developments and remaining challenges for improvements to continuous remote monitoring systems.
Genetic algorithm parameters tuning for resource-constrained project scheduling problem
NASA Astrophysics Data System (ADS)
Tian, Xingke; Yuan, Shengrui
2018-04-01
The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a given optimization goal, such as the shortest duration, the smallest cost or resource balance, the start and finish of all tasks must be arranged while satisfying the project's timing and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of the RCPSP, such as job shop scheduling and flow shop scheduling. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the selection of the main parameters of the genetic algorithm. Such parameters are generally chosen empirically, which cannot guarantee that they are optimal. In this paper, we address this blind selection of parameters in the process of solving the RCPSP: we perform a sampling analysis, establish a surrogate (proxy) model, and ultimately solve for the optimal parameters.
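A minimal sketch of the idea: treat the GA's own parameters as decision variables of an outer search instead of fixing them empirically. The toy objective below stands in for a full RCPSP decoder, and plain random search stands in for the paper's sampling-plus-surrogate procedure.

import random

def run_ga(pop_size, cx_rate, mut_rate, n_gen=50):
    # Toy GA minimizing sum(x) over 10 binary "priority" genes; a stand-in
    # for decoding and evaluating a real RCPSP schedule.
    pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=sum)                      # ascending: best first
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 10) if random.random() < cx_rate else 0
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < mut_rate else g
                             for g in child])
        pop = parents + children
    return min(sum(ind) for ind in pop)

random.seed(0)
# Outer search over the GA's own parameters instead of picking them blindly.
trials = [(random.choice([20, 50, 100]), random.uniform(0.5, 1.0),
           random.uniform(0.01, 0.2)) for _ in range(20)]
best = min(trials, key=lambda t: run_ga(*t))
print("best (pop_size, cx_rate, mut_rate):", best)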
Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo
2014-09-01
Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions on the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, iteratively fixing the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The resulting model structures have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Barati, Reza
2017-07-01
Perumal et al. (2017) compared the performance of the variable parameter McCarthy-Muskingum (VPMM) model of Perumal and Price (2013) with the nonlinear Muskingum (NLM) model of Gill (1978) using hypothetical inflow hydrographs in an artificial channel. As input, the first model needs the initial condition, upstream boundary condition, Manning's roughness coefficient, length of the routing reach, cross-sections of the river reach and the bed slope, while the latter requires the initial condition, upstream boundary condition and the hydrologic parameters (three parameters which can be calibrated using flood hydrographs of the upstream and downstream sections). The VPMM model was examined using available Manning's roughness values, whereas the NLM model was tested in both calibration and validation steps. As their final conclusion, Perumal et al. (2017) claimed that the NLM model should be retired from the Muskingum modelling literature. While the authors' intention is laudable, this comment examines some important issues in the subject matter of the original study.
Pendyam, Sandeep; Mohan, Ashwin; Kalivas, Peter W.; Nair, Satish S.
2015-01-01
Extracellular neurotransmitter concentrations vary over a wide range depending on the type of neurotransmitter and location in the brain. Neurotransmitter homeostasis near a synapse is achieved by a balance of several mechanisms including vesicular release from the presynapse, diffusion, uptake by transporters, non-synaptic production, and regulation of release by autoreceptors. These mechanisms are also affected by the glia surrounding the synapse. However, the role of these mechanisms in achieving neurotransmitter homeostasis is not well understood. A biophysical modeling framework was proposed to reverse engineer glial configurations and parameters related to homeostasis for synapses that support a range of neurotransmitter gradients. Model experiments reveal that synapses with extracellular neurotransmitter concentrations in the micromolar range require non-synaptic neurotransmitter sources and tight synaptic isolation by extracellular glial formations. The model was used to identify the role of perisynaptic parameters in neurotransmitter homeostasis, and to propose glial configurations that could support different levels of extracellular neurotransmitter concentrations. When the parameters were ranked by their effect on neurotransmitter homeostasis, non-synaptic sources were found to be the most important, followed by transporter concentration and the diffusion coefficient. PMID:22460547
Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2017-05-01
Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for extracting repetitive transients, to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, continuous wavelet transform, discrete wavelet transform, wavelet packets, multiwavelets, empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given the artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
Control-Relevant Modeling, Analysis, and Design for Scramjet-Powered Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
Rodriguez, Armando A.; Dickeson, Jeffrey J.; Sridharan, Srikanth; Benavides, Jose; Soloway, Don; Kelkar, Atul; Vogel, Jerald M.
2009-01-01
Within this paper, control-relevant vehicle design concepts are examined using a widely used 3 DOF (plus flexibility) nonlinear model for the longitudinal dynamics of a generic carrot-shaped scramjet-powered hypersonic vehicle. Trade studies associated with vehicle/engine parameters are examined. The impact of parameters on control-relevant static properties (e.g. level-flight trimmable region, trim controls, AOA, thrust margin) and dynamic properties (e.g. instability and right half plane zero associated with flight path angle) is examined. Specific parameters considered include: inlet height, diffuser area ratio, lower forebody compression ramp inclination angle, engine location, center of gravity, and mass. Vehicle optimization is also examined. Both static and dynamic considerations are addressed. A gap-metric optimized vehicle is obtained to illustrate how this control-centric concept can be used to "reduce" scheduling requirements for the final control system. A classic inner-outer loop control architecture and methodology is used to shed light on how specific vehicle/engine design parameter selections impact control system design. In short, the work represents an important first step toward revealing fundamental tradeoffs and systematically treating control-relevant vehicle design.
Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.
Westgard, James O; Westgard, Sten A
2017-03-01
Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications. Copyright © 2016 Elsevier Inc. All rights reserved.
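The sigma-metric referred to here is conventionally computed from the allowable total error (TEa), the observed bias, and the imprecision (CV), all in percent at the medical decision concentration. A minimal sketch with hypothetical values:

def sigma_metric(tea_pct, bias_pct, cv_pct):
    # Sigma-metric at a medical decision concentration (all inputs in percent).
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: allowable total error 10%, observed bias 2%, CV 1.5%.
sigma = sigma_metric(10.0, 2.0, 1.5)
print(f"sigma = {sigma:.2f}")  # higher sigma permits simpler, less frequent SQC

A process scoring around six sigma needs only minimal statistical quality control, while lower values call for more stringent SQC rules to detect medically important errors.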
Systematic procedure for designing processes with multiple environmental objectives.
Kim, Ki-Joo; Smith, Raymond L
2005-04-01
Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
The effects of mixotrophy on the stability and dynamics of a simple planktonic food web
Jost, Christian; Lawrence, Cathryn A.; Campolongo, Francesca; Wouter, van de Bund; Hill, Sheryl; DeAngelis, Donald L.
2004-01-01
Recognition of the microbial loop as an important part of aquatic ecosystems disrupted the notion of simple linear food chains. However, current research suggests that even the microbial loop paradigm is a gross simplification of microbial interactions due to the presence of mixotrophs—organisms that both photosynthesize and graze. We present a simple food web model with four trophic species, three of them arranged in a food chain (nutrients–autotrophs–herbivores) and the fourth as a mixotroph with links to both the nutrients and the autotrophs. This model is used to study the general implications of inclusion of the mixotrophic link in microbial food webs and the specific predictions for a parameterization that describes open ocean mixed layer plankton dynamics. The analysis indicates that the system parameters reside in a region of the parameter space where the dynamics converge to a stable equilibrium rather than displaying periodic or chaotic solutions. However, convergence requires weeks to months, suggesting that the system would never reach equilibrium in the ocean due to alteration of the physical forcing regime. Most importantly, the mixotrophic grazing link seems to stabilize the system in this region of the parameter space, particularly when nutrient recycling feedback loops are included.
Peleg, Micha; Normand, Mark D
2015-09-01
When a vitamin's, pigment's or other food component's chemical degradation follows a known fixed order kinetics, and its rate constant's temperature-dependence follows a two parameter model, then, at least theoretically, it is possible to extract these two parameters from two successive experimental concentration ratios determined during the food's non-isothermal storage. This requires numerical solution of two simultaneous equations, themselves the numerical solutions of two differential rate equations, with a program especially developed for the purpose. Once calculated, these parameters can be used to reconstruct the entire degradation curve for the particular temperature history and predict the degradation curves for other temperature histories. The concept and computation method were tested with simulated degradation under rising and/or falling oscillating temperature conditions, employing the exponential model to characterize the rate constant's temperature-dependence. In computer simulations, the method's predictions were robust against minor errors in the two concentration ratios. The program to do the calculations was posted as freeware on the Internet. The temperature profile can be entered as an algebraic expression that can include 'If' statements, or as an imported digitized time-temperature data file, to be converted into an Interpolating Function by the program. The numerical solution of the two simultaneous equations requires close initial guesses of the exponential model's parameters. Programs were devised to obtain these initial values by matching the two experimental concentration ratios with a generated degradation curve whose parameters can be varied manually with sliders on the screen. These programs too were made available as freeware on the Internet and were tested with published data on vitamin A. Copyright © 2015 Elsevier Ltd. All rights reserved.
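A minimal sketch of the described computation, assuming first-order degradation and the exponential model for the rate constant's temperature dependence, so that C(t)/C0 = exp(-∫k(T(u))du). The temperature profile, observation times, and concentration ratios are synthetic stand-ins, and, as the paper advises, the solver is given close initial guesses.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

T_ref = 80.0
def temperature(t):
    # Prescribed non-isothermal storage profile (deg C); synthetic example.
    return 60.0 + 0.5 * t

def ratio(t_end, k_ref, c):
    # First-order kinetics with the exponential model
    # k(T) = k_ref * exp(c * (T - T_ref)).
    integral, _ = quad(lambda u: k_ref * np.exp(c * (temperature(u) - T_ref)),
                       0, t_end)
    return np.exp(-integral)

# Two "experimental" concentration ratios observed at times t1 and t2.
t1, t2, r1, r2 = 20.0, 40.0, 0.8, 0.5

def equations(p):
    k_ref, c = p
    return [ratio(t1, k_ref, c) - r1, ratio(t2, k_ref, c) - r2]

k_ref, c = fsolve(equations, x0=[0.01, 0.05])  # close initial guesses
print(k_ref, c)

With the two parameters recovered, the same ratio() function reconstructs the full degradation curve for this temperature history or predicts curves for other histories.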
NASA Astrophysics Data System (ADS)
Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.
2012-11-01
Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
Physical requirements in Olympic sailing.
Bojsen-Møller, J; Larsson, B; Aagaard, P
2015-01-01
Physical fitness and muscular strength are important performance parameters in Olympic sailing, although their relative importance changes between classes. The Olympic format consists of eight yacht types combined into 10 so-called events, with a total of 15 sailors (male and female) in a complete national Olympic delegation. The yachts have different requirements with respect to handling, and moreover, each sailor plays a specific role when sailing. Physical demands therefore remain heterogeneous for Olympic sailors. Previous studies have mainly examined sailors for whom 'hiking' (the task of leaning over the side of the yacht to increase righting moment) is the primary requirement. Other than the ability to sustain prolonged quasi-isometric contractions, hiking seems to require significant maximal muscle strength, especially in the knee extensors, hip flexors and abdominal and lower back muscles. Another group of studies has investigated boardsailing and provided evidence that windsurfing requires very high aerobic and anaerobic capacity. Although data exist on other types of sailors, the information is limited; moreover, the profile of the Olympic events has changed markedly over the last few years to involve more agile, fast and spectacular yachts. The change of events in Olympic sailing has likely added to the physical requirements; however, data on sailors in the modern-type yachts are scarce. The present paper describes the recent developments in Olympic sailing with respect to yacht types, and reviews the existing knowledge on physical requirements in modern Olympic sailing. Finally, recommendations for future research in sailing are given.
Searching the Force Field Electrostatic Multipole Parameter Space.
Jakobsen, Sofie; Jensen, Frank
2016-04-12
We show by tensor decomposition analyses that the molecular electrostatic potential for amino acid peptide models has an effective rank less than twice the number of atoms. This rank indicates the number of parameters that can be derived from the electrostatic potential in a statistically significant way. Using this as a guideline, we investigate different strategies for deriving a reduced set of atomic charges, dipoles, and quadrupoles capable of reproducing the reference electrostatic potential with a low error. A full combinatorial search of selected parameter subspaces for N-methylacetamide and a cysteine peptide model indicates that there are many different parameter sets capable of providing errors close to that of the global minimum. Among the different reduced multipole parameter sets that have low errors, there is consensus that atoms involved in π-bonding require higher order multipole moments. The possible correlation between multipole parameters is investigated by exhaustive searches of combinations of up to four parameters distributed in all possible ways on all possible atomic sites. These analyses show that there is no advantage in considering combinations of multipoles compared to a simple approach where the importance of each multipole moment is evaluated sequentially. When combined with possible weighting factors related to the computational efficiency of each type of multipole moment, this may provide a systematic strategy for determining a computational efficient representation of the electrostatic component in force field calculations.
Comparing methods for Earthquake Location
NASA Astrophysics Data System (ADS)
Turkaya, Semih; Bodin, Thomas; Sylvander, Matthieu; Parroucau, Pierre; Manchuel, Kevin
2017-04-01
There are plenty of methods available for locating small magnitude point source earthquakes. However, it is known that these different approaches produce different results. For each approach, results also depend on a number of parameters, which can be separated into two main branches: (1) parameters related to the observations (their number and distribution, for example) and (2) parameters related to the inversion process (velocity model, weighting parameters, initial location, etc.). Currently, the results obtained from most location methods do not systematically include quantitative uncertainties. The effect of the selected parameters on location uncertainties is also poorly known. Understanding the importance of these different parameters and their effect on uncertainties is clearly required to better constrain knowledge of fault geometry and seismotectonic processes and, in the end, to improve seismic hazard assessment. In this work, realized in the frame of the SINAPS@ research program (http://www.institut-seism.fr/projets/sinaps/), we analyse the effect of different parameters on earthquake location (e.g. type of phase, maximum hypocentral separation, etc.). We compare several available codes (Hypo71, HypoDD, NonLinLoc, etc.) and determine their strengths and weaknesses in different cases by means of synthetic tests. The work, performed for the moment on synthetic data, is planned to be applied, in a second step, to data collected by the Midi-Pyrénées Observatory (OMP).
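To make concrete how inversion parameters propagate into a location, here is a minimal grid-search locator over demeaned P-wave travel-time residuals in a homogeneous velocity model; the geometry, velocity, and noise level are synthetic assumptions, far simpler than the cited codes.

import numpy as np

rng = np.random.default_rng(0)
v_p = 6.0                                    # assumed P velocity (km/s)
stations = rng.uniform(0, 50, size=(8, 2))   # station x, y positions (km)
true_src = np.array([25.0, 30.0])
t_obs = np.linalg.norm(stations - true_src, axis=1) / v_p \
        + rng.normal(0, 0.05, 8)             # picks with 50 ms noise

# Grid search: the misfit is the spread of the residuals, which removes
# the unknown origin time (a constant shift across stations).
xs = ys = np.linspace(0, 50, 201)
best, best_rms = None, np.inf
for x in xs:
    for y in ys:
        t_pred = np.linalg.norm(stations - [x, y], axis=1) / v_p
        rms = np.std(t_obs - t_pred)   # demeaning absorbs origin time
        if rms < best_rms:
            best, best_rms = (x, y), rms
print("located at", best, "rms", round(best_rms, 3))

Re-running with a perturbed v_p shows directly how a velocity-model error biases the solution, one of the parameter effects the study sets out to quantify.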
Restoration of acidic mine spoils with sewage sludge: II measurement of solids applied
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stucky, D.J.; Zoeller, A.L.
1980-01-01
Sewage sludge was incorporated into acidic strip mine spoils at rates equivalent to 0, 224, 336, and 448 dry metric tons (dmt)/ha and placed in pots in a greenhouse. Spoil parameters were determined 48 hours after sludge incorporation, Time Planting (P), and five months after orchardgrass (Dactylis glomerata L.) was planted in the pots, Time Harvest (H). Parameters measured were: pH, organic matter content (OM), cation exchange capacity (CEC), electrical conductivity (EC) and yield. Values for each parameter were significantly different at the two sampling times. Correlation coefficients were calculated for all parameters versus rates of applied sewage sludge and for all parameters versus each other. Multiple regressions were performed, stepwise, for all parameters versus rates of applied sewage sludge. Equations to predict amounts of sewage sludge incorporated in spoils were derived for individual and multiple parameters. Generally, measurements made at Time P achieved the highest correlation coefficient and multiple correlation coefficient values; the authors therefore concluded that data from Time P had the greatest predictive value. The most important measured value for predicting the rate of applied sewage sludge was pH, and some additional accuracy was obtained by including CEC in the equation. This experiment indicated that soil properties can be used to estimate the amounts of sewage sludge solids required to reclaim acidic mine spoils and to estimate the quantities incorporated.
NASA Astrophysics Data System (ADS)
Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.
2011-12-01
Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global ones, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a global sensitivity analysis (GSA) on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. The USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis had yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across a wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses, in which the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: mass density of live and dead roots found in the upper inch of soil (C factor), slope angle (L and S factors), and percentage of land area covered by surface cover (C factor). Our findings give further support to the importance of vegetation as a vital ecosystem service provider for soil loss reduction. Concurrently, progress has already been made in Costa Rica, where dam managers are moving forward on a Payment for Ecosystem Services scheme to help keep private lands forested and to improve crop management through targeted investments. Use of complex watershed models such as RUSLE can help managers quantify the effect of specific land use changes. Moreover, effective management of vegetation has other important benefits, such as bundled ecosystem services (e.g. pollination, habitat connectivity, etc.) and improvements in communities' livelihoods.
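The analysis pattern described (global sampling of the factor space followed by Random Forest importance ranking) can be sketched as follows. The factor ranges are placeholder assumptions, and the toy multiplicative response will not reproduce the paper's C-factor ranking; it only illustrates the workflow.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
# Global (uniform) sampling of the RUSLE factors over placeholder ranges;
# L and S are combined into a single LS term here for brevity.
X = np.column_stack([
    rng.uniform(100, 10000, n),   # R: rainfall-runoff erosivity
    rng.uniform(0.05, 0.6, n),    # K: soil erodibility
    rng.uniform(0.5, 5.0, n),     # LS: slope length and steepness
    rng.uniform(0.001, 0.5, n),   # C: cover management
    rng.uniform(0.1, 1.0, n),     # P: support practices
])
A = X.prod(axis=1)                # RUSLE: A = R * K * LS * C * P

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, A)
for name, imp in zip(["R", "K", "LS", "C", "P"], rf.feature_importances_):
    print(name, round(imp, 3))

In the real study the "model runs" replace the synthetic product above, and the forest's importances capture the higher order interactions that local methods miss.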
Lo, Sheng-Ying; Baird, Geoffrey S; Greene, Dina N
2015-12-07
Proper utilization of resources is an important operational objective for clinical laboratories. To reduce unnecessary manual interventions on automated instruments, we conducted a workflow analysis that optimized dilution parameters and reporting of abnormally high chemistry results for the Beckman AU series of chemistry analyzers while maintaining clinically acceptable reportable ranges. Workflow analysis for the Beckman AU680/5812 and DxC800 chemistry analyzers was performed using historical data. Clinical reportable ranges for 53 chemistry analytes were evaluated. Optimized dilution parameters and upper limit of reportable ranges for the AU680/5812 instruments were derived and validated to meet these reportable ranges. The number of specimens that required manual dilutions before and after optimization was determined for both the AU680/5812 and DxC800, with the DxC800 serving as the reference instrument. Retrospective data analysis revealed that 7700 specimens required manual dilutions on the DxC over a 2-y period. Using our optimized AU-specific dilution and reporting parameters, the data-driven simulation analysis showed a 61% reduction in manual dilutions. For the specimens that required manual dilutions on the AU680/5812, we developed standardized dilution procedures to further streamline workflow. We provide a data-driven, practical outline for clinical laboratories to efficiently optimize their use of automated chemistry analyzers. The outcomes can be used to assist laboratories wishing to improve their existing procedures or to facilitate transitioning into a new line of instrumentation, regardless of the instrument model or manufacturer. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkpatrick, R. C.
Nuclear fusion was discovered experimentally in 1933-34 and other charged particle nuclear reactions were documented shortly thereafter. Work in earnest on the fusion ignition problem began with Edward Teller's group at Los Alamos during the war years. His group quantified all the important basic atomic and nuclear processes and summarized their interactions. A few years later, the success of the early theory developed at Los Alamos led to very successful thermonuclear weapons, but also to decades of unsuccessful attempts to harness fusion as an energy source of the future. The reasons for this history are many, but it seems appropriate to review some of the basics with the objective of identifying what is essential for success and what is not. This tutorial discusses only the conditions required for ignition in small fusion targets and how the target design impacts driver requirements. Generally speaking, the driver must meet the energy, power and power density requirements needed by the fusion target. The most relevant parameters for ignition of the fusion fuel are the minimum temperature and areal density (ρR), but these parameters set secondary conditions that must be achieved, namely an implosion velocity, target size and pressure, which are interrelated. Despite the apparent simplicity of inertial fusion targets, there is not a single mode of fusion ignition, and the necessary combination of minimum temperature and areal density depends on the mode of ignition. However, by providing a magnetic field of sufficient strength, the conditions needed for fusion ignition can be drastically altered. Magnetized target fusion potentially opens up a vast parameter space between the extremes of magnetic and inertial fusion.
Dracínský, Martin; Kaminský, Jakub; Bour, Petr
2009-03-07
The relative importance of anharmonic corrections to molecular vibrational energies, nuclear magnetic resonance (NMR) chemical shifts, and J-coupling constants was assessed for a model set of methane derivatives, differently charged alanine forms, and sugar models. Molecular quartic force fields and NMR parameter derivatives were obtained quantum mechanically by numerical differentiation. In most cases the harmonic vibrational function combined with the property second derivatives provided the largest correction to the equilibrium values, while anharmonic corrections (third and fourth energy derivatives) were found to be less important. The most computationally expensive off-diagonal quartic energy derivatives, involving four different coordinates, provided a negligible contribution. The vibrational corrections of NMR shifts were small and yielded a convincing improvement only for very accurate wave function calculations. For the indirect spin-spin coupling constants the averaging significantly improved already the equilibrium values obtained at the density functional theory level. Both the first and the complete second shielding derivatives were found to be important for the shift corrections, while for the J-coupling constants the vibrational parts were dominated by the diagonal second derivatives. The vibrational corrections were also applied to some isotopic effects, where the corrected values reproduced the experiment reasonably well, but only if a full second-order expansion of the NMR parameters was included. Contributions of individual vibrational modes to the averaging are discussed. Similar behavior was found for the methane derivatives and for the larger and polar molecules. The vibrational averaging thus facilitates interpretation of previous experimental results and suggests that it can make future molecular structural studies more reliable. Because of the lengthy numerical differentiation required to compute the NMR parameter derivatives, their analytical implementation in future quantum chemistry packages is desirable.
Magnetic field errors tolerances of Nuclotron booster
NASA Astrophysics Data System (ADS)
Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet
2018-04-01
Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for achieving the required parameters and high-quality accelerator operation. Studies of the linear and nonlinear dynamics of the 197Au31+ ion beam in the booster were carried out with the MADX program. An analytical estimate of the magnetic field error tolerances and a numerical computation of the dynamic aperture of the booster DFO magnetic lattice are presented. Closed-orbit distortion with random magnetic field errors and errors in the layout of booster units was evaluated.
HPLC, MS, and pharmacokinetics of melphalan, bisantrene and 13-cis retinoic acid.
Davis, T P; Peng, Y M; Goodman, G E; Alberts, D S
1982-11-01
High performance liquid chromatographic procedures are described for melphalan, bisantrene, and 13-cis retinoic acid, three important anticancer drugs in various stages of clinical development. The procedures require a rapid and simple sample clean-up followed by a 10- to 20-min chromatographic separation on a reversed-phase C18 column. Precisions are all less than 8%, with recoveries greater than 80%. Mass spectrometric confirmation of each drug from patient sample separations is presented to provide unambiguous identification for valid pharmacokinetic parameter determination.
Minimum Energy Pathways for Chemical Reactions
NASA Technical Reports Server (NTRS)
Walch, S. P.; Langhoff, S. R. (Technical Monitor)
1995-01-01
Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications to reactions leading to NOx and soot formation in hydrocarbon combustion.
Viral diseases of new world camelids.
Kapil, Sanjay; Yeary, Teresa; Evermann, James F
2009-07-01
The increased popularity and population of New World camelids in the United States requires the development of a broader base of knowledge of the health and disease parameters for these animals by the veterinary livestock practitioner. Although our knowledge regarding infectious diseases of camelids has increased greatly over the past decade, the practice of camelid medicine is a relatively new field in North America, so it is important to seek out seasoned colleagues and diagnostic laboratories that are involved in camelid health treatment and diagnosis.
A study of an orbital radar mapping mission to Venus. Volume 1: Summary
NASA Technical Reports Server (NTRS)
1973-01-01
A preliminary design of a Venus radar mapping orbiter mission and spacecraft was developed. The important technological problems were identified and evaluated. The study was primarily concerned with trading off alternate ways of implementing the mission and examining the most attractive concepts in order to assess technology requirements. Compatible groupings of mission and spacecraft parameters were analyzed by examining the interaction of their functioning elements and assessing their overall cost effectiveness in performing the mission.
Stratospheric measurement requirements and satellite-borne remote sensing capabilities
NASA Technical Reports Server (NTRS)
Carmichael, J. J.; Eldridge, R. G.; Frey, E. J.; Friedman, E. J.; Ghovanlou, A. H.
1976-01-01
The capabilities of specific NASA remote sensing systems to provide appropriate measurements of stratospheric parameters for potential user needs were assessed. This assessment was used to evaluate the capabilities of the remote sensing systems to perform global monitoring of the stratosphere. The following conclusions were reached: (1) The performance of current remote stratospheric sensors, in some cases, compares quite well with identified measurement requirements; their ability to measure other species has not been demonstrated. (2) None of the current in-situ methods have the capability to satisfy the requirements for global monitoring and the temporal constraints derived from the user needs portion of the study. (3) Existing, non-remote techniques will continue to play an important role in stratospheric investigations, both for corroboration of remotely collected data and in the evolutionary development of future remote sensors.
The Earth Microbiome Project and modeling the planets microbial potential (Invited)
NASA Astrophysics Data System (ADS)
Gilbert, J. A.
2013-12-01
The understanding of Earth's climate and ecology requires multiscale observations of the biosphere, of which microbial life is a major component. However, acquiring and processing physical samples of soil, water and air at the spatial and temporal resolution needed to capture the immense variation in microbial dynamics would require a herculean effort and immense financial resources, dwarfing even the most ambitious projects to date. To overcome this hurdle we created the Earth Microbiome Project (EMP), a crowd-sourced effort to acquire physical samples from researchers around the world that are, importantly, contextualized with physical, chemical and biological data detailing the environmental properties of each sample at the location and time it was acquired. The EMP leverages these existing efforts to target a systematic analysis of microbial taxonomic and functional dynamics across a vast array of environmental parameter gradients. The EMP captures the environmental gradients, location, time and sampling protocol information for every sample donated by our valued collaborators. Physical samples are then processed using a standardized DNA extraction, PCR, and shotgun sequencing protocol to generate comparable data on the microbial community structure and function in each sample. To date we have processed >17,000 samples from 40 different biomes. One of the key goals of the EMP is to map the spatiotemporal variability of microbial communities to capture the changes in important functional processes that need to be appropriately expressed in models to provide reliable forecasts of ecosystem phenotype across our changing planet. This is essential if we are to develop economically sound strategies to be good stewards of our Earth. The EMP recognizes that environments are comprised of complex sets of interdependent parameters and that the development of useful predictive computational models of both terrestrial and atmospheric systems requires recognition and accommodation of sources of uncertainty.
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
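The analytical integration of the variances can be made concrete: with a Jeffreys prior on each species' error variance, the marginalized negative log-posterior reduces to a sum of (n_k/2) log SSR_k terms, so no weights need to be estimated explicitly. The sketch below applies this to a toy two-species model of my own construction, not the column-scale reactive transport model of the paper.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
true = (0.4, 1.5)

def model(theta):
    # Toy two-species "transport" model: first-order decay and its product.
    k, c0 = theta
    return c0 * np.exp(-k * t), c0 * (1 - np.exp(-k * t))

# Species measured with very different (unknown) error levels.
obs = [m + rng.normal(0, s, t.size) for m, s in zip(model(true), (0.05, 0.2))]

def neg_log_post(theta):
    # Variances integrated out (Jeffreys prior): sum_k (n_k / 2) * log(SSR_k).
    return sum(t.size / 2 * np.log(((o - m) ** 2).sum())
               for o, m in zip(obs, model(theta)))

fit = minimize(neg_log_post, x0=[0.1, 1.0], method="Nelder-Mead")
print(fit.x)  # should land near the true (k, c0)

Compared with fixed-weight WLS, this objective automatically down-weights the noisier species without the extra error parameters of WLS(we).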
NASA Astrophysics Data System (ADS)
Nakano, Yoichiro; Yanase, Takashi; Nagahama, Taro; Yoshida, Hajime; Shimada, Toshihiro
2016-10-01
The water vapor transmission rate (WVTR) of a gas barrier coating is a critically important parameter for flexible organic device packaging, but its accurate measurement without mechanical stress to ultrathin films has been a significant challenge in instrumental analysis. At the current stage, no reliable results have been reported in the range of 10⁻⁶ g m⁻² day⁻¹ that is required for organic light emitting diodes (OLEDs). In this article, we describe a solution for this difficult but important measurement, involving enhanced sensitivity by a cold trap, a stabilized temperature system, pumped sealing and calibration by a standard conductance element.
Sun Glint and Sea Surface Salinity Remote Sensing
NASA Technical Reports Server (NTRS)
Dinnat, Emmanuel P.; LeVine, David M.
2007-01-01
A new mission in space, called Aquarius/SAC-D, is being built to measure the salinity of the world's oceans. Salinity is an important parameter for understanding movement of the ocean water. This circulation results in the transportation of heat and is important for understanding climate and climate change. Measuring salinity from space requires precise instruments and a careful accounting for potential sources of error. One of these sources of error is radiation from the sun that is reflected from the ocean surface to the sensor in space. This paper examines this reflected radiation and presents an advanced model for describing this effect that includes the effects of ocean waves on the reflection.
Channeling of multikilojoule high-intensity laser beams in an inhomogeneous plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivancic, S.; Haberberger, D.; Habara, H.
Channeling experiments were performed that demonstrate the transport of high-intensity (>10¹⁸ W/cm²), multikilojoule laser light through a millimeter-sized, inhomogeneous (~300-μm density scale length) laser produced plasma up to overcritical density, which is an important step forward for the fast-ignition concept. The background plasma density and the density depression inside the channel were characterized with a novel optical probe system. The channel progression velocity was measured, which agrees well with theoretical predictions based on large scale particle-in-cell simulations, confirming scaling laws for the required channeling laser energy and laser pulse duration, which are important parameters for future integrated fast-ignition channeling experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
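Two of the scored analyses are easy to illustrate. The sketch below (Python; the model function and parameter samples are placeholders, not PCHEPM) computes local sensitivities by central finite differences around nominal parameter values and rank correlations with spearmanr over a Monte Carlo ensemble; parameters scoring high on both would be flagged as important.

    import numpy as np
    from scipy.stats import spearmanr

    def local_sensitivities(model, p0, rel_step=0.01):
        # Central-difference sensitivity of the output to each
        # parameter, scaled by the nominal value.
        s = np.empty(len(p0))
        for i in range(len(p0)):
            hi, lo = p0.copy(), p0.copy()
            h = rel_step * abs(p0[i])
            hi[i] += h
            lo[i] -= h
            s[i] = (model(hi) - model(lo)) / (2.0 * h) * p0[i]
        return np.abs(s)

    def rank_correlations(model, samples):
        # Spearman rank correlation between each sampled parameter
        # and the model output over a Monte Carlo ensemble.
        y = np.array([model(p) for p in samples])
        return np.abs([spearmanr(samples[:, i], y)[0]
                       for i in range(samples.shape[1])])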
An Integrated Magnetic Circuit Model and Finite Element Model Approach to Magnetic Bearing Design
NASA Technical Reports Server (NTRS)
Provenza, Andrew J.; Kenny, Andrew; Palazzolo, Alan B.
2003-01-01
A code for designing magnetic bearings is described. The code generates curves from magnetic circuit equations relating important bearing performance parameters. Bearing parameters selected from the curves by a designer to meet the requirements of a particular application are input directly by the code into a three-dimensional finite element analysis preprocessor. This means that a three-dimensional computer model of the bearing being developed is immediately available for viewing. The finite element model solution can be used to show areas of magnetic saturation and make more accurate predictions of the bearing load capacity, current stiffness, position stiffness, and inductance than the magnetic circuit equations did at the start of the design process. In summary, the code combines one-dimensional and three-dimensional modeling methods for designing magnetic bearings.
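The kind of magnetic circuit relation such a design code evaluates can be sketched as follows (Python; a simple unsaturated actuator with two series air gaps is assumed, not the NASA code itself): Ampère's law gives the gap flux density, and the Maxwell stress relation F = B²A/(2μ₀) gives the pole force.

    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

    def gap_flux_density(n_turns, current, gap):
        # B for two air gaps in series, neglecting core reluctance
        # (valid while the iron is unsaturated).
        return MU0 * n_turns * current / (2.0 * gap)

    def pole_force(b, pole_area):
        # Attractive force across one pole face: F = B^2 A / (2 mu0).
        return b**2 * pole_area / (2.0 * MU0)

    # Example curve: force versus coil current at a 0.5 mm gap.
    currents = np.linspace(0.0, 5.0, 6)
    b = gap_flux_density(n_turns=200, current=currents, gap=0.5e-3)
    print(pole_force(b, pole_area=4e-4))  # N per pole face

The finite element step then replaces these lumped relations where saturation makes them inaccurate.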
High pressure research using muons at the Paul Scherrer Institute
NASA Astrophysics Data System (ADS)
Khasanov, R.; Guguchia, Z.; Maisuradze, A.; Andreica, D.; Elender, M.; Raselli, A.; Shermadini, Z.; Goko, T.; Knecht, F.; Morenzoni, E.; Amato, A.
2016-04-01
Pressure, together with temperature and magnetic field, is an important thermodynamic parameter in physics. Investigating the response of a compound or material to pressure makes it possible to elucidate ground states, investigate their interplay and interactions, and determine microscopic parameters. Pressure tuning is used to establish phase diagrams, study phase transitions and identify critical points. Muon spin rotation/relaxation (μSR) is now a standard technique making increasingly significant contributions in condensed matter physics, materials science research and other fields. In this review, we discuss specific requirements and challenges of performing μSR experiments under pressure, introduce the high pressure muon facility at the Paul Scherrer Institute (PSI, Switzerland) and present selected results obtained by combining the sensitivity of the μSR technique with pressure.
Technology needs of advanced Earth observation spacecraft
NASA Technical Reports Server (NTRS)
Herbert, J. J.; Postuchow, J. R.; Schartel, W. A.
1984-01-01
Remote sensing missions were synthesized which could contribute significantly to the understanding of global environmental parameters. Instruments capable of sensing important land and sea parameters are combined with a large antenna designed to passively quantify surface emitted radiation at several wavelengths. A conceptual design for this large deployable antenna was developed. All subsystems required to make the antenna an autonomous spacecraft were conceptually designed. The entire package, including necessary orbit transfer propulsion, is folded to package within the Space Transportation System (STS) cargo bay. After separation, the antenna, its integral feed mast, radiometer receivers, power system, and other instruments are automatically deployed and transferred to the operational orbit. The design resulted in an antenna with a major antenna dimension of 120 meters, weighing 7650 kilograms, and operating at an altitude of 700 kilometers.
Key Parameters for Operator Diagnosis of BWR Plant Condition during a Severe Accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clayton, Dwight A.; Poore, III, Willis P.
2015-01-01
The objective of this research is to examine the key information needed from nuclear power plant instrumentation to guide severe accident management and mitigation for boiling water reactor (BWR) designs (specifically, a BWR/4-Mark I), estimate environmental conditions that the instrumentation will experience during a severe accident, and identify potential gaps in existing instrumentation that may require further research and development. This report notes the key parameters that instrumentation needs to measure to help operators respond to severe accidents. A follow-up report will assess severe accident environmental conditions as estimated by severe accident simulation model analysis for a specific US BWR/4-Mark I plant for those instrumentation systems considered most important for accident management purposes.
Functional relationship-based alarm processing
Corsberg, D.R.
1987-04-13
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). 11 figs.
NASA Astrophysics Data System (ADS)
Syme, A. M.; McQuarrie, S. A.; Middleton, J. W.; Fallone, B. G.
2003-05-01
A simple model has been developed to investigate the dosimetry of micrometastases in the peritoneal cavity during intraperitoneal targeted liposomal radioimmunotherapy. The model is applied to free-floating tumours with radii between 0.005 cm and 0.1 cm. Tumour dose is assumed to come from two sources: free liposomes in solution in the peritoneal cavity and liposomes bound to the surface of the micrometastases. It is assumed that liposomes do not penetrate beyond the surface of the tumours and that the total amount of surface antigen does not change over the course of treatment. Integrated tumour doses are expressed as a function of biological parameters that describe the rates at which liposomes bind to and unbind from the tumour surface, the rate at which liposomes escape from the peritoneal cavity and the tumour surface antigen density. Integrated doses are translated into time-dependent tumour control probabilities (TCPs). The results of the work are illustrated in the context of a therapy in which liposomes labelled with Re-188 are targeted at ovarian cancer cells that express the surface antigen CA-125. The time required to produce a TCP of 95% is used to investigate the importance of the various parameters. The relative contributions of surface-bound radioactivity and unbound radioactivity are used to assess the conditions required for a targeted approach to provide an improvement over a non-targeted approach during intraperitoneal radiation therapy. Using Re-188 as the radionuclide, the model suggests that, for microscopic tumours, the relative importance of the surface-bound radioactivity increases with tumour size. This is evidenced by the requirement for larger antigen densities on smaller tumours to effect an improvement in the time required to produce a TCP of 95%: for the smallest tumours considered, the unbound radioactivity is often capable of exerting a tumouricidal effect before the targeting agent has time to accumulate significantly on the tumour surface.
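The time-dependent TCP in such a model typically rests on the Poisson formula; a minimal sketch (Python; clonogen number, radiosensitivity and the dose history are illustrative placeholders, not the paper's dosimetry) is:

    import numpy as np

    def tcp(dose, n_clonogens, alpha):
        # Poisson TCP: probability that no clonogen survives a
        # cumulative dose D (Gy): TCP = exp(-N * exp(-alpha * D)).
        return np.exp(-n_clonogens * np.exp(-alpha * dose))

    def time_to_tcp95(cumulative_dose, times, n_clonogens, alpha=0.3):
        # First time at which the integrated dose yields TCP >= 0.95;
        # cumulative_dose maps a time (h) to the dose delivered so far.
        for t in times:
            if tcp(cumulative_dose(t), n_clonogens, alpha) >= 0.95:
                return t
        return None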
Why some plant species are rare.
Wamelink, G W Wieger; Goedhart, Paul W; Frissel, Joep Y
2014-01-01
Biodiversity, including plant species diversity, is threatened worldwide as a result of anthropogenic pressures such as an increase of pollutants and climate change. Rare species in particular are on the verge of becoming extinct. It is still unclear as to why some plant species are rare and others are not. Are they rare due to: intrinsic reasons, dispersal capacity, the effects of management or abiotic circumstances? Habitat preference of rare plant species may play an important role in determining why some species are rare. Based on an extensive data set of soil parameters we investigated if rarity is due to a narrow habitat preference for abiotic soil parameters. For 23 different abiotic soil parameters, of which the most influential were groundwater-table, soil-pH and nutrient-contents, we estimated species responses for common and rare species. Based on the responses per species we calculated the range of occurrence, the range between the 5 and 95 percentile of the response curve giving the habitat preference. Subsequently, we calculated the average response range for common and rare species. In addition, we designed a new graphic in order to provide a better means for presentation of the results. The habitat preferences of rare species for abiotic soil conditions are significantly narrower than for common species. Twenty of the twenty-three abiotic parameters showed on average significantly narrower habitat preferences for rare species than for common species; none of the abiotic parameters showed on average a narrower habitat preference for common species. The results have major implications for the conservation of rare plant species; accordingly management and nature development should be focussed on the maintenance and creation of a broad range of environmental conditions, so that the requirements of rare species are met. The conservation of (abiotic) gradients within ecosystems is particularly important for preserving rare species.
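The 5-95 percentile occurrence range can be computed directly from a fitted response curve; a small sketch (Python; the Gaussian responses are made-up examples, not the paper's fitted curves) shows how a narrow response translates into a narrow habitat preference.

    import numpy as np

    def occurrence_range(x, response, lo=0.05, hi=0.95):
        # Parameter range between the 5th and 95th percentile of the
        # area under a species response curve.
        cdf = np.cumsum(response) / np.sum(response)
        return np.interp(lo, cdf, x), np.interp(hi, cdf, x)

    ph = np.linspace(3.0, 9.0, 601)
    rare = np.exp(-0.5 * ((ph - 5.5) / 0.3) ** 2)    # narrow response
    common = np.exp(-0.5 * ((ph - 5.5) / 1.2) ** 2)  # broad response
    print(occurrence_range(ph, rare), occurrence_range(ph, common))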
Optimal structure and parameter learning of Ising models
Lokhov, Andrey; Vuffray, Marc Denis; Misra, Sidhant; ...
2018-03-16
Reconstruction of the structure and parameters of an Ising model from binary samples is a problem of practical importance in a variety of disciplines, ranging from statistical physics and computational biology to image processing and machine learning. The focus of the research community has shifted toward developing universal reconstruction algorithms that are both computationally efficient and require the minimal amount of expensive data. Here, we introduce a new method, interaction screening, which accurately estimates model parameters using local optimization problems. The algorithm provably achieves perfect graph structure recovery with an information-theoretically optimal number of samples, notably in the low-temperature regime, which is known to be the hardest for learning. The efficacy of interaction screening is assessed through extensive numerical tests on synthetic Ising models of various topologies with different types of interactions, as well as on real data produced by a D-Wave quantum computer. Finally, this study shows that the interaction screening method is an exact, tractable, and optimal technique that universally solves the inverse Ising problem.
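The core of the method is a convex, per-node objective; the sketch below (Python; the regularization strength and solver choice are illustrative, and a smooth solver is used even though the l1 term is non-smooth) minimizes the interaction screening objective for one spin over samples in {-1, +1}.

    import numpy as np
    from scipy.optimize import minimize

    def iso(J, u, samples, lam):
        # Interaction screening objective for node u: empirical mean
        # of exp(-s_u * sum_j J_j s_j) plus an l1 sparsity penalty.
        others = np.delete(samples, u, axis=1)
        return (np.mean(np.exp(-samples[:, u] * (others @ J)))
                + lam * np.sum(np.abs(J)))

    def screen_node(u, samples, lam=0.01):
        # Estimate the couplings of node u; repeating this for every
        # node reconstructs the full graph.
        p = samples.shape[1] - 1
        res = minimize(iso, np.zeros(p), args=(u, samples, lam),
                       method="L-BFGS-B")
        return res.x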
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Surajit; Ladpli, Purim; Chang, Fu-Kuo
Accurate interpretation of in-situ piezoelectric sensor signals is a challenging task. This article presents the development of a numerical compensation model based on physical insight to address the influence of structural loads on piezo-sensor signals. The model requires knowledge of the in-situ strain and temperature distribution in a structure while acquiring sensor signals. The parameters of the numerical model are obtained from experiments on a flat aluminum plate under uniaxial tensile loading. It is shown that the model parameters obtained experimentally can be used for different structures and sensor layouts. Furthermore, the combined effects of load and temperature on the piezo-sensor response are also investigated, and it is observed that both of these factors have a coupled effect on the sensor signals. It is proposed to obtain compensation model parameters under a range of operating temperatures to address this coupling effect. An important outcome of this study is a new load monitoring concept using in-situ piezoelectric sensor signals to track changes in the load paths in a structure.
The effect of different calculation methods of flywheel parameters on the Wingate Anaerobic Test.
Coleman, S G; Hale, T
1998-08-01
Researchers compared different methods of calculating the kinetic parameters of friction-braked cycle ergometers, and the subsequent effects on calculating power outputs in the Wingate Anaerobic Test (WAnT). Three methods of determining flywheel moment of inertia and frictional torque were investigated, requiring "run-down" tests and segmental geometry. The parameters were used to calculate corrected power outputs from 10 males in a 30-s WAnT against a load related to body mass (0.075 kg·kg⁻¹). Wingate Indices of maximum (5 s) power, work, and fatigue index were also compared. Significant differences were found between uncorrected and corrected power outputs and between correction methods (p < .05). The same finding was evident for all Wingate Indices (p < .05). The results suggest that WAnT power outputs must be corrected to give true values and that choosing an appropriate correction calculation is important. Determining flywheel moment of inertia and frictional torque using unloaded run-down tests is recommended.
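The correction at issue adds the flywheel's acceleration torque to the friction load; a brief sketch (Python; inertia, friction torque and the cadence trace are placeholders to be supplied from the run-down tests) is:

    import numpy as np

    def corrected_power(omega, t, inertia, friction_torque):
        # Power against a friction-braked flywheel, corrected for
        # flywheel acceleration: P = (T_f + I * domega/dt) * omega.
        domega = np.gradient(omega, t)
        return (friction_torque + inertia * domega) * omega

    def peak_5s_power(power, t):
        # Highest 5 s moving-average power, as in the Wingate indices.
        dt = t[1] - t[0]
        w = max(1, int(round(5.0 / dt)))
        return np.convolve(power, np.ones(w) / w, mode="valid").max()

Uncorrected power drops the inertia term, which understates power while the flywheel accelerates and overstates it while it decelerates.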
Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M
2009-09-01
Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires the availability of a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class did not seem to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
A canonical correlation neural network for multicollinearity and functional data.
Gou, Zhenkun; Fyfe, Colin
2004-03-01
We review a recent neural implementation of Canonical Correlation Analysis and show, using ideas suggested by Ridge Regression, how to make the algorithm robust. The network is shown to operate on data sets which exhibit multicollinearity. We develop a second model which performs well not only on multicollinear data but also on general data sets. This model allows us to vary a single parameter so that the network is capable of performing anything from Partial Least Squares regression (at one extreme) to Canonical Correlation Analysis (at the other) and every intermediate operation between the two. On multicollinear data, the parameter setting is shown to be important, but on more general data no particular parameter setting is required. Finally, we develop a second penalty term which acts on such data as a smoother, in that the resulting weight vectors are much smoother and more interpretable than the weights without the robustification term. We illustrate our algorithms on both artificial and real data.
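The single-parameter interpolation described here has a standard linear-algebra analogue: shrink the input covariances toward the identity before solving the canonical correlation eigenproblem. The sketch below (Python; a direct eigen-solver stands in for the paper's neural implementation) recovers CCA at tau = 0 and a PLS-like maximal-covariance direction at tau = 1.

    import numpy as np
    from scipy.linalg import eigh

    def cca_to_pls(X, Y, tau):
        # First canonical pair with ridge-style regularization:
        # Cxx and Cyy are shrunk toward the identity by tau.
        X = X - X.mean(0)
        Y = Y - Y.mean(0)
        n = len(X)
        Cxx, Cyy, Cxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
        Rx = (1 - tau) * Cxx + tau * np.eye(Cxx.shape[0])
        Ry = (1 - tau) * Cyy + tau * np.eye(Cyy.shape[0])
        # Generalized eigenproblem: Cxy Ry^-1 Cyx w = rho^2 Rx w.
        M = Cxy @ np.linalg.solve(Ry, Cxy.T)
        vals, vecs = eigh(M, Rx)
        w = vecs[:, -1]                      # leading x-side weights
        v = np.linalg.solve(Ry, Cxy.T @ w)   # paired y-side weights
        return w, v / np.linalg.norm(v)

The shrinkage also handles multicollinearity, since Rx stays well conditioned even when Cxx is singular.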
NASA Astrophysics Data System (ADS)
Assis, Anu; Shahul Hameed T., A.; Predeep, P.
2017-06-01
Mobility and current handling capability of an Organic Field Effect Transistor (OFET) are vitally important electrical performance parameters, in which the material parameters and the thicknesses of the different layers play a significant role. In this paper, we report the simulation of an OFET using a multiphysics tool, where the active layer is pentacene and Poly Methyl Methacrylate (PMMA) forms the dielectric. Electrical characteristics of the OFET are simulated while varying the thickness of the dielectric layer from 600 nm to 400 nm, and the drain current, transconductance and mobility are analyzed. The study finds that, even though capacitance increases with reduction in dielectric layer thickness, the increase in transconductance is reflected even more strongly in the mobility, which in turn can be attributed to variations in the transverse electric field. A layer thickness below 300 nm may result in gate leakage current, which points to the need to optimize the thicknesses of the different layers for better performance.
Design of satellite flexibility experiments
NASA Technical Reports Server (NTRS)
Kaplan, M. H.; Hillard, S. E.
1977-01-01
A preliminary study has been completed to begin development of a flight experiment to measure spacecraft control/flexible structure interaction. The work reported consists of two phases: identification of appropriate structural parameters which can be associated with flexibility phenomena, and suggestions for the development of an experiment for a satellite configuration typical of near-future vehicles which are sensitive to such effects. Recommendations are made with respect to the type of data to be collected and the instrumentation associated with these data. The approach consists of developing the equations of motion for a vehicle possessing a flexible solar array, then linearizing about some nominal motion of the craft. A set of solutions is assumed for array deflection using a continuous normal mode method, and important parameters are exposed. In-flight and ground-based measurements are distinguished. Interrelationships between these parameters, measurement techniques, and input requirements are discussed which assure minimization of special vehicle maneuvers and optimization of data to be obtained during the normal flight sequence.
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2016-04-01
An important source of uncertainty, which then causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. There are many physical parameters in numerical models in the atmospheric and oceanic sciences, and it would cost a great deal to reduce uncertainties in all physical parameters. Therefore, finding a subset of these parameters, which are relatively more sensitive and important parameters, and reducing the errors in the physical parameters in this subset would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of those relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach. The results imply that nonlinear interactions among parameters play a key role in the uncertainty of numerical simulations in arid and semi-arid regions of China compared to those in northern, northeastern and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
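In essence, CNOP-P asks for the parameter perturbation within a prescribed ball that maximizes the simulation cost; a minimal sketch (Python; the cost function and norm bound are placeholders, not the LPJ setup) is below. Parameters with the largest components in the optimal perturbation are natural candidates for the sensitive-and-important subset.

    import numpy as np
    from scipy.optimize import minimize

    def cnop_p(cost, p0, delta):
        # Maximize cost(p0 + dp) subject to ||dp|| <= delta by
        # minimizing the negative cost under an inequality constraint.
        n = len(p0)
        res = minimize(lambda dp: -cost(p0 + dp),
                       x0=np.full(n, 0.1 * delta / np.sqrt(n)),
                       method="SLSQP",
                       constraints=[{"type": "ineq",
                                     "fun": lambda dp:
                                         delta - np.linalg.norm(dp)}])
        return res.x, -res.fun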
NASA Astrophysics Data System (ADS)
Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong
2018-06-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. Using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to soil moisture content status rather than to the moisture function parameter. Nitrification becomes more important at low moisture content and low temperature. However, how the importance of nitrification activity changes with temperature depends strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution by reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not temperature, plays the predominant role in controlling nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers are faced with multiple ways of establishing nitrogen models.
Simulation of the Press Hardening Process and Prediction of the Final Mechanical Material Properties
NASA Astrophysics Data System (ADS)
Hochholdinger, Bernd; Hora, Pavel; Grass, Hannes; Lipp, Arnulf
2011-08-01
Press hardening is a well-established production process in the automotive industry today. The current trend in this process technology points towards the manufacturing of parts with tailored properties. Since knowledge of the mechanical properties of a structural part after forming and quenching is essential for evaluating, for example, the crash performance, a virtual assessment of the production process that is as accurate as possible is more necessary than ever. To achieve this, the definition of reliable input parameters and boundary conditions for the thermo-mechanically coupled simulation of the process steps is required. One of the most important input parameters, especially regarding the final properties of the quenched material, is the contact heat transfer coefficient (CHTC). The CHTC depends on the effective pressure or the gap distance between part and tool, and is determined at different contact pressures and gap distances through inverse parameter identification. Furthermore, a simulation strategy for the subsequent steps of the press hardening process as well as adequate modeling approaches for part and tools are discussed. For the prediction of the yield curves of the material after press hardening, a phenomenological model is presented. This model requires knowledge of the microstructure within the part. By post-processing the nodal temperature history with a CCT diagram, the quantitative distribution of the phase fractions martensite, bainite, ferrite and pearlite after press hardening is determined. The model itself is based on a Hockett-Sherby approach, with the Hockett-Sherby parameters defined as functions of the phase fractions and a characteristic cooling rate.
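A compact sketch of the phase-fraction-weighted Hockett-Sherby idea (Python; the parameter mixing rule and all values are illustrative assumptions, not the calibrated model) is:

    import numpy as np

    def hockett_sherby(eps, sig0, sig_sat, n_exp, m_exp):
        # Hockett-Sherby flow stress:
        # sigma = sig_sat - (sig_sat - sig0) * exp(-n * eps**m).
        return sig_sat - (sig_sat - sig0) * np.exp(-n_exp * eps**m_exp)

    def mixed_flow_curve(eps, fractions, phase_params):
        # Each Hockett-Sherby parameter is taken as a phase-fraction
        # weighted mixture over martensite, bainite, ferrite, pearlite.
        keys = ("sig0", "sig_sat", "n_exp", "m_exp")
        mixed = {k: sum(f * phase_params[ph][k]
                        for ph, f in fractions.items()) for k in keys}
        return hockett_sherby(eps, **mixed)

In the paper the parameters additionally depend on a characteristic cooling rate; that dependence would enter through phase_params.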
Brain blood vessel segmentation using line-shaped profiles
NASA Astrophysics Data System (ADS)
Babin, Danilo; Pižurica, Aleksandra; De Vylder, Jonas; Vansteenkiste, Ewout; Philips, Wilfried
2013-11-01
Segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, especially for embolization of cerebral aneurysms and arteriovenous malformations (AVMs). In order to perform embolization of an AVM, structural and geometric information about the blood vessels from 3D images is of utmost importance. For this reason, in-depth segmentation of cerebral blood vessels is usually done as a fusion of different segmentation techniques, often requiring extensive user interaction. In this paper we introduce the idea of line-shaped profiling with an application to brain blood vessel and AVM segmentation, efficient both in terms of resolving details and in terms of computation time. Our method takes into account both the local surroundings and the wider neighbourhood of the processed pixel, which makes it efficient for segmenting large blood vessel tree structures as well as the fine structures of AVMs. Another advantage of our method is that it requires the selection of only one parameter to perform segmentation, yielding very little user interaction.
NASA Astrophysics Data System (ADS)
Yossifon, Gilad; Park, Sinwook
2016-11-01
Previously, it has been shown that for a prescribed system, the diffusion length may be affected by any number of mechanisms including natural and forced convection, electroosmotic flow of the second kind and electro-convective instability. In all of the above mentioned cases the length of the diffusion layer is indirectly prescribed by the complicated competition between several mechanisms which are primarily dictated by the various system parameters and applied voltage. In contrast, we suggest that by embedding electrodes/heaters within a microchannel interfacing a permselective medium, the diffusion layer length may be controlled regardless of the dominating overlimiting current mechanism and system parameters. As well as demonstrating that the simple presence of electrodes can enhance mixing via induced-charge electrokinetic effects, we also offer a means of externally activating embedded electrodes and heaters to maintain external, dynamic control of the diffusion length. Such control is particularly important in applications requiring intense ion transport, such as electrodialysis. At the same time, we will also investigate means of suppressing these mechanisms which is of fundamental importance for sensing applications.
Modern Perspectives on Numerical Modeling of Cardiac Pacemaker Cell
Maltsev, Victor A.; Yaniv, Yael; Maltsev, Anna V.; Stern, Michael D.; Lakatta, Edward G.
2015-01-01
Cardiac pacemaking is a complex phenomenon that is still not completely understood. Together with experimental studies, numerical modeling has been traditionally used to acquire mechanistic insights in this research area. This review summarizes the present state of numerical modeling of the cardiac pacemaker, including approaches to resolve present paradoxes and controversies. Specifically we discuss the requirement for realistic modeling to consider symmetrical importance of both intracellular and cell membrane processes (within a recent “coupled-clock” theory). Promising future developments of the complex pacemaker system models include the introduction of local calcium control, mitochondria function, and biochemical regulation of protein phosphorylation and cAMP production. Modern numerical and theoretical methods such as multi-parameter sensitivity analyses within extended populations of models and bifurcation analyses are also important for the definition of the most realistic parameters that describe a robust, yet simultaneously flexible operation of the coupled-clock pacemaker cell system. The systems approach to exploring cardiac pacemaker function will guide development of new therapies, such as biological pacemakers for treating insufficient cardiac pacemaker function that becomes especially prevalent with advancing age. PMID:24748434
A Microstructurally Inspired Damage Model for Early Venous Thrombus
Rausch, Manuel K.; Humphrey, Jay D.
2015-01-01
Accumulative damage may be an important contributor to many cases of thrombotic disease progression. Thus, a complete understanding of the pathological role of thrombus requires an understanding of its mechanics and in particular mechanical consequences of damage. In the current study, we introduce a novel microstructurally inspired constitutive model for thrombus that considers a non-uniform distribution of microstructural fibers at various crimp levels and employs one of the distribution parameters to incorporate stretch-driven damage on the microscopic level. To demonstrate its ability to represent the mechanical behavior of thrombus, including a recently reported Mullins type damage phenomenon, we fit our model to uniaxial tensile test data of early venous thrombus. Our model shows an agreement with these data comparable to previous models for damage in elastomers with the added advantages of a microstructural basis and fewer model parameters. We submit that our novel approach marks another important step toward modeling the evolving mechanics of intraluminal thrombus, specifically its damage, and hope it will aid in the study of physiological and pathological thrombotic events. PMID:26523784
Modeling Physiological Processes That Relate Toxicant Exposure and Bacterial Population Dynamics
Klanjscek, Tin; Nisbet, Roger M.; Priester, John H.; Holden, Patricia A.
2012-01-01
Quantifying effects of toxicant exposure on metabolic processes is crucial to predicting microbial growth patterns in different environments. Mechanistic models, such as those based on Dynamic Energy Budget (DEB) theory, can link physiological processes to microbial growth. Here we expand the DEB framework to include explicit consideration of the role of reactive oxygen species (ROS). Extensions considered are: (i) additional terms in the equation for the “hazard rate” that quantifies mortality risk; (ii) a variable representing environmental degradation; (iii) a mechanistic description of toxic effects linked to increase in ROS production and aging acceleration, and to non-competitive inhibition of transport channels; (iv) a new representation of the “lag time” based on energy required for acclimation. We estimate model parameters using calibrated Pseudomonas aeruginosa optical density growth data for seven levels of cadmium exposure. The model reproduces growth patterns for all treatments with a single common parameter set, and bacterial growth for treatments of up to 150 mg(Cd)/L can be predicted reasonably well using parameters estimated from cadmium treatments of 20 mg(Cd)/L and lower. Our approach is an important step towards connecting levels of biological organization in ecotoxicology. The presented model reveals possible connections between processes that are not obvious from purely empirical considerations, enables validation and hypothesis testing by creating testable predictions, and identifies research required to further develop the theory. PMID:22328915
Vendruscolo, M; Najmanovich, R; Domany, E
2000-02-01
We present a method to derive contact energy parameters from large sets of proteins. The basic requirement on which our method is based is that for each protein in the database the native contact map has lower energy than all its decoy conformations obtained by threading. Only when this condition is satisfied can one use the proposed energy function for fold identification. Such a set of parameters can be found (by perceptron learning) if Mp, the number of proteins in the database, is not too large. Other aspects that influence the existence of such a solution are the exact definition of contact and the value of the critical distance Rc, below which two residues are considered to be in contact. Another important novel feature of our approach is its ability to determine whether an energy function of some suitable proposed form can or cannot be parameterized in a way that satisfies our basic requirement. As a demonstration of this, we determine the region in the (Rc, Mp) plane in which the problem is solvable, i.e., where we can find a set of contact parameters that stabilize simultaneously all the native conformations. We show that for large enough databases the contact approximation to the energy cannot stabilize all the native folds, even against the decoys obtained by gapless threading.
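The perceptron step itself is simple: whenever a decoy's energy is not above the native's, move the parameter vector along the difference of contact-count vectors. A small sketch (Python; contact-count vectors are assumed precomputed, one (native, decoy) pair per training example) is:

    import numpy as np

    def learn_contact_energies(pairs, n_types, sweeps=1000):
        # pairs: iterable of (x_native, x_decoy) contact-count vectors.
        # Seeks w with w.x_native < w.x_decoy for every pair; returns
        # w once all constraints hold, or None if none is found.
        w = np.zeros(n_types)
        for _ in range(sweeps):
            violated = False
            for x_nat, x_dec in pairs:
                d = x_dec - x_nat
                if w @ d <= 0.0:          # native not strictly lower
                    w += d / np.linalg.norm(d)
                    violated = True
            if not violated:
                return w
        return None

Failure to converge for large Mp is exactly the paper's diagnostic that the contact approximation cannot stabilize all native folds.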
Efficient 3D inversions using the Richards equation
NASA Astrophysics Data System (ADS)
Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad
2018-07-01
Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such data sets requires the ability to efficiently solve and optimize the nonlinear time-domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature on Richards equation inversion explicitly calculates the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
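The matrix-free idea can be sketched in a few lines (Python; a directional finite difference stands in for the implicit sensitivity solve described in the paper, and the simulate function is a placeholder): the Jacobian J is only ever touched through products J @ v.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator

    def jacobian_operator(simulate, m, eps=1e-6):
        # Matrix-free Jacobian of simulated data with respect to the
        # model parameters m; J @ v is formed by a directional
        # difference, so the Jacobian is never stored.
        d0 = simulate(m)

        def matvec(v):
            h = eps / max(np.linalg.norm(v), 1e-30)
            return (simulate(m + h * v) - d0) / h

        return LinearOperator((len(d0), len(m)), matvec=matvec), d0

    # A Gauss-Newton step solves (J^T J + beta*I) dm = J^T r with a
    # Krylov method; the transpose products J^T v come from an adjoint
    # solve in the implicit-sensitivity formulation.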
Schoellhamer, D.H.; Ganju, N.K.; Mineart, P.R.; Lionberger, M.A.; Kusuda, T.; Yamanishi, H.; Spearman, J.; Gailani, J. Z.
2008-01-01
Bathymetric change in tidal environments is modulated by watershed sediment yield, hydrodynamic processes, benthic composition, and anthropogenic activities. These multiple forcings combine to complicate simple prediction of bathymetric change; therefore, numerical models are necessary to simulate sediment transport. Errors arise in these simulations due to inaccurate initial conditions and model parameters. We investigated the response of bathymetric change to initial conditions and model parameters with a simplified zero-dimensional cohesive sediment transport model, a two-dimensional hydrodynamic/sediment transport model, and a tidally averaged box model. The zero-dimensional model consists of a well-mixed control volume subjected to a semidiurnal tide, with a cohesive sediment bed. Typical cohesive sediment parameters were utilized for both the bed and suspended sediment. The model was run until equilibrium in terms of bathymetric change was reached, where equilibrium is defined as less than the rate of sea level rise in San Francisco Bay (2.17 mm/year). Using this state as the initial condition, model parameters were perturbed 10% to favor deposition, and the model was resumed. Perturbed parameters included, but were not limited to, maximum tidal current, erosion rate constant, and critical shear stress for erosion. Bathymetric change was most sensitive to maximum tidal current, with a 10% perturbation resulting in an additional 1.4 m of deposition over 10 years. Re-establishing equilibrium in this model required 14 years. The next most sensitive parameter was the critical shear stress for erosion; when increased 10%, an additional 0.56 m of sediment was deposited and 13 years were required to re-establish equilibrium. The two-dimensional hydrodynamic/sediment transport model was calibrated to suspended-sediment concentration, and despite robust solution of hydrodynamic conditions it was unable to accurately hindcast bathymetric change. The tidally averaged box model was calibrated to bathymetric change data and shows rapidly evolving bathymetry in the first 10-20 years, though sediment supply and hydrodynamic forcing did not vary greatly. This initial burst of bathymetric change is believed to be model adjustment to initial conditions, and suggests a spin-up time of greater than 10 years. These three diverse modeling approaches reinforce the sensitivity of cohesive sediment transport models to initial conditions and model parameters, and highlight the importance of appropriate calibration data. Adequate spin-up time of the order of years is required to initialize models, otherwise the solution will contain bathymetric change that is not due to environmental forcings, but rather improper specification of initial conditions and model parameters. Temporally intensive bathymetric change data can assist in determining initial conditions and parameters, provided they are available. Computational effort may be reduced by selectively updating hydrodynamics and bathymetry, thereby allowing time for spin-up periods.
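The zero-dimensional model described above can be written in a few lines; the sketch below (Python; all parameter values are illustrative, not the calibrated San Francisco Bay values) steps a Krone/Partheniades erosion-deposition balance through a semidiurnal tide and returns the cumulative bed change.

    import numpy as np

    def zero_d_tidal_bed(days, m_ero=5e-5, tau_ce=0.2, tau_cd=0.1,
                         ws=1e-4, u_max=0.6, depth=5.0, c0=0.05):
        # Erosion E = M (tau/tau_ce - 1) when tau > tau_ce (Partheniades);
        # deposition D = ws C (1 - tau/tau_cd) when tau < tau_cd (Krone).
        dt = 60.0                                            # s
        t = np.arange(0.0, days * 86400.0, dt)
        u = u_max * np.abs(np.sin(2 * np.pi * t / 44712.0))  # M2 tide
        tau = 1025.0 * 0.0025 * u**2       # quadratic bed stress, Pa
        c, bed, rho_bed = c0, 0.0, 400.0   # kg/m3, m, dry bed density
        for tau_i in tau:
            ero = m_ero * max(tau_i / tau_ce - 1.0, 0.0)
            dep = ws * c * max(1.0 - tau_i / tau_cd, 0.0)
            c += (ero - dep) * dt / depth
            bed += (dep - ero) * dt / rho_bed
        return bed

Perturbing u_max or tau_ce by 10% and re-running illustrates the sensitivity and the multi-year re-equilibration reported above.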
Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning
Kok, Kai Yit; Rajendran, Parvathy
2016-01-01
The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover, and generation number. These tuning parameters must be set, together with a user-defined weightage between path cost and computational cost. However, the optimum settings of these tuning parameters vary according to application. Instead of trial and error, this paper presents an optimization method for tuning the parameters of differential evolution for UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover, and generation number. The developed algorithm enables the user to simply define the desired weightage between path and computational cost, and converges within the minimum number of generations required by the user's requirements. In conclusion, the proposed optimization of tuning parameters in the differential evolution algorithm for UAV path planning expedites convergence and improves the final path and computational cost. PMID:26943630
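One plain way to realize such tuning is an outer search over the DE control parameters, scoring each setting by the user's weightage between solution quality and run time. The sketch below (Python; the Rosenbrock function stands in for a UAV path cost, and random search replaces whatever outer optimizer the paper uses) illustrates the idea.

    import time
    import numpy as np
    from scipy.optimize import differential_evolution, rosen

    def tune_de(weight_path=0.7, trials=20, seed=0):
        # Score each DE setting by a weighted sum of final cost and
        # run time, both min-max normalized over the trials.
        rng = np.random.default_rng(seed)
        settings, costs, times = [], [], []
        for _ in range(trials):
            s = dict(popsize=int(rng.integers(5, 40)),
                     mutation=float(rng.uniform(0.3, 1.5)),
                     recombination=float(rng.uniform(0.1, 0.9)),
                     maxiter=int(rng.integers(50, 400)))
            t0 = time.perf_counter()
            res = differential_evolution(rosen, [(-2.0, 2.0)] * 4,
                                         seed=1, **s)
            times.append(time.perf_counter() - t0)
            costs.append(res.fun)
            settings.append(s)
        c = (np.array(costs) - min(costs)) / (np.ptp(costs) + 1e-12)
        t = (np.array(times) - min(times)) / (np.ptp(times) + 1e-12)
        return settings[int(np.argmin(weight_path * c
                                      + (1 - weight_path) * t))]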
BONNSAI: correlated stellar observables in Bayesian methods
NASA Astrophysics Data System (ADS)
Schneider, F. R. N.; Castro, N.; Fossati, L.; Langer, N.; de Koter, A.
2017-02-01
In an era of large spectroscopic surveys of stars and big data, sophisticated statistical methods become more and more important in order to infer fundamental stellar parameters such as mass and age. Bayesian techniques are powerful because they can match all available observables simultaneously to stellar models while taking prior knowledge properly into account. However, in most cases it is assumed that observables are uncorrelated, which is generally not the case. Here, we include correlations in the Bayesian code Bonnsai by incorporating the covariance matrix in the likelihood function. We derive a parametrisation of the covariance matrix that, in addition to classical uncertainties, only requires the specification of a correlation parameter that describes how observables co-vary. Our correlation parameter depends purely on the method with which observables have been determined and can be analytically derived in some cases. This approach therefore has the advantage that correlations can be accounted for even if information about them is not available in specific cases but is known in general. Because the new likelihood model is a better approximation of the data, the reliability and robustness of the inferred parameters are improved. We find that neglecting correlations biases the most likely values of inferred stellar parameters and affects the precision with which these parameters can be determined. The importance of these biases depends on the strength of the correlations and the uncertainties. For example, we apply our technique to massive OB stars, but emphasise that it is valid for any type of star. For effective temperatures and surface gravities determined from atmosphere modelling, we find that masses can be underestimated on average by 0.5σ and mass uncertainties overestimated by a factor of about 2 when neglecting correlations. At the same time, the age precisions are underestimated over a wide range of stellar parameters. We conclude that accounting for correlations is essential in order to derive reliable stellar parameters including robust uncertainties, and will be vital when entering an era of precision stellar astrophysics thanks to the Gaia satellite.
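The likelihood with a single correlation parameter can be sketched directly (Python; a uniform off-diagonal correlation rho is assumed, as a stand-in for the paper's parametrisation of the covariance matrix):

    import numpy as np

    def correlated_loglike(obs, pred, sigma, rho):
        # Gaussian log-likelihood with Sigma = D R D: D holds the
        # classical uncertainties, R has rho on all off-diagonals.
        obs, pred, sigma = map(np.asarray, (obs, pred, sigma))
        n = len(obs)
        R = np.full((n, n), rho) + (1.0 - rho) * np.eye(n)
        cov = np.outer(sigma, sigma) * R
        r = obs - pred
        sign, logdet = np.linalg.slogdet(cov)
        return -0.5 * (r @ np.linalg.solve(cov, r)
                       + logdet + n * np.log(2.0 * np.pi))

Setting rho = 0 recovers the uncorrelated likelihood, so the size of the bias from neglecting correlations can be probed by comparing the two.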
Ethnopedology and soil quality of bamboo (Bambusa sp.) based agroforestry system.
Arun Jyoti, Nath; Lal, Rattan; Das, Ashesh Kumar
2015-07-15
It is widely recognized that farmers hold important folk knowledge for classifying agricultural soils according to their uses, yet little has been studied for traditional agroforestry systems. This article explores the ethnopedology of a bamboo (Bambusa sp.) based agroforestry system in North East India, and establishes the relationship of a soil quality index (SQI) with bamboo productivity. The study revealed four basic folk soil (mati) types: kalo (black soil), lal (red soil), pathal (stony soil) and balu (sandy soil). Of these, lal mati (red soil) was the most predominant soil type (~40%) in the bamboo-based agroforestry system. Soil physico-chemical parameters were studied to validate the farmers' hierarchical soil classification and to correlate with the productivity of the bamboo stand. The farmers' hierarchical folk soil classification was consistent with the laboratory analysis. Culm production (i.e. a measure of bamboo productivity) was the highest (27 culms clump⁻¹) in kalo mati (black soil) and the lowest (19 culms clump⁻¹) in balu mati (sandy soil). Linear correlation of individual soil quality parameters with bamboo productivity explained 16 to 49% of the variability. A multiple correlation of the best-fitted linear soil quality parameters (soil organic carbon or SOC, water holding capacity or WHC, and total nitrogen) with productivity improved the explanatory power to 53%. An SQI developed from ten relevant soil quality parameters explained 64% of the variation in bamboo productivity and is therefore suggested as the best determinant of bamboo yield. The data indicate that the kalo mati (black soil) is sustainable, or sustainable with high input. The other three folk soil types (red, stony and sandy soil) are also sustainable, but for other land uses. Therefore, ethnopedological studies may move beyond routine laboratory analysis and incorporate the SQI for assessing the sustainability of land uses managed by farmers. Additional research is required to incorporate principal component analysis for improving the SQI and site potential assessment. It is also important to evaluate the minimum data set (MDS) required for SQI and productivity assessment in agroforestry systems.
NASA Astrophysics Data System (ADS)
Li, Bicen; Xu, Pengmei; Hou, Lizhou; Wang, Caiqin
2017-10-01
Taking advantage of high spectral resolution, high sensitivity and wide spectral coverage, spaceborne Fourier transform infrared spectrometers (FTS) play an increasingly important role in atmospheric composition sounding. The combination of solar occultation and the FTS technique improves the sensitivity of the instrument. To achieve both high spectral resolution and high signal to noise ratio (SNR), reasonable allocation and optimization of instrument parameters are the foundation and the difficulty. The solar occultation FTS (SOFTS) is a high spectral resolution (0.03 cm⁻¹) FTS operating from 2.4 to 13.3 μm (750-4100 cm⁻¹), which will determine altitude profile information, typically over 10-100 km, for temperature, pressure, and the volume mixing ratios of several dozen atmospheric compositions. As a key performance figure of SOFTS, the SNR is crucially important to high accuracy retrieval of atmospheric composition, and is required to be no less than 100:1 at the radiance of a 5800 K blackbody. Based on a study of the various parameters and their interactions, and according to interference theory and the operating principle of a time-modulated FTS, a simulation model of the FTS SNR has been built, which considers the satellite orbit, spectral radiometric features of the sun and atmospheric composition, the optical system, the interferometer and its control system, measurement duration, detector sensitivity, noise of the detector and electronic system, and so on. According to SNR test results under illumination by a 1000 K blackbody, the on-orbit SNR performance of SOFTS is estimated, and it can meet the mission requirement.
NASA Astrophysics Data System (ADS)
Liu, Bin
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem to be how to find a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated based on the IS draws to measure the degree of approximation. The bigger the ESS is, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixing components in the mixture proposal. Brute force methods just preset it as a large constant, which leads to an increase in the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor such a number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
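The ESS criterion used to steer the tailoring is one line of algebra; a tiny sketch (Python; the log-weights are assumed to come from evaluating posterior and proposal densities at the IS draws) is:

    import numpy as np

    def effective_sample_size(log_w):
        # ESS = (sum w)^2 / sum w^2, computed from log-weights for
        # numerical stability; equals the number of draws when the
        # proposal matches the posterior exactly.
        w = np.exp(log_w - np.max(log_w))
        return w.sum()**2 / (w**2).sum()

The delete/merge/add loop over mixture components, interleaved with EM updates, is repeated while this quantity keeps improving.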
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barron, Robert W.; McJeon, Haewon C.
2015-05-01
This paper considers the effect of several key parameters of low carbon energy technologies on the cost of abatement. A methodology for determining the minimum level of performance required for a parameter to have a statistically significant impact on CO2 abatement cost is developed and used to evaluate the impact of eight key parameters of low carbon energy supply technologies on the cost of CO2 abatement. The capital cost of nuclear technology is found to have the greatest impact of the parameters studied. The costs of biomass and CCS technologies also have impacts, while their efficiencies have little, if any. Sensitivity analysis of the results with respect to population, GDP, and CO2 emission constraint shows that the minimum performance level and impact of nuclear technologies are consistent across the socioeconomic scenarios studied, while the other technology parameters show different performance under higher-population, lower-GDP scenarios. Solar technology was found to have a small impact, and then only at very low costs. These results indicate that the cost of nuclear is the single most important driver of abatement cost, and that trading efficiency for cost may make biomass and CCS technologies more competitive.
Constraints on Average Radial Anisotropy in the Lower Mantle
NASA Astrophysics Data System (ADS)
Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.
2014-12-01
Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies for spheroidal and toroidal modes for three parameters of average radial anisotropy, density, and P- and S-wave velocities in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. Other parameters (P- and shear-wave velocities) are close to PREM. Our results require that the average mantle is about 150 K colder than commonly assumed adiabats and consists of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.
NASA Astrophysics Data System (ADS)
Khan, Zeeshan; Shah, Rehan Ali; Islam, Saeed; Jan, Bilal; Imran, Muhammad; Tahir, Farisa
2016-10-01
Modern optical fibers require a double-layer coating on the glass fiber to provide protection from signal attenuation and mechanical damage. The most important plastic resins used in wire and optical fiber coatings are polyvinyl chloride (PVC), low- and high-density polyethylene (LDPE/HDPE), nylon and polysulfone. In this paper, double-layer optical fiber coating is performed using a polymer melt satisfying the Phan-Thien-Tanner (PTT) fluid model in a pressure-type die using a wet-on-wet coating process. Under the assumption of fully developed flow of the PTT fluid, the two-layer flow of immiscible liquids is modeled in an annular die, where the fiber is dragged at a higher speed. The equations characterizing the flow and heat transfer phenomena are solved exactly, and the effects of the emerging parameters (Deborah and slip parameters, characteristic velocity, radii ratio and Brinkman numbers) on the axial velocity, flow rate, thickness of the coated fiber, and temperature distribution are reported in graphs. It is shown that an increase in the non-Newtonian parameters increases the velocity in the absence or presence of slip parameters, which coincides with related work. A comparison is made with experimental work by taking λ → 0 (the non-Newtonian parameter).
Design constraints of the LST fine guidance sensor
NASA Technical Reports Server (NTRS)
Wissinger, A. B.
1975-01-01
The LST Fine Guidance Sensor design is shaped by the rate of occurrence of suitable guide stars, the competition for telescope focal plane space with the Science Instruments, and the sensitivity of candidate image motion sensors. The relationship between these parameters is presented, and sensitivity to faint stars is shown to be of prime importance. An interferometric technique of image motion sensing is shown to have improved sensitivity and, therefore, a reduced focal plane area requirement in comparison with other candidate techniques (image-splitting prism and image dissector tube techniques). Another design requirement is speed in acquiring the guide star in order to maximize the time available for science observations. The design constraints are shown parametrically, and modelling results are presented.
Selection of suitable NDT methods for building inspection
NASA Astrophysics Data System (ADS)
Pauzi Ismail, Mohamad
2017-11-01
Construction of modern structures requires good-quality concrete with adequate strength and durability. Several accidents in civil construction have been reported in the media, caused by poor workmanship and a lack of systematic monitoring during construction. In addition, water leakage and cracking in residential houses are commonly reported. Given these facts, monitoring the quality of concrete in structures is becoming an increasingly important subject. This paper describes the major Non-destructive Testing (NDT) methods for evaluating the structural integrity of concrete buildings. Some interesting findings from actual NDT inspections on site are presented. The NDT methods used are explained, compared and discussed, and a minimum set of suitable methods is suggested to cover the parameters required in the inspection.
Hohm, Tim; Demarsy, Emilie; Quan, Clément; Allenbach Petrolati, Laure; Preuten, Tobias; Vernoux, Teva; Bergmann, Sven; Fankhauser, Christian
2014-09-26
Phototropism is a growth response allowing plants to align their photosynthetic organs toward incoming light and thereby to optimize photosynthetic activity. Formation of a lateral gradient of the phytohormone auxin is a key step to trigger asymmetric growth of the shoot leading to phototropic reorientation. To identify important regulators of auxin gradient formation, we developed an auxin flux model that enabled us to test in silico the impact of different morphological and biophysical parameters on gradient formation, including the contribution of the extracellular space (cell wall) or apoplast. Our model indicates that cell size, cell distributions, and apoplast thickness are all important factors affecting gradient formation. Among all tested variables, regulation of apoplastic pH was the most important to enable the formation of a lateral auxin gradient. To test this prediction, we interfered with the activity of plasma membrane H⁺-ATPases that are required to control apoplastic pH. Our results show that H⁺-ATPases are indeed important for the establishment of a lateral auxin gradient and phototropism. Moreover, we show that during phototropism, H⁺-ATPase activity is regulated by the phototropin photoreceptors, providing a mechanism by which light influences apoplastic pH. © 2014 The Authors. Published under the terms of the CC BY 4.0 license.
Design of a multiband near-infrared sky brightness monitor using an InSb detector.
Dong, Shu-Cheng; Wang, Jian; Tang, Qi-Jie; Jiang, Feng-Xin; Chen, Jin-Ting; Zhang, Yi-Hao; Wang, Zhi-Yue; Chen, Jie; Zhang, Hong-Fei; Jiang, Hai-Jiao; Zhu, Qing-Feng; Jiang, Peng; Ji, Tuo
2018-02-01
Infrared sky background level is an important parameter for ground-based infrared astronomy, particularly for a candidate site of an infrared-capable observatory, since a low background level is required for such a site. The Chinese astronomical community is looking for a suitable site for a future 12 m telescope, which is designed to work at both optical and infrared wavelengths. However, none of the proposed sites has been tested for infrared observations. Infrared sky background measurements are also important during the design of infrared observing instruments. To supplement the current site-survey data and to guide the design of future infrared instruments, a multiband near-infrared sky brightness monitor (MNISBM) based on an InSb sensor is designed in this paper. The MNISBM consists of an optical system, mechanical structure and control system, detector and cooler, high-gain readout electronics, and operational software. It has been completed and tested in the laboratory. The results show that the sensitivity of the MNISBM meets the requirements for measuring the near-infrared sky background level of several well-known astronomical infrared observing sites.
Efficient Access Control in Multimedia Social Networks
NASA Astrophysics Data System (ADS)
Sachan, Amit; Emmanuel, Sabu
Multimedia social networks (MMSNs) have provided a convenient way to share multimedia contents such as images, videos, and blogs. Contents shared by a person can be easily accessed by anybody else over the Internet. However, due to various privacy, security, and legal concerns, people often want to selectively share the contents only with their friends, family, colleagues, etc. Access control mechanisms play an important role in this situation: with them, one can decide who can access a shared content and who cannot. But continuously growing content uploads and accesses, fine-grained access control requirements (e.g., different access control parameters for different parts of a picture), and access control requirements specific to multimedia contents can make the time complexity of access control very large. It is therefore important to study an efficient access control mechanism suitable for MMSNs. In this chapter we present an efficient bit-vector-transform-based access control mechanism for MMSNs. The proposed approach is also compatible with other requirements of MMSNs, such as access rights modification, content deletion, etc. Mathematical analysis and experimental results show the effectiveness and efficiency of our proposed approach.
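A minimal sketch of the bit-vector idea (not the chapter's exact transform): if each permission on a content is stored as one integer bitmask over user ids, an access check reduces to a single bitwise AND, and rights modification to setting or clearing one bit. The class and permission names here are hypothetical.

```python
# Sketch: one integer bitmask per permission, one bit per user, so access
# checks and rights changes are O(1) bitwise operations.

class SharedContent:
    def __init__(self):
        self.acl = {}  # permission name -> integer bitmask over user ids

    def grant(self, permission, user_id):
        self.acl[permission] = self.acl.get(permission, 0) | (1 << user_id)

    def revoke(self, permission, user_id):
        self.acl[permission] = self.acl.get(permission, 0) & ~(1 << user_id)

    def can(self, permission, user_id):
        return bool(self.acl.get(permission, 0) & (1 << user_id))

photo = SharedContent()
photo.grant("view", 3)        # user 3 may view the whole picture
photo.grant("view_face", 3)   # fine-grained right for one region of the picture
photo.revoke("view", 3)
print(photo.can("view", 3), photo.can("view_face", 3))  # False True
```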
Local wavelet transform: a cost-efficient custom processor for space image compression
NASA Astrophysics Data System (ADS)
Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier
2002-11-01
Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that yields the same transformed images as the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. These features make the LWT appropriate for space image compression, where high throughput, low memory size, low complexity, low power and push-broom processing are important requirements.
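For illustration, a generic lifting-scheme Haar step (not the LWT itself) shows why a wavelet decorrelation can be computed block by block with only a small buffer, the property that block-based architectures like the LWT exploit:

```python
import numpy as np

def haar_lifting_step(x):
    """One level of the Haar wavelet transform via lifting.

    Operates on a block with a small buffer -- a generic sketch of why
    block-based wavelet hardware needs little memory, not the LWT itself.
    """
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    detail = odd - even            # predict step (high-pass)
    approx = even + detail / 2.0   # update step (low-pass, preserves the mean)
    return approx, detail

signal = np.arange(8, dtype=float)
a, d = haar_lifting_step(signal)
print(a)  # decorrelated approximation
print(d)  # details: constant for this linear ramp input
```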
Agarabi, Cyrus D; Schiel, John E; Lute, Scott C; Chavez, Brittany K; Boyne, Michael T; Brorson, Kurt A; Khan, Mansoora; Read, Erik K
2015-06-01
Consistent high-quality antibody yield is a key goal for cell culture bioprocessing. This endpoint is typically achieved in commercial settings through product and process engineering of bioreactor parameters during development. When the process is complex and not optimized, small changes in composition and control may yield a finished product of less desirable quality. Therefore, changes proposed to currently validated processes usually require justification and are reported to the US FDA for approval. Recently, design-of-experiments-based approaches have been explored to rapidly and efficiently achieve this goal of optimized yield with a better understanding of product and process variables that affect a product's critical quality attributes. Here, we present a laboratory-scale model culture where we apply a Plackett-Burman screening design to parallel cultures to study the main effects of 11 process variables. This exercise allowed us to determine the relative importance of these variables and identify the most important factors to be further optimized in order to control both desirable and undesirable glycan profiles. We found engineering changes relating to culture temperature and nonessential amino acid supplementation significantly impacted glycan profiles associated with fucosylation, β-galactosylation, and sialylation. All of these are important for monoclonal antibody product quality. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
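As an illustration of the screening approach, the sketch below builds the classical 12-run Plackett-Burman design for up to 11 two-level factors and estimates main effects. The design matrix is the textbook cyclic construction, not necessarily the authors' exact run order, and the response here is synthetic.

```python
import numpy as np

# Generator row for the classical 12-run Plackett-Burman design (11 factors).
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

rows = [np.roll(gen, i) for i in range(11)]   # cyclic shifts
rows.append(-np.ones(11, dtype=int))          # final all-minus run
design = np.array(rows, dtype=int)            # 12 runs x 11 factors

def main_effects(design, response):
    # effect_j = mean(response at +1) - mean(response at -1)
    return design.T @ response / (len(response) / 2)

print(design.shape)                                              # (12, 11)
print((design.T @ design == 12 * np.eye(11, dtype=int)).all())   # orthogonal

response = design[:, 0] * 2.0 + 1.0   # fake response driven by factor 1 only
print(main_effects(design, response)[:3])  # [4. 0. 0.]
```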
Study of parameters affecting the conversion in a plug flow reactor for reactions of the type 2A→B
NASA Astrophysics Data System (ADS)
Beltran-Prieto, Juan Carlos; Long, Nguyen Huynh Bach Son
2018-04-01
Modeling of chemical reactors is an important tool to quantify reagent conversion, product yield, and selectivity towards a specific compound, and to describe the behavior of the system. Proposing the differential equations that describe the mass and energy balances is among the most important steps in the modeling process, as these equations play a special role in the design and operation of the reactor. Parameters governing the transfer of heat and mass strongly affect the rate of reaction, and understanding them is important for the selection of the reactor and operating regime. In this paper we study the irreversible gas-phase reaction 2A→B. We model the conversion that can be achieved as a function of the reactor volume and feed temperature. Additionally, we discuss the effect of the activation energy and the heat of reaction on the conversion achieved in the tubular reactor. Furthermore, assuming that dimerization occurs instantaneously on the catalytic surface, we develop equations for the rate of reaction per unit area of three different catalytic surface shapes. This information can be combined with the global rate of conversion in the reactor to improve reagent conversion and product yield.
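A minimal sketch of the corresponding design equation, assuming an isothermal, isobaric PFR with a pure-A feed and second-order kinetics (-r_A = k C_A^2); the numbers are illustrative, not the paper's. For 2A→B the mole-change factor is ε = y_A0(1-2)/2 = -0.5, so C_A = C_A0(1-X)/(1+εX) and dX/dV = -r_A/F_A0.

```python
import numpy as np
from scipy.integrate import solve_ivp

k   = 0.5    # rate constant, m^3/(mol*s)   (illustrative)
CA0 = 40.0   # inlet concentration of A, mol/m^3
v0  = 1e-3   # inlet volumetric flow, m^3/s
yA0 = 1.0    # pure A feed
eps = yA0 * (1 - 2) / 2   # 2 mol A -> 1 mol B gives eps = -0.5

def dXdV(V, X):
    CA = CA0 * (1 - X) / (1 + eps * X)   # gas-phase concentration of A
    rA = k * CA**2                        # -r_A = k*C_A^2
    return rA / (CA0 * v0)                # dX/dV = -r_A / F_A0

sol = solve_ivp(dXdV, [0.0, 0.01], [0.0])
print(f"conversion at V = 0.01 m^3: {sol.y[0, -1]:.3f}")
```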
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
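A toy sketch of the idea, with hypothetical stand-in models and weights: compute a crude first-order variance-based index per model and combine the indices with model-averaging weights. This is a simplification of the paper's method, which works on the full variance decomposition across models and scenarios.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_order_index(x, y, bins=20):
    """Crude first-order index: Var over bins of E[Y | X_i], over Var(Y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

# Two hypothetical alternative models of the same process (stand-ins for the
# paper's four reduction-function models), with model-averaging weights.
models = [lambda p: p[:, 0] + 0.1 * p[:, 1],
          lambda p: p[:, 0] * p[:, 1]]
weights = [0.7, 0.3]

p = rng.uniform(0, 1, size=(20000, 2))
for i in range(2):
    s_avg = sum(w * first_order_index(p[:, i], m(p))
                for w, m in zip(weights, models))
    print(f"model-averaged S_{i+1} = {s_avg:.2f}")
```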
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing environmental impact by replacing multiple different products with a single adaptable one. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods are introduced to model the different product configuration candidates in design and the different product configuration states in operation that satisfy design requirements. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
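A toy sketch of the robust selection step, with hypothetical performance models: each configuration/parameter pair is scored by its mean performance penalized by its standard deviation under sampled uncertainty, so the least-sensitive design can win even if its nominal performance is slightly lower.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance models for two configurations; "noise" is an
# uncertain operating parameter. Config A is nominally better but sensitive.
def performance(config, x, noise):
    if config == "A":
        return -(x - 2.0) ** 2 + 10 + 3.0 * noise   # sensitive to noise
    return -(x - 2.5) ** 2 + 9 + 0.5 * noise        # robust alternative

best = None
for config in ("A", "B"):
    for x in np.linspace(0, 5, 51):            # design-parameter sweep
        noise = rng.normal(0, 1, 5000)          # uncertainty samples
        perf = performance(config, x, noise)
        score = perf.mean() - 3.0 * perf.std()  # penalize sensitivity
        if best is None or score > best[0]:
            best = (score, config, x)

print(f"robust choice: config {best[1]}, x = {best[2]:.2f}, score = {best[0]:.2f}")
```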
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Takashina, Masaaki; Koizumi, Masahiko; Das, Indra J.; Moskvin, Vadim P.
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in their calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with simple systems such as a water phantom only. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influence of the customizable parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the FLUKA results, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This leads to the conclusion that the physical model, particle transport mechanics, and the different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
Dynamic Modelling under Uncertainty: The Case of Trypanosoma brucei Energy Metabolism
Achcar, Fiona; Kerkhoven, Eduard J.; Bakker, Barbara M.; Barrett, Michael P.; Breitling, Rainer
2012-01-01
Kinetic models of metabolism require detailed knowledge of kinetic parameters. However, due to measurement errors or lack of data this knowledge is often uncertain. The model of glycolysis in the parasitic protozoan Trypanosoma brucei is a particularly well analysed example of a quantitative metabolic model, but so far it has been studied with a fixed set of parameters only. Here we evaluate the effect of parameter uncertainty. In order to define probability distributions for each parameter, information about the experimental sources and confidence intervals for all parameters were collected. We created a wiki-based website dedicated to the detailed documentation of this information: the SilicoTryp wiki (http://silicotryp.ibls.gla.ac.uk/wiki/Glycolysis). Using information collected in the wiki, we then assigned probability distributions to all parameters of the model. This allowed us to sample sets of alternative models, accurately representing our degree of uncertainty. Some properties of the model, such as the repartition of the glycolytic flux between the glycerol and pyruvate producing branches, are robust to these uncertainties. However, our analysis also allowed us to identify fragilities of the model leading to the accumulation of 3-phosphoglycerate and/or pyruvate. The analysis of the control coefficients revealed the importance of taking into account the uncertainties about the parameters, as the ranking of the reactions can be greatly affected. This work will now form the basis for a comprehensive Bayesian analysis and extension of the model considering alternative topologies. PMID:22379410
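A minimal sketch of the sampling strategy, with a toy Michaelis-Menten flux standing in for the glycolysis model: assign a distribution to each kinetic parameter, sample alternative parameter sets, and report the spread of a model output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy flux expression; the real model has many coupled rate laws.
def flux(vmax, km, s=1.0):
    return vmax * s / (km + s)

n = 10000
vmax = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)  # uncertain Vmax
km   = rng.lognormal(mean=np.log(0.5),  sigma=0.5, size=n)  # uncertain Km

j = flux(vmax, km)
lo, hi = np.percentile(j, [2.5, 97.5])
print(f"flux median {np.median(j):.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```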
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
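A minimal sketch of such an active-learning calibration loop, with a cheap toy function standing in for the expensive simulator and a random-forest surrogate choosing the next batch (the authors used artificial neural networks; any regressor illustrates the loop):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy stand-in for an expensive simulator; calibration seeks parameters
# whose output best matches observed data (score = misfit, lower is better).
def simulate(theta):
    return (theta[:, 0] - 0.3) ** 2 + (theta[:, 1] - 0.7) ** 2

pool = rng.uniform(0, 1, size=(20000, 2))   # candidate parameter combinations
evaluated = rng.choice(len(pool), 50, replace=False).tolist()
scores = simulate(pool[evaluated])

for _ in range(10):                          # active-learning rounds
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
    surrogate.fit(pool[evaluated], scores)
    pred = surrogate.predict(pool)
    pred[evaluated] = np.inf                 # never re-pick evaluated points
    batch = np.argsort(pred)[:20]            # most promising candidates
    scores = np.concatenate([scores, simulate(pool[batch])])
    evaluated.extend(batch.tolist())

best = pool[evaluated][np.argmin(scores)]
print(f"best match {best} after only {len(evaluated)} of {len(pool)} runs")
```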
Research on Matching Method of Power Supply Parameters for Dual Energy Source Electric Vehicles
NASA Astrophysics Data System (ADS)
Jiang, Q.; Luo, M. J.; Zhang, S. K.; Liao, M. W.
2018-03-01
A new type of power source is proposed, based on the traffic signal matching method for a dual-energy-source power supply composed of batteries and supercapacitors. First, the power characteristics required to meet the dynamic performance of the EV are analyzed, the energy characteristics required to meet the mileage requirements are studied, and the physical boundary characteristics required to meet the physical conditions of the power supply are examined. Secondly, a parameter matching design with the highest energy efficiency is adopted to select the optimal parameter group using the method of matching deviation. Finally, a simulation analysis of the vehicle is carried out in MATLAB/Simulink; the mileage and energy efficiency of the dual energy sources are analyzed for different parameter models, and the rationality of the matching method is verified.
Guidelines for the Procurement of Aerospace Nickel Cadmium Cells
NASA Technical Reports Server (NTRS)
Thierfelder, Helmut
1997-01-01
NASA has been using a Modular Power System containing "standard" nickel cadmium (NiCd) batteries, composed of "standard" NiCd cells. For many years the only manufacturer of the NASA "standard" NiCd cells was General Electric Co. (subsequently Gates Aerospace and now SAFT). This standard cell was successfully used in numerous missions. However, uncontrolled technical changes and industrial restructuring required a new approach: General Electric (now SAFT Aerospace Batteries) underwent management changes, new manufacturers entered the market (Eagle-Picher Industries; ACME Electric Corporation, Aerospace Division; Sanyo Electric Co.), and battery technology advanced. New NASA procurements for aerospace NiCd cells will have specifications unique to the spacecraft and mission requirements. This document provides the user/customer with guidelines for the new approach to procuring highly reliable NiCd cells and batteries and specifying their performance requirements. It includes details of key parameters and their importance. The appendices contain a checklist, detailed calculations, and backup information.
Kinematics and constraints associated with swashplate blade pitch control
NASA Technical Reports Server (NTRS)
Leyland, Jane A.
1993-01-01
An important class of techniques to reduce helicopter vibration is based on using a Higher Harmonic controller to optimally define the Higher Harmonic blade pitch. These techniques typically require solution of a general optimization problem requiring the determination of a control vector which minimizes a performance index where functions of the control vector are subject to inequality constraints. Six possible constraint functions associated with swashplate blade pitch control were identified and defined. These functions constrain: (1) blade pitch Fourier Coefficients expressed in the Rotating System, (2) blade pitch Fourier Coefficients expressed in the Nonrotating System, (3) stroke of the individual actuators expressed in the Nonrotating System, (4) blade pitch expressed as a function of blade azimuth and actuator stroke, (5) time rate-of-change of the aforementioned parameters, and (6) required actuator power. The aforementioned constraints and the associated kinematics of swashplate blade pitch control by means of the strokes of the individual actuators are documented.
Space shuttle launch era spacecraft injection errors and DSN initial acquisition
NASA Technical Reports Server (NTRS)
Khatib, A. R.; Berman, A. L.; Wackley, J. A.
1981-01-01
The initial acquisition of a spacecraft by the Deep Space Network (DSN) is a critical mission event, because the health and trajectory of the spacecraft must be evaluated rapidly in case immediate corrective action is required. Further, the DSN initial acquisition is always complicated by the most extreme tracking rates of the mission, and its characteristics will change considerably in the upcoming space shuttle launch era. This paper addresses how given injection errors at spacecraft separation from the upper-stage launch vehicle (carried into orbit by the space shuttle) impact the DSN initial acquisition, and how this information can be factored into injection accuracy requirements levied on the Space Transportation System (STS). The approach developed begins with the DSN initial acquisition parameters, generates a covariance matrix, and maps this covariance matrix backward to the spacecraft injection, thereby greatly simplifying the task of levying accuracy requirements on the STS by providing such requirements in a format both familiar and convenient to the STS.
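A minimal sketch of the linear covariance mapping, with a hypothetical sensitivity matrix M relating injection errors x to acquisition parameters y = Mx: acquisition-side tolerances expressed as a covariance P_y map back to injection-side requirements P_x = M^{-1} P_y M^{-T}.

```python
import numpy as np

# Assumed 2x2 sensitivity matrix from trajectory partials (hypothetical).
M = np.array([[1.0, 0.2],
              [0.1, 0.8]])
P_y = np.diag([0.5**2, 0.3**2])   # tolerable acquisition-parameter errors

M_inv = np.linalg.inv(M)
P_x = M_inv @ P_y @ M_inv.T       # injection accuracy requirement
print(np.sqrt(np.diag(P_x)))      # 1-sigma injection tolerances
```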
Soares, Kelen Carine Costa; Moraes, Marcelo Vogler; Gelfuso, Guilherme Martins; Gratieri, Taís
2015-11-01
The comparative evaluation required for the registration of generic topical medicines in Brazil is conducted by means of a pharmaceutical equivalence study, which merely assesses the physical/chemical and microbiological parameters of the formulations. At the international level, clinical or pharmacodynamic studies are now being required to prove the efficacy and safety of semisolid topical generic formulations. This work compares the different requirements for the registration of topical formulations across the various regulatory authorities and presents a survey of topical medicines registered in Brazil prior to 2013. The survey revealed that many more copies of these formulations were registered in Brazil than in the USA. This fact, together with the large number of studies in the literature showing the lack of bioequivalence of topical medications, clearly demonstrates the need to realign Brazilian legislation with respect to the technical requirements for the registration of generic and similar medications for dermatological topical application in Brazil.
Underwater hydraulic shock shovel control system
NASA Astrophysics Data System (ADS)
Liu, He-Ping; Luo, A.-Ni; Xiao, Hai-Yan
2008-06-01
The control system determines the effectiveness of an underwater hydraulic shock shovel. This paper begins by analyzing the working principles of these shovels and explains the importance of their control systems. A mathematical model of a new type of control system was built and analyzed according to those principles. Since the initial control system's response time could not fulfill the design requirements, a PID controller was added. System response was still slower than required, so a neural network was added to nonlinearly regulate the proportional, integral and derivative coefficients of the PID controller. After these improvements, the system parameters fulfilled the design requirements. The working performance of electrically controlled parts, such as the rapidly moving high-speed switch valve, is largely determined by the control system. Conventional control methods alone generally cannot satisfy a shovel's requirements, so advanced and conventional control methods were combined to improve the control system, with good results.
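A minimal sketch of the control idea on a toy first-order plant: a PID loop whose gains are adjusted nonlinearly with the error, a crude stand-in for the paper's neural-network regulation of the PID coefficients (all numbers illustrative).

```python
# PID with error-dependent gain scheduling on a toy first-order plant.
dt, setpoint = 0.01, 1.0
y, integral, prev_err = 0.0, 0.0, setpoint

for step in range(1000):
    err = setpoint - y
    # crude nonlinear scheduling: stiffer gains for large errors,
    # standing in for the neural network's adjustment of kp, ki, kd
    kp = 2.0 + 4.0 * abs(err)
    ki = 1.0 + 0.5 * abs(err)
    kd = 0.05
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv
    prev_err = err
    # first-order plant standing in for the hydraulic valve dynamics
    y += dt * (-y + u)

print(f"output after 10 s: {y:.3f} (target {setpoint})")
```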
Leszczuk, Mikołaj; Dudek, Łukasz; Witkowski, Marcin
The VQiPS (Video Quality in Public Safety) Working Group, supported by the U.S. Department of Homeland Security, has been developing a user guide for public safety video applications. According to VQiPS, five parameters have particular importance for the ability to achieve a recognition task: usage time-frame, discrimination level, target size, lighting level, and level of motion. These parameters form what are referred to as Generalized Use Classes (GUCs). The aim of our research was to develop algorithms that automatically assist the classification of input sequences into one of the GUCs; the target size and lighting level parameters were addressed. The experiment described reveals the experts' ambiguity and hesitation during the manual target-size determination process. Nevertheless, the automatic methods developed for target-size classification make it possible to determine GUC parameters with 70% compliance with the end-users' opinion. Lighting levels of an entire sequence can be classified with an efficiency reaching 93%. To make the algorithms available for use, a test application has been developed. It is able to process video files and display classification results, with a very simple user interface requiring only minimal user interaction.
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
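A minimal sketch of stochastic collocation for one Gaussian input, using Gauss-Hermite quadrature to estimate output moments from a handful of deterministic runs; the exponential response is a toy stand-in for the radiation-diffusion solver and shows how nonlinearity magnifies the input coefficient of variation.

```python
import numpy as np

# Toy nonlinear response standing in for the radiation-diffusion model.
def model(c):
    return np.exp(1.5 * c)

mu, sigma = 1.0, 0.2                # uncertain input ~ N(mu, sigma^2)
nodes, weights = np.polynomial.hermite.hermgauss(7)

# E[f(X)] = (1/sqrt(pi)) * sum_i w_i f(mu + sigma*sqrt(2)*x_i)
vals = model(mu + sigma * np.sqrt(2.0) * nodes)
mean = weights @ vals / np.sqrt(np.pi)
second = weights @ vals**2 / np.sqrt(np.pi)
std = np.sqrt(second - mean**2)
print(f"input CV = {sigma/mu:.2f}, output CV = {std/mean:.2f}")
```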
Experimental Assessment of the Hydraulics of a Miniature Axial-Flow Left Ventricular Assist Device
NASA Astrophysics Data System (ADS)
Smith, P. Alex; Cohn, William; Metcalfe, Ralph
2017-11-01
A minimally invasive partial-support left ventricular assist device (LVAD) has been proposed with a flow path from the left atrium to the arterial system to reduce left ventricular stroke work. In LVAD design, peak and average efficiency must be balanced over the operating range to reduce blood trauma. Axial flow pumps have many geometric parameters. Until recently, testing all these parameters was impractical, but modern 3D printing technology enables multi-parameter studies. Following theoretical design, experimental hydraulic evaluation in steady-state conditions examines pressure, flow, pressure-flow gradient, efficiency, torque, and axial force as output parameters. Preliminary results suggest that impeller blades and stator vanes with higher inlet angles than recommended by mean line theory (MLT) produce flatter gradients and broader efficiency curves, increasing compatibility with heart physiology. These blades also produce less axial force, which reduces bearing load. However, they require slightly higher torque, which is more demanding of the motor. MLT is a low-order, empirical model developed on large pumps, and it does not account for the significant viscous losses in small pumps like LVADs. This emphasizes the importance of experimental testing for hydraulic design. Funding: Roderick D MacDonald Research Fund.
Element distinctness revisited
NASA Astrophysics Data System (ADS)
Portugal, Renato
2018-07-01
The element distinctness problem is the problem of determining whether the elements of a list are distinct, that is, if x=(x_1,\ldots ,x_N) is a list with N elements, we ask whether the elements of x are distinct or not. The classical solution requires N queries because it uses sorting to check whether there are equal elements. In the quantum case, it is possible to solve the problem in O(N^{2/3}) queries. There is an extension which asks whether there are k colliding elements, known as the element k-distinctness problem. This work obtains optimal values of two critical parameters of Ambainis' seminal quantum algorithm (SIAM J Comput 37(1):210-239, 2007). The first critical parameter is the number of repetitions of the algorithm's main block, which inverts the phase of the marked elements and calls a subroutine. The second parameter is the number of quantum walk steps interlaced by oracle queries. We show that, when the optimal values of the parameters are used, the algorithm's success probability is 1-O(N^{-1/(k+1)}), quickly approaching 1. The specification of the exact running time and success probability is important in practical applications of this algorithm.
Update of Standard Practices for New Method Validation in Forensic Toxicology.
Wille, Sarah M R; Coucke, Wim; De Baere, Thierry; Peters, Frank T
2017-01-01
International agreement concerning validation guidelines is important for obtaining quality forensic bioanalytical research and routine applications, as it all starts with the reporting of reliable analytical data. Standards for fundamental validation parameters are provided in guidelines such as those from the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), the German-speaking Gesellschaft für Toxikologie und Forensische Chemie (GTFCh) and the Scientific Working Group for Forensic Toxicology (SWGTOX). These validation parameters include selectivity, matrix effects, method limits, calibration, accuracy and stability, as well as other parameters such as carryover, dilution integrity and incurred sample reanalysis. It is, however, not easy for laboratories to implement these guidelines in practice, as the international guidelines remain nonbinding protocols that depend on the applied analytical technique and need to be updated according to the analyst's method requirements and the application type. In this manuscript, the current guidelines and literature concerning bioanalytical validation parameters are reviewed and discussed in a forensic context. In addition, suggestions for the experimental set-up, the pros and cons of statistical approaches, and adequate acceptance criteria for the validation of bioanalytical applications are given. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
40 CFR 62.14454 - How must I monitor the required parameters?
Code of Federal Regulations, 2010 CFR
2010-07-01
40 Protection of Environment 8 (2010-07-01): How must I monitor the required... Before June 20, 1996, Performance Testing and Monitoring Requirements, § 62.14454 How must I monitor the... equipment necessary to monitor the site-specific operating parameters developed pursuant to § 62.14453(b...
40 CFR 62.14454 - How must I monitor the required parameters?
Code of Federal Regulations, 2011 CFR
2011-07-01
40 Protection of Environment 8 (2011-07-01): How must I monitor the required parameters? § 62.14454, Protection of Environment, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), APPROVAL AND PROMULGATION OF STATE PLANS FOR DESIGNATED FACILITIES AND POLLUTANTS, Federal Plan Requirements for Hospital...
NASA Astrophysics Data System (ADS)
Zhu, Qimeng; Chen, Jia; Gou, Guoqing; Chen, Hui; Li, Peng; Gao, W.
2016-10-01
Residual stress measurement and control are highly important for the safety of high-speed train structures and are critical for structure design. The longitudinal critically refracted (LCR) wave technique is the most widely used ultrasonic method for measuring residual stress, but its accuracy depends strongly on the test parameters, namely the flight time at the stress-free condition (t0), the stress coefficient (K), and the initial stress (σ0) of the measured materials. Differences in microstructure between the weld zone, heat-affected zone, and base metal (BM) cause these experimental parameters to diverge. However, most researchers use the BM parameters to determine the residual stress in the other zones and ignore the initial stress (σ0) in the calibration samples. The measured residual stress in the different zones is therefore often highly erroneous and may lead to miscalculation in the safe design of important structures. A serious problem in the ultrasonic estimation of residual stresses is thus the separation of microstructure effects from acoustoelastic effects. In this paper, the effects of initial stress and microstructure on the stress coefficient K and the stress-free flight time t0 are studied, and the residual stress with and without the corresponding corrections is investigated. The results indicate that the residual stresses obtained with correction are more accurate for structure design.
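A minimal sketch of the acoustoelastic evaluation with zone-specific calibration, using the common form σ = (t - t0)/(K t0); the constants below are illustrative, not measured values, and the paper's σ0 correction would enter through the calibration of t0 and K.

```python
# Zone-specific LCR evaluation: sigma = (t - t0) / (K * t0).
zones = {
    #        t0 (us)   K (1/MPa)   measured t (us)   (illustrative values)
    "weld":  (10.000,  1.2e-5,     10.018),
    "HAZ":   (10.005,  1.1e-5,     10.012),
    "base":  (10.010,  1.0e-5,     10.011),
}

for name, (t0, K, t) in zones.items():
    sigma = (t - t0) / (K * t0)   # residual stress in MPa
    print(f"{name:>4}: {sigma:7.1f} MPa")
```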
Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar
2012-01-01
Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of calibrating these over-parameterised models. This requires either expert knowledge or global methods that explore a large parameter space. However, a better structural balance between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation, and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the most important parameters to be ranked and selected for the subsequent calibration step. The aeration submodel proved very important for good NH4 predictions. Finally, the impact of data frequency was explored: lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals, whereas autocorrelation in high-frequency calibration data has the opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
Control of energy sweep and transverse beam motion in induction linacs
NASA Astrophysics Data System (ADS)
Turner, W. C.
1991-05-01
Recent interest in the electron induction accelerator has focussed on its application as a driver for high-power radiation sources: the free electron laser (FEL), relativistic klystron (RK) and cyclotron autoresonance maser (CARM). In the microwave regime, where many successful experiments have been carried out, typical beam parameters are: beam energy 1 to 10 MeV, current 1 to 3 kA and pulse width 50 nsec. Radiation source applications impose conditions on electron beam quality, as characterized by three parameters: energy sweep, transverse beam motion and brightness. These conditions must be maintained for the full pulse duration to assure high-efficiency conversion of beam power to radiation. The microwave FEL that has been analyzed in the greatest detail requires an energy sweep less than ±1%, transverse beam motion less than ±1 mm and brightness of approximately 1 x 10^8 A/(m^2 rad^2). In the visible region the requirements on these parameters become roughly an order of magnitude more stringent. With the ETAII accelerator at LLNL these requirements for energy sweep, transverse beam motion and brightness were achieved. The recent data and the advances that have made the improved beam quality possible are discussed. The most important advances are: understanding of focussing magnetic field errors and improvements in alignment of the magnetic axis, a redesign of the high voltage pulse distribution system between the magnetic compression modulators and the accelerator cells, and exploitation of a beam tuning algorithm for minimizing transverse beam motion. The prospects are briefly described for increasing the pulse repetition frequency to the range of 5 kHz and for a delayed feedback method of regulating beam energy over very long pulse bursts, thus making average-power megawatt-level microwave sources at 140 GHz and above a possibility.
Robust control with structured perturbations
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1988-01-01
Two important problems in the area of control systems design and analysis are discussed. The first is robust stability using the characteristic polynomial, which is treated first in characteristic polynomial coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l^2 stability margin for both monic and non-monic cases. Following this, the method is extended to reveal a much larger stability region. This result is then extended to parameter space, so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed-loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by choosing a better stabilizing controller. The second problem is the lower-order stabilization problem, motivated as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state-space and transfer-function domains, all of these methods produce unnecessarily high-order controllers. In practice, stabilization is only one of many requirements to be satisfied, so if the order of a stabilizing controller is excessively high, one can normally expect an even higher-order controller upon completion of the design, after inclusion of dynamic response requirements, etc. It is therefore reasonable to obtain the lowest possible order stabilizing controller first and then adjust the controller to meet additional requirements. An algorithm for designing a lower-order stabilizing controller is given. The algorithm does not necessarily produce the minimum-order controller; however, it is theoretically logical, and simulation results show that it works in general.
Shuker, Nauras; de Man, Femke M; de Weerd, Annelies E; van Agteren, Madelon; Weimar, Willem; Betjes, Michiel G H; van Gelder, Teun; Hesselink, Dennis A
2016-04-01
The aim of this study was to investigate whether pretransplant tacrolimus (Tac) dose requirements of patients scheduled to undergo living donor kidney transplantation correlate with posttransplantation dose requirements. The predictive value of Tac dose requirements (defined as the ratio of the Tac predose concentration, C0, divided by the total daily Tac dose, D) pretransplantation on this same parameter posttransplantation was assessed retrospectively in a cohort of 57 AB0-incompatible kidney transplant recipients. These patients started immunosuppressive therapy 14 days before transplant surgery. All patients were using a stable dose of glucocorticoids and were at steady-state Tac exposure before transplantation. Tac dose requirements immediately before transplantation (C0/Dbefore) explained 63% of the Tac dose requirements on day 3 after transplantation: r = 0.633 [F (1, 44) = 75.97, P < 0.01]. No other clinical and demographic variables predicted Tac dose requirements early after transplantation. Steady-state Tac dose requirement before transplantation largely predicted posttransplantation Tac dose requirements in AB0-incompatible kidney transplant recipients. The importance of this finding is that the posttransplantation Tac dose can be individualized based on a patient's pretransplantation Tac concentration/dose ratio. Pretransplant Tac phenotyping therefore has the potential to improve transplantation outcomes.
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
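A classic minimal example of the problem: in y = a·b·x only the product a·b is identifiable, so a least-squares fit returns wildly different (a, b) for different optimizer starts even though every fit matches the data equally well.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.01, x.size)   # "true" a*b = 2

def residuals(p):
    a, b = p
    return a * b * x - y

# Different initialisations -> different parameter values, same fit quality.
for start in ([1.0, 1.0], [10.0, 0.1], [0.1, 40.0]):
    fit = least_squares(residuals, start)
    a, b = fit.x
    print(f"start {start} -> a={a:.3f}, b={b:.3f}, a*b={a*b:.3f}")
```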
NASA Astrophysics Data System (ADS)
Annese, E.; Mori, T. J. A.; Schio, P.; Rache Salles, B.; Cezar, J. C.
2018-04-01
The implementation of La0.67Sr0.33MnO3 thin films in multilayered structures in organic and inorganic spintronic devices requires the optimization of their electronic and magnetic properties. In this work we report the structural, morphological, electronic and magnetic characterization of La0.67Sr0.33MnO3 epitaxial thin films on SrTiO3 substrates, grown by pulsed laser deposition under different growth conditions. We show that the laser fluence and the in situ post-annealing conditions are important parameters to control the tetragonality (c/a) of the thin films. The distortion of the structure has a remarkable impact on both surface and bulk magnetism, allowing the material properties to be tuned for different applications.
How Many Is Enough?—Statistical Principles for Lexicostatistics
Zhang, Menghan; Gong, Tao
2016-01-01
Lexicostatistics has been applied in linguistics to inform phylogenetic relations among languages. There are two important yet not well-studied parameters in this approach: the conventional size of vocabulary list to collect potentially true cognates and the minimum matching instances required to confirm a recurrent sound correspondence. Here, we derive two statistical principles from stochastic theorems to quantify these parameters. These principles validate the practice of using the Swadesh 100- and 200-word lists to indicate degree of relatedness between languages, and enable a frequency-based, dynamic threshold to detect recurrent sound correspondences. Using statistical tests, we further evaluate the generality of the Swadesh 100-word list compared to the Swadesh 200-word list and other 100-word lists sampled randomly from the Swadesh 200-word list. All these provide mathematical support for applying lexicostatistics in historical and comparative linguistics. PMID:28018261
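A sketch of the kind of threshold such principles yield (with assumed values for the per-pair chance-matching probability p and significance level α, not the paper's derivation): the minimum number of matching instances k is the smallest count whose binomial tail probability under chance falls below α.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def min_matches(n, p=0.05, alpha=0.01):
    # smallest k such that k or more chance matches are improbable
    for k in range(1, n + 1):
        if binom_tail(n, k, p) < alpha:
            return k
    return None

for n in (100, 200):   # Swadesh 100- and 200-word list sizes
    print(f"n={n}: need at least {min_matches(n)} matching instances")
```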
Impedance Flow Cytometry as a Tool to Analyze Microspore and Pollen Quality.
Heidmann, Iris; Di Berardino, Marco
2017-01-01
Analyzing pollen quality in an efficient and reliable manner is of great importance to the industries involved in seed and fruit production, plant breeding, and plant research. Pollen quality parameters, viability and germination capacity, are analyzed by various staining methods or by in vitro germination assays, respectively. These methods are time-consuming, species-dependent, and require a lab environment. Furthermore, the obtained viability data are often poorly related to in vivo pollen germination and seed set. Here, we describe a quick, label-free method to analyze pollen using microfluidic chips inserted into an impedance flow cytometer (IFC). Using this approach, pollen quality parameters are determined by a single measurement in a species-independent manner. The advantage of this protocol is that pollen viability and germination can be analyzed quickly by a reliable and standardized method.
Assessment of Diastolic Function in Congenital Heart Disease
Panesar, Dilveer Kaur; Burch, Michael
2017-01-01
Diastolic function is an important component of left ventricular (LV) function which is often overlooked. It can cause symptoms of heart failure in patients even in the presence of normal systolic function. The parameters used to assess diastolic function often measure flow and are affected by the loading conditions of the heart. The interpretation of diastolic function in the context of congenital heart disease requires some understanding of the effects of the lesions themselves on these parameters. Individual congenital lesions will be discussed in this paper. Recently, load-independent techniques have led to more accurate measurements of ventricular compliance and remodeling in heart disease. The combination of inflow velocities and tissue Doppler measurements can be used to estimate diastolic function and LV filling pressures. This review focuses on diastolic function and assessment in congenital heart disease. PMID:28261582
Nonlinear acoustics experimental characterization of microstructure evolution in Inconel 617
NASA Astrophysics Data System (ADS)
Yao, Xiaochu; Liu, Yang; Lissenden, Cliff J.
2014-02-01
Inconel 617 is a candidate material for the intermediate heat exchanger in a very high temperature reactor for the next generation nuclear power plant. This application will require the material to withstand fatigue-ratcheting interaction at temperatures up to 950°C. Therefore nondestructive evaluation and structural health monitoring are important capabilities. Acoustic nonlinearity (quantified in terms of a material parameter, the acoustic nonlinearity parameter, β) has been proven to be sensitive to microstructural changes in a material. This research develops a robust experimental procedure to track the evolution of damage precursors in laboratory-tested Inconel 617 specimens using ultrasonic bulk waves. The results from the acoustic nonlinearity tests are compared with stereoscope surface damage results, so that the relationship between acoustic nonlinearity and microstructural evolution can be clearly demonstrated for the specimens tested.
Assaying Mitochondrial Respiration as an Indicator of Cellular Metabolism and Fitness.
Smolina, Natalia; Bruton, Joseph; Kostareva, Anna; Sejersen, Thomas
2017-01-01
Mitochondrial respiration is the most important generator of cellular energy under most circumstances. It is a process of energy conversion of substrates into ATP. The Seahorse equipment allows measurement of the oxygen consumption rate (OCR) in living cells and estimates key parameters of mitochondrial respiration in real-time mode. Through use of mitochondrial inhibitors, four key mitochondrial respiration parameters can be measured: basal, ATP production-linked, maximal, and proton leak-linked OCR. This approach requires application of mitochondrial inhibitors: oligomycin to block ATP synthase; FCCP to make the inner mitochondrial membrane permeable to protons and allow maximum electron flux through the electron transport chain; and rotenone and antimycin A to inhibit complexes I and III, respectively. This chapter describes the protocol of OCR assessment in cultures of primary myotubes obtained upon satellite cell fusion.
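A minimal sketch of how the four parameters can be derived from segment means of an OCR trace, following the injection order described above; the example readings are invented for illustration.

```python
# Minimal sketch of deriving the four respiration parameters from a
# Seahorse-style OCR trace; segment means and the injection order
# (oligomycin -> FCCP -> rotenone/antimycin A) follow the protocol above.
# The example numbers are illustrative, not real measurements.
import numpy as np

def respiration_parameters(ocr_baseline, ocr_oligomycin, ocr_fccp, ocr_rot_aa):
    """Each argument: OCR readings (pmol O2/min) for one injection phase."""
    non_mito = np.mean(ocr_rot_aa)            # residual, non-mitochondrial OCR
    basal = np.mean(ocr_baseline) - non_mito
    atp_linked = np.mean(ocr_baseline) - np.mean(ocr_oligomycin)
    proton_leak = np.mean(ocr_oligomycin) - non_mito
    maximal = np.mean(ocr_fccp) - non_mito
    return {"basal": basal, "ATP-linked": atp_linked,
            "proton leak": proton_leak, "maximal": maximal}

print(respiration_parameters([210, 205, 208], [90, 88, 91],
                             [340, 355, 350], [45, 44, 46]))
```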
Fuzzy based attitude controller for flexible spacecraft with on/off thrusters
NASA Astrophysics Data System (ADS)
Knapp, Roger Glenn
1993-05-01
A fuzzy-based attitude controller is designed for attitude control of a generic spacecraft with on/off thrusters. The controller is comprised of packages of rules dedicated to addressing different objectives (e.g., disturbance rejection, low fuel consumption, avoiding the excitation of flexible appendages, etc.). These rule packages can be inserted or removed depending on the requirements of the particular spacecraft and are parameterized based on vehicle parameters such as inertia or operational parameters such as the maneuvering rate. Individual rule packages can be 'weighted' relative to each other to emphasize the importance of one objective relative to another. Finally, the fuzzy controller and rule packages are demonstrated using the high-fidelity Space Shuttle Interactive On-Orbit Simulator (IOS) while performing typical on-orbit operations and are subsequently compared with the existing shuttle flight control system performance.
Generation of Requirements for Simulant Measurements
NASA Technical Reports Server (NTRS)
Rickman, D. L.; Schrader, C. M.; Edmunson, J. E.
2010-01-01
This TM presents a formal, logical explanation of the parameters selected for the figure of merit (FoM) algorithm. The FoM algorithm is used to evaluate lunar regolith simulant. The objectives, requirements, assumptions, and analysis behind the parameters are provided. A requirement is derived to verify and validate simulant performance versus lunar regolith from NASA's objectives for lunar simulants. This requirement leads to a specification that comparative measurements be taken the same way on the regolith and the simulant. In turn, this leads to a set of nine criteria with which to evaluate comparative measurements. Many of the potential measurements of interest are not defensible under these criteria. For example, many geotechnical properties of interest were not explicitly measured during Apollo and they can only be measured in situ on the Moon. A 2005 workshop identified 32 properties of major interest to users. Virtually all of the properties are tightly constrained, though not predictable, if just four parameters are controlled. Three parameters (composition, size, and shape) are recognized as being definable at the particle level. The fourth parameter (density) is a bulk property. In recent work, a fifth parameter (spectroscopy) has been identified, which will need to be added to future releases of the FoM.
Dudrick Research Symposium 2015-Lean Tissue and Protein in Health and Disease.
Earthman, Carrie P; Wolfe, Robert R; Heymsfield, Steven B
2017-02-01
The 2015 Dudrick Research Symposium "Lean Tissue and Protein in Health and Disease: Key Targets and Assessment Strategies" was held on February 16, 2015, at Clinical Nutrition Week in Long Beach, California. The Dudrick Symposium honors the many pivotal and innovative contributions to the development and advancement of parenteral nutrition made by Dr Stanley J. Dudrick, physician scientist, academic leader, and a founding member of the American Society for Parenteral and Enteral Nutrition. As the 2014 recipient of the Dudrick award, Dr Carrie Earthman chaired the symposium and was the first of 3 speakers, followed by Dr Robert Wolfe and Dr Steven Heymsfield. The symposium addressed the importance of lean tissue to health and response to disease and injury, as well as the many opportunities and challenges in its assessment at the bedside. Lean tissue assessment is beneficial to clinical care in chronic and acute care clinical settings, given the strong relationship between lean tissue and outcomes, including functional status. Currently available bioimpedance techniques, including the use of bioimpedance parameters, for lean tissue and nutrition status assessment were presented. The connection between protein requirements and lean tissue was discussed, highlighting the maintenance of lean tissue as one of the most important primary end points by which protein requirements can be estimated. The various tracer techniques to establish protein requirements were presented, emphasizing the importance of practical considerations in research protocols aimed to establish protein requirements. Ultrasound and other new and emerging technologies that may be used for lean tissue assessment were discussed, and areas for future research were highlighted.
Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects
NASA Technical Reports Server (NTRS)
Green, Robert O.; Moreno, Jose F.
1996-01-01
AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. Not only as a test for future space systems, but also as an operational airborne remote sensing system, the development of algorithms to retrieve information from AVIRIS data is an important step toward these new approaches and capabilities. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over the standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is put on the derivation of the model itself as well as proper inversion techniques, robust to noise in the data and to an inadequate ability of the model to describe natural variability in the data. The problem of non-linear effects is addressed, as it has been demonstrated to be a major source of error in the numerical values retrieved by more simple, linear-based approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in the cases of significant multiple-scattering contributions. However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggests the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options.
Boily, Michaël; Dussault, Catherine; Massicotte, Julie; Guibord, Pascal; Lefebvre, Marc
2015-01-23
To demonstrate bioequivalence (BE) between two prolonged-release (PR) drug formulations, single dose studies under fasting and fed state as well as at least one steady-state study are currently required by the European Medicines Agency (EMA). Recently, however, there have been debates regarding the relevance of steady-state studies. New requirements in single-dose investigations have also been suggested by the EMA to address the absence of a parameter that can adequately assess the equivalence of the shape of the curves. In the draft guideline issued in 2013, new partial area under the curve (pAUC) pharmacokinetic (PK) parameters were introduced to that effect. In light of these potential changes, there is a need for supportive clinical evidence to evaluate the impact of pAUCs on the evaluation of BE between PR formulations. In this retrospective analysis, it was investigated whether the newly defined parameters were associated with an increase in discriminatory ability or a change in variability compared to the conventional PK parameters. Among the single dose studies that met the requirements already in place, 20% were found unable to meet the EMA's new requirements with regard to the pAUC PK parameters. When pairing fasting and fed studies for the same formulation, the failure rate increased to 40%. In some cases, due to the high variability of these parameters, an increase of the sample size would be required to prove BE. In other cases however, the pAUC parameters demonstrated a robust ability to detect differences between the shapes of the curves of PR formulations. The present analysis should help to better understand the impact of the upcoming changes in European regulations on PR formulations and in the design of future BE studies. Copyright © 2014 Elsevier B.V. All rights reserved.
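For illustration, a pAUC can be computed by the trapezoidal rule over a cut-off window; the sketch below assumes an illustrative 0-12 h early-exposure window and invented concentration data, not values from the EMA draft guideline.

```python
# Minimal sketch of a partial AUC (pAUC) computation by the trapezoidal
# rule over a cut-off window; the concentration data and the 0-12 h
# window are illustrative assumptions, not values from the guideline.
import numpy as np

def partial_auc(t, c, t_start, t_end):
    """pAUC of concentration-time curve c(t) between t_start and t_end,
    with linear interpolation at the window boundaries."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    grid = np.union1d(t, [t_start, t_end])
    grid = grid[(grid >= t_start) & (grid <= t_end)]
    return np.trapz(np.interp(grid, t, c), grid)

t = [0, 1, 2, 4, 8, 12, 24]            # h
c = [0, 5.2, 8.1, 7.4, 4.3, 2.1, 0.4]  # ng/mL
print(partial_auc(t, c, 0, 12))        # early-exposure pAUC(0-12h)
```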
Janczyk, Markus; Berryhill, Marian E
2014-04-01
The retro-cue effect (RCE) describes superior working memory performance for validly cued stimulus locations long after encoding has ended. Importantly, this happens with delays beyond the range of iconic memory. In general, the RCE is a stable phenomenon that emerges under varied stimulus configurations and timing parameters. We investigated its susceptibility to dual-task interference to determine the attentional requirements at the time point of cue onset and encoding. In Experiment 1, we compared single- with dual-task conditions. In Experiment 2, we borrowed from the psychological refractory period paradigm and compared conditions with high and low (dual-) task overlap. The secondary task was always binary tone discrimination requiring a manual response. Across both experiments, an RCE was found, but it was diminished in magnitude in the critical dual-task conditions. A previous study did not find evidence that sustained attention is required in the interval between cue offset and test. Our results apparently contradict these findings and point to a critical time period around cue onset and briefly thereafter during which attention is required.
Charge carrier transport properties in thallium bromide crystals used as radiation detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olschner, F.; Toledo-Quinones, M.; Shah, K.S.
1990-06-01
Thallium bromide (TlBr) is an attractive material for use in radiation detectors because of its wide bandgap (2.68 eV) and very high atomic number. Usefulness as a semiconductor detector material, however, also requires good charge carrier transport properties in order to maximize the magnitude of the signal from the detector. The authors report on measurements of the two most important transport parameters: the mobility μ and the mean trapping time τ for electrons and holes in TlBr crystals prepared in the laboratory.
Assembly-line Simulation Program
NASA Technical Reports Server (NTRS)
Chamberlain, Robert G.; Zendejas, Silvino; Malhotra, Shan
1987-01-01
Costs and profits estimated for models based on user inputs. Standard Assembly-line Manufacturing Industry Simulation (SAMIS) program generalized to be useful for production-line manufacturing companies. Provides accurate and reliable means of comparing alternative manufacturing processes. Used to assess impact of changes in such financial parameters as cost of resources and services, inflation rates, interest rates, tax policies, and required rate of return on equity. Most important capability is ability to estimate prices manufacturer would have to receive for its products to recover all costs of production and make specified profit. Written in TURBO PASCAL.
Gomez, Carles; Paradells, Josep
2015-09-10
Urban Automation Networks (UANs) are being deployed worldwide in order to enable Smart City applications. Given the crucial role of UANs, as well as their diversity, it is critically important to assess their properties and trade-offs. This article introduces the requirements and challenges for UANs, characterizes the main current and emerging UAN paradigms, provides guidelines for their design and/or choice, and comparatively examines their performance in terms of a variety of parameters including coverage, power consumption, latency, standardization status and economic cost.
SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Southern Medical University, Guangzhou; Yan, H
Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low dose cone-beam CT (CBCT) which controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using Tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three different regions-of-interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selection. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific: data acquired under a high dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under certain optimally selected parameters, with the advantages more obvious for low dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate optimal parameters are specific to both task type and dose level, providing guidance for selecting parameters in advanced IR algorithms. This work is supported in part by NIH (1R01CA154747-01).
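As a simple illustration of the contrast metric used here, the sketch below computes a CNR from ROI masks; one common CNR definition among several is assumed, and the data are synthetic.

```python
# Minimal sketch of the contrast-to-noise ratio (CNR) used above to
# compare reconstructions at different regularization weights; ROI
# coordinates are illustrative, and one common CNR definition of
# several is assumed.
import numpy as np

def cnr(image, roi, background):
    """CNR = |mean(ROI) - mean(background)| / std(background).
    roi and background are boolean masks over the image."""
    return abs(image[roi].mean() - image[background].mean()) / image[background].std()

img = np.random.default_rng(0).normal(100, 5, (256, 256))
img[100:120, 100:120] += 30                     # synthetic contrast insert
roi = np.zeros_like(img, bool); roi[100:120, 100:120] = True
bg = np.zeros_like(img, bool); bg[10:60, 10:60] = True
print(cnr(img, roi, bg))
```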
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol's global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in the model physics.
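A minimal numpy sketch of a Saltelli-type first-order Sobol' estimate, the kind of variance decomposition used in the screening step; the toy model stands in for TEB, and this basic estimator is an assumption, not the study's improved variant.

```python
# Minimal sketch of a variance-based (Sobol-style) screening step like the
# one described above, written directly in numpy to avoid any library
# assumptions; the toy model below stands in for the TEB scheme.
import numpy as np

def sobol_first_order(model, bounds, n=4096, seed=0):
    """Saltelli-style estimate of first-order indices S1_i."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = rng.uniform(lo, hi, (n, d))
    B = rng.uniform(lo, hi, (n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    s1 = np.empty(d)
    for i in range(d):
        ABi = A.copy(); ABi[:, i] = B[:, i]      # replace column i by B's
        s1[i] = np.mean(fB * (model(ABi) - fA)) / var
    return s1

toy = lambda x: x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2]  # stand-in model
print(sobol_first_order(toy, [(0, 1)] * 3))  # third parameter is near-unimportant
```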
NASA Astrophysics Data System (ADS)
Schneider, Robert; Haberl, Alexander; Rascher, Rolf
2017-06-01
The trend in the optics industry shows that it is increasingly important to be able to manufacture complex lens geometries with high precision. Above a certain required shape accuracy of optical workpieces, processing changes from two-dimensional to point-shaped (sub-aperture) processing, and it is very important that the process is as stable as possible during point-shaped processing. To ensure stability, usually only one process parameter is varied during processing; commonly this parameter is the feed rate, which corresponds to the dwell time. In the research project ArenA-FOi (Application-oriented analysis of resource-saving and energy-efficient design of industrial facilities for the optical industry), a contact-based point-shaped procedure is used, and we examine closely whether changing several process parameters during processing is meaningful. The commercially available ADAPT tool in size R20 from Satisloh AG is used. The behavior of the tool is tested under constant conditions in the MCP 250 CNC by OptoTech GmbH. A series of experiments is designed to determine the TIF (tool influence function) using three variable parameters. Furthermore, the maximum error frequency that can be processed is calculated as an example for one parameter set and serves as an outlook for further investigations. The test results serve as the basis for the later removal simulation, which must be able to deal with a variable TIF. This topic has already been successfully implemented in another research project of the Institute for Precision Manufacturing and High-Frequency Technology (IPH), and thus this algorithm can be reused. The next step is the practical implementation of the collected knowledge. The TIF must be selected on the basis of the measured data; knowing the error frequencies is important for selecting the optimal TIF. It is then possible to compare the simulated results with real measurement data and to carry out a revision. From this point onward, the potential of this approach can be evaluated, and in the ideal case it will be researched further and eventually used in production.
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates to the level on par with the best solution obtained from the population-based methods while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
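A minimal sketch of the hybrid strategy described above, pairing a population-based global search with a local refinement step; the Rosenbrock objective is a stand-in for a gene-circuit fitting loss, not from the paper.

```python
# Minimal sketch of the hybrid strategy discussed above: a fast global
# search followed by local refinement; the Rosenbrock objective is a
# stand-in for a gene-circuit fitting loss, not from the paper.
from scipy.optimize import differential_evolution, minimize

def loss(theta):
    x, y = theta
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

bounds = [(-2, 2), (-2, 2)]
coarse = differential_evolution(loss, bounds, maxiter=50, polish=False, seed=1)
refined = minimize(loss, coarse.x, method="Nelder-Mead")  # local refinement
print(coarse.x, refined.x)
```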
NASA Astrophysics Data System (ADS)
Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun
2015-12-01
Identifying the mechanical property parameters of planetary soil based on terramechanics models using in-situ data obtained from autonomous planetary exploration rovers is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupling nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single-wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have less than 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities
NASA Astrophysics Data System (ADS)
Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir
2016-03-01
Lung boundary image segmentation is important for many tasks, including for example the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date, no systematic studies have been performed regarding the range of parameters that give accurate results. The energy function in the graph-cuts algorithm requires 3 suitable parameter settings: K, a large constant for assigning seed points, c, the similarity coefficient for n-links, and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and values of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 ~ 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter and, furthermore, that amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.
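To make the roles of K, c and λ concrete, the sketch below shows how they might enter the edge weights in a Boykov-Jolly-style construction; the regional term is a placeholder and no max-flow solver is invoked, so this is an assumption-laden illustration rather than the paper's exact energy.

```python
# Minimal sketch of how the three graph-cut parameters enter the edge
# weights; the formulas follow the common Boykov-Jolly construction,
# which is assumed here (the paper's exact energy may differ), and no
# max-flow solver is invoked.
import numpy as np

def edge_weights(img, seeds_fg, seeds_bg, K=5.0, c=1.0, lam=0.5, sigma=10.0):
    # n-links: similarity between neighbouring voxels along one axis
    diff = np.diff(img, axis=0)
    nlink = c * np.exp(-(diff ** 2) / (2 * sigma ** 2))
    # t-links: lam * regional penalty, overridden by K at seed voxels
    t_fg = np.full(img.shape, lam * 1.0)   # placeholder regional term
    t_fg[seeds_fg] = K                     # hard constraint to foreground
    t_fg[seeds_bg] = 0.0
    return nlink, t_fg

img = np.random.default_rng(2).normal(0, 10, (4, 4, 4))
fg = np.zeros(img.shape, bool); fg[0, 0, 0] = True
bg = np.zeros(img.shape, bool); bg[-1, -1, -1] = True
print(edge_weights(img, fg, bg)[0].shape)
```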
Path integrals with higher order actions: Application to realistic chemical systems
NASA Astrophysics Data System (ADS)
Lindoy, Lachlan P.; Huang, Gavin S.; Jordan, Meredith J. T.
2018-02-01
Quantum thermodynamic parameters can be determined using path integral Monte Carlo (PIMC) simulations. These simulations, however, become computationally demanding as the quantum nature of the system increases, although their efficiency can be improved by using higher order approximations to the thermal density matrix, specifically the action. Here we compare the standard, primitive approximation to the action (PA) and three higher order approximations, the Takahashi-Imada action (TIA), the Suzuki-Chin action (SCA) and the Chin action (CA). The resulting PIMC methods are applied to two realistic potential energy surfaces, for H2O and HCN-HNC, both of which are spectroscopically accurate and contain three-body interactions. We further numerically optimise, for each potential, the SCA parameter and the two free parameters in the CA, obtaining more significant improvements in efficiency than seen previously in the literature. For both H2O and HCN-HNC, accounting for all required potential and force evaluations, the optimised CA formalism is approximately twice as efficient as the TIA formalism and approximately an order of magnitude more efficient than the PA. The optimised SCA formalism shows similar efficiency gains to the CA for HCN-HNC but has similar efficiency to the TIA for H2O at low temperature. In H2O and HCN-HNC systems, the optimal value of the a1 CA parameter is approximately 1/3, corresponding to an equal weighting of all force terms in the thermal density matrix, and similar to previous studies, the optimal α parameter in the SCA was approximately 0.31. Importantly, poor choice of parameter significantly degrades the performance of the SCA and CA methods. In particular, for the CA, setting a1 = 0 is not efficient: the reduction in convergence efficiency is not offset by the lower number of force evaluations. We also find that the harmonic approximation to the CA parameters, whilst providing a fourth order approximation to the action, is not optimal for these realistic potentials: numerical optimisation leads to better approximate cancellation of the fifth order terms, with deviation between the harmonic and numerically optimised parameters more marked in the more quantum H2O system. This suggests that numerically optimising the CA or SCA parameters, which can be done at high temperature, will be important in fully realising the efficiency gains of these formalisms for realistic potentials.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2015-08-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
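A minimal sketch of an inexpensive screening loop in this spirit, costing roughly 10 times the number of parameters in model runs; this Morris-style one-at-a-time variant is an assumption, not the authors' exact method, and the toy model and cut-off are illustrative.

```python
# Minimal sketch of an inexpensive screening pass in the spirit of the
# method above (a Morris-style elementary-effects loop costing roughly
# r*(d+1) model runs, about 10x the parameter count for r=10); the toy
# model and threshold are illustrative, not mHM.
import numpy as np

def screen(model, d, r=10, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    effects = np.zeros((r, d))
    for k in range(r):
        x = rng.uniform(0, 1 - delta, d)
        f0 = model(x)
        for i in range(d):                 # perturb one parameter at a time
            xp = x.copy(); xp[i] += delta
            effects[k, i] = abs(model(xp) - f0) / delta
    return effects.mean(axis=0)            # mean |elementary effect| per parameter

toy = lambda x: 5 * x[0] + x[1] ** 2 + 0.001 * x[2:].sum()
mu = screen(toy, d=8)
print(np.where(mu > 0.1 * mu.max())[0])    # indices of informative parameters
```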
Estimation of splitting functions from Earth's normal mode spectra using the neighbourhood algorithm
NASA Astrophysics Data System (ADS)
Pachhai, Surya; Tkalčić, Hrvoje; Masters, Guy
2016-01-01
The inverse problem for Earth structure from normal mode data is strongly non-linear and can be inherently non-unique. Traditionally, the inversion is linearized by taking partial derivatives of the complex spectra with respect to the model parameters (i.e. structure coefficients), and solved in an iterative fashion. This method requires that the earthquake source model is known. However, the release of energy in large earthquakes used for the analysis of Earth's normal modes is not simple. A point source approximation is often inadequate, and a more complete account of energy release at the source is required. In addition, many earthquakes are required for the solution to be insensitive to the initial constraints and regularization. In contrast to an iterative approach, the autoregressive linear inversion technique conveniently avoids the need for earthquake source parameters, but it also requires a number of events to achieve full convergence when a single event does not excite all singlets well. To build on previous improvements, we develop a technique to estimate structure coefficients (and consequently, the splitting functions) using a derivative-free parameter search known as the neighbourhood algorithm (NA). We implement an efficient forward method derived using the autoregression of receiver strips, and this allows us to search over a multiplicity of structure coefficients in a relatively short time. After demonstrating the feasibility of using the NA in synthetic cases, we apply it to observations of the inner-core-sensitive mode 13S2. The splitting function of this mode is dominated by spherical harmonic degree 2 axisymmetric structure and is consistent with the results obtained from the autoregressive linear inversion. The sensitivity analysis of multiple events confirms the importance of the Bolivia, 1994 earthquake. When this event is used in the analysis, as few as two events are sufficient to constrain the splitting functions of the 13S2 mode. Apart from not requiring knowledge of the earthquake source, the newly developed technique provides an approximate uncertainty measure of the structure coefficients and allows us to control the type of structure solved for, for example to establish whether elastic structure is sufficient.
Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.
Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana
2012-05-15
Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For the ill-posed water quality model, the differences between the results were much wider, and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. number of iterations required to generate the probability distribution of parameters), it was found that SCEM-UA and AMALGAM produce results quicker than GLUE in terms of required number of simulations. However, GLUE requires the lowest modelling skills and is easy to implement. All non-Bayesian methods have problems with the way they accept behavioural parameter sets, e.g. GLUE, SCEM-UA and AMALGAM have subjective acceptance thresholds, while MICA usually has problems with its assumption of normality of residuals. It is concluded that modellers should select the method which is most suitable for the system they are modelling (e.g. complexity of the model's structure including the number of parameters), their skill/knowledge level, the available information, and the purpose of their study. Copyright © 2012 Elsevier Ltd. All rights reserved.
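For concreteness, a bare-bones GLUE loop might look like the sketch below: Monte Carlo sampling, a Nash-Sutcliffe likelihood, a subjective behavioural threshold, and percentile prediction bounds; the one-parameter "model", data, and threshold are illustrative assumptions.

```python
# Minimal sketch of the GLUE procedure compared above: Monte Carlo
# sampling, a Nash-Sutcliffe likelihood, a subjective behavioural
# threshold, and percentile prediction bounds. The linear "model",
# data and the 0.5 threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
obs = np.array([0.95, 2.60, 3.20, 0.85, 0.50, 0.15])   # observed runoff
rain = np.array([1.0, 3.0, 3.5, 1.0, 0.5, 0.2])        # rainfall driver

def model(k):                                           # toy one-parameter runoff model
    return k * rain

samples = rng.uniform(0.1, 2.0, 5000)                   # Monte Carlo parameter sampling
sims = np.array([model(k) for k in samples])
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()
behavioural = nse > 0.5                                 # subjective acceptance threshold
# Unweighted 90% bounds for brevity; full GLUE weights quantiles by likelihood.
lo, hi = np.percentile(sims[behavioural], [5, 95], axis=0)
print(f"{behavioural.sum()} behavioural sets; bounds at t=2: [{lo[2]:.2f}, {hi[2]:.2f}]")
```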
Estimating stage-specific daily survival probabilities of nests when nest age is unknown
Stanley, T.R.
2004-01-01
Estimation of daily survival probabilities of nests is common in studies of avian populations. Since the introduction of Mayfield's (1961, 1975) estimator, numerous models have been developed to relax Mayfield's assumptions and account for biologically important sources of variation. Stanley (2000) presented a model for estimating stage-specific (e.g. incubation stage, nestling stage) daily survival probabilities of nests that conditions on “nest type” and requires that nests be aged when they are found. Because aging nests typically requires handling the eggs, there may be situations where nests can not or should not be aged and the Stanley (2000) model will be inapplicable. Here, I present a model for estimating stage-specific daily survival probabilities that conditions on nest stage for active nests, thereby obviating the need to age nests when they are found. Specifically, I derive the maximum likelihood function for the model, evaluate the model's performance using Monte Carlo simulations, and provide software for estimating parameters (along with an example). For sample sizes as low as 50 nests, bias was small and confidence interval coverage was close to the nominal rate, especially when a reduced-parameter model was used for estimation.
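As a simplified illustration of the likelihood machinery (one stage only, not the paper's stage-specific model), a daily survival rate can be estimated from interval nest-check data as below; the data are invented.

```python
# Minimal sketch of estimating a (single-stage) daily survival rate by
# maximum likelihood from interval nest-check data; the data and the
# simplification to one stage are illustrative, not the paper's full
# stage-specific model.
import numpy as np
from scipy.optimize import minimize_scalar

# (interval length in days, 1 if nest survived interval else 0)
intervals = [(5, 1), (5, 1), (3, 0), (5, 1), (4, 0), (5, 1)]

def neg_log_lik(s):
    ll = 0.0
    for t, survived in intervals:
        # survive t days with prob s**t; fail sometime in interval otherwise
        ll += t * np.log(s) if survived else np.log(1 - s ** t)
    return -ll

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(f"daily survival estimate: {res.x:.3f}")
```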
A CCIR aeronautical mobile satellite report
NASA Technical Reports Server (NTRS)
Davarian, Faramaz; Bishop, Dennis; Rogers, David; Smith, Ernest K.
1989-01-01
Propagation effects in the aeronautical mobile-satellite service differ from those in the fixed-satellite service and other mobile-satellite services because: small antennas are used on aircraft, and the aircraft body may affect the performance of the antenna; high aircraft speeds cause large Doppler spreads; aircraft terminals must accommodate a large dynamic range in transmission and reception; and due to their high speeds, banking maneuvers, and three-dimensional operation, aircraft routinely require exceptionally high integrity of communications, making even short-term propagation effects very important. Data and models specifically required to characterize the path impairments are discussed, which include: tropospheric effects, including gaseous attenuation, cloud and rain attenuation, fog attenuation, refraction and scintillation; surface reflection (multipath) effects; ionospheric effects such as scintillation; and environmental effects (aircraft motion, sea state, land surface type). Aeronautical mobile-satellite systems may operate on a worldwide basis, including propagation paths at low elevation angles. Several measurements of multipath parameters over land and sea were conducted. In some cases, laboratory simulations are used to compare measured data and verify model parameters. The received signal is considered in terms of its possible components: a direct wave subject to atmospheric effects, and a reflected wave, which generally contains mostly a diffuse component.
Assessing Backwards Integration as a Method of KBO Family Finding
NASA Astrophysics Data System (ADS)
Benfell, Nathan; Ragozzine, Darin
2018-04-01
The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual specific asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. But various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea Family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were then used to investigate which parameters are of enough significance to require inclusion in the integration. Thereby we determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovery of potential young families in the Kuiper Belt.
Enhanced visual perception through tone mapping
NASA Astrophysics Data System (ADS)
Harrison, Andre; Mullins, Linda L.; Raglin, Adrienne; Etienne-Cummings, Ralph
2016-05-01
Tone mapping operators compress high dynamic range images to improve the picture quality on a digital display when the dynamic range of the display is lower than that of the image. However, tone mapping operators have been largely designed and evaluated based on the aesthetic quality of the resulting displayed image or how perceptually similar the compressed image appears relative to the original scene. They also often require per image tuning of parameters depending on the content of the image. In military operations, however, the amount of information that can be perceived is more important than the aesthetic quality of the image and any parameter adjustment needs to be as automated as possible regardless of the content of the image. We have conducted two studies to evaluate the perceivable detail of a set of tone mapping algorithms, and we apply our findings to develop and test an automated tone mapping algorithm that demonstrates a consistent improvement in the amount of perceived detail. An automated, and thereby predictable, tone mapping method enables a consistent presentation of perceivable features, can reduce the bandwidth required to transmit the imagery, and can improve the accessibility of the data by reducing the needed expertise of the analyst(s) viewing the imagery.
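As an example of the kind of global operator being evaluated, the sketch below implements a Reinhard-style mapping; the key value a is exactly the sort of per-image parameter the automated approach seeks to fix, and this is not the authors' algorithm.

```python
# Minimal sketch of a global tone mapping operator (Reinhard-style); it
# illustrates the kind of per-image parameter (the key value a) that the
# automated approach above seeks to fix, and is not the authors' method.
import numpy as np

def tonemap(luminance, a=0.18, eps=1e-6):
    """Compress HDR luminance to [0, 1)."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # log-average luminance
    scaled = a * luminance / log_avg                    # expose to key value a
    return scaled / (1.0 + scaled)                      # compressive mapping

hdr = np.random.default_rng(0).lognormal(0, 2, (4, 4))  # synthetic HDR patch
print(tonemap(hdr).round(3))
```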
Lessons learned from comparing molecular dynamics engines on the SAMPL5 dataset.
Shirts, Michael R; Klein, Christoph; Swails, Jason M; Yin, Jian; Gilson, Michael K; Mobley, David L; Case, David A; Zhong, Ellen D
2017-01-01
We describe our efforts to prepare common starting structures and models for the SAMPL5 blind prediction challenge. We generated the starting input files and single configuration potential energies for the host-guest systems in the SAMPL5 blind prediction challenge for the GROMACS, AMBER, LAMMPS, DESMOND and CHARMM molecular simulation programs. All conversions were fully automated from the originally prepared AMBER input files using a combination of the ParmEd and InterMol conversion programs. We find that the energy calculations for all molecular dynamics engines for this molecular set agree to better than 0.1% relative absolute energy for all energy components, and in most cases an order of magnitude better, when reasonable choices are made for different cutoff parameters. However, there are some surprising sources of statistically significant differences. Most importantly, different choices of Coulomb's constant between programs are one of the largest sources of discrepancies in energies. We discuss the measures required to get good agreement in the energies for equivalent starting configurations between the simulation programs, and the energy differences that occur when simulations are run with program-specific default simulation parameter values. Finally, we discuss what was required to automate this conversion and comparison.
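A sketch of the kind of per-component tolerance check implied above; the component names and energies are invented, with the Coulomb term deliberately set to exceed the 0.1% tolerance to mimic a Coulomb's-constant discrepancy.

```python
# Minimal sketch of the kind of cross-engine check described above:
# comparing per-component single-point energies against a relative
# tolerance; the component names and numbers are illustrative, not
# SAMPL5 outputs.
def compare_energies(ref, other, rel_tol=1e-3):
    """Flag components whose relative deviation exceeds rel_tol (0.1%)."""
    report = {}
    for term, e_ref in ref.items():
        rel = abs(other[term] - e_ref) / max(abs(e_ref), 1e-12)
        report[term] = (rel, rel <= rel_tol)
    return report

gromacs = {"bond": 1250.3, "angle": 830.1, "LJ": -2100.7, "Coulomb": -9805.2}
amber   = {"bond": 1250.3, "angle": 830.1, "LJ": -2100.6, "Coulomb": -9790.0}
for term, (rel, ok) in compare_energies(gromacs, amber).items():
    print(f"{term:8s} rel. diff {rel:.2e} {'OK' if ok else 'EXCEEDS 0.1%'}")
```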
Use of Airborne Hyperspectral Data in the Simulation of Satellite Images
NASA Astrophysics Data System (ADS)
de Miguel, Eduardo; Jimenez, Marcos; Ruiz, Elena; Salido, Elena; Gutierrez de la Camara, Oscar
2016-08-01
The simulation of future images is part of the development phase of most Earth Observation missions. This simulation frequently uses images acquired from airborne instruments as a starting point. These instruments provide the required flexibility in acquisition parameters (time, date, illumination and observation geometry...) and high spectral and spatial resolution, well above the target values (as required by simulation tools). However, there are a number of important problems hampering the use of airborne imagery. One of these problems is that observation zenith angles (OZA) are far from those that the missions to be simulated would use. We examine this problem by evaluating the difference in ground reflectance estimated from airborne images for different observation/illumination geometries. Next, we analyze a solution for simulation purposes, in which a Bidirectional Reflectance Distribution Function (BRDF) model is attached to an image of the isotropic surface reflectance. The results obtained confirm the need for reflectance anisotropy correction when using airborne images to create a reflectance map for simulation purposes. But this correction should not be used without providing the corresponding estimation of the BRDF, in the form of model parameters, to the simulation teams.
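As a toy illustration of the correction described (not the authors' BRDF model), the sketch below normalizes an airborne reflectance to an assumed-isotropic value and re-applies a made-up anisotropy factor at the satellite viewing angle; the functional form and all numbers are placeholders:

```python
import numpy as np

# Toy reflectance-anisotropy normalization; the anisotropy factor below is
# a made-up placeholder, not a fitted BRDF model.
def anisotropy_factor(theta_view_deg, k=0.15):
    """Hypothetical view-angle dependence, normalized to 1 at nadir."""
    theta = np.radians(theta_view_deg)
    return 1.0 + k * (1.0 - np.cos(theta))

rho_measured = 0.32      # reflectance observed at the airborne OZA (assumed)
oza_airborne = 25.0      # airborne observation zenith angle (deg, assumed)
oza_satellite = 5.0      # OZA of the mission to be simulated (deg, assumed)

rho_iso = rho_measured / anisotropy_factor(oza_airborne)  # isotropic map value
rho_sim = rho_iso * anisotropy_factor(oza_satellite)      # re-apply BRDF for simulation
print(f"isotropic reflectance {rho_iso:.4f}, simulated-view reflectance {rho_sim:.4f}")
```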
Modeling impact of small Kansas landfills on underlying aquifers
Sophocleous, M.; Stadnyk, N.G.; Stotts, M.
1996-01-01
Small landfills are exempt from compliance with Resource Conservation and Recovery Act Subtitle D standards for liner and leachate collection. We investigate the ramifications of this exemption under western Kansas semiarid environments and explore the conditions under which naturally occurring geologic settings provide sufficient protection against ground-water contamination. The methodology we employed was to run water budget simulations using the Hydrologic Evaluation of Landfill Performance (HELP) model, and fate and transport simulations using the Multimedia Exposure Assessment Model (MULTIMED) for several western Kansas small landfill scenarios in combination with extensive sensitivity analyses. We demonstrate that requiring landfill cover, leachate collection system (LCS), and compacted soil liner will reduce leachate production by 56%, whereas requiring only a cover without LCS and liner will reduce leachate by half as much. The most vulnerable small landfills are shown to be the ones with no vegetative cover underlain by both a relatively thin vadose zone and aquifer and which overlie an aquifer characterized by cool temperatures and low hydraulic gradients. The aquifer-related physical and chemical parameters proved to be more important than vadose zone and biodegradation parameters in controlling leachate concentrations at the point of compliance. © ASCE.
Innovative hyperchaotic encryption algorithm for compressed video
NASA Astrophysics Data System (ADS)
Yuan, Chun; Zhong, Yuzhuo; Yang, Shiqiang
2002-12-01
Stream cryptosystems, which implement encryption by selecting a few parts of the block data and header information of the compressed video stream, are accepted to achieve good real-time performance and flexibility. A chaotic random number generator, for example the Logistic Map, is a comparatively promising substitute, but it is easily attacked by nonlinear dynamic forecasting and geometric information extraction. In this paper, we present a hyperchaotic cryptography scheme to encrypt compressed video, which integrates the Logistic Map with a Z(2^32 - 1) field linear congruential algorithm to strengthen the security of mono-chaotic cryptography, while the real-time performance and flexibility of chaotic sequence cryptography are maintained. It also integrates dissymmetrical public-key cryptography and implements encryption and identity authentication on control parameters at the initialization phase. In accordance with the importance of data in the compressed video stream, encryption is performed in a layered scheme. In this innovative hyperchaotic cryptography, the value and the updating frequency of control parameters can be changed online to satisfy the requirements of network quality, processor capability and security. The innovative hyperchaotic cryptography proves robust security by cryptanalysis, and shows good real-time performance and flexible implementation capability through arithmetic evaluation and testing.
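A minimal sketch of the kind of hybrid keystream generator described, mixing a logistic map with a linear congruential generator (LCG) over Z(2^32 - 1); all constants are illustrative, not the paper's values:

```python
# Toy hybrid keystream: logistic map mixed with an LCG, used to selectively
# encrypt header bytes by XOR. Seeds and constants are illustrative only.
def keystream(x0=0.3141592, seed=12345, n=16):
    x, s = x0, seed
    m = 2**32 - 1
    out = []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)             # logistic map in its chaotic regime
        s = (1103515245 * s + 12345) % m     # LCG over Z(2^32 - 1)
        out.append((int(x * m) ^ s) & 0xFF)  # mix the two generators, keep one byte
    return bytes(out)

header = b"compressed video header"          # stand-in for selected stream bytes
ks = keystream(n=len(header))
cipher = bytes(h ^ k for h, k in zip(header, ks))   # selective encryption
assert bytes(c ^ k for c, k in zip(cipher, ks)) == header  # decryption round-trip
```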
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with the Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with the Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
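To illustrate why the likelihood choice matters, the sketch below scores the same non-Gaussian residuals under a Gaussian and a heavier-tailed (Laplace) likelihood; the Laplace density is only a simple stand-in for the formal generalized likelihood, which additionally models skew, kurtosis, heteroscedasticity, and autocorrelation:

```python
import numpy as np
from scipy import stats

# Compare log-likelihoods of heavy-tailed residuals under two error models.
rng = np.random.default_rng(0)
residuals = stats.laplace.rvs(scale=0.5, size=200, random_state=rng)  # non-Gaussian errors

ll_gauss = stats.norm.logpdf(residuals, scale=residuals.std()).sum()          # Gaussian fit
ll_laplace = stats.laplace.logpdf(residuals, scale=np.abs(residuals).mean()).sum()  # MLE scale
print(f"Gaussian log-likelihood: {ll_gauss:.1f}")
print(f"Laplace  log-likelihood: {ll_laplace:.1f}")  # typically higher for these residuals
```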
Lofthag-Hansen, Sara; Thilander-Klang, Anne; Gröndahl, Kerstin
2011-11-01
To evaluate subjective image quality for two diagnostic tasks, periapical diagnosis and implant planning, for cone beam computed tomography (CBCT) using different exposure parameters and fields of view (FOVs). Examinations were performed in the posterior part of the jaws on a skull phantom with 3D Accuitomo (FOV 3 cm×4 cm) and 3D Accuitomo FPD (FOVs 4 cm×4 cm and 6 cm×6 cm). All combinations of 60, 65, 70, 75, 80 kV and 2, 4, 6, 8, 10 mA with a rotation of 180° and 360° were used. The dose-area product (DAP) value was determined for each combination. The images were presented in a random order for each FOV and jaw, displaying the object in axial, cross-sectional and sagittal views, without scanning data. Seven observers assessed image quality on a six-point rating scale. Intra-observer agreement was good (κw=0.76) and inter-observer agreement moderate (κw=0.52). Stepwise logistic regression showed kV, mA and diagnostic task to be the most important variables. Periapical diagnosis, regardless of jaw, required higher exposure parameters than implant planning. Implant planning in the lower jaw required higher exposure parameters than in the upper jaw. Overall ranking of FOVs gave 4 cm×4 cm, then 6 cm×6 cm, followed by 3 cm×4 cm. This study has shown that exposure parameters should be adjusted according to diagnostic task. For this particular CBCT brand a rotation of 180° gave good subjective image quality, hence a substantial dose reduction can be achieved without loss of diagnostic information. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network
NASA Technical Reports Server (NTRS)
Kuhn, D. Richard; Kacker, Raghu; Lei, Yu
2010-01-01
This study compared random and t-way combinatorial inputs of a network simulator to determine if these two approaches produce significantly different deadlock detection results for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or for determining modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
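The combinatorial bookkeeping is easy to make concrete; assuming n parameters with v values each (illustrative numbers, not the study's simulator inputs), the sketch below counts the t-way combinations a covering test suite must hit versus exhaustive testing:

```python
from math import comb

# Count the coverage obligation for t-way testing: every combination of
# any t parameters' values must appear in at least one test.
n, v = 10, 3  # illustrative: 10 parameters, 3 values each
for t in (2, 3, 4):
    n_combos = comb(n, t) * v**t  # parameter-value combinations to cover
    print(f"t={t}: {n_combos} t-way combinations must each appear in some test")

# Exhaustive testing would need v**n tests; covering arrays need far fewer,
# growing roughly like v**t * log(n) for fixed t.
print(f"exhaustive: {v**n} tests")
```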
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for various applications, such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimizations and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which causes unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study, a DICOM-compliant object called 3D presentation states (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which require multiple segmentations and visualizations during the workflow of radiologists. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
Adam, Stewart I; Srinet, Prateek; Aronberg, Ryan M; Rosenberg, Graeme; Leder, Steven B
2015-01-01
To investigate physiologic parameters, voice production abilities, and functional verbal communication ratings of the Blom low profile voice inner cannula and Passy-Muir one-way tracheotomy tube speaking valves. Case series with planned data collection. Large, urban, tertiary care teaching hospital. Referred sample of 30 consecutively enrolled adults requiring a tracheotomy tube and tested with Blom and Passy-Muir valves. Physiologic parameters recorded were oxygen saturation, respiration rate, and heart rate. Voice production abilities included maximum voice intensity in relation to ambient room noise and maximum phonation duration of the vowel /a/. Functional verbal communication was determined from randomized and blinded listener ratings of counting 1-10, saying the days of the week, and reading aloud the sentence, "There is according to legend a boiling pot of gold at one end." There were no significant differences (p>0.05) between the Blom and Passy-Muir valves for the physiologic parameters of oxygen saturation, respiration rate, and heart rate; voice production abilities of both maximum intensity and duration of /a/; and functional verbal communication ratings. Both valves allowed for significantly greater maximum voice intensity over ambient room noise (p<0.001). The Blom low profile voice inner cannula and Passy-Muir one-way speaking valves exhibited equipoise regarding patient physiologic parameters, voice production abilities, and functional verbal communication ratings. Readers will understand the importance of verbal communication for patients who require a tracheotomy tube; will be able to determine the differences between the Blom low profile voice inner cannula and Passy-Muir one-way tracheotomy tube speaking valves; and will be confident in knowing that both the Blom and Passy-Muir one-way tracheotomy tube speaking valves are equivalent regarding physiological functioning and speech production abilities. Copyright © 2015 Elsevier Inc. All rights reserved.
[Parameters of phoniatric examination of solo vocalists].
Mitrović, Slobodan; Jović, Rajko; Aleksić, Vesna; Cvejić, Biserka
2002-01-01
A phoniatrist analyzes the professional's voice at the beginning of vocal studies or career, but also later, in cases of voice disorder. Phoniatric examination of professional singers must be done according to "all inclusive" examination protocols. Such protocols must establish the status of the basic elements of the phonatory system: the activator, generator and resonator of the voice, and the articulatory space. All patients requiring phoniatric examination, including candidates for professional singing, need to provide anamnestic data about their previous problems regarding voice or singing. This examination is necessary and must include examination of the nose, oral cavity, pharynx, ears and larynx. The analysis is based on evaluation of physiological and pathophysiological manifestations of the voice. Determination of the musical voice range during phoniatric examination is not intended to classify the voice, nor to suggest to the vocal teacher what to expect from future singers; the musical range can be determined by a phoniatrist skilled in music or with musical training, but above all by the vocal teacher. Objective methods are used for examination of phonatory function and laryngeal pathology; they are not invasive and give objective, quantitative information. They include laryngostroboscopy, spectral analysis of voice (sonography) and fundamental parameters of the voice signal (computer program). Articulation is very important for solo singers, because good articulation contributes to the qualitative emission of sound and the expression of emotions. Tonal-threshold audiometry is performed as a hearing test. Respiratory function tests include rhinomanometry, vital capacity measurement, maximal phonation time and phonation quotient. Phoniatric examination is a necessary procedure which must be performed before admission to the academy of solo singing, and then during the singer's education and career. The phoniatric protocol must include a minimal number of parameters, which can be increased if required. All parameters of phoniatric examination must be adequately evaluated by experts.
Li, Baini; Ma, Jun; Hu, Xuenan; Liu, Haijun; Wu, Jiajiao; Chen, Hongjun; Zhang, Runjie
2010-08-01
Exotic fruit flies (Ceratitis spp.) are often serious agricultural pests. Here, we used pathway analysis and Monte Carlo simulations to assess the risk of introduction of Ceratitis capitata (Wiedemann), Ceratitis cosyra (Walker), and Ceratitis rosa Karsch into southern China with fruit consignments and incoming travelers. Historical data, expert opinions, relevant literature, and archives were used to set appropriate parameters in the pathway analysis. Based on the ongoing quarantine/inspection strategies of China, as well as the interception records, we estimated the annual number of each fruit fly species entering Guangdong province undetected with commercially imported fruit, and the associated risk. We also estimated the gross number of pests arriving at Guangdong ports with incoming travelers and the associated risk. Sensitivity analysis was also performed to test the impact of parameter changes and to assess how the risk could be reduced. Results showed that the risk of introduction of the three fruit fly species into southern China with fruit consignments, which are mostly transported by ship, exists but is relatively low. In contrast, the risk of introduction with incoming travelers is high and hence deserves intensive attention. Sensitivity analysis indicated that either ensuring all shipments meet current phytosanitary requirements or increasing the proportion of fruit imports sampled for inspection could substantially reduce the risk associated with commercial imports. Sensitivity analysis also provided justification for banning importation of fresh fruit by international travelers. Thus, inspection and quarantine in conjunction with intensive detection were important mitigation measures to reduce the risk of Ceratitis spp. being introduced into China.
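A minimal Monte Carlo sketch of such a pathway analysis is shown below; the distributions and rates are hypothetical placeholders, not the study's calibrated parameters:

```python
import numpy as np

# Monte Carlo pathway sketch: how many infested consignments escape
# inspection per year? All distributions below are assumed for illustration.
rng = np.random.default_rng(42)
n_sim = 100_000
consignments = rng.poisson(5000, n_sim)        # consignments per year (assumed)
p_infested = rng.beta(2, 200, n_sim)           # infestation prevalence (assumed)
p_detect = rng.uniform(0.3, 0.7, n_sim)        # inspection detection rate (assumed)

escaped = rng.binomial(consignments, p_infested * (1 - p_detect))
print(f"mean escapes/yr: {escaped.mean():.1f}, "
      f"95th percentile: {np.percentile(escaped, 95):.0f}")
```

Raising `p_detect` (more sampling for inspection) or lowering `p_infested` (phytosanitary compliance) in this sketch plays the same role as the mitigation levers examined in the sensitivity analysis.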
NASA Technical Reports Server (NTRS)
Estes, J. E.; Eisgruber, L.
1981-01-01
Important points presented and recommendations made at an information and decision processes workshop held in Asilomar, California; at a data and information performance workshop held in Houston, Texas; and at a data base use and management workshop held near San Jose, California are summarized. Issues raised at a special session of the Soil Conservation Society of America's remote sensing for resource management conference in Kansas City, Missouri are also highlighted. The goals, status and activities of the NASA program definition study of basic research requirements, the necessity of making the computer science community aware of user needs with respect to information related to renewable resources, performance parameters and criteria for judging federal information systems, and the requirements and characteristics of scientific data bases are among the topics reported.
Barth, Gilbert R.; Hill, M.C.
2005-01-01
This paper evaluates the importance of seven types of parameters to virus transport: hydraulic conductivity, porosity, dispersivity, sorption rate and distribution coefficient (representing physical-chemical filtration), and in-solution and adsorbed inactivation (representing virus inactivation). The first three parameters relate to subsurface transport in general while the last four, the sorption rate, distribution coefficient, and in-solution and adsorbed inactivation rates, represent the interaction of viruses with the porous medium and their ability to persist. The importance of four types of observations for estimating the virus-transport parameters is evaluated: hydraulic heads, flow, temporal moments of conservative-transport concentrations, and virus concentrations. The evaluations are conducted using one- and two-dimensional homogeneous simulations, designed from published field experiments, and recently developed sensitivity-analysis methods. Sensitivity to the transport-simulation time-step size is used to evaluate the importance of numerical solution difficulties. Results suggest that hydraulic conductivity, porosity, and sorption are most important to virus-transport predictions. Most observation types provide substantial information about hydraulic conductivity and porosity; only virus-concentration observations provide information about sorption and inactivation. The observations are not sufficient to estimate these important parameters uniquely. Even with all observation types, there is extreme parameter correlation between porosity and hydraulic conductivity and between the sorption rate and in-solution inactivation. Parameter estimation was accomplished by fixing values of porosity and in-solution inactivation.
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR Part 63, Subpart UUU, Table 41—Requirements for Installation, Operation, and Maintenance of Continuous Parameter Monitoring Systems (Protection of Environment; Sulfur Recovery Units).
Estimation procedure of the efficiency of the heat network segment
NASA Astrophysics Data System (ADS)
Polivoda, F. A.; Sokolovskii, R. I.; Vladimirov, M. A.; Shcherbakov, V. P.; Shatrov, L. A.
2017-07-01
An extensive city heat network contains many segments, and each segment operates with a different efficiency of heat energy transfer. This work proposes an original technical approach: evaluating the energy efficiency function of a heat network segment by interpreting two hyperbolic functions in the form of a transcendental equation. In essence, the problem of how the efficiency of the heat network changes with the ambient temperature was studied. Criterial dependences for evaluating a specified segment efficiency of the heat network, and for finding the parameters for optimal control of the heat supply to remote users, were derived with the help of functional analysis methods. In general, the efficiency function of the heat network segment is interpreted as a multidimensional surface, which allows it to be illustrated graphically. It was shown that solving the inverse problem is also possible: the required consumption of the heating agent and its temperature may be found from the specified segment efficiency and ambient temperature, and requirements for heat insulation and pipe diameters may be formulated as well. The calculation results were obtained in a strict analytical form, which allows the derived functional dependences to be investigated for extremums (maximums) under given external parameters. It is concluded that this calculation procedure is expedient in two practically important cases: for an already built network, where only the heat agent consumption and pipe temperatures can be changed, and for a network under design, where changes to the material parameters of the network are possible. This procedure allows the diameter and length of the pipes, types of insulation, etc., to be refined. The pipe length may be treated as an independent parameter for the calculations; its optimization is made in accordance with other, economic, criteria for the specific project.
Physical parameters for proton induced K-, L-, and M-shell ionization processes
NASA Astrophysics Data System (ADS)
Shehla; Puri, Sanjiv
2016-10-01
The proton induced atomic inner-shell ionization processes comprising radiative and non-radiative transitions are characterized by physical parameters, namely, the proton ionization cross sections, X-ray emission rates, fluorescence yields and Coster-Kronig (CK) transition probabilities. These parameters are required to calculate the K/L/M shell X-ray production (XRP) cross sections and relative X-ray intensity ratios, which in turn are required for different analytical applications. The current status of different physical parameters is presented in this report for use in various applications.
Comparison of chiller models for use in model-based fault detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreedharan, Priya; Haves, Philip
Selecting the model is an important and essential step in model based fault detection and diagnosis (FDD). Factors that are considered in evaluating a model include accuracy, training data requirements, calibration effort, generality, and computational requirements. The objective of this study was to evaluate different modeling approaches for their applicability to model based FDD of vapor compression chillers. Three different models were studied: the Gordon and Ng Universal Chiller model (2nd generation) and a modified version of the ASHRAE Primary Toolkit model, which are both based on first principles, and the DOE-2 chiller model, as implemented in CoolTools™, which is empirical. The models were compared in terms of their ability to reproduce the observed performance of an older, centrifugal chiller operating in a commercial office building and a newer centrifugal chiller in a laboratory. All three models displayed similar levels of accuracy. Of the first principles models, the Gordon-Ng model has the advantage of being linear in the parameters, which allows more robust parameter estimation methods to be used and facilitates estimation of the uncertainty in the parameter values. The ASHRAE Toolkit Model may have advantages when refrigerant temperature measurements are also available. The DOE-2 model can be expected to have advantages when very limited data are available to calibrate the model, as long as one of the previously identified models in the CoolTools library matches the performance of the chiller in question.
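The practical benefit of a linear-in-parameters model can be sketched as follows; the regressor set and synthetic data are placeholders, not the actual Gordon-Ng formulation:

```python
import numpy as np

# A model that is linear in its parameters, y = X @ theta, has a closed-form
# least-squares solution and a straightforward parameter-covariance estimate.
rng = np.random.default_rng(1)
T_evap = rng.uniform(275, 285, 50)   # evaporator temperature (K, synthetic)
T_cond = rng.uniform(300, 315, 50)   # condenser temperature (K, synthetic)
Q_evap = rng.uniform(100, 500, 50)   # cooling load (kW, synthetic)

X = np.column_stack([np.ones(50), 1 / T_evap, T_cond / T_evap, 1 / Q_evap])
y = X @ np.array([0.1, 50.0, 0.8, 20.0]) + rng.normal(0, 0.01, 50)  # synthetic response

theta, *_ = np.linalg.lstsq(X, y, rcond=None)          # robust closed-form fit
cov = np.linalg.inv(X.T @ X) * (y - X @ theta).var()   # rough parameter covariance
print("theta:", theta.round(3))
```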
A Physical Based Formula for Calculating the Critical Stress of Snow Movement
NASA Astrophysics Data System (ADS)
He, S.; Ohara, N.
2016-12-01
In snow redistribution modeling, one of the most important parameters is the critical stress of snow movement, which is difficult to estimate from field data because it is influenced by various factors. In this study, a new formula for calculating the critical stress of snow movement was derived based on modeling of the ice particle sintering process and the moment balance of a snow particle. Through this formula, the influences of snow particle size, air temperature, and deposition time on the critical stress are explicitly taken into consideration. A sensitivity analysis using Sobol's method found that the critical stress estimate is sensitive to some of the model parameters. The two sensitive parameters of the sintering process model were determined by a calibration-validation procedure using snow flux data observed via FlowCapt. Based on the snow flux and meteorological data observed at the ISAW stations (http://www.iav.ch), it was shown that the formula described well the evolution of the minimum friction wind speed required for snow motion. This new formula suggests that when snow has just reached the surface, smaller snowflakes can move more easily than larger particles. However, smaller snow particles require more force to move as sintering between the snowflakes progresses. This implies that compact snow with small snow particles may be harder for wind to erode, although smaller particles have a higher chance of being suspended once they take off.
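A sketch of such a Sobol' screening using the SALib package is shown below; the response function and the parameter names and bounds are illustrative stand-ins for the sintering-based formula:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Sobol' sensitivity screening of a placeholder critical-stress response.
problem = {
    "num_vars": 3,
    "names": ["particle_size", "air_temp", "deposit_time"],      # assumed factors
    "bounds": [[0.1e-3, 1.0e-3], [253.0, 273.0], [0.0, 48.0]],   # assumed ranges
}

def critical_stress(x):  # placeholder response, not the paper's formula
    d, T, t = x
    return (1.0 / d) * np.exp(-(273.0 - T) / 20.0) * np.log1p(t)

X = saltelli.sample(problem, 1024)                 # Saltelli sampling design
Y = np.apply_along_axis(critical_stress, 1, X)
Si = sobol.analyze(problem, Y)                     # first- and total-order indices
print(dict(zip(problem["names"], Si["ST"].round(2))))
```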
NASA Astrophysics Data System (ADS)
Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.
2014-12-01
Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of these conspicuously large single-celled eukaryotes.
NASA Astrophysics Data System (ADS)
Kostrubiec, Franciszek; Pawlak, Ryszard; Raczynski, Tomasz; Walczak, Maria
1994-09-01
Laser treatment of the surface of materials is of major importance for many fields of technology. One of the latest and most significant methods of this treatment is laser alloying, which consists of introducing foreign atoms into the metal surface layer during the interaction of laser radiation with the surface. This opens up vast possibilities for modifying the properties of such a layer (obtaining layers of increased microhardness, increased resistance to electroerosion in an electric arc, etc.). For conductive materials used for electrical contacts, the conductivity of the material is a very important parameter. The paper presents the results of studies on the change in electrical conductivity of the surface layer of metals alloyed with a laser. A comparative analysis of the conductivity of base metal surface layers prior to and following laser treatment has been performed. Depending on the base metal and the alloying element, optimal treatment parameters allowing a required change in the surface layer conductivity have been selected. A very important property of the contact material is its resistance to plastic strain. It affects the real contact surface area and, along with the material conductivity, determines the contact resistance and the amount of heat generated at the place of contact. These quantities are directly related to the initiation and the course of an arc discharge, hence they also affect resistance to electroerosion. The parameter that reflects plastic properties under loads concentrated on a small surface, as is the case with the reciprocal contact force of two real surfaces whose irregularities are in contact, is microhardness. In the paper, the results of investigations into the microhardness of modified surface layers compared with the base metal microhardness are presented.
Drag coefficients for modeling flow through emergent vegetation in the Florida Everglades
Lee, J.K.; Roig, L.C.; Jenter, H.L.; Visser, H.M.
2004-01-01
Hydraulic data collected in a flume fitted with pans of sawgrass were analyzed to determine the vertically averaged drag coefficient as a function of vegetation characteristics. The drag coefficient is required for modeling flow through emergent vegetation at low Reynolds numbers in the Florida Everglades. Parameters of the vegetation, such as the stem population per unit bed area and the average stem/leaf width, were measured for five fixed vegetation layers. The vertically averaged vegetation parameters for each experiment were then computed by weighted average over the submerged portion of the vegetation. Only laminar flow through emergent vegetation was considered, because this is the dominant flow regime of the inland Everglades. A functional form for the vegetation drag coefficient was determined by linear regression of the logarithmic transforms of measured resistance force and Reynolds number. The coefficients of the drag coefficient function were then determined for the Everglades, using extensive flow and vegetation measurements taken in the field. The Everglades data show that the stem spacing and the Reynolds number are important parameters for the determination of vegetation drag coefficient. © 2004 Elsevier B.V. All rights reserved.
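The regression form described, a linear fit of the logarithmic transforms, amounts to fitting a power law C_d = a Re^b; a minimal sketch with synthetic data:

```python
import numpy as np

# Fit log(C_d) = log(a) + b*log(Re) so that C_d = a * Re**b.
rng = np.random.default_rng(3)
Re = np.logspace(0, 2, 30)                          # low-Reynolds laminar range
Cd = 50.0 * Re**-0.9 * rng.lognormal(0, 0.05, 30)   # synthetic measurements

b, log_a = np.polyfit(np.log(Re), np.log(Cd), 1)    # slope, intercept
a = np.exp(log_a)
print(f"C_d ~ {a:.1f} * Re^{b:.2f}")
```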
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare several lung image segmentation methods based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). This research used 5 lung images for analysis. The results were then compared using the performance evaluation parameters, determined using MATLAB. A segmentation method is said to have good quality if it has the smallest MSE value and the highest PSNR. The results show that four sample images match these criteria with connected threshold, while one sample is best handled by threshold level set segmentation. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
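Assuming the standard definitions of the two metrics (PSNR = 10 log10(MAX^2 / MSE) for 8-bit images), a minimal implementation is:

```python
import numpy as np

# MSE and PSNR between a reference image and a segmented/processed image.
def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, max_val=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val**2 / m)

ref = np.random.default_rng(0).integers(0, 256, (64, 64))   # stand-in images
seg = np.clip(ref + np.random.default_rng(1).integers(-5, 6, (64, 64)), 0, 255)
print(f"MSE={mse(ref, seg):.2f}, PSNR={psnr(ref, seg):.2f} dB")
# lower MSE and higher PSNR indicate better segmentation quality
```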
Astrophysics to z ≈ 10 with Gravitational Waves
NASA Technical Reports Server (NTRS)
Stebbins, Robin; Hughes, Scott; Lang, Ryan
2007-01-01
The most useful characterization of a gravitational wave detector's performance is the accuracy with which astrophysical parameters of potential gravitational wave sources can be estimated. One of the most important source types for the Laser Interferometer Space Antenna (LISA) is inspiraling binaries of black holes. LISA can measure mass and spin to better than 1% for a wide range of masses, even out to high redshifts. The most difficult parameter to estimate accurately is almost always luminosity distance. Nonetheless, LISA can measure luminosity distance of intermediate-mass black hole binary systems (total mass ≈ 10^4 solar masses) out to z ≈ 10 with distance accuracies approaching 25% in many cases. With this performance, LISA will be able to follow the merger history of black holes from the earliest mergers of proto-galaxies to the present. LISA's performance as a function of mass from 1 to 10^7 solar masses and of redshift out to z ≈ 30 will be described. The re-formulation of LISA's science requirements based on an instrument sensitivity model and parameter estimation will be described.
Complementarity of dark matter searches in the phenomenological MSSM
Cahill-Rowley, Matthew; Cotta, Randy; Drlica-Wagner, Alex; ...
2015-03-11
As is well known, the search for and eventual identification of dark matter in supersymmetry requires a simultaneous, multipronged approach with important roles played by the LHC as well as both direct and indirect dark matter detection experiments. We examine the capabilities of these approaches in the 19-parameter phenomenological MSSM which provides a general framework for complementarity studies of neutralino dark matter. We summarize the sensitivity of dark matter searches at the 7 and 8 (and eventually 14) TeV LHC, combined with those by Fermi, CTA, IceCube/DeepCore, COUPP, LZ and XENON. The strengths and weaknesses of each of these techniques are examined and contrasted and their interdependent roles in covering the model parameter space are discussed in detail. We find that these approaches explore orthogonal territory and that advances in each are necessary to cover the supersymmetric weakly interacting massive particle parameter space. We also find that different experiments have widely varying sensitivities to the various dark matter annihilation mechanisms, some of which would be completely excluded by null results from these experiments.
Online residence time distribution measurement of thermochemical biomass pretreatment reactors
Sievers, David A.; Kuhn, Erik M.; Stickel, Jonathan J.; ...
2015-11-03
Residence time is a critical parameter that strongly affects the product profile and overall yield achieved from thermochemical pretreatment of lignocellulosic biomass during production of liquid transportation fuels. The residence time distribution (RTD) is one important measure of reactor performance and provides a metric to use when evaluating changes in reactor design and operating parameters. An inexpensive and rapid RTD measurement technique was developed to measure the residence time characteristics in biomass pretreatment reactors and similar equipment processing wet-granular slurries. Sodium chloride was pulsed into the feed entering a 600 kg/d pilot-scale reactor operated at various conditions, and aqueous salt concentration was measured in the discharge using specially fabricated electrical conductivity instrumentation. This online conductivity method was superior in both measurement accuracy and resource requirements compared to offline analysis. Experimentally measured mean residence time values were longer than estimated by simple calculation, and screw speed and throughput rate were investigated as contributing factors. Finally, a semi-empirical model was developed to predict the mean residence time as a function of operating parameters and enabled improved agreement.
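Assuming a standard moment analysis of the tracer response, the mean residence time follows from the first moment of the RTD, E(t) = C(t) / ∫C dt; a minimal sketch with a synthetic conductivity trace:

```python
import numpy as np

# Extract RTD moments from a salt-pulse tracer response (synthetic data).
t = np.linspace(0, 60, 200)               # minutes after the pulse
C = np.exp(-(t - 12.0) ** 2 / 30.0)       # stand-in conductivity trace

E = C / np.trapz(C, t)                    # normalized RTD, E(t)
t_mean = np.trapz(t * E, t)               # first moment = mean residence time
var = np.trapz((t - t_mean) ** 2 * E, t)  # second central moment (spread)
print(f"mean residence time ~ {t_mean:.1f} min, variance ~ {var:.1f} min^2")
```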
Smith, N.; Zhong, P.
2012-01-01
To investigate the roles of lithotripter shock wave (LSW) parameters and cavitation in stone comminution, a series of in vitro fragmentation experiments have been conducted in water and 1,3-butanediol (a cavitation-suppressive fluid) at a variety of acoustic field positions of an electromagnetic shock wave lithotripter. Using field mapping data and integrated parameters averaged over a circular stone holder area (Rh = 7 mm), close logarithmic correlations between the average peak pressure (P+(avg)) incident on the stone (D = 10 mm BegoStone) and comminution efficiency after 500 and 1,000 shocks have been identified. Moreover, the correlations have demonstrated distinctive thresholds in P+(avg) (5.3 MPa and 7.6 MPa for soft and hard stones, respectively), that are required to initiate stone fragmentation independent of surrounding fluid medium and LSW dose. These observations, should they be confirmed using other shock wave lithotripters, may provide an important field parameter (i.e., P+(avg)) to guide appropriate application of SWL in clinics, and facilitate device comparison and design improvements in future lithotripters. PMID:22935690
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-12-04
With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS sensors and INS sensors are applied to measure vehicle stability parameters by fusing data from the two sensor systems. Although a Kalman filter requires prior knowledge of the model parameters, it is usually used to fuse data from multiple sensors. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. Finally, simulations and a real experiment were conducted to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
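A minimal one-dimensional Kalman sketch of the GPS/INS fusion idea (not the paper's two-stage, fuzzy-interpolated filter) is shown below; all noise levels and inputs are assumed:

```python
import numpy as np

# 1-D GPS/INS fusion: INS acceleration drives the prediction, GPS position
# corrects it. State = [position, velocity]; all parameters are assumed.
dt, q, r = 0.1, 0.05, 2.0         # time step (s), process noise, GPS noise std
F = np.array([[1, dt], [0, 1]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])        # GPS measures position only

x = np.zeros(2)
P = np.eye(2)
for k in range(100):
    a_ins = 0.2                                        # INS acceleration (stand-in)
    x = F @ x + np.array([0.5 * dt**2, dt]) * a_ins    # predict with INS input
    P = F @ P @ F.T + q * np.eye(2)
    z = x[0] + np.random.default_rng(k).normal(0, r)   # simulated GPS fix
    K = P @ H.T / (H @ P @ H.T + r**2)                 # Kalman gain
    x = x + (K * (z - H @ x)).ravel()                  # correct
    P = (np.eye(2) - K @ H) @ P
print("fused position/velocity:", x.round(2))
```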
Ayaz, Teslime; Kocaman, Sinan Altan; Durakoğlugil, Tuğba; Erdoğan, Turan; Şahin, Osman Zikrullah; Şahin, Serap Baydur; Çiçek, Yüksel; Şatiroğlu, Ömer
2014-01-01
Background and Objectives Left ventricular hypertrophy (LVH), a sign of subclinical cardiovascular disease, is an important predictor of cardiovascular morbidity and mortality. The aim of our study was to determine the association of left ventricular mass (LVM) with possible causative anthropometric and biochemical parameters as well as carotid intima-media thickness (CIMT) and brachial flow-mediated dilation (FMD) as surrogates of atherosclerosis and endothelial dysfunction, respectively, in previously untreated hypertensive patients. Subjects and Methods Our study included 114 consecutive previously untreated hypertensive patients who underwent echocardiography and ultrasonography to evaluate their vascular status and function via brachial artery CIMT and FMD. Results Among all study parameters, age, systolic blood pressure (BP), diastolic BP, pulse pressure, plasma glucose, uric acid, total bilirubin, direct bilirubin, hemoglobin, and CIMT were positively correlated with the LVM index. Multiple logistic regression analysis revealed that office systolic BP, age, male gender, and total bilirubin were independent predictors of LVH. Conclusion Bilirubin seems to be related to LVM and LVH. The positive association of bilirubin with these parameters is novel and requires further research. PMID:25278987
Martínez-García, C G; Olguín, M T; Fall, C
2014-08-01
Aerobic digestion batch tests were run on a sludge model that contained only two fractions, the heterotrophic biomass (XH) and its endogenous residue (XP). The objective was to describe the stabilization of the sludge and estimate the endogenous decay parameters. Modeling was performed with Aquasim, based on long-term data of volatile suspended solids and chemical oxygen demand (VSS, COD). Sensitivity analyses were carried out to determine the conditions for unique identifiability of the parameters. Importantly, it was found that the COD/VSS ratio of the endogenous residues (1.06) was significantly lower than for the active biomass fraction (1.48). The decay rate constant of the studied sludge (low bH, 0.025 d⁻¹) was one-tenth that usually observed (0.2 d⁻¹), which has two main practical consequences: the required digestion time is much longer, and the oxygen uptake rate might fall below 1.5 mg O₂/(g TSS·h) (biosolids standards) without there being a significant decline in the biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.
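The two-fraction decay model described can be sketched as a pair of ODEs, with active biomass decaying at rate bH and leaving a fraction of endogenous residue; the residue fraction f_P and initial concentrations below are assumed for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-fraction endogenous decay: X_H decays at b_H, a fraction f_P of the
# decayed biomass accumulates as residue X_P. f_P and initial values assumed.
b_H, f_P = 0.025, 0.2   # d^-1 (the paper's low rate); residue fraction (assumed)

def rhs(t, y):
    XH, XP = y
    return [-b_H * XH, f_P * b_H * XH]

sol = solve_ivp(rhs, (0, 200), [2000.0, 300.0], t_eval=[0, 50, 100, 200])  # mg VSS/L
for t, XH, XP in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.0f} d  X_H={XH:7.1f}  X_P={XP:6.1f}  VSS={XH + XP:7.1f}")
```

With bH this low, the VSS decline over a typical digestion period is modest, which is the practical point the abstract makes about digestion time.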
Design of bearings for rotor systems based on stability
NASA Technical Reports Server (NTRS)
Dhar, D.; Barrett, L. E.; Knospe, C. R.
1992-01-01
Design of rotor systems incorporating stable behavior is of great importance to manufacturers of high speed centrifugal machinery since destabilizing mechanisms (from bearings, seals, aerodynamic cross coupling, noncolocation effects from magnetic bearings, etc.) increase with machine efficiency and power density. A new method of designing bearing parameters (stiffness and damping coefficients or coefficients of the controller transfer function) is proposed, based on a numerical search in the parameter space. The feedback control law is based on a decentralized low order controller structure, and the various design requirements are specified as constraints in the specification and parameter spaces. An algorithm is proposed for solving the problem as a sequence of constrained 'minimax' problems, moving more and more eigenvalues into an acceptable region in the complex plane. The algorithm uses the method of feasible directions to solve the nonlinear constrained minimization problem at each stage. This methodology emphasizes the designer's interaction with the algorithm to generate acceptable designs by relaxing various constraints and changing initial guesses interactively. A design oriented user interface is proposed to facilitate the interaction.
Steering Dynamics of Tilting Narrow Track Vehicle with Passive Front Wheel Design
NASA Astrophysics Data System (ADS)
TAN, Jeffrey Too Chuan; ARAKAWA, Hiroki; SUDA, Yoshihiro
2016-09-01
In recent years, the narrow track vehicle has emerged as a potential candidate for the next generation of urban transportation systems, being greener and space effective. Vehicle body tilting has been a symbolic characteristic of such vehicles, with the purpose of maintaining stability despite the narrow track body. However, the coordination between active steering and vehicle tilting requires considerable driving skill in order to achieve effective stability. In this work, we propose an alternative steering method with a passive front wheel that mechanically follows the vehicle body tilting. The objective of this paper is to investigate the steering dynamics of the vehicle under various design parameters of the passive front wheel. Modeling of a three-wheel tilting narrow track vehicle and multibody dynamics simulations were conducted to study the effects of two important front wheel design parameters, i.e., caster angle and trail, on the vehicle steering dynamics in terms of steering response time, turning radius, steering stability and resiliency towards external disturbance. From the results of the simulation studies, we have verified the relationships of these two front wheel design parameters to the vehicle steering dynamics.
Approach to in-process tool wear monitoring in drilling: Application of Kalman filter theory
NASA Astrophysics Data System (ADS)
He, Ning; Zhang, Youzhen; Pan, Liangxian
1993-05-01
Tool wear and wear rate, two parameters often used in adaptive control, are important factors affecting machinability. In this paper, an attempt is made to use modern cybernetics to solve the in-process tool wear monitoring problem by applying Kalman filter theory to monitor drill wear quantitatively. Based on the experimental results, a dynamic model, a measuring model and a measurement conversion model suitable for the Kalman filter are established. It is proved that the monitoring system possesses complete observability but does not possess complete controllability. A discriminant for selecting the characteristic parameters is put forward, and the thrust force Fz is selected as the characteristic parameter for monitoring tool wear by this discriminant. An in-process Kalman filter drill wear monitoring system composed of a force sensor, microphotography and a microcomputer is established. The results obtained by the Kalman filter, a common indirect measuring method and the real drill wear measured with the aid of microphotography are compared. The result shows that the Kalman filter has high measurement precision and that the real-time requirement can be satisfied.
Simulation of carbohydrates, from molecular docking to dynamics in water.
Sapay, Nicolas; Nurisso, Alessandra; Imberty, Anne
2013-01-01
Modeling of carbohydrates is particularly challenging because of the variety of structures resulting from the large number of monosaccharides and possible linkages, and also because of their intrinsic flexibility. The development of carbohydrate parameters for molecular modeling is still an active field. Nowadays, the main carbohydrate force fields are GLYCAM06, CHARMM36, and GROMOS 45A4. GLYCAM06 includes the largest choice of compounds and is compatible with the AMBER force fields and associated tools. Furthermore, AMBER includes tools for the implementation of new parameters. When looking at protein-carbohydrate interactions, the choice of the starting structure is of importance. Such a complex can sometimes be obtained from the Protein Data Bank, although the stereochemistry of sugars may require some corrections. When no experimental data are available, molecular docking simulation is generally used to obtain the protein-carbohydrate complex coordinates. As molecular docking parameters are not specifically dedicated to carbohydrates, inaccuracies should be expected, especially for the docking of polysaccharides. This issue can be addressed at least partially by combining molecular docking with molecular dynamics simulation in water.
Heliostat calibration using attached cameras and artificial targets
NASA Astrophysics Data System (ADS)
Burisch, Michael; Sanchez, Marcelino; Olarra, Aitor; Villasante, Cristobal
2016-05-01
The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. To control the heliostats with such a precision requires the accurate knowledge of the motion of each of them. The motion of each heliostat can be described by a set of parameters, most notably the position and axis configuration. These parameters have to be determined individually for each heliostat during a calibration process. With the ongoing development of small sized heliostats, the ability to automatically perform such a calibration becomes more and more crucial as possibly hundreds of thousands of heliostats are involved. Furthermore, efficiency becomes an important factor as small sized heliostats potentially have to be recalibrated far more often, due to the limited stability of the components. In the following we present an automatic calibration procedure using cameras attached to each heliostat which are observing different targets spread throughout the solar field. Based on a number of observations of these targets under different heliostat orientations, the parameters describing the heliostat motion can be estimated with high precision.
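The estimation step can be sketched as a nonlinear least-squares fit of motion-model parameters to target observations; the kinematic model below is a drastically simplified stand-in (two angular offsets only), whereas a real heliostat model includes position and axis-configuration parameters:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy calibration: recover unknown axis offsets from observed target
# directions under many commanded orientations. Geometry is a placeholder.
true_offsets = np.array([0.010, -0.004])            # rad, unknown in practice

def predicted_direction(cmd_angles, offsets):
    return cmd_angles + offsets                      # stand-in kinematic model

cmd = np.random.default_rng(0).uniform(-0.5, 0.5, (20, 2))   # commanded orientations
obs = predicted_direction(cmd, true_offsets) \
      + np.random.default_rng(1).normal(0, 1e-4, (20, 2))    # camera observations

def residuals(offsets):
    return (predicted_direction(cmd, offsets) - obs).ravel()

fit = least_squares(residuals, x0=np.zeros(2))
print("estimated offsets (rad):", fit.x.round(5))
```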
An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood
NASA Astrophysics Data System (ADS)
Dinh, Khanh N.; Sidje, Roger B.
2017-12-01
Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest to directly obtain the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address the challenge and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutual inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach of parameter estimation that is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.
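The FSP idea can be sketched on a toy birth-death model: truncate the state space to 0..N and propagate the probability vector with a matrix exponential (rates below are illustrative, not the yeast network's):

```python
import numpy as np
from scipy.linalg import expm

# FSP on a birth-death toy model: production rate k, degradation rate g.
# Build the truncated CME generator A, where A[m, n] is the rate from n to m.
k, g, N = 10.0, 1.0, 40
A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += k        # birth: n -> n+1
        A[n, n] -= k
    if n > 0:
        A[n - 1, n] += g * n    # death: n -> n-1
        A[n, n] -= g * n

p0 = np.zeros(N + 1)
p0[0] = 1.0
p = expm(A * 2.0) @ p0          # transient distribution p(t) at t = 2
print(f"mass kept in projection: {p.sum():.6f}, "
      f"mean copy number: {(np.arange(N + 1) * p).sum():.2f}")
```

In a likelihood-based fitting loop, this transient distribution would be recomputed for each candidate parameter set, which is why the efficiency of the Krylov-FSP-SSA evaluation matters.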
Multivariate analysis of ATR-FTIR spectra for assessment of oil shale organic geochemical properties
Washburn, Kathryn E.; Birdwell, Justin E.
2013-01-01
In this study, attenuated total reflectance (ATR) Fourier transform infrared spectroscopy (FTIR) was coupled with partial least squares regression (PLSR) analysis to relate spectral data to parameters from total organic carbon (TOC) analysis and programmed pyrolysis to assess the feasibility of developing predictive models to estimate important organic geochemical parameters. The advantage of ATR-FTIR over traditional analytical methods is that source rocks can be analyzed in the laboratory or field in seconds, facilitating more rapid and thorough screening than would be possible using other tools. ATR-FTIR spectra, TOC concentrations and Rock–Eval parameters were measured for a set of oil shales from deposits around the world and several pyrolyzed oil shale samples. PLSR models were developed to predict the measured geochemical parameters from infrared spectra. Application of the resulting models to a set of test spectra excluded from the training set generated accurate predictions of TOC and most Rock–Eval parameters. The critical region of the infrared spectrum for assessing S1, S2, Hydrogen Index and TOC consisted of aliphatic organic moieties (2800–3000 cm−1) and the models generated a better correlation with measured values of TOC and S2 than did integrated aliphatic peak areas. The results suggest that combining ATR-FTIR with PLSR is a reliable approach for estimating useful geochemical parameters of oil shales that is faster and requires less sample preparation than current screening methods.
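A sketch of the PLSR workflow with scikit-learn, using synthetic "spectra" in place of the measured ATR-FTIR data; all dimensions and values are placeholders:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLSR: predict a geochemical parameter (e.g., TOC) from many collinear
# absorbance channels. Spectra and TOC values below are synthetic.
rng = np.random.default_rng(0)
wavenumbers = 500                      # spectral channels (stand-in)
toc = rng.uniform(2, 30, 80)           # training TOC values (wt %, synthetic)
spectra = np.outer(toc, rng.random(wavenumbers)) \
          + rng.normal(0, 0.5, (80, wavenumbers))

pls = PLSRegression(n_components=5).fit(spectra, toc)
test = np.outer([10.0, 25.0], spectra[0] / toc[0])   # two synthetic test spectra
print("predicted TOC:", pls.predict(test).ravel().round(1))
```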
NASA Astrophysics Data System (ADS)
Moritzer, E.; Leister, C.
2014-05-01
The industrial use of atmospheric pressure plasmas in the plastics processing industry has increased significantly in recent years. Users of this treatment process have the possibility to influence the target values (e.g. bond strength or surface energy) with the help of kinematic and electrical parameters. Until now, systematic procedures have been used with which the parameters can be adapted to the process or product requirements but only by very time-consuming methods. For this reason, the relationship between influencing values and target values will be examined based on the example of a pretreatment in the bonding process with the help of statistical experimental design. Because of the large number of parameters involved, the analysis is restricted to the kinematic and electrical parameters. In the experimental tests, the following factors are taken as parameters: gap between nozzle and substrate, treatment velocity (kinematic data), voltage and duty cycle (electrical data). The statistical evaluation shows significant relationships between the parameters and surface energy in the case of polypropylene. An increase in the voltage and duty cycle increases the polar proportion of the surface energy, while a larger gap and higher velocity leads to lower energy levels. The bond strength of the overlapping bond is also significantly influenced by the voltage, velocity and gap. The direction of their effects is identical with those of the surface energy. In addition to the kinematic influences of the motion of an atmospheric pressure plasma jet, it is therefore especially important that the parameters for the plasma production are taken into account when designing the pretreatment processes.
Which echocardiographic parameter is a better marker of volume status in hemodialysis patients?
Sabaghian, Tahereh; Hajibaratali, Bahareh; Samavat, Shiva
2016-11-01
Bio-impedance analysis (BIA) is a preferred method for estimating the volume status. However, it cannot be utilized in daily practice. Since the assessment of the volume status is important and challenging for hemodialysis (HD) patients, the aim of this study was to determine the volume status in chronic HD patients using echocardiographic parameters and to assess its correlation with BIA. In this cross-sectional analysis, echocardiography and BIA were performed on 30 chronic HD patients 30 min before and 30 min after dialysis. All the dialysis sessions were performed in the middle of the week. This study also assessed the correlation between echocardiographic parameters and BIA parameters. There were significant differences in ECW, TBW, and TBW% (TBW/W) before and after HD. Significant differences were also observed in the echocardiographic parameters IVCD, IVCDimin, and IVCDimax before and after HD. LVEDD, LVESD, LA area, mitral valve inflow, E/E', and IVRT improved after dialysis, too. There was a significant correlation between IVCDimin, as an index of volume status, and ECW% and TBW% before HD, and the change in IVCDimin after dialysis correlated significantly with the change in %ECW after dialysis. Comparison between the hypertensive and non-hypertensive groups indicated that IVCDimin was significantly lower in the non-hypertensive group after dialysis. Our results showed a correlation between IVCDimin and BIA parameters before HD, so it seems that IVCDimin can be a good parameter for determining the volume status of HD patients. However, further studies, with larger sample sizes and a prospective design, are required to confirm these results.
Neuert, Mark A C; Dunning, Cynthia E
2013-09-01
Strain energy-based adaptive material models are used to predict bone resorption resulting from stress shielding induced by prosthetic joint implants. Generally, such models are governed by two key parameters: a homeostatic strain-energy state (K) and a threshold deviation from this state required to initiate bone reformation (s). A refinement procedure has been performed to estimate these parameters in the femur and glenoid; this study investigates the specific influences of these parameters on the resulting density distributions in the distal ulna. A finite element model of a human ulna was created using micro-computed tomography (µCT) data, initialized to a homogeneous density distribution, and subjected to approximate in vivo loading. Values for K and s were tested, and the resulting steady-state density distribution was compared with values derived from µCT images. The sensitivity of these parameters to initial conditions was examined by altering the initial homogeneous density value. The refined model parameters were then applied to six additional human ulnae to determine their performance across individuals. Model accuracy using the refined parameters was found to be comparable with that found in previous studies of the glenoid and femur, and gross bone structures, such as the cortical shell and medullary canal, were reproduced. The model was found to be insensitive to initial conditions; however, a fair degree of variation was observed between the six specimens. This work represents an important contribution to the study of changes in load transfer in the distal ulna following the implantation of commercial orthopedic implants.
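A minimal sketch of this class of strain-energy adaptive rule with a "lazy zone" defined by K and s; the stimulus surrogate, rate constant, and density bounds are illustrative assumptions standing in for the finite element response and the study's calibrated values.

```python
# Strain-energy adaptive remodeling sketch with a lazy zone
# (illustrative constants, not the study's calibrated parameters).
import numpy as np

K, s = 0.004, 0.5        # homeostatic stimulus, threshold fraction (assumed)
B, dt = 1.0, 1.0         # remodeling rate constant and time step (assumed)
rho = np.linspace(0.3, 1.5, 1000)   # element densities, heterogeneous start

def stimulus(rho):
    """Strain energy density per unit mass (toy surrogate for an FE solve)."""
    return 0.004 * (0.8 / rho)

for _ in range(200):
    U = stimulus(rho)
    drive = U - K
    # No adaptation inside the lazy zone [K - s*K, K + s*K].
    drive = np.where(np.abs(drive) > s * K, drive, 0.0)
    rho = np.clip(rho + B * drive * dt, 0.01, 1.8)  # density bounds (assumed)
# Elements outside the lazy zone converge toward the homeostatic state;
# those inside it retain their density, giving a distribution of values.
```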
NASA Astrophysics Data System (ADS)
Glover, Paul W. J.
2016-07-01
When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
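A minimal sketch of the two fits discussed above on synthetic formation-factor data, showing how an assumed systematic porosity error drives the fitted a away from unity while the Winsauer-form cementation exponent stays close to the true value.

```python
# Contrast the original Archie law F = phi**(-m) with the Winsauer form
# F = a * phi**(-m) on synthetic data carrying a systematic porosity error.
import numpy as np

rng = np.random.default_rng(1)
m_true = 2.0
phi = rng.uniform(0.05, 0.3, 50)
F = phi ** -m_true
phi_meas = phi + 0.01             # systematic porosity error (assumed)

# Winsauer fit: log10 F = log10 a - m log10 phi
slope, intercept = np.polyfit(np.log10(phi_meas), np.log10(F), 1)
m_winsauer, a = -slope, 10 ** intercept

# Original form forces a = 1: m = -log10(F) / log10(phi), averaged
m_original = np.mean(-np.log10(F) / np.log10(phi_meas))
print(m_winsauer, a, m_original)  # a != 1 absorbs the systematic error
```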
Influence of model parameters on synthesized high-frequency strong-motion waveforms
NASA Astrophysics Data System (ADS)
Zadonina, Ekaterina; Caldeira, Bento; Bezzeghoud, Mourad; Borges, José F.
2010-05-01
Waveform modeling is an important and helpful instrument of modern seismology that may provide valuable information. However, synthesizing seismograms requires defining many parameters, which affect the final result in different ways. Such parameters include the design of the grid, the structure model, the source time functions, the source mechanism, and the rupture velocity. Variations in parameters may produce significantly different seismograms. We synthesize seismograms from a hypothetical earthquake and numerically estimate the influence of some of the parameters used. Firstly, we present the results for high-frequency near-fault waveforms obtained from a defined model by changing the tested parameters. Secondly, we present the results of a quantitative comparison of the contributions of certain parameters to the synthetic waveforms by using misfit criteria. For the synthesis of waveforms we used the 2D/3D elastic finite-difference wave propagation code E3D [1], based on the elastodynamic formulation of the wave equation on a staggered grid. This code gave us the opportunity to perform all the needed computations using a computer cluster. To assess the obtained results, we use misfit criteria [2] in which seismograms are compared in time-frequency and phase by applying a continuous wavelet transform to the seismic signal. [1] - Larsen, S. and C.A. Schultz (1995). ELAS3D: 2D/3D elastic finite-difference wave propagation code, Technical Report No. UCRL-MA-121792, 19 pp. [2] - Kristekova, M., Kristek, J., Moczo, P., Day, S.M., 2006. Misfit criteria for quantitative comparison of seismograms. Bull. Seism. Soc. Am. 96(5), 1836-1850.
A multiple functional connector for high-resolution optical satellites
NASA Astrophysics Data System (ADS)
She, Fengke; Zheng, Gangtie
2017-11-01
For earth observation satellites, perturbations from actuators, such as CMGs and momentum wheels, and thermal loadings from support structures often have a significant impact on the image quality of an optical instrument. Therefore, vibration isolators and thermal deformation releasing devices have become important parts of an imaging satellite. However, all these devices weaken the connection stiffness between the optical instrument and the satellite bus structure, which raises concerns in attitude control system design about possible negative effects on attitude control. A connection design satisfying all three requirements is therefore a challenge for advanced imaging satellites. Chinese scientists have proposed a large-aperture high-resolution satellite for earth observation. To meet all these requirements and ensure image quality, dedicated multiple-function connectors are designed to meet three challenging requirements: isolating vibration, releasing thermal deformation and ensuring whole-satellite dynamic properties [1]. In this paper, a parallel spring guide flexure is developed for both vibration isolation and thermal deformation releasing. The stiffness of the flexure is designed to meet the vibration isolation requirement. To attenuate vibration, and more importantly to satisfy the stability requirement of the attitude control system, metal damping, which has many merits for space applications, is applied in this connector to provide a high damping ratio and nonlinear stiffness. The capability of the connector for vibration isolation and attenuation is validated through numerical simulation and experiments. Connector parameter optimization is also conducted to meet both the thermal deformation releasing and the attitude control requirements. Analysis results show that the in-orbit attitude control requirement is satisfied while the thermal releasing performance is optimized. The design methods and analysis results are also provided in the present paper.
Vargas-Melendez, Leandro; Boada, Beatriz L; Boada, Maria Jesus L; Gauchia, Antonio; Diaz, Vicente
2017-04-29
Vehicles with a high center of gravity (COG), such as light trucks and heavy vehicles, are prone to rollover. This kind of accident causes nearly 33% of all deaths from passenger vehicle crashes. Nowadays, these vehicles are incorporating roll stability control (RSC) systems to improve their safety. Most RSC systems require the vehicle roll angle as a known input variable to predict the lateral load transfer. The vehicle roll angle can be directly measured by a dual-antenna global positioning system (GPS), but this is expensive. For this reason, it is important to estimate the vehicle roll angle from sensors already installed on current vehicles. On the other hand, knowledge of the vehicle's parameter values is essential to obtain an accurate vehicle response. Some vehicle parameters cannot be easily obtained, and they can vary over time. In this paper, an algorithm for the simultaneous on-line estimation of the vehicle's roll angle and parameters is proposed. This algorithm uses a probability density function (PDF)-based truncation method in combination with a dual Kalman filter (DKF) to guarantee that both the vehicle's states and parameters are within bounds that have a physical meaning, using the information obtained from sensors mounted on vehicles. Experimental results show the effectiveness of the proposed algorithm.
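A minimal one-dimensional sketch of the PDF-truncation idea, assuming a Gaussian roll-angle estimate and illustrative physical bounds; the paper embeds this step inside a dual Kalman filter, which is not reproduced here.

```python
# One-dimensional illustration of PDF truncation: after a Kalman
# update, replace the Gaussian estimate by its truncation to a
# physically meaningful interval (bounds and moments are assumptions).
from scipy.stats import truncnorm

mu, sigma = 12.0, 4.0            # posterior mean/std of roll angle, deg
lo, hi = -10.0, 10.0             # physical bounds (assumed)

a, b = (lo - mu) / sigma, (hi - mu) / sigma   # standardized bounds
tn = truncnorm(a, b, loc=mu, scale=sigma)
mu_trunc, var_trunc = tn.mean(), tn.var()     # moments fed back to the filter
print(mu_trunc, var_trunc)
```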
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Bin, E-mail: bins@ieee.org
2014-07-01
We describe an algorithm that can adaptively provide mixture summaries of multimodal posterior distributions. The parameter space of the involved posteriors ranges in size from a few dimensions to dozens of dimensions. This work was motivated by an astrophysical problem called extrasolar planet (exoplanet) detection, wherein the computation of stochastic integrals that are required for Bayesian model comparison is challenging. The difficulty comes from the highly nonlinear models that lead to multimodal posterior distributions. We resort to importance sampling (IS) to estimate the integrals, and thus translate the problem into finding a parametric approximation of the posterior. To capture the multimodal structure in the posterior, we initialize a mixture proposal distribution and then tailor its parameters elaborately to make it resemble the posterior to the greatest extent possible. We use the effective sample size (ESS) calculated from the IS draws to measure the degree of approximation: the bigger the ESS, the better the proposal resembles the posterior. A difficulty within this tailoring operation lies in the adjustment of the number of mixture components. Brute-force methods simply preset it as a large constant, which increases the required computational resources. We provide an iterative delete/merge/add process, which works in tandem with an expectation-maximization step to tailor this number online. The efficiency of our proposed method is tested via both simulation studies and real exoplanet data analysis.
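A minimal sketch of the ESS criterion used to judge the proposal, with stand-in densities in place of the exoplanet posterior; the delete/merge/add tailoring loop is not reproduced.

```python
# Effective sample size of importance-sampling weights: the closer the
# proposal is to the target, the larger the ESS (densities here are
# illustrative stand-ins, not the exoplanet posterior).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def target_pdf(x):       # bimodal stand-in posterior
    return 0.5 * norm.pdf(x, -2, 0.5) + 0.5 * norm.pdf(x, 2, 0.5)

proposal = norm(0, 3)    # a deliberately poor single-component proposal
x = proposal.rvs(size=10_000, random_state=rng)
w = target_pdf(x) / proposal.pdf(x)          # importance weights
ess = w.sum() ** 2 / (w ** 2).sum()          # bigger is better
print(f"ESS = {ess:.0f} of {x.size} draws")
```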
NASA Technical Reports Server (NTRS)
Baumback, J. I.; Davies, A. N.; Vonirmer, A.; Lampen, P. H.
1995-01-01
To assist peak assignment in ion mobility spectrometry it is important to have quality reference data. The reference collection should be stored in a database system which can be searched using spectral or substance information. We propose to build such a database customized for ion mobility spectra. Initially it is important to reach a critical mass of data in the collection quickly: we wish to obtain as many spectra, combined with their IMS parameters, as possible. Spectra suppliers will be rewarded for their participation with access to the database. To make the data exchange between users and the system administration possible, it is important to define a file format made specially for the requirements of ion mobility spectra. The format should be computer readable and flexible enough for extensive comments to be included. In this document we propose a data exchange format, and we invite comments on it. For international data exchange it is important to have a standard exchange format. We propose to base the definition of this format on the JCAMP-DX protocol, which was developed for the exchange of infrared spectra. This standard, produced by the Joint Committee on Atomic and Molecular Physical Data, is of a flexible design. The aim of this paper is to adapt JCAMP-DX to the special requirements of ion mobility spectra.
Green disease in optical coherence tomography diagnosis of glaucoma.
Sayed, Mohamed S; Margolis, Michael; Lee, Richard K
2017-03-01
Optical coherence tomography (OCT) has become an integral component of modern glaucoma practice. Utilizing color codes, OCT analysis has rendered glaucoma diagnosis and follow-up simpler and faster for the busy clinician. However, green labeling of OCT parameters suggesting normal values may confer a false sense of security, potentially leading to missed diagnoses of glaucoma and/or glaucoma progression. Conditions in which OCT color coding may be falsely negative (i.e., green disease) are identified. Early glaucoma in which retinal nerve fiber layer (RNFL) thickness and optic disc parameters, albeit labeled green, are asymmetric in both eyes may result in glaucoma being undetected. Progressively decreasing RNFL thickness may reveal the presence of progressive glaucoma that, because of green labeling, can be missed by the clinician. Other ocular conditions that can increase RNFL thickness can make the diagnosis of coexisting glaucoma difficult. Recently introduced progression analysis features of OCT may help detect green disease. Recognition of green disease is of paramount importance in diagnosing and treating glaucoma. Understanding the limitations of imaging technologies coupled with evaluation of serial OCT analyses, prompt clinical examination, and structure-function correlation is important to avoid missing real glaucoma requiring treatment.
Sensitivity of projected long-term CO2 emissions across the Shared Socioeconomic Pathways
NASA Astrophysics Data System (ADS)
Marangoni, G.; Tavoni, M.; Bosetti, V.; Borgonovo, E.; Capros, P.; Fricko, O.; Gernaat, D. E. H. J.; Guivarch, C.; Havlik, P.; Huppmann, D.; Johnson, N.; Karkatsoulis, P.; Keppo, I.; Krey, V.; Ó Broin, E.; Price, J.; van Vuuren, D. P.
2017-01-01
Scenarios showing future greenhouse gas emissions are needed to estimate climate impacts and the mitigation efforts required for climate stabilization. Recently, the Shared Socioeconomic Pathways (SSPs) have been introduced to describe alternative social, economic and technical narratives, spanning a wide range of plausible futures in terms of challenges to mitigation and adaptation. Thus far the key drivers of the uncertainty in emissions projections have not been robustly disentangled. Here we assess the sensitivities of future CO2 emissions to key drivers characterizing the SSPs. We use six state-of-the-art integrated assessment models with different structural characteristics, and study the impact of five families of parameters, related to population, income, energy efficiency, fossil fuel availability, and low-carbon energy technology development. A recently developed sensitivity analysis algorithm allows us to parsimoniously compute both the direct and interaction effects of each of these drivers on cumulative emissions. The study reveals that the SSP assumptions about energy intensity and economic growth are the most important determinants of future CO2 emissions from energy combustion, both with and without a climate policy. Interaction terms between parameters are shown to be important determinants of the total sensitivities.
Inclusion of TCAF model in XSPEC to study accretion flow dynamics around black hole candidates
NASA Astrophysics Data System (ADS)
Debnath, Dipak; Chakrabarti, Sandip Kumar; Mondal, Santanu
Spectral and temporal properties of black hole candidates can be well understood with the Chakrabarti-Titarchuk solution of the two-component advective flow (TCAF). This model requires two accretion rates, namely the Keplerian disk accretion rate and the sub-Keplerian halo accretion rate, the latter being composed of a low angular momentum flow which may or may not develop a shock. In this solution, the relevant parameter is the relative importance of the halo rate (which creates the Compton cloud region) with respect to the Keplerian disk rate (the soft photon source). Though this model has been used earlier to manually fit data of several black hole candidates quite satisfactorily, for the first time we are able to create a user-friendly version by implementing it as an additive table model FITS file in GSFC/NASA's spectral analysis software package XSPEC. This enables any user to extract the physical parameters of accretion flows, such as the two accretion rates, shock location, and shock strength, for any black hole candidate. Most importantly, unlike any other theoretical model, we show that TCAF is capable of predicting timing properties from spectral fits, since in TCAF a shock is responsible for deciding spectral slopes as well as QPO frequencies.
Leistra, Minze; Wolters, André; van den Berg, Frederik
2008-06-01
Volatilisation of pesticides from crop canopies can be an important emission pathway. In addition to pesticide properties, competing processes in the canopy and environmental conditions play a part. A computation model is being developed to simulate the processes, but only some of the input data can be obtained directly from the literature. Three well-defined experiments on the volatilisation of radiolabelled parathion-methyl (as example compound) from plants in a wind tunnel system were simulated with the computation model. Missing parameter values were estimated by calibration against the experimental results. The resulting thickness of the air boundary layer, rate of plant penetration and rate of phototransformation were compared with a diversity of literature data. The sequence of importance of the canopy processes was: volatilisation > plant penetration > phototransformation. Computer simulation of wind tunnel experiments, with radiolabelled pesticide sprayed on plants, yields values for the rate coefficients of processes at the plant surface. As some of the input data for the simulations are not required in the framework of registration procedures, attempts to estimate missing parameter values on the basis of divergent experimental results have to be continued. Copyright (c) 2008 Society of Chemical Industry.
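A minimal sketch of the competing first-order canopy processes, assuming illustrative rate coefficients that respect the reported ordering (volatilisation > plant penetration > phototransformation); the full computation model is not reproduced.

```python
# Competing first-order processes on the canopy: the deposit decays at
# the total rate, and each pathway claims its proportional share
# (rate coefficients are illustrative assumptions).
import numpy as np

k_vol, k_pen, k_photo = 0.06, 0.02, 0.005   # 1/h, assumed ordering
k_tot = k_vol + k_pen + k_photo
t = np.linspace(0, 72, 145)                 # hours after application

mass_on_plant = np.exp(-k_tot * t)          # fraction remaining on canopy
frac_volatilised = (k_vol / k_tot) * (1 - np.exp(-k_tot * t))
print(frac_volatilised[-1])   # cumulative volatilised fraction at 72 h
```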
An optimized method for measuring fatty acids and cholesterol in stable isotope-labeled cells
Argus, Joseph P.; Yu, Amy K.; Wang, Eric S.; Williams, Kevin J.; Bensinger, Steven J.
2017-01-01
Stable isotope labeling has become an important methodology for determining lipid metabolic parameters of normal and neoplastic cells. Conventional methods for fatty acid and cholesterol analysis have one or more issues that limit their utility for in vitro stable isotope-labeling studies. To address this, we developed a method optimized for measuring both fatty acids and cholesterol from small numbers of stable isotope-labeled cultured cells. We demonstrate quantitative derivatization and extraction of fatty acids from a wide range of lipid classes using this approach. Importantly, cholesterol is also recovered, albeit at a modestly lower yield, affording the opportunity to quantitate both cholesterol and fatty acids from the same sample. Although we find that background contamination can interfere with quantitation of certain fatty acids in low amounts of starting material, our data indicate that this optimized method can be used to accurately measure mass isotopomer distributions for cholesterol and many fatty acids isolated from small numbers of cultured cells. Application of this method will facilitate acquisition of lipid parameters required for quantifying flux and provide a better understanding of how lipid metabolism influences cellular function. PMID:27974366
NASA Astrophysics Data System (ADS)
Lee, Hyunki; Kim, Min Young; Moon, Jeon Il
2017-12-01
Phase measuring profilometry and moiré methodology have been widely applied to the three-dimensional shape measurement of target objects because of their high measuring speed and accuracy. However, these methods suffer from an inherent limitation called the correspondence problem, or 2π-ambiguity problem. Although a sensing method combining well-known stereo vision with the phase measuring profilometry (PMP) technique has been developed to overcome this problem, it still requires definite improvement in sensing speed and measurement accuracy. We propose a dynamic programming-based stereo PMP method to acquire more reliable depth information within a relatively short time. The proposed method efficiently fuses information from two stereo sensors in terms of phase and intensity simultaneously, based on a newly defined cost function for the dynamic programming. In addition, the important parameters are analyzed from the viewpoint of the 2π-ambiguity problem and measurement accuracy. To analyze the influence of the important hardware and software parameters on the measurement performance, and to verify the method's efficiency, accuracy, and sensing speed, a series of experimental tests was performed with various objects and sensor configurations.
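A minimal sketch of the wrapped-phase computation at the core of PMP, assuming a standard four-step (90°) phase-shifting scheme and synthetic fringe images; the stereo fusion and dynamic programming stages are not reproduced.

```python
# Four-step phase shifting: with I_k = A + B*cos(phi + k*pi/2), the
# wrapped phase follows from an arctangent (images here are synthetic).
import numpy as np

h, w = 64, 64
phi_true = np.linspace(0, 6 * np.pi, w)[None, :].repeat(h, axis=0)
A, B = 0.5, 0.4
I = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]

phi_wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])   # in (-pi, pi]
# The 2*pi-ambiguity discussed above is exactly the unknown integer
# fringe order needed to unwrap phi_wrapped back into phi_true.
```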
Jakobsen, Sofie; Jensen, Frank
2014-12-09
We assess the accuracy of force field (FF) electrostatics at several levels of approximation, from the standard model using fixed partial charges to conformation-specific multipole fits including up to quadrupole moments. Potential-derived point charges and multipoles are calculated using least-squares methods for a total of ∼1000 different conformations of the 20 natural amino acids. In contrast to standard charge fitting schemes, the procedure presented in the current work employs fitting points placed on a single isodensity surface, since the electrostatic potential (ESP) on such a surface determines the ESP at all points outside this surface. We find that the effect of multipoles beyond partial atomic charges is of the same magnitude as the effect of neglecting conformational dependency (i.e., polarizability), suggesting that the two effects should be included at the same level in FF development. The redundancy at both the partial charge and multipole levels of approximation is quantified. We present an algorithm which stepwise reduces or increases the dimensionality of the charge or multipole parameter space and provides an upper limit on the ESP error that can be obtained at a given truncation level. Thereby, we can identify a reduced set of multipole moments corresponding to ∼40% of the total number of multipoles. This subset of parameters provides a significant improvement in the representation of the ESP compared to the simple point charge model and comes close to the accuracy obtained using the complete multipole parameter space. The selection of the ∼40% most important multipole sites is highly transferable among different conformations, and we find that quadrupoles are of high importance for atoms involved in π-bonding, since the anisotropic electric field generated in such regions requires a large degree of flexibility.
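A minimal sketch of potential-derived charge fitting: least squares over ESP fitting points with the total charge constrained through a Lagrange multiplier. The geometry, fitting points, and reference ESP are synthetic assumptions; the multipole and isodensity-surface machinery is not reproduced.

```python
# Fit atomic point charges to the ESP at surface points: minimize
# ||A q - V||^2 subject to sum(q) = Q_total, solved as a KKT system.
import numpy as np

rng = np.random.default_rng(3)
atoms = rng.normal(size=(5, 3))            # 5 atomic positions (assumed)
pts = 4.0 * rng.normal(size=(300, 3))      # fitting points on a "surface"
q_ref = np.array([0.4, -0.3, 0.1, -0.3, 0.1])
A = 1.0 / np.linalg.norm(pts[:, None, :] - atoms[None, :, :], axis=2)
V = A @ q_ref                              # reference ESP (atomic units)

n, Q_total = len(atoms), 0.0
ones = np.ones((n, 1))
kkt = np.block([[A.T @ A, ones], [ones.T, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ V, [Q_total]])
q_fit = np.linalg.solve(kkt, rhs)[:n]
print(q_fit)                               # recovers q_ref in this demo
```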
Chatiza, F P; Bartels, P; Nedambale, T L; Wagenaar, G M
2012-07-15
Information on the reproductive physiology of different wildlife species is important for ex situ conservation using such methods as in vitro fertilization (IVF). Knowledge of species reproductive physiology, and the evaluation of sperm quality using accurate, objective, repeatable methods such as computer-assisted sperm analysis (CASA), have become priorities for ex situ conservation. The aim of this study was to evaluate the motility patterns of antelope epididymal spermatozoa incubated for 4 h under conditions that support bovine IVF, using CASA. Cauda epididymal spermatozoa were collected postmortem from testicles of springbok (N=38), impala (N=26), and blesbok (N=42), and cryopreserved in biladyl containing 7% glycerol. Spermatozoa were thawed and incubated in capacitation media and modified Tyrode lactate (m-TL) IVF media using a protocol developed for domestic cattle IVF. The study evaluated 14 motility characteristics of the antelope epididymal sperm at six time points using CASA. Species differences in CASA parameters evaluated under similar conditions were observed, and several differences in individual motility parameters at the time points were reported for each species. Epididymal sperm of the different antelope species responded differently to capacitation agents, exhibiting variations in hyperactivity. Motility parameters that describe the vigor of sperm decreased over time. Spermatozoa from the different antelope species have different physiological and optimal capacitation and in vitro culture requirements. The interspecies comparison of kinematic parameters of spermatozoa between the antelopes over several end points contributes to comparative sperm physiology, which forms an important step in the development of species-specific assisted reproductive techniques (ARTs) for ex situ conservation of these species. Copyright © 2012 Elsevier Inc. All rights reserved.
Monteyne, Tinne; Vancoillie, Jochem; Remon, Jean-Paul; Vervaet, Chris; De Beer, Thomas
2016-10-01
The pharmaceutical industry has a growing interest in alternative manufacturing models allowing automation and continuous production in order to improve process efficiency and reduce costs. Implementing a switch from batch to continuous processing requires fundamental process understanding and the implementation of quality-by-design (QbD) principles. The aim of this study was to examine the relationship between formulation parameters (binder type, binder concentration, drug-binder miscibility), process parameters (screw speed, powder feed rate and granulation temperature), granule properties (size, size distribution, shape, friability, true density, flowability) and tablet properties (tensile strength, friability, dissolution rate) of four different drug-binder formulations using design of experiments (DOE). Two binders (polyethylene glycol (PEG) and Soluplus®) with different solid states, semi-crystalline and amorphous respectively, were combined with two model drugs, metoprolol tartrate (MPT) and caffeine anhydrous (CAF), both having a contrasting miscibility with the binders. This research revealed that the granule properties of miscible drug-binder systems depended on the powder feed rate and barrel filling degree of the granulator, whereas the granule properties of immiscible systems were mainly influenced by binder concentration. Using an amorphous binder, the tablet tensile strength depended on the granule size. In contrast, granule friability was more important for tablet quality using a brittle binder. However, this was not the case for caffeine-containing blends, since these phenomena were dominated by the enhanced compression properties of caffeine Form I, which was formed during granulation. Hence, it is important to gain knowledge about formulation behavior during processing, since this influences the effect of process parameters on the granule and tablet properties. Copyright © 2016 Elsevier B.V. All rights reserved.
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
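A minimal sketch of the maximum likelihood principle for spike trains, assuming a simple renewal model with gamma-distributed inter-spike intervals as a stand-in; the linear variable-threshold neuron model fitted in the paper is not reproduced.

```python
# Maximum likelihood for a spike train under a renewal model: choose
# parameters minimizing the negative log-likelihood of the observed
# inter-spike intervals (the gamma model is an illustrative stand-in).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

rng = np.random.default_rng(4)
isis = gamma.rvs(a=3.0, scale=0.02, size=500, random_state=rng)

def negloglik(theta):
    shape, scale = np.exp(theta)           # enforce positivity
    return -gamma.logpdf(isis, a=shape, scale=scale).sum()

res = minimize(negloglik, x0=np.log([1.0, 0.1]), method="Nelder-Mead")
print(np.exp(res.x))                       # recovered (shape, scale)
```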
NASA Astrophysics Data System (ADS)
Gleason, Colin J.; Smith, Laurence C.; Lee, Jinny
2014-12-01
Knowledge of river discharge is critically important for water resource management, climate modeling, and improved understanding of the global water cycle, yet discharge is poorly known in much of the world. Remote sensing holds promise to mitigate this gap, yet current approaches for quantitative retrievals of river discharge require in situ calibration or a priori knowledge of river hydraulics, limiting their utility in unmonitored regions. Recently, Gleason and Smith (2014) demonstrated discharge retrievals within 20-30% of in situ observations solely from Landsat TM satellite images through discovery of a river-specific geomorphic scaling phenomenon termed at-many-stations hydraulic geometry (AMHG). This paper advances the AMHG discharge retrieval approach via additional parameter optimizations and validation on 34 gauged rivers spanning a diverse range of geomorphic and climatic settings. Sensitivity experiments reveal that discharge retrieval accuracy varies with river morphology, reach averaging procedure, and optimization parameters. Quality of remotely sensed river flow widths is also important. Recommended best practices include a proposed global parameter set for use when a priori information is unavailable. Using this global parameterization, AMHG discharge retrievals are successful for most investigated river morphologies (median RRMSE 33% of in situ gauge observations), except braided rivers (median RRMSE 74%), rivers having low at-a-station hydraulic geometry b exponents (reach-averaged b < 0.1, median RRMSE 86%), and arid rivers having extreme discharge variability (median RRMSE > 1000%). Excluding such environments, 26-41% RRMSE agreement between AMHG discharge retrievals and in situ gauge observations suggests AMHG can meaningfully address global discharge knowledge gaps solely from repeat satellite imagery.
Remote control of the industry processes. POWERLINK protocol application
NASA Astrophysics Data System (ADS)
Wóbel, A.; Paruzel, D.; Paszkiewicz, B.
2017-08-01
Present technological development enables the use of solutions characterized by a lower failure rate and greater precision of operation, which makes it possible to obtain efficient, fast and reliable production of individual components. The main scope of this article is the application of the POWERLINK protocol for communication with a B&R controller over Ethernet in order to record process parameters. This enables control of the production cycle using an internal network connected to an industrial PC. Knowledge of the most important production parameters in real time allows a failure to be detected immediately after it occurs. For this purpose, a diagnostic station based on a B&R X20CP1301 controller was built to record measurement data such as the pressure and temperature across the valve and the torque required to change the valve setting. The use of the POWERLINK protocol allows the transmission of status information every 200 μs.
NASA Astrophysics Data System (ADS)
Cortés, J.-C.; Colmenar, J.-M.; Hidalgo, J.-I.; Sánchez-Sánchez, A.; Santonja, F.-J.; Villanueva, R.-J.
2016-01-01
Academic performance is a concern of paramount importance in Spain, where around 30% of the students in the last two years of high school, before entering the labor market or university, do not achieve the minimum knowledge required by the Spanish educational law in force. In order to analyze this problem, we propose a random network model to study the dynamics of academic performance in Spain. Our approach is based on the idea that both good and bad study habits are a mixture of personal decisions and the influence of classmates. Moreover, in order to account for the uncertainty in the estimation of the model parameters, we perform a large number of simulations, taking as model parameters those returned by the Differential Evolution algorithm that best fit the data. This technique permits forecasting model trends over the next few years using confidence intervals.
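A minimal sketch of the fitting step, assuming a toy scalar model and hypothetical observations; it shows only how SciPy's Differential Evolution returns best-fit parameters for a misfit function, not the random network model itself.

```python
# Differential Evolution minimizes the misfit between model output and
# observed data (toy model and data below are assumptions).
import numpy as np
from scipy.optimize import differential_evolution

years = np.arange(6)
observed = np.array([0.30, 0.29, 0.31, 0.30, 0.28, 0.29])  # hypothetical

def model(params, t):
    p0, r = params                 # initial fraction, relaxation rate
    return 0.30 + (p0 - 0.30) * np.exp(-r * t)

def misfit(params):
    return np.sum((model(params, years) - observed) ** 2)

result = differential_evolution(misfit,
                                bounds=[(0.0, 1.0), (0.0, 2.0)], seed=5)
print(result.x, result.fun)
```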
NASA Technical Reports Server (NTRS)
James, G. K.; Ajello, J. M.; Kanik, I.; Slevin, J.; Franklin, B.; Shemansky, D.
1993-01-01
The electron-atomic hydrogen scattering system is an important testing ground for theoretical models and has received a great deal of attention from experimentalists and theoreticians alike over the years. A complete description of the excitation process requires a knowledge of many different parameters, and experimental measurements of these parameters have been performed in various laboratories around the world. As far as total cross section data are concerned it has been noted that the discrepancy between the data of Long et al. and Williams for n = 2 excitations needs to be resolved in the interests of any further refinement of theory. We report new measurements of total cross sections and atomic line polarizations for both n=2 and n=3 excitations at energies from threshold to 2000 eV...
Machine Learning Predictions of a Multiresolution Climate Model Ensemble
NASA Astrophysics Data System (ADS)
Anderson, Gemma J.; Lucas, Donald D.
2018-05-01
Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
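A minimal sketch of the emulation idea on synthetic data: a random forest is trained on model parameters plus a resolution flag, so that many cheap low-resolution runs support predictions at high resolution. Feature dimensions, sample counts, and the response are all assumptions.

```python
# Random-forest emulator over a multiresolution ensemble: low- and
# high-resolution runs share a feature space with a resolution flag.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n_lo, n_hi, n_par = 500, 40, 10
X_par = rng.random((n_lo + n_hi, n_par))            # perturbed parameters
res_flag = np.r_[np.zeros(n_lo), np.ones(n_hi)]     # 0 = low, 1 = high res
X = np.column_stack([X_par, res_flag])
y = X_par @ rng.random(n_par) + 0.3 * res_flag      # synthetic response

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
X_new = np.column_stack([rng.random((5, n_par)), np.ones(5)])
print(rf.predict(X_new))      # "high-resolution" predictions
```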
An intelligent identification algorithm for the monoclonal picking instrument
NASA Astrophysics Data System (ADS)
Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun
2017-11-01
Traditional colony selection is mainly performed manually, which is inefficient and subjective. Therefore, it is important to develop an automatic monoclonal-picking instrument, and the critical stage of automatic monoclonal picking and intelligent optimal selection is the identification algorithm. An auto-screening algorithm based on the Support Vector Machine (SVM) is proposed in this paper; it uses supervised learning combined with colony morphological characteristics to classify colonies accurately. From the basic morphological features of a colony, the system computes a series of morphological parameters step by step. A maximal-margin classifier was established and, based on analysis of colony growth trends, monoclonal colonies were selected. The experimental results showed that the auto-screening algorithm could separate regular colonies from the rest, meeting the requirements on the various parameters.
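A minimal sketch of the maximal-margin screening step, assuming three hypothetical morphological features (area, circularity, mean intensity) and synthetic labels in place of real colony images.

```python
# SVM screening on colony morphology features (synthetic assumptions).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
regular = rng.normal([120, 0.9, 0.6], [20, 0.05, 0.1], size=(100, 3))
irregular = rng.normal([80, 0.6, 0.4], [40, 0.15, 0.2], size=(100, 3))
X = np.vstack([regular, irregular])
y = np.r_[np.ones(100), np.zeros(100)]    # 1 = pickable colony

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[110, 0.85, 0.55]]))   # classify a new colony
```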
Calculation of Stress Intensity Factors for Interfacial Cracks in Fiber Metal Laminates
NASA Technical Reports Server (NTRS)
Wang, John T.
2009-01-01
Stress intensity factors for interfacial cracks in Fiber Metal Laminates (FML) are computed by using the displacement ratio method recently developed by Sun and Qian (1997, Int. J. Solids. Struct. 34, 2595-2609). Various FML configurations with single and multiple delaminations subjected to different loading conditions are investigated. The displacement ratio method requires the total energy release rate, bimaterial parameters, and relative crack surface displacements as input. Details of generating the energy release rates, defining bimaterial parameters with anisotropic elasticity, and selecting proper crack surface locations for obtaining relative crack surface displacements are discussed in the paper. Even though the individual energy release rates are nonconvergent, mesh-size-independent stress intensity factors can be obtained. This study also finds that the selection of reference length can affect the magnitudes and the mode mixity angles of the stress intensity factors; thus, it is important to report the reference length used with the calculated stress intensity factors.
A Simple fMRI Compatible Robotic Stimulator to Study the Neural Mechanisms of Touch and Pain.
Riillo, F; Bagnato, C; Allievi, A G; Takagi, A; Fabrizi, L; Saggio, G; Arichi, T; Burdet, E
2016-08-01
This paper presents a simple device for the investigation of the human somatosensory system with functional magnetic imaging (fMRI). PC-controlled pneumatic actuation is employed to produce innocuous or noxious mechanical stimulation of the skin. Stimulation patterns are synchronized with fMRI and other relevant physiological measurements like electroencephalographic activity and vital physiological parameters. The system allows adjustable regulation of stimulation parameters and provides consistent patterns of stimulation. A validation experiment demonstrates that the system safely and reliably identifies clusters of functional activity in brain regions involved in the processing of pain. This new device is inexpensive, portable, easy-to-assemble and customizable to suit different experimental requirements. It provides robust and consistent somatosensory stimulation, which is of crucial importance to investigating the mechanisms of pain and its strong connection with the sense of touch.
A Review on Investigation and Assessment of Path Loss Models in Urban and Rural Environment
NASA Astrophysics Data System (ADS)
Maurya, G. R.; Kokate, P. A.; Lokhande, S. K.; Shrawankar, J. A.
2017-08-01
This paper aims to provide the researcher with a clear knowledge of path loss (PL). The important data have been extracted from the papers and presented in a clear and precise manner. Only a limited number of studies identified PL at FM frequencies; the majority considered telephone frequencies as the source. In this paper, the PL in urban and rural areas of different places due to various factors like buildings, trees, antenna height, forests, etc. has been studied. The studies were compared on common parameters such as frequency, model, and location; they were segregated by these parameters in tabular format and compared on frequency, location, and best-fit model in that table. A scatter chart was drawn in order to make the findings clearer and more understandable. However, location-specific PL models are required to investigate RF propagation in the identified terrain.
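For concreteness, a minimal sketch of the log-distance model that underlies many of the surveyed PL models; the exponent and reference loss here are assumptions that vary with terrain, which is precisely the review's point.

```python
# Log-distance path loss: PL(d) = PL(d0) + 10 * n * log10(d / d0),
# with the exponent n and reference loss PL(d0) terrain-dependent.
import numpy as np

def path_loss_db(d_m, pl_d0_db=40.0, n=3.0, d0_m=1.0):
    """Path loss in dB at distances d_m (meters); defaults are assumed."""
    return pl_d0_db + 10.0 * n * np.log10(np.asarray(d_m) / d0_m)

print(path_loss_db([10, 100, 1000]))   # urban-like exponent n = 3
```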
Methods for the behavioral, educational, and social sciences: an R package.
Kelley, Ken
2007-11-01
Methods for the Behavioral, Educational, and Social Sciences (MBESS; Kelley, 2007b) is an open source package for R (R Development Core Team, 2007b), an open source statistical programming language and environment. MBESS implements methods that are not widely available elsewhere, yet are especially helpful for the idiosyncratic techniques used within the behavioral, educational, and social sciences. The major categories of functions are those that relate to confidence interval formation for noncentral t, F, and chi2 parameters, confidence intervals for standardized effect sizes (which require noncentral distributions), and sample size planning issues from the power analytic and accuracy in parameter estimation perspectives. In addition, MBESS contains collections of other functions that should be helpful to substantive researchers and methodologists. MBESS is a long-term project that will continue to be updated and expanded so that important methods can continue to be made available to researchers in the behavioral, educational, and social sciences.
New-Generation Sealing Slurries For Borehole Injection Purposes
NASA Astrophysics Data System (ADS)
Stryczek, Stanisław; Gonet, Andrzej; Wiśniowski, Rafał; Złotkowski, Albert
2015-12-01
The development of techniques and technologies by which the parameters of the ground medium can be modified has prompted specialists to look for new geopolymer recipes - binders for the reinforcement and sealing of unstable and permeable ground. Sealing slurries are expected to meet a number of strict requirements, therefore it is important to find new admixtures and additives which can modify the fresh and hardened slurry. Special attention has recently been paid to fluidized-bed fly ash - a by-product of the combustion of hard coal. However, the use of this additive requires the application of an appropriate superplasticizer. Laboratory analyses of the rheological parameters of fresh sealing slurries, and of ways of improving their flowability with a properly selected third-generation superplasticizer, are presented in the paper. The slurries were based on Portland cement CEM I, milled granulated blast-furnace slag and fly ash from fluidized-bed combustion of hard coal.
Leapfrog variants of iterative methods for linear algebra equations
NASA Technical Reports Server (NTRS)
Saylor, Paul E.
1988-01-01
Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
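A minimal sketch of the leapfrog idea for Richardson's method, assuming a simple SPD test matrix and a fixed relaxation parameter: two conventional steps are composed algebraically so that only even-numbered iterates are ever formed.

```python
# Richardson's method x_{k+1} = x_k + w (b - A x_k); composing two
# steps gives the "leapfrog" update producing only even iterates:
# x_{k+2} = x_k + 2 w r_k - w^2 A r_k with r_k = b - A x_k.
import numpy as np

rng = np.random.default_rng(8)
n = 50
A = np.diag(np.linspace(1.0, 2.0, n))   # SPD test matrix (assumed)
b = rng.random(n)
w = 2.0 / (1.0 + 2.0)                   # optimal w for spectrum in [1, 2]

x = np.zeros(n)
for _ in range(100):                    # each pass = two classic steps
    r = b - A @ x
    x = x + 2 * w * r - w**2 * (A @ r)

print(np.linalg.norm(b - A @ x))        # residual after 200 classic steps
```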
NASA Technical Reports Server (NTRS)
Knapp, Roger Glenn
1993-01-01
A fuzzy-based attitude controller is designed for attitude control of a generic spacecraft with on/off thrusters. The controller is comprised of packages of rules dedicated to addressing different objectives (e.g., disturbance rejection, low fuel consumption, avoiding the excitation of flexible appendages, etc.). These rule packages can be inserted or removed depending on the requirements of the particular spacecraft and are parameterized based on vehicle parameters such as inertia or operational parameters such as the maneuvering rate. Individual rule packages can be 'weighted' relative to each other to emphasize the importance of one objective relative to another. Finally, the fuzzy controller and rule packages are demonstrated using the high-fidelity Space Shuttle Interactive On-Orbit Simulator (IOS) while performing typical on-orbit operations and are subsequently compared with the existing shuttle flight control system performance.
Critical parameters for coarse coal underground slurry haulage systems
NASA Technical Reports Server (NTRS)
Maynard, D. P.
1981-01-01
Factors are identified which must be considered in meeting the requirements of a transportation system for conveying, in a pipeline, the coal mined by a continuous mining machine to a storage location near the mine entrance or to a coal preparation plant located near the surface. For successful operation, the slurry haulage system should be designed to operate in the turbulent flow regime at a flow rate at least 30% greater than the deposition velocity (the slurry flow rate at which the solid particles tend to settle in the pipe). The capacity of the haulage system should be compatible with the projected coal output. Particle size, solids concentration, density, and viscosity of the suspension are of importance, as is the selection of the pumps, pipes, and valves. The parameters with the greatest effect on system performance are flow velocity, pressure, coal particle size, and solids concentration.
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn, and its performance in distinguishing physiological (elderly from young) and pathological (healthy from arrhythmia) conditions, with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and best performance in differentiating physiological and pathological conditions across input parameters among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short HRV time series.
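A minimal sketch of the published DistEn computation (embedding, pairwise Chebyshev distances, M-bin histogram, normalized Shannon entropy); the choices of m, M, and the test series are illustrative.

```python
# Distribution entropy of a short time series (illustrative parameters).
import numpy as np

def dist_en(x, m=2, M=512):
    # delay-1 embedding into vectors of length m
    emb = np.lib.stride_tricks.sliding_window_view(x, m)
    # upper-triangular Chebyshev distance matrix between all vector pairs
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    d = d[np.triu_indices_from(d, k=1)]
    # empirical distribution of distances, then normalized Shannon entropy
    p, _ = np.histogram(d, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(M)

rng = np.random.default_rng(9)
print(dist_en(rng.standard_normal(300)))   # short, noisy test series
```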
‘Action’ on structured freeform surfaces
NASA Astrophysics Data System (ADS)
Whitehouse, David J.
2018-06-01
Surfaces are becoming more complex, partly due to the more complicated functions required of them and partly due to the introduction of different manufacturing processes. These developments have thrown into relief the need to consider new ways of measuring and characterizing such surfaces and, more importantly, to make such characterization more relevant by tying the geometry and the function together more closely. Surfaces that are both freeform and structured have been chosen as the carrier for this investigation because so far little work has been carried out in this neglected but potentially important area. This necessitates the development of a strategy for their characterization. In this article, some ways have been found of identifying possible strategies for tackling this characterization problem, while also linking the characterization to performance and manufacture, based in part on the principle of least action and on the way that nature has evolved to solve the marriage of flexible freeform geometry, structure and function. Recommendations are made for the most suitable surface parameter to use, one which satisfies the requirement for characterizing structured freeform surfaces as well as utilizing 'Action' to predict functionality.
EEHG Performance and Scaling Laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penn, Gregory
This note calculates the idealized performance of echo-enabled harmonic generation (EEHG), explores the parameter settings, and looks at constraints determined by incoherent synchrotron radiation (ISR) and intrabeam scattering (IBS). Another important effect, time-of-flight variations related to transverse emittance, is included here but without detailed explanation because it has been described previously. The importance of ISR and IBS is that they lead to random energy shifts, which in turn lead to temporal shifts after the various beam manipulations required by the EEHG scheme. These effects give competing constraints on the beamline. For chicane magnets which are too compact for a given R56, the magnetic fields will be sufficiently strong that ISR will blur out the complex phase-space structure of the echo scheme to the point where the bunching is strongly suppressed. The effect of IBS is more omnipresent, and requires an overall compact beamline. It is particularly challenging for the second pulse in a two-color attosecond beamline, due to the long delay between the first energy modulation and the modulator for the second pulse.
Principles of blood transfusion service audit.
Dosunmu, A O; Dada, M O
2005-12-01
Blood transfusion is still an important procedure in modern medical practice despite efforts to avoid it because of its association with infections, especially HIV. It is therefore necessary to have proper quality control of its production, storage and usage [1]. One way of controlling usage is to perform regular clinical audits; to do this, there has to be an agreed standard for the appropriate use of blood. The aim of this paper is to briefly highlight the importance of audit, audit procedures and tools, i.e., the required records, the development of audit criteria and the audit parameters. Every hospital/blood transfusion center is expected to develop a system of audit that is appropriate to its needs. The suggestions are based mainly on the experience at the Lagos University Teaching Hospital and the Lagos State Blood Transfusion Service.
NASA Astrophysics Data System (ADS)
Castaño Moraga, C. A.; Suárez Santana, E.; Sabbagh Rodríguez, I.; Nebot Medina, R.; Suárez García, S.; Rodríguez Alvarado, J.; Piernavieja Izquierdo, G.; Ruiz Alzola, J.
2010-09-01
Wind farm authorization and power allocation to private investors promoting wind energy projects require planning strategies. This issue is even more important under land restrictions, as is the case in the Canary Islands, where numerous specially protected areas exist for environmental reasons and land is a scarce resource. Aware of this limitation, the Regional Government of the Canary Islands designed the requirements of a public tender to grant licences to install new wind farms, trying to maximize the energy produced per unit of occupied land. In this paper, we detail the methodology developed by the Canary Islands Institute of Technology (ITC, S.A.) to support the work of the technical staff of the Regional Ministry of Industry, responsible for the evaluation of a competitive tender process for awarding power licences to private investors. The maximization of wind energy production per unit of area requires an exhaustive wind profile characterization. To that end, wind speed was statistically characterized by means of a Weibull probability density function, which depends mainly on two parameters: the shape parameter K, which determines the slope of the curve, and the average wind speed v, which acts as a scale parameter. These two parameters, together with the main wind direction, have been evaluated at three different heights (40, 60, 80 m) over the whole Canarian archipelago and are available from the public data source Wind Energy Map of the Canary Islands [1]. The proposed methodology is based on the calculation of an Energy Efficiency Basic Index (EEBI), a performance criterion that weights the annual energy production of a wind farm per unit of area. The calculation of this index considers the wind conditions, the wind turbine characteristics and the geometry of the turbine layout in the wind farm (position within the row and column of machines), and involves four steps: (1) estimation of the energy produced by every wind turbine as if it were isolated from all the other machines of the wind farm, using its power curve and the statistical characterization of the wind profile at the site; (2) estimation of energy losses due to interference caused by other wind turbines in the same row and misalignment with respect to the main wind direction; (3) estimation of energy losses due to interference induced by wind turbines located upstream; (4) calculation of the EEBI as the ratio between the annual energy production and the area occupied by the wind farm, as a function of the wind speed profile and wind turbine characteristics. The computations involved above are modeled under a system-theory characterization.
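A minimal sketch of step (1), assuming a Weibull climate parameterized by the shape K and the mean speed v (as provided by the Wind Energy Map) and a toy power curve; wake and misalignment losses from steps (2)-(3) are not reproduced.

```python
# Annual energy of an isolated turbine from a Weibull wind climate.
import numpy as np
from scipy.special import gamma as gamma_fn

K, v_mean = 2.0, 7.5                      # Weibull shape, mean speed (m/s)
c = v_mean / gamma_fn(1 + 1 / K)          # scale parameter from the mean

v = np.linspace(0.0, 30.0, 601)
dv = v[1] - v[0]
pdf = (K / c) * (v / c) ** (K - 1) * np.exp(-((v / c) ** K))

def power_kw(v):                          # toy 2 MW power curve (assumed)
    return np.clip(2000 * ((v - 3) / (12 - 3)) ** 3, 0, 2000) * (v < 25)

aep_mwh = 8760 * np.sum(power_kw(v) * pdf) * dv / 1000
print(f"AEP ~ {aep_mwh:.0f} MWh/yr")
```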
Automated system for generation of soil moisture products for agricultural drought assessment
NASA Astrophysics Data System (ADS)
Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Drought is a frequently occurring disaster affecting the lives of millions of people across the world every year. Several parameters, indices and models are used globally for drought forecasting and early warning, and for monitoring drought prevalence, persistence and severity. Since drought is a complex phenomenon, a large number of parameters and indices need to be evaluated to address the problem adequately. It is a challenge to generate input parameters from different sources, such as space-based data, ground data and collateral data, at short intervals of time, where there may be limitations in processing power, availability of domain expertise, and specialized models and tools. In this study, an effort has been made to automate the derivation of one of the important parameters in drought studies, viz. soil moisture. The soil water balance bucket model is widely used to derive soil moisture products and is popular for its sensitivity to soil conditions and rainfall parameters. This model has been encoded into a "Fish-Bone" architecture using COM technologies and open source libraries for the best possible automation, to fulfil the need for a standard procedure of preparing input parameters and processing routines. The main aim of the system is to provide an operational environment for the generation of soil moisture products, allowing users to concentrate on further enhancements and on applying these parameters in related areas of research without re-discovering the established models. The architecture relies mainly on available open source libraries for GIS and raster I/O operations for different file formats, to ensure that the products can be widely distributed without the burden of any commercial dependencies. Further, the system is automated to the extent of unattended operation if required, with inbuilt chain processing for everyday generation of products at specified intervals. The operational software has inbuilt capabilities to automatically download requisite input parameters, such as rainfall and potential evapotranspiration (PET), from the respective servers. It can import file formats such as .grd, .hdf, .img and generic binary, perform geometric correction and re-project files to the native projection system. The software takes into account weather, crop and soil parameters to run the designed soil water balance model. It also has additional features, such as time compositing of outputs to generate weekly and fortnightly profiles for further analysis, and a tool to generate "Area Favourable for Crop Sowing" from the daily soil moisture through a highly customizable parameter interface. A whole-India analysis now takes a mere 20 seconds to generate soil moisture products, a task that would normally take one hour per day using commercial software.
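As a rough illustration of the kind of bucket model the system automates, the sketch below steps a single soil-water store forward one day at a time. The available water capacity, crop coefficient and the linear ET-scaling rule are illustrative assumptions, not the operational model's parameters.

```python
# Minimal daily soil-water-balance "bucket" sketch (all depths in mm).
def bucket_soil_moisture(rain_mm, pet_mm, awc_mm=150.0, kc=1.0, sm0=75.0):
    """Daily soil moisture series from rainfall and potential ET series."""
    sm, out = sm0, []
    for rain, pet in zip(rain_mm, pet_mm):
        aet = kc * pet * (sm / awc_mm)      # actual ET scales with wetness
        sm = sm + rain - aet
        sm = min(max(sm, 0.0), awc_mm)      # excess above capacity runs off
        out.append(sm)
    return out

print(bucket_soil_moisture([0, 12, 30, 0, 0], [5, 4, 3, 6, 6]))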
NASA Astrophysics Data System (ADS)
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations among flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of such formulas to other locations is not always direct: the locations where the equations are used should have characteristics comparable to those of the locations for which the equations were derived. To overcome this barrier, we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location itself, without the need to adapt or reuse empirical formulas from other locations. The proposal uses only one variable measured at two different points on the same river (for instance, two river-level measurements, one upstream and the other downstream). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). From the resulting timing data, a polynomial function is induced that generalizes the data into a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than that of the empirical formulas.
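For illustration, the sketch below aligns an upstream and a downstream level series with classic dynamic time warping and reads a travel-time estimate off the warping path. PolyWaTT itself uses the derivative variant (DDTW) together with PIP series reduction, which this sketch omits, and the series here are invented.

```python
import numpy as np

def dtw_path(a, b):
    """Optimal warping path between two series under absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

up = [1, 1, 2, 5, 8, 6, 3, 2]      # upstream river level (invented)
down = [1, 1, 1, 2, 5, 8, 6, 3]    # same flood wave, ~1 step later
lags = [j - i for i, j in dtw_path(up, down)]
print(sum(lags) / len(lags))       # mean lag ~= travel time in samples
```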
System-level view of geospace dynamics: Challenges for high-latitude ground-based observations
NASA Astrophysics Data System (ADS)
Donovan, E.
2014-12-01
Increasingly, research programs including GEM, CEDAR, GEMSIS, GO Canada, and others are focusing on how geospace works as a system. Coupling sits at the heart of system-level dynamics. In all cases, coupling is accomplished via fundamental processes such as reconnection and plasma waves, and can be between regions, energy ranges, species, scales, and energy reservoirs. Three views of geospace are required to attack system-level questions. First, we must observe the fundamental processes that accomplish the coupling. This "observatory view" requires in situ measurements by satellite-borne instruments or remote sensing from powerful, well-instrumented ground-based observatories organized around, for example, incoherent scatter radars. Second, we need to see how this coupling is controlled and what it accomplishes. This demands quantitative observations of the system elements that are being coupled. This "multi-scale view" is accomplished by networks of ground-based instruments and by global imaging from space. Third, if we take geospace as a whole, the system is too complicated, so at the top level we need time series of simple quantities that capture important aspects of the system-level dynamics. This requires a "key parameter view" that is typically provided through indices such as AE and Dst. With the launch of MMS, and ongoing missions such as THEMIS, Cluster, Swarm, RBSP, and ePOP, we are entering a once-in-a-lifetime epoch with a remarkable fleet of satellites probing processes in key regions throughout geospace, so the observatory view is secure. With a few exceptions, our key parameter view provides what we need. The multi-scale view, however, is compromised by space/time scales that are important but under-sampled, by combined coverage and resolution that fall short of what we need, and by inadequate conjugate observations. In this talk, I present an overview of what we need to take system-level research to its next level, and of how high-latitude ground-based observations can address these challenges.
PHAZR: A phenomenological code for holeboring in air
NASA Astrophysics Data System (ADS)
Picone, J. M.; Boris, J. P.; Lampe, M.; Kailasanath, K.
1985-09-01
This report describes a new code for studying holeboring by a charged particle beam, laser, or electric discharge in a gas. The coordinates that parameterize the channel are radial displacement (r) from the channel axis and distance (z) along the channel axis from the energy source. The code is primarily phenomenological; that is, we use closed solutions of simple models to represent many of the effects that are important in holeboring. The numerical simplicity gained from these solutions enables us to estimate the structure of the channel over long propagation distances while using a minimum of computer time. This feature makes PHAZR a useful code for those studying and designing future systems. Of particular interest is the design and implementation of the subgrid turbulence model required to compute the enhanced channel cooling caused by asymmetry-driven turbulence. The approximate equations of Boris and Picone form the basis of this model, which includes the effects of turbulent diffusion and fluid transport on the turbulent field itself as well as on the channel parameters. The primary emphasis here is on charged particle beams, and as an example, we present typical results for an ETA-like beam propagating in air. These calculations demonstrate how PHAZR may be used to investigate the accelerator parameter space and to isolate the important physical parameters that determine the holeboring properties of a given system. The comparison with two-dimensional calculations provides a calibration of the subgrid turbulence model.
Fully automated segmentation of callus by micro-CT compared to biomechanics.
Bissinger, Oliver; Götz, Carolin; Wolff, Klaus-Dietrich; Hapfelmeier, Alexander; Prodinger, Peter Michael; Tischer, Thomas
2017-07-11
A high percentage of closed femur fractures have slight comminution. Using micro-CT (μCT), segmentation of multiple fragments is much more difficult than segmentation of unfractured or osteotomied bone. Manual or semi-automated segmentation has been performed to date; however, such segmentation is extremely laborious, time-consuming and error-prone. Our aim was therefore to apply a fully automated segmentation algorithm to determine μCT parameters and examine their association with biomechanics. The femora of 64 rats, which had received randomised medication (inhibitory or neutral with respect to fracture healing) or served as controls, were subjected to closed fracture after insertion of a Kirschner wire. After 21 days, μCT and biomechanical parameters were determined by a fully automated method and correlated (Pearson's correlation). The fully automated segmentation algorithm detected bone and simultaneously separated cortical bone from callus without requiring ROI selection for each bony structure. We found an association between structural callus parameters obtained by μCT and the biomechanical properties; however, the results were only explicable when the callus location was additionally considered. A large number of slightly comminuted fractures, in combination with therapies that influence the callus qualitatively and/or quantitatively, considerably affects the association between μCT and biomechanics. In the future, contrast-enhanced μCT imaging of the callus cartilage might provide more information to improve the non-destructive and non-invasive prediction of callus mechanical properties. As studies evaluating such important drugs increase, fully automated segmentation appears to be clinically important.
NASA Astrophysics Data System (ADS)
Ahlstrand, Emma; Zukerman Schpector, Julio; Friedman, Ran
2017-11-01
When proteins are solvated in electrolyte solutions that contain alkali ions, the ions interact mostly with carboxylates on the protein surface. Correctly accounting for alkali-carboxylate interactions is thus important for realistic simulations of proteins. Acetates are the simplest carboxylates that are amphipathic, and experimental data for alkali acetate solutions are available and can be compared with observables obtained from simulations. We carried out molecular dynamics simulations of alkali acetate solutions using polarizable and non-polarizable force fields and examined the ion-acetate interactions. In particular, activity coefficients and association constants were studied over a range of concentrations (0.03, 0.1, and 1 M). In addition, quantum-mechanics (QM)-based energy decomposition analysis was performed to estimate the contributions of polarization, electrostatics, dispersion, and QM (non-classical) effects to the cation-acetate and cation-water interactions. Simulations of Li-acetate solutions in general overestimated the binding of Li+ to acetates. At lower concentrations, the activity coefficients of alkali-acetate solutions were too high, which is suggested to be due to the simulation protocol and not the force fields. Energy decomposition analysis suggested that improving the force field parameters to enable accurate simulations of Li-acetate solutions is achievable but may require the use of a polarizable force field. Importantly, simulations with some ion parameters could not reproduce the correct ion-oxygen distances, which calls for caution in the choice of ion parameters when protein simulations are performed in electrolyte solutions.
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Labute, M.; Chowdhary, K.; Debusschere, B.; Cameron-Smith, P. J.
2014-12-01
Simulating the atmospheric cycles of ozone, methane, and other radiatively important trace gases in global climate models is computationally demanding and requires the use of hundreds of photochemical parameters with uncertain values. Quantitative analysis of the effects of these uncertainties on tracer distributions, radiative forcing, and other model responses is hindered by the "curse of dimensionality." We describe efforts to overcome this curse using ensemble simulations and advanced statistical methods. Uncertainties from 95 photochemical parameters in the trop-MOZART scheme were sampled using a Monte Carlo method and propagated through 10,000 simulations of the single-column version of the Community Atmosphere Model (CAM). The variance of the ensemble was represented as a network with nodes and edges, and the topology and connections in the network were analyzed using lasso regression, Bayesian compressive sensing, and centrality measures from the field of social network theory. Despite the limited sample size for this high-dimensional problem, our methods determined the key sources of variation and co-variation in the ensemble and identified important clusters in the network topology. Our results can be used to better understand the flow of photochemical uncertainty in simulations using CAM and other climate models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and supported by the DOE Office of Science through the Scientific Discovery through Advanced Computing (SciDAC) program.
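As a sketch of one of the methods named above, the following fits a cross-validated lasso to a synthetic ensemble of the same shape (10,000 runs by 95 parameters). The response and the three "true" drivers are invented stand-ins for the CAM output, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 95))   # sampled parameter perturbations
# Synthetic model response driven by three of the 95 parameters.
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.5 * X[:, 42] \
    + rng.normal(scale=0.1, size=10_000)

model = LassoCV(cv=5).fit(X, y)     # sparsity picks out the key drivers
important = np.argsort(-np.abs(model.coef_))[:5]
print(important, model.coef_[important])  # parameters 3, 17, 42 dominate
```

The fitted coefficients give a ranking of parameter importance analogous to an edge list in the variance network described above.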
Functional relationship-based alarm processing system
Corsberg, D.R.
1988-04-22
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the functional relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated or deactivated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary. 12 figs.
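A minimal sketch of how the four relationship types might drive continuous re-evaluation is given below. The alarm names, relation table and weighting factors are invented for illustration and are not taken from Corsberg's system.

```python
# Hypothetical relation table: (this alarm, other active alarm) -> type.
RELATIONS = {
    ("LEVEL_HI", "LEVEL_HIHI"): "level_precursor",       # two settings, same parameter
    ("FLOW_LOW", "PUMP_TRIP"): "direct_precursor",       # pump trip causes low flow
    ("ROD_POS_DEV", "REACTOR_SCRAM"): "blocking_condition",  # expected after a scram
}
REQUIRED_ACTION = {"PUMP_TRIP"}   # alarms demanding a timely operator response

def importance(alarm, others):
    """Relative importance of one alarm given the other active alarms."""
    score = 2.0 if alarm in REQUIRED_ACTION else 1.0
    for other in others:
        rel = RELATIONS.get((alarm, other))
        if rel == "level_precursor":
            score *= 0.3   # the higher setting on the same parameter is active
        elif rel == "direct_precursor":
            score *= 0.5   # explained by an active causal factor
        elif rel == "blocking_condition":
            score *= 0.1   # normally expected, not considered important
    return score

def reevaluate(active):
    """Recompute every alarm whenever any alarm activates or deactivates."""
    return {a: importance(a, active - {a}) for a in active}

print(reevaluate({"LEVEL_HI", "LEVEL_HIHI", "PUMP_TRIP", "FLOW_LOW"}))
```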
Sound field simulation and acoustic animation in urban squares
NASA Astrophysics Data System (ADS)
Kang, Jian; Meng, Yan
2005-04-01
Urban squares are important components of cities, and the acoustic environment is important for their usability. While models and formulae for predicting the sound field in urban squares are important for soundscape design and improvement, acoustic animation tools would be of great value for designers as well as for the public participation process, given that below a certain sound level, soundscape evaluation depends mainly on the type of sounds rather than on their loudness. This paper first briefly introduces acoustic simulation models developed for urban squares, as well as empirical formulae derived from a series of simulations. It then presents an acoustic animation tool currently under development. Urban squares contain multiple dynamic sound sources, so computation time becomes a main concern; nevertheless, the requirements for acoustic animation in urban squares are relatively low compared to auditoria. As a result, it is important to simplify the simulation process and algorithms. Based on a series of subjective tests in a virtual reality environment with various simulation parameters, a fast simulation method with acceptable accuracy has been explored. [Work supported by the European Commission.]
Functional relationship-based alarm processing
Corsberg, Daniel R.
1988-01-01
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary.
Functional relationship-based alarm processing system
Corsberg, Daniel R.
1989-01-01
A functional relationship-based alarm processing system and method analyzes each alarm as it is activated and determines its relative importance with other currently activated alarms and signals in accordance with the functional relationships that the newly activated alarm has with other currently activated alarms. Once the initial level of importance of the alarm has been determined, that alarm is again evaluated if another related alarm is activated or deactivated. Thus, each alarm's importance is continuously updated as the state of the process changes during a scenario. Four hierarchical relationships are defined by this alarm filtering methodology: (1) level precursor (usually occurs when there are two alarm settings on the same parameter); (2) direct precursor (based on causal factors between two alarms); (3) required action (system response or action expected within a specified time following activation of an alarm or combination of alarms and process signals); and (4) blocking condition (alarms that are normally expected and are not considered important). The alarm processing system and method is sensitive to the dynamic nature of the process being monitored and is capable of changing the relative importance of each alarm as necessary.
Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi
2017-01-01
Accurate identification of the material properties of the plantar soft tissue is important for computer-aided analysis of foot pathologies and for the design of therapeutic footwear interventions based on subject-specific models of the foot. However, identifying the hyperelastic material parameters of plantar soft tissues usually requires an inverse finite element analysis, owing to the lack of a practical contact model of the indentation test. In the present study, we derive an analytical contact model of a spherical indentation test in order to estimate the material properties of the plantar soft tissue directly. Force-displacement curves of the heel pads were obtained through an indentation experiment, and the experimental data were fitted to the analytical stress-strain solution of the spherical indentation to obtain the parameters. The spherical indentation approach successfully predicted the non-linear material properties of the heel pad without iterative finite element calculation. The force-displacement curve obtained in the present study was situated lower than those identified in previous studies. The proposed framework for identifying the hyperelastic material parameters may facilitate the development of subject-specific FE modeling of the foot for clinical and ergonomic applications.
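The direct-fitting step can be sketched as an ordinary least-squares fit of a force-displacement law. The exponential stiffening form and the data points below are generic stand-ins, not the paper's closed-form hyperelastic solution or its measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def indentation_force(d, a, b):
    """Generic stiffening law F(d) = a * (exp(b*d) - 1), d in mm, F in N."""
    return a * (np.exp(b * d) - 1.0)

depth = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # mm (made up)
force = np.array([0.0, 0.4, 1.0, 1.9, 3.4, 5.7, 9.2])   # N  (made up)

# Nonlinear least squares recovers the material parameters directly,
# with no iterative finite element loop in between.
(a, b), _ = curve_fit(indentation_force, depth, force, p0=(1.0, 0.5))
print(a, b)
```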
Subject-specific body segment parameter estimation using 3D photogrammetry with multiple cameras.
Peyer, Kathrin E; Morris, Mark; Sellers, William I
2015-01-01
Inertial properties of body segments, such as mass, centre of mass or moments of inertia, are important parameters when studying movements of the human body. However, these quantities are not directly measurable. Current approaches include regression models, which have limited accuracy; geometric models, with lengthy measuring procedures; or acquiring and post-processing MRI scans of participants. We propose a geometric methodology based on 3D photogrammetry using multiple cameras to provide subject-specific body segment parameters while minimizing the interaction time with the participants. A low-cost body scanner was built using multiple cameras, and 3D point cloud data were generated using structure-from-motion photogrammetric reconstruction algorithms. The point cloud was manually separated into body segments, and convex hulling was applied to each segment to produce the required geometric outlines. The accuracy of the method can be adjusted by choosing the number of subdivisions of the body segments. The body segment parameters of six participants (four male and two female) are presented using the proposed method. The multi-camera photogrammetric approach is expected to be particularly suited for studies of populations for which regression models are not available in the literature and where other geometric techniques or MRI scanning are not applicable due to time or ethical constraints.
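The convex-hull step maps directly onto standard computational-geometry tools. The sketch below derives volume, mass and an approximate centroid for one segment; the synthetic "thigh" cloud and the 1050 kg/m^3 tissue density are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
# Synthetic stand-in for one segmented point cloud, roughly thigh-sized (m).
segment_points = rng.normal(size=(500, 3)) * [0.07, 0.07, 0.20]

hull = ConvexHull(segment_points)
volume_m3 = hull.volume                         # enclosed volume, m^3
mass_kg = 1050.0 * volume_m3                    # assumed tissue density
centroid = segment_points[hull.vertices].mean(axis=0)  # rough hull centroid
print(mass_kg, centroid)
```

Subdividing the segment and hulling each slice, as the abstract describes, trades computation for accuracy on concave regions that a single hull overestimates.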
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature for improving the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used efficiently in higher-dimensional spaces. PMID:26543580
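The flavour of the kD-tree proposal can be sketched as follows: store the single-model MCMC samples in a tree, pick a stored sample, and jitter it on the scale of its local neighbourhood. The tuning constants and Gaussian jitter are invented for illustration; a working RJMCMC would also need the proposal density for the Hastings ratio, which the same tree structure can supply via a local density estimate.

```python
import numpy as np
from scipy.spatial import cKDTree

# Stand-in for stored single-model MCMC draws in a 3-parameter model.
samples = np.random.default_rng(2).normal(size=(5000, 3))
tree = cKDTree(samples)

def propose(rng, k=16):
    """Draw near a stored sample, jittered by its k-NN neighbourhood size."""
    centre = samples[rng.integers(len(samples))]
    dist, _ = tree.query(centre, k=k)           # distances to k nearest samples
    return rng.normal(centre, scale=dist.max() / np.sqrt(k))

rng = np.random.default_rng(3)
print(propose(rng))   # an intermodel jump that lands in the posterior bulk
```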
Importance of determining the climatic domains of sheep breeds.
Petit, D; Boujenane, I
2018-07-01
The main purpose of the study was to compare the capacity of the major sheep breeds in Morocco to cope with climate change through the ranges of several climate parameters within which they are found. We first delimited the climatic 'domains' of each breed by constructing a database including altitude and climatic parameters (mean minima of the coldest month, mean maxima of the hottest month, annual rainfall, Emberger pluviothermic coefficient Q2, annual minima mean and annual maxima mean) over a 30-year period, using the representative stations of each breed's distribution. The overlap between each pair of breeds was quantified through a canonical analysis that extracted the most discriminant parameters. The variance analysis of each climatic parameter revealed two breeds remarkable for their tolerance. The first is the Timahdite, mainly settled in areas above 1100 m, which can tolerate the greatest variations in annual rainfall and pluviothermic coefficient; despite this, the breed is endangered owing to the decreasing quality of pastures. The second is the D'man, which can apparently withstand large variations in extreme temperatures; in fact, this breed is not well adapted to pastures and requires the special microclimate offered by oases. The information reported in this study will form the basis for the establishment of characterization and selection strategies for Moroccan sheep.
Branched-chain Amino Acids are associated with Metabolic Parameters in Bipolar Disorder.
Fellendorf, F T; Platzer, M; Pilz, R; Rieger, A; Kapfhammer, H P; Mangge, H; Dalkner, N; Zelzer, S; Meinitzer, A; Birner, A; Bengesser, S A; Queissner, R; Hamm, C; Hartleb, R; Reininghaus, E Z
2018-06-14
An important aspect of bipolar disorder (BD) research is the identification of biomarkers pertaining to the somatic health state. The branched-chain essential amino acids (BCAAs), viz. valine, leucine and isoleucine, have been proposed as biomarkers of an individual's health state, given their influence on protein synthesis and gluconeogenesis inhibition. BCAA levels of 141 euthymic/subsyndromal individuals with BD and 141 matched healthy controls (HC) were analyzed by high-pressure liquid chromatography and correlated with clinical psychiatric, anthropometric and metabolic parameters. BD and HC did not differ in valine and isoleucine, whereas leucine was significantly lower in BD. Furthermore, correlations were found between BCAAs and anthropometric and glucose metabolism data. All BCAAs correlated with lipid metabolism parameters in females. There were no associations between BCAAs and long-term clinical parameters of BD. A negative correlation was found between valine and the Hamilton Depression Scale and Beck Depression Inventory-II in male individuals. Our results indicate the utility of BCAAs as biomarkers for the current state of health in BD as well. As individuals with BD have a high risk of overweight/obesity in association with comorbid medical conditions (e.g. cardiovascular diseases, insulin resistance), health-state markers are urgently required. However, no illness-specific associations were found in this euthymic/subsyndromal BD group.
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... variable testing parameters described in this section which may be used in the validation study. The conditions demonstrated in the validation study for these variables shall become the required conditions for.... During the validation study, use the same ratio of contaminated surface area to soak solvent volume as...
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... variable testing parameters described in this section which may be used in the validation study. The conditions demonstrated in the validation study for these variables shall become the required conditions for.... During the validation study, use the same ratio of contaminated surface area to soak solvent volume as...
40 CFR 761.389 - Testing parameter requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... variable testing parameters described in this section which may be used in the validation study. The conditions demonstrated in the validation study for these variables shall become the required conditions for.... During the validation study, use the same ratio of contaminated surface area to soak solvent volume as...
Aerodynamics as a subway design parameter
NASA Technical Reports Server (NTRS)
Kurtz, D. W.
1976-01-01
A parametric sensitivity study has been performed on the system operational energy requirement in order to guide subway design strategy. Aerodynamics can play a dominant or trivial role, depending upon the system characteristics. Optimization of the aerodynamic parameters may not minimize the total operational energy. Isolation of the station box from the tunnel and reduction of the inertial power requirements pay the largest dividends in terms of the operational energy requirement.
NASA Astrophysics Data System (ADS)
Lestari, Brina Cindy; Dewi, Dyah Santhi; Widodo, Rusminto Tjatur
2017-11-01
Elderly people with particular diseases need to take medicines every day, in the correct dosages and on appropriate time schedules. However, the elderly frequently forget to take their medicines because of weakened memory. Consequently, product innovation in elderly healthcare is required to help the elderly take their medicines more easily. This research aims to develop a smart medicine box by applying the quality function deployment (QFD) method. The first step is identifying elderly requirements through an ethnographic approach, by interviewing thirty-two elderly people as respondents. The second step is translating the elderly requirements into technical parameters for designing the smart medicine box. The design is focused on the two requirements with the highest importance ratings: an alarm reminder for taking medicine and an automatic medicine box. Finally, the prototype was created and tested using a usability method. The results show that 90% of the ten respondents responded positively to the features of the smart medicine box, and that the voice alarm reminder is easy for elderly people to understand when taking their medicines.
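The QFD roll-up from requirement weights to technical-parameter scores can be sketched in a few lines. The requirement weights, technical parameters and 9/3/1 relationship entries below are invented for illustration, not the study's survey data.

```python
import numpy as np

requirements = ["alarm reminder", "automatic box", "easy-to-read display"]
weights = np.array([5.0, 4.5, 3.0])    # importance ratings from interviews

technical = ["speaker/voice module", "dispensing mechanism", "LCD size"]
# Relationship matrix: rows = requirements, cols = technical parameters,
# entries on the usual 9/3/1 strong/medium/weak scale.
R = np.array([[9, 1, 3],
              [3, 9, 1],
              [1, 1, 9]])

scores = weights @ R                   # technical importance scores
for name, s in sorted(zip(technical, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:.0f}")
```

Ranking the column scores is what singles out the alarm reminder and the automatic dispensing mechanism as the features to prototype first.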