Science.gov

Sample records for accurate input parameters

  1. CASIM input parameters for various materials

    SciTech Connect

    Malensek, A.J.; Elwyn, A.J.

    1994-07-14

During the past year, the computer program CASIM has been placed in a common area from which copies can be obtained by a wide array of users. The impetus for this arrangement was the need for a standard code that could be maintained and transported to other platforms. In addition, a historical record would be kept of each version as the program evolved. CASIM requires a series of parameters (input by the user) that describe the medium in which the cascade develops. Presently a total of 9 materials can be defined. Occasions arise when one needs to know the properties of materials (elements, compounds, and mixtures) that have not been defined. Because it is desirable to have a uniform set of values for all CASIM users, this note presents a methodology for obtaining the input parameters for an arbitrary material. They are read in by the subroutine CASIM_PROG from the user-supplied file CASIM.DAT.
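The note's actual prescription is defined in the document itself; as a hedged illustration of the kind of mixture rule such a methodology relies on, one common material property, the effective <Z/A>, can be approximated by a mass-fraction-weighted average over constituent elements:

```python
# Illustrative mixture rule (not CASIM's actual prescription, which the
# note defines): effective <Z/A> of a compound or mixture as a
# mass-fraction-weighted average over its constituent elements.

def mixture_z_over_a(composition):
    """composition: iterable of (Z, A, mass_fraction) tuples, one per element."""
    return sum(w * Z / A for Z, A, w in composition)

# Water: H (Z=1, A=1.008, mass fraction 0.112) and O (Z=8, A=15.999, 0.888).
water = [(1, 1.008, 0.112), (8, 15.999, 0.888)]
mixture_z_over_a(water)   # ~0.555, the standard value for water
```

The same weighting pattern extends to other additive per-unit-mass quantities; non-additive quantities (e.g. density of a mixture) need different rules.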

  2. Improved input parameters for diffusion models of skin absorption.

    PubMed

    Hansen, Steffi; Lehr, Claus-Michael; Schaefer, Ulrich F

    2013-02-01

Using a diffusion model to predict skin absorption requires accurate estimates of input parameters describing model geometry, affinity, and transport characteristics. This review summarizes methods for obtaining input parameters for diffusion models of skin absorption, focusing on partition and diffusion coefficients. These include experimental methods, extrapolation approaches, and correlations that relate partition and diffusion coefficients to tabulated physico-chemical solute properties. Exhaustive databases on lipid-water and corneocyte protein-water partition coefficients are presented and analyzed to provide improved approximations for estimating these coefficients. The most commonly used estimates of lipid and corneocyte diffusion coefficients are also reviewed. To improve modeling of skin absorption, future diffusion models should include the vertical stratum corneum heterogeneity, slow equilibration processes, absorption from complex non-aqueous formulations, and an improved representation of dermal absorption processes. This will require input parameters for which no suitable estimates are yet available.
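For a homogeneous-membrane picture of skin permeation, the roles of the partition coefficient K and diffusion coefficient D can be sketched with the classical steady-state flux J = K·D·c/h and lag time t_lag = h²/(6D); the numerical values below are hypothetical, not taken from the review's databases:

```python
# Homogeneous-membrane sketch of how the two key input parameters enter a
# skin-absorption model: partition coefficient K and diffusion coefficient D
# for a membrane of thickness h. Values are illustrative only.

def steady_state_flux(K, D, h, c_donor):
    """Steady-state flux through the membrane, J = K * D * c / h."""
    return K * D * c_donor / h

def lag_time(h, D):
    """Time scale to approach steady state, t_lag = h^2 / (6 D)."""
    return h ** 2 / (6 * D)

h = 15e-4     # stratum corneum thickness, cm (15 um; illustrative)
D = 1e-9      # diffusion coefficient in the membrane, cm^2/s (illustrative)
K = 10.0      # membrane-vehicle partition coefficient (illustrative)
c = 1.0       # donor concentration, mg/cm^3

J = steady_state_flux(K, D, h, c)   # ~6.7e-6 mg/cm^2/s
t_lag = lag_time(h, D)              # 375 s
```

The sensitivity of J to K and D is what makes the review's improved estimates of those coefficients matter for prediction quality.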

  3. Input Type and Parameter Resetting: Is Naturalistic Input Necessary?

    ERIC Educational Resources Information Center

    Rothman, Jason; Iverson, Michael

    2007-01-01

It has been argued that extended exposure to naturalistic input provides L2 learners with more of an opportunity to converge on target morphosyntactic competence than classroom-only environments, given that the former provide more positive evidence of less salient linguistic properties than the latter (e.g., Isabelli 2004). Implicitly,…

  4. [Input impedance for studying hydraulic parameters of the vessel system].

    PubMed

    Naumov, A Iu; Sheptutsolov, K V; Balashov, S A; Mel'kumiants, A M

    2001-03-01

Vascular input impedance can be used as an effective tool for estimating hydraulic parameters of the arterial bed. These parameters may be interpreted as the hydraulic resistance, elastance, and inertance of particular sites of the arterial system. There is no significant difference between these parameters and those obtained through direct measurement.

  5. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    PubMed

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.
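The reported model-measurement agreement is quantified with Spearman rank correlations; a minimal generic sketch of that statistic (tie handling omitted for brevity, so it is only valid for data without repeated values) is:

```python
# Spearman rank correlation between modelled and measured values:
# Pearson correlation applied to the ranks. No tie handling in this sketch.

def spearman(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Any monotone relation between model and measurement gives correlation 1.0:
spearman([1.2, 0.4, 2.5, 0.9], [1.5, 0.3, 3.1, 1.1])   # -> 1.0
```

Because only ranks enter, the statistic rewards correct *ordering* of exposure levels, which is exactly what matters for the exposure-ranking use case the authors describe.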

  6. Estimation of the input parameters in the Feller neuronal model

    NASA Astrophysics Data System (ADS)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.
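A hedged sketch of the setting: simulating a Feller (square-root) diffusion by Euler-Maruyama and recovering the stationary mean mu * tau from the time average, the simplest possible moment estimate. Parameter values are illustrative, not from the paper, and the paper's FPT-based estimators are far more refined than this.

```python
import random

# Euler-Maruyama simulation of a Feller (square-root) diffusion,
#   dX = (mu - X / tau) dt + sigma * sqrt(max(X, 0)) dW,
# followed by a naive moment estimate: the time average of X, which
# estimates the stationary mean mu * tau. Illustrative parameters only.

def simulate_feller(mu, tau, sigma, x0, dt, n, seed=1):
    rng = random.Random(seed)
    x, path = x0, []
    for _ in range(n):
        dw = rng.gauss(0.0, dt ** 0.5)
        x += (mu - x / tau) * dt + sigma * max(x, 0.0) ** 0.5 * dw
        path.append(x)
    return path

path = simulate_feller(mu=2.0, tau=1.0, sigma=0.2, x0=2.0, dt=0.01, n=200_000)
est_mean = sum(path) / len(path)   # should be close to mu * tau = 2.0
```

The max(X, 0) clip is a crude guard against the Euler scheme stepping below zero; exact schemes for square-root diffusions avoid this.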

  7. Describing the catchment-averaged precipitation as a stochastic process improves parameter and input estimation

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter

    2016-04-01

    Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.
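The core problem, least-squares inference with erroneous rainfall corrupting parameters, can be illustrated with a toy linear-reservoir model (all values hypothetical; this sketches the bias problem, not the SIP method itself): fitting the reservoir constant against runoff generated from the true rainfall recovers it exactly, while fitting against a time-shifted rainfall record does not.

```python
# Toy linear reservoir: storage S with dS/dt = P - S/k and runoff Q = S/k,
# integrated by forward Euler. Fit k by least squares on runoff, once with
# the true rainfall and once with a time-shifted (erroneous) record.

def simulate_runoff(rain, k, dt=1.0):
    s, out = 0.0, []
    for p in rain:
        s += (p - s / k) * dt
        out.append(s / k)
    return out

def fit_k(rain, q_obs, grid):
    def sse(k):
        return sum((a - b) ** 2 for a, b in zip(simulate_runoff(rain, k), q_obs))
    return min(grid, key=sse)

true_k = 5.0
rain = [1.0 if 10 <= t < 20 else 0.0 for t in range(100)]   # one 10-step storm
q_obs = simulate_runoff(rain, true_k)                       # "observed" runoff
shifted = rain[3:] + [0.0] * 3                              # storm logged 3 steps early
grid = [k / 10 for k in range(10, 151)]                     # candidates 1.0 .. 15.0

k_true_fit = fit_k(rain, q_obs, grid)        # recovers 5.0 exactly
k_biased_fit = fit_k(shifted, q_obs, grid)   # typically biased away from 5.0
```

Treating the input itself as an inferred stochastic process, as SIP does, is one way to keep such timing errors from being absorbed into physical parameters.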

  8. Investigation of RADTRAN Stop Model input parameters for truck stops

    SciTech Connect

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-03-01

RADTRAN is a computer code for estimating the risks and consequences of the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because Stop Dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values of RADTRAN Stop Model parameters for truck stops was performed. The resulting data were analyzed to provide mean values, standard deviations, and histograms. These mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to uncertainties in the Stop Model input parameters. This paper discusses the details and presents the results of the investigation of Stop Model input parameters at truck stops.

  9. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2004-09-10

This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).

  10. Agricultural and Environmental Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  11. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    K. Rautenstrauch

    2004-09-10

This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  12. DC servomechanism parameter identification: a Closed Loop Input Error approach.

    PubMed

    Garrido, Ruben; Miranda, Roger

    2012-01-01

This paper presents a Closed Loop Input Error (CLIE) approach for on-line parametric estimation of a continuous-time model of a DC servomechanism functioning in closed loop. A standard Proportional Derivative (PD) position controller stabilizes the loop without requiring knowledge of the servomechanism parameters. The analysis of the identification algorithm takes into account the control law employed for closing the loop. The model contains four parameters that depend on the servo inertia, viscous and Coulomb friction, as well as on a constant disturbance. Lyapunov stability theory permits assessing the boundedness of the signals associated with the identification algorithm. Experiments on a laboratory prototype allow evaluating the performance of the approach.

  13. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations

    PubMed Central

    2015-01-01

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules. PMID:26146493
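A toy sketch of the correction-learning idea underlying ML-SQC: learn the error of a cheap method as a function of a molecular descriptor, then add the learned correction. The one-dimensional data and descriptor below are invented for illustration; the paper's ML models for OM2 parameters are far richer.

```python
# Toy "learn the correction" sketch: fit a linear map from a per-molecule
# descriptor to the error (reference minus cheap-method energy), then apply
# it as a correction. All data are hypothetical.

def fit_linear(x, y):
    """Ordinary least-squares slope and intercept for 1D data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

descriptor = [0.0, 1.0, 2.0, 3.0]        # hypothetical per-molecule feature
e_cheap    = [10.0, 12.0, 14.0, 16.0]    # cheap-method energies (hypothetical)
e_ref      = [10.5, 13.0, 15.5, 18.0]    # reference energies (hypothetical)

errors = [r - c for r, c in zip(e_ref, e_cheap)]   # what the model learns
slope, intercept = fit_linear(descriptor, errors)
corrected = [c + slope * d + intercept for c, d in zip(e_cheap, descriptor)]
```

On this contrived data the learned correction reproduces the reference energies exactly; on real data the point is only to shrink the error distribution, as in the 6.3 to 1.7 kcal/mol improvement reported.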

  14. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    DOE PAGES

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  15. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    SciTech Connect

    Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter

    2015-04-14

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  16. Machine Learning of Parameters for Accurate Semiempirical Quantum Chemical Calculations.

    PubMed

    Dral, Pavlo O; von Lilienfeld, O Anatole; Thiel, Walter

    2015-05-12

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.

  17. Macroscopic singlet oxygen model incorporating photobleaching as an input parameter

    NASA Astrophysics Data System (ADS)

    Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.

    2015-03-01

A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12-150 mW/cm and total fluences from 24-135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.

  18. A convenient and accurate parallel Input/Output USB device for E-Prime.

    PubMed

    Canto, Rosario; Bufalari, Ilaria; D'Ausilio, Alessandro

    2011-03-01

    Psychological and neurophysiological experiments require the accurate control of timing and synchrony for Input/Output signals. For instance, a typical Event-Related Potential (ERP) study requires an extremely accurate synchronization of stimulus delivery with recordings. This is typically done via computer software such as E-Prime, and fast communications are typically assured by the Parallel Port (PP). However, the PP is an old and disappearing technology that, for example, is no longer available on portable computers. Here we propose a convenient USB device enabling parallel I/O capabilities. We tested this device against the PP on both a desktop and a laptop machine in different stress tests. Our data demonstrate the accuracy of our system, which suggests that it may be a good substitute for the PP with E-Prime.
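The stress tests described amount to measuring the jitter of trigger intervals against a nominal period; a minimal sketch of that analysis on synthetic timestamps (not data from the paper) is:

```python
# Jitter analysis of event timestamps against a nominal period.
# Timestamps below are synthetic, standing in for logged trigger times.

def interval_jitter(timestamps, nominal):
    """Return (mean, max-abs) deviation of inter-event intervals from nominal."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    devs = [iv - nominal for iv in intervals]
    mean_dev = sum(devs) / len(devs)
    max_dev = max(abs(d) for d in devs)
    return mean_dev, max_dev

ts = [0.0000, 0.0101, 0.0199, 0.0300, 0.0402]   # ~10 ms period, synthetic
mean_dev, max_dev = interval_jitter(ts, nominal=0.010)
```

For an ERP setup the quantity of interest is the maximum deviation, since a single late trigger misaligns that trial's averaging window.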

  19. Environmental Transport Input Parameters for the Biosphere Model

    SciTech Connect

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  20. Inhalation Exposure Input Parameters for the Biosphere Model

    SciTech Connect

    M. Wasiolek

    2006-06-05

This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the…

  1. Soil-related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2003-07-02

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  2. The definition of input parameters for modelling of energetic subsystems

    NASA Astrophysics Data System (ADS)

    Ptacek, M.

    2013-06-01

This paper is a short review and basic description of mathematical models of renewable energy sources, presented as individual subsystems of a system created in Matlab/Simulink. It describes the physical and mathematical relationships of photovoltaic and wind energy sources, which are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. Therefore, the paper introduces a new dynamic model of a low-temperature fuel cell subsystem and defines its main input parameters. Finally, representative simulation results for the suggested parameters are shown for each of the individual subsystems mentioned above.

  3. Soil-Related Input Parameters for the Biosphere Model

    SciTech Connect

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  4. Direct computation of parameters for accurate polarizable force fields

    SciTech Connect

    Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.

    2014-11-21

    We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.

  5. Accurate 3D quantification of the bronchial parameters in MDCT

    NASA Astrophysics Data System (ADS)

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchial contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically modeled phantom of a bronchus-vessel pair, which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.

  6. Accurate fundamental parameters for 23 bright solar-type stars

    NASA Astrophysics Data System (ADS)

    Bruntt, H.; Bedding, T. R.; Quirion, P.-O.; Lo Curto, G.; Carrier, F.; Smalley, B.; Dall, T. H.; Arentoft, T.; Bazot, M.; Butler, R. P.

    2010-07-01

    We combine results from interferometry, asteroseismology and spectroscopy to determine accurate fundamental parameters of 23 bright solar-type stars, from spectral type F5 to K2 and luminosity classes III-V. For some stars we can use direct techniques to determine the mass, radius, luminosity and effective temperature, and we compare with indirect methods that rely on photometric calibrations or spectroscopic analyses. We use the asteroseismic information available in the literature to infer an indirect mass with an accuracy of 4-15 per cent. From indirect methods we determine luminosity and radius to 3 per cent. We find evidence that the luminosity from the indirect method is slightly overestimated (~ 5 per cent) for the coolest stars, indicating that their bolometric corrections (BCs) are too negative. For Teff we find a slight offset of -40 +/- 20K between the spectroscopic method and the direct method, meaning the spectroscopic temperatures are too high. From the spectroscopic analysis we determine the detailed chemical composition for 13 elements, including Li, C and O. The metallicity ranges from [Fe/H] = -1.7 to +0.4, and there is clear evidence for α-element enhancement in the metal-poor stars. We find no significant offset between the spectroscopic surface gravity and the value from combining asteroseismology with radius estimates. From the spectroscopy we also determine v sin i and we present a new calibration of macroturbulence and microturbulence. From the comparison between the results from the direct and spectroscopic methods we claim that we can determine Teff, log g and [Fe/H] with absolute accuracies of 80K, 0.08 and 0.07dex. Photometric calibrations of Strömgren indices provide accurate results for Teff and [Fe/H] but will be more uncertain for distant stars when interstellar reddening becomes important. The indirect methods are important to obtain reliable estimates of the fundamental parameters of relatively faint stars when interferometry

  7. A generalized multiple-input, multiple-output modal parameter estimation algorithm

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Blair, M. A.

    1984-01-01

    A new method for experimental determination of the modal parameters of a structure is presented. The method allows for multiple input forces to be applied simultaneously, and for an arbitrary number of acceleration response measurements to be employed. These data are used to form the equations of motion for a damped linear elastic structure. The modal parameters are then obtained through an eigenvalue technique. In conjunction with the development of the equations, an extensive computer simulation study was performed. The results of the study show a marked improvement in the mode shape identification for closely-spaced modes as the number of applied forces is increased. Also demonstrated is the influence of noise on the method's ability to identify accurate modal parameters. Here again, an increase in the number of exciters leads to a significant improvement in the identified parameters.
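    The modal parameters recovered by such eigenvalue techniques are, for each mode, a natural frequency and damping ratio extracted from complex eigenvalues of the system's state matrix. The toy single-degree-of-freedom sketch below illustrates only that extraction step (it is not the multi-input algorithm of the report):

```python
import cmath

def modal_parameters(m, c, k):
    """Natural frequency (rad/s) and damping ratio of a 1-DOF system,
    read off one eigenvalue of its state matrix A = [[0, 1], [-k/m, -c/m]].
    For a 2x2 matrix the eigenvalues follow from the trace and determinant."""
    tr = -c / m   # trace of A
    det = k / m   # determinant of A
    lam = (tr + cmath.sqrt(tr * tr - 4.0 * det)) / 2.0  # one complex eigenvalue
    wn = abs(lam)            # undamped natural frequency = |lambda|
    zeta = -lam.real / wn    # damping ratio = -Re(lambda)/|lambda|
    return wn, zeta

# Illustrative values: mass 1 kg, damping 0.4 N·s/m, stiffness 4 N/m.
wn, zeta = modal_parameters(1.0, 0.4, 4.0)
```

For these values the eigenvalue is -0.2 + 1.99j, giving wn = 2 rad/s and zeta = 0.1.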

  8. Sensitivity of injection costs to input petrophysical parameters in numerical geologic carbon sequestration models

    SciTech Connect

    Cheng, C. L.; Gragg, M. J.; Perfect, E.; White, Mark D.; Lemiszki, P. J.; McKay, L. D.

    2013-08-24

    Numerical simulations are widely used in feasibility studies for geologic carbon sequestration. Accurate estimates of petrophysical parameters are needed as inputs for these simulations. However, relatively few experimental values are available for CO2-brine systems. Hence, a sensitivity analysis was performed using the STOMP numerical code for supercritical CO2 injected into a model confined deep saline aquifer. The intrinsic permeability, porosity, pore compressibility, and capillary pressure-saturation/relative permeability parameters (residual liquid saturation, residual gas saturation, and van Genuchten alpha and m values) were varied independently. Their influence on CO2 injection rates and costs was determined and the parameters were ranked based on normalized coefficients of variation. The simulations resulted in differences of up to tens of millions of dollars over the life of the project (i.e., the time taken to inject 10.8 million metric tons of CO2). The two most influential parameters were the intrinsic permeability and the van Genuchten m value. Two other parameters, the residual gas saturation and the residual liquid saturation, ranked above the porosity. These results highlight the need for accurate estimates of capillary pressure-saturation/relative permeability parameters for geologic carbon sequestration simulations in addition to measurements of porosity and intrinsic permeability.
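    The ranking step described above, normalizing each parameter's effect on the output by a coefficient of variation, can be sketched as follows (the parameter names and sweep outputs below are hypothetical, not values from the study):

```python
import statistics

def rank_by_cov(results):
    """Rank input parameters by the coefficient of variation (std/mean) of the
    model output observed while varying each parameter independently.
    `results` maps parameter name -> output values from that parameter's sweep;
    a larger coefficient of variation marks a more influential parameter."""
    cov = {name: statistics.pstdev(vals) / statistics.mean(vals)
           for name, vals in results.items()}
    return sorted(cov, key=cov.get, reverse=True)

# Hypothetical sweep outputs (injection cost, M$) for three parameters:
sweeps = {
    "intrinsic_permeability": [40.0, 55.0, 90.0],
    "porosity":               [60.0, 62.0, 63.0],
    "van_Genuchten_m":        [50.0, 65.0, 80.0],
}
ranking = rank_by_cov(sweeps)
```

With these illustrative numbers the ordering reproduces the qualitative result reported above: permeability and the van Genuchten m value dominate, porosity ranks last.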

  9. Comparison of the pulsatility index and input impedance parameters in a model of altered hemodynamics.

    PubMed

    Downing, G J; Yarlagadda, A P; Maulik, D

    1991-06-01

    Clinical use of Doppler waveform analysis assumes that vascular resistance is accurately represented by the Doppler indices. This assumption was examined by correlating the pulsatility index (PI) with measures of input impedance including peripheral vascular resistance (Zpr), characteristic impedance (Zo), and reflection coefficient (Rc). Assessment of these parameters from the descending aorta was performed in five chronically instrumented, newborn lambs subjected to administration of norepinephrine and hydralazine. Significant increases in PI, Zpr, Zo, and Rc were seen in response to administration of norepinephrine, and decreases in PI and Zpr occurred with hydralazine use. Significant correlation existed between PI and Zpr throughout the study, but changes in PI did not correlate with changes in Zo and Rc. PI appears to reflect changes in Zpr accurately. However, the inability of PI to assess Zo or Rc requires further investigation.
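    For reference, the pulsatility index used in this and the related study below is the Gosling index, computed from a Doppler velocity waveform over one cardiac cycle; the sample waveform here is purely illustrative:

```python
def pulsatility_index(velocities):
    """Gosling pulsatility index from a sampled Doppler velocity waveform
    over one cardiac cycle: (peak systolic - minimum diastolic) / mean."""
    v_max, v_min = max(velocities), min(velocities)
    v_mean = sum(velocities) / len(velocities)
    return (v_max - v_min) / v_mean

# Illustrative waveform samples (cm/s): peak 80, trough 20, mean 50 -> PI = 1.2
pi = pulsatility_index([80.0, 60.0, 40.0, 20.0, 40.0, 60.0])
```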

  10. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  11. Accurate Critical Parameters for the Modified Lennard-Jones Model

    NASA Astrophysics Data System (ADS)

    Okamoto, Kazuma; Fuchizaki, Kazuhiro

    2017-03-01

    The critical parameters of the modified Lennard-Jones system were examined. The isothermal-isochoric ensemble was generated by conducting a molecular dynamics simulation for the system consisting of 6912, 8788, 10976, and 13500 particles. The equilibrium between the liquid and vapor phases was judged from the chemical potential of both phases upon establishing the coexistence envelope, from which the critical temperature and density were obtained invoking the renormalization group theory. The finite-size scaling enabled us to finally determine the critical temperature, pressure, and density as Tc = 1.0762(2), pc = 0.09394(17), and ρc = 0.331(3), respectively.

  12. Evaluation of severe accident risks: Quantification of major input parameters: MAACS (MELCOR Accident Consequence Code System) input

    SciTech Connect

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.; Helton, J.C.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  13. In vivo correlation of Doppler waveform analysis with arterial input impedance parameters.

    PubMed

    Downing, G J; Maulik, D; Phillips, C; Kadado, T R

    1993-01-01

    Previous studies have confirmed that Doppler waveform analysis (DWA) offers a valid reflection of changes in peripheral vascular resistance. However, the ability of the pulsatility index (PI), a parameter of DWA, to reflect the dynamic components of the circulation, as assessed by arterial input parameters, remains uncertain. In addition, the state of the central circulation is considered an important factor influencing the accuracy of this technique. This study evaluated the ability of the aortic PI to reflect alterations of input impedance in a chronically instrumented lamb model that was subjected to pharmacologic alteration of the circulation. Pressure, volumetric flow and continuous-wave Doppler frequency shift measurements were recorded from the infrarenal abdominal aorta. The parameters of input impedance, peripheral vascular resistance (Zpr), characteristic impedance (Zo) and reflection coefficient (Rc), were determined and then correlated with changes in the aortic PI. Initially, perturbations of the circulatory state were created with a vasodilator, hydralazine (HY) and a vasoconstrictor, phenylephrine (PE). During a second set of experiments, the effect of the reflex heart rate (HR) responses on the PI was evaluated. This was accomplished by inhibiting reflex HR responses to these vasoactive agents with either trimethophan (TM) or atropine methyl bromide (AMB). In response to HY and HY with TM, significant decreases in the PI and impedance parameters occurred. Administration of PE and PE with AMB resulted in significant increases in PI and each of the impedance parameters. HY and PE induced changes in PI correlated significantly with changes in volumetric flow (r = 0.82, 0.80; p < 0.001), mean arterial blood pressure (r = 0.64, 0.70; p < 0.001) and Zpr (r = 0.77, 0.80; p < 0.001), but not with Zo (r = 0.34, 0.36) and Rc (r = 0.26, 0.31). 
However, when reflex HR responses were inhibited during the administration of the vasoactive agents, HY with TM and PE

  14. Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1993-01-01

    The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple-input design capability, with optional inclusion of a constraint that allows only one control to move at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open-loop model parameters from closed-loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications that demonstrate the quality and expanded capabilities of the input designs it produces. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.

  15. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a preset tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from all of the sensors are compared against the last validated measurement, and the value from the sensor input that deviates the least from the last valid measurement is displayed.
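    The two-pass validation logic described above can be condensed into the following sketch (function names, the tolerance value, and the fallback details are illustrative simplifications, not the patent's exact claims):

```python
import statistics

def validate(inputs, tolerance, last_valid):
    """Two-pass validation of redundant sensor inputs from one scan.
    Returns (measurement, validated_flag); on a validation fault, falls back
    to the input closest to the last validated measurement."""
    # First pass: average all inputs, flag any deviating beyond tolerance.
    first_avg = statistics.mean(inputs)
    good = [x for x in inputs if abs(x - first_avg) <= tolerance]
    if len(good) < 2:
        # Validation fault: fewer than two good inputs.
        return min(inputs, key=lambda x: abs(x - last_valid)), False
    # Second pass: average only the good inputs and re-check them.
    second_avg = statistics.mean(good)
    if all(abs(x - second_avg) <= tolerance for x in good):
        return second_avg, True
    return min(inputs, key=lambda x: abs(x - last_valid)), False

# Three consistent sensors plus one outlier (14.0): the outlier is rejected
# and the second-pass average of the remaining three is validated.
value, ok = validate([10.0, 10.2, 9.9, 14.0], tolerance=1.5, last_valid=10.1)
```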

  16. Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Klein, Vladislav

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  17. Optimal input design for aircraft parameter estimation using dynamic programming principles

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Morelli, Eugene A.

    1990-01-01

    A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.

  18. An analytical study of relay neuron's reliability: dependence on input and model parameters.

    PubMed

    Agarwal, Rahul; Sarma, Sridevi V

    2011-01-01

    Relay neurons are widely found in our nervous system, including the thalamus, spinal cord and lateral geniculate body. They receive a modulating input (background activity) and a reference input. The modulating input modulates relay of the reference input. This modulation is critical for correct functioning of relay neurons, but is poorly understood. In this paper, we use a biophysical-based model and systems-theoretic tools to calculate how well a single relay neuron relays a reference input signal as a function of the neuron's electrophysiological properties (i.e. model parameters), the modulating signal, and the reference signal parameters. Our analysis is more rigorous than previous related works and is generalizable to all relay cells in the body. Our analytical expression matches relay performance obtained in simulation and suggests that increasing the frequency of a sinusoidal modulating input or decreasing its DC offset increases the relay cell reliability.

  19. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
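    An influence coefficient of the kind used in this study is the percent change in calculated net thrust per 1-percent change in an input parameter, which can be approximated by finite differences. The thrust relation below is a deliberately simplified stand-in (mass flow times exit velocity minus ram drag), not the F404 calculation:

```python
def influence_coefficients(f, params, delta=0.01):
    """Percent change in f's output per 1% change in each input parameter,
    estimated by one-sided finite differences."""
    base = f(params)
    coeffs = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1.0 + delta)})
        coeffs[name] = (f(perturbed) - base) / base / delta
    return coeffs

# Hypothetical net-thrust relation and operating point (illustrative only):
def net_thrust(p):
    return p["mass_flow"] * p["exit_velocity"] - p["ram_drag"]

params = {"mass_flow": 70.0, "exit_velocity": 600.0, "ram_drag": 14000.0}
coeffs = influence_coefficients(net_thrust, params)
```

Combining such coefficients with the estimated measurement accuracy of each input gives the overall net thrust calculation accuracy, as in the paper.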

  20. A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns

    NASA Astrophysics Data System (ADS)

    Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae

    2004-05-01

    Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation because of the time-consuming procedure required to extract the resist parameters and the uncertainty in measuring some of them. Weiss suggested a simplified development model that does not require the complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so we have to optimize them to fit the critical dimension scanning electron microscopy (CD SEM) data of line-and-space patterns. Hence, FiRM from Sigma-C is utilized as the resist parameter-optimizing program. According to our study, the illumination shape, the aberration and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. The simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data and sub-80-nm device pattern simulation.

  1. Microstrip superconducting quantum interference device radio-frequency amplifier: Scattering parameters and input coupling

    SciTech Connect

    Kinion, D; Clarke, J

    2008-01-24

    The scattering parameters of an amplifier based on a dc Superconducting QUantum Interference Device (SQUID) are directly measured at 4.2 K. The results can be described using an equivalent circuit model of the fundamental resonance of the microstrip resonator which forms the input of the amplifier. The circuit model is used to determine the series capacitance required for critical coupling of the microstrip to the input circuit.

  2. Input parameters for LEAP and analysis of the Model 22C data base

    SciTech Connect

    Stewart, L.; Goldstein, M.

    1981-05-01

    The input data for the Long-Term Energy Analysis Program (LEAP) employed by EIA for projections of long-term energy supply and demand in the US were studied and additional documentation provided. Particular emphasis has been placed on the LEAP Model 22C input data base, which was used in obtaining the output projections which appear in the 1978 Annual Report to Congress. Definitions, units, associated model parameters, and translation equations are given in detail. Many parameters were set to null values in Model 22C so as to turn off certain complexities in LEAP; these parameters are listed in Appendix B along with parameters having constant values across all activities. The values of the parameters for each activity are tabulated along with the source upon which each parameter is based - and appropriate comments provided, where available. The structure of the data base is briefly outlined and an attempt made to categorize the parameters according to the methods employed for estimating the numerical values. Due to incomplete documentation and/or lack of specific parameter definitions, few of the input values could be traced and uniquely interpreted using the information provided in the primary and secondary sources. Input parameter choices were noted which led to output projections which are somewhat suspect. Other data problems encountered are summarized. Some of the input data were corrected and a revised base case was constructed. The output projections for this revised case are compared with the Model 22C output for the year 2020, for the Transportation Sector. LEAP could be a very useful tool, especially so in the study of emerging technologies over long-time frames.

  3. Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence

    2016-05-31

    Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
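    A minimal sketch of this kind of Bayesian parameter calibration uses a random-walk Metropolis sampler to draw a single model parameter from its posterior given measurements (the model, synthetic data, prior range, and tuning values below are all illustrative, not the k-ε setup of the study):

```python
import math
import random

def log_posterior(theta, data, model, sigma):
    """Gaussian log-likelihood of calibration data given model(theta),
    with a flat prior on an assumed plausible range."""
    if not (0.0 < theta < 0.3):
        return -math.inf
    return -sum((y - model(theta, x)) ** 2 for x, y in data) / (2.0 * sigma ** 2)

def metropolis(data, model, sigma, steps=2000, step=0.005, start=0.09, seed=1):
    """Random-walk Metropolis chain over a single scalar parameter."""
    rng = random.Random(seed)
    theta = start
    lp = log_posterior(theta, data, model, sigma)
    samples = []
    for _ in range(steps):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(proposal, data, model, sigma)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = proposal, lp_prop
        samples.append(theta)
    return samples

# Synthetic "measurements" generated with a true parameter value of 0.12:
data = [(x, 0.12 * x) for x in range(1, 6)]
samples = metropolis(data, model=lambda theta, x: theta * x, sigma=0.05)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])
```

After discarding the first half of the chain as burn-in, the posterior mean concentrates near the true value of 0.12.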

  4. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging

  5. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    PubMed

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
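    The reported scaling behavior can be checked directly on the standard (FXL) Tofts model: rescaling the AIF by a factor s while dividing Ktrans by s leaves the tissue curve, and hence the fitted kep, unchanged. The AIF shape and rate constants below are illustrative, not values from the study:

```python
import math

def tofts_ct(ktrans, kep, cp, dt):
    """Standard Tofts model, Ct(t) = Ktrans * integral of Cp(tau) *
    exp(-kep*(t - tau)) dtau, evaluated on a uniform grid by a Riemann sum."""
    ct = []
    for i in range(len(cp)):
        integral = sum(cp[j] * math.exp(-kep * (i - j) * dt)
                       for j in range(i + 1))
        ct.append(ktrans * integral * dt)
    return ct

dt = 1.0  # seconds
aif = [t * math.exp(-t / 4.0) for t in range(30)]  # illustrative AIF shape

# Doubling the AIF while halving Ktrans (kep unchanged) reproduces the same
# tissue curve: a fit against a 2x-scaled AIF returns Ktrans/2 but the same
# kep, consistent with Ktrans tracking AIF scaling and kep being insensitive.
ct = tofts_ct(0.2, 0.5, aif, dt)
ct_rescaled = tofts_ct(0.1, 0.5, [2.0 * c for c in aif], dt)
```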

  6. 6 DOF synchronized control for spacecraft formation flying with input constraint and parameter uncertainties.

    PubMed

    Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang

    2011-10-01

    This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, backstepping-based robust control is first developed for the total 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control remains implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Illustrative simulations of spacecraft formation flying are conducted and compared with conventional methods to verify the effectiveness of the proposed approach in making the spacecraft track the desired attitude and position trajectories in a synchronized fashion, even in the presence of uncertainties, external disturbances and control saturation constraints.

  7. Discrete element modelling (DEM) input parameters: understanding their impact on model predictions using statistical analysis

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.

    2015-09-01

    Selection or calibration of particle property input parameters is one of the key challenges in the implementation of the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat bottom cylindrical container onto a plate. In this case study, particle properties, such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both final angle of repose and FR, followed by the role of inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross-correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and then potentially facilitates their selection or calibration. It is concluded that shortening the input parameter selection and calibration process can aid the implementation of DEM.

  8. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    NASA Astrophysics Data System (ADS)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains

  9. Estimating unknown input parameters when implementing the NGA ground-motion prediction equations in engineering practice

    USGS Publications Warehouse

    Kaklamanos, James; Baise, Laurie G.; Boore, David M.

    2011-01-01

    The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
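One concrete example of the kind of geometric relation derived for the NGA distance measures: for a vertical fault, the rupture distance Rrup, the Joyner-Boore distance Rjb, and the depth to the top of rupture Ztor satisfy Rrup = sqrt(Rjb² + Ztor²). A sketch of this special case (the function name and numeric values are illustrative, and the general dipping-fault relations in the paper are more involved):

```python
import math

def rrup_from_rjb(rjb_km, ztor_km):
    """Rupture distance for a vertical fault: Rrup = sqrt(Rjb^2 + Ztor^2),
    where Ztor is the depth to the top of the rupture plane (km)."""
    return math.hypot(rjb_km, ztor_km)

# A site 10 km from the surface projection of a rupture whose top is 3 km deep:
print(round(rrup_from_rjb(10.0, 3.0), 2))  # 10.44 km
```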

  10. Estimating input parameters from intracellular recordings in the Feller neuronal model

    NASA Astrophysics Data System (ADS)

    Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta

    2010-03-01

    We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.

  11. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

  12. Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.; Ratnayake, Nalin A.

    2011-01-01

    As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
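Mutually orthogonal square waves of the kind used to de-correlate control-surface inputs can be generated, for example, from the rows of a Sylvester-construction Hadamard matrix (Walsh functions). This is a generic sketch of the idea, not the flight-test implementation:

```python
import numpy as np

def walsh_square_waves(n_surfaces, n_samples):
    """Mutually orthogonal +/-1 square waves built from the rows of a
    Sylvester Hadamard matrix, each element repeated to fill the maneuver."""
    H = np.array([[1]])
    while H.shape[0] < n_surfaces + 1:       # grow to a large enough power of two
        H = np.block([[H, H], [H, -H]])
    rows = H[1 : n_surfaces + 1]             # skip the constant (all-ones) row
    reps = n_samples // rows.shape[1]
    return np.repeat(rows, reps, axis=1)

waves = walsh_square_waves(3, 64)            # three de-correlated surface inputs
G = waves @ waves.T                          # Gram matrix: off-diagonals vanish
print(np.allclose(G, np.diag(np.diag(G))))   # True
```

Because the Gram matrix is diagonal, least-squares estimates of the individual control-surface effectiveness derivatives are not confounded with one another.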

  13. Genetic algorithm to estimate the input parameters of Klatt and HLSyn formant-based speech synthesizers.

    PubMed

    Araújo, Fabíola; Filho, José; Klautau, Aldebaro

    2016-12-01

    Voice imitation basically consists in estimating a synthesizer's input parameters to mimic a target speech signal. This is a difficult inverse problem because the mapping is time-varying, non-linear and many-to-one, and it typically requires a considerable amount of time to be done manually. This work presents the evolution of a system based on a genetic algorithm (GA) to automatically estimate the input parameters of the Klatt and HLSyn formant synthesizers using an analysis-by-synthesis process. Results are presented for natural (human-generated) speech from three male speakers. The results obtained with the GA-based system outperform those obtained with the baseline Winsnoori with respect to four objective figures of merit and a subjective test. The GA with the Klatt synthesizer generated voices similar to the target, and the subjective tests indicate an improvement in the quality of the synthetic voices when compared to the ones produced by the baseline.

  14. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    SciTech Connect

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.

  15. Intensity Inhomogeneity Correction of Structural MR Images: A Data-Driven Approach to Define Input Algorithm Parameters

    PubMed Central

    Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
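The CJV metric used above has a compact closed form as commonly defined: CJV = (σ_WM + σ_GM) / |μ_WM − μ_GM|, where lower values indicate better separation of the two tissue classes after correction. A minimal sketch on synthetic voxel intensities (all numbers illustrative):

```python
import numpy as np

def cjv(wm_intensities, gm_intensities):
    """Coefficient of joint variation between white and gray matter:
    (sigma_WM + sigma_GM) / |mu_WM - mu_GM|.  Lower is better."""
    wm, gm = np.asarray(wm_intensities), np.asarray(gm_intensities)
    return (wm.std() + gm.std()) / abs(wm.mean() - gm.mean())

rng = np.random.default_rng(0)
wm = rng.normal(100, 5, 10_000)   # toy WM voxel intensities
gm = rng.normal(70, 6, 10_000)    # toy GM voxel intensities
print(cjv(wm, gm))                # ~ (5 + 6) / 30, i.e. roughly 0.37
```

In a parameter-selection loop, the candidate INU correction setting that minimizes this value over the masked WM/GM voxels would be retained.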

  16. Impacts of input parameter spatial aggregation on an agricultural nonpoint source pollution model

    NASA Astrophysics Data System (ADS)

    FitzHugh, T. W.; Mackay, D. S.

    2000-09-01

    The accuracy of agricultural nonpoint source pollution models depends in part on how well model input parameters describe the relevant characteristics of the watershed. The spatial extent of input parameter aggregation has previously been shown to have a substantial impact on model output. This study investigates this problem using the Soil and Water Assessment Tool (SWAT), a distributed-parameter agricultural nonpoint source pollution model. The primary question addressed here is: how does the size or number of subwatersheds used to partition the watershed affect model output, and what processes are responsible for model behavior? SWAT was run on the Pheasant Branch watershed in Dane County, WI, using eight watershed delineations, each with a different number of subwatersheds. Model runs were conducted for the period 1990-1996. Streamflow and outlet sediment predictions were not seriously affected by changes in subwatershed size. The lack of change in outlet sediment is due to the transport-limited nature of the Pheasant Branch watershed and the stable transport capacity of the lower part of the channel network. This research identifies the importance of channel parameters in determining the behavior of SWAT's outlet sediment predictions. Sediment generation estimates do change substantially, dropping by 44% between the coarsest and the finest watershed delineations. This change is primarily due to the sensitivity of the runoff term in the Modified Universal Soil Loss Equation to the area of hydrologic response units (HRUs). This sensitivity likely occurs because SWAT was implemented in this study with a very detailed set of HRUs. In order to provide some insight into the scaling behavior of the model, two indexes were derived using the mathematics of the model. The indexes predicted SWAT scaling behavior from the data inputs without a need for running the model. Such indexes could be useful for model users by providing a direct way to evaluate alternative models
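The area sensitivity noted above stems from the sub-unity exponent in MUSLE as used in SWAT, sed = 11.8 (Q_surf · q_peak · A_hru)^0.56 · K · LS · C · P (the coarse-fragment factor is omitted here): because the exponent is below one, summing over many small HRUs does not equal evaluating one lumped HRU. A sketch with purely illustrative factor values:

```python
def musle_sediment(q_surf, q_peak, area_ha, k=0.3, ls=1.2, c=0.2, p=1.0):
    """MUSLE sediment yield: sed = 11.8 * (Q_surf * q_peak * A)^0.56 * K*LS*C*P.
    Factor values here are illustrative, not calibrated."""
    return 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * k * ls * c * p

# One lumped 100 ha HRU vs. four 25 ha HRUs with identical per-area runoff:
whole = musle_sediment(10.0, 2.0, 100.0)
parts = sum(musle_sediment(10.0, 2.0, 25.0) for _ in range(4))
print(whole < parts)  # True: the 0.56 exponent makes the split sum larger
```

This is why HRU delineation detail alone can shift aggregate sediment-generation estimates even when total runoff is unchanged; in the actual study the runoff terms also vary per HRU, which sets the magnitude and direction of the shift.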

  17. Time resolved diffuse optical spectroscopy with geometrically accurate models for bulk parameter recovery

    PubMed Central

    Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid

    2016-01-01

    A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137

  18. Accurate Structure Parameters for Tunneling Ionization Rates of Gas-Phase Linear Molecules

    NASA Astrophysics Data System (ADS)

    Zhao, Song-Feng; Li, Jian-Ke; Wang, Guo-Li; Li, Peng-Cheng; Zhou, Xiao-Xin

    2017-03-01

    In the molecular Ammosov–Delone–Krainov (MO-ADK) model of Tong et al. [Phys. Rev. A 66 (2002) 033402], the ionization rate depends on the structure parameters of the molecular orbital from which the electron is removed. We determine systematically and tabulate accurate structure parameters of the highest occupied molecular orbital (HOMO) for 123 gas-phase linear molecules by solving the time-independent Schrödinger equation with B-spline functions and molecular potentials which are constructed numerically using the modified Leeuwen–Baerends (LBα) model. Supported by National Natural Science Foundation of China under Grant Nos. 11664035, 11674268, 11465016, 11364038, 11364039, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20116203120001 and the Basic Scientific Research Foundation for Institution of Higher Learning of Gansu Province

  19. State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications

    NASA Astrophysics Data System (ADS)

    Phanomchoeng, Gridsada

    A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely the tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz-assumption-based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is

  20. Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter

    SciTech Connect

    Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Burger, A.; Gorgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.

    2014-09-03

    Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacing and to discrete levels and using a spin cutoff parameter with much weaker excitation energy dependence than is predicted by the Fermi-gas model.
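The two level-density forms compared in analyses of this kind are the back-shifted Fermi-gas formula, ρ(E) = exp(2√(a(E−δ))) / (12√2 · σ · a^{1/4} · (E−δ)^{5/4}), and the constant-temperature form, ρ(E) = (1/T) exp((E−E0)/T). A sketch with illustrative parameter values (not those fitted in the paper):

```python
import math

def fermi_gas(e_mev, a=6.0, delta=1.0, sigma=3.5):
    """Back-shifted Fermi-gas level density (levels per MeV).
    a: level density parameter (1/MeV), delta: backshift, sigma: spin cutoff."""
    u = e_mev - delta
    return math.exp(2.0 * math.sqrt(a * u)) / (
        12.0 * math.sqrt(2.0) * sigma * a ** 0.25 * u ** 1.25)

def constant_temperature(e_mev, t=1.4, e0=-0.5):
    """Constant-temperature level density: (1/T) * exp((E - E0)/T)."""
    return math.exp((e_mev - e0) / t) / t

for e in (4.0, 8.0, 12.0):   # excitation energies in MeV
    print(e, round(fermi_gas(e), 1), round(constant_temperature(e), 1))
```

Fitting either form to neutron resonance spacings and discrete levels, as described above, pins down the free parameters; the spin cutoff σ then controls how the total density is distributed over spins.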

  2. Accurate estimation of motion blur parameters in noisy remote sensing image

    NASA Astrophysics Data System (ADS)

    Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong

    2015-05-01

    The relative motion between a remote sensing satellite sensor and objects is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration. Accurately identifying the motion blur direction and length is crucial for determining the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters by using the Radon transform. However, the serious noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the resulting error relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.

  3. Ground Motion Simulations for Bursa Region (Turkey) Using Input Parameters derived from the Regional Seismic Network

    NASA Astrophysics Data System (ADS)

    Unal, B.; Askan, A.

    2014-12-01

    Earthquakes are among the most destructive natural disasters in Turkey, and it is important to assess seismicity in different regions with the use of seismic networks. Bursa is located in the Marmara Region, Northwestern Turkey, to the south of the very active North Anatolian Fault Zone. With around three million inhabitants and key industrial facilities of the country, Bursa is the fourth largest city in Turkey. Since most of the focus has been on the North Anatolian Fault Zone, the Bursa area, despite its significant seismicity, was not investigated extensively until recently. For reliable seismic hazard estimates and seismic design of structures, assessment of potential ground motions in this region using both recorded and simulated data is essential. In this study, we employ stochastic finite-fault simulation with the dynamic corner frequency approach to model previous events as well as to assess potential earthquakes in Bursa. To ensure simulations with reliable synthetic ground motion outputs, the input parameters must be carefully derived from regional data. In this study, using strong motion data collected at 33 stations in the region, site-specific parameters such as the near-surface high-frequency attenuation parameter and site amplifications are obtained. Similarly, source and path parameters are adopted from previous studies that likewise employ regional data. Initially, major previous events in the region are validated by comparing the records with the corresponding synthetics. Then simulations of scenario events in the region are performed. We present the results in terms of the spatial distribution of peak ground motion parameters and time histories at selected locations.

  4. Robust unknown input observer design for state estimation and fault detection using linear parameter varying model

    NASA Astrophysics Data System (ADS)

    Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai

    2017-01-01

    This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since the disturbance and the actuator fault are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state is associated with the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed via Lyapunov theory. Finally, the proposed method is tested on a wind turbine system with disturbance and actuator fault. The simulations demonstrate the effectiveness and performance of the proposed method.

  5. Impact of spatial and temporal aggregation of input parameters on the assessment of irrigation scheme performance

    NASA Astrophysics Data System (ADS)

    Lorite, I. J.; Mateos, L.; Fereres, E.

    2005-01-01

    The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000) with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimates occurred in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles partly coincide with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results

  6. Comparisons of CAP88PC version 2.0 default parameters to site specific inputs

    SciTech Connect

    Lehto, M. A.; Courtney, J. C.; Charter, N.; Egan, T.

    2000-03-02

    The effects of varying the input for the CAP88PC Version 2.0 program on the total effective dose equivalents (TEDEs) were determined for hypothetical releases from the Hot Fuel Examination Facility (HFEF) located at the Argonne National Laboratory site on the Idaho National Engineering and Environmental Laboratory (INEEL). Values for site specific meteorological conditions and agricultural production parameters were determined for the 80 km radius surrounding the HFEF. Four nuclides, {sup 3}H, {sup 85}Kr, {sup 129}I, and {sup 137}Cs (with its short lived progeny, {sup 137m}Ba) were selected for this study; these are the radioactive materials most likely to be released from HFEF under normal or abnormal operating conditions. Use of site specific meteorological parameters of annual precipitation, average temperature, and the height of the inversion layer decreased the TEDE from {sup 137}Cs-{sup 137m}Ba by up to 36%; reductions for other nuclides were less than 3%. Use of the site specific agricultural parameters reduced TEDE values by between 7% and 49%, depending on the nuclide. Reductions are associated with decreased committed effective dose equivalents (CEDEs) from the ingestion pathway. This is not surprising since the HFEF is located well within the INEEL exclusion area, and the surrounding area closest to the release point is a high desert with limited agricultural diversity. Livestock and milk production are important in some counties at distances greater than 30 km from the HFEF.

  7. Comparison of input parameters regarding rock mass in analytical solution and numerical modelling

    NASA Astrophysics Data System (ADS)

    Yasitli, N. E.

    2016-12-01

    Characteristics of stress redistribution around a tunnel excavated in rock are of prime importance for an efficient tunnelling operation and for maintaining stability. It is well known that rock mass properties, together with the in-situ stress field and tunnel geometry, are the most important factors affecting stability. Induced stresses and the resultant deformation around a tunnel can be approximated by means of analytical solutions and the application of numerical modelling. However, the success of these methods depends on assumptions and input parameters, which must be representative of the rock mass, whereas laboratory testing yields only the mechanical properties of intact rock. The aim of this paper is to demonstrate the importance of proper representation of rock mass properties as input data for analytical solutions and numerical modelling. For this purpose, intact rock data were converted into rock mass data by using the Hoek-Brown failure criterion and empirical relations. Stress-deformation analyses, together with determination of the yield zone thickness, were carried out using analytical solutions and numerical analyses with the FLAC3D programme. The results indicated that incomplete or incorrect design causes stability and economic problems in the tunnel. For this reason, analytical solutions and rock mass data should be used together during tunnel design. In addition, this study demonstrates theoretically that numerical modelling results should be applied to the tunnel design for both the stability and the economy of the support.
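The intact-rock to rock-mass conversion referred to above can be sketched with the generalized Hoek-Brown criterion (2002 edition): mb = mi·exp((GSI−100)/(28−14D)), s = exp((GSI−100)/(9−3D)), a = 1/2 + (e^{−GSI/15} − e^{−20/3})/6, and σ1 = σ3 + σci·(mb·σ3/σci + s)^a. The input values below are illustrative, not the paper's:

```python
import math

def hoek_brown_mass(sigci, mi, gsi, d=0.0):
    """Generalized Hoek-Brown (2002): intact-rock inputs (sigma_ci in MPa,
    material constant mi, GSI, disturbance D) -> rock-mass parameters and a
    major-principal-stress strength function of sigma_3."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    strength = lambda sig3: sig3 + sigci * (mb * sig3 / sigci + s) ** a
    return mb, s, a, strength

# Illustrative: sigma_ci = 50 MPa, mi = 10, GSI = 60, undisturbed rock (D = 0)
mb, s, a, f = hoek_brown_mass(50.0, 10.0, 60.0)
print(round(mb, 3), round(s, 5), round(a, 4), round(f(2.0), 2))
```

The resulting mb, s, and a (rather than the intact-rock laboratory values) are what should feed the analytical solutions and the FLAC3D model.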

  8. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    NASA Astrophysics Data System (ADS)

    Steffens, K.; Larsbo, M.; Moeys, J.; Kjellström, E.; Jarvis, N.; Lewan, E.

    2014-02-01

    Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available as driven by different combinations of global climate models (GCM), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970-1999) for an important agricultural production area in south-western Sweden based on monthly change factors for 2070-2099. 30 yr simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes of pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
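    The change-factor ("delta change") scaling of the reference weather series described above can be sketched as follows; the array layout and the factor values are illustrative assumptions, not the study's actual data handling:

```python
import numpy as np

def apply_change_factors(day_months, precip, temp, precip_factor, temp_delta):
    """Scale a reference weather series with monthly change factors.

    day_months    : month index (1-12) for each day of the reference series
    precip        : daily precipitation (mm)
    temp          : daily mean temperature (deg C)
    precip_factor : 12 multiplicative factors (future/reference precipitation)
    temp_delta    : 12 additive offsets (future minus reference temperature, K)
    """
    m = np.asarray(day_months) - 1  # month index -> 0-based array index
    precip_future = np.asarray(precip) * np.asarray(precip_factor)[m]
    temp_future = np.asarray(temp) + np.asarray(temp_delta)[m]
    return precip_future, temp_future
```

    In the study, the 12 factors per projection would come from comparing RCA3 output for 2070-2099 against 1970-1999; here they are simply function arguments.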

  9. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    NASA Astrophysics Data System (ADS)

    Steffens, K.; Larsbo, M.; Moeys, J.; Kjellström, E.; Jarvis, N.; Lewan, E.

    2013-08-01

    The assessment of climate change impacts on the risk for pesticide leaching needs careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-west Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available as driven by different combinations of global climate models (GCM), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO-model were generated by scaling a reference climate data set (1970-1999) for an important agricultural production area in south-west Sweden based on monthly change factors for 2070-2099. 30 yr simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes of pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios could provide robust probabilistic estimates of future pesticide losses and assessments of changes in pesticide leaching risks.

  10. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
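    A minimal sketch of the Morris one-at-a-time elementary-effects screening the abstract describes, illustrating the linear-in-N cost of r trajectories of k+1 runs each (a generic MOAT implementation, not the authors' code):

```python
import numpy as np

def morris_trajectory(k, p=4, rng=None):
    """One Morris trajectory: k+1 points in the unit hypercube, where each
    consecutive pair differs in exactly one coordinate by delta."""
    rng = np.random.default_rng(rng)
    delta = p / (2.0 * (p - 1))
    # random base point on the grid {0, 1/(p-1), ...}, chosen so x + delta <= 1
    base = rng.integers(0, p // 2, size=k) / (p - 1)
    order = rng.permutation(k)
    points = [base]
    x = base
    for i in order:
        x = x.copy()
        x[i] += delta
        points.append(x)
    return np.array(points), order, delta

def elementary_effects(f, k, r=20, rng=0):
    """Morris mu*: mean absolute elementary effect of each of k inputs,
    estimated from r trajectories, i.e. r*(k+1) model runs (linear in k)."""
    effects = [[] for _ in range(k)]
    for t in range(r):
        pts, order, delta = morris_trajectory(k, rng=rng + t)
        y = [f(x) for x in pts]
        for step, i in enumerate(order):
            effects[i].append(abs(y[step + 1] - y[step]) / delta)
    return np.array([np.mean(e) for e in effects])

# toy model: strong effect of x0, weak effect of x1
mu_star = elementary_effects(lambda x: 3.0 * x[0] + 0.1 * x[1], k=2, r=10)
```

    In practice the spread (sigma) of the elementary effects is examined alongside mu* to flag the nonlinear or interacting parameters the abstract mentions.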

  11. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information

    SciTech Connect

    Covey, Curt; Lucas, Donald D.; Tannahill, John; Garaizar, Xabier; Klein, Richard

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.

  12. Accurate parameters for HD 209458 and its planet from HST spectrophotometry

    NASA Astrophysics Data System (ADS)

    del Burgo, C.; Allende Prieto, C.

    2016-12-01

    We present updated parameters for the star HD 209458 and its transiting giant planet. The stellar angular diameter θ = 0.2254 ± 0.0017 mas is obtained from the average ratio between the absolute flux observed with the Hubble Space Telescope and that of the best-fitting Kurucz model atmosphere. This angular diameter represents an improvement in precision of more than four times compared to available interferometric determinations. The stellar radius R⋆ = 1.20 ± 0.05 R⊙ is ascertained by combining the angular diameter with the Hipparcos trigonometric parallax, which is the main contributor to its uncertainty, and therefore the radius accuracy should be significantly improved with Gaia's measurements. The radius of the exoplanet Rp = 1.41 ± 0.06 RJ is derived from the corresponding transit depth in the light curve and our stellar radius. From the model fitting, we accurately determine the effective temperature, Teff = 6071 ± 20 K, which is in perfect agreement with the value of 6070 ± 24 K calculated from the angular diameter and the integrated spectral energy distribution. We also find precise values from recent Padova isochrones, such as R⋆ = 1.20 ± 0.06 R⊙ and Teff = 6099 ± 41 K. We arrive at a consistent picture from these methods and compare the results with those from the literature.
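    The radius determination described above combines the angular diameter with the trigonometric parallax. A minimal sketch, using the quoted theta = 0.2254 mas and an assumed Hipparcos-like parallax of about 20.2 mas (an illustrative value, not quoted from the paper):

```python
import math

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)   # milliarcsec -> radians
PC_TO_M = 3.0857e16                                # parsec -> metres
R_SUN_M = 6.957e8                                  # solar radius in metres

def stellar_radius_rsun(theta_mas, parallax_mas):
    """Radius in solar radii from angular diameter theta and parallax."""
    d_m = (1000.0 / parallax_mas) * PC_TO_M   # distance in metres
    theta_rad = theta_mas * MAS_TO_RAD        # angular diameter in radians
    return 0.5 * theta_rad * d_m / R_SUN_M    # R = theta * d / 2

# theta from the abstract; the parallax (~20.2 mas) is an assumed illustration
r = stellar_radius_rsun(0.2254, 20.2)
```

    With these inputs the radius comes out near the quoted 1.20 solar radii; the parallax uncertainty dominates the error budget, which is why Gaia was expected to improve the result.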

  13. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    SciTech Connect

    Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan

    2015-09-15

    The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α{sub 2} ≃ 2α{sub 1}.

  14. NASP: an accurate, rapid method for the identification of SNPs in WGS datasets that supports flexible input and output formats

    PubMed Central

    Travis, Jason; Schupp, James M.; Gillece, John D.; Aziz, Maliha; Driebe, Elizabeth M.; Drees, Kevin P.; Hicks, Nathan D.; Williamson, Charles Hall Davis; Hepp, Crystal M.; Smith, David Earl; Roe, Chandler; Engelthaler, David M.; Wagner, David M.; Keim, Paul

    2016-01-01

    Whole-genome sequencing (WGS) of bacterial isolates has become standard practice in many laboratories. Applications for WGS analysis include phylogeography and molecular epidemiology, using single nucleotide polymorphisms (SNPs) as the unit of evolution. NASP was developed as a reproducible method that scales well with the hundreds to thousands of WGS datasets typically used in comparative genomics applications. In this study, we demonstrate how NASP compares with other tools in the analysis of two real bacterial genomics datasets and one simulated dataset. Our results demonstrate that NASP produces similar, and often better, results in comparison with other pipelines, but is much more flexible in terms of data input types, job management systems, diversity of supported tools and output formats. We also demonstrate differences in results based on the choice of the reference genome and the choice of inferring phylogenies from concatenated SNPs or from alignments including monomorphic positions. NASP represents a source-available, version-controlled, unit-tested method and can be obtained from tgennorth.github.io/NASP. PMID:28348869

  15. Rainfall simulations on steep calanchi landscapes: Generating input parameters for physically based erosion modelling

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; Buchholz, Arno; Neugirg, Fabian; Schindewolf, Marcus

    2016-04-01

    Calanchi landscapes in central Italy have been subject to geoscientific research for many years, not exclusively but especially for questions regarding soil erosion and land degradation. Seasonal dynamics play an important role for morphological processes within the Calanchi. As in most Mediterranean landscapes, the long, dry summers at the Val d'Orcia research site end with heavy rainfall events in autumn. These events contribute most of the annual sediment output of the incised hollows and can cause damage to agricultural land and infrastructure. While research towards understanding Calanco development is of high importance, the complex morphology and thus limited accessibility impedes in situ work. To improve the understanding of morphodynamics without unnecessarily disturbing natural conditions, a remote sensing and erosion modelling approach was carried out in the presented work. UAV- and LiDAR-based very high resolution digital surface models were produced and served as input for the raster-based, physically based soil erosion model EROSION3D. Additionally, data on infiltration, runoff generation and sediment detachment were generated with artificial rainfall simulations - the most invasive but unavoidable method. To virtually extend the 1 m plot length to around 20 m, the sediment-loaded runoff water was reintroduced to the plot by a reflux system. Rather elaborate logistics were required to set up the simulator on strongly inclined slopes, to establish sufficient water supply and to secure the simulator on the slope, but the experiments produced plausible results and valuable input data for modelling. The model results are then compared to the repeated UAV and LiDAR campaigns and the resulting digital elevation models of difference. By simulating different rainfall and moisture scenarios and implementing in situ measured weather data, runoff-induced processes can be distinguished from gravitational slides and rockfall.

  16. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel.
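    A toy illustration of finding identifiable parameter combinations with Gröbner bases, assuming SymPy is available (the two-parameter example is hypothetical and far simpler than the biomodels treated in the paper):

```python
import sympy as sp

# Toy example: parameters a and b enter the input-output equation only
# through the observed coefficients c1 = a + b and c2 = a*b.
a, b, c1, c2 = sp.symbols('a b c1 c2')
G = sp.groebner([a + b - c1, a*b - c2], a, b, order='lex')

# With lex order a > b, the basis eliminates a: b satisfies
# b**2 - c1*b + c2 = 0, so a and b are only locally identifiable
# (interchangeable roots), while the combinations a + b and a*b
# are globally identifiable and can serve as the reparameterization.
```

    The paper's algorithm works analogously on the coefficients of the input-output equations, then proves a unique rational reparameterization over the resulting combinations.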

  17. Star Classification for the Kepler Input Catalog: From Images to Stellar Parameters

    NASA Astrophysics Data System (ADS)

    Brown, T. M.; Everett, M.; Latham, D. W.; Monet, D. G.

    2005-12-01

    The Stellar Classification Project is a ground-based effort to screen stars within the Kepler field of view, to allow removal of stars with large radii (and small potential transit signals) from the target list. Important components of this process are: (1) An automated photometry pipeline estimates observed magnitudes both for target stars and for stars in several calibration fields. (2) Data from calibration fields yield extinction-corrected AB magnitudes (with g, r, i, z magnitudes transformed to the SDSS system). We merge these with 2MASS J, H, K magnitudes. (3) The Basel grid of stellar atmosphere models yields synthetic colors, which are transformed to our photometric system by calibration against observations of stars in M67. (4) We combine the r magnitude and stellar galactic latitude with a simple model of interstellar extinction to derive a relation connecting {Teff, luminosity} to distance and reddening. For models satisfying this relation, we compute a chi-squared statistic describing the match between each model and the observed colors. (5) We create a merit function based on the chi-squared statistic, and on a Bayesian prior probability distribution which gives probability as a function of Teff, luminosity, log(Z), and height above the galactic plane. The stellar parameters ascribed to a star are those of the model that maximizes this merit function. (6) Parameter estimates are merged with positional and other information from extant catalogs to yield the Kepler Input Catalog, from which targets will be chosen. Testing and validation of this procedure are underway, with encouraging initial results.
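    Step (5) above can be sketched as a merit function combining the chi-squared color match with a Bayesian log-prior; the colors, errors, model names and prior values below are purely illustrative:

```python
import numpy as np

def merit(obs_colors, obs_err, model_colors, log_prior):
    """Merit for one model atmosphere: Bayesian log-prior minus half the
    chi-squared mismatch between observed and synthetic colors."""
    chi2 = np.sum(((np.asarray(obs_colors) - np.asarray(model_colors))
                   / np.asarray(obs_err)) ** 2)
    return log_prior - 0.5 * chi2

# two hypothetical model atmospheres; the maximum-merit model is adopted
obs = [0.65, 0.20, 0.35]           # illustrative colors, e.g. g-r, r-i, J-K
err = [0.02, 0.02, 0.03]
models = {"cool_dwarf": ([0.64, 0.21, 0.36], -1.0),
          "hot_giant":  ([0.80, 0.10, 0.20], -0.5)}
best = max(models, key=lambda name: merit(obs, err, *models[name]))
```

    In the actual pipeline the prior is a function of Teff, luminosity, log(Z), and height above the galactic plane, and the maximization runs over the full Basel model grid.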

  18. Consistency of VDJ Rearrangement and Substitution Parameters Enables Accurate B Cell Receptor Sequence Annotation.

    PubMed

    Ralph, Duncan K; Matsen, Frederick A

    2016-01-01

    VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM.

  19. Improving Rotor-Stator Interaction Noise Code Through Analysis of Input Parameters

    NASA Technical Reports Server (NTRS)

    Unton, Timothy J.

    2004-01-01

    There are two major sources of aircraft noise. The first is from the airframe and the second is from the engines. The focus of the acoustics branch at NASA Glenn is on the engine noise sources. There are two major sources of engine noise; fan noise and jet noise. Fan noise, produced by rotating machinery of the engine, consists of both tonal noise, which occurs at discrete frequencies, and broadband noise, which occurs across a wide range of frequencies. The focus of my assignment is on the broadband noise generated by the interaction of fan flow turbulence and the stator blades. such as the sweep and stagger angles and blade count, as well as the flow parameters such as intensity of turbulence in the flow. The tool I employed in this work is a computer program that predicts broadband noise from fans. The program assumes that the complex shape of the curved blade can be represented as a single flat plate, allowing it to use fairly simple equations that can be solved in a reasonable amount of time. While the results from such representation provided reasonable estimates of the broadband noise levels, they did not usually represent the entire spectrum accurately. My investigation found that the discrepancy between data and theory can be improved if the leading edge and the trailing edge of the blade are treated separately. Using this approach, I reduced the maximum error in noise level from a high of 30% to less than 5% for the cases investigated. Detailed results of this investigation will be discussed at my presentation. The objective of this study is to investigate the influence of geometric parameters

  20. Variance estimation of modal parameters from output-only and input/output subspace-based system identification

    NASA Astrophysics Data System (ADS)

    Mellinger, Philippe; Döhler, Michael; Mevel, Laurent

    2016-09-01

    An important step in the operational modal analysis of a structure is to infer on its dynamic behavior through its modal parameters. They can be estimated by various modal identification algorithms that fit a theoretical model to measured data. When output-only data is available, i.e. measured responses of the structure, frequencies, damping ratios and mode shapes can be identified assuming that ambient sources like wind or traffic excite the system sufficiently. When also input data is available, i.e. signals used to excite the structure, input/output identification algorithms are used. The use of input information usually provides better modal estimates in a desired frequency range. While the identification of the modal mass is not considered in this paper, we focus on the estimation of the frequencies, damping ratios and mode shapes, relevant for example for modal analysis during in-flight monitoring of aircraft. When identifying the modal parameters from noisy measurement data, the information on their uncertainty is most relevant. In this paper, new variance computation schemes for modal parameters are developed for four subspace algorithms, including output-only and input/output methods, as well as data-driven and covariance-driven methods. For the input/output methods, the known inputs are considered as realizations of a stochastic process. Based on Monte Carlo validations, the quality of identification, accuracy of variance estimations and sensor noise robustness are discussed. Finally these algorithms are applied to real measured data obtained during vibration tests of an aircraft.
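    The Monte Carlo validation idea, estimating the empirical variance of an identified modal frequency from repeated noisy realizations, can be sketched as follows (a toy single-mode example with FFT peak picking, not the subspace algorithms of the paper; all signal parameters are hypothetical):

```python
import numpy as np

def estimate_frequency(y, fs):
    """Identify the dominant frequency of a measured response via FFT peak."""
    Y = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    return freqs[np.argmax(Y[1:]) + 1]  # skip the DC bin

def monte_carlo_variance(f0=5.0, zeta=0.01, fs=100.0, T=20.0,
                         noise=0.1, runs=200, seed=0):
    """Empirical mean and variance of the identified frequency over
    repeated noisy realizations of a decaying single-mode response."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, 1.0 / fs)
    estimates = []
    for _ in range(runs):
        y = np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t)
        y += noise * rng.standard_normal(t.size)
        estimates.append(estimate_frequency(y, fs))
    return np.mean(estimates), np.var(estimates, ddof=1)

mean_f, var_f = monte_carlo_variance()
```

    The paper's contribution is to obtain such variances analytically, by perturbation of the subspace algorithms, rather than by this brute-force repetition.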

  1. A summary of the sources of input parameter values for the Waste Isolation Pilot Plant final porosity surface calculations

    SciTech Connect

    Butcher, B.M.

    1997-08-01

    A summary of the input parameter values used in final predictions of closure and waste densification in the Waste Isolation Pilot Plant disposal room is presented, along with supporting references. These predictions are referred to as the final porosity surface data and will be used for WIPP performance calculations supporting the Compliance Certification Application to be submitted to the U.S. Environmental Protection Agency. The report includes tables that list all of the input parameter values, references citing their sources, and in some cases references to more complete descriptions of the considerations leading to the selection of values.

  2. Guidance for Selecting Input Parameters in Modeling the Environmental Fate and Transport of Pesticides

    EPA Pesticide Factsheets

    Guidance to select and prepare input values for OPP's aquatic exposure models. Intended to improve the consistency in modeling the fate of pesticides in the environment and quality of OPP's aquatic risk assessments.

  3. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    DTIC Science & Technology

    2015-07-01

    analysis. The previous application of TREECS™ that was selected for study was the one for Demolition (Demo) Area 2 at Massachusetts Military Reservation... MODEL UNCERTAINTY ANALYSIS: As with all modeling, there are uncertainties in the prescribed model inputs. The uncertainty is greater for some inputs... plume centerline may not be well known in most cases. Usually, the centerline is assumed to align in the primary direction of Darcy flow, or along the

  4. On the reliability of voltage and power as input parameters for the characterization of high power ultrasound applications

    NASA Astrophysics Data System (ADS)

    Haller, Julian; Wilkens, Volker

    2012-11-01

    For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more exactly: the ratio of RMS voltage and RMS current) have been measured for a piezocomposite high intensity therapeutic ultrasound (HITU) transducer with integrated matching network, two piezoceramic HITU transducers with external matching networks and for a passive dummy 50 Ω load. The electrical power and the voltage were measured during high power application with an inline power meter and an RMS voltage meter, respectively, and the complex electrical impedance was indirectly measured with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, the indication of only the electrical input power or only the voltage as the input parameter may not be sufficient for reliable characterizations of ultrasound transducers for high power applications in some cases.

  5. Active vibration control of Flexible Joint Manipulator using Input Shaping and Adaptive Parameter Auto Disturbance Rejection Controller

    NASA Astrophysics Data System (ADS)

    Li, W. P.; Luo, B.; Huang, H.

    2016-02-01

    This paper presents a vibration control strategy for a two-link Flexible Joint Manipulator (FJM) with a Hexapod Active Manipulator (HAM). A dynamic model of the multi-body, rigid-flexible system composed of an FJM, a HAM and a spacecraft was built. A hybrid controller was proposed by combining the Input Shaping (IS) technique with an Adaptive-Parameter Auto Disturbance Rejection Controller (APADRC). The controller was used to suppress the vibration caused by external disturbances and input motions. Parameters of the APADRC were adaptively adjusted to ensure that the closed-loop system behaves as a given reference system, even if the configuration of the manipulator changes significantly during motion. Because precise parameters of the flexible manipulator are not required in the IS system, the operation of the controller was sufficiently robust to accommodate uncertainties in system parameters. Simulation results verified the effectiveness of the HAM scheme and controller in the vibration suppression of the FJM during operation.
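    The Input Shaping component of such a hybrid controller can be illustrated with the classic two-impulse Zero-Vibration (ZV) shaper; the abstract does not specify which shaper is used, so this is an assumed example:

```python
import math

def zv_shaper(freq_hz, zeta):
    """Two-impulse Zero-Vibration (ZV) input shaper for a flexible mode
    with natural frequency freq_hz (Hz) and damping ratio zeta.

    Convolving the command with these impulses cancels the residual
    vibration of the modeled mode."""
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    wd = 2.0 * math.pi * freq_hz * math.sqrt(1.0 - zeta ** 2)  # damped freq
    amps = (1.0 / (1.0 + K), K / (1.0 + K))   # impulse amplitudes (sum to 1)
    times = (0.0, math.pi / wd)               # second impulse at half period
    return amps, times
```

    The abstract's point about robustness holds here too: the ZV shaper needs only the modal frequency and damping, not a precise full model of the flexible manipulator.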

  6. Accurate measurement method of Fabry-Perot cavity parameters via optical transfer function

    SciTech Connect

    Bondu, Francois; Debieu, Olivier

    2007-05-10

    It is shown how the transfer function from frequency noise to a Pound-Drever-Hall signal for a Fabry-Perot cavity can be used to accurately measure cavity length, cavity linewidth, mirror curvature, misalignments, laser beam shape mismatching with resonant beam shape, and cavity impedance mismatching with respect to vacuum.
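    The link between cavity length, linewidth, and the frequency-noise-to-PDH transfer function can be sketched with a first-order low-pass model, a standard approximation well below the free spectral range (the length and finesse values below are hypothetical):

```python
import math

C = 299792458.0  # speed of light (m/s)

def cavity_pole_hz(length_m, finesse):
    """Cavity pole (half-linewidth) of a Fabry-Perot cavity: the frequency
    at which the frequency-noise-to-PDH transfer function rolls off."""
    fsr = C / (2.0 * length_m)          # free spectral range
    return fsr / (2.0 * finesse)        # pole = FWHM linewidth / 2

def pdh_transfer(f, length_m, finesse):
    """First-order low-pass model of the frequency-noise-to-PDH response,
    normalized to unity at DC."""
    fp = cavity_pole_hz(length_m, finesse)
    return 1.0 / complex(1.0, f / fp)
```

    Fitting a measured transfer function with such a model yields the pole frequency, and hence the linewidth and (given the finesse) the cavity length, which is the measurement principle the abstract describes.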

  7. Polynomial fitting of DT-MRI fiber tracts allows accurate estimation of muscle architectural parameters.

    PubMed

    Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua

    2012-06-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m(-1)), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm(3) voxel volume with isotropic resolution; 13.5-mm(3) volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m(-1)), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
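    The polynomial-fitting approach can be sketched as follows: fit each coordinate of a tract to a second-order polynomial in arc length, then evaluate the analytic curvature of the fit (a simplified noise-free illustration, not the study's pipeline):

```python
import numpy as np

def tract_curvature(points):
    """Fit each coordinate of a fiber tract to a 2nd-order polynomial in
    arc length, then evaluate curvature kappa = |r' x r''| / |r'|**3
    at the tract midpoint."""
    points = np.asarray(points)
    # approximate arc-length parameter from cumulative segment lengths
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    coeffs = [np.polyfit(s, points[:, i], 2) for i in range(points.shape[1])]
    s_mid = s[-1] / 2.0
    d1 = np.array([np.polyval(np.polyder(c, 1), s_mid) for c in coeffs])
    d2 = np.array([np.polyval(np.polyder(c, 2), s_mid) for c in coeffs])
    return np.linalg.norm(np.cross(d1, d2)) / np.linalg.norm(d1) ** 3
```

    On a noise-free circular arc the fit recovers kappa = 1/R closely; on noisy tracking data the low-order fit suppresses the noise-driven curvature inflation the abstract describes.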

  8. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094

  9. Accurate kinetic parameter estimation during progress curve analysis of systems with endogenous substrate production.

    PubMed

    Goudar, Chetan T

    2011-10-01

    We have identified an error in the published integral form of the modified Michaelis-Menten equation that accounts for endogenous substrate production. The correct solution is presented and the error in both the substrate concentration, S, and the kinetic parameters Vm, Km, and R resulting from the incorrect solution was characterized. The incorrect integral form resulted in substrate concentration errors as high as 50%, resulting in 7-50% error in kinetic parameter estimates. To better reflect experimental scenarios, noise-containing substrate depletion data were analyzed by both the incorrect and correct integral equations. While both equations resulted in identical fits to substrate depletion data, the final estimates of Vm, Km, and R were different, and the Km and R estimates from the incorrect integral equation deviated substantially from the actual values. Another observation was that at R = 0, the incorrect integral equation reduced to the correct form of the Michaelis-Menten equation. We believe this combination of excellent fits to experimental data, albeit with incorrect kinetic parameter estimates, and the reduction to the Michaelis-Menten equation at R = 0 is primarily responsible for the error going unnoticed. However, the resulting error in kinetic parameter estimates will lead to incorrect biological interpretation, and we urge the use of the correct integral form presented in this study.
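    The underlying model is dS/dt = -Vm*S/(Km + S) + R. One way to sidestep integral-form pitfalls entirely is to fit the progress curve against a numerical solution of the ODE; a hedged sketch assuming SciPy, with all parameter values purely illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def progress_curve(t, Vm, Km, R, S0):
    """Substrate depletion with Michaelis-Menten kinetics plus a constant
    endogenous production rate R: dS/dt = -Vm*S/(Km + S) + R."""
    rhs = lambda _, S: [-Vm * S[0] / (Km + S[0]) + R]
    sol = solve_ivp(rhs, (t[0], t[-1]), [S0], t_eval=t, rtol=1e-8, atol=1e-10)
    return sol.y[0]

# synthetic "measured" progress curve from known (hypothetical) parameters
t = np.linspace(0.0, 10.0, 40)
S_obs = progress_curve(t, 2.0, 1.0, 0.2, 5.0)

# recover Vm, Km, R, S0 by nonlinear least squares on the numerical solution
popt, _ = curve_fit(progress_curve, t, S_obs,
                    p0=[1.0, 0.5, 0.1, 4.0], bounds=(1e-6, 10.0))
```

    This numerical route gives the same parameters as a correct closed-form integral would; it is the incorrect published integral, not progress-curve fitting per se, that biases Km and R.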

  10. Accurate parameters of the oldest known rocky-exoplanet hosting system: Kepler-10 revisited

    SciTech Connect

    Fogtmann-Schulz, Alexandra; Hinrup, Brian; Van Eylen, Vincent; Christensen-Dalsgaard, Jørgen; Kjeldsen, Hans; Silva Aguirre, Víctor; Tingley, Brandon

    2014-02-01

Since the discovery of Kepler-10, the system has received considerable interest because it contains a small, rocky planet which orbits the star in less than a day. The system's parameters, announced by the Kepler team and subsequently used in further research, were based on only five months of data. We have reanalyzed this system using the full span of 29 months of Kepler photometric data, and obtained improved information about its star and the planets. A detailed asteroseismic analysis of the extended time series provides a significant improvement on the stellar parameters: not only can we state that Kepler-10 is the oldest known rocky-planet-harboring system at 10.41 ± 1.36 Gyr, but these parameters combined with improved planetary parameters from new transit fits give us the radius of Kepler-10b to within just 125 km. A new analysis of the full planetary phase curve leads to new estimates on the planetary temperature and albedo, which remain degenerate in the Kepler band. Our modeling suggests that the flux level during the occultation is slightly lower than at the transit wings, which would imply that the nightside of this planet has a non-negligible temperature.

  11. Accurate Parameters for the Most Massive Stars in the Local Universe: the Brightest Eclipsing Binaries in M33

    NASA Astrophysics Data System (ADS)

    Prieto, José L.; Bonanos, Alceste; Stanek, Krzysztof

    2007-08-01

    Eclipsing binaries are the only systems that provide accurate fundamental parameters of distant stars. Currently, only a handful of accurate measurements of stars with masses between 40-80 Msun have been made. We propose to make accurate measurements of the masses, radii and luminosities of the most massive eclipsing binaries in M33. The results of this study will provide much needed constraints on theories that model the formation and evolution of massive stars and binary systems. Furthermore, it will provide vital statistics on the occurrence of massive binary twins, like the 80+80 solar masses WR 20a system and the 30+30 solar masses detached eclipsing binary in M33.

  12. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
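The idea of bounding reliability without estimating the shape parameter can be illustrated with a simplified zero-failure (success-run) bound. This is a sketch under assumed test data, not the authors' derivation; with these particular numbers the minimum happens to fall at the edge of the β range rather than at an interior point.

```python
import numpy as np

def reliability_lower_bound(t, T, n, conf=0.90, betas=np.linspace(0.5, 5, 500)):
    """Conservative lower bound on Weibull reliability at mission time t,
    given n units tested failure-free to time T, minimized over a range of
    the shape parameter beta (so beta need not be estimated)."""
    # zero-failure binomial bound at test time T: R(T) >= (1-conf)**(1/n);
    # Weibull extrapolation to time t scales the exponent by (t/T)**beta
    bounds = (1 - conf) ** ((t / T) ** betas / n)
    i = np.argmin(bounds)
    return bounds[i], betas[i]

rl, beta_star = reliability_lower_bound(t=100.0, T=1000.0, n=30, conf=0.90)
print(round(rl, 4), round(beta_star, 2))
```

Taking the minimum over the β range yields a bound that holds whatever the true shape parameter is, which is the practical appeal for no-failure data.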

  13. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1991-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  14. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick Prescott filter is discussed. Numerical results involving actual patient data are presented.

  15. Methods to Register Models and Input/Output Parameters for Integrated Modeling

    SciTech Connect

    Droppo, James G.; Whelan, Gene; Tryby, Michael E.; Pelton, Mitchell A.; Taira, Randal Y.; Dorow, Kevin E.

    2010-07-10

Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework to bridge gaps in the user's knowledge of the components being linked. The framework, therefore, needs the capability to assimilate a wide range of model-specific input/output requirements as well as their associated assumptions and constraints. The process of assimilating such disparate components into an integrated modeling framework varies in complexity and difficulty. Several factors influence the relative ease of assimilating components, including, but not limited to, familiarity with the components being assimilated, familiarity with the framework and its tools that support the assimilation process, the level of documentation associated with the components and the framework, and the design structure of the components and framework. This initial effort reviews different approaches for assimilating models and their model-specific input/output requirements: 1) modifying component models to directly communicate with the framework (i.e., through an Application Programming Interface), 2) developing model-specific external wrappers such that no component model modifications are required, 3) using parsing tools to visually map pre-existing input/output files, and 4) describing and linking models as dynamic link libraries. Most of these approaches are illustrated using the widely distributed modeling system called Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES). The review concludes that each approach has strengths and weaknesses that determine which works best in a given application.

  16. Revised Charge Equilibration Parameters for More Accurate Hydration Free Energies of Alkanes.

    PubMed

    Davis, Joseph E; Patel, Sandeep

    2010-01-01

    We present a refined alkane charge equilibration (CHEQ) force field, improving our previously reported CHEQ alkane force field[1] to better reproduce experimental hydration free energies. Experimental hydration free energies of ethane, propane, butane, pentane, hexane, and heptane are reproduced to within 3.6% on average. We demonstrate that explicit polarization results in a shift in molecular dipole moment for water molecules associated with the alkane molecule. We also show that our new parameters do not have a significant effect on the alkane-water interactions as measured by the radial distribution function (RDF).

  17. Accurate prediction of severe allergic reactions by a small set of environmental parameters (NDVI, temperature).

    PubMed

    Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions.

  18. Accurate Prediction of Severe Allergic Reactions by a Small Set of Environmental Parameters (NDVI, Temperature)

    PubMed Central

    Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions. PMID:25794106

  19. An Integrated Bayesian Uncertainty Estimator: fusion of Input, Parameter and Model Structural Uncertainty Estimation in Hydrologic Prediction System

    NASA Astrophysics Data System (ADS)

    Ajami, N. K.; Duan, Q.; Sorooshian, S.

    2005-12-01

To date, single conceptual hydrologic models have often been applied to interpret physical processes within a watershed. Nevertheless, hydrologic models, regardless of their sophistication and complexity, are simplified representations of a complex, spatially distributed, and highly nonlinear real-world system. Consequently, their hydrologic predictions contain considerable uncertainty from different sources, including hydrometeorological forcing inputs, boundary/initial conditions, model structure, and model parameters, all of which need to be accounted for. Thus far, efforts have addressed these sources of uncertainty separately, making an implicit assumption that uncertainties from different sources are additive. Nevertheless, because of the nonlinear nature of hydrologic systems, it is not feasible to account for these uncertainties independently. Here we present the Integrated Bayesian Uncertainty Estimator (IBUNE), which accounts for total uncertainty from all major sources: forcing inputs, model structure, and model parameters. The algorithm explores a multi-model framework to tackle model structural uncertainty while using Bayesian rules to estimate parameter and input uncertainty within individual models. Three hydrologic models, the SACramento Soil Moisture Accounting (SAC-SMA) model, the Hydrologic model (HYMOD), and the Simple Water Balance (SWB) model, were considered within the IBUNE framework for this study. The results, presented for the Leaf River Basin, MS, indicate that IBUNE gives a better quantification of uncertainty through the hydrological modeling process, therefore providing more reliable and less biased predictions with realistic uncertainty boundaries.

  20. Optimization of input parameters of acoustic-transfection for the intracellular delivery of macromolecules using FRET-based biosensors

    NASA Astrophysics Data System (ADS)

    Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.

    2016-03-01

An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer and a fluorescence microscope. High-frequency ultrasound with a center frequency over 150 MHz can focus an acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell imaging was performed to investigate the behavior of a targeted cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied peak-to-peak voltages and pulse duration to optimize the input parameters of an acoustic pulse. Input parameters that induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the PI fluorescence intensity increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.

  1. A Sensitivity Study of Liric Algorithm to User-defined Input Parameters, Using Selected Cases from Thessaloniki's Measurements

    NASA Astrophysics Data System (ADS)

    Filioglou, M.; Balis, D.; Siomos, N.; Poupkou, A.; Dimopoulos, S.; Chaikovsky, A.

    2016-06-01

A targeted sensitivity study of the LIRIC algorithm was considered necessary to estimate the uncertainty introduced into the volume concentration profiles by the arbitrary selection of user-defined input parameters. For this purpose, three different tests were performed using Thessaloniki's lidar data: tests on the selection of the regularization parameters, an upper-limit test, and a lower-limit test. The different sensitivity tests were applied to two cases with different predominant aerosol types: a dust episode and a typical urban case.

  2. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    PubMed Central

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-01-01

With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. A common method is to measure vehicle stability parameters by fusing data from GPS and INS sensors. A Kalman filter is usually used to fuse data from multiple sensors, although it requires prior knowledge of the model parameters. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment are performed to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
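The GPS/INS fusion idea can be reduced to a one-dimensional sketch. This is illustrative only (the paper's filter is a two-stage design over a four-wheel vehicle model): dead-reckon with a high-rate INS-style velocity in the prediction step and correct with sparse GPS-style position fixes.

```python
import numpy as np

# minimal 1-D Kalman filter: integrate a high-rate INS-style velocity as the
# prediction step and correct with low-rate GPS-style position fixes
def kalman_fuse(vel_ins, pos_gps, dt, q=0.01, r_gps=1.0):
    x, p = 0.0, 1.0            # position estimate and its variance
    est = []
    for k, v in enumerate(vel_ins):
        x += v * dt            # predict: dead-reckon with INS velocity
        p += q                 # process noise inflates uncertainty
        z = pos_gps[k]
        if z is not None:      # correct whenever a GPS fix is available
            gain = p / (p + r_gps)
            x += gain * (z - x)
            p *= (1 - gain)
        est.append(x)
    return np.array(est)

dt, n = 0.1, 100
rng = np.random.default_rng(1)
truth = 2.0 * dt * np.arange(1, n + 1)             # constant 2 m/s motion
vel = 2.0 + rng.normal(0, 0.2, n)                  # noisy INS velocity
gps = [truth[k] + rng.normal(0, 1.0) if k % 10 == 9 else None
       for k in range(n)]
est = kalman_fuse(vel, gps, dt)
print(abs(est[-1] - truth[-1]))
```

The complementary behavior is the point: INS integration drifts between fixes, while the noisy GPS fixes pull the estimate back without injecting their full measurement noise.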

  3. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.

    PubMed

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-12-04

With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. A common method is to measure vehicle stability parameters by fusing data from GPS and INS sensors. A Kalman filter is usually used to fuse data from multiple sensors, although it requires prior knowledge of the model parameters. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment are performed to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.

  4. Accurate structure and dynamics of the metal-site of paramagnetic metalloproteins from NMR parameters using natural bond orbitals.

    PubMed

    Hansen, D Flemming; Westler, William M; Kunze, Micha B A; Markley, John L; Weinhold, Frank; Led, Jens J

    2012-03-14

    A natural bond orbital (NBO) analysis of unpaired electron spin density in metalloproteins is presented, which allows a fast and robust calculation of paramagnetic NMR parameters. Approximately 90% of the unpaired electron spin density occupies metal-ligand NBOs, allowing the majority of the density to be modeled by only a few NBOs that reflect the chemical bonding environment. We show that the paramagnetic relaxation rate of protons can be calculated accurately using only the metal-ligand NBOs and that these rates are in good agreement with corresponding rates measured experimentally. This holds, in particular, for protons of ligand residues where the point-dipole approximation breaks down. To describe the paramagnetic relaxation of heavy nuclei, also the electron spin density in the local orbitals must be taken into account. Geometric distance restraints for (15)N can be derived from the paramagnetic relaxation enhancement and the Fermi contact shift when local NBOs are included in the analysis. Thus, the NBO approach allows us to include experimental paramagnetic NMR parameters of (15)N nuclei as restraints in a structure optimization protocol. We performed a molecular dynamics simulation and structure determination of oxidized rubredoxin using the experimentally obtained paramagnetic NMR parameters of (15)N. The corresponding structures obtained are in good agreement with the crystal structure of rubredoxin. Thus, the NBO approach allows an accurate description of the geometric structure and the dynamics of metalloproteins, when NMR parameters are available of nuclei in the immediate vicinity of the metal-site.

  5. Determination of critical nondimensional parameters in aircraft dynamic response to random input

    NASA Technical Reports Server (NTRS)

    Hillard, S. E.; Sevik, M. M.

    1974-01-01

    The critical parameters of subsonic jet aircraft response in a random atmospheric environment are determined. Equations of motion are presented for semirigid aircraft with a flexible primary airfoil. However, the analysis is easily extendable to include additional appendage flexibility. The analysis establishes the mechanical admittance values for pitching, plunging, and the first mode effects from wing elastic bending and torsion. Nondimensional parameters are established which allow the representation of all subsonic jet transport aircraft with one nondimensional model. The critical parameters for random forcing are found to be aircraft relative mass, reduced natural and forcing frequencies, and Mach number. Turbulence scale lengths are found to be directly related to the critical values of reduced forcing frequency. Results are given for subsonic craft traveling at constant altitude. Specific values of admittance functions are tabulated at Mach numbers of 0.2, 0.5, and 0.7. The relative mass range covers all aircraft currently in operation.

  6. Evaluation of severe accident risks: Quantification of major input parameters. Experts` determination of structural response issues

    SciTech Connect

    Breeding, R.J.; Harper, F.T.; Brown, T.D.; Gregory, J.J.; Payne, A.C.; Gorham, E.D.; Murfin, W.; Amos, C.N.

    1992-03-01

In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom, and Grand Gulf. The emphasis in this risk analysis was not on determining a so-called point estimate of risk; rather, it was to determine the distribution of risk, and to discover the uncertainties that account for the breadth of this distribution. Off-site risk initiated by events both internal and external to the power station was assessed. Much of the important input to the logic models was generated by expert panels. This document presents the distributions, and the rationale supporting them, for the questions posed to the Structural Response Panel.

  7. Optimal input experiment design and parameter estimation in core-scale pressure oscillation experiments

    NASA Astrophysics Data System (ADS)

    Potters, M. G.; Mansoori, M.; Bombois, X.; Jansen, J. D.; Van den Hof, P. M. J.

    2016-03-01

    This paper considers Pressure Oscillation (PO) experiments for which we find the minimum experiment time that guarantees user-imposed parameter variance upper bounds and honours actuator limits. The parameters permeability and porosity are estimated with a classical least-squares estimation method for which an expression of the covariance matrix of the estimates is calculated. This expression is used to tackle the optimization problem. We study the Dynamic Darcy Cell experiment set-up (Heller et al., 2002) and focus on data generation using square wave actuator signals, which, as we shall prove, deliver shorter experiment times than sinusoidal ones. Parameter identification is achieved using either inlet pressure/outlet pressure measurements (Heller et al., 2002) or actuator position/outlet pressure measurements, where the latter is a novel approach. The solution to the optimization problem reveals that for both measurement methods an optimal excitation frequency, an optimal inlet volume, and an optimal outlet volume exist. We find that under the same parameter variance bounds and actuator constraints, actuator position/outlet pressure measurements result in required experiment times that are a factor fourteen smaller compared to inlet pressure/outlet pressure measurements. This result is analysed in detail and we find that the dominant effect driving this difference originates from an identifiability problem when using inlet-outlet pressure measurements for joint estimation of permeability and porosity. We illustrate our results with numerical simulations, and show excellent agreement with theoretical expectations.

  8. Impact of input parameters on the prediction of hepatic plasma clearance using the well-stirred model.

    PubMed

    Wan, Hong; Bold, Peter; Larsson, Lars-Olof; Ulander, Johan; Peters, Sheila; Löfberg, Boel; Ungell, Anna-Lena; Någård, Mats; Llinàs, Antonio

    2010-09-01

In vitro metabolic stability assays are indispensable for screening the metabolic liability of new chemical entities (NCEs) in drug discovery. Intrinsic clearance (CL(int)) values from liver microsomes and/or hepatocytes are frequently used to assess metabolic stability as well as to quantitatively predict in vivo hepatic plasma clearance (CL(H)). An often-used approximation is the so-called well-stirred model, which has gained widespread use. Applications of the well-stirred model typically depend on several measured parameters and hence carry potential for error propagation. Despite this widespread use, it was recently suggested that the well-stirred model has in some circumstances been misused for in vitro-in vivo extrapolation (IVIVE). In this work, we follow up that discussion and present a retrospective analysis of IVIVE for hepatic clearance prediction from in vitro metabolic stability data. We focus on the impact of input parameters on the well-stirred model, in particular comparing a "reference model" (with all experimentally determined values as input parameters) versus simplified models (with incomplete input parameters). Based on a systematic comparative analysis and model comparison using datasets of diverse drug-like compounds and NCEs from rat and human, we conclude that simplified models, disregarding binding data, may be sufficiently good for IVIVE evaluation and compound ranking at an early stage for cost-effective screening. Factors that can influence prediction accuracy are discussed, including the in vitro intrinsic clearance (CL(int)) and in vivo CL(int) scaling factor used, non-specific binding to microsomes (fu(m)), blood-to-plasma ratio (C(B)/C(P)), and in particular the fraction unbound in plasma (fu).
In particular, the fu discrepancies between literature data and in-house values and between two different compound concentrations, 1 and 10 µM, are exemplified and their potential impact on prediction performance is demonstrated using a
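The well-stirred model itself is compact enough to state directly. The sketch below (the hepatic blood flow value and example numbers are illustrative, not from the paper) shows how dropping the plasma unbound fraction fu, one of the input parameters discussed, changes the predicted CL(H):

```python
def hepatic_clearance_well_stirred(clint, fu, q_h=20.7, cb_cp=1.0):
    """Well-stirred model: CL_H = Q_H * fu_B * CL_int / (Q_H + fu_B * CL_int),
    with fu_B = fu / (CB/CP) the unbound fraction referenced to blood.
    Units: mL/min/kg; q_h = 20.7 is a commonly quoted human hepatic blood flow."""
    fu_b = fu / cb_cp
    return q_h * fu_b * clint / (q_h + fu_b * clint)

# dropping plasma protein binding (fu = 1) versus using a measured fu = 0.1
# illustrates the sensitivity of the prediction to this input parameter
print(round(hepatic_clearance_well_stirred(clint=50.0, fu=0.1), 2))   # -> 4.03
print(round(hepatic_clearance_well_stirred(clint=50.0, fu=1.0), 2))   # -> 14.64
```

For this hypothetical compound the prediction changes more than threefold depending on whether binding data are included, which is the sensitivity the abstract examines.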

  9. An Integrated Hydrologic Bayesian Multi-Model Combination Framework: Confronting Input, parameter and model structural uncertainty in Hydrologic Prediction

    SciTech Connect

    Ajami, N K; Duan, Q; Sorooshian, S

    2006-05-05

This paper presents a new technique, the Integrated Bayesian Uncertainty Estimator (IBUNE), to explicitly account for the major uncertainties of hydrologic rainfall-runoff predictions. The uncertainties from the input (forcing) data, mainly the precipitation observations, and from the model parameters are reduced through a Markov Chain Monte Carlo (MCMC) scheme named the Shuffled Complex Evolution Metropolis (SCEM) algorithm, which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model outputs. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi, is used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing error or model structural uncertainty will lead to unrealistic model simulations and associated uncertainty bounds that do not consistently capture and represent the real-world behavior of the watershed.
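The BMA combination step can be sketched as follows. This is a simplified version with a fixed Gaussian error variance and equal model priors (the full scheme estimates the variances, e.g., by EM), and the observations and model outputs are made up for illustration:

```python
import numpy as np

def bma_weights(preds, obs, sigma=1.0):
    """Simplistic BMA weights: posterior model probabilities from Gaussian
    likelihoods with equal priors and a shared, fixed error variance."""
    preds, obs = np.asarray(preds, float), np.asarray(obs, float)
    loglik = -0.5 * np.sum((preds - obs) ** 2, axis=1) / sigma**2
    w = np.exp(loglik - loglik.max())   # subtract max for numerical stability
    return w / w.sum()

obs = np.array([1.0, 2.0, 3.0, 4.0])
preds = np.array([[1.1, 2.1, 2.9, 4.2],    # model A: close to obs
                  [0.5, 2.5, 3.5, 3.0],    # model B: worse
                  [2.0, 1.0, 4.0, 5.0]])   # model C: worst
w = bma_weights(preds, obs)
combined = w @ preds                        # BMA mean prediction
print(np.round(w, 3))                       # -> [0.636 0.275 0.089]
```

The combined prediction leans toward the better-performing model without discarding the others, which is how the multi-model framework carries structural uncertainty into the final forecast.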

  10. Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems

    NASA Technical Reports Server (NTRS)

    Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.

    2005-01-01

The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design, because worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
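The contrast between interval bounds and probabilistic analysis can be sketched with plain Monte Carlo sampling (not the hybrid reliability method of the paper) applied to the percent overshoot of a second-order step response; the damping-ratio distribution below is an assumed example:

```python
import numpy as np

# percent overshoot of an underdamped second-order system vs damping ratio
def overshoot(zeta):
    return 100.0 * np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta**2))

rng = np.random.default_rng(42)
# probabilistic description: zeta ~ N(0.6, 0.05), kept inside (0, 1)
zeta = np.clip(rng.normal(0.6, 0.05, 100_000), 0.01, 0.99)
os_mc = overshoot(zeta)

# interval analysis reports only the extremes at the 3-sigma parameter bounds...
print("interval:", round(overshoot(0.45), 1), "to", round(overshoot(0.75), 1))
# ...while Monte Carlo also yields the likelihood structure of the response
print("mean +/- std:", round(os_mc.mean(), 1), round(os_mc.std(), 1))
```

The interval answer spans roughly an order of magnitude, while the sampled distribution shows most of the probability mass concentrated near the nominal overshoot, which is exactly the "added information" a probabilistic analysis provides.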

  11. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
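A Weibull-type saccharification curve and the role of λ can be sketched as follows; the time-course data below are made up for illustration, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull-type saccharification curve: conversion approaches ymax, and the
# characteristic time lam is when 1 - 1/e (~63.2%) of ymax is reached
def weibull_yield(t, ymax, lam, n):
    return ymax * (1.0 - np.exp(-(t / lam) ** n))

t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)      # hours (illustrative)
y = np.array([8.0, 15.0, 27.0, 36.0, 55.0, 70.0, 74.0])   # % conversion

popt, _ = curve_fit(weibull_yield, t, y, p0=[80.0, 20.0, 1.0],
                    bounds=(0, np.inf))
ymax, lam, n = popt
print(lam)   # a smaller lam indicates faster overall saccharification
```

Because λ is the time to reach a fixed fraction (about 63.2%) of the final conversion, it summarizes the whole time course in one number, which is why it works as a single overall performance metric.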

  12. Isotope parameters (δD, δ18O) and sources of freshwater input to Kara Sea

    NASA Astrophysics Data System (ADS)

    Dubinina, E. O.; Kossova, S. A.; Miroshnikov, A. Yu.; Fyaizullina, R. V.

    2017-01-01

    The isotope characteristics (δD, δ18O) of Kara Sea water were studied for quantitative estimation of freshwater runoff at stations located along a transect from the Yamal Peninsula to Blagopoluchiya Bay (Novaya Zemlya). Freshwater samples were studied for glaciers (Rose, Serp i Molot) and for the Yenisei and Ob estuaries. As a whole, δD and δ18O are higher in glaciers than in river waters. The isotope composition of estuarial water from the Ob River is δD = -131.4‰ and δ18O = -17.6‰. Estuarial waters of the Yenisei River are characterized by compositions close to those of the Ob River (-134.4 and -17.7‰), as well as by isotopically "heavier" compositions (-120.7 and -15.8‰). Waters from the studied section of the Kara Sea can be described as a product of mixing of freshwater (δD = -119.4‰, δ18O = -15.5‰) and seawater (S = 34.9, δD = +1.56‰, δ18O = +0.25‰) with a composition close to that of Barents Sea water. The isotope parameters of water vary significantly with salinity in the surface layer, and Kara Sea waters are desalinated along the entire studied transect due to river runoff. The concentration of freshwater is 5-10% in the main part of the water column, and <5% at depths >100 m. The maximum contribution of freshwater (>65%) was recorded in the surface layer of the central part of the sea.
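
    Freshwater fractions of the kind quoted above follow from standard two-endmember mixing between the freshwater and seawater δ18O compositions given in the abstract; a sketch (the sample value is hypothetical):

```python
def freshwater_fraction(d18o_sample, d18o_fresh=-15.5, d18o_sea=0.25):
    """Two-endmember mixing: freshwater fraction of a sample from its d18O.
    Endmember values (per mil) are the freshwater and seawater compositions
    reported in the abstract."""
    return (d18o_sample - d18o_sea) / (d18o_fresh - d18o_sea)

# Hypothetical mid-column sample
f = freshwater_fraction(-1.3)
print(f"freshwater fraction: {100 * f:.1f}%")  # ≈ 9.8%
```

    A δ18O of about -1.3‰ thus corresponds to the ~10% freshwater content reported for the main part of the water column.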

  13. Simulation and Flight Evaluation of a Parameter Estimation Input Design Method for Hybrid-Wing-Body Aircraft

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.; Ratnayake, Nalin A.

    2010-01-01

    As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.

  14. Linear-In-The-Parameters Oblique Least Squares (LOLS) Provides More Accurate Estimates of Density-Dependent Survival

    PubMed Central

    Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure

    2016-01-01

    Survival is a fundamental demographic component, and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies about ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of survival function parameters, while remaining robust to the data aspects that usually render regression methods numerically unstable. PMID:27936048

  15. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE) to probe the internal structure of the medium is investigated in the present work. The forward model of FD-RTE is solved via the finite volume method (FVM). The regularization term formulated by the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates that the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).

  16. Sensitivity of soil water content simulation to different methods of soil hydraulic parameter characterization as initial input values

    NASA Astrophysics Data System (ADS)

    Rezaei, Meisam; Seuntjens, Piet; Shahidi, Reihaneh; Joris, Ingeborg; Boënne, Wesley; Cornelis, Wim

    2016-04-01

    Soil hydraulic parameters, which can be derived from in situ and/or laboratory experiments, are key input parameters for modeling water flow in the vadose zone. In this study, we measured soil hydraulic properties with typical laboratory measurements and field tension infiltration experiments, using Wooding's analytical solution and inverse optimization, along the vertical direction within two typical podzol profiles with sand texture in a potato field. The objective was to identify proper sets of hydraulic parameters and to evaluate their relevance for hydrological model performance for irrigation management purposes. Tension disc infiltration experiments were carried out at five different depths for both profiles at consecutive negative pressure heads of 12, 6, 3 and 0.1 cm. At the same locations and depths, undisturbed samples were taken to determine the water retention curve with hanging water column and pressure extractors, and lab saturated hydraulic conductivity with the constant head method. Both approaches allowed us to determine the Mualem-van Genuchten (MVG) hydraulic parameters (residual water content θr, saturated water content θs, shape parameters α and n, and field or lab saturated hydraulic conductivity Kfs and Kls). Results demonstrated horizontal differences and vertical variability of hydraulic properties. Inverse optimization resulted in excellent matches between observed and fitted infiltration rates in combination with final water content at the end of the experiment, θf, using Hydrus 2D/3D. It also resulted in close correspondence of  and Kfs with those from Logsdon and Jaynes' (1993) solution of Wooding's equation. The MVG parameters Kfs and α estimated from the inverse solution (θr set to zero) were relatively similar to values from Wooding's solution, which were used as initial values, and the estimated θs corresponded to the (effective) field saturated water content θf. We found the Gardner parameter αG to be related to the optimized van
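
    The MVG retention curve underlying the parameters θr, θs, α and n discussed above can be sketched as follows; the parameter values are illustrative textbook values for a sand, not those measured in the study:

```python
def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Mualem-van Genuchten water retention curve.
    h: pressure head in cm (negative for unsaturated conditions);
    alpha in 1/cm; m is tied to n via the Mualem constraint m = 1 - 1/n."""
    if h >= 0:
        return theta_s
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

# Illustrative parameters for a sand: theta_r, theta_s, alpha (1/cm), n
theta = [van_genuchten(h, 0.045, 0.43, 0.145, 2.68)
         for h in (0, -10, -100, -1000)]
print([f"{t:.3f}" for t in theta])  # water content drops steeply with suction
```

    Fitting this curve to the hanging-water-column and pressure-extractor data is what yields the θr, θs, α and n values the study compares across depths.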

  17. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.

  18. Accurate characterization of the stellar and orbital parameters of the exoplanetary system WASP-33 b from orbital dynamics

    NASA Astrophysics Data System (ADS)

    Iorio, L.

    2016-01-01

    By using the most recently published Doppler tomography measurements and accurate theoretical modelling of the oblateness-driven orbital precessions, we tightly constrain some of the physical and orbital parameters of the planetary system hosted by the fast-rotating star WASP-33. In particular, the measurements of the orbital inclination ip to the plane of the sky and of the sky-projected spin-orbit misalignment λ at two epochs about six years apart allowed for the determination of the longitude of the ascending node Ω and of the orbital inclination I to the apparent equatorial plane at the same epochs. As a consequence, average rates of change dΩ/dt and dI/dt of these two orbital elements, accurate to a ≈10-2 deg yr-1 level, were calculated as well. By comparing them to general theoretical expressions for the precessions induced by an oblate star whose symmetry axis is arbitrarily oriented, we were able to determine the angle i⋆ between the line of sight and the star's spin S⋆, along with its first even zonal harmonic J2⋆, obtaining i⋆ = 142 (+10/-11) deg and J2⋆ = 2.1 (+0.8/-0.5) × 10-4. As a by-product, the angle ψ between S⋆ and the orbital angular momentum L is as large as about 100°: ψ(2008) = 99 (+5/-4) deg, ψ(2014) = 103 (+5/-4) deg, changing at a rate dψ/dt = 0.7 (+1.5/-1.6) deg yr-1. The predicted general relativistic Lense-Thirring precessions, of the order of ≈10-3 deg yr-1, are, at present, about one order of magnitude below the measurability threshold.

  19. Sampling of Stochastic Input Parameters for Rockfall Calculations and for Structural Response Calculations Under Vibratory Ground Motion

    SciTech Connect

    M. Gross

    2004-09-01

    The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficients for analysis of waste package and drip shield damage due to vibratory ground motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the

  20. Regulatory consideration of bioavailability for metals: simplification of input parameters for the chronic copper biotic ligand model.

    PubMed

    Peters, Adam; Merrington, Graham; de Schamphelaere, Karel; Delbeke, Katrien

    2011-07-01

    The chronic Cu biotic ligand model (CuBLM) provides a means by which the bioavailability of Cu can be taken into account in assessing the potential chronic risks posed by Cu at specific freshwater locations. One of the barriers to the widespread regulatory application of the CuBLM is the perceived complexity of the approach when compared to the current systems that are in place in many regulatory organizations. The CuBLM requires 10 measured input parameters, although some of these have a relatively limited influence on the predicted no-effect concentration (PNEC) for Cu. Simplification of the input requirements of the CuBLM is proposed by estimating the concentrations of the major ions Mg2+, Na+, K+, SO4(2-), Cl- , and alkalinity from Ca concentrations. A series of relationships between log10 (Ca, mg l(-1)) and log10 (major ion, mg l(-1)) was established from surface water monitoring data for Europe, and applied in the prediction of Cu PNEC values for some UK freshwater monitoring data. The use of default values for major ion concentrations was also considered, and both approaches were compared to the use of measured major ion concentrations. Both the use of fixed default major ion concentrations, and major ion concentrations estimated from Ca concentrations, provided Cu PNEC predictions which were in good agreement with the results of calculations using measured data. There is a slight loss of accuracy when using estimates of major ion concentrations compared to using measured concentration data, although to a lesser extent than when fixed default values are applied. The simplifications proposed provide a practical evidence-based methodology to facilitate the regulatory implementation of the CuBLM.
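
    The proposed simplification, estimating a major ion concentration from Ca via a log10-log10 relationship, can be sketched as follows; the slope and intercept used here are purely illustrative placeholders, not the regression coefficients derived in the paper:

```python
import math

def major_ion_from_ca(ca_mg_l, slope, intercept):
    """Estimate a major ion concentration (mg/l) from Ca (mg/l) using a
    log10-log10 linear relationship: log10(ion) = slope*log10(Ca) + intercept.
    Coefficients must come from fitted monitoring data; these are illustrative."""
    return 10 ** (slope * math.log10(ca_mg_l) + intercept)

# Hypothetical coefficients for Mg as a function of Ca (NOT the paper's values)
mg = major_ion_from_ca(40.0, slope=0.9, intercept=-0.55)
print(f"estimated Mg: {mg:.1f} mg/l")
```

    In regulatory use, estimates like this replace measured Mg, Na, K, SO4 and Cl inputs to the CuBLM when only Ca and the remaining mandatory parameters are available.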

  1. Modeling the Effects of Irrigation on Land Surface Fluxes and States over the Conterminous United States: Sensitivity to Input Data and Model Parameters

    SciTech Connect

    Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong; Sacks, William J.; Lei, Huimin; Leung, Lai-Yung R.

    2013-09-16

    Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.

  2. Transportation radiological risk assessment for the programmatic environmental impact statement: An overview of methodologies, assumptions, and input parameters

    SciTech Connect

    Monette, F.; Biwer, B.; LePoire, D.; Chen, S.Y.

    1994-02-01

    The U.S. Department of Energy is considering a broad range of alternatives for the future configuration of radioactive waste management at its network of facilities. Because the transportation of radioactive waste is an integral component of the management alternatives being considered, the estimated human health risks associated with both routine and accident transportation conditions must be assessed to allow a complete appraisal of the alternatives. This paper provides an overview of the technical approach being used to assess the radiological risks from the transportation of radioactive wastes. The approach presented employs the RADTRAN 4 computer code to estimate the collective population risk during routine and accident transportation conditions. Supplemental analyses are conducted using the RISKIND computer code to address areas of specific concern to individuals or population subgroups. RISKIND is used for estimating routine doses to maximally exposed individuals and for assessing the consequences of the most severe credible transportation accidents. The transportation risk assessment is designed to ensure -- through uniform and judicious selection of models, data, and assumptions -- that relative comparisons of risk among the various alternatives are meaningful. This is accomplished by uniformly applying common input parameters and assumptions to each waste type for all alternatives. The approach presented can be applied to all radioactive waste types and provides a consistent and comprehensive evaluation of transportation-related risk.

  3. User's manual for a parameter identification technique. [with options for model simulation for fixed input forcing functions and identification from wind tunnel and flight measurements

    NASA Technical Reports Server (NTRS)

    Kanning, G.

    1975-01-01

    A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.

  4. The sensitivity of conduit flow models to basic input parameters: there is no need for magma trolls!

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2012-04-01

    Many conduit flow models now exist and some of these models are becoming extremely complicated, conducted in three dimensions and incorporating the physics of compressible three-phase fluids (magmas), intricate conduit geometries and fragmentation processes, to name but a few examples. These highly specialised models are being used to explain observations of the natural system, and there is a danger that possible explanations may be getting needlessly complex. It is coherent, for instance, to propose the involvement of sub-surface dwelling magma trolls as an explanation for a change in a volcano's eruptive style, but assuming the simplest explanation would prevent such additions, unless they were absolutely necessary. While the understanding of individual, often small-scale conduit processes is increasing rapidly, is this level of detail necessary? How sensitive are these models to small changes in the most basic of governing parameters? Can these changes be used to explain observed behaviour? Here we will examine the sensitivity of conduit flow models to changes in the melt viscosity, one of the fundamental inputs to any such model. However, even addressing this elementary issue is not straightforward. There are several viscosity models in existence; how do they differ? Can models that use different viscosity models be realistically compared? Each of these viscosity models is also heavily dependent on the magma composition and/or temperature, and how well are these variables constrained? Magma temperatures and water contents are often assumed as "ball-park" figures, and are very rarely exactly known for the periods of observation the models are attempting to explain, yet they exert a strong controlling influence on the melt viscosity. The role of both these variables will be discussed. For example, using one of the available viscosity models, a 20 K decrease in temperature of the melt results in a greater than 100% increase in the melt viscosity. 
With changes of
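
    The quoted sensitivity of melt viscosity to temperature can be illustrated with a Vogel-Fulcher-Tammann (VFT) style parameterization; the coefficients below are illustrative round numbers, not those of any specific viscosity model cited in the abstract:

```python
import math

def vft_log10_viscosity(T, A=-4.55, B=7000.0, C=500.0):
    """Vogel-Fulcher-Tammann form: log10(viscosity, Pa s) = A + B/(T - C).
    T in kelvin; A, B, C are illustrative coefficients, not fitted values."""
    return A + B / (T - C)

eta_hot = 10 ** vft_log10_viscosity(1000.0)
eta_cool = 10 ** vft_log10_viscosity(980.0)
increase = 100.0 * (eta_cool / eta_hot - 1.0)
print(f"viscosity increase for a 20 K drop: {increase:.0f}%")  # well over 100%
```

    Because viscosity depends exponentially on 1/(T - C), even "ball-park" temperature uncertainties of a few tens of kelvin translate into factor-of-several viscosity uncertainty at the model input.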

  5. A New Metamodeling Approach for Time-dependent Reliability of Dynamic Systems with Random Parameters Excited by Input Random Processes

    DTIC Science & Technology

    2014-04-09

    …estimate the distributions of the decomposition coefficients. A similar decomposition is also performed on the output random process. A kriging model is then established between the input and output decomposition coefficients and subsequently used to quantify the output random process.

  6. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of the error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples a stochastic atmosphere and a slow-varying ocean, this study examines the sensitivity of the state-parameter covariance to the accuracy of estimated model states in different components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with its chaotic nature is the major source of inaccuracy in the estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of the chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline for when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.

  7. Numerical parameter constraints for accurate PIC-DSMC simulation of breakdown from arc initiation to stable arcs

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith

    2015-09-01

    Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

  8. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data(1)

    PubMed Central

    Rosen, I.G.; Luczak, Susan E.; Weiss, Jordan

    2014-01-01

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick Prescott filter is discussed. Numerical results involving actual patient data are presented. PMID:24707065

  9. A data-input program (MFI2005) for the U.S. Geological Survey modular groundwater model (MODFLOW-2005) and parameter estimation program (UCODE_2005)

    USGS Publications Warehouse

    Harbaugh, Arien W.

    2011-01-01

    The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation with the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.

  10. Interpretation and application of reaction class transition state theory for accurate calculation of thermokinetic parameters using isodesmic reaction method.

    PubMed

    Wang, Bi-Yao; Li, Ze-Rong; Tan, Ning-Xin; Yao, Qian; Li, Xiang-Yuan

    2013-04-25

    We present a further interpretation of the reaction class transition state theory (RC-TST) proposed by Truong et al. for the accurate calculation of rate coefficients for reactions in a class. It is found that RC-TST can be interpreted through the isodesmic reaction method, which is usually used to calculate the reaction enthalpy or enthalpy of formation for a species, and the theory can also be used for the calculation of reaction barriers and reaction enthalpies for reactions in a class. A correction scheme based on this theory is proposed for the calculation of the reaction barriers and reaction enthalpies for reactions in a class. To validate the scheme, 16 combinations of various ab initio levels with various basis sets are used as the approximate methods, and the CCSD(T)/CBS method is used as the benchmark in this study to calculate the reaction energies and energy barriers for a representative set of five reactions from the reaction class: R(c)CH(R(b))CR(a)CH2 + OH(•) → R(c)C(•)(R(b))CR(a)CH2 + H2O (R(a), R(b), and R(c) in the reaction formula represent alkyl groups or hydrogen). Then the results of the approximate methods are corrected by the theory. The maximum values of the average deviations of the energy barrier and the reaction enthalpy are 99.97 kJ/mol and 70.35 kJ/mol, respectively, before correction and are reduced to 4.02 kJ/mol and 8.19 kJ/mol, respectively, after correction, indicating that after correction the results are not sensitive to the level of the ab initio method and the size of the basis set, as they are before correction. Therefore, reaction energies and energy barriers for reactions in a class can be calculated accurately at a relatively low level of ab initio theory using our scheme. It is also shown that the rate coefficients for the five representative reactions calculated at the BHandHLYP/6-31G(d,p) level of theory via our scheme are very close to the values calculated at the CCSD(T)/CBS level. Finally, reaction

  11. BASELINE PARAMETER UPDATE FOR HUMAN HEALTH INPUT AND TRANSFER FACTORS FOR RADIOLOGICAL PERFORMANCE ASSESSMENTS AT THE SAVANNAH RIVER SITE

    SciTech Connect

    Coffield, T; Patricia Lee, P

    2007-01-31

    The purpose of this report is to update the parameters utilized in Human Health Exposure calculations and the Bioaccumulation Transfer Factors utilized at SRS for Performance Assessment modeling. The reasons for the update are to utilize more recently issued information, validate information currently used, and correct minor inconsistencies between modeling efforts performed in contiguous SRS areas of the heavily industrialized central site usage area called the General Separations Area (GSA). The SRS parameters utilized were compared to those of a number of other DOE facilities and to generic national/global references to establish the relevance of the parameters selected and/or verify the regional differences of the southeastern USA. The parameters selected were specifically chosen to be expected values, along with an identified range for these values, rather than overly conservative specifications of parameters for estimating an annual dose to the maximally exposed individual (MEI). The end use is to establish a standardized source for these parameters that is up to date with existing data and to maintain it via review of future national references to evaluate the need for changes as new information is released. These reviews are to be added to this document by revision.

  12. Accurate extraction of WSe2 FETs parameters by using pulsed I-V method at various temperatures

    NASA Astrophysics Data System (ADS)

    Lee, Sung Tae; Cho, In Tak; Kang, Won Mook; Park, Byung Gook; Lee, Jong-Ho

    2016-11-01

    This work investigates the intrinsic characteristics of multilayer WSe2 field-effect transistors (FETs) by analysing pulsed I-V (PIV) and DC characteristics measured at various temperatures. In DC measurement, unwanted charge trapping due to gate bias stress results in I-V curves that differ from the intrinsic characteristics. PIV, however, reduces the effect of gate bias stress, so the intrinsic characteristics of the WSe2 FETs are obtained. Parameters such as hysteresis, field-effect mobility (μeff), subthreshold swing (SS), and threshold voltage (Vth) measured by PIV differ significantly from those obtained by DC measurement. In the PIV results, the hysteresis is considerably reduced compared with DC measurement because the charge trapping effect is significantly reduced. With increasing temperature, the field-effect mobility (μeff) and subthreshold swing (SS) degrade, and the threshold voltage (Vth) decreases.

  13. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution is proposed to accurately predict various propagation parameters of graded-index fibers with less computational burden than numerical methods. In this semi-analytical formulation, the optimization of the core parameter U, whose objective function is often uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that needs no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing the variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It is demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
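
    The derivative-free search named above can be sketched as a minimal one-dimensional Nelder-Mead (reflect, expand, contract) applied to an invented quadratic merit function; the paper's actual objective is the variational expression for the three-parameter modal field.

```python
def nelder_mead_1d(f, x0, x1, tol=1e-8, max_iter=200):
    """Minimal 1-D Nelder-Mead on a two-point simplex; no derivatives needed."""
    simplex = sorted([x0, x1], key=f)
    for _ in range(max_iter):
        best, worst = simplex
        if abs(best - worst) < tol:
            break
        xr = best + (best - worst)            # reflect worst through best
        if f(xr) < f(best):
            xe = best + 2.0 * (best - worst)  # try expanding further
            new = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(worst):
            new = xr
        else:
            new = 0.5 * (best + worst)        # contract (the 1-D shrink point)
        simplex = sorted([best, new], key=f)
    return simplex[0]

# Hypothetical smooth merit function standing in for the variational
# expression in U; minimum placed at 1.5 for illustration.
u_opt = nelder_mead_1d(lambda u: (u - 1.5) ** 2, 1.0, 1.2)
```

In higher dimensions the simplex has n+1 vertices; SciPy's `minimize(..., method='Nelder-Mead')` provides a production implementation of the same idea.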

  14. Coupling 1D Navier Stokes equation with autoregulation lumped parameter networks for accurate cerebral blood flow modeling

    NASA Astrophysics Data System (ADS)

    Ryu, Jaiyoung; Hu, Xiao; Shadden, Shawn C.

    2014-11-01

    The cerebral circulation is unique in its ability to maintain blood flow to the brain under widely varying physiologic conditions. Incorporating this autoregulatory response is critical to cerebral blood flow modeling, as well as to investigations of pathological conditions. We discuss a one-dimensional nonlinear model of blood flow in the cerebral arteries that is coupled to autoregulatory lumped parameter networks. The model is tested by reproducing a common clinical test of autoregulatory function, the carotid artery compression test. The change in flow velocity at the middle cerebral artery (MCA) during carotid compression and release showed strong agreement with published measurements. The model is then used to investigate vasospasm of the MCA, a common clinical concern following subarachnoid hemorrhage. Vasospasm was modeled by prescribing vessel area reduction in the middle portion of the MCA. Our model showed similar increases in velocity for moderate vasospasm; however, for severe vasospasm (~90% area reduction) the blood flow velocity decreased because flow was rerouted. This demonstrates a potentially important phenomenon which, if not properly anticipated, could lead to false-negative decisions on clinical vasospasm.
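
    The outlet coupling described above can be illustrated with the simplest lumped-parameter element, a two-element Windkessel with invented parameter values; the paper's autoregulatory networks are more elaborate than this sketch.

```python
# Two-element Windkessel outlet: C * dP/dt = Q_in - P / R_d.
# In a coupled solver, Q_in would be supplied by the 1D Navier-Stokes
# outlet at every time step; here it is held constant for illustration.
R_d = 1.0e9    # distal resistance, Pa*s/m^3 (invented value)
C_a = 1.0e-9   # arterial compliance, m^3/Pa (invented value)
Q_in = 5.0e-6  # outlet flow rate, m^3/s (invented value)

P, dt = 0.0, 1.0e-4          # pressure (Pa) and time step (s)
for _ in range(200_000):     # integrate 20 s with forward Euler
    P += dt * (Q_in - P / R_d) / C_a
# P relaxes toward the steady value Q_in * R_d with time constant R_d * C_a
```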

  15. Application of QbD principles for the evaluation of empty hard capsules as an input parameter in formulation development and manufacturing.

    PubMed

    Stegemann, Sven; Connolly, Paul; Matthews, Wayne; Barnett, Rodger; Aylott, Mike; Schrooten, Karin; Cadé, Dominique; Taylor, Anthony; Bresciani, Massimo

    2014-06-01

    Understanding the influence of product and process variables on final product performance is an essential part of the quality-by-design (QbD) principles in pharmaceutical development. The hard capsule is an established pharmaceutical dosage form used worldwide in development and manufacturing. Empty hard capsules are supplied as an excipient that is filled by pharmaceutical manufacturers with a variety of different formulations and products. To understand the potential variation of empty hard capsules as an input parameter and its potential impact on finished product quality, a study was performed investigating the critical quality parameters within and between different batches of empty hard gelatin capsules. The hard capsules showed high consistency within the specification of the critical quality parameters. This also holds for the disintegration times when automatic endpoint detection was used. Based on these data, hard capsules can be considered a suitable excipient for product development using QbD principles.

  16. A tailored multi-frequency EPR approach to accurately determine the magnetic resonance parameters of dynamic nuclear polarization agents: application to AMUPol.

    PubMed

    Gast, P; Mance, D; Zurlo, E; Ivanov, K L; Baldus, M; Huber, M

    2017-02-01

    To understand the dynamic nuclear polarization (DNP) enhancements of biradical polarizing agents, the magnetic resonance parameters need to be known. We describe a tailored EPR approach to accurately determine electron spin-spin coupling parameters using a combination of standard (9 GHz), high (95 GHz) and ultra-high (275 GHz) frequency EPR. Comparing liquid- and frozen-solution continuous-wave EPR spectra provides accurate anisotropic dipolar interaction D and isotropic exchange interaction J parameters of the DNP biradical AMUPol. We found that D was larger by as much as 30% compared to earlier estimates, and that J is 43 MHz, whereas before it was considered to be negligible. With the refined data, quantum mechanical calculations confirm that an increase in dipolar electron-electron couplings leads to higher cross-effect DNP efficiencies. Moreover, the DNP calculations qualitatively reproduce the difference of TOTAPOL and AMUPol DNP efficiencies found experimentally and suggest that AMUPol is particularly effective in improving the DNP efficiency at magnetic fields higher than 500 MHz. The multi-frequency EPR approach will aid in predicting the optimal structures for future DNP agents.

  17. Noontime Latitudinal Behavior of the Ionospheric Peak Parameters (foF2 and hmF2) to the Variation of Solar Energy Input for the American Sector

    NASA Astrophysics Data System (ADS)

    Cabassa-Miranda, E.; Garnett Marques Brum, C.

    2013-12-01

    We present a statistical study of the behavior of the noontime F2-peak parameters (foF2 and hmF2) in response to variations in solar energy input, based on digisonde data and the EUV-UV solar emissions registered by the SOHO satellite for geomagnetically quiet-to-normal conditions. For this, we selected digisonde data from fourteen stations spread along the American sector (ten located above the equator and four below). These records were collected from 2000 to 2012 and encompass the recent unusual super-minimum period.

  18. Extension of the AMBER force field for nitroxide radicals and combined QM/MM/PCM approach to the accurate determination of EPR parameters of DMPOH in solution

    PubMed Central

    Hermosilla, Laura; Prampolini, Giacomo; Calle, Paloma; García de la Vega, José Manuel; Brancato, Giuseppe; Barone, Vincenzo

    2015-01-01

    A computational strategy that combines both time-dependent and time-independent approaches is exploited to accurately model molecular dynamics and solvent effects on the isotropic hyperfine coupling constants of the DMPO-H nitroxide. Our recent general force field for nitroxides, derived from AMBER ff99SB, is further extended to systems involving hydrogen atoms in β-positions with respect to the NO group. The resulting force field has been employed in a series of classical molecular dynamics simulations, comparing the computed EPR parameters from selected molecular configurations to the corresponding experimental data in different solvents. The effect of vibrational averaging on the spectroscopic parameters is also taken into account by second-order vibrational perturbation theory involving semi-diagonal third energy derivatives together with first and second property derivatives. PMID:26584116

  19. Our Sun IV: The Standard Model and Helioseismology: Consequences of Uncertainties in Input Physics and in Observed Solar Parameters

    NASA Technical Reports Server (NTRS)

    Boothroyd, Arnold I.; Sackmann, I.-Juliana

    2001-01-01

    Helioseismic frequency observations provide an extremely accurate window into the solar interior; frequencies from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) spacecraft enable the adiabatic sound speed and adiabatic index to be inferred with an accuracy of a few parts in 10(exp 4) and the density with an accuracy of a few parts in 10(exp 3). This has become a serious challenge to theoretical models of the Sun. Therefore, we have undertaken a self-consistent, systematic study of the sources of uncertainties in standard solar models. We found that the largest effect on the interior structure arises from the observational uncertainties in the photospheric abundances of the elements, which affect the sound speed profile at the level of 3 parts in 10(exp 3). The estimated 4% uncertainty in the OPAL opacities could lead to effects of 1 part in 10(exp 3); the approximately 5% uncertainty in the basic pp nuclear reaction rate would have a similar effect, as would uncertainties of approximately 15% in the diffusion constants for the gravitational settling of helium. The approximately 50% uncertainties in the diffusion constants for the heavier elements would have nearly as large an effect. Different observational methods for determining the solar radius yield results differing by as much as 7 parts in 10(exp 4); we found that this leads to uncertainties of a few parts in 10(exp 3) in the sound speed in the solar convective envelope, but has a negligible effect on the interior. Our reference standard solar model yielded a convective envelope position of 0.7135 solar radius, in excellent agreement with the observed value of 0.713 +/- 0.001 solar radius, and was significantly affected only by Z/X, the pp rate, and the uncertainties in the helium diffusion constants. Our reference model also yielded an envelope helium abundance of 0.2424, in good agreement with the approximate range of 0.24 to 0.25 inferred from helioseismic observations; only

  20. A rapid and accurate method, ventilated chamber C-history method, of measuring the emission characteristic parameters of formaldehyde/VOCs in building materials.

    PubMed

    Huang, Shaodan; Xiong, Jianyin; Zhang, Yinping

    2013-10-15

    The indoor pollution caused by formaldehyde and volatile organic compounds (VOCs) emitted from building materials has an adverse effect on people's health, so it is necessary to understand and control the behavior of the emission sources. Based on a detailed mass transfer analysis of the emission process in a ventilated chamber, this paper proposes a novel method of measuring the three emission characteristic parameters: the initial emittable concentration, the diffusion coefficient and the partition coefficient. A linear correlation between the logarithm of a dimensionless concentration and time is derived; the three parameters can then be calculated from the intercept and slope of this correlation. Compared with the closed-chamber C-history method, the test is performed under ventilated conditions, so commonly used measurement instruments (e.g., GC/MS, HPLC) can be applied. Compared with other methods, the present method can rapidly and accurately measure the three parameters, with an experimental time of less than 12 h and R(2) ranging from 0.96 to 0.99 for the cases studied. An independent experiment was carried out to validate the developed method, and good agreement was observed between experiments and simulations based on the determined parameters. The present method should prove useful for quick characterization of formaldehyde/VOC emissions from indoor materials.
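
    The regression step implied by the linear correlation can be sketched with synthetic data and invented coefficients; mapping the fitted slope and intercept onto the three physical parameters additionally requires the chamber geometry and air-exchange rate from the full derivation.

```python
import math

# Synthetic ventilated-chamber record: per the linear correlation, the
# quantity y = ln(1 - C(t)/C_eq) falls on a straight line in time.
C_eq = 120.0                          # equilibrium concentration, ug/m^3 (invented)
true_slope, true_intercept = -0.35, -0.05
times = [1.0, 2.0, 4.0, 6.0, 8.0, 10.0]   # hours
conc = [C_eq * (1.0 - math.exp(true_intercept + true_slope * t)) for t in times]

# Ordinary least squares on the linearized form recovers the slope and
# intercept, from which the three emission parameters are then computed.
ys = [math.log(1.0 - c / C_eq) for c in conc]
n = len(times)
t_mean, y_mean = sum(times) / n, sum(ys) / n
slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
         / sum((t - t_mean) ** 2 for t in times))
intercept = y_mean - slope * t_mean
```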

  1. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to inputs recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. The optimal input design improved error parameter estimates and their accuracies for a fixed-duration input. Pilot acceptability of the optimal input design was demonstrated using a six-degree-of-freedom fixed-base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  2. THE HYPERFINE STRUCTURE OF THE ROTATIONAL SPECTRUM OF HDO AND ITS EXTENSION TO THE THz REGION: ACCURATE REST FREQUENCIES AND SPECTROSCOPIC PARAMETERS FOR ASTROPHYSICAL OBSERVATIONS

    SciTech Connect

    Cazzoli, Gabriele; Lattanzi, Valerio; Puzzarini, Cristina; Alonso, José Luis; Gauss, Jürgen

    2015-06-10

    The rotational spectrum of the mono-deuterated isotopologue of water, HD¹⁶O, has been investigated in the millimeter- and submillimeter-wave frequency regions, up to 1.6 THz. The Lamb-dip technique has been exploited to obtain sub-Doppler resolution and to resolve the hyperfine (hf) structure due to the deuterium and hydrogen nuclei, thus enabling the accurate determination of the corresponding hf parameters. Their experimental determination has been supported by high-level quantum-chemical calculations. The Lamb-dip measurements have been supplemented by Doppler-limited measurements (weak high-J and high-frequency transitions) in order to extend the predictive capability of the available spectroscopic constants. The possibility of resolving hf splittings in astronomical spectra is discussed.

  3. Seasonal variation in coat characteristics, tick loads, cortisol levels, some physiological parameters and temperature humidity index on Nguni cows raised in low- and high-input farms

    NASA Astrophysics Data System (ADS)

    Katiyatiya, C. L. F.; Muchenje, V.; Mushunje, A.

    2014-08-01

    Seasonal variations in hair length, tick loads, cortisol levels, haematological parameters (HP) and temperature humidity index (THI) were determined in Nguni cows of different colours raised in two low-input farms and a commercial stud. The sites were chosen based on their production systems, climatic characteristics and geographical locations. Zazulwana and Komga are low-input, humid-coastal areas, while Honeydale is a high-input, dry-inland Nguni stud farm. A total of 103 cows, grouped according to parity, location and coat colour, were used in the study. The effects of location, coat colour, hair length and season on tick loads on different body parts, cortisol levels and HP in blood from Nguni cows were determined. The highest tick loads were recorded under the tail and the lowest on the head of each of the animals (P < 0.05). Zazulwana cows recorded the highest under-tail tick loads of all the cows used in the study from the three farms (P < 0.05). High tick loads were recorded for cows with long hair. Hair was longest during the winter season in the coastal areas of Zazulwana and Honeydale (P < 0.05). White and brown-white patched cows had significantly longer (P < 0.05) hair strands than those having a combination of red, black and white colours. Cortisol levels and THI were significantly lower (P < 0.05) in the summer season. Red blood cells, haemoglobin, haematocrit, mean cell volumes, white blood cells, neutrophils, lymphocytes, eosinophils and basophils differed significantly (P < 0.05), with some varying with age across all seasons and correlating with THI. It was concluded that location, coat colour and season had effects on hair length, cortisol levels, THI, HP, tick loads on different body parts and heat stress in Nguni cows.

  4. Seasonal variation in coat characteristics, tick loads, cortisol levels, some physiological parameters and temperature humidity index on Nguni cows raised in low- and high-input farms

    NASA Astrophysics Data System (ADS)

    Katiyatiya, C. L. F.; Muchenje, V.; Mushunje, A.

    2015-06-01

    Seasonal variations in hair length, tick loads, cortisol levels, haematological parameters (HP) and temperature humidity index (THI) were determined in Nguni cows of different colours raised in two low-input farms and a commercial stud. The sites were chosen based on their production systems, climatic characteristics and geographical locations. Zazulwana and Komga are low-input, humid-coastal areas, while Honeydale is a high-input, dry-inland Nguni stud farm. A total of 103 cows, grouped according to parity, location and coat colour, were used in the study. The effects of location, coat colour, hair length and season on tick loads on different body parts, cortisol levels and HP in blood from Nguni cows were determined. The highest tick loads were recorded under the tail and the lowest on the head of each of the animals (P < 0.05). Zazulwana cows recorded the highest under-tail tick loads of all the cows used in the study from the three farms (P < 0.05). High tick loads were recorded for cows with long hair. Hair was longest during the winter season in the coastal areas of Zazulwana and Honeydale (P < 0.05). White and brown-white patched cows had significantly longer (P < 0.05) hair strands than those having a combination of red, black and white colours. Cortisol levels and THI were significantly lower (P < 0.05) in the summer season. Red blood cells, haemoglobin, haematocrit, mean cell volumes, white blood cells, neutrophils, lymphocytes, eosinophils and basophils differed significantly (P < 0.05), with some varying with age across all seasons and correlating with THI. It was concluded that location, coat colour and season had effects on hair length, cortisol levels, THI, HP, tick loads on different body parts and heat stress in Nguni cows.

  5. Application of Model Based Parameter Estimation for Fast Frequency Response Calculations of Input Characteristics of Cavity-Backed Aperture Antennas Using Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    1998-01-01

    Model-Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation for the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range, and from the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity and cavity-backed microstrip patch antennas are presented. Good agreement is observed between MBPE and solutions computed at individual frequencies.
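
    The rational-function idea can be illustrated with a toy fit to an invented transfer function; the actual MBPE in the paper obtains the coefficients from frequency derivatives of the FEM/MoM system rather than from point samples.

```python
# Fit H(f) ~ (a0 + a1*f) / (1 + b1*f) to three frequency samples.
# Multiplying through by the denominator linearizes the problem:
#   a0 + a1*f - H*f*b1 = H  ->  a 3x3 linear system in (a0, a1, b1).
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_rational(samples):
    A = [[1.0, f, -h * f] for f, h in samples]
    b = [h for _, h in samples]
    d = det3(A)
    coeffs = []
    for i in range(3):           # Cramer's rule: column i replaced by b
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        coeffs.append(det3(Ai) / d)
    return coeffs                # a0, a1, b1

def h_true(f):                   # invented "exact" response to be recovered
    return (1.0 + 2.0 * f) / (1.0 + 0.5 * f)

a0, a1, b1 = fit_rational([(f, h_true(f)) for f in (1.0, 2.0, 3.0)])
h_fit = lambda f: (a0 + a1 * f) / (1.0 + b1 * f)
# h_fit now approximates the response across the band, not just at the samples
```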

  6. Influence of the input radiation pulse characteristics on the parameters of a XeF(C - A) amplifier in a THL-100 laser system

    NASA Astrophysics Data System (ADS)

    Yastremskii, A. G.; Ivanov, N. G.; Losev, V. F.

    2016-11-01

    We report the results of experimental and theoretical investigations of the influence of the spatial and energy parameters of input radiation with a pulse duration of 50 ps on the output characteristics of a XeF(C - A) amplifier in the visible-range, multi-terawatt THL-100 laser system. The dynamics of the radial energy density distribution of laser radiation passing through the amplifier is studied. Results of numerical simulations are presented for the amplification of laser beams with Gaussian and super-Gaussian radial energy density distributions. It is shown that the laser energy of 3.2 J obtained experimentally is not the limiting value: according to the calculations, the output energy of the amplifier with this mirror configuration may reach 4.1 J, which for a pulse compressed down to 50 fs corresponds to a radiation power of 82 TW.

  7. Accurate calculations of spectroscopic parameters, transition properties of 17 Λ-S states and 32 Ω states of SiB+ cation

    NASA Astrophysics Data System (ADS)

    Xing, Wei; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue

    2017-02-01

    This work computed the potential energy curves of the 17 Λ-S states arising from the first three dissociation limits, Si+(2Pu) + B(2Pu), Si(3Pg) + B+(1Sg), and Si(1Dg) + B+(1Sg), of the SiB+ cation, as well as those of the 32 Ω states generated from these Λ-S states. The calculations were done using the CASSCF method, followed by the internally contracted MRCI approach with the Davidson correction. To obtain reliable and accurate spectroscopic parameters and vibrational properties, core-valence correlation and scalar relativistic corrections were included. Of the 17 Λ-S states, the C3Σ+, E3Π, 33Π, 23Σ+, 21Π, and 31Σ+ states had double wells, and the 31Π state had three wells. The D3Σ-, E3Π, 33Π, and B3Δ states were inverted when the spin-orbit coupling effect was accounted for. The 21Δ state, the first well of the 31Σ+ state, the second wells of the 33Π, 23Σ+, and 21Π states, and the second and third wells of the 31Π state were weakly bound, with well depths within several hundred cm-1. The second well of the 31Π state had no vibrational states; the first wells of the E3Π and 31Σ+ states had only one vibrational state each. The spectroscopic parameters were evaluated, the vibrational properties of some weakly bound states were predicted, and Franck-Condon factors of some transitions between pairs of Λ-S states were determined. The spin-orbit coupling effect on the spectroscopic parameters and vibrational properties is discussed. The results reported here are expected to be reliable predictions.

  8. Root Parameters Show How Management Alters Resource Distribution and Soil Quality in Conventional and Low-Input Cropping Systems in Central Iowa

    PubMed Central

    Lazicki, Patricia A.; Liebman, Matt; Wander, Michelle M.

    2016-01-01

    Plant-soil relations may explain why low-external input (LEI) diversified cropping systems are more efficient than their conventional counterparts. This work sought to identify links between management practices, soil quality changes, and root responses in a long-term cropping systems experiment in Iowa where grain yields of 3-year and 4-year LEI rotations have matched or exceeded yield achieved by a 2-year maize (Zea mays L.) and soybean (Glycine max L.) rotation. The 2-year system was conventionally managed and chisel-ploughed, whereas the 3-year and 4-year systems received plant residues and animal manures and were periodically moldboard ploughed. We expected changes in soil quality to be driven by organic matter inputs, and root growth to reflect spatial and temporal fluctuations in soil quality resulting from those additions. We constructed a carbon budget and measured soil quality indicators (SQIs) and rooting characteristics using samples taken from two depths of all crop-phases of each rotation system on multiple dates. Stocks of particulate organic matter carbon (POM-C) and potentially mineralizable nitrogen (PMN) were greater and more evenly distributed in the LEI than conventional systems. Organic C inputs, which were 58% and 36% greater in the 3-year rotation than in the 4-year and 2-year rotations, respectively, did not account for differences in SQI abundance or distribution. Surprisingly, SQIs did not vary with crop-phase or date. All biochemical SQIs were more stratified (p<0.001) in the conventionally-managed soils. While POM-C and PMN in the top 10 cm were similar in all three systems, stocks in the 10–20 cm depth of the conventional system were less than half the size of those found in the LEI systems. This distribution was mirrored by maize root length density, which was also concentrated in the top 10 cm of the conventionally managed plots and evenly distributed between depths in the LEI systems. The plow-down of organic amendments and

  9. Root Parameters Show How Management Alters Resource Distribution and Soil Quality in Conventional and Low-Input Cropping Systems in Central Iowa.

    PubMed

    Lazicki, Patricia A; Liebman, Matt; Wander, Michelle M

    2016-01-01

    Plant-soil relations may explain why low-external input (LEI) diversified cropping systems are more efficient than their conventional counterparts. This work sought to identify links between management practices, soil quality changes, and root responses in a long-term cropping systems experiment in Iowa where grain yields of 3-year and 4-year LEI rotations have matched or exceeded yield achieved by a 2-year maize (Zea mays L.) and soybean (Glycine max L.) rotation. The 2-year system was conventionally managed and chisel-ploughed, whereas the 3-year and 4-year systems received plant residues and animal manures and were periodically moldboard ploughed. We expected changes in soil quality to be driven by organic matter inputs, and root growth to reflect spatial and temporal fluctuations in soil quality resulting from those additions. We constructed a carbon budget and measured soil quality indicators (SQIs) and rooting characteristics using samples taken from two depths of all crop-phases of each rotation system on multiple dates. Stocks of particulate organic matter carbon (POM-C) and potentially mineralizable nitrogen (PMN) were greater and more evenly distributed in the LEI than conventional systems. Organic C inputs, which were 58% and 36% greater in the 3-year rotation than in the 4-year and 2-year rotations, respectively, did not account for differences in SQI abundance or distribution. Surprisingly, SQIs did not vary with crop-phase or date. All biochemical SQIs were more stratified (p<0.001) in the conventionally-managed soils. While POM-C and PMN in the top 10 cm were similar in all three systems, stocks in the 10-20 cm depth of the conventional system were less than half the size of those found in the LEI systems. This distribution was mirrored by maize root length density, which was also concentrated in the top 10 cm of the conventionally managed plots and evenly distributed between depths in the LEI systems. 
The plow-down of organic amendments and manures

  10. Input Impedance of the Microstrip SQUID Amplifier

    NASA Astrophysics Data System (ADS)

    Kinion, Darin; Clarke, John

    2008-03-01

    We present measurements of the complex scattering parameters of microstrip SQUID amplifiers (MSAs) cooled to 4.2 K. The input of the MSA is a microstrip transmission line in the shape of a square spiral coil surrounding the hole in the SQUID washer, which serves as the ground plane. The input impedance is found by measuring the input reflection coefficient (S11) and is described well by a low-loss transmission-line model. We map the low-loss transmission-line model onto an equivalent parallel RLC circuit in which the resistance R, inductance L, and capacitance C are calculated from the resonant frequency, characteristic impedance and attenuation factor. Using this equivalent RLC circuit, we model the MSA and input network with a lumped circuit model that accurately predicts the observed gain given by the forward transmission coefficient (S21). We summarize results for different coil geometries and terminations as well as SQUID bias conditions. A portion of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory in part under Contract W-7405-Eng-48 and in part under Contract DE-AC52-07NA27344, and by Lawrence Berkeley National Laboratory under Contract No. DE-AC02-05CH11231.
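
    The mapping onto a parallel RLC tank can be sketched with the textbook low-loss result for a short-circuited quarter-wave resonator. These formulas and numbers are the standard approximation, assumed here for illustration; the paper extracts its circuit values from the measured S11 rather than from these expressions.

```python
import math

def parallel_rlc_from_line(f0_hz, z0_ohm, alpha_l_nepers):
    """Equivalent parallel RLC of a low-loss, short-circuited quarter-wave
    line near resonance (standard microwave-engineering approximation)."""
    w0 = 2.0 * math.pi * f0_hz
    R = z0_ohm / alpha_l_nepers        # total line loss sets the tank resistance
    C = math.pi / (4.0 * w0 * z0_ohm)  # equivalent tank capacitance
    L = 1.0 / (w0 ** 2 * C)            # chosen so that w0 = 1/sqrt(L*C)
    return R, L, C

# Invented example values: 500 MHz resonance, 20-ohm microstrip, 0.01 Np loss
R, L, C = parallel_rlc_from_line(5.0e8, 20.0, 0.01)
```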

  11. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site. 2016 Update

    SciTech Connect

    Jannik, G. Tim; Hartman, Larry; Stagich, Brooke

    2016-09-26

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. The regulatory guides provide default values for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010; they are updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data and that is maintained via review of future national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  12. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  13. The influence of anatomical and physiological parameters on the interference voltage at the input of unipolar cardiac pacemakers in low frequency electric fields.

    PubMed

    Joosten, S; Pammler, K; Silny, J

    2009-02-07

    The problem of electromagnetic interference with electronic implants such as cardiac pacemakers has been well known for many years. An increasing number of field sources in everyday life and the occupational environment leads unavoidably to an increased risk for patients with electronic implants. However, no obligatory national or international safety regulations exist for the protection of this patient group. The aim of this study is to find the anatomical and physiological worst-case conditions for patients with an implanted pacemaker adjusted to unipolar sensing in external time-varying electric fields. The results of this study with 15 volunteers show that, in electric fields, the interference voltage at the input of a cardiac pacemaker varies by up to 200% solely because of individual factors. These factors should be considered in human studies and in the setting of safety regulations.

  14. Sensitivity of Forward Radiative Transfer Model on Spectroscopic Assumptions and Input Geophysical Parameters at 23.8 GHz and 183 GHz Channels and its Impact on Inter-calibration of Microwave Radiometers

    NASA Astrophysics Data System (ADS)

    Datta, S.; Jones, W. L.; Ebrahimi, H.; Chen, R.; Payne, V.; Kroodsma, R.

    2014-12-01

    The first step in radiometric inter-calibration is to ascertain the self-consistency and reasonableness of the observed brightness temperature (Tb) for each individual sensor involved. One widely used approach is to compare the observed Tb with a Tb simulated using a forward radiative transfer model (RTM) and input geophysical parameters at the geographic location and time of the observation. In this study we test the sensitivity of the RTM to uncertainties in the input geophysical parameters as well as to the underlying physical assumptions about gaseous absorption and surface emission in the RTM. SAPHIR, a cross-track scanner onboard the Indo-French Megha-Tropiques satellite, gives us a unique opportunity to study six dual-band 183 GHz channels in an inclined orbit over the Tropics for the first time. We will also perform the same sensitivity analysis using the Advanced Technology Microwave Sounder (ATMS) 23 GHz and five 183 GHz channels. A preliminary analysis comparing GDAS and an independently retrieved profile shows some sensitivity of the RTM to the input data. An extended analysis of this work using different input geophysical parameters will be presented. Two different absorption models, the Rosenkranz model and MonoRTM, will be tested to analyze the sensitivity of the RTM to the spectroscopic assumptions in each model. For the 23.8 GHz channel, the sensitivity of the RTM to the surface emissivity model will also be checked. Finally, the impact of these sensitivities on the radiometric inter-calibration of radiometers at sounding frequencies will be assessed.

  15. Accurate calculations on 9 Λ-S and 28 Ω states of NSe radical in the gas phase: Potential energy curves, spectroscopic parameters and spin-orbit couplings

    NASA Astrophysics Data System (ADS)

    Shi, Deheng; Li, Peiling; Sun, Jinfeng; Zhu, Zunlue

    2014-01-01

    The potential energy curves (PECs) of 28 Ω states generated from 9 Λ-S states (X2Π, 14Π, 16Π, 12Σ+, 14Σ+, 16Σ+, 14Σ-, 24Π and 14Δ) are studied for the first time using an ab initio quantum chemical method. All 9 Λ-S states correlate to the first two dissociation limits, N(4Su) + Se(3Pg) and N(4Su) + Se(3Dg), of the NSe radical. Of these Λ-S states, the 16Σ+, 14Σ+, 16Π, 24Π and 14Δ are rather weakly bound. The 12Σ+ is unstable and has a double well, and the 16Σ+, 14Σ+, 14Π and 16Π become inverted when spin-orbit (SO) coupling is included. The PEC calculations are made with the complete active space self-consistent field method, followed by the internally contracted multireference configuration interaction approach with the Davidson modification. The spin-orbit coupling is accounted for by the state interaction approach with the Breit-Pauli Hamiltonian. The convergence of the present calculations is discussed with respect to the basis set and the level of theory. Core-valence correlation corrections are included with a cc-pCVTZ basis set. Scalar relativistic corrections are calculated by the third-order Douglas-Kroll Hamiltonian approximation at the level of a cc-pV5Z basis set. All the PECs are extrapolated to the complete basis set limit. The variation of the spin-orbit coupling constants with internuclear separation is discussed briefly for some Λ-S states with one shallow well on each PEC. The spectroscopic parameters of the 9 Λ-S and 28 Ω states are determined by fitting the first ten vibrational levels whenever available, which are calculated by solving the rovibrational Schrödinger equation with Numerov's method. The splitting energy in the X2Π Λ-S state is determined to be about 864.92 cm-1, which agrees favorably with the measured value of 891.80 cm-1. Moreover, other spectroscopic parameters of the Λ-S and Ω states involved here are also in fair agreement with available measurements. It

  16. Can a scoring system of computed tomography-based metric parameters accurately predict shock wave lithotripsy stone-free rates and aid in the development of treatment strategies?

    PubMed Central

    Badran, Yasser Ali; Abdelaziz, Alsayed Saad; Shehab, Mohamed Ahmed; Mohamed, Hazem Abdelsabour Dief; Emara, Absel-Aziz Ali; Elnabtity, Ali Mohamed Ali; Ghanem, Maged Mohammed; ELHelaly, Hesham Abdel Azim

    2016-01-01

    Objective: The objective was to determine the predictive success of shock wave lithotripsy (SWL) using a combination of computed tomography-based metric parameters to improve the treatment plan. Patients and Methods: A consecutive series of 180 patients with symptomatic upper urinary tract calculi of 20 mm or less who underwent extracorporeal SWL were enrolled in our study and divided into two main groups according to stone size: Group A (92 patients with stones ≤10 mm) and Group B (88 patients with stones >10 mm). Both groups were evaluated according to skin-to-stone distance (SSD) and Hounsfield units (≤500, 500–1000 and >1000 HU). Results: Both groups were comparable in baseline data and stone characteristics. About 92.3% of Group A were rendered stone-free, whereas 77.2% were stone-free in Group B (P = 0.001). Furthermore, in both groups SWL success rates were significantly higher for stones with lower attenuation (<830 HU) than for stones >830 HU (P < 0.034). SSD also showed statistically significant differences in SWL outcome (P < 0.02). Considering the three parameters simultaneously (stone size, stone attenuation value, and SSD), we found that the stone-free rate (SFR) was 100% for stones with attenuation <830 HU, whether <10 mm or >10 mm, although the total number of SWL sessions and shock waves required for the larger stone group was higher than for the smaller group (P < 0.01). Furthermore, SFR was 83.3% and 37.5% for stones <10 mm with mean HU >830 and SSD 90 mm and SSD >120 mm, respectively. On the other hand, SFR was 52.6% and 28.57% for stones >10 mm with mean HU >830 and SSD <90 mm and SSD >120 mm, respectively. Conclusion: Stone size, stone density (HU), and SSD are simple to calculate and can be reported by radiologists; applying the combined score helps to augment the predictive power of SWL, reduce cost, and improve treatment strategies. PMID:27141192
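    The abstract does not specify how the three parameters are combined into a score. A hypothetical equal-weight tally of the reported cut-offs (size ≤10 mm, attenuation <830 HU, SSD <90 mm) might look like the following sketch, for illustration only:

```python
def swl_favorability(stone_size_mm, attenuation_hu, ssd_mm):
    """Tally of favorable CT-based predictors of SWL success.

    The thresholds follow the cut-offs reported in the abstract
    (size <= 10 mm, attenuation < 830 HU, SSD < 90 mm), but the
    equal weighting is a hypothetical illustration, not the
    authors' validated score.
    """
    score = 0
    if stone_size_mm <= 10.0:
        score += 1
    if attenuation_hu < 830.0:
        score += 1
    if ssd_mm < 90.0:
        score += 1
    return score  # 0 (least favorable) .. 3 (most favorable)
```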

  17. The Araucaria Project: accurate stellar parameters and distance to evolved eclipsing binary ASAS J180057-2333.8 in Sagittarius Arm

    NASA Astrophysics Data System (ADS)

    Suchomska, K.; Graczyk, D.; Smolec, R.; Pietrzyński, G.; Gieren, W.; Stȩpień, K.; Konorski, P.; Pilecki, B.; Villanova, S.; Thompson, I. B.; Górski, M.; Karczmarek, P.; Wielgórski, P.; Anderson, R. I.

    2015-07-01

    We have analyzed the double-lined eclipsing binary system ASAS J180057-2333.8 from the All Sky Automated Survey (ASAS) catalogue. We measure absolute physical and orbital parameters for this system based on archival V-band and I-band ASAS photometry, as well as on high-resolution spectroscopic data obtained with the ESO 3.6 m/HARPS and CORALIE spectrographs. The physical and orbital parameters of the system were derived with an accuracy of about 0.5-3 per cent. The system is a very rare configuration of two bright, well-detached giants of spectral types K1 and K4 and luminosity class II. The radii of the stars are R1 = 52.12 ± 1.38 and R2 = 67.63 ± 1.40 R⊙ and their masses are M1 = 4.914 ± 0.021 and M2 = 4.875 ± 0.021 M⊙. The exquisite accuracy of 0.5 per cent obtained for the masses of the components is among the best mass determinations for giants. We derived a precise distance to the system of 2.14 ± 0.06 (stat.) ± 0.05 (syst.) kpc, which places the star in the Sagittarius-Carina arm. The Galactic rotational velocity of the star is Θs = 258 ± 26 km s-1, assuming Θ0 = 238 km s-1. A comparison with PARSEC isochrones places the system at the early phase of core helium burning, with an age slightly greater than 100 million years. The effect of overshooting on stellar evolutionary tracks was explored using the MESA star code.

  18. Quantitative Microbial Risk Assessment Tutorial – SDMProjectBuilder: Import Local Data Files to Identify and Modify Contamination Sources and Input Parameters

    EPA Science Inventory

    Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values to parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...

  19. Possibilities of improving the parameters of hyperthermia in regional isolated limb perfusion using epidural bupivacaine and accurate temperature measurement of the three layers of limb tissue.

    PubMed

    Jastrzebski, Tomasz; Sommer, Anna; Swierblewski, Maciej; Lass, Piotr; Rogowski, Jan; Drucis, Kamil; Kopacz, Andrzej

    2006-06-01

    The present study presents the authors' modification of the method, which aims to create proper parameters for the treatment. The selected group consisted of 15 women and eight men, with a mean age of 57.2 years (range 26 to 72 years). The patients were divided into two groups, depending on whether they were given epidural bupivacaine (group I - 13 patients treated between 2001 and 2004) or not [group II (control) - 10 patients treated earlier, between 1997 and 2000]. We observed a significant change in the temperature of the thigh muscles (P=0.009) and shank muscles (P=0.006). In control group II, there was a statistically significant difference (P=0.048) between the temperatures of the muscles and subcutaneous tissue on the one hand and the shank skin on the other. That difference averaged 0.67 degrees Celsius (range 0.4 to 0.9) during the perfusion after applying the cytostatic. The temperature of the skin was lower than the temperature of the deeper tissues of the shank and did not exceed 39.9 degrees Celsius. Such a difference in temperatures was not observed in the group I patients, who were given bupivacaine into the epidural space before applying the cytostatic. The difference in temperatures averaged 0.26 degrees Celsius and was not statistically significant (P=0.99), whereas the shank skin temperature was 40.0-40.6 degrees Celsius. The attained results imply that, despite the noticeable improvement in the heating of the limb muscles after application of bupivacaine, the improvement in the heating of the skin and subcutaneous tissue is still not satisfactory, although the upward trend suggests such a possibility.

  20. LAND AND WATER USE CHARACTERISTICS AND HUMAN HEALTH INPUT PARAMETERS FOR USE IN ENVIRONMENTAL DOSIMETRY AND RISK ASSESSMENTS AT THE SAVANNAH RIVER SITE

    SciTech Connect

    Jannik, T.; Karapatakis, D.; Lee, P.; Farfan, E.

    2010-08-06

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) Regulatory Guides. Within the regulatory guides, default values are provided for many of the dose model parameters but the use of site-specific values by the applicant is encouraged. A detailed survey of land and water use parameters was conducted in 1991 and is being updated here. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors to be used in human health exposure calculations at SRS are documented. Based on comparisons to the 2009 SRS environmental compliance doses, the following effects are expected in future SRS compliance dose calculations: (1) Aquatic all-pathway maximally exposed individual doses may go up about 10 percent due to changes in the aquatic bioaccumulation factors; (2) Aquatic all-pathway collective doses may go up about 5 percent due to changes in the aquatic bioaccumulation factors that offset the reduction in average individual water consumption rates; (3) Irrigation pathway doses to the maximally exposed individual may go up about 40 percent due to increases in the element-specific transfer factors; (4) Irrigation pathway collective doses may go down about 50 percent due to changes in food productivity and production within the 50-mile radius of SRS; (5) Air pathway doses to the maximally exposed individual may go down about 10 percent due to the changes in food productivity in the SRS area and to the changes in element-specific transfer factors; and (6

  1. Parasitic analysis and π-type Butterworth-Van Dyke model for complementary-metal-oxide-semiconductor Lamb wave resonator with accurate two-port Y-parameter characterizations

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Goh, Wang Ling; Chai, Kevin T.-C.; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu

    2016-04-01

    The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially those fabricated in silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above-mentioned parasitic effects, which are commonly observed in Lamb-wave resonators. It combines the interdigital capacitance (both plate and fringe capacitance), interdigital resistance, Ohmic losses in the substrate, and the acoustic motional behavior of the typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, strengthening its capability of characterizing both the magnitude and phase of either Y11 or Y21. This accurate modelling of two-port Y-parameters makes the PiBVD model beneficial in the characterization of Lamb-wave resonators, providing accurate simulation of Lamb-wave resonators and oscillators.
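    For reference, the one-port admittance of the classic Butterworth-Van Dyke circuit (static capacitance C0 in parallel with a motional Rm-Lm-Cm branch) can be evaluated directly. The element values below are illustrative only, and the additional PiBVD parasitic elements described in the paper are not modeled:

```python
import math

def mbvd_admittance(f_hz, c0, rm, lm, cm):
    """One-port admittance of the classic Butterworth-Van Dyke circuit:
    static capacitance C0 in parallel with a motional Rm-Lm-Cm branch.
    The PiBVD parasitics described in the paper are not included here.
    """
    w = 2.0 * math.pi * f_hz
    z_motional = rm + 1j * w * lm + 1.0 / (1j * w * cm)
    return 1j * w * c0 + 1.0 / z_motional

# Illustrative element values (not fitted to any real resonator):
c0, rm, lm, cm = 1e-12, 50.0, 1e-3, 1e-14
f_s = 1.0 / (2.0 * math.pi * math.sqrt(lm * cm))  # series (motional) resonance
```

    At f_s the motional reactance cancels and |Y| peaks, which is the feature any BVD-family fit must reproduce.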

  2. Development of a Three-Dimensional Heat-Transfer Model for the Gas Tungsten Arc Welding Process Using the Finite Element Method Coupled with a Genetic Algorithm Based Identification of Uncertain Input Parameters

    NASA Astrophysics Data System (ADS)

    Bag, S.; de, A.

    2008-11-01

    An accurate estimation of the temperature field in the weld pool and its surrounding area is important for a priori determination of the weld-pool dimensions and the weld thermal cycles. A finite element based three-dimensional (3-D) quasi-steady heat-transfer model is developed in the present work to compute the temperature field in the gas tungsten arc welding (GTAW) process. The numerical model considers temperature-dependent material properties and the latent heat of melting and solidification. A novelty of the numerical model is that the welding heat source is treated as an adaptive volumetric heat source that conforms to the size and shape of the weld pool. The need to predefine the dimensions of the volumetric heat source is thus overcome. The numerical model is further integrated with a parent-centric recombination (PCX) operated generalized generation gap (G3) model based genetic algorithm to identify the magnitudes of process efficiency and arc radius, which are usually unknown but required for an accurate estimation of the net heat input into the workpiece. The complete numerical model and the genetic algorithm based optimization code are developed indigenously using an Intel Fortran Compiler. The integrated model is validated with a number of experimentally measured weld dimensions in GTA-welded stainless steel samples.

  3. Data including GROMACS input files for atomistic molecular dynamics simulations of mixed, asymmetric bilayers including molecular topologies, equilibrated structures, and force field for lipids compatible with OPLS-AA parameters.

    PubMed

    Róg, Tomasz; Orłowski, Adam; Llorente, Alicia; Skotland, Tore; Sylvänne, Tuulia; Kauhanen, Dimple; Ekroos, Kim; Sandvig, Kirsten; Vattulainen, Ilpo

    2016-06-01

    In this Data in Brief article we provide a data package of GROMACS input files for atomistic molecular dynamics simulations of multicomponent, asymmetric lipid bilayers using the OPLS-AA force field. These data include 14 model bilayers composed of 8 different lipid molecules. The lipids present in these models are: cholesterol (CHOL), 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC), 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylethanolamine (POPE), 1-stearoyl-2-oleoyl-sn-glycero-3-phosphatidyl-ethanolamine (SOPE), 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylserine (POPS), 1-stearoyl-2-oleoyl-sn-glycero-3-phosphatidylserine (SOPS), N-palmitoyl-D-erythro-sphingosyl-phosphatidylcholine (SM16), and N-lignoceroyl-D-erythro-sphingosyl-phosphatidylcholine (SM24). The bilayers' compositions are based on lipidomic studies of PC-3 prostate cancer cells and exosomes discussed in Llorente et al. (2013) [1], showing an increase in the fraction of long-tail lipid species (SOPS, SOPE, and SM24) in the exosomes. Prior knowledge about lipid asymmetry in cell membranes was accounted for in the models, meaning that the model of the inner leaflet is composed of a mixture of PC, PS, PE, and cholesterol, while the extracellular leaflet is composed of SM, PC and cholesterol, as discussed in Van Meer et al. (2008) [2]. The provided data include lipid topologies, equilibrated structures of the asymmetric bilayers, all force field parameters, and input files with parameters describing the simulation conditions (md.mdp). The data are associated with the research article "Interdigitation of Long-Chain Sphingomyelin Induces Coupling of Membrane Leaflets in a Cholesterol Dependent Manner" (Róg et al., 2016) [3].

  4. Toward an inventory of nitrogen input to the United States

    EPA Science Inventory

    Accurate accounting of nitrogen inputs is increasingly necessary for policy decisions related to aquatic nutrient pollution. Here we synthesize available data to provide the first integrated estimates of the amount and uncertainty of nitrogen inputs to the United States. Abou...

  5. Beyond Rainfall Multipliers: Describing Input Uncertainty as an Autocorrelated Stochastic Process Improves Inference in Hydrology

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Albert, C.; Reichert, P.; Rieckermann, J.

    2015-12-01

    Rainfall is the main driver of hydrological systems. Unfortunately, it is highly variable in space and time and therefore difficult to observe accurately. This poses a serious challenge to correctly estimating catchment-averaged precipitation, a key factor for hydrological models. As biased precipitation leads to biased parameter estimation and thus to biased runoff predictions, it is very important to have a realistic description of precipitation uncertainty. Rainfall multipliers (RM), which correct each observed storm with a random factor, provide a first step in this direction. Nevertheless, they often fail when the estimated input has a different temporal pattern from the true one or when a storm is not detected by the rain gauge. In this study we propose a more realistic input error model, which is able to overcome these challenges and increase our certainty by better estimating model input and parameters. We formulate the average precipitation over the watershed as a stochastic input process (SIP). We suggest a transformed Gauss-Markov process, which is estimated in a Bayesian framework by using input (rainfall) and output (runoff) data. We tested the methodology in a 28.6 ha urban catchment represented by an accurate conceptual model. Specifically, we perform calibration and predictions with SIP and RM using accurate data from nearby rain gauges (R1) and inaccurate data from a distant gauge (R2). Results show that using SIP, the estimated model parameters are "protected" from the corrupting impact of inaccurate rainfall. Additionally, SIP can correct input biases during calibration (Figure) and reliably quantify rainfall and runoff uncertainties during both calibration (Figure) and validation. In our real-world application with non-trivial rainfall errors, this was not the case with RM. We therefore recommend SIP in all cases where the input is the predominant source of uncertainty. Furthermore, the high-resolution rainfall intensities obtained with this
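    A transformed Gauss-Markov process of the kind proposed can be sketched as a discretized Ornstein-Uhlenbeck (AR(1)) latent state passed through an exponential link so that intensities stay nonnegative. Parameter names and values here are illustrative, not those of the study:

```python
import math
import random

def gauss_markov_rain(n, dt=1.0, tau=5.0, sigma=1.0, seed=0):
    """Stochastic rainfall input as a transformed Gauss-Markov process.

    The latent state is a discretized Ornstein-Uhlenbeck (AR(1)) process
    with correlation time tau; exp(.) maps it to nonnegative intensities.
    All parameter names and values are illustrative assumptions.
    """
    rng = random.Random(seed)
    phi = math.exp(-dt / tau)                      # lag-1 autocorrelation
    innov_sd = sigma * math.sqrt(1.0 - phi * phi)  # stationary innovation sd
    x = rng.gauss(0.0, sigma)                      # start in stationary law
    rain = []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, innov_sd)
        rain.append(math.exp(x))                   # intensity, always > 0
    return rain
```

    In the Bayesian setting described above, the latent states would be inferred jointly with the model parameters rather than simulated forward.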

  6. An updated Quantitative Water Air Sediment Interaction (QWASI) model for evaluating chemical fate and input parameter sensitivities in aquatic systems: application to D5 (decamethylcyclopentasiloxane) and PCB-180 in two lakes.

    PubMed

    Mackay, Donald; Hughes, Lauren; Powell, David E; Kim, Jaeshin

    2014-09-01

    The QWASI fugacity mass balance model has been widely used since 1983 for both scientific and regulatory purposes to estimate the concentrations of organic chemicals in water and sediment, given an assumed rate of chemical emission, advective inflow in water or deposition from the atmosphere. It has become apparent that an updated version is required, especially to incorporate improved methods of obtaining input parameters such as partition coefficients. Accordingly, the model has been revised and it is now available in spreadsheet format. Changes to the model are described and the new version is applied to two chemicals, D5 (decamethylcyclopentasiloxane) and PCB-180, in two lakes, Lake Pepin (MN, USA) and Lake Ontario, showing the model's capability of illustrating both the chemical to chemical differences and lake to lake differences. Since there are now increased regulatory demands for rigorous sensitivity and uncertainty analyses, these aspects are discussed and two approaches are illustrated. It is concluded that the new QWASI water quality model can be of value for both evaluative and simulation purposes, thus providing a tool for obtaining an improved understanding of chemical mass balances in lakes, as a contribution to the assessment of fate and exposure and as a step towards the assessment of risk.
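    One of the simplest sensitivity approaches alluded to is one-at-a-time perturbation. The sketch below applies it to a toy steady-state lake mass balance, a drastic simplification of the full QWASI fugacity model, with hypothetical inputs:

```python
def steady_state_conc(emission, outflow, deg_rate, volume):
    """Steady-state concentration in a single well-mixed lake whose only
    losses are advective outflow and first-order degradation -- a
    deliberately crude stand-in for the full QWASI fugacity balance."""
    return emission / (outflow + deg_rate * volume)

def oat_sensitivity(base, param, delta=0.1):
    """One-at-a-time sensitivity: relative output change per relative
    change in a single input parameter (an elasticity)."""
    perturbed = dict(base)
    perturbed[param] *= 1.0 + delta
    c0 = steady_state_conc(**base)
    c1 = steady_state_conc(**perturbed)
    return ((c1 - c0) / c0) / delta

# Illustrative inputs (arbitrary but consistent units):
base = dict(emission=100.0, outflow=50.0, deg_rate=0.01, volume=1000.0)
```

    Because the toy model is linear in the emission rate, its emission elasticity is exactly 1, while the loss-term parameters have negative elasticities.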

  7. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    NASA Astrophysics Data System (ADS)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
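    The Fisher information metric at the core of this framework can be illustrated on a toy two-parameter exponential-decay model (a stand-in, not a lithium-ion battery model): the determinant of FIM = J^T J / sigma^2 (D-optimality) quantifies how well a given input or sampling schedule constrains the parameters:

```python
import math

def fisher_information(times, a=1.0, b=0.5, sigma=0.01, eps=1e-6):
    """2x2 Fisher information matrix for the toy model y(t) = a*exp(-b*t)
    with i.i.d. Gaussian measurement noise: FIM = J^T J / sigma^2.
    Sensitivities are taken by central finite differences.  This is an
    illustrative stand-in, not an actual battery model.
    """
    def y(t, a_, b_):
        return a_ * math.exp(-b_ * t)
    J = [[(y(t, a + eps, b) - y(t, a - eps, b)) / (2 * eps),
          (y(t, a, b + eps) - y(t, a, b - eps)) / (2 * eps)] for t in times]
    return [[sum(row[i] * row[j] for row in J) / sigma**2 for j in range(2)]
            for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# D-optimal intuition: samples spread across the decay are more
# informative about (a, b) than samples clustered near t = 0.
clustered = [0.0, 0.1, 0.2, 0.3]
spread = [0.0, 1.0, 3.0, 6.0]
```

    Input shaping in the dissertation plays the same role as the choice of sampling schedule here: it is selected to maximize a scalar summary of the FIM.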

  8. INDES User's guide multistep input design with nonlinear rotorcraft modeling

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.

  9. High input impedance amplifier

    NASA Technical Reports Server (NTRS)

    Kleinberg, Leonard L.

    1995-01-01

    High input impedance amplifiers are provided which reduce the input impedance to solely a capacitive reactance or, in a somewhat more complex design, provide an extremely high, essentially infinite, capacitive reactance. In one embodiment, where the input impedance is reduced, in essence, to solely a capacitive reactance, an operational amplifier in a follower configuration is driven at its non-inverting input and a resistor of a predetermined magnitude is connected between the inverting and non-inverting inputs. A second embodiment eliminates the capacitance from the input by adding a second stage to the first embodiment. The second stage is a second operational amplifier in a non-inverting gain-stage configuration, where the output of the first follower stage drives the non-inverting input of the second stage and the output of the second stage is fed back to the non-inverting input of the first stage through a capacitor of a predetermined magnitude. These amplifiers, while generally useful, are particularly useful as sensor buffer amplifiers that may eliminate significant sources of error.

  10. MDS MIC Catalog Inputs

    NASA Technical Reports Server (NTRS)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  11. Talking Speech Input.

    ERIC Educational Resources Information Center

    Berliss-Vincent, Jane; Whitford, Gigi

    2002-01-01

    This article presents both the factors involved in successful speech input use and the potential barriers that may suggest that other access technologies could be more appropriate for a given individual. Speech input options that are available are reviewed and strategies for optimizing use of speech recognition technology are discussed. (Contains…

  12. Signal Prediction With Input Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Chen, Ya-Chin

    1999-01-01

    A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
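    The conventional recursive least-squares update referred to above has a standard form. The following is a pure-Python sketch with illustrative regressor data, not the paper's speech-coding setup:

```python
def rls_fit(phis, ys, n, lam=1.0, delta=1000.0):
    """Recursive least-squares estimate of theta in y = phi . theta.

    lam   : forgetting factor (1.0 = ordinary RLS)
    delta : initial covariance scale (large = weak prior)
    Pure-Python sketch for small n; illustrative only.
    """
    theta = [0.0] * n
    P = [[delta if i == j else 0.0 for j in range(n)] for i in range(n)]
    for phi, y in zip(phis, ys):
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
        k = [v / denom for v in Pphi]                   # gain vector
        err = y - sum(phi[i] * theta[i] for i in range(n))
        theta = [theta[i] + k[i] * err for i in range(n)]
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(n)]
             for i in range(n)]
    return theta

# Recover theta = [2, -3] from noiseless regressor/output pairs.
phis = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
        [2.0, 1.0], [1.0, 2.0], [3.0, -1.0]]
ys = [2.0 * a - 3.0 * b for a, b in phis]
theta = rls_fit(phis, ys, 2)
```

    In the coding scheme described above, the regressors would contain past signal samples and the candidate excitation, updated sample by sample.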

  13. Digital system accurately controls velocity of electromechanical drive

    NASA Technical Reports Server (NTRS)

    Nichols, G. B.

    1965-01-01

    Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.

  14. Inferring Indel Parameters using a Simulation-based Approach

    PubMed Central

    Levy Karin, Eli; Rabin, Avigayel; Ashkenazy, Haim; Shkedy, Dafna; Avram, Oren; Cartwright, Reed A.; Pupko, Tal

    2015-01-01

    In this study, we present a novel methodology to infer indel parameters from multiple sequence alignments (MSAs) based on simulations. Our algorithm searches for the set of evolutionary parameters describing indel dynamics which best fits a given input MSA. In each step of the search, we use parametric bootstraps and the Mahalanobis distance to estimate how well a proposed set of parameters fits input data. Using simulations, we demonstrate that our methodology can accurately infer the indel parameters for a large variety of plausible settings. Moreover, using our methodology, we show that indel parameters substantially vary between three genomic data sets: Mammals, bacteria, and retroviruses. Finally, we demonstrate how our methodology can be used to simulate MSAs based on indel parameters inferred from real data sets. PMID:26537226
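    The Mahalanobis distance used to score a candidate parameter set has a simple closed form. In the paper's setting the mean and inverse covariance would be estimated from parametric-bootstrap simulations, which are not shown in this sketch:

```python
def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance (x - mean)^T C^{-1} (x - mean).

    cov_inv is the inverse covariance matrix, supplied directly here;
    estimating it from bootstrap replicates is not shown.
    With the identity covariance this reduces to squared Euclidean
    distance.
    """
    d = [xi - mi for xi, mi in zip(x, mean)]
    n = len(d)
    return sum(d[i] * sum(cov_inv[i][j] * d[j] for j in range(n))
               for i in range(n))
```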

  15. Input and Input Processing in Second Language Acquisition.

    ERIC Educational Resources Information Center

    Alcon, Eva

    1998-01-01

    Analyzes second-language learners' processing of linguistic data within the target language, focusing on input and intake in second-language acquisition and factors and cognitive processes that affect input processing. Input factors include input simplification, input enhancement, and interactional modifications. Individual learner differences…

  16. Input Decimated Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that, on a wide range of domains, input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features, randomly selected subsets of features, or features created using principal components analysis.
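
    The idea can be sketched minimally as follows. This is a hypothetical illustration, not the article's code: nearest-centroid learners stand in for the base classifiers, and a simple correlation ranking stands in for the discrimination-based feature selection.

```python
import numpy as np

def class_corr(X, target):
    # |correlation| of each feature column with a one-vs-rest class indicator
    Xc = X - X.mean(axis=0)
    tc = target - target.mean()
    return np.abs(Xc.T @ tc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(tc)))

def fit_ide(X, y, k):
    """Input-decimated ensemble (sketch): one nearest-centroid base learner per
    class, each trained only on the k features most correlated with that
    class's indicator, so the base learners see different feature subsets."""
    classes = np.unique(y)
    models = []
    for c in classes:
        feats = np.argsort(class_corr(X, (y == c).astype(float)))[-k:]
        centroids = np.array([X[y == cls][:, feats].mean(axis=0) for cls in classes])
        models.append((feats, centroids))
    return classes, models

def predict_ide(classes, models, X):
    votes = []
    for feats, centroids in models:
        dists = np.linalg.norm(X[:, feats, None] - centroids.T[None], axis=1)
        votes.append(classes[np.argmin(dists, axis=1)])
    votes = np.stack(votes)  # majority vote across the decimated base learners
    return np.array([np.bincount(col).argmax() for col in votes.T])

rng = np.random.default_rng(0)
n = 200
X0 = rng.normal(0, 1, (n, 6)); X0[:, 0] += 2.0   # feature 0 marks class 0
X1 = rng.normal(0, 1, (n, 6)); X1[:, 1] += 2.0   # feature 1 marks class 1
X = np.vstack([X0, X1]); y = np.array([0] * n + [1] * n)
classes, models = fit_ide(X, y, k=2)
pred = predict_ide(classes, models, X)
print((pred == y).mean())  # training accuracy, well above chance
```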

  17. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order, high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
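
    The order-of-accuracy claim is easy to check numerically: for a pth-order finite difference, halving the step divides the error by roughly 2^p. A small sketch with standard central differences (not the paper's schemes):

```python
import numpy as np

def d1_central(f, x, h, order):
    """Central finite-difference first derivative, 2nd or 4th order."""
    if order == 2:
        return (f(x + h) - f(x - h)) / (2 * h)
    # 4th order: the wider stencil cancels the h^2 error term
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

x0, h = 1.0, 0.1
exact = np.cos(x0)  # derivative of sin at x0
for order in (2, 4):
    err_h  = abs(d1_central(np.sin, x0, h, order) - exact)
    err_h2 = abs(d1_central(np.sin, x0, h / 2, order) - exact)
    print(order, err_h / err_h2)  # ratio ~2^order when h is halved
```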

  18. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
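
    For contrast, the classical monotone cubic construction that this work improves upon, a Fritsch-Carlson-style harmonic-mean slope limiter, can be sketched as follows; it preserves monotonicity but is the variant that loses accuracy near extrema:

```python
import numpy as np

def pchip_slopes(x, y):
    """Monotonicity-preserving knot slopes (Fritsch-Carlson-style limiter)."""
    h = np.diff(x); d = np.diff(y) / h
    m = np.zeros_like(y)
    for i in range(1, len(x) - 1):
        if d[i-1] * d[i] > 0:                         # no local extremum here
            m[i] = 2 * d[i-1] * d[i] / (d[i-1] + d[i])  # harmonic-mean limiter
    m[0], m[-1] = d[0], d[-1]
    return m

def eval_hermite(x, y, m, xq):
    """Evaluate the piecewise cubic Hermite interpolant at points xq."""
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i+1] - x[i]; t = (xq - x[i]) / h
    h00 = (1 + 2*t) * (1 - t)**2; h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2*t);       h11 = t**2 * (t - 1)
    return h00*y[i] + h10*h*m[i] + h01*y[i+1] + h11*h*m[i+1]

x = np.array([0.0, 1.0, 2.0, 3.0]); y = np.array([0.0, 0.1, 0.9, 1.0])
m = pchip_slopes(x, y)
xq = np.linspace(0.0, 3.0, 301)
yq = eval_hermite(x, y, m, xq)
print(bool(np.all(np.diff(yq) >= -1e-9)))  # interpolant stays monotone
```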

  19. Evaluation of Piloted Inputs for Onboard Frequency Response Estimation

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Martos, Borja

    2013-01-01

    Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.

  20. Nuclear reaction inputs based on effective interactions

    NASA Astrophysics Data System (ADS)

    Hilaire, S.; Goriely, S.; Péru, S.; Dubray, N.; Dupuis, M.; Bauge, E.

    2016-11-01

    Extensive nuclear structure studies have been performed for decades using effective interactions as sole input. They have shown a remarkable ability to describe rather accurately many types of nuclear properties. In the early 2000s, a major effort has been engaged to produce nuclear reaction input data out of the Gogny interaction, in order to challenge its quality also with respect to nuclear reaction observables. The status of this project, well advanced today thanks to the use of modern computers as well as modern nuclear reaction codes, is reviewed and future developments are discussed.

  1. Measuring Input Thresholds on an Existing Board

    NASA Technical Reports Server (NTRS)

    Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.

    2011-01-01

    A critical PECL (positive emitter-coupled logic) to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in the worst case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way was needed to measure the exact input threshold of this device for 64 inputs on a flight board. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog-to-digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pull-down current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs, including the two that are probed directly. The data was combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
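
    The relation between fall time and threshold can be illustrated with a simple constant-current discharge model (all values hypothetical; the flight analysis solved for all 62 lines at once with the Excel solver): with pull-down current I and line capacitance C, the slow fall crosses the threshold Vth after t_fall = C(V0 - Vth)/I, so Vth can be recovered from the measured pulse-width change.

```python
def threshold_from_fall_time(t_fall, c_line, i_pull, v0=2.5):
    """Invert t_fall = C * (V0 - Vth) / I for the input threshold Vth.
    All numbers here are hypothetical illustrations, not flight data."""
    return v0 - i_pull * t_fall / c_line

# e.g. a 10 pF line with a 50 uA pull-down that takes 300 ns to reach threshold
vth = threshold_from_fall_time(300e-9, 10e-12, 50e-6)
print(vth)  # 2.5 - 50e-6 * 300e-9 / 10e-12 = 1.0 V
```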

  2. Predicting hydration Gibbs energies of alkyl-aromatics using molecular simulation: a comparison of current force fields and the development of a new parameter set for accurate solvation data.

    PubMed

    Garrido, Nuno M; Jorge, Miguel; Queimada, António J; Gomes, José R B; Economou, Ioannis G; Macedo, Eugénia A

    2011-10-14

    The Gibbs energy of hydration is an important quantity to understand the molecular behavior in aqueous systems at constant temperature and pressure. In this work we review the performance of some popular force fields, namely TraPPE, OPLS-AA and Gromos, in reproducing the experimental Gibbs energies of hydration of several alkyl-aromatic compounds--benzene, mono-, di- and tri-substituted alkylbenzenes--using molecular simulation techniques. In the second part of the paper, we report a new model that is able to improve such hydration energy predictions, based on Lennard-Jones parameters from the recent TraPPE-EH force field and atomic partial charges obtained from natural population analysis of density functional theory calculations. We apply a scaling factor determined by fitting the experimental hydration energy of only two solutes, and then present a simple rule to generate atomic partial charges for different substituted alkyl-aromatics. This rule has the added advantages of eliminating the unnecessary assumption of fixed charge on every substituted carbon atom and providing a simple guideline for extrapolating the charge assignment to any multi-substituted alkyl-aromatic molecule. The point charges derived here yield excellent predictions of experimental Gibbs energies of hydration, with an overall absolute average deviation of less than 0.6 kJ mol⁻¹. This new parameter set can also give good predictive performance for other thermodynamic properties and liquid structural information.

  3. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma

    PubMed Central

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-01-01

    (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual input extended Tofts model, ve was significantly less than that in the dual input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). CONCLUSION: A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability. PMID:27053857
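
    The dual-input extended Tofts tissue curve can be sketched as follows (synthetic input functions and made-up parameter values; Ktrans, kep, vp, and HPI as in the abstract, with ve implied by Ktrans/kep):

```python
import numpy as np

def dual_input_extended_tofts(t, ca, cv, ktrans, kep, vp, hpi):
    """Tissue concentration for a dual-input extended Tofts model (sketch).
    hpi is the hepatic perfusion index: the arterial fraction of the mixed
    arterial/portal-venous input."""
    cin = hpi * ca + (1.0 - hpi) * cv              # mixed dual input
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)                      # exponential residue function
    conv = np.convolve(cin, kernel)[:len(t)] * dt  # discrete convolution integral
    return vp * cin + ktrans * conv

t = np.linspace(0.0, 5.0, 501)                     # time in minutes
ca = 5.0 * t * np.exp(-2.0 * t)                    # toy arterial input function
tv = np.clip(t - 0.3, 0.0, None)
cv = 3.0 * tv * np.exp(-1.5 * tv)                  # delayed, dispersed portal input
ct = dual_input_extended_tofts(t, ca, cv, ktrans=0.3, kep=0.8, vp=0.05, hpi=0.6)
print(len(ct), bool(ct.min() >= 0.0))
```

    Fitting measured tissue curves with such a model yields the per-voxel Ktrans, kep, vp, and HPI estimates compared in the study.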

  4. Accurate determination of optical bandgap and lattice parameters of Zn₁₋ₓMgₓO epitaxial films (0 ≤ x ≤ 0.3) grown by plasma-assisted molecular beam epitaxy on a-plane sapphire

    SciTech Connect

    Laumer, Bernhard; Schuster, Fabian; Stutzmann, Martin; Bergmaier, Andreas; Dollinger, Guenther; Eickhoff, Martin

    2013-06-21

    Zn₁₋ₓMgₓO epitaxial films with Mg concentrations 0 ≤ x ≤ 0.3 were grown by plasma-assisted molecular beam epitaxy on a-plane sapphire substrates. Precise determination of the Mg concentration x was performed by elastic recoil detection analysis. The bandgap energy was extracted from absorption measurements with high accuracy, taking electron-hole interaction and exciton-phonon complexes into account. From these results a linear relationship between bandgap energy and Mg concentration is established for x ≤ 0.3. Due to alloy disorder, the increase of the photoluminescence emission energy with Mg concentration is less pronounced. An analysis of the lattice parameters reveals that the epitaxial films grow biaxially strained on a-plane sapphire.

  5. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  6. How accurate is the Kubelka-Munk theory of diffuse reflection? A quantitative answer

    NASA Astrophysics Data System (ADS)

    Joseph, Richard I.; Thomas, Michael E.

    2012-10-01

    The (heuristic) Kubelka-Munk theory of diffuse reflectance and transmittance of a film on a substrate, which is widely used because it gives simple analytic results, is compared to the rigorous radiative transfer model of Chandrasekhar. The rigorous model has to be numerically solved, thus is less intuitive. The Kubelka-Munk theory uses an absorption coefficient and scatter coefficient as inputs, similar to the rigorous model of Chandrasekhar. The relationship between these two sets of coefficients is addressed. It is shown that the Kubelka-Munk theory is remarkably accurate if one uses the proper albedo parameter.
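
    The Kubelka-Munk relation at the center of this comparison is simple enough to state directly: for an optically thick layer with diffuse reflectance R∞, the remission function F(R∞) = (1 - R∞)² / (2R∞) equals the ratio K/S of the Kubelka-Munk absorption and scatter coefficients.

```python
import math

def km_remission(r_inf):
    """Kubelka-Munk remission function: F(R) = (1 - R)^2 / (2R) = K/S
    for an optically thick layer with diffuse reflectance R."""
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

def km_reflectance(k_over_s):
    """Inverse relation: R_inf = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    return 1.0 + k_over_s - math.sqrt(k_over_s ** 2 + 2.0 * k_over_s)

print(km_remission(0.5))     # 0.25
print(km_reflectance(0.25))  # 0.5 (round trip)
```

    The paper's question is how the heuristic K and S entering this formula relate to the absorption and scatter coefficients of the rigorous radiative transfer model.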

  7. Clinical application of a novel automatic algorithm for actigraphy-based activity and rest period identification to accurately determine awake and asleep ambulatory blood pressure parameters and cardiovascular risk.

    PubMed

    Crespo, Cristina; Fernández, José R; Aboy, Mateo; Mojón, Artemio

    2013-03-01

    This paper reports the results of a study designed to determine whether there are statistically significant differences between the values of ambulatory blood pressure monitoring (ABPM) parameters obtained using different methods-fixed schedule, diary, and automatic algorithm based on actigraphy-of defining the main activity and rest periods, and to determine the clinical relevance of such differences. We studied 233 patients (98 men/135 women), 61.29 ± .83 yrs of age (mean ± SD). Statistical methods were used to measure agreement in the diagnosis and classification of subjects within the context of ABPM and cardiovascular disease risk assessment. The results show that there are statistically significant differences both at the group and individual levels. Those at the individual level have clinically significant implications, as they can result in a different classification, and, therefore, different diagnosis and treatment for individual subjects. The use of an automatic algorithm based on actigraphy can lead to better individual treatment by correcting the accuracy problems associated with the fixed schedule on patients whose actual activity/rest routine differs from the fixed schedule assumed, and it also overcomes the limitations and reliability issues associated with the use of diaries.

  8. Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan; Deng, Weiling

    2010-01-01

    To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…

  9. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad

  10. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive color differences as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5 unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but if we also want exact color coordinate values, accuracy problems arise. The values from two instruments can be astonishingly different. The accuracy of an instrument used in color measurement may depend on various errors, such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular included and specular excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise of measuring time, conditions, and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show what accuracy demands a good colorimeter should meet.
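
    The 0.5-unit criterion refers to the CIE76 color difference, a Euclidean distance in CIELAB space:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB (L*, a*, b*) triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two near-identical grays differing only in b*: right at the 0.5-unit limit
print(delta_e_ab((50.0, 2.0, 2.0), (50.0, 2.0, 2.5)))  # 0.5
```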

  11. TASSRAP Input Module

    DTIC Science & Technology

    1977-07-29

    retrieve data necessary for the other modules to function. Initially there are 13 inputs, with the CRT displaying the information to be entered.

  12. Input Multiplicities in Process Control.

    ERIC Educational Resources Information Center

    Koppel, Lowell B.

    1983-01-01

    Describes research investigating potential effect of input multiplicity on multivariable chemical process control systems. Several simple processes are shown to exhibit the possibility of theoretical developments on input multiplicity and closely related phenomena are discussed. (JN)

  13. Modeling and generating input processes

    SciTech Connect

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate inputs, but much of the philosophy is relevant to univariate inputs as well. 14 refs.
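
    A standard building block in this area, generating correlated multivariate normal inputs via the Cholesky factor of the target covariance, can be sketched as follows (a generic illustration, not code from the paper):

```python
import numpy as np

def correlated_normals(n, mean, cov, rng):
    """Draw n multivariate-normal simulation inputs with the given mean and
    covariance by transforming iid standard normals with the Cholesky factor."""
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n, len(mean)))
    return mean + z @ L.T

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
x = correlated_normals(50_000, np.array([10.0, 5.0]), cov, rng)
print(np.corrcoef(x, rowvar=False)[0, 1])  # ~0.8, the requested correlation
```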

  14. Developing Accurate Spatial Maps of Cotton Fiber Quality Parameters

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Awareness of the importance of cotton fiber quality (Gossypium, L. sps.) has increased as advances in spinning technology require better quality cotton fiber. Recent advances in geospatial information sciences allow an improved ability to study the extent and causes of spatial variability in fiber p...

  15. Clarifying types of uncertainty: when are models accurate, and uncertainties small?

    PubMed

    Cox, Louis Anthony Tony

    2011-10-01

    Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.
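
    The first claim, that widening input uncertainty can shrink output uncertainty, is easy to demonstrate with a toy Monte Carlo experiment (a hypothetical saturating response chosen for illustration, not an example from the comment itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def output_var(sigma, n=200_000):
    """Variance of a saturating output y = 1/(1 + x^2) when the input
    x ~ N(0, sigma); wider inputs push y toward 0, concentrating it."""
    x = rng.normal(0.0, sigma, n)
    return float(np.var(1.0 / (1.0 + x ** 2)))

print(output_var(1.0))   # moderate input uncertainty: larger output variance
print(output_var(10.0))  # wide input uncertainty: smaller output variance
```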

  16. The Kepler Input Catalog

    NASA Astrophysics Data System (ADS)

    Latham, D. W.; Brown, T. M.; Monet, D. G.; Everett, M.; Esquerdo, G. A.; Hergenrother, C. W.

    2005-12-01

    The Kepler mission will monitor 170,000 planet-search targets during the first year, and 100,000 after that. The Kepler Input Catalog (KIC) will be used to select optimum targets for the search for habitable earth-like transiting planets. The KIC will include all known catalogued stars in an area of about 177 square degrees centered at RA 19:22:40 and Dec +44:30 (l=76.3 and b=+13.5). 2MASS photometry will be supplemented with new ground-based photometry obtained in the SDSS g, r, i, and z bands plus a custom filter centered on the Mg b lines, using KeplerCam on the 48-inch telescope at the Whipple Observatory on Mount Hopkins, Arizona. The photometry will be used to estimate stellar characteristics for all stars brighter than K 14.5 mag. The KIC will include effective temperature, surface gravity, metallicity, reddening, distance, and radius estimates for these stars. The CCD images are pipeline processed to produce instrumental magnitudes at PSI. The photometry is then archived and transformed to the SDSS system at HAO, where the astrophysical analysis of the stellar characteristics is carried out. The results are then merged with catalogued data at the USNOFS to produce the KIC. High dispersion spectroscopy with Hectochelle on the MMT will be used to supplement the information for many of the most interesting targets. The KIC will be released before launch for use by the astronomical community and will be available for queries over the internet. Support from the Kepler mission is gratefully acknowledged.

  17. Bayesian robot system identification with input and output noise.

    PubMed

    Ting, Jo-Anne; D'Souza, Aaron; Schaal, Stefan

    2011-01-01

    For complex robots such as humanoids, model-based control is highly beneficial for accurate tracking while keeping negative feedback gains low for compliance. However, in such multi degree-of-freedom lightweight systems, conventional identification of rigid body dynamics models using CAD data and actuator models is inaccurate due to unknown nonlinear robot dynamic effects. An alternative method is data-driven parameter estimation, but significant noise in measured and inferred variables affects it adversely. Moreover, standard estimation procedures may give physically inconsistent results due to unmodeled nonlinearities or insufficiently rich data. This paper addresses these problems, proposing a Bayesian system identification technique for linear or piecewise linear systems. Inspired by Factor Analysis regression, we develop a computationally efficient variational Bayesian regression algorithm that is robust to ill-conditioned data, automatically detects relevant features, and identifies input and output noise. We evaluate our approach on rigid body parameter estimation for various robotic systems, achieving an error of up to three times lower than other state-of-the-art machine learning methods.
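
    The errors-in-variables problem the paper addresses can be illustrated with classical total least squares, which, unlike ordinary least squares, accounts for noise in the inputs as well as the outputs. This is a standard SVD-based sketch, not the paper's variational Bayesian algorithm:

```python
import numpy as np

def tls(X, y):
    """Total least squares via SVD: the coefficient vector comes from the
    right singular vector of [X y] with the smallest singular value.
    Consistent when input and output noise have equal variance."""
    Z = np.column_stack([X, y])
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    v = vt[-1]
    return -v[:-1] / v[-1]

rng = np.random.default_rng(2)
x_true = rng.uniform(-1, 1, (2000, 1))
y_true = 3.0 * x_true[:, 0]
X = x_true + rng.normal(0, 0.1, x_true.shape)   # noisy measured inputs
y = y_true + rng.normal(0, 0.1, y_true.shape)   # noisy measured outputs
print(tls(X, y))  # ~3.0; plain least squares would be biased toward zero
```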

  18. Serial Input Output

    SciTech Connect

    Waite, Anthony; /SLAC

    2011-09-07

    Serial Input/Output (SIO) is designed to be a long term storage format of a sophistication somewhere between simple ASCII files and the techniques provided by inter alia Objectivity and Root. The former tend to be low density, information lossy (floating point numbers lose precision) and inflexible. The latter require abstract descriptions of the data with all that that implies in terms of extra complexity. The basic building blocks of SIO are streams, records and blocks. Streams provide the connections between the program and files. The user can define an arbitrary list of streams as required. A given stream must be opened for either reading or writing. SIO does not support read/write streams. If a stream is closed during the execution of a program, it can be reopened in either read or write mode to the same or a different file. Records represent a coherent grouping of data. Records consist of a collection of blocks (see next paragraph). The user can define a variety of records (headers, events, error logs, etc.) and request that any of them be written to any stream. When SIO reads a file, it first decodes the record name and, if that record has been defined and unpacking has been requested for it, SIO proceeds to unpack the blocks. Blocks are user provided objects which do the real work of reading/writing the data. The user is responsible for writing the code for these blocks and for identifying these blocks to SIO at run time. To write a collection of blocks, the user must first connect them to a record. The record can then be written to a stream as described above. Note that the same block can be connected to many different records. When SIO reads a record, it scans through the blocks written and calls the corresponding block object (if it has been defined) to decode it. Undefined blocks are skipped. Each of these categories (streams, records and blocks) has some characteristics in common. Every stream, record and block has a name with the condition that each

  19. SDR Input Power Estimation Algorithms

    NASA Technical Reports Server (NTRS)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented in the ground software of the SCAN Testbed. They include a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm that uses neural networks to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
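
    The straight-line estimator can be sketched as an ordinary least-squares fit of calibration data relating AGC reading and temperature to known input power (all numbers below are hypothetical, not SCAN Testbed characterization data):

```python
import numpy as np

# Hypothetical calibration set: (AGC reading, temperature) -> known input power
rng = np.random.default_rng(3)
agc = rng.uniform(0.0, 5.0, 300)
temp = rng.uniform(10.0, 40.0, 300)
power = -90.0 + 8.0 * agc + 0.05 * temp + rng.normal(0, 0.1, 300)  # dBm, assumed

# Least-squares fit of the straight-line estimator over the calibration set
A = np.column_stack([np.ones_like(agc), agc, temp])
coef, *_ = np.linalg.lstsq(A, power, rcond=None)

def estimate_power(agc_val, temp_val):
    """Estimate SDR input power (dBm) from an AGC reading and temperature."""
    return coef @ np.array([1.0, agc_val, temp_val])

print(estimate_power(2.5, 25.0))  # ~ -68.75 dBm under the assumed law
```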

  20. Intermediate inputs and economic productivity.

    PubMed

    Baptist, Simon; Hepburn, Cameron

    2013-03-13

    Many models of economic growth exclude materials, energy and other intermediate inputs from the production function. Growing environmental pressures and resource prices suggest that this may be increasingly inappropriate. This paper explores the relationship between intermediate input intensity, productivity and national accounts using a panel dataset of manufacturing subsectors in the USA over 47 years. The first contribution is to identify sectoral production functions that incorporate intermediate inputs, while allowing for heterogeneity in both technology and productivity. The second contribution is that the paper finds a negative correlation between intermediate input intensity and total factor productivity (TFP)--sectors that are less intensive in their use of intermediate inputs have higher productivity. This finding is replicated at the firm level. We propose tentative hypotheses to explain this association, but testing and further disaggregation of intermediate inputs is left for further work. Further work could also explore more directly the relationship between material inputs and economic growth--given the high proportion of materials in intermediate inputs, the results in this paper are suggestive of further work on material efficiency. Depending upon the nature of the mechanism linking a reduction in intermediate input intensity to an increase in TFP, the implications could be significant. A third contribution is to suggest that an empirical bias in productivity, as measured in national accounts, may arise due to the exclusion of intermediate inputs. Current conventions of measuring productivity in national accounts may overstate the productivity of resource-intensive sectors relative to other sectors.

  1. Master control data handling program uses automatic data input

    NASA Technical Reports Server (NTRS)

    Alliston, W.; Daniel, J.

    1967-01-01

    General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.

  2. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 140 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
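
    The reported figures, sensitivity and specificity at a score cut-off and area under the ROC curve, can be computed from scores and labels as follows (synthetic RAZ-like scores for illustration, not the study's data):

```python
import numpy as np

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule `score >= cutoff` for class 1."""
    pred = scores >= cutoff
    sens = np.mean(pred[labels == 1])
    spec = np.mean(~pred[labels == 0])
    return sens, spec

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) identity: the
    probability a random positive outscores a random negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

# Hypothetical RAZ-like scores: 66 patients (label 1) and 66 controls (label 0)
rng = np.random.default_rng(4)
scores = np.concatenate([rng.normal(180, 40, 66), rng.normal(100, 40, 66)])
labels = np.concatenate([np.ones(66, int), np.zeros(66, int)])
print(auc(scores, labels))              # ~0.9 for this separation
print(sens_spec(scores, labels, 140.0)) # sensitivity, specificity at cut-off
```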

  3. Third order TRANSPORT with MAD (Methodical Accelerator Design) input

    SciTech Connect

    Carey, D.C.

    1988-09-20

    This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix. (LSP)

  4. Evaluation of severe accident risks: Quantification of major input parameters

    SciTech Connect

    Harper, F.T.; Payne, A.C.; Breeding, R.J.; Gorham, E.D.; Brown, T.D.; Rightley, G.S.; Gregory, J.J. ); Murfin, W. ); Amos, C.N. )

    1991-04-01

    This report records part of the vast amount of information received during the expert judgment elicitation process that took place in support of the NUREG-1150 effort sponsored by the U.S. Nuclear Regulatory Commission. The results of the Containment Loads and Molten Core/Containment Interaction Expert Panel Elicitation are presented in this part of Volume 2 of NUREG/CR-4551. The Containment Loads Expert Panel considered seven issues: (1) hydrogen phenomena at Grand Gulf; (2) hydrogen burn at vessel breach at Sequoyah; (3) BWR reactor building failure due to hydrogen; (4) Grand Gulf containment loads at vessel breach; (5) pressure increment in the Sequoyah containment at vessel breach; (6) loads at vessel breach: Surry; and (7) pressure increment in the Zion containment at vessel breach. The report begins with a brief discussion of the methods used to elicit the information from the experts. The information for each issue is then presented in five sections: (1) a brief definition of the issue, (2) a brief summary of the technical rationale supporting the distributions developed by each of the experts, (3) a brief description of the operations that the project staff performed on the raw elicitation results in order to aggregate the distributions, (4) the aggregated distributions, and (5) the individual expert elicitation summaries. The Molten Core/Containment Interaction Panel considered three issues. The results of the following two of these issues are presented in this document: (1) Peach Bottom drywell shell meltthrough; and (2) Grand Gulf pedestal erosion. 89 figs., 154 tabs.

  5. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    DTIC Science & Technology

    2013-08-01

    Science (69):370-380. Parker, G., C. Paola, K. X. Whipple, and D. Mohrig. 1998. Alluvial fans formed by channelized fluvial sheet flow. I: Theory...simulates the evolution of a prograding fan-shaped delta advancing into open water. This model is an extension of a tool developed for managing the

  6. Methods for Combining Payload Parameter Variations with Input Environment

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.; Straayer, J. W.

    1975-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
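As an illustration of the extreme-value idea above: if mission-maximum loads are assumed to follow a Gumbel distribution (a common extreme-value model; the abstract does not specify the distribution family), the design limit load is simply a chosen quantile of that distribution. The numbers below are invented for illustration:

```python
import math

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(maxima):
    """Method-of-moments Gumbel fit to a sample of mission-maximum loads."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((m - mean) ** 2 for m in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi     # scale from the variance
    mu = mean - EULER_GAMMA * beta            # location from the mean
    return mu, beta

def design_limit_load(mu, beta, p):
    """The load not exceeded with probability p in a mission (Gumbel quantile)."""
    return mu - beta * math.log(-math.log(p))
```

Fitting `mu` and `beta` to maxima extracted from time-domain load simulations, then evaluating the quantile at the criterion probability, mirrors the workflow the abstract describes.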

  7. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
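The error accumulation the authors quantify has the character of a random walk: integrating a velocity signal carrying independent noise gives an RMS position error that grows as the square root of the number of steps. A small Monte Carlo sketch of that scaling (not the paper's attractor network model):

```python
import math
import random

def path_integration_error(n_steps, sigma, trials=500, seed=1):
    """RMS position error after integrating a noisy 1-D velocity signal.

    Each step adds Gaussian noise of s.d. sigma; the expected RMS error
    after n_steps is sigma * sqrt(n_steps).
    """
    rng = random.Random(seed)
    sq_errs = []
    for _ in range(trials):
        err = sum(rng.gauss(0.0, sigma) for _ in range(n_steps))
        sq_errs.append(err * err)
    return math.sqrt(sum(sq_errs) / trials)
```

Quadrupling the number of integration steps should roughly double the RMS error, which is why integration time (or distance) bounds like the 10-100 m figures above arise once a tolerable error is fixed.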

  8. System and method for motor parameter estimation

    DOEpatents

    Luhrs, Bin; Yan, Ting

    2014-03-18

    A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
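A toy sketch of the lookup idea in the patent abstract: match the known parameters against a reference-motor table and read off the unknown one from the closest match. The parameter names and the nearest-neighbour rule here are illustrative assumptions, not the patented method:

```python
def estimate_unknown(known, reference_motors, unknown_key):
    """Infer an unknown motor parameter from the closest reference motor.

    known: dict of measured parameters, e.g. {"hp": 10, "rpm": 1800}
    reference_motors: list of dicts that also carry unknown_key.
    Closeness: normalized squared distance over the known keys.
    """
    def distance(ref):
        return sum(((known[k] - ref[k]) / (abs(known[k]) or 1.0)) ** 2
                   for k in known)
    best = min(reference_motors, key=distance)
    return best[unknown_key]
```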

  9. REL - English Bulk Data Input.

    ERIC Educational Resources Information Center

    Bigelow, Richard Henry

    A bulk data input processor which is available for the Rapidly Extensible Language (REL) English versions is described. In REL English versions, statements that declare names of data items and their interrelationships normally are lines from a terminal or cards in a batch input stream. These statements provide a convenient means of declaring some…

  10. Inputs for L2 Acquisition.

    ERIC Educational Resources Information Center

    Saleemi, Anjum P.

    1989-01-01

    Major approaches of describing or examining linguistic data from a potential target language (input) are analyzed for adequacy in addressing the concerns of second language learning theory. Suggestions are made for making the best of these varied concepts of input and for reformulation of a unified concept. (MSE)

  11. Input in Second Language Acquisition.

    ERIC Educational Resources Information Center

    Gass, Susan M., Ed.; Madden, Carolyn G., Ed.

    This collection of conference papers includes: "When Does Teacher Talk Work as Input?"; "Cultural Input in Second Language Learning"; "Skilled Variation in a Kindergarten Teacher's Use of Foreigner Talk"; "Teacher-Pupil Interaction in Second Language Development"; "Foreigner Talk in the University…

  12. PREVIMER : Meteorological inputs and outputs

    NASA Astrophysics Data System (ADS)

    Ravenel, H.; Lecornu, F.; Kerléguer, L.

    2009-09-01

    PREVIMER is a pre-operational system aiming to provide a wide range of users, from private individuals to professionals, with short-term forecasts about the coastal environment along the French coastlines bordering the English Channel, the Atlantic Ocean, and the Mediterranean Sea. Observation data and digital modelling tools first provide 48-hour (probably 96-hour by summer 2009) forecasts of sea states, currents, sea water levels and temperatures. The follow-up of an increasing number of biological parameters will, in time, complete this overview of the coastal environment. Working in partnership with the French Naval Hydrographic and Oceanographic Service (Service Hydrographique et Océanographique de la Marine, SHOM), the French National Weather Service (Météo-France), the French public science and technology research institute (Institut de Recherche pour le Développement, IRD), the European Institute of Marine Studies (Institut Universitaire Européen de la Mer, IUEM) and many others, IFREMER (the French public institute of marine research) is supplying the technologies needed to ensure this pertinent information, available daily on the Internet at http://www.previmer.org, and stored at the Operational Coastal Oceanographic Data Centre. Since 2006, PREVIMER has published the results of demonstrators assigned to limited geographic areas and to specific applications. This system remains experimental. The following topics are covered: hydrodynamic circulation, sea states, follow-up of passive tracers, conservative or non-conservative (specifically of microbiological origin), biogeochemical state, primary production. Lastly, PREVIMER provides researchers and R&D departments with modelling tools and access to the database, in which the observation data and the modelling results are stored, to undertake environmental studies on new sites. The communication will focus on meteorological inputs to and outputs from PREVIMER. It will draw the lessons from almost 3 years during

  13. Real-Time Parameter Estimation in the Frequency Domain

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
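The recursive Fourier transform mentioned above can be sketched as a running update: each new sample adds one term to the transform at each analysis frequency, so no batch FFT over the whole record is needed as data arrive. A minimal Python illustration (not the F-18 implementation):

```python
import cmath

class RecursiveDFT:
    """Update the Fourier transform of a growing signal one sample at a time.

    At each analysis frequency f, X(f) accumulates x_i * exp(-j*2*pi*f*t_i),
    which is the discrete Fourier sum evaluated recursively in real time.
    """
    def __init__(self, freqs):
        self.freqs = freqs
        self.X = [0j for _ in freqs]

    def update(self, x, t):
        for k, f in enumerate(self.freqs):
            self.X[k] += x * cmath.exp(-2j * cmath.pi * f * t)
        return self.X
```

With the transform maintained this way at a handful of frequencies spanning the aircraft dynamics, the equation-error regression can be re-solved at every sample with small, fixed cost, which is what makes the method suitable for real-time use.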

  14. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
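Richardson's extrapolation itself is easy to sketch: evaluate a finite-difference approximation at successively halved step sizes and combine the results to cancel successive error terms, which also yields a built-in error estimate. An illustrative Python example on a simple derivative (not the Schrödinger solver itself):

```python
def central_diff(f, x, h):
    """O(h^2) central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h, levels=4):
    """Richardson extrapolation of the central difference.

    Builds the usual triangular tableau with step halving; each column
    cancels the next even power of h, and the difference between the last
    two entries serves as an error estimate.
    """
    T = [[central_diff(f, x, h / 2**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            T[i].append(T[i][j-1] + (T[i][j-1] - T[i-1][j-1]) / (4**j - 1))
    return T[-1][-1], abs(T[-1][-1] - T[-1][-2])
```

Even with a crude starting step, a few extrapolation levels recover many digits, which is the effect the abstract exploits for expectation values.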

  15. Input management of production systems.

    PubMed

    Odum, E P

    1989-01-13

    Nonpoint sources of pollution, which are largely responsible for stressing regional and global life-supporting atmosphere, soil, and water, can only be reduced (and ultimately controlled) by input management that involves increasing the efficiency of production systems and reducing the inputs of environmentally damaging materials. Input management requires a major change, an about-face, in the approach to management of agriculture, power plants, and industries because the focus is on waste reduction and recycling rather than on waste disposal. For large-scale ecosystem-level situations a top-down hierarchical approach is suggested and illustrated by recent research in agroecology and landscape ecology.

  16. System monitors discrete computer inputs

    NASA Technical Reports Server (NTRS)

    Burns, J. J.

    1966-01-01

    Computer system monitors inputs from checkout devices. The comparing, addressing, and controlling functions are performed in the I/O unit. This leaves the computer main frame free to handle memory, access priority, and interrupt instructions.

  17. Development of a decision tree to classify the most accurate tissue-specific tissue to plasma partition coefficient algorithm for a given compound.

    PubMed

    Yun, Yejin Esther; Cotton, Cecilia A; Edginton, Andrea N

    2014-02-01

    Physiologically based pharmacokinetic (PBPK) modeling is a tool used in drug discovery and human health risk assessment. PBPK models are mathematical representations of the anatomy, physiology and biochemistry of an organism and are used to predict a drug's pharmacokinetics in various situations. Tissue to plasma partition coefficients (Kp), key PBPK model parameters, define the steady-state concentration differential between tissue and plasma and are used to predict the volume of distribution. The experimental determination of these parameters once limited the development of PBPK models; however, in silico prediction methods were introduced to overcome this issue. The developed algorithms vary in input parameters and prediction accuracy, and none are considered standard, warranting further research. In this study, a novel decision-tree-based Kp prediction method was developed using six previously published algorithms. The aim of the developed classifier was to identify the most accurate tissue-specific Kp prediction algorithm for a new drug. A dataset consisting of 122 drugs was used to train the classifier and identify the most accurate Kp prediction algorithm for a certain physicochemical space. Three versions of tissue-specific classifiers were developed and were dependent on the necessary inputs. The use of the classifier resulted in a better prediction accuracy than that of any single Kp prediction algorithm for all tissues, the current mode of use in PBPK model building. Because built-in estimation equations for those input parameters are not necessarily available, this Kp prediction tool will provide Kp prediction when only limited input parameters are available. The presented innovative method will improve tissue distribution prediction accuracy, thus enhancing the confidence in PBPK modeling outputs.

  18. Use of WRF result as meteorological input to DNDC model for greenhouse gas flux simulation

    NASA Astrophysics Data System (ADS)

    Grosz, B.; Horváth, L.; Gyöngyösi, A. Z.; Weidinger, T.; Pintér, K.; Nagy, Z.; André, K.

    2015-12-01

    Continuous evolution of biogeochemical models developed in the past decades makes possible the more and more accurate estimation of trace and greenhouse gas fluxes of soils. Due to the detailed meteorological, soil, biological and chemical processes the modeled fluxes are getting closer and closer to the real values. For appropriate evaluation models need large amount of input data. In this paper we have investigated how to build an easily accessible meteorological input data source for biogeochemical models, as it is one of the most important input data sets that is either missing or difficult to get from meteorological networks. The DNDC ecological model was used for testing the WRF numerical weather prediction system as a potential data source. The reference dataset was built by numerical interpolation based on measured data. The average differences between the modeled output data using WRF and observed meteorological data in 2009 and 2010 are less than 3.98 ± 1.6; 8.68 ± 6.72 and 6.5 ± 2.17 per cent for CO2, N2O and CH4, respectively, for the test years. Generalization of the results for other regions is restricted, however this work encourages others to examine the applicability of WRF data instead of observed climate parameters.

  19. Analyzing the sensitivity of a flood risk assessment model towards its input data

    NASA Astrophysics Data System (ADS)

    Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene

    2016-11-01

    The Small Island Developing States are characterized by an unstable economy and low-lying, densely populated cities, resulting in a high vulnerability to natural hazards. Flooding affects more people than any other hazard. To limit the consequences of these hazards, adequate risk assessments are indispensable. Satisfactory input data for these assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model towards its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case study of Annotto Bay, Jamaica. This model generates a damage map for the region based on the flood extent map of the 2001 inundations caused by Tropical Storm Michelle. Three damage types were taken into account: building, road and crop damage. Twelve scenarios were generated, each with a different combination of input data, testing one of the three damage calculations for its sensitivity. One main conclusion was that population density, in combination with an average number of people per household, is a good parameter in determining the building damage when exact building locations are unknown. Furthermore, the importance of roads for an accurate visual result was demonstrated.

  20. Online parameter estimation for surgical needle steering model.

    PubMed

    Yan, Kai Guo; Podder, Tarun; Xiao, Di; Yu, Yan; Liu, Tien-I; Ling, Keck Voon; Ng, Wan Sing

    2006-01-01

    Estimation of the system parameters, given noisy input/output data, is a major field in control and signal processing. Many different estimation methods have been proposed in recent years. Among various methods, Extended Kalman Filtering (EKF) is very useful for estimating the parameters of a nonlinear and time-varying system. Moreover, it can remove the effects of noises to achieve significantly improved results. Our task here is to estimate the coefficients in a spring-beam-damper needle steering model. This kind of spring-damper model has been adopted by many researchers in studying tissue deformation. One difficulty in using such a model is estimating the spring and damper coefficients. Here, we propose an online parameter estimator using EKF to solve this problem. The detailed design is presented in this paper. Computer simulations and physical experiments have revealed that the estimator can estimate the parameters accurately with fast convergence and improve the model efficacy.
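A minimal sketch of EKF-based parameter estimation in the same spirit, though far simpler than the spring-beam-damper model: augment the state with the unknown coefficient and let the filter estimate both jointly. The scalar model below is illustrative only:

```python
def ekf_estimate_coefficient(zs, a0=0.5, q=1e-8, r=1e-4):
    """Estimate the unknown coefficient a of x[t+1] = a * x[t] from noisy
    measurements zs of x, via an EKF on the augmented state [x, a]."""
    x, a = zs[0], a0
    p11, p12, p21, p22 = 1.0, 0.0, 0.0, 1.0   # covariance of [x, a]
    for z in zs[1:]:
        # predict: Jacobian of (x, a) -> (a*x, a) is F = [[a, x], [0, 1]]
        xp, ap = a * x, a
        n11 = a * (a * p11 + x * p21) + x * (a * p12 + x * p22) + q
        n12 = a * p12 + x * p22
        n21 = a * p21 + x * p22
        n22 = p22 + q
        # update with the position measurement z (H = [1, 0])
        s = n11 + r
        k0, k1 = n11 / s, n21 / s
        y = z - xp
        x, a = xp + k0 * y, ap + k1 * y
        p11, p12 = (1 - k0) * n11, (1 - k0) * n12
        p21, p22 = n21 - k1 * n11, n22 - k1 * n12
    return a
```

The same augmentation trick carries over to the spring and damper coefficients: they enter the state vector as constant (or slowly varying) components, and the filter's gain drives them toward values consistent with the measurements.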

  1. Recursive identification and tracking of parameters for linear and nonlinear multivariable systems

    NASA Technical Reports Server (NTRS)

    Sidar, M.

    1975-01-01

    The problem of identifying constant and variable parameters in multi-input, multi-output, linear and nonlinear systems is considered, using the maximum likelihood approach. An iterative algorithm, leading to recursive identification and tracking of the unknown parameters and the noise covariance matrix, is developed. Agile tracking, and accurate and unbiased identified parameters are obtained. Necessary conditions for a globally, asymptotically stable identification process are provided; the conditions proved to be useful and efficient. Among different cases studied, the stability derivatives of an aircraft were identified and some of the results are shown as examples.

  2. Carmencita, The CARMENES Input Catalogue of Bright, Nearby M Dwarfs

    NASA Astrophysics Data System (ADS)

    Caballero, J. A.; Cortés-Contreras, M.; Alonso-Floriano, F. J.; Montes, D.; Quirrenbach, A.; Amado, P. J.; Ribas, I.; Reiners, A.; Abellan, F. J.; Béjar, V. J. S.; Brinkmöller, M.; Czesla, S.; Dorda, R.; Gallardo, I.; González-Álvarez, E.; Hidalgo, D.; Holgado, G.; Jeffers, S. V.; Kim, M.; Klutsch, A.; Lamert, A.; Llamas, M.; López-Santiago, J.; Martínez-Rodríguez, H.; Morales, J. C.; Mundt, R.; Passegger, V. M.; Schöfer, P.; Seifert, W.; Zechmeister, M.

    2016-08-01

    CARMENES, the brand-new, Spanish-German, two-channel, ultra-stabilised, high-resolution spectrograph at the 3.5 m Calar Alto telescope, started its science survey on 01 Jan 2016. In one shot, it covers from 0.52 to 1.71 μm with resolution R = 94,600 (λ < 0.96 μm) and 80,400 (λ > 0.96 μm). During guaranteed time observations, CARMENES carries out the programme for which the instrument was designed: radial-velocity monitoring of bright, nearby, low-mass dwarfs with spectral types between M0.0 V and M9.5 V. Carmencita is the "CARMEN(ES) Cool dwarf Information and daTa Archive", our input catalogue, from which we select the roughly 300 targets being observed during guaranteed time. Besides that, Carmencita is perhaps the most comprehensive database of bright, nearby M dwarfs ever built, as well as a useful tool for forthcoming exoplanet hunters: ESPRESSO, HPF, IRD, SPIRou, TESS or even PLATO. Carmencita contains dozens of parameters measured by us or compiled from the literature for about 2,200 M dwarfs in the solar neighbourhood brighter than J = 11.5 mag: accurate coordinates, spectral types, photometry from ultraviolet to mid-infrared, parallaxes and spectro-photometric distances, rotational and radial velocities, Hα pseudo-equivalent widths, X-ray count rates and hardness ratios, close and wide multiplicity data, proper motions, Galactocentric space velocities, metallicities, full references, homogeneously derived astrophysical parameters, and much more. In my talk at Cool Stars 19, I explained how we build Carmencita standing on the shoulders of giants and observing with 2-m class telescopes, and produce a dozen MSc theses and several PhD theses in the process (http://carmenes.caha.es).

  3. Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Dayton, James A., Jr.

    1997-01-01

    Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.

  4. Mass exchange processes with input

    NASA Astrophysics Data System (ADS)

    Krapivsky, P. L.

    2015-05-01

    We investigate a system of interacting clusters evolving through mass exchange and supplemented by input of small clusters. Three possibilities depending on the rate of exchange generically occur when input is homogeneous: continuous growth, gelation, and instantaneous gelation. We mostly study the growth regime using scaling methods. An exchange process with reaction rates equal to the product of reactant masses admits an exact solution which allows us to justify the validity of scaling approaches in this special case. We also investigate exchange processes with a localized input. We show that if the diffusion coefficients are mass-independent, the cluster mass distribution becomes stationary and develops an algebraic tail far away from the source.

  5. Accurate equilibrium structures for piperidine and cyclohexane.

    PubMed

    Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter

    2015-03-05

    Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.

  6. Strategy Guideline. Accurate Heating and Cooling Load Calculations

    SciTech Connect

    Burdick, Arlan

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  7. Strategy Guideline: Accurate Heating and Cooling Load Calculations

    SciTech Connect

    Burdick, A.

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  8. Analog Input Data Acquisition Software

    NASA Technical Reports Server (NTRS)

    Arens, Ellen

    2009-01-01

    DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.

  9. Optimal Inputs for System Identification.

    DTIC Science & Technology

    1995-09-01

    The derivation of the power spectral density of the optimal input for system identification is addressed in this research. Optimality is defined in...identification potential of general System Identification algorithms, a new and efficient System Identification algorithm that employs Iterated Weighted Least

  10. World Input-Output Network

    PubMed Central

    Cerina, Federica; Zhu, Zhen; Chessa, Alessandro; Riccaboni, Massimo

    2015-01-01

    Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries. PMID:26222389
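The PageRank centrality used in the WION analysis can be sketched with a few lines of power iteration over a weighted directed graph; the three-industry example in the test is invented, not WIOD data:

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on a weighted directed adjacency dict.

    adj[i][j] is the (monetary) flow from node i to node j;
    the returned ranks sum to 1.
    """
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}
        for i in nodes:
            total = sum(adj[i].values())
            if total == 0:                    # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[i] / n
            else:
                for j, w in adj[i].items():
                    new[j] += d * rank[i] * w / total
        rank = new
    return rank
```

On an input-output table, `adj[i][j]` would be the goods flow from industry i to industry j, and a high rank flags industries into which much of the network's flow ultimately concentrates, the "key industries" the authors identify.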

  11. The advanced LIGO input optics

    NASA Astrophysics Data System (ADS)

    Mueller, Chris L.; Arain, Muzammil A.; Ciani, Giacomo; DeRosa, Ryan. T.; Effler, Anamaria; Feldbaum, David; Frolov, Valery V.; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Kawabe, Keita; King, Eleanor J.; Kokeyama, Keiko; Korth, William Z.; Martin, Rodica M.; Mullavey, Adam; Peold, Jan; Quetschke, Volker; Reitze, David H.; Tanner, David B.; Vorvick, Cheryl; Williams, Luke F.; Mueller, Guido

    2016-01-01

    The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design.

  12. Lab Inputs for Common Micros.

    ERIC Educational Resources Information Center

    Tinker, Robert

    1984-01-01

    The game paddle inputs of Apple microcomputers provide a simple way to get laboratory measurements into the computer. Discusses these game paddles and the necessary interface software. Includes schematics for Apple built-in paddle electronics, TRS-80 game paddle I/O, Commodore circuit for user port, and bus interface for Sinclair/Timex, Commodore,…

  13. The advanced LIGO input optics

    SciTech Connect

    Mueller, Chris L. Arain, Muzammil A.; Ciani, Giacomo; Feldbaum, David; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Martin, Rodica M.; Reitze, David H.; Tanner, David B.; Williams, Luke F.; Mueller, Guido; DeRosa, Ryan T.; Effler, Anamaria; Kokeyama, Keiko; Frolov, Valery V.; Mullavey, Adam; Kawabe, Keita; Vorvick, Cheryl; King, Eleanor J.; and others

    2016-01-15

    The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design.

  14. A quick accurate model of nozzle backflow

    NASA Technical Reports Server (NTRS)

    Kuharski, R. A.

    1991-01-01

    Backflow from nozzles is a major source of contamination on spacecraft. If the craft contains any exposed high voltages, the neutral density produced by the nozzles in the vicinity of the craft needs to be known in order to assess the possibility of Paschen breakdown or the probability of sheath ionization around a region of the craft that collects electrons from the plasma. A model for backflow has been developed for incorporation into the Environment-Power System Analysis Tool (EPSAT) which quickly estimates both the magnitude of the backflow and the species makeup of the flow. By combining the backflow model with the Simons (1972) model for continuum flow it is possible to quickly estimate the density of each species from a nozzle at any position in space. The model requires only a few physical parameters of the nozzle and the gas as inputs and is therefore ideal for engineering applications.

  15. Incorporating uncertainty in RADTRAN 6.0 input files.

    SciTech Connect

    Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John

    2010-02-01

    Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the shape, minimum, and maximum of each parameter's distribution, to sample from the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporating uncertainty into RADTRAN. Gauntt and Erickson (2004) provide installation instructions as well as a description and user guide for the uncertainty engine.
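
A minimal sketch of this kind of workflow, assuming a triangular distribution shape and a hypothetical `WINDSPEED` parameter (neither is specified by the note), might sample the distribution and render one batch-input line per draw:

```python
import numpy as np

def sample_parameter(rng, minimum, mode, maximum, n):
    """Draw n samples from a triangular distribution defined by its
    minimum, mode, and maximum (one common choice of distribution shape)."""
    return rng.triangular(minimum, mode, maximum, size=n)

def write_batch(samples, template="WINDSPEED {:.3f}"):
    """Render one batch-input line per sampled value.
    The keyword WINDSPEED is purely illustrative."""
    return [template.format(s) for s in samples]

rng = np.random.default_rng(42)
samples = sample_parameter(rng, 1.0, 3.0, 8.0, 100)
lines = write_batch(samples)
```

Each rendered line would then be substituted into a copy of the base input file to build the batch of runs.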

  16. Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Chen, J.; Kunkel, V.; Skov, T. M.

    2015-12-01

    Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24--48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta---its structure and magnetic field vector in three dimensions---using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph

  17. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Schifer, Nicholas

    2011-01-01

    Test hardware was used to validate net heat prediction models. The problem: net heat input cannot be measured directly during operation, yet it is a key parameter needed to predict convertor efficiency. Efficiency = electrical power output (measured) divided by net heat input (calculated). Efficiency is used to compare convertor designs and trade technology advantages for mission planning.
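
The efficiency relation above is a simple ratio; a sketch with made-up numbers (not from the report):

```python
def convertor_efficiency(electrical_power_w, net_heat_input_w):
    """Efficiency = measured electrical output / calculated net heat input."""
    if net_heat_input_w <= 0:
        raise ValueError("net heat input must be positive")
    return electrical_power_w / net_heat_input_w

# Illustrative values only, not measurements from the report:
eta = convertor_efficiency(88.0, 251.4)
```

Because the denominator is calculated rather than measured, any error in the net-heat-input model propagates directly into the efficiency figure used for design comparisons.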

  18. Exploring storm time ring current formation and response on the energy input

    NASA Astrophysics Data System (ADS)

    Ilie, Raluca

    While extensive research has been conducted over the last decades regarding storm-time dynamics, there are still unanswered questions regarding ring current formation and plasmasphere evolution, specifically about the ring current response to the energy input. Large-scale data analysis and global magnetospheric simulations provide complementary alternatives for exploring the highly complex coupling of the solar wind-ionosphere-magnetosphere system. Superposed epoch analysis of intense-storm data suggests that a distinct time stamp is needed in order to resolve certain solar wind features. However, when it comes to hot protons at geosynchronous orbit, the choice of reference time primarily matters for accurately describing the size of peaks, while their presence and time evolution are unaltered by it. Examination of the role that transient spikes in the solar wind parameters play in the development of magnetic storms reveals that changes in the energy input produce a nonlinear response of the inner magnetosphere. While initial increases in the energy input enhance the magnetospheric response, as the power transferred to the system is increased, the growth of the ring current is stalled and a saturation limit sets in. A threshold in the energy input is necessary for the ring current to develop, while short-time-scale fluctuations in the solar wind parameters do not have a significant contribution. This implies the existence of an internal feedback mechanism whereby the magnetosphere acts as a low-pass filter of the IMF, limiting the energy flow into the magnetosphere. Further, the main characteristic determining whether IMF Bz fluctuation periodicity transfers solar wind mass and energy to the inner magnetosphere is the peak signal-to-noise ratio in the power spectrum of the input parameter, suggesting that a ratio of 10 is needed in order to trigger a similar periodicity in the magnetosphere response. Theoretical and numerical modifications to an inner magnetosphere model

  19. Systems and methods for reconfiguring input devices

    NASA Technical Reports Server (NTRS)

    Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)

    2012-01-01

    A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members is associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory is coupled to the processor and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution by the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.
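
As an illustrative sketch (class, member, and function names are invented, not taken from the patent), the reconfiguration module described above might behave like this:

```python
class InputReconfigurator:
    """Sketch of the described system: each input member is mapped to an
    input function; when a member becomes inoperable, its function is
    reassigned to a surviving member."""

    def __init__(self, functions):
        # functions: dict mapping member id -> assigned input function name
        self.functions = dict(functions)

    def mark_inoperable(self, member):
        """Remove the inoperable member and fold its function into a
        surviving member's assignment."""
        orphaned = self.functions.pop(member)
        survivor = next(iter(self.functions))
        self.functions[survivor] = (self.functions[survivor], orphaned)
        return self.functions

# Hypothetical two-member device with "zoom" and "pan" functions:
dev = InputReconfigurator({"member1": "zoom", "member2": "pan"})
dev.mark_inoperable("member1")
```

After `mark_inoperable`, the surviving member carries both functions, mirroring the patent's reassignment-on-failure idea.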

  20. Hunting for hydrogen: random structure searching and prediction of NMR parameters of hydrous wadsleyite

    PubMed Central

    Moran, Robert F.; McKay, David; Pickard, Chris J.; Berry, Andrew J.; Griffin, John M.

    2016-01-01

    The structural chemistry of materials containing low levels of nonstoichiometric hydrogen is difficult to determine, and producing structural models is challenging where hydrogen has no fixed crystallographic site. Here we demonstrate a computational approach employing ab initio random structure searching (AIRSS) to generate a series of candidate structures for hydrous wadsleyite (β-Mg2SiO4 with 1.6 wt% H2O), a high-pressure mineral proposed as a repository for water in the Earth's transition zone. Aligning with previous experimental work, we solely consider models with Mg3 (over Mg1, Mg2 or Si) vacancies. We adapt the AIRSS method by starting with anhydrous wadsleyite, removing a single Mg2+ and randomly placing two H+ in a unit cell model, generating 819 candidate structures. 103 geometries were then subjected to more accurate optimisation under periodic DFT. Using this approach, we find the most favourable hydration mechanism involves protonation of two O1 sites around the Mg3 vacancy. The formation of silanol groups on O3 or O4 sites (with loss of stable O1–H hydroxyls) coincides with an increase in total enthalpy. Importantly, the approach we employ allows observables such as NMR parameters to be computed for each structure. We consider hydrous wadsleyite (∼1.6 wt%) to be dominated by protonated O1 sites, with O3/O4–H silanol groups present as defects, a model that maps well onto experimental studies at higher levels of hydration (J. M. Griffin et al., Chem. Sci., 2013, 4, 1523). The AIRSS approach adopted herein provides the crucial link between atomic-scale structure and experimental studies. PMID:27020937

  1. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    SciTech Connect

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth, or EMPD, model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate), and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
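
The fitting step can be sketched as follows, assuming a simple two-parameter first-order step response in place of the EMPD model's actual analytical solution (which the authors fit with three parameters); the data here are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def absorption_step(t, m_inf, tau):
    """Hypothetical first-order response of absorbed moisture to a
    relative-humidity step; a stand-in for the EMPD analytical solution."""
    return m_inf * (1.0 - np.exp(-t / tau))

# Synthetic "measured" absorption curve with measurement noise
t = np.linspace(0, 48, 200)               # hours after the RH step
true_curve = absorption_step(t, 0.8, 6.0) # made-up capacity and time constant
rng = np.random.default_rng(0)
measured = true_curve + rng.normal(0.0, 0.01, t.size)

# Least-squares fit recovers the buffering parameters from the curve
popt, _ = curve_fit(absorption_step, t, measured, p0=[1.0, 1.0])
```

In the study, the unmeasured absorption term is first obtained by closing the moisture balance; the fit above is then applied to that derived curve.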

  2. National hospital input price index.

    PubMed

    Freeland, M S; Anderson, G; Schendler, C E

    1979-01-01

    The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 percent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies.
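
A fixed-market-basket index of this kind is a weighted average of category price relatives; a sketch with hypothetical weights and price changes (not the article's data):

```python
def fixed_basket_index(weights, price_relatives):
    """Fixed-market-basket (Laspeyres-type) input price index:
    a weighted average of category price relatives, base period = 100."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100.0 * sum(w * price_relatives[cat] for cat, w in weights.items())

# Hypothetical expenditure weights and one-year price relatives:
weights = {"wages": 0.55, "supplies": 0.30, "energy": 0.15}
relatives = {"wages": 1.07, "supplies": 1.09, "energy": 1.12}
index = fixed_basket_index(weights, relatives)   # index for the current year
```

Holding the basket weights fixed is what isolates pure price change from changes in the mix of inputs, which is the article's stated goal.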

  3. Effects of input frequency content and signal-to-noise ratio on the parametric estimation of surface EMG-torque dynamics.

    PubMed

    Golkar, Mahsa A; Kearney, Robert E

    2016-08-01

    The dynamic relationship between surface EMG (sEMG) and torque can be estimated from data acquired while subjects voluntarily modulate joint torque. We have shown that for such data, the input (EMG) contains a feedback component from the output (torque), and so accurate estimates of the dynamics require the use of closed-loop identification algorithms. Moreover, this approach has several other limitations, since the input is controlled only indirectly and so its frequency content and signal-to-noise ratio cannot be controlled. This paper investigates how these factors influence the accuracy of the estimates. This was studied using experimental sEMG recorded from healthy human subjects during tasks with different modulation rates. The Box-Jenkins (BJ) method was used for identification. Results showed that the input frequency content had little effect on estimates of gain and natural frequency but a strong effect on damping factor estimates. It was demonstrated that to accurately estimate the damping factor, the command signal switching rate must be less than 2 s. It was also shown that random errors increased with noise level but were limited to 10% of the parameters' true values for the highest noise level tested. To summarize, the simulation study in this work showed that the voluntary modulation paradigm can accurately identify sEMG-torque dynamics.
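
As a simplified illustration of this kind of input-output identification (a plain ARX least-squares fit rather than the paper's full Box-Jenkins structure, and with synthetic rather than sEMG data):

```python
import numpy as np

def fit_arx(u, y, na=1, nb=1):
    """Least-squares ARX fit: y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] + e[t].
    A simplification of the Box-Jenkins model structure used in the paper."""
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta  # [a_1..a_na, b_1..b_nb]

# Recover known first-order dynamics: y[t] = 0.5*y[t-1] + 1.0*u[t-1]
rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 1.0 * u[t - 1]
theta = fit_arx(u, y, na=1, nb=1)
```

With a rich (white-noise) input the regressor matrix is well conditioned and the parameters are recovered exactly; the paper's point is that voluntarily modulated EMG offers no such control over input richness or noise.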

  4. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  5. Intraoperative measurement of mounting parameters for the Taylor Spatial Frame.

    PubMed

    Gantsoudes, George D; Fragomen, Austin T; Rozbruch, S Robert

    2010-04-01

    The Taylor Spatial Frame (Smith & Nephew, Memphis, TN) is a powerful tool in providing gradual correction of deformity. The Taylor Spatial Frame has the potential to allow for very accurate corrections achieved over one or more schedules through the use of the software on www.spatialframe.com. The accuracy of the frame is contingent upon the input of precise parameters. The correction occurs about a virtual hinge in space called the origin. The location of the origin is defined by its spatial relationship to the reference ring. Mounting parameters are the measurements that define the location of the origin (virtual hinge). We present a simple practical method for obtaining mounting parameters during surgery using standard equipment.

  6. Sparse and accurate high resolution SAR imaging

    NASA Astrophysics Data System (ADS)

    Vu, Duc; Zhao, Kexin; Rowe, William; Li, Jian

    2012-05-01

    We investigate the usage of an adaptive method, the Iterative Adaptive Approach (IAA), in combination with a maximum a posteriori (MAP) estimate to reconstruct high resolution SAR images that are both sparse and accurate. IAA is a nonparametric weighted least squares algorithm that is robust and user parameter-free. IAA has been shown to reconstruct SAR images with excellent sidelobe suppression and high resolution enhancement. We first reconstruct the SAR images using IAA, and then we enforce sparsity by using MAP with a sparsity-inducing prior. By coupling these two methods, we can produce sparse and accurate high resolution images that are conducive to feature extraction and target classification applications. In addition, we show how IAA can be made computationally efficient without sacrificing accuracy, a desirable property for SAR applications where the problem sizes are quite large. We demonstrate the success of our approach using the Air Force Research Lab's "Gotcha Volumetric SAR Data Set Version 1.0" challenge dataset. Via the widely used FFT, individual vehicles contained in the scene are barely recognizable due to the poor resolution and high-sidelobe nature of the FFT. However, with our approach clear edges, boundaries, and textures of the vehicles are obtained.
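
To make the IAA iteration concrete, here is a sketch for the 1-D line-spectrum case (the SAR version applies the same weighted-least-squares update in 2-D); the signal, frequency grid, and diagonal loading term are illustrative choices, not from the paper:

```python
import numpy as np

def iaa_1d(y, freqs, n_iter=10):
    """Sketch of the Iterative Adaptive Approach: iteratively refine
    amplitude estimates s_k via weighted least squares against a model
    covariance built from the current power estimates |s_k|^2."""
    N = len(y)
    A = np.exp(2j * np.pi * np.outer(np.arange(N), freqs))  # steering matrix
    s = A.conj().T @ y / N                                   # matched-filter init
    for _ in range(n_iter):
        R = (A * np.abs(s) ** 2) @ A.conj().T                # model covariance
        R += 1e-9 * np.trace(R).real / N * np.eye(N)         # diagonal loading
        Ri_y = np.linalg.solve(R, y)
        Ri_A = np.linalg.solve(R, A)
        # s_k = a_k^H R^{-1} y / (a_k^H R^{-1} a_k) for every grid point k
        s = (A.conj().T @ Ri_y) / np.einsum("ij,ij->j", A.conj(), Ri_A)
    return s

# Two complex sinusoids at 0.1 and 0.3 cycles/sample
N = 64
t = np.arange(N)
y = np.exp(2j * np.pi * 0.1 * t) + 0.5 * np.exp(2j * np.pi * 0.3 * t)
freqs = np.linspace(0.0, 0.5, 101)
s = iaa_1d(y, freqs)
```

The adaptive weighting `R^{-1}` is what suppresses sidelobes relative to the plain FFT/matched filter; the paper's MAP step would then sparsify `s` with a sparsity-inducing prior.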

  7. On numerically accurate finite element

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double edge cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  8. Calculating the mounting parameters for Taylor Spatial Frame correction using computed tomography.

    PubMed

    Kucukkaya, Metin; Karakoyun, Ozgur; Armagan, Raffi; Kuzgun, Unal

    2011-07-01

    The Taylor Spatial Frame uses a computer program-based six-axis deformity analysis. However, there is often a residual deformity after the initial correction, especially in deformities with a rotational component. This problem can be resolved by recalculating the parameters and inputting all new deformity and mounting parameters. However, this may necessitate repeated x-rays and delay treatment. We believe that error in the mounting parameters is the main reason for most residual deformities. To prevent these problems, we describe a new calculation technique for determining the mounting parameters that uses computed tomography. This technique is especially advantageous for deformities with a rotational component. Using this technique, exact calculation of the mounting parameters is possible and the residual deformity and number of repeated x-rays can be minimized. This new technique is an alternative method for accurately calculating the mounting parameters.

  9. Input statistics and Hebbian cross-talk effects.

    PubMed

    Rădulescu, Anca

    2014-04-01

    As an extension of prior work, we studied inspecific Hebbian learning using the classical Oja model. We used a combination of analytical tools and numerical simulations to investigate how the effects of synaptic cross talk (which we also refer to as synaptic inspecificity) depend on the input statistics. We investigated a variety of patterns that appear in dimensions higher than two (and classified them based on covariance type and input bias). We found that the effects of cross talk on learning dynamics and outcome are highly dependent on the input statistics and that cross talk may lead in some cases to catastrophic effects on learning or development. Arbitrarily small levels of cross talk are able to trigger bifurcations in learning dynamics, or bring the system into close enough proximity to a critical state to make the effects indistinguishable from a real bifurcation. We also investigated how cross talk behaves toward unbiased ("competitive") inputs and in which circumstances it can help the system productively resolve the competition. Finally, we discuss the idea that sophisticated neocortical learning requires accurate synaptic updates (similar to polynucleotide copying, which requires highly accurate replication). Since it is unlikely that the brain can completely eliminate cross talk, we support the proposal that it uses a neural mechanism that "proofreads" the accuracy of the updates, much as DNA proofreading lowers the copying error rate.
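
A minimal sketch of the setup, assuming the common formulation in which cross talk is modeled by a row-stochastic error matrix E that leaks a fraction eps of each synapse's update onto the others (the learning rates, input statistics, and matrix form here are illustrative, not the paper's):

```python
import numpy as np

def oja_crosstalk(X, eps=0.05, eta=0.01, epochs=50, seed=0):
    """Oja's rule with inspecific (cross-talk) updates: the specific update
    eta * y * (x - y*w) is premultiplied by an error matrix E that mixes
    updates across synapses. eps=0 recovers the classical Oja model."""
    n = X.shape[1]
    E = (1 - eps) * np.eye(n) + eps / (n - 1) * (np.ones((n, n)) - np.eye(n))
    rng = np.random.default_rng(seed)
    w = rng.normal(size=n)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                          # linear neuron output
            w += eta * E @ (y * (x - y * w))   # cross-talk-corrupted Oja update
    return w

# With eps=0, w converges toward the leading eigenvector of the input covariance
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3)) * np.array([3.0, 1.0, 0.5])  # PC1 along axis 0
w = oja_crosstalk(X, eps=0.0)
```

Sweeping `eps` upward in such a simulation is one way to probe the bifurcations in learning outcome that the paper analyzes as a function of input statistics.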

  10. Online Dynamic Parameter Estimation of Synchronous Machines

    NASA Astrophysics Data System (ADS)

    West, Michael R.

    Traditionally, synchronous machine parameters are determined through an offline characterization procedure. The IEEE 115 standard suggests a variety of mechanical and electrical tests to capture the fundamental characteristics and behaviors of a given machine. These characteristics and behaviors can be used to develop and understand machine models that accurately reflect the machine's performance. To perform such tests, the machine must be removed from service. Characterizing a machine offline can result in economic losses due to down time, labor expenses, etc. Such losses may be mitigated by implementing online characterization procedures. Historically, different approaches have been taken to develop methods of calculating a machine's electrical characteristics without removing the machine from service. Using a machine's input and response data combined with a numerical algorithm, a machine's characteristics can be determined. This thesis explores such characterization methods and compares the IEEE 115 standard for offline characterization with an iterative least-squares approximation approach implemented on a 20 h.p. synchronous machine. This least-squares method of online parameter estimation shows encouraging results for steady-state parameters in comparison with those obtained through the IEEE 115 standard.
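
The core idea, estimating parameters by least squares from input/response data, can be sketched on a toy first-order winding model v = R*i + L*di/dt (a hypothetical stand-in for the thesis's machine model):

```python
import numpy as np

def estimate_rl(v, i, dt):
    """Least-squares estimate of resistance R and inductance L from
    sampled voltage/current data satisfying v = R*i + L*di/dt."""
    didt = np.gradient(i, dt)
    Phi = np.column_stack([i, didt])   # regressor matrix, one row per sample
    theta, *_ = np.linalg.lstsq(Phi, v, rcond=None)
    return theta  # [R, L]

# Synthetic data generated from known R = 2.0 ohm, L = 0.1 H
dt = 1e-4
t = np.arange(0.0, 0.1, dt)
i = np.sin(2 * np.pi * 50 * t)
v = 2.0 * i + 0.1 * np.gradient(i, dt)
R, L = estimate_rl(v, i, dt)
```

Online methods apply the same principle with richer machine models and recursive updates, so the machine never has to be taken out of service.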

  11. Dynamic Input Conductances Shape Neuronal Spiking(1,2).

    PubMed

    Drion, Guillaume; Franci, Alessio; Dethier, Julie; Sepulchre, Rodolphe

    2015-01-01

    Assessing the role of biophysical parameter variations in neuronal activity is critical to the understanding of modulation, robustness, and homeostasis of neuronal signalling. The paper proposes that this question can be addressed through the analysis of dynamic input conductances. Those voltage-dependent curves aggregate the concomitant activity of all ion channels in distinct timescales. They are shown to shape the current-voltage dynamical relationships that determine neuronal spiking. We propose an experimental protocol to measure dynamic input conductances in neurons. In addition, we provide a computational method to extract dynamic input conductances from arbitrary conductance-based models and to analyze their sensitivity to arbitrary parameters. We illustrate the relevance of the proposed approach for modulation, compensation, and robustness studies in a published neuron model based on data of the stomatogastric ganglion of the crab Cancer borealis.

  12. The IVS data input to ITRF2014

    NASA Astrophysics Data System (ADS)

    Nothnagel, Axel; Alef, Walter; Amagai, Jun; Andersen, Per Helge; Andreeva, Tatiana; Artz, Thomas; Bachmann, Sabine; Barache, Christophe; Baudry, Alain; Bauernfeind, Erhard; Baver, Karen; Beaudoin, Christopher; Behrend, Dirk; Bellanger, Antoine; Berdnikov, Anton; Bergman, Per; Bernhart, Simone; Bertarini, Alessandra; Bianco, Giuseppe; Bielmaier, Ewald; Boboltz, David; Böhm, Johannes; Böhm, Sigrid; Boer, Armin; Bolotin, Sergei; Bougeard, Mireille; Bourda, Geraldine; Buttaccio, Salvo; Cannizzaro, Letizia; Cappallo, Roger; Carlson, Brent; Carter, Merri Sue; Charlot, Patrick; Chen, Chenyu; Chen, Maozheng; Cho, Jungho; Clark, Thomas; Collioud, Arnaud; Colomer, Francisco; Colucci, Giuseppe; Combrinck, Ludwig; Conway, John; Corey, Brian; Curtis, Ronald; Dassing, Reiner; Davis, Maria; de-Vicente, Pablo; De Witt, Aletha; Diakov, Alexey; Dickey, John; Diegel, Irv; Doi, Koichiro; Drewes, Hermann; Dube, Maurice; Elgered, Gunnar; Engelhardt, Gerald; Evangelista, Mark; Fan, Qingyuan; Fedotov, Leonid; Fey, Alan; Figueroa, Ricardo; Fukuzaki, Yoshihiro; Gambis, Daniel; Garcia-Espada, Susana; Gaume, Ralph; Gaylard, Michael; Geiger, Nicole; Gipson, John; Gomez, Frank; Gomez-Gonzalez, Jesus; Gordon, David; Govind, Ramesh; Gubanov, Vadim; Gulyaev, Sergei; Haas, Ruediger; Hall, David; Halsig, Sebastian; Hammargren, Roger; Hase, Hayo; Heinkelmann, Robert; Helldner, Leif; Herrera, Cristian; Himwich, Ed; Hobiger, Thomas; Holst, Christoph; Hong, Xiaoyu; Honma, Mareki; Huang, Xinyong; Hugentobler, Urs; Ichikawa, Ryuichi; Iddink, Andreas; Ihde, Johannes; Ilijin, Gennadiy; Ipatov, Alexander; Ipatova, Irina; Ishihara, Misao; Ivanov, D. 
V.; Jacobs, Chris; Jike, Takaaki; Johansson, Karl-Ake; Johnson, Heidi; Johnston, Kenneth; Ju, Hyunhee; Karasawa, Masao; Kaufmann, Pierre; Kawabata, Ryoji; Kawaguchi, Noriyuki; Kawai, Eiji; Kaydanovsky, Michael; Kharinov, Mikhail; Kobayashi, Hideyuki; Kokado, Kensuke; Kondo, Tetsuro; Korkin, Edward; Koyama, Yasuhiro; Krasna, Hana; Kronschnabl, Gerhard; Kurdubov, Sergey; Kurihara, Shinobu; Kuroda, Jiro; Kwak, Younghee; La Porta, Laura; Labelle, Ruth; Lamb, Doug; Lambert, Sébastien; Langkaas, Line; Lanotte, Roberto; Lavrov, Alexey; Le Bail, Karine; Leek, Judith; Li, Bing; Li, Huihua; Li, Jinling; Liang, Shiguang; Lindqvist, Michael; Liu, Xiang; Loesler, Michael; Long, Jim; Lonsdale, Colin; Lovell, Jim; Lowe, Stephen; Lucena, Antonio; Luzum, Brian; Ma, Chopo; Ma, Jun; Maccaferri, Giuseppe; Machida, Morito; MacMillan, Dan; Madzak, Matthias; Malkin, Zinovy; Manabe, Seiji; Mantovani, Franco; Mardyshkin, Vyacheslav; Marshalov, Dmitry; Mathiassen, Geir; Matsuzaka, Shigeru; McCarthy, Dennis; Melnikov, Alexey; Michailov, Andrey; Miller, Natalia; Mitchell, Donald; Mora-Diaz, Julian Andres; Mueskens, Arno; Mukai, Yasuko; Nanni, Mauro; Natusch, Tim; Negusini, Monia; Neidhardt, Alexander; Nickola, Marisa; Nicolson, George; Niell, Arthur; Nikitin, Pavel; Nilsson, Tobias; Ning, Tong; Nishikawa, Takashi; Noll, Carey; Nozawa, Kentarou; Ogaja, Clement; Oh, Hongjong; Olofsson, Hans; Opseth, Per Erik; Orfei, Sandro; Pacione, Rosa; Pazamickas, Katherine; Petrachenko, William; Pettersson, Lars; Pino, Pedro; Plank, Lucia; Ploetz, Christian; Poirier, Michael; Poutanen, Markku; Qian, Zhihan; Quick, Jonathan; Rahimov, Ismail; Redmond, Jay; Reid, Brett; Reynolds, John; Richter, Bernd; Rioja, Maria; Romero-Wolf, Andres; Ruszczyk, Chester; Salnikov, Alexander; Sarti, Pierguido; Schatz, Raimund; Scherneck, Hans-Georg; Schiavone, Francesco; Schreiber, Ulrich; Schuh, Harald; Schwarz, Walter; Sciarretta, Cecilia; Searle, Anthony; Sekido, Mamoru; Seitz, Manuela; Shao, Minghui; Shibuya, Kazuo; Shu, 
Fengchun; Sieber, Moritz; Skjaeveland, Asmund; Skurikhina, Elena; Smolentsev, Sergey; Smythe, Dan; Sousa, Don; Sovers, Ojars; Stanford, Laura; Stanghellini, Carlo; Steppe, Alan; Strand, Rich; Sun, Jing; Surkis, Igor; Takashima, Kazuhiro; Takefuji, Kazuhiro; Takiguchi, Hiroshi; Tamura, Yoshiaki; Tanabe, Tadashi; Tanir, Emine; Tao, An; Tateyama, Claudio; Teke, Kamil; Thomas, Cynthia; Thorandt, Volkmar; Thornton, Bruce; Tierno Ros, Claudia; Titov, Oleg; Titus, Mike; Tomasi, Paolo; Tornatore, Vincenza; Trigilio, Corrado; Trofimov, Dmitriy; Tsutsumi, Masanori; Tuccari, Gino; Tzioumis, Tasso; Ujihara, Hideki; Ullrich, Dieter; Uunila, Minttu; Venturi, Tiziana; Vespe, Francesco; Vityazev, Veniamin; Volvach, Alexandr; Vytnov, Alexander; Wang, Guangli; Wang, Jinqing; Wang, Lingling; Wang, Na; Wang, Shiqiang; Wei, Wenren; Weston, Stuart; Whitney, Alan; Wojdziak, Reiner; Yatskiv, Yaroslav; Yang, Wenjun; Ye, Shuhua; Yi, Sangoh; Yusup, Aili; Zapata, Octavio; Zeitlhoefler, Reinhard; Zhang, Hua; Zhang, Ming; Zhang, Xiuzhong; Zhao, Rongbing; Zheng, Weimin; Zhou, Ruixian; Zubko, Nataliya

    2015-01-01

    Very Long Baseline Interferometry (VLBI) is a primary space-geodetic technique for determining precise coordinates on the Earth, for monitoring the variable Earth rotation and orientation with highest precision, and for deriving many other parameters of the Earth system. The International VLBI Service for Geodesy and Astrometry (IVS, http://ivscc.gsfc.nasa.gov/) is a service of the International Association of Geodesy (IAG) and the International Astronomical Union (IAU). The datasets published here are the results of individual VLBI sessions in the form of normal equations in SINEX 2.0 format (http://www.iers.org/IERS/EN/Organization/AnalysisCoordinator/SinexFormat/sinex.html, the SINEX 2.0 description is attached as pdf) provided by IVS as the input for the next release of the International Terrestrial Reference System (ITRF): ITRF2014. This is a new version of the ITRF2008 release (Bockmann et al., 2009). For each session file, the normal equation systems contain elements for the coordinate components of all stations having participated in the respective session as well as for the Earth orientation parameters (x-pole, y-pole, UT1 and its time derivatives plus offsets to the IAU2006 precession-nutation components dX, dY (https://www.iau.org/static/resolutions/IAU2006_Resol1.pdf). The terrestrial part is free of datum. The data sets are the result of a weighted combination of the input of several IVS Analysis Centers. The IVS contribution for ITRF2014 is described in Bachmann et al. (2015), Schuh and Behrend (2012) provide a general overview on the VLBI method, and details on the internal data handling can be found in Behrend (2013).

  13. Intraocular lens power estimation by accurate ray tracing for eyes underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, which may cause poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation[1], or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method by accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and for a post-LASIK patient agreed well with their visual outcomes after cataract surgery.

  14. Accurate ab Initio Spin Densities.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus

    2012-06-12

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].

  15. Rapid Airplane Parametric Input Design (RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1995-01-01

    RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in computational fluid dynamics (CFD) analysis. A small set of design parameters and grid control parameters govern the process, which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of CFD simulations. The general airplane configuration has wing, fuselage, horizontal-tail, and vertical-tail components. The double-delta wing and tail components are generated by solving a fourth-order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE for the lifting components and the fuselage definition equations. User-specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool
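A one-dimensional analogue can illustrate the idea of shape generation by a fourth-order PDE: solving u'''' = 0 with Dirichlet (endpoint position) and Neumann (endpoint slope) boundary conditions yields a smooth blend fully governed by the boundary data. The finite-difference sketch below is illustrative only and is not the RAPID formulation.

```python
import numpy as np

# Hypothetical 1D analogue of PDE-based surface definition: solve
# u'''' = 0 on [0, 1] by finite differences, with Dirichlet (value) and
# Neumann (slope) boundary conditions. The boundary data alone govern
# the shape of the resulting smooth blend.
def blend_curve(u0, s0, u1, s1, n=101):
    h = 1.0 / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(2, n - 2):              # 5-point stencil for u''''
        A[i, i - 2:i + 3] = [1.0, -4.0, 6.0, -4.0, 1.0]
    A[0, 0] = 1.0;   b[0] = u0             # Dirichlet at x = 0
    A[-1, -1] = 1.0; b[-1] = u1            # Dirichlet at x = 1
    A[1, 0:3] = [-3.0, 4.0, -1.0]; b[1] = 2 * h * s0    # Neumann at x = 0
    A[-2, -3:] = [1.0, -4.0, 3.0]; b[-2] = 2 * h * s1   # Neumann at x = 1
    return np.linalg.solve(A, b)

u = blend_curve(0.0, 1.0, 1.0, -1.0)       # positions 0 -> 1, slopes 1, -1
```

For u'''' = 0 the exact solution is the cubic Hermite blend of the boundary data, so the discrete solution recovers it to truncation-error accuracy; varying the boundary values reshapes the whole curve, mirroring how RAPID's design parameters enter through boundary conditions.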

  16. Remote sensing inputs to water demand modeling

    NASA Technical Reports Server (NTRS)

    Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.

    1975-01-01

    In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result it was determined that agricultural cropland inventories utilizing both high altitude photography and LANDSAT imagery can be conducted cost effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop-specific application rates are employed. The effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.

  17. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE PAGES

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.; ...

    2017-01-30

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot-equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-sum-of-squares errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  18. Canopy Research Network seeks input

    NASA Astrophysics Data System (ADS)

    In July 1993, the Canopy Research Network (CRN) was established with a 2-year planning grant from the National Science Foundation to bring together forest canopy researchers, quantitative scientists, and computer specialists to establish methods for collecting, storing, analyzing, interpreting, and displaying three-dimensional data that relate to tree crowns and forest canopies. The CRN is now soliciting input from scientists in other fields who may have developed techniques and software to help obtain answers to questions that concern the complex three-dimensional structure of tree crowns and forest canopies. Over the next 3 years, the CRN plans to compile an array of research questions and issues requiring information on canopy structure, examine useful information models and software tools already in use in allied fields, and develop conceptual models and recommendations for the types and format of information and analyses necessary to answer research questions posed by canopy researchers.

  19. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was applied in: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  20. Modeling the Meteoroid Input Function at Mid-Latitude Using Meteor Observations by the MU Radar

    NASA Technical Reports Server (NTRS)

    Pifko, Steven; Janches, Diego; Close, Sigrid; Sparks, Jonathan; Nakamura, Takuji; Nesvorny, David

    2012-01-01

    The Meteoroid Input Function (MIF) model has been developed with the purpose of understanding the temporal and spatial variability of the meteoroid impact in the atmosphere. This model includes the assessment of potential observational biases, namely through the use of empirical measurements to characterize the minimum detectable radar cross-section (RCS) for the particular High Power Large Aperture (HPLA) radar utilized. This RCS sensitivity threshold allows for the characterization of the radar system's ability to detect particles at a given mass and velocity. The MIF has been shown to accurately predict the meteor detection rate of several HPLA radar systems, including the Arecibo Observatory (AO) and the Poker Flat Incoherent Scatter Radar (PFISR), as well as the seasonal and diurnal variations of the meteor flux at various geographic locations. In this paper, the MIF model is used to predict several properties of the meteors observed by the Middle and Upper atmosphere (MU) radar, including the distributions of meteor areal density, speed, and radiant location. This study offers new insight into the accuracy of the MIF, as it addresses the ability of the model to predict meteor observations at middle geographic latitudes and for a radar operating frequency in the low VHF band. Furthermore, the interferometry capability of the MU radar allows for the assessment of the model's ability to capture information about the fundamental input parameters of meteoroid source and speed. This paper demonstrates that the MIF is applicable to a wide range of HPLA radar instruments and increases the confidence of using the MIF as a global model, and it shows that the model accurately considers the speed and sporadic source distributions for the portion of the meteoroid population observable by MU.

  1. Detection of Floating Inputs in Logic Circuits

    NASA Technical Reports Server (NTRS)

    Cash, B.; Thornton, M. G.

    1984-01-01

    A simple modification of an oscilloscope probe allows easy detection of floating inputs or tristate outputs in digital ICs. The probe is easily modified with a 1/4 W resistor and a switch for detecting floating inputs in CMOS logic circuits.

  2. Repositioning Recitation Input in College English Teaching

    ERIC Educational Resources Information Center

    Xu, Qing

    2009-01-01

    This paper discusses, on the basis of second language acquisition theory, how recitation input helps overcome negative influences on learning, and confirms the important role that recitation input plays in improving college students' oral and written English.

  3. Textual Enhancement of Input: Issues and Possibilities

    ERIC Educational Resources Information Center

    Han, ZhaoHong; Park, Eun Sung; Combs, Charles

    2008-01-01

    The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…

  4. Input Devices for Young Handicapped Children.

    ERIC Educational Resources Information Center

    Morris, Karen

    The versatility of the computer can be expanded considerably for young handicapped children by using input devices other than the typewriter-style keyboard. Input devices appropriate for young children can be classified into four categories: alternative keyboards, contact switches, speech input devices, and cursor control devices. Described are…

  5. Input filter compensation for switching regulators

    NASA Technical Reports Server (NTRS)

    Lee, F. C.

    1984-01-01

    Problems caused by input filter interaction and conventional input filter design techniques are discussed. The concept of feedforward control is modeled with an input filter and a buck regulator. Experimental measurements are carried out and compared with the analytical predictions. The transient response and the use of a feedforward loop to stabilize the regulator system are described. Other possible applications for feedforward control are included.

  6. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 38 (Pensions, Bonuses, and Veterans' Relief), Part 4 (Schedule for Rating Disabilities), The Musculoskeletal System, § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  7. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 38 (Pensions, Bonuses, and Veterans' Relief), Part 4 (Schedule for Rating Disabilities), The Musculoskeletal System, § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  8. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 38 (Pensions, Bonuses, and Veterans' Relief), Part 4 (Schedule for Rating Disabilities), The Musculoskeletal System, § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  9. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 38 (Pensions, Bonuses, and Veterans' Relief), Part 4 (Schedule for Rating Disabilities), The Musculoskeletal System, § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  10. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 38 (Pensions, Bonuses, and Veterans' Relief), Part 4 (Schedule for Rating Disabilities), The Musculoskeletal System, § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  11. Setting the process parameters for the coating process in order to assure tablet appearance based on multivariate analysis of prior data.

    PubMed

    Tanabe, Shuichi; Nakagawa, Hiroshi; Watanabe, Tomoyuki; Minami, Hidemi; Kano, Manabu; Urbanetz, Nora A

    2016-09-10

    Designing efficient, robust process parameters in drug product manufacturing is important to assure a drug's critical quality attributes. In this research, an efficient, novel procedure for setting coating process parameters was developed, which establishes prediction models for suitable input process parameters by applying partial least squares regression (PLSR) to prior manufacturing knowledge. In the proposed procedure, target values or ranges of the output parameters are first determined, including tablet moisture content, spray mist condition, and mechanical stress on tablets. Following the preparation of predictive models relating input process parameters to the corresponding output parameters, optimal input process parameters are determined using these models so that the output parameters fall within the target ranges. In predicting the exhaust air temperature output parameter, which reflects the tablets' moisture content, PLSR was employed based on prior measured data (such as batch records of other products rather than design of experiments), leading to minimal new experiments. The PLSR model proved more accurate at predicting the exhaust air temperature than a conventional semi-empirical thermodynamic model. A commercial-scale verification demonstrated that the proposed process parameter setting procedure enabled assurance of tablet appearance quality without any trial-and-error experiments.
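The prediction step can be sketched with a minimal PLS1 (NIPALS) implementation: regress an output parameter (here a stand-in for exhaust air temperature) on input process parameters drawn from prior batch records. The variable meanings, ranges, and the linear relation below are invented for illustration; the paper's actual models were built on proprietary batch data.

```python
import numpy as np

# Minimal PLS1 (NIPALS) sketch: predict an output process parameter from
# input process parameters using prior batch data. All data are synthetic.
def pls1(X, y, n_components):
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)          # weight vector
        t = Xc @ w                         # score vector
        tt = t @ t
        p = Xc.T @ t / tt                  # X loading
        q = (yc @ t) / tt                  # y loading
        Xc = Xc - np.outer(t, p)           # deflate X and y
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)    # regression coefficients
    return lambda Xnew: y.mean() + (Xnew - X.mean(axis=0)) @ B

rng = np.random.default_rng(1)
# Columns: spray rate, inlet air temperature, airflow (made-up ranges).
X = rng.uniform([10, 50, 300], [40, 80, 600], size=(60, 3))
y = 0.6 * X[:, 1] - 0.3 * X[:, 0] + 0.01 * X[:, 2]   # assumed linear relation

predict = pls1(X, y, n_components=3)
```

In practice the number of latent components would be chosen by cross-validation on the batch records, and candidate input parameter settings would be screened through `predict` until the predicted output falls in the target range.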

  12. Biogenic inputs to ocean mixing.

    PubMed

    Katija, Kakani

    2012-03-15

    Recent studies have evoked heated debate about whether biologically generated (or biogenic) fluid disturbances affect mixing in the ocean. Estimates of biogenic inputs have shown that their contribution to ocean mixing is of the same order as winds and tides. Although these estimates are intriguing, further study using theoretical, numerical and experimental techniques is required to obtain conclusive evidence of biogenic mixing in the ocean. Biogenic ocean mixing is a complex problem that requires detailed understanding of: (1) marine organism behavior and characteristics (i.e. swimming dynamics, abundance and migratory behavior), (2) mechanisms utilized by swimming animals that have the ability to mix stratified fluids (i.e. turbulence and fluid drift) and (3) knowledge of the physical environment to isolate contributions of marine organisms from other sources of mixing. In addition to summarizing prior work addressing the points above, observations on the effect of animal swimming mode and body morphology on biogenic fluid transport will also be presented. It is argued that to inform the debate on whether biogenic mixing can contribute to ocean mixing, our studies should focus on diel vertical migrators that traverse stratified waters of the upper pycnocline. Based on our understanding of mixing mechanisms, body morphologies, swimming modes and body orientation, combined with our knowledge of vertically migrating populations of animals, it is likely that copepods, krill and some species of gelatinous zooplankton and fish have the potential to be strong sources of biogenic mixing.

  13. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, comprising indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the variance of the output, and they can be viewed as a complement and correction of the interpretation of correlated input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, clarifying without ambiguity the components of both contributions and their origins. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of that input with the other inputs and the independent contribution of the input itself, and the total uncorrelated contribution can be further decomposed into an independent part arising from interaction between the input and the others and an independent part from the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measures are correct, and that this analytical clarification of correlated input contributions is important for extending the theory and solutions for uncorrelated inputs to the correlated case.
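The "total contribution" of a correlated input can be made concrete with a Monte Carlo sketch. For the toy model Y = X1 + X2 with bivariate standard-normal inputs of correlation ρ, the conditional expectation is E[Y|X1] = (1+ρ)X1, so its variance, the total (correlated plus uncorrelated) contribution of X1, is (1+ρ)². The nested-loop estimator below does not rely on that closed form; the model and the value of ρ are illustrative and not taken from the paper.

```python
import numpy as np

# Nested Monte Carlo estimate of Var(E[Y | X1]), the total contribution
# of X1 for Y = X1 + X2 with correlated standard-normal inputs.
# Model and correlation value are illustrative only.
rng = np.random.default_rng(2)
rho = 0.5
n_outer, n_inner = 2000, 200

x1 = rng.normal(size=n_outer)
# Conditional law of X2 given X1 = x for a bivariate normal: N(rho*x, 1 - rho^2).
cond_means = np.array([
    (x + rng.normal(rho * x, np.sqrt(1 - rho**2), n_inner)).mean()
    for x in x1
])
total_contrib = cond_means.var()   # analytic value is (1 + rho)^2 = 2.25
```

Because the conditional sampling of X2 follows the joint distribution, the estimator automatically includes the part of the variance that X1 "inherits" through its correlation with X2, which is exactly the effect the indices above are designed to separate out.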

  14. Directional hearing by linear summation of binaural inputs at the medial superior olive

    PubMed Central

    van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.

    2013-01-01

    Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292
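The scheme described, linear summation of the two ears' excitatory inputs followed by a nonlinear spike-probability stage, can be caricatured in a few lines. The alpha-function EPSP shape, the sigmoid nonlinearity, and all constants below are illustrative choices, not values fitted to the recordings.

```python
import numpy as np

# Toy coincidence detector: EPSPs from the two ears sum linearly, and
# spike probability is a sigmoid of the peak summed potential.
# All shapes and constants are illustrative.
t = np.linspace(0.0, 5.0, 2001)            # time axis in ms

def epsp(onset, tau=0.3):
    s = np.clip(t - onset, 0.0, None)
    return (s / tau) * np.exp(1.0 - s / tau)   # alpha function, peak 1

def spike_prob(itd_ms, threshold=1.7, slope=20.0):
    v = epsp(1.0) + epsp(1.0 + itd_ms)     # linear binaural summation
    peak = v.max()
    return 1.0 / (1.0 + np.exp(-slope * (peak - threshold)))

probs = [spike_prob(d) for d in (0.0, 0.2, 0.5)]
```

With perfectly coincident inputs the summed EPSP crosses threshold and firing is nearly certain; as the ITD grows the peak falls below threshold and firing probability collapses, giving the ITD tuning the abstract describes.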

  15. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  16. Turn customer input into innovation.

    PubMed

    Ulwick, Anthony W

    2002-01-01

    It's difficult to find a company these days that doesn't strive to be customer-driven. Too bad, then, that most companies go about the process of listening to customers all wrong--so wrong, in fact, that they undermine innovation and, ultimately, the bottom line. What usually happens is this: Companies ask their customers what they want. Customers offer solutions in the form of products or services. Companies then deliver these tangibles, and customers just don't buy. The reason is simple--customers aren't expert or informed enough to come up with solutions. That's what your R&D team is for. Rather, customers should be asked only for outcomes--what they want a new product or service to do for them. The form the solutions take should be up to you, and you alone. Using Cordis Corporation as an example, this article describes, in fine detail, a series of effective steps for capturing, analyzing, and utilizing customer input. First come in-depth interviews, in which a moderator works with customers to deconstruct a process or activity in order to unearth "desired outcomes." Addressing participants' comments one at a time, the moderator rephrases them to be both unambiguous and measurable. Once the interviews are complete, researchers then compile a comprehensive list of outcomes that participants rank in order of importance and degree to which they are satisfied by existing products. Finally, using a simple mathematical formula called the "opportunity calculation," researchers can learn the relative attractiveness of key opportunity areas. These data can be used to uncover opportunities for product development, to properly segment markets, and to conduct competitive analysis.

  17. Input reconstruction of chaos sensors.

    PubMed

    Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin

    2008-06-01

    Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics, owing to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested method.

  18. An input shaping controller enabling cranes to move without sway

    SciTech Connect

    Singer, N.; Singhose, W.; Kriikku, E.

    1997-06-01

    A gantry crane at the Savannah River Technology Center was retrofitted with an Input Shaping controller. The controller intercepts the operator's pendant commands and modifies them in real time so that the crane is moved without residual sway in the suspended load. Mechanical components on the crane were modified to make the crane suitable for the anti-sway algorithm. This paper describes the required mechanical modifications to the crane, as well as a new form of Input Shaping that was developed for use on the crane. Experimental results are presented which demonstrate the effectiveness of the new process. Several practical considerations are discussed, including a novel (patent pending) approach for making small, accurate moves without residual oscillations.
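The classic zero-vibration (ZV) shaper illustrates the class of technique the paper builds on (the paper's own shaper is a newer form): the operator's command is convolved with two impulses spaced half a damped sway period apart, so the oscillations induced by the two impulses cancel. The crane numbers below, a 0.5 Hz lightly damped pendulum mode, are illustrative.

```python
import numpy as np

# ZV input shaping sketch: convolve the command with two impulses so
# their induced sway oscillations cancel. Illustrative crane parameters.
wn, zeta, dt = 2 * np.pi * 0.5, 0.01, 0.001    # sway mode and time step

def zv_impulses(wn, zeta):
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    td = np.pi / (wn * np.sqrt(1 - zeta**2))   # half damped period
    return np.array([1.0, K]) / (1 + K), np.array([0.0, td])

def shape(cmd, wn, zeta, dt):
    amps, times = zv_impulses(wn, zeta)
    out = np.zeros(len(cmd))
    for a, tau in zip(amps, times):            # delayed, scaled copies
        k = int(round(tau / dt))
        out[k:] += a * cmd[:len(cmd) - k]
    return out

def sway(cmd, wn, zeta, dt):
    # Load sway modeled as y'' + 2*zeta*wn*y' + wn^2*y = wn^2*cmd,
    # integrated with semi-implicit Euler.
    y = v = 0.0
    ys = np.empty(len(cmd))
    for i, u in enumerate(cmd):
        v += (wn**2 * (u - y) - 2 * zeta * wn * v) * dt
        y += v * dt
        ys[i] = y
    return ys

step = np.ones(10000)                          # 10 s unit move command
def residual(y):                               # worst sway over the last 4 s
    return np.abs(y[-4000:] - 1.0).max()

r_plain = residual(sway(step, wn, zeta, dt))
r_shaped = residual(sway(shape(step, wn, zeta, dt), wn, zeta, dt))
```

The shaped command reaches the same final setpoint (the impulse amplitudes sum to one) but arrives in two steps whose oscillatory responses are out of phase, leaving essentially no residual sway.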

  19. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is determining which sensory inputs a human uses in controlling the tracking task. In the approach presented here, a simple canonical model (a PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant reductions of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivative, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
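The model-order test can be sketched numerically: regress the operator's output on candidate sensory inputs (the error, its derivative, its integral) and see which terms produce a significant reduction in the residual loss. The synthetic "tracker" below uses only proportional and derivative terms, so adding the integral term should barely help; the signals and gains are invented for illustration.

```python
import numpy as np

# Model-order test sketch: which regressors (error, derivative, integral)
# significantly reduce the output-error loss? Synthetic tracker uses P + D.
rng = np.random.default_rng(3)
dt = 0.02
e = np.cumsum(rng.normal(0.0, 0.1, 2000))       # synthetic error signal
de = np.gradient(e, dt)                         # its derivative
ie = np.cumsum(e) * dt                          # its integral
u = 2.0 * e + 0.3 * de + rng.normal(0.0, 0.05, 2000)   # P + D tracker output

def loss(cols):
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    r = u - A @ coef
    return r @ r                                # residual sum of squares

L_p = loss([e])                 # proportional term only
L_pd = loss([e, de])            # add derivative term
L_pid = loss([e, de, ie])       # add integral term
```

The derivative term collapses the loss while the integral term changes it only marginally, which in this framework would be read as evidence that the tracker uses error and error-rate cues but not an integral cue.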

  20. Mixing at the microscale: Power input in shaken microtiter plates.

    PubMed

    Dürauer, Astrid; Hobiger, Stefanie; Walther, Cornelia; Jungbauer, Alois

    2016-12-01

    Power input and local energy dissipation are crucial parameters for the engineering characterization of mixing and fluid dynamics at the microscale. Since hydrodynamic stress depends solely on the maximum power input, we adapted the clay/polymer method to obtain floc destruction kinetics in six-, 24-, and 96-well microtiter plates on orbital shakers. We also determined the specific power input using calorimetry and found that the power input is of the same order of magnitude for the six- and 96-well plates and the laboratory-scale stirred tank reactor, with 40 to 90 W/m³ (Re' = 180 to 440), 40 to 140 W/m³ (Re' = 320 to 640), and 30 to 50 W/m³ (Re = 4000 to 8500), respectively. All of these values are significantly below the 450 to 2100 W/m³ determined for the pilot-scale reactor. The hydrodynamic stress differs significantly between the different microtiter plate formats, as the 96-well plates showed very low shear stress on the shaker with a shaking amplitude of 3 mm. Thus, the transfer of mixing conditions from the microtiter plate to small-scale and pilot-scale reactors must be undertaken with care. Our findings, especially the power input determined by the calorimetric method, show that the hydrodynamic conditions in laboratory- and pilot-scale reactors cannot be reached.

  1. Identification of single-input-single-output quantum linear systems

    NASA Astrophysics Data System (ADS)

    Levitt, Matthew; Guţă, Mădălin

    2017-03-01

    The purpose of this paper is to investigate system identification for single-input-single-output general (active or passive) quantum linear systems. For a given input we address the following questions: (1) Which parameters can be identified by measuring the output? (2) How can we construct a system realization from sufficient input-output data? We show that for time-dependent inputs, the systems which cannot be distinguished are related by symplectic transformations acting on the space of system modes. This complements a previous result of Guţă and Yamamoto [IEEE Trans. Autom. Control 61, 921 (2016), 10.1109/TAC.2015.2448491] for passive linear systems. In the regime of stationary quantum noise input, the output is completely determined by the power spectrum. We define the notion of global minimality for a given power spectrum, and characterize globally minimal systems as those with a fully mixed stationary state. We show that in the case of systems with a cascade realization, the power spectrum completely fixes the transfer function, so the system can be identified up to a symplectic transformation. We give a method for constructing a globally minimal subsystem directly from the power spectrum. Restricting to passive systems, the analysis simplifies so that identifiability may be completely understood from the eigenvalues of a particular system matrix.

  2. Earthquake motion input and its dissemination via the Internet

    NASA Astrophysics Data System (ADS)

    Halldorsson, Benedikt; Dong, Gang; Papageorgiou, Apostolos S.

    2002-06-01

    Objectives of this task are to conduct research on seismic hazards, and to provide relevant input on the expected levels of these hazards to other tasks. Other tasks requiring this input include those dealing with inventory, fragility curves, rehabilitation strategies and demonstration projects. The corresponding input is provided in various formats depending on the intended use: as peak ground motion parameters and/or response spectral values for a given magnitude, epicentral distance and site conditions; or as time histories for scenario earthquakes that are selected based on the disaggregated seismic hazard mapped by the U.S. Geological Survey and are incorporated in building codes. The user community for this research is both academic researchers and practicing engineers who may use the seismic input generated by the synthesis techniques that are developed under this task for a variety of applications. These include ground motions for scenario earthquakes, for developing fragility curves and in specifying ground motion input for critical facilities (such as hospitals) located in the eastern U.S.

  3. Handling Input and Output for COAMPS

    NASA Technical Reports Server (NTRS)

    Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine

    2007-01-01

    Two suites of software have been developed to handle the input and output of the Coupled Ocean Atmosphere Prediction System (COAMPS), which is a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model. Additional algorithms are needed for running the COAMPS software using other global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, downloading GFS data from an Internet file-transfer-protocol (FTP) server of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then converted to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites; (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data; and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.

  4. Parameter determination for singlet oxygen modeling of BPD-mediated PDT

    NASA Astrophysics Data System (ADS)

    McMillan, Dayton D.; Chen, Daniel; Kim, Michele M.; Liang, Xing; Zhu, Timothy C.

    2013-03-01

    Photodynamic therapy (PDT) offers a cancer treatment modality capable of providing minimally invasive localized tumor necrosis. To accurately predict PDT treatment outcome based on pre-treatment patient-specific parameters, an explicit dosimetry model is used to calculate the apparent reacted 1O2 concentration ([1O2]rx) at varied radial distances from the activating light source inserted into tumor tissue, along with the apparent singlet oxygen threshold concentration for necrosis ([1O2]rx,sd) for type-II PDT photosensitizers. Inputs into the model include a number of photosensitizer-independent parameters as well as the photosensitizer-specific photochemical parameters ξ, σ, and β. To determine these parameters for benzoporphyrin derivative monoacid A (BPD), mice were treated with BPD-PDT at varied light source strengths and treatment times. All photosensitizer-independent inputs were assessed pre-treatment, and the average necrotic radius in treated tissue was determined post-treatment. Using the explicit dosimetry model, the BPD-specific ξ, σ, and β photochemical parameters were determined with an optimization algorithm that minimizes the difference between the model-estimated necrotic radii and those measured in the BPD-PDT treated mice. Photochemical parameters for BPD are compared with those of other known photosensitizers, such as Photofrin. The determination of these BPD-specific photochemical parameters provides the data necessary for predicting treatment outcome in clinical BPD-PDT using the explicit dosimetry model.

  5. Estimating Building Simulation Parameters via Bayesian Structure Learning

    SciTech Connect

    Edwards, Richard E; New, Joshua Ryan; Parker, Lynne Edwards

    2013-01-01

    Many key building design policies are made using sophisticated computer simulations such as EnergyPlus (E+), the DOE flagship whole-building energy simulation engine. E+ and other sophisticated computer simulations have several major problems. The two main issues are 1) gaps between the simulation model and the actual structure, and 2) limitations of the modeling engine's capabilities. Currently, these problems are addressed by having an engineer manually calibrate simulation parameters to real-world data or by using algorithmic optimization methods to adjust the building parameters. However, some simulation engines, like E+, are computationally expensive, which makes repeatedly evaluating them costly. This work explores addressing this issue by automatically discovering the simulation's internal input and output dependencies from 20 gigabytes of E+ simulation data; future extensions will use 200 terabytes of E+ simulation data. The model is validated by inferring building parameters for E+ simulations with known ground-truth building parameters. Our results indicate that the model represents parameter means accurately, with some deviation, but does not support inferring parameter values that lie in the distribution's tail.
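    As an illustrative sketch of the parameter-inference step, the following toy example (not the paper's Bayesian structure-learning model; the two "building parameters" and the linear response are invented assumptions) recovers simulation input parameters from noisy outputs with conjugate Bayesian linear regression:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: one simulation output depends linearly on two
    # building parameters (say, an insulation factor and a window fraction);
    # the names and the linear form are illustrative, not from the paper.
    true_theta = np.array([2.0, -0.5])
    X = rng.normal(size=(200, 2))                         # sampled parameter settings
    y = X @ true_theta + rng.normal(scale=0.1, size=200)  # noisy simulated outputs

    # Conjugate Bayesian linear regression: Gaussian prior N(0, tau^2 I) on
    # the parameters, Gaussian observation noise N(0, sigma^2). The posterior
    # mean is the inferred parameter vector; the posterior covariance
    # quantifies the deviation the abstract mentions.
    sigma2, tau2 = 0.1 ** 2, 10.0 ** 2
    precision = X.T @ X / sigma2 + np.eye(2) / tau2       # posterior precision
    post_mean = np.linalg.solve(precision, X.T @ y / sigma2)
    post_cov = np.linalg.inv(precision)
    ```

    With 200 samples and low noise, the posterior mean lands close to the true parameters, while the tails of the posterior remain wide relative to the data, mirroring the abstract's observation that tail values are hard to infer.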

  6. Information Fusion of Conflicting Input Data

    PubMed Central

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-01-01

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible. PMID:27801874

  7. Information Fusion of Conflicting Input Data.

    PubMed

    Mönks, Uwe; Dörksen, Helene; Lohweg, Volker; Hübner, Michael

    2016-10-29

    Sensors, and also actuators or external sources such as databases, serve as data sources in order to realise condition monitoring of industrial applications or the acquisition of characteristic parameters like production speed or reject rate. Modern facilities create such a large amount of complex data that a machine operator is unable to comprehend and process the information contained in the data. Thus, information fusion mechanisms gain increasing importance. Besides the management of large amounts of data, further challenges towards the fusion algorithms arise from epistemic uncertainties (incomplete knowledge) in the input signals as well as conflicts between them. These aspects must be considered during information processing to obtain reliable results, which are in accordance with the real world. The analysis of the scientific state of the art shows that current solutions fulfil said requirements at most only partly. This article proposes the multilayered information fusion system MACRO (multilayer attribute-based conflict-reducing observation) employing the μBalTLCS (fuzzified balanced two-layer conflict solving) fusion algorithm to reduce the impact of conflicts on the fusion result. The performance of the contribution is shown by its evaluation in the scope of a machine condition monitoring application under laboratory conditions. Here, the MACRO system yields the best results compared to state-of-the-art fusion mechanisms. The utilised data is published and freely accessible.

  8. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    PubMed

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
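    A minimal sketch of the K-means approach on synthetic data (the curve shapes, voxel counts, and the rule for picking the arterial cluster are illustrative assumptions, not the paper's clinical pipeline):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 60.0, 60)

    def gamma_variate(t0, ymax, alpha=3.0):
        # Gamma-variate bolus shape commonly used for contrast-agent curves.
        s = np.clip(t - t0, 0.0, None)
        c = s ** alpha * np.exp(-s / 2.0)
        return ymax * c / c.max()

    # Hypothetical data: 20 arterial-like curves (early, tall peak) and
    # 180 tissue-like curves (later, smaller peak), plus noise.
    arterial = np.stack([gamma_variate(5.0, 1.0) + rng.normal(0, 0.02, t.size)
                         for _ in range(20)])
    tissue = np.stack([gamma_variate(12.0, 0.3) + rng.normal(0, 0.02, t.size)
                       for _ in range(180)])
    curves = np.vstack([arterial, tissue])

    def kmeans(X, k=2, iters=50):
        # Plain Lloyd's algorithm; initialised with two dissimilar curves to
        # keep the toy example deterministic.
        centers = X[[0, -1]].copy()
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
            centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
        return labels, centers

    labels, centers = kmeans(curves)
    # Pick the cluster whose mean curve peaks highest as the arterial one,
    # and average its members to obtain the AIF estimate.
    aif_cluster = int(np.argmax(centers.max(axis=1)))
    aif = curves[labels == aif_cluster].mean(axis=0)
    ```

    Averaging the arterial cluster suppresses per-voxel noise, which is the main appeal of clustering-based AIF detection over picking a single voxel by hand.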

  9. Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.

    PubMed

    Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza

    2015-09-15

    The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO, as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the models to monitor the ocean parameters under conditions of missing data, or when regular measurement and monitoring are impossible.
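    A toy stand-in for the wavelet-network idea, using a one-level Haar transform for the wavelet stage and a linear readout in place of the ANN (the temperature/DO coupling below is invented for the demo, not the Hilo Bay data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def haar_dwt(x):
        # One level of the Haar discrete wavelet transform: approximation
        # (low-pass) and detail (high-pass) coefficients, concatenated.
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        return np.concatenate([a, d])

    # Hypothetical series: a smooth temperature signal drives dissolved
    # oxygen (DO) with additive noise.
    n = 512
    temp = np.sin(np.arange(n) * 2.0 * np.pi / 64.0) + rng.normal(0, 0.1, n)
    do = 8.0 - 0.5 * temp + rng.normal(0, 0.05, n)

    # Expand each window of past temperatures into Haar coefficients, then
    # fit a linear readout as a stand-in for the ANN stage of a WNN.
    w = 8
    feats = np.stack([haar_dwt(temp[i:i + w]) for i in range(n - w)])
    feats = np.hstack([feats, np.ones((n - w, 1))])   # intercept column
    target = do[w:]
    coef, *_ = np.linalg.lstsq(feats, target, rcond=None)
    pred = feats @ coef
    r2 = 1.0 - np.sum((target - pred) ** 2) / np.sum((target - target.mean()) ** 2)
    ```

    The wavelet stage separates slow trends from high-frequency noise before the regression sees them, which is the usual explanation for WNN models outperforming plain ANNs on noisy environmental series.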

  10. UNCERTAINTY IN MODEL PREDICTIONS-PLAUSIBLE OUTCOMES FROM ESTIMATES OF INPUT RANGES

    EPA Science Inventory

    Models are commonly used to predict the future extent of contamination given estimates of hydraulic conductivity, porosity, hydraulic gradient, biodegradation rate, and other parameters. Often best estimates or averages of these are used as inputs to models, which then transform...

  11. The role of the input scale in parton distribution analyses

    SciTech Connect

    Pedro Jimenez-Delgado

    2012-08-01

    A first systematic study of the effects of the choice of the input scale in global determinations of parton distributions and QCD parameters is presented. It is shown that, although in principle the results should not depend on these choices, in practice a relevant dependence develops as a consequence of what is called procedural bias. This uncertainty should be considered in addition to other theoretical and experimental errors, and a practical procedure for its estimation is proposed. Possible sources of mistakes in the determination of QCD parameters from parton distribution analyses are pointed out.

  12. Evaluating the Sensitivity of Agricultural Model Performance to Different Climate Inputs: Supplemental Material

    NASA Technical Reports Server (NTRS)

    Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.

    2015-01-01

    Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that a biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.

  13. Chemical input multiplicity facilitates arithmetical processing.

    PubMed

    Margulies, David; Melman, Galina; Felder, Clifford E; Arad-Yellin, Rina; Shanzer, Abraham

    2004-12-01

    We describe the design and function of a molecular logic system, by which a combinatorial recognition of the input signals is utilized to efficiently process chemically encoded information. Each chemical input can target simultaneously multiple domains on the same molecular platform, resulting in a unique combination of chemical states, each with its characteristic fluorescence output. Simple alteration of the input reagents changes the emitted logic pattern and enables it to perform different algebraic operations between two bits, solely in the fluorescence mode. This system exhibits parallelism in both its chemical inputs and light outputs.

  14. Input apparatus for dynamic signature verification systems

    DOEpatents

    EerNisse, Errol P.; Land, Cecil E.; Snelling, Jay B.

    1978-01-01

    The disclosure relates to signature verification input apparatus comprising a writing instrument and platen containing piezoelectric transducers which generate signals in response to writing pressures.

  15. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    Typical estimates of standing wood derived from remote sensing sources take advantage of aggregate measurements of canopy heights (e.g. LIDAR) and canopy diameters (segmentation of IKONOS imagery) to obtain a wood volume estimate by assuming homogeneous species and a fixed function that returns volume. The validation of such techniques uses manually measured diameter-at-breast-height (DBH) records. Our goal is to improve the accuracy and applicability of biomass estimation methods for heterogeneous forests and transitional areas. We are developing estimates with quantifiable uncertainty using a new form of estimation function, active sampling, and volumetric reconstruction image rendering for species-specific mass truth. Initially we are developing a Bayesian adaptive sampling method for BRDF associated with the MISR Rahman model with respect to categorical biomes. This involves characterizing the probability distributions of the 3 free parameters of the Rahman model for the 6 categories of biomes used by MISR. Subsequently, these distributions can be used to determine the optimal sampling methodology to distinguish biomes during acquisition. We have a remotely controlled semi-autonomous helicopter that has stereo imaging, lidar, differential GPS, and spectrometers covering wavelengths from visible to NIR. We intend to automatically vary the waypoints of the flight path via the Bayesian adaptive sampling method. The second critical part of this work is automating the validation of biomass estimates via machine vision techniques. This involves taking 2-D pictures of trees of known species and then, via Bayesian techniques, reconstructing 3-D models of the trees to estimate the distribution moments associated with wood volume. Similar techniques have been developed by the medical imaging community. This then provides probability distributions conditional upon species. 
The final part of this work is in relating the BRDF actively sampled measurements to species

  16. Accurate calculation of the transverse anisotropy of a magnetic domain wall in perpendicularly magnetized multilayers

    NASA Astrophysics Data System (ADS)

    Büttner, Felix; Krüger, Benjamin; Eisebitt, Stefan; Kläui, Mathias

    2015-08-01

    Bloch domain walls are the most common type of transition between two out-of-plane magnetized domains (one magnetized upwards, one downwards) in films with perpendicular magnetic anisotropy. The rotation of the spins of such domain walls in the plane of the film requires energy, which is described by an effective anisotropy, the so-called transverse or hard-axis anisotropy K⊥. This anisotropy and the related Döring mass density of the domain wall are key parameters of the one-dimensional model used to describe the motion of magnetic domain walls. In particular, the critical field strength or current density where oscillatory domain wall motion sets in (Walker breakdown) is directly proportional to K⊥. So far, no general framework has been available to determine K⊥ from static characterizations such as magnetometry measurements. Here, we derive a universal analytical expression to calculate the transverse anisotropy constant for the important class of perpendicular magnetic multilayers. All the required input parameters of the model, such as the number of repeats, the thickness of a single magnetic layer, and the layer periodicity, as well as the effective perpendicular anisotropy, the saturation magnetization, and the static domain wall width, are accessible by static sample characterizations. We apply our model to a widely used multilayer system and find that the effective transverse anisotropy constant differs by a factor of seven from the value obtained with conventional approximations, showing the importance of using our analysis scheme. Our model is also applicable to domain walls in materials with Dzyaloshinskii-Moriya interaction (DMI). Accurate knowledge of K⊥ is needed to determine other unknown parameters from measurements, such as the DMI strength or the spin polarization of the spin current in current-induced domain wall motion experiments.

  17. Helicopter Based Magnetic Detection Of Wells At The Teapot Dome (Naval Petroleum Reserve No. 3 Oilfield: Rapid And Accurate Geophysical Algorithms For Locating Wells

    NASA Astrophysics Data System (ADS)

    Harbert, W.; Hammack, R.; Veloski, G.; Hodge, G.

    2011-12-01

    In this study, airborne magnetic data were collected by Fugro Airborne Surveys from a helicopter platform (Figure 1) using the Midas II system over the 39 km2 NPR3 (Naval Petroleum Reserve No. 3) oilfield in east-central Wyoming. The Midas II system employs two Scintrex CS-2 cesium vapor magnetometers on opposite ends of a transversely mounted, 13.4-m long horizontal boom located amidships (Fig. 1). Each magnetic sensor had an in-flight sensitivity of 0.01 nT. Real-time compensation of the magnetic data for magnetic noise induced by maneuvering of the aircraft was accomplished using two fluxgate magnetometers mounted just inboard of the cesium sensors. The total area surveyed was 40.5 km2 (NPR3) near Casper, Wyoming. The purpose of the survey was to accurately locate wells that had been drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood to enhance oil recovery, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells that are missing from the well database and to provide accurate locations for all wells. The well-location method combined an input dataset (for example, leveled total magnetic field reduced to the pole) with first and second horizontal spatial derivatives of this input dataset, which were then analyzed using focal statistics and finally combined using a fuzzy combination operation. Analytic signal and the Shi and Butt (2004) ZS attribute were also analyzed using this algorithm. A parameter could be adjusted to determine sensitivity. Depending on the input dataset, 88% to 100% of the wells were located, with typical values being 95% to 99% for the NPR3 field site.

  18. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    NASA Astrophysics Data System (ADS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which can help to better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulation, the estimated input parameters show an obvious difference. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
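    The two steps above can be sketched end to end on synthetic data. This toy version estimates the Gamma parameters by the method of moments rather than the paper's state-space estimator, and the ISI-to-current conversion formula is an illustrative choice, not one of the paper's two conversion formulas:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Step 1: treat the spike train as a Gamma renewal process and estimate
    # the shape and scale parameters from interspike intervals (ISIs).
    isi = rng.gamma(shape=4.0, scale=5.0, size=2000)   # synthetic ISIs in ms
    m, v = isi.mean(), isi.var()
    shape_hat = m ** 2 / v                             # method of moments
    scale_hat = v / m

    # Step 2: convert the estimated mean ISI into a constant drive for a
    # leaky integrate-and-fire (LIF) neuron. For constant input I > v_th,
    # the continuous LIF period is T = tau * ln(I / (I - v_th)); inverting
    # that gives the current reproducing the estimated mean ISI.
    tau, v_th, v_reset, dt = 10.0, 1.0, 0.0, 0.1       # ms, threshold in a.u.
    T = shape_hat * scale_hat                          # target mean ISI (ms)
    I = v_th / (1.0 - np.exp(-T / tau))

    # Euler simulation of the LIF neuron under the reconstructed input.
    v_m, spikes = 0.0, []
    for step in range(int(500 / dt)):
        v_m += dt / tau * (I - v_m)
        if v_m >= v_th:
            spikes.append(step * dt)
            v_m = v_reset
    ```

    The simulated firing period closely matches the mean ISI of the original Gamma process, which is the consistency check a reconstruction of this kind must pass.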

  19. Articulatory Parameters.

    ERIC Educational Resources Information Center

    Ladefoged, Peter

    1980-01-01

    Summarizes the 16 parameters hypothesized to be necessary and sufficient for linguistic phonetic specifications. Suggests seven parameters affecting tongue shapes, three determining the positions of the lips, one controlling the position of the velum, four varying laryngeal actions, and one controlling respiratory activity. (RL)

  20. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  1. Tools to Develop or Convert MOVES Inputs

    EPA Pesticide Factsheets

    The following tools are designed to help users develop inputs to MOVES and post-process the output. With the release of MOVES2014, EPA strongly encourages state and local agencies to develop local inputs based on MOVES fleet and activity categories.

  2. EDP Applications to Musical Bibliography: Input Considerations

    ERIC Educational Resources Information Center

    Robbins, Donald C.

    1972-01-01

    The application of Electronic Data Processing (EDP) has been a boon in the analysis and bibliographic control of music. However, an extra step of encoding must be undertaken for input of music. The best hope to facilitate musical input is the development of an Optical Character Recognition (OCR) music-reading machine. (29 references) (Author/NH)

  3. 7 CFR 3430.607 - Stakeholder input.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 15 2013-01-01 2013-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...

  4. 7 CFR 3430.607 - Stakeholder input.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...

  5. 7 CFR 3430.607 - Stakeholder input.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 15 2014-01-01 2014-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...

  6. 7 CFR 3430.607 - Stakeholder input.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 15 2012-01-01 2012-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...

  7. CREATING INPUT TABLES FROM WAPDEG FOR RIP

    SciTech Connect

    K.G. Mon

    1998-08-10

    The purpose of this calculation is to create tables for input into RIP ver. 5.18 (Integrated Probabilistic Simulator for Environmental Systems) from WAPDEG ver. 3.06 (Waste Package Degradation) output. This calculation details the creation of the RIP input tables for TSPA-VA REV.00.

  8. Managing Input during Assistive Technology Product Design

    ERIC Educational Resources Information Center

    Choi, Young Mi

    2011-01-01

    Many different sources of input are available to assistive technology innovators during the course of designing products. However, there is little information on which ones may be most effective or how they may be efficiently utilized within the design process. The aim of this project was to compare how three types of input--from simulation tools,…

  9. Input, Interaction, and Second Language Production.

    ERIC Educational Resources Information Center

    Gass, Susan M.; Varonis, Evangeline Marlos

    1994-01-01

    This study investigated the relationship among input, interaction, and second-language production among 16 native-nonnative dyads. The results indicated that both modified input and interaction initiated by the native speaker lead to greater comprehension by the nonnative speaker, as measured by task performance. (Contains 48 references.) (MDM)

  10. Making Input Comprehensible: Do Interactional Modifications Help?

    ERIC Educational Resources Information Center

    Pica, Teresa; And Others

    1990-01-01

    A pilot study of a larger project on second language comprehension under two input conditions is reported. The first condition is characterized by the availability of samples of target input that have been modified a priori toward greater semantic redundancy and transparency and less complex syntax. The second condition is characterized by the…

  11. Statistical identification of effective input variables. [SCREEN

    SciTech Connect

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. Its efficiency has been demonstrated by several examples, and it has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
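    A heavily simplified sketch of this kind of screening (the model function, sample sizes, and scoring rule are invented for illustration; SCREEN's actual stagewise regression is more elaborate):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical code with six input variables; only x0 (linearly) and
    # x3 (quadratically) actually influence the output.
    def model(x):
        return 1.5 * x[:, 0] + 2.0 * x[:, 3] ** 2 + 0.01 * x[:, 1]

    # Stage 1: evaluate the code on a modest sample of input combinations,
    # with no prior elimination of variables.
    X = rng.uniform(-1.0, 1.0, size=(400, 6))
    y = model(X)

    # Stage 2: rank inputs by |correlation| with the output; a squared term
    # is also tested so that purely nonlinear effects are not missed,
    # mimicking the regression stage on nonlinear functions.
    lin = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(6)])
    quad = np.array([abs(np.corrcoef(X[:, j] ** 2, y)[0, 1]) for j in range(6)])
    score = np.maximum(lin, quad)
    ranking = np.argsort(score)[::-1]        # most influential inputs first
    ```

    With 400 runs the two influential inputs rise clearly above the noise floor of the four inert ones, which is the economy the abstract claims: a ranking from far fewer runs than a full variance decomposition would need.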

  12. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, as it must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises.

  13. Microstructure-Dependent Gas Adsorption: Accurate Predictions of Methane Uptake in Nanoporous Carbons

    SciTech Connect

    Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R

    2014-01-01

    We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.

  14. Comparative performance of decoupled input-output linearizing controller and linear interpolation PID controller: enhancing biomass and ethanol production in Saccharomyces cerevisiae.

    PubMed

    Persad, A; Chopda, V R; Rathore, A S; Gomes, J

    2013-02-01

    A decoupled input-output linearizing controller (DIOLC) was designed as an alternative advanced control strategy for controlling bioprocesses. Simulation studies of its implementation were carried out to control ethanol and biomass production in Saccharomyces cerevisiae, and its performance was compared to that of a proportional-integral-derivative (PID) controller with parameters tuned according to a linear schedule. The overall performance of the DIOLC was better in the test experiments requiring the controllers to respond accurately to simultaneous changes in the trajectories of the substrate and dissolved oxygen concentrations. It also exhibited better performance in perturbation experiments of the most significant parameters q_S,max, q_O2,max, and k_s, determined through a statistical design of experiments involving 730 simulations. The DIOLC exhibited a superior ability to constrain the process when implemented in extreme metabolic regimes of high oxygen demand for maximizing biomass concentration and low oxygen demand for maximizing ethanol concentration.

  15. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
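    The input-selection idea above can be sketched in a minimal form: candidate input subsets are encoded as bit masks and evolved with selection, crossover, and mutation. The fitness function below is a toy scoring proxy, not the neural-network approximation error used in the paper, and all names are illustrative.

```python
import random

def fitness(mask, data, target):
    """Toy scoring proxy (NOT the paper's network error): reward columns
    that co-vary with the target, penalize larger subsets."""
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return 0.0
    score = sum(abs(sum(row[i] * t for row, t in zip(data, target)))
                for i in cols) / len(cols)
    return score - 0.05 * len(cols)

def ga_select(data, target, n_features, pop=20, gens=30):
    """Minimal GA over bit masks: elitism, one-point crossover, mutation."""
    popn = [[random.randint(0, 1) for _ in range(n_features)]
            for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=lambda m: fitness(m, data, target),
                        reverse=True)
        popn = scored[:2]                          # keep the two elites
        while len(popn) < pop:
            a, b = random.sample(scored[:10], 2)   # parents from top half
            cut = random.randrange(1, n_features)
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # bit-flip mutation
                child[random.randrange(n_features)] ^= 1
            popn.append(child)
    return max(popn, key=lambda m: fitness(m, data, target))

random.seed(3)
data = [[random.random() for _ in range(6)] for _ in range(30)]
target = [row[0] + row[2] for row in data]
best_mask = ga_select(data, target, n_features=6)
```

    The bit-mask encoding is what lets the GA search the combinatorial space of input subsets without enumerating it.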

  16. Detailed map of a cis-regulatory input function

    NASA Astrophysics Data System (ADS)

    Setty, Y.; Mayo, A. E.; Surette, M. G.; Alon, U.

    2003-06-01

    Most genes are regulated by multiple transcription factors that bind specific sites in DNA regulatory regions. These cis-regulatory regions perform a computation: the rate of transcription is a function of the active concentrations of each of the input transcription factors. Here, we used accurate gene expression measurements from living cell cultures, bearing GFP reporters, to map in detail the input function of the classic lacZYA operon of Escherichia coli, as a function of about a hundred combinations of its two inducers, cAMP and isopropyl β-D-thiogalactoside (IPTG). We found an unexpectedly intricate function with four plateau levels and four thresholds. This result compares well with a mathematical model of the binding of the regulatory proteins cAMP receptor protein (CRP) and LacI to the lac regulatory region. The model is also used to demonstrate that with few mutations, the same region could encode much purer AND-like or even OR-like functions. This suggests that the wild-type region is selected to perform an elaborate computation in setting the transcription rate. The present approach can be generally used to map the input functions of other genes.
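    A common mathematical idealization of a two-inducer input function is a product of Hill terms, which yields AND-like behavior; the measured lac function is more intricate, with four plateaus and four thresholds. The sketch below is illustrative only, with hypothetical constants, and is not the paper's fitted CRP/LacI binding model.

```python
def lac_input_function(cAMP, IPTG, n=2, K_crp=1.0, K_lacI=1.0,
                       basal=0.05, fold=1.0):
    """Idealized two-input regulation: CRP activation (rises with cAMP)
    times LacI de-repression (rises with IPTG). Constants hypothetical."""
    activation = cAMP ** n / (K_crp ** n + cAMP ** n)
    derepression = IPTG ** n / (K_lacI ** n + IPTG ** n)
    return basal + fold * activation * derepression

# AND-like behavior: both inducers must be high for strong expression.
high = lac_input_function(100.0, 100.0)
low = lac_input_function(100.0, 0.01)
```

    In this idealization only one high plateau exists; the measured four-plateau structure is what makes the real regulatory region's computation "elaborate."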

  17. The series product for gaussian quantum input processes

    NASA Astrophysics Data System (ADS)

    Gough, John E.; James, Matthew R.

    2017-02-01

    We present a theory for connecting quantum Markov components into a network with quantum input processes in a Gaussian state (including thermal and squeezed states). One would expect on physical grounds that the connection rules should be independent of the state of the input to the network. To compute statistical properties, we use a version of Wick's theorem involving fictitious vacuum fields (a Fock-space-based representation of the fields); while this aids computation and gives a rigorous formulation, the various representations need not be unitarily equivalent. In particular, a naive application of the connection rules would lead to the wrong answer. We establish the correct interconnection rules and show that, while the quantum stochastic differential equations of motion display explicitly the covariances (thermal and squeezing parameters) of the Gaussian input fields, we may introduce the Wick-Stratonovich form, which leads to a way of writing these equations that does not depend on these covariances and so corresponds to the universal equations written in terms of formal quantum input processes. We show that a wholly consistent theory of quantum open systems in series can be developed in this way and, as required physically, is universal and in particular representation-free.

  18. Input space-dependent controller for multi-hazard mitigation

    NASA Astrophysics Data System (ADS)

    Cao, Liang; Laflamme, Simon

    2016-04-01

    Semi-active and active structural control systems are advanced mechanical devices and systems capable of high damping performance, ideal for the mitigation of multi-hazards. The implementation of these devices within structural systems is still in its infancy because of the complexity of designing a robust closed-loop control system that can ensure reliable and high mitigation performance. Particular challenges in designing a controller for multi-hazard mitigation include: 1) very large uncertainties on dynamic parameters and unknown excitations; 2) limited measurements with probabilities of sensor failure; 3) immediate performance requirements; and 4) unavailable sets of input-output data during design. To facilitate the implementation of structural control systems, a new type of controller with high adaptive capabilities is proposed. It is based on real-time identification of an embedding that represents the essential dynamics found in the input space, i.e., in the sensor measurements. This type of controller is termed the input-space-dependent controller (ISDC). In this paper, the principle of the ISDC is presented, its stability and performance are derived analytically for the case of harmonic inputs, and its performance is demonstrated for different types of hazards. Results show the promise of this new type of controller for mitigating multi-hazards by 1) relying on local and limited sensors only; 2) not requiring prior evaluation or training; and 3) adapting to system non-stationarities.

  19. Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights

    NASA Astrophysics Data System (ADS)

    Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato

    2016-07-01

    Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external input. On the other hand, neurons exhibit rich evoked activity driven by sensory stimuli, and both types of activity are reported to contribute to cognitive function. We studied the external-input response of a neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained under external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory-input parameter space. The bimodal membrane potential distribution, a characteristic feature of the up-down state, was obtained under such conditions. From these results, we conclude that the model displays various evoked activities in response to external input and is biologically plausible.
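    Lognormally distributed weights, as assumed above, can be sampled directly with the standard library; the heavy right tail means a few strong synapses dominate, so the mean weight exceeds the median. The parameters mu and sigma below are illustrative, not the paper's values.

```python
import random
import statistics

random.seed(1)
mu, sigma = -0.7, 1.0          # illustrative lognormal parameters
weights = [random.lognormvariate(mu, sigma) for _ in range(10000)]

# Heavy right tail: a few strong synapses dominate the distribution,
# so the sample mean exceeds the sample median.
skewed = statistics.mean(weights) > statistics.median(weights)
```

    Analytically, the lognormal mean is exp(mu + sigma^2/2) while the median is exp(mu), so the gap widens rapidly with sigma.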

  20. MODFLOW-style parameters in underdetermined parameter estimation

    USGS Publications Warehouse

    D'Oria, Marco D.; Fienen, Michael N.

    2012-01-01

    In this article, we discuss the use of MODFLOW-style parameters in the numerical codes MODFLOW-2005 and MODFLOW-2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.

  1. MODFLOW-Style parameters in underdetermined parameter estimation.

    PubMed

    D'Oria, Marco; Fienen, Michael N

    2012-01-01

    In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.

  2. MODFLOW-style parameters in underdetermined parameter estimation

    USGS Publications Warehouse

    D'Oria, M.; Fienen, M.N.

    2012-01-01

    In this article, we discuss the use of MODFLOW-style parameters in the numerical codes MODFLOW-2005 and MODFLOW-2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes. © 2011, National Ground Water Association.

  3. Kinetic parameters estimation in an anaerobic digestion process using successive quadratic programming.

    PubMed

    Aceves-Lara, C A; Aguilar-Garnica, E; Alcaraz-González, V; González-Reynoso, O; Steyer, J P; Dominguez-Beltran, J L; González-Alvarez, V

    2005-01-01

    In this work, an optimization method is implemented in an anaerobic digestion model to estimate its kinetic parameters and yield coefficients. This method combines advanced state-estimation schemes and powerful nonlinear programming techniques to yield fast and accurate estimates of the aforementioned parameters. We first implement an asymptotic observer to provide estimates of the non-measured variables (such as biomass concentration) and good guesses for the initial conditions of the parameter estimation algorithm. These results are then used by the successive quadratic programming (SQP) technique to calculate the kinetic parameters and yield coefficients of the anaerobic digestion process. The model, provided with the estimated parameters, is tested with experimental data from a pilot-scale fixed-bed reactor treating raw industrial wine distillery wastewater. It is shown that SQP reaches a fast and accurate estimation of the kinetic parameters despite highly noise-corrupted experimental data and time-varying input variables. A statistical analysis is also performed to validate the combined estimation method. Finally, a comparison between the proposed method and the traditional Marquardt technique shows that both yield similar results; however, the calculation time of the traditional technique is considerably higher than that of the proposed method.
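    The two-stage idea (reconstruct the unmeasured states, then fit kinetic parameters by minimizing a least-squares objective) can be sketched with Monod kinetics. Note the hedge: a crude coordinate-refinement search stands in for the SQP solver here, and synthetic noise-free data stand in for the observer-reconstructed states of the paper.

```python
def monod_rate(S, mu_max, K_s):
    """Monod kinetics, a typical rate law in anaerobic digestion models."""
    return mu_max * S / (K_s + S)

def sse(params, data):
    """Sum of squared errors between modeled and observed rates."""
    mu_max, K_s = params
    return sum((monod_rate(S, mu_max, K_s) - r) ** 2 for S, r in data)

def fit(data, guess, step=0.5, iters=60):
    """Greedy coordinate refinement (a crude stand-in for SQP): try
    +/- step on each parameter, halve the step when nothing improves."""
    best = list(guess)
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for d in (step, -step):
                trial = list(best)
                trial[i] += d
                if trial[i] > 0 and sse(trial, data) < sse(best, data):
                    best, improved = trial, True
        if not improved:
            step *= 0.5
    return best

# Synthetic, noise-free observations generated with mu_max=0.4, K_s=2.0.
data = [(S, monod_rate(S, 0.4, 2.0))
        for S in (0.5, 1.0, 2.0, 5.0, 10.0, 20.0)]
mu_hat, Ks_hat = fit(data, guess=(1.0, 5.0))
```

    A real SQP solver additionally uses gradient and curvature information to converge in far fewer objective evaluations, which is why the paper's method outperforms the Marquardt technique in calculation time.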

  4. Contributions of skin and muscle afferent input to movement sense in the human hand.

    PubMed

    Cordo, Paul J; Horn, Jean-Louis; Künster, Daniela; Cherry, Anne; Bratt, Alex; Gurfinkel, Victor

    2011-04-01

    In the stationary hand, static joint-position sense originates from multimodal somatosensory input (e.g., joint, skin, and muscle). In the moving hand, however, it is uncertain how movement sense arises from these different submodalities of proprioceptors. In contrast to static-position sense, movement sense includes multiple parameters such as motion detection, direction, joint angle, and velocity. Because movement sense is both multimodal and multiparametric, it is not known how different movement parameters are represented by different afferent submodalities. In theory, each submodality could redundantly represent all movement parameters, or, alternatively, different afferent submodalities could be tuned to distinctly different movement parameters. The study described in this paper investigated how skin input and muscle input each contributes to movement sense of the hand, in particular, to the movement parameters dynamic position and velocity. Healthy adult subjects were instructed to indicate with the left hand when they sensed the unseen fingers of the right hand being passively flexed at the metacarpophalangeal (MCP) joint through a previously learned target angle. The experimental approach was to suppress input from skin and/or muscle: skin input by anesthetizing the hand, and muscle input by unexpectedly extending the wrist to prevent MCP flexion from stretching the finger extensor muscle. Input from joint afferents was assumed not to play a significant role because the task was carried out with the MCP joints near their neutral positions. We found that, during passive finger movement near the neutral position in healthy adult humans, both skin and muscle receptors contribute to movement sense but qualitatively differently. Whereas skin input contributes to both dynamic position and velocity sense, muscle input may contribute only to velocity sense.

  5. Surrogate models of precessing numerical relativity gravitational waveforms for use in parameter estimation

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott; Galley, Chad; Hemberger, Daniel; Scheel, Mark; Schmidt, Patricia; Smith, Rory; SXS Collaboration Collaboration

    2016-03-01

    We are now in the advanced detector era of gravitational wave astronomy, and the merger of two black holes (BHs) is one of the most promising sources of gravitational waves that could be detected on Earth. To infer the BH masses and spins, the observed signal must be compared to waveforms predicted by general relativity for millions of binary configurations. Numerical relativity (NR) simulations can produce accurate waveforms, but are prohibitively expensive to use for parameter estimation. Other waveform models are fast enough but may lack accuracy in portions of the parameter space. Numerical relativity surrogate models attempt to rapidly predict the results of an NR code with a small or negligible modeling error, after being trained on a set of input waveforms. Such surrogate models are ideal for parameter estimation, as they are both fast and accurate, and have already been built for the case of non-spinning BHs. Using 250 input waveforms, we build a surrogate model for waveforms from the Spectral Einstein Code (SpEC) for a subspace of precessing systems.

  6. Input functions for 6-[fluorine-18]Fluorodopa quantitation in parkinsonism: Comparative studies and clinical correlations

    SciTech Connect

    Takikawa, S.; Dhawan, V.; Chaly, T.; Robeson, W.; Dahl, R.; Zanzi, I.; Mandel, F.; Spetsieris, P.; Eidelberg, D.

    1994-06-01

    PET has been used to quantify striatal 6-[{sup 18}F]fluoro-L-dopa (FDOPA) uptake as a measure of presynaptic dopaminergic function. Striatal FDOPA uptake rate constants (K{sub i}) can be calculated using dynamic PET imaging with measurements of the plasma FDOPA input function determined either directly or by several estimation procedures. The authors assessed the comparative clinical utility of these methods by calculating the striato-occipital ratio (SOR) and striatal K{sub i} values in 12 patients with mild to moderate PD and 12 age-matched normal volunteers. The plasma FDOPA time-activity curve (K{sub i}{sup FD}); the plasma {sup 18}F time-activity curve (K{sub i}{sup P}); the occipital time-activity curve (K{sub i}{sup OCC}); and a simplified population-derived FDOPA input function (K{sub i}{sup EFD}) were used to calculate striatal K{sub i}. Mean values for all striatal K{sub i} estimates and SOR were significantly lower in the PD group. Although all measured parameters discriminated PD patients from normal volunteers, K{sub i}{sup FD} and K{sub i}{sup EFD} provided the best between-group separation. K{sub i}{sup FD}, K{sub i}{sup EFD}, and K{sub i}{sup OCC} measures correlated significantly with quantitative disease-severity ratings, although K{sub i}{sup FD} predicted quantitative clinical disability most accurately. These results suggest that K{sub i}{sup FD} may be an optimal marker of the parkinsonian disease process. K{sub i}{sup EFD} may be a useful alternative to K{sub i}{sup FD} for most clinical research applications. 40 refs., 4 figs., 7 tabs.

  7. A robust interpolation procedure for producing tidal current ellipse inputs for regional and coastal ocean numerical models

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Alves, Jose Henrique; Greenslade, Diana; Horsburgh, Kevin; Swail, Val

    2017-03-01

    Regional and/or coastal ocean models can use tidal current harmonic forcing, together with tidal harmonic forcing along open boundaries, to successfully simulate tides and tidal currents. These inputs can be freely generated using online open-access data, but the data produced are not always at the resolution required for regional or coastal models. Subsequent interpolation procedures can produce tidal current forcing data errors for parts of the world's coastal ocean where tidal ellipse inclinations and phases move across the invisible mathematical "boundaries" between 359° and 0° (or 179° and 0°). In nature, such "boundaries" are in fact smooth transitions, but if they are not treated correctly during interpolation, they can produce inaccurate input data and hamper the accurate simulation of tidal currents in regional and coastal ocean models. These avoidable errors arise from procedural shortcomings involving vector embodiment problems (i.e., how a vector is represented mathematically, for example as velocities or as coordinates). Automated solutions for producing correct tidal ellipse parameter input data are possible if a series of steps is followed correctly, including the use of Cartesian coordinates during interpolation. This note comprises the first published description of scenarios where tidal ellipse parameter interpolation errors can arise, and of a procedure to avoid these errors when generating tidal inputs for regional and/or coastal ocean numerical models. We explain how a straightforward sequence of data production, format conversion, interpolation, and format reconversion steps may be used to check for the potential occurrence and avoidance of tidal ellipse interpolation and phase errors. This sequence is demonstrated via a case study of the M2 tidal constituent in the seas around Korea but is designed to be universally applicable. We also recommend employing tidal ellipse parameter
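    The core of the recommended procedure, interpolating angular ellipse parameters in Cartesian coordinates rather than as raw angles, can be sketched for a single inclination or phase angle (a minimal illustration, not the note's full workflow):

```python
import math

def interp_angle_deg(a, b, w=0.5):
    """Interpolate two angles (degrees) by blending their unit vectors in
    Cartesian coordinates, then converting back. This avoids the spurious
    jump at the 359->0 'boundary' that naive averaging produces."""
    ax, ay = math.cos(math.radians(a)), math.sin(math.radians(a))
    bx, by = math.cos(math.radians(b)), math.sin(math.radians(b))
    x = (1 - w) * ax + w * bx
    y = (1 - w) * ay + w * by
    return math.degrees(math.atan2(y, x)) % 360

# Naive averaging of 359 deg and 1 deg gives 180 deg; the Cartesian
# route gives the physically smooth answer of 0 deg.
print(round(interp_angle_deg(359, 1)) % 360)   # -> 0
```

    The same conversion-interpolation-reconversion pattern applies to each angular tidal ellipse parameter on the model grid.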

  8. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  9. Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator

    DOEpatents

    Asaad, Sameh W.; Kapur, Mohit

    2016-03-15

    A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.

  10. Input-state approach to Boolean networks.

    PubMed

    Cheng, Daizhan

    2009-03-01

    This paper investigates the structure of Boolean networks via their input-state structure. Using the algebraic form proposed by the author, the logic-based input-state dynamics of Boolean networks, called Boolean control networks, are converted into an algebraic discrete-time dynamic system. The structure of cycles of Boolean control systems is then obtained as compounded cycles. Using the obtained input-state description, the structure of Boolean networks is investigated, and their attractors are revealed as nested compounded cycles, called rolling gears. This structure explains why small cycles mainly decide the behavior of cellular networks. Some illustrative examples are presented.
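    The discrete-time dynamic-system viewpoint can be illustrated by iterating a Boolean network as a map over state tuples and detecting its attractor cycle; the three-node network below is a toy example, not one from the paper, and this sketch does not use the paper's algebraic (semi-tensor product) machinery.

```python
def step(state, rules):
    """Advance the Boolean network one synchronous step."""
    return tuple(rule(state) for rule in rules)

def find_attractor(state, rules):
    """Iterate the map until a state repeats; return the attractor cycle."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state, rules)
    return trajectory[seen[state]:]

# Toy 3-node network (illustrative only):
#   x1' = x2 AND x3,  x2' = NOT x1,  x3' = x2
rules = [lambda s: s[1] & s[2],
         lambda s: 1 - s[0],
         lambda s: s[1]]
cycle = find_attractor((1, 0, 0), rules)
print(len(cycle))   # -> 5
```

    Because the state space is finite, every trajectory must eventually enter such a cycle, which is why attractors fully characterize the long-run behavior.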

  11. Non-recursive sequential input deconvolution

    NASA Astrophysics Data System (ADS)

    Bernal, Dionisio

    2017-01-01

    A scheme for sequential deconvolution of inputs from measured outputs is presented. The key feature of the formulation is elimination of the initial state from the input-output relations by projecting the output onto the left null space of the observability block. Removal of the initial state allows the sequential format of the deconvolution, essential for computational reasons, to be implemented non-recursively, assuring unconditional stability. Identifiability is realized when the input-output arrangement does not have transmission zeros; observability and controllability are shown to be immaterial. Comparison of results from the scheme with those from Dynamic Programming highlights the benefits of eliminating the initial state.

  12. Wireless, relative-motion computer input device

    DOEpatents

    Holzrichter, John F.; Rosenbury, Erwin T.

    2004-05-18

    The present invention provides a system for controlling a computer display in a workspace using an input unit/output unit. A train of EM waves is sent out to flood the workspace. EM waves are reflected from the input unit/output unit. A relative-distance-moved information signal is created using the EM waves reflected from the input unit/output unit. Algorithms convert the relative-distance-moved information signal to a display signal. The computer display is controlled in response to the display signal.

  13. Accurate and simple calibration of DLP projector systems

    NASA Astrophysics Data System (ADS)

    Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus

    2014-03-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available that require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively, and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP projector systems based on phase-shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry that error over into the projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of parameters, including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1 s, making our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured-light image reconstruction quality.
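    For context, phase-shifting profilometry typically recovers a wrapped phase per pixel from four intensity images captured at 90° phase shifts; the standard four-step formula is sketched below on synthetic data (this is the textbook formula, not the paper's implementation).

```python
import math

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting formula: intensities captured at shifts
    of 0, 90, 180 and 270 degrees recover the wrapped phase."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic pixel: offset 2, modulation amplitude 1, true phase 0.7 rad.
phi_true = 0.7
samples = [2 + math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = wrapped_phase(*samples)
print(round(phi, 3))   # -> 0.7
```

    The arctangent cancels both the constant offset and the modulation amplitude, which is why the method is robust to illumination and target reflectivity.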

  14. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped-parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental measurements and discuss the sensitivity of our model to its parameters.

  15. Reactive nitrogen inputs to US lands and waterways: how certain are we about sources and fluxes?

    EPA Science Inventory

    An overabundance of reactive nitrogen (N) as a result of anthropogenic activities has led to multiple human health and environmental concerns. Efforts to address these concerns require an accurate accounting of N inputs. Here, we present a novel synthesis of data describing N inp...

  16. Set Theory Applied to Uniquely Define the Inputs to Territorial Systems in Emergy Analyses

    EPA Science Inventory

    The language of set theory can be utilized to represent the emergy involved in all processes. In this paper we use set theory in an emergy evaluation to ensure an accurate representation of the inputs to territorial systems. We consider a generic territorial system and we describ...

  17. Quantitative Amyloid Imaging Using Image-Derived Arterial Input Function

    PubMed Central

    Su, Yi; Blazey, Tyler M.; Snyder, Abraham Z.; Raichle, Marcus E.; Hornbeck, Russ C.; Aldea, Patricia; Morris, John C.; Benzinger, Tammie L. S.

    2015-01-01

    Amyloid PET imaging is an indispensable tool widely used in the investigation, diagnosis and monitoring of Alzheimer’s disease (AD). Currently, a reference region based approach is used as the mainstream quantification technique for amyloid imaging. This approach assumes the reference region is amyloid free and has the same tracer influx and washout kinetics as the regions of interest. However, this assumption may not always be valid. The goal of this work is to evaluate an amyloid imaging quantification technique that uses arterial region of interest as the reference to avoid potential bias caused by specific binding in the reference region. 21 participants, age 58 and up, underwent Pittsburgh compound B (PiB) PET imaging and MR imaging including a time-of-flight (TOF) MR angiography (MRA) scan and a structural scan. FreeSurfer based regional analysis was performed to quantify PiB PET data. Arterial input function was estimated based on coregistered TOF MRA using a modeling based technique. Regional distribution volume (VT) was calculated using Logan graphical analysis with estimated arterial input function. Kinetic modeling was also performed using the estimated arterial input function as a way to evaluate PiB binding (DVRkinetic) without a reference region. As a comparison, Logan graphical analysis was also performed with cerebellar cortex as reference to obtain DVRREF. Excellent agreement was observed between the two distribution volume ratio measurements (r>0.89, ICC>0.80). The estimated cerebellum VT was in line with literature reported values and the variability of cerebellum VT in the control group was comparable to reported variability using arterial sampling data. This study suggests that image-based arterial input function is a viable approach to quantify amyloid imaging data, without the need of arterial sampling or a reference region. This technique can be a valuable tool for amyloid imaging, particularly in population where reference normalization

  18. Quantitative amyloid imaging using image-derived arterial input function.

    PubMed

    Su, Yi; Blazey, Tyler M; Snyder, Abraham Z; Raichle, Marcus E; Hornbeck, Russ C; Aldea, Patricia; Morris, John C; Benzinger, Tammie L S

    2015-01-01

    Amyloid PET imaging is an indispensable tool widely used in the investigation, diagnosis and monitoring of Alzheimer's disease (AD). Currently, a reference region based approach is used as the mainstream quantification technique for amyloid imaging. This approach assumes the reference region is amyloid free and has the same tracer influx and washout kinetics as the regions of interest. However, this assumption may not always be valid. The goal of this work is to evaluate an amyloid imaging quantification technique that uses arterial region of interest as the reference to avoid potential bias caused by specific binding in the reference region. 21 participants, age 58 and up, underwent Pittsburgh compound B (PiB) PET imaging and MR imaging including a time-of-flight (TOF) MR angiography (MRA) scan and a structural scan. FreeSurfer based regional analysis was performed to quantify PiB PET data. Arterial input function was estimated based on coregistered TOF MRA using a modeling based technique. Regional distribution volume (VT) was calculated using Logan graphical analysis with estimated arterial input function. Kinetic modeling was also performed using the estimated arterial input function as a way to evaluate PiB binding (DVRkinetic) without a reference region. As a comparison, Logan graphical analysis was also performed with cerebellar cortex as reference to obtain DVRREF. Excellent agreement was observed between the two distribution volume ratio measurements (r>0.89, ICC>0.80). The estimated cerebellum VT was in line with literature reported values and the variability of cerebellum VT in the control group was comparable to reported variability using arterial sampling data. This study suggests that image-based arterial input function is a viable approach to quantify amyloid imaging data, without the need for arterial sampling or a reference region. This technique can be a valuable tool for amyloid imaging, particularly in populations where reference normalization may
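    The Logan graphical step used above can be sketched numerically. This is a hedged illustration with synthetic curves and made-up rate constants, not the paper's PiB pipeline; the helper names (`cumtrapz`, `logan_vt`) are invented. A one-tissue-compartment tissue curve is generated from an assumed arterial input function, and the late-time Logan slope recovers the distribution volume VT = K1/k2:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), same length as y, starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def logan_vt(t, ct, cp, t_star):
    """Late-time slope of the Logan plot estimates the distribution volume V_T."""
    m = t >= t_star
    x = cumtrapz(cp, t)[m] / ct[m]   # integrated plasma / tissue activity
    y = cumtrapz(ct, t)[m] / ct[m]   # integrated tissue / tissue activity
    return np.polyfit(x, y, 1)[0]

# Synthetic one-tissue-compartment data: dCt/dt = K1*cp - k2*ct, so V_T = K1/k2.
dt = 0.01
t = np.arange(0.0, 90.0, dt)                          # minutes
cp = 10.0 * (np.exp(-0.1 * t) - np.exp(-1.0 * t))     # assumed arterial input (a.u.)
K1, k2 = 0.2, 0.1
ct = K1 * dt * np.convolve(cp, np.exp(-k2 * t))[:t.size]

print(logan_vt(t, ct, cp, t_star=30.0))               # close to K1/k2 = 2.0
```

    For a one-tissue model the Logan relation is exact after any start time, which is why the recovered slope matches K1/k2 so closely here; with real, noisy TACs the choice of t* matters.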

  19. Sinusoidal input describing function for hysteresis followed by elementary backlash

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.

    1976-01-01

    The author proposes a new sinusoidal input describing function which accounts for the serial combination of hysteresis followed by elementary backlash in a single nonlinear element. The output of the hysteresis element drives the elementary backlash element. Various analytical forms of the describing function are given, depending on the a/A ratio, where a is the half width of the hysteresis band or backlash gap, and A is the amplitude of the assumed input sinusoid, and on the value of the parameter representing the fraction of a attributed to the backlash characteristic. The negative inverse describing function is plotted on a gain-phase plot, and it is seen that a relatively small amount of backlash leads to domination of the backlash character in the describing function. The extent of the region of the gain-phase plane covered by the describing function is such as to guarantee some form of limit cycle behavior in most closed-loop systems.
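    A sinusoidal-input describing function like the one above can be checked numerically: drive the nonlinear element with a sinusoid and extract the fundamental Fourier component of its output. The sketch below treats the elementary backlash ("play") operator alone, not the paper's combined hysteresis-plus-backlash element, and the function names are illustrative:

```python
import numpy as np

def play(x, a):
    """Elementary backlash with half-gap a: output follows input at a distance a."""
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, x.size):
        y[i] = min(max(y[i - 1], x[i] - a), x[i] + a)
    return y

def sidf(a, A, n=4096, cycles=4):
    """Gain and phase (degrees) of the fundamental of the element's response."""
    t = np.linspace(0.0, cycles * 2 * np.pi, cycles * n, endpoint=False)
    y = play(A * np.sin(t), a)[-n:]          # keep the last (steady) cycle
    ts = t[-n:]
    b1 = 2.0 / n * np.sum(y * np.sin(ts))    # in-phase Fourier coefficient
    a1 = 2.0 / n * np.sum(y * np.cos(ts))    # quadrature coefficient
    N = (b1 + 1j * a1) / A
    return abs(N), np.degrees(np.angle(N))

gain, phase = sidf(a=0.25, A=1.0)
print(gain < 1.0, phase < 0.0)   # backlash attenuates and lags the fundamental
```

    Chaining a hysteresis stage before `play` and repeating the same fundamental extraction would give a numerical counterpart of the combined describing function discussed in the abstract.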

  20. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
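    The core idea, scanning any text input deck for "nominal +/- tolerance" annotations without parsing the file's own structure, can be sketched in a few lines. The field names and file syntax below are invented, not those of LAURA, HARA, or FIAT; each annotated value is resampled uniformly within its tolerance band to produce Monte Carlo input decks:

```python
import re, random

# Match "nominal +/- tolerance", e.g. "5.25 +/- 0.01", anywhere in a line.
TOL = re.compile(r'([-+]?\d+\.?\d*(?:[eE][-+]?\d+)?)\s*\+/-\s*(\d+\.?\d*(?:[eE][-+]?\d+)?)')

def blur(text, rng):
    """Replace each 'nominal +/- tol' with a uniform draw from the interval."""
    def draw(m):
        nom, tol = float(m.group(1)), float(m.group(2))
        return repr(rng.uniform(nom - tol, nom + tol))
    return TOL.sub(draw, text)

deck = "wall_thickness = 5.25 +/- 0.01\nemissivity = 0.82 +/- 0.05\n"
rng = random.Random(0)
samples = [blur(deck, rng) for _ in range(3)]   # three Monte Carlo input decks
print(samples[0])
```

    Because the regex keys only on the tolerance annotation, the same `blur` pass works on any code's input format, which is the portability argument the abstract makes.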

  1. Cumulative distribution function solutions of advection-reaction equations with uncertain parameters.

    PubMed

    Boso, F; Broyda, S V; Tartakovsky, D M

    2014-06-08

    We derive deterministic cumulative distribution function (CDF) equations that govern the evolution of CDFs of state variables whose dynamics are described by the first-order hyperbolic conservation laws with uncertain coefficients that parametrize the advective flux and reactive terms. The CDF equations are subjected to uniquely specified boundary conditions in the phase space, thus obviating one of the major challenges encountered by more commonly used probability density function equations. The computational burden of solving CDF equations is insensitive to the magnitude of the correlation lengths of random input parameters. This is in contrast to both Monte Carlo simulations (MCSs) and direct numerical algorithms, whose computational cost increases as correlation lengths of the input parameters decrease. The CDF equations are, however, not exact because they require a closure approximation. To verify the accuracy and robustness of the large-eddy-diffusivity closure, we conduct a set of numerical experiments which compare the CDFs computed with the CDF equations with those obtained via MCSs. This comparison demonstrates that the CDF equations remain accurate over a wide range of statistical properties of the two input parameters, such as their correlation lengths and variance of the coefficient that parametrizes the advective flux.
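    The Monte Carlo baseline that such CDF equations are verified against can be illustrated on the simplest possible case: a scalar reaction du/dt = -k*u with an uncertain rate k, for which u(t) = u0*exp(-k*t) and the exact CDF of the state is available in closed form. This is a minimal stand-in, not the paper's advection-reaction system:

```python
import numpy as np

rng = np.random.default_rng(1)
k = rng.uniform(0.5, 1.5, size=200_000)      # uncertain reaction rate
u = 1.0 * np.exp(-k * 1.0)                   # state at t = 1 with u0 = 1

# Compare the empirical (MCS) CDF of u against the exact one:
# P(exp(-k) <= x) = P(k >= -ln x) = 1.5 + ln x  on  [exp(-1.5), exp(-0.5)].
x = np.linspace(np.exp(-1.5), np.exp(-0.5), 101)[1:-1]
F_mcs = np.searchsorted(np.sort(u), x) / u.size
F_exact = 1.5 + np.log(x)
print(np.max(np.abs(F_mcs - F_exact)))       # small sup-norm error
```

    The CDF-equation approach replaces this sampling loop with a single deterministic PDE solve, which is why its cost is insensitive to the input correlation lengths that drive up MCS sample counts.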

  2. Cumulative distribution function solutions of advection–reaction equations with uncertain parameters

    PubMed Central

    Boso, F.; Broyda, S. V.; Tartakovsky, D. M.

    2014-01-01

    We derive deterministic cumulative distribution function (CDF) equations that govern the evolution of CDFs of state variables whose dynamics are described by the first-order hyperbolic conservation laws with uncertain coefficients that parametrize the advective flux and reactive terms. The CDF equations are subjected to uniquely specified boundary conditions in the phase space, thus obviating one of the major challenges encountered by more commonly used probability density function equations. The computational burden of solving CDF equations is insensitive to the magnitude of the correlation lengths of random input parameters. This is in contrast to both Monte Carlo simulations (MCSs) and direct numerical algorithms, whose computational cost increases as correlation lengths of the input parameters decrease. The CDF equations are, however, not exact because they require a closure approximation. To verify the accuracy and robustness of the large-eddy-diffusivity closure, we conduct a set of numerical experiments which compare the CDFs computed with the CDF equations with those obtained via MCSs. This comparison demonstrates that the CDF equations remain accurate over a wide range of statistical properties of the two input parameters, such as their correlation lengths and variance of the coefficient that parametrizes the advective flux. PMID:24910529

  3. Using a model to assess the role of the spatiotemporal pattern of inhibitory input and intrasegmental electrical coupling in the intersegmental and side-to-side coordination of motor neurons by the leech heartbeat central pattern generator.

    PubMed

    García, Paul S; Wright, Terrence M; Cunningham, Ian R; Calabrese, Ronald L

    2008-09-01

    Previously we presented a quantitative description of the spatiotemporal pattern of inhibitory synaptic input from the heartbeat central pattern generator (CPG) to segmental motor neurons that drive heartbeat in the medicinal leech and the resultant coordination of CPG interneurons and motor neurons. To begin elucidating the mechanisms of coordination, we explore intersegmental and side-to-side coordination in an ensemble model of all heart motor neurons and their known synaptic inputs and electrical coupling. Model motor neuron intrinsic properties were kept simple, enabling us to determine the extent to which input and electrical coupling acting together can account for observed coordination in the living system in the absence of a substantive contribution from the motor neurons themselves. The living system produces an asymmetric motor pattern: motor neurons on one side fire nearly in synchrony (synchronous), whereas on the other they fire in a rear-to-front progression (peristaltic). The model reproduces the general trends of intersegmental and side-to-side phase relations among motor neurons, but the match with the living system is not quantitatively accurate. Thus realistic (experimentally determined) inputs do not produce similarly realistic output in our model, suggesting that motor neuron intrinsic properties may contribute to their coordination. By varying parameters that determine electrical coupling, conduction delays, intraburst synaptic plasticity, and motor neuron excitability, we show that the most important determinant of intersegmental and side-to-side phase relations in the model was the spatiotemporal pattern of synaptic inputs, although phasing was influenced significantly by electrical coupling.

  4. Network dynamics for optimal compressive-sensing input-signal recovery.

    PubMed

    Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2014-10-01

    By using compressive sensing (CS) theory, a broad class of static signals can be reconstructed through a sequence of very few measurements in the framework of a linear system. For networks with nonlinear and time-evolving dynamics, is it similarly possible to recover an unknown input signal from only a small number of network output measurements? We address this question for pulse-coupled networks and investigate the network dynamics necessary for successful input signal recovery. Determining the specific network characteristics that correspond to a minimal input reconstruction error, we are able to achieve high-quality signal reconstructions with few measurements of network output. Using various measures to characterize dynamical properties of network output, we determine that networks with highly variable and aperiodic output can successfully encode network input information with high fidelity and achieve the most accurate CS input reconstructions. For time-varying inputs, we also find that high-quality reconstructions are achievable by measuring network output over a relatively short time window. Even when network inputs change with time, the same optimal choice of network characteristics and corresponding dynamics apply as in the case of static inputs.
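    The linear-system CS recovery the abstract starts from can be sketched with a generic greedy solver; this is orthogonal matching pursuit on a random Gaussian sensing matrix, not the paper's pulse-coupled network decoder, and all sizes below are made up:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                                  # n unknowns, m measurements
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # random sensing matrix
x_hat = omp(Phi, Phi @ x_true, k)
print(np.linalg.norm(x_hat - x_true))                 # exact recovery is expected here
```

    The paper's question is whether the nonlinear, time-evolving network output can stand in for the rows of `Phi`; the recovery machinery itself is unchanged.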

  5. Network dynamics for optimal compressive-sensing input-signal recovery

    NASA Astrophysics Data System (ADS)

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2014-10-01

    By using compressive sensing (CS) theory, a broad class of static signals can be reconstructed through a sequence of very few measurements in the framework of a linear system. For networks with nonlinear and time-evolving dynamics, is it similarly possible to recover an unknown input signal from only a small number of network output measurements? We address this question for pulse-coupled networks and investigate the network dynamics necessary for successful input signal recovery. Determining the specific network characteristics that correspond to a minimal input reconstruction error, we are able to achieve high-quality signal reconstructions with few measurements of network output. Using various measures to characterize dynamical properties of network output, we determine that networks with highly variable and aperiodic output can successfully encode network input information with high fidelity and achieve the most accurate CS input reconstructions. For time-varying inputs, we also find that high-quality reconstructions are achievable by measuring network output over a relatively short time window. Even when network inputs change with time, the same optimal choice of network characteristics and corresponding dynamics apply as in the case of static inputs.

  6. 7 CFR 3430.907 - Stakeholder input.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... ADMINISTRATIVE PROVISIONS New Era Rural Technology Competitive Grants Program § 3430.907 Stakeholder input..., for technology development, applied research, and/or training....

  7. Multi-input distributed classifiers for synthetic genetic circuits.

    PubMed

    Kanakov, Oleg; Kotelnikov, Roman; Alsaedi, Ahmed; Tsimring, Lev; Huerta, Ramón; Zaikin, Alexey; Ivanchenko, Mikhail

    2015-01-01

    For practical construction of complex synthetic genetic networks able to perform elaborate functions it is important to have a pool of relatively simple modules with different functionality which can be compounded together. To complement engineering of very different existing synthetic genetic devices such as switches, oscillators or logical gates, we propose and develop here a design of synthetic multi-input classifier based on a recently introduced distributed classifier concept. A heterogeneous population of cells acts as a single classifier, whose output is obtained by summarizing the outputs of individual cells. The learning ability is achieved by pruning the population, instead of tuning parameters of an individual cell. The present paper is focused on evaluating two possible schemes of multi-input gene classifier circuits. We demonstrate their suitability for implementing a multi-input distributed classifier capable of separating data which are inseparable for single-input classifiers, and characterize performance of the classifiers by analytical and numerical results. The simpler scheme implements a linear classifier in a single cell and is targeted at separable classification problems with simple class borders. A hard learning strategy is used to train a distributed classifier by removing from the population any cell answering incorrectly to at least one training example. The other scheme implements a circuit with a bell-shaped response in a single cell to allow potentially arbitrary shape of the classification border in the input space of a distributed classifier. Inseparable classification problems are addressed using soft learning strategy, characterized by probabilistic decision to keep or discard a cell at each training iteration. We expect that our classifier design will contribute to the development of robust and predictable synthetic biosensors, with potential applications in many fields, including medicine and industry.
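    The "hard learning" strategy, discarding every cell that answers any training example incorrectly, can be sketched in silico. This toy uses a population of random linear classifiers with invented parameters (no gene-circuit modeling); the training labels come from one "teacher" cell so that at least one perfect classifier is guaranteed to exist:

```python
import numpy as np

rng = np.random.default_rng(2)

n_cells, n_train = 1000, 50
W = rng.normal(size=(n_cells, 2))            # random per-cell input weights
b = rng.uniform(-2, 2, size=n_cells)         # random per-cell thresholds
X = rng.uniform(0, 1, size=(n_train, 2))     # two-input training stimuli

answers = (X @ W.T + b > 0).astype(int)      # each cell's answer per example
labels = answers[:, 0]                       # ground truth from a "teacher" cell

keep = (answers == labels[:, None]).all(axis=0)   # hard pruning of the population
print(f"{keep.sum()} of {n_cells} cells survive pruning")

# By construction the pruned population classifies every training example correctly.
print(((W[keep] @ X.T + b[keep][:, None] > 0).astype(int) == labels).all())
```

    The soft learning variant in the abstract would replace the all-or-nothing `keep` mask with a probabilistic keep/discard decision at each training iteration.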

  8. Adaptive approximation method for joint parameter estimation and identical synchronization of chaotic systems.

    PubMed

    Mariño, Inés P; Míguez, Joaquín

    2005-11-01

    We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods.
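    The flavor of the scheme, recursively minimizing a prediction cost so that the only adjusted quantity is the unknown parameter, can be shown on a much simpler system than the Lorenz equations. This sketch uses the logistic map and an LMS-style gradient step on the one-step prediction error; it is an analogue of the idea, not the paper's synchronization-based algorithm:

```python
import numpy as np

# Primary system: logistic map with an unknown parameter r_true.
r_true = 3.9
x = np.empty(2000)
x[0] = 0.3
for n in range(x.size - 1):
    x[n + 1] = r_true * x[n] * (1.0 - x[n])   # observed scalar time series

# Secondary system: same map with adjustable r_hat; recursively descend
# the squared one-step prediction error with step size mu.
r_hat, mu = 2.0, 0.5
for n in range(x.size - 1):
    f = x[n] * (1.0 - x[n])
    err = x[n + 1] - r_hat * f                # prediction error drives the update
    r_hat += mu * err * f

print(r_hat)   # converges toward r_true = 3.9
```

    As in the paper, the update touches only the unknown parameter; once the prediction error vanishes the two systems generate identical trajectories.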

  9. Computing functions by approximating the input

    NASA Astrophysics Data System (ADS)

    Goldberg, Mayer

    2012-12-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.

  10. Combined LTP and LTD of modulatory inputs controls neuronal processing of primary sensory inputs.

    PubMed

    Doiron, Brent; Zhao, Yanjun; Tzounopoulos, Thanos

    2011-07-20

    A hallmark of brain organization is the integration of primary and modulatory pathways by principal neurons. However, the pathway interactions that shape primary input processing remain unknown. We investigated this problem in mouse dorsal cochlear nucleus (DCN) where principal cells integrate primary, auditory nerve input with modulatory, parallel fiber input. Using a combined experimental and computational approach, we show that combined LTP and LTD of parallel fiber inputs to DCN principal cells and interneurons, respectively, broaden the time window within which synaptic inputs summate. Enhanced summation depolarizes the resting membrane potential and thus lowers the response threshold to auditory nerve inputs. Combined LTP and LTD, by preserving the variance of membrane potential fluctuations and the membrane time constant, fixes response gain and spike latency as threshold is lowered. Our data reveal a novel mechanism mediating adaptive and concomitant homeostatic regulation of distinct features of neuronal processing of sensory inputs.

  11. Subsonic flight test evaluation of a propulsion system parameter estimation process for the F100 engine

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Gilyard, Glenn B.

    1992-01-01

    Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
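    The estimator's structure, a Kalman filter tracking component deviation parameters from measured inputs, can be reduced to a scalar stand-in. The numbers below are invented and the filter is linear, whereas the paper's estimator is an extended Kalman filter over five engine deviation parameters feeding a compact propulsion model:

```python
import numpy as np

rng = np.random.default_rng(3)
d_true, R = 0.8, 0.25          # true deviation parameter, measurement noise variance
y = d_true + rng.normal(0, np.sqrt(R), size=200)   # noisy measurements

d_hat, P = 0.0, 1e3            # initial estimate and its (large) variance
for yk in y:                   # constant state, no process noise, F = 1
    K = P / (P + R)            # Kalman gain
    d_hat += K * (yk - d_hat)  # measurement update
    P *= (1.0 - K)             # variance shrinks as evidence accumulates

print(d_hat)                   # close to d_true = 0.8
```

    In the flight system, the filtered deviation estimates are then fed to the compact model to predict unmeasured quantities such as net propulsive force and fan stall margin.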

  12. Input filter compensation for switching regulators

    NASA Technical Reports Server (NTRS)

    Kelkar, S. S.; Lee, F. C.

    1983-01-01

    A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.

  13. Influential input classification in probabilistic multimedia models

    SciTech Connect

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.; Geng, Shu

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
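    A minimal version of such an importance screen can be sketched with rank correlations between Monte Carlo input samples and the model output. The fate model below is a stand-in expression, not a multimedia model, and all distributions are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
# Four lognormal inputs with very different spreads.
X = rng.lognormal(mean=0.0, sigma=[1.0, 0.5, 0.1, 0.01], size=(n, 4))

# Stand-in model: output dominated by inputs 0 and 1.
y = X[:, 0] * X[:, 1] + 0.1 * X[:, 2] + 0.01 * X[:, 3]

def rank(a):
    """Ranks 0..n-1 of the entries of a (no ties for continuous samples)."""
    return np.argsort(np.argsort(a))

# Absolute Spearman-style rank correlation of each input with the output.
rho = np.array([abs(np.corrcoef(rank(X[:, j]), rank(y))[0, 1]) for j in range(4)])
order = np.argsort(rho)[::-1]
print(order)   # inputs sorted from most to least influential
```

    Inputs at the bottom of the ranking can be frozen at point values, leaving the "minimum set of stochastic inputs" the abstract argues is sufficient to characterize the outcome variance.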

  14. LCA of emerging technologies: addressing high uncertainty on inputs' variability when performing global sensitivity analysis.

    PubMed

    Lacirignola, Martino; Blanc, Philippe; Girard, Robin; Pérez-López, Paula; Blanc, Isabelle

    2017-02-01

    In the life cycle assessment (LCA) context, global sensitivity analysis (GSA) has been identified by several authors as a relevant practice to enhance the understanding of the model's structure and ensure reliability and credibility of the LCA results. GSA allows establishing a ranking among the input parameters, according to their influence on the variability of the output. Such feature is of high interest in particular when aiming at defining parameterized LCA models. When performing a GSA, the description of the variability of each input parameter may affect the results. This aspect is critical when studying new products or emerging technologies, where data regarding the model inputs are very uncertain and may cause misleading GSA outcomes, such as inappropriate input rankings. A systematic assessment of this sensitivity issue is now proposed. We develop a methodology to analyze the sensitivity of the GSA results (i.e. the stability of the ranking of the inputs) with respect to the description of such inputs of the model (i.e. the definition of their inherent variability). With this research, we aim at enriching the debate on the application of GSA to LCAs affected by high uncertainties. We illustrate its application with a case study, aiming at the elaboration of a simple model expressing the life cycle greenhouse gas emissions of enhanced geothermal systems (EGS) as a function of few key parameters. Our methodology allows identifying the key inputs of the LCA model, taking into account the uncertainty related to their description.

  15. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as, the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to the model developers, analysts, and end users for assessing the MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS application. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  16. More-Accurate Model of Flows in Rocket Injectors

    NASA Technical Reports Server (NTRS)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  17. An Accurate and Dynamic Computer Graphics Muscle Model

    NASA Technical Reports Server (NTRS)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  18. The Building Loads Analysis System Thermodynamics (BLAST) Program, Version 2.0: Input Booklet.

    DTIC Science & Technology

    1979-06-01


  19. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

    NASA Astrophysics Data System (ADS)

    Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

    2012-01-01

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum

  20. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies.

    PubMed

    Winant, Celeste D; Aparici, Carina Mari; Zelnik, Yuval R; Reutter, Bryan W; Sitek, Arkadiusz; Bacharach, Stephen L; Gullberg, Grant T

    2012-01-21

    Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic (94)Tc-methoxyisobutylisonitrile ((94)Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K(1) for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K(1). For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from (94)Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of (99m)Tc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The
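    The final fitting step in these studies, estimating the wash-in rate K1 from blood-input and tissue TACs with a one-compartment model, can be sketched on synthetic curves. The input function, rate constants, and noise level below are made up; K1 enters the model linearly, so each candidate wash-out rate k2 is scanned and K1 solved in closed form:

```python
import numpy as np

dt = 0.05
t = np.arange(0.0, 30.0, dt)                       # minutes
cp = 8.0 * t * np.exp(-1.5 * t)                    # assumed blood input TAC (a.u.)

def tissue(K1, k2):
    """One-compartment model: ct = K1 * (cp convolved with exp(-k2*t))."""
    return K1 * dt * np.convolve(cp, np.exp(-k2 * t))[:t.size]

rng = np.random.default_rng(5)
ct = tissue(1.1, 0.3) + rng.normal(0, 0.01, t.size)   # noisy "measured" tissue TAC

best = None
for k2 in np.arange(0.05, 1.0, 0.005):                # scan the wash-out rate
    basis = dt * np.convolve(cp, np.exp(-k2 * t))[:t.size]
    K1 = basis @ ct / (basis @ basis)                 # least-squares K1 for this k2
    sse = np.sum((ct - K1 * basis) ** 2)
    if best is None or sse < best[0]:
        best = (sse, K1, k2)

_, K1_hat, k2_hat = best
print(K1_hat, k2_hat)   # near the true (1.1, 0.3)
```

    In the studies above the same kind of fit is repeated for TACs reconstructed at each rotation speed, so that errors in the recovered K1 quantify the cost of slow camera rotation.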

  1. An accurate metric for the spacetime around rotating neutron stars.

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-01-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parameterisation of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a 3-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  2. Fast and accurate estimation for astrophysical problems in large databases

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  3. Interaction between rod and cone inputs in mixed-input bipolar cells in goldfish retina.

    PubMed

    Joselevitch, Christina; Kamermans, Maarten

    2007-05-15

    One class of goldfish bipolar cells, the mixed-input bipolar cell, contacts both rods and cones. Although the morphology of the different mixed-input bipolar cell subtypes has been described, insight into the interaction between rods and cones at the bipolar cell level is scarce. The aim of this study was to characterize this interaction in the different physiological types of mixed-input bipolar cells. We found mixed-input bipolar cells that depolarized, hyperpolarized, or showed a combination of the two types of response after center stimulation. The relative contributions of rod and cone inputs varied strongly in these cell populations. Depolarizing mixed-input bipolar cells are rod-dominated, having the highest sensitivity and the smallest dynamic range. Hyperpolarizing mixed-input bipolar cells, on the other hand, have a more balanced rod-cone input ratio. This extends their dynamic range and decreases their sensitivity. Finally, opponent mixed-input bipolar cells seem to be mostly cone-dominated, although some rod input is present. The antagonistic photoreceptor inputs form a push-pull system that makes these mixed-input bipolar cells very sensitive to changes in light intensity. Our finding that spectral tuning changes with light intensity conflicts with the idea that the separate non-opponent and opponent channels are related to coding of brightness and color, respectively. The organization of mixed-input bipolar cells into various classes with different dynamic ranges and absolute sensitivities might be a strategy to transmit information about all visual aspects most efficiently, given the sustained nature of bipolar cell responses and their limited voltage range.

  4. Optimizing Input/Output Using Adaptive File System Policies

    NASA Technical Reports Server (NTRS)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.

  5. Assessment of the MODIS Algorithm for Retrieval of Aerosol Parameters over the Ocean

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Li, W.; Stamnes, K.; Eide, H.; Spurr, R.; Tsay, S.

    2006-12-01

    The MODIS aerosol algorithm over the ocean derives spectral aerosol optical depth and aerosol size parameters from satellite measured radiances at the top of atmosphere (TOA). It is based on the addition of Apparent Optical Properties (AOPs): TOA reflectance is approximated as a linear combination of reflectance resulting from a small particle mode and a large particle mode. The weighting parameter is defined as the fraction of the optical depth at 550 nm due to the small mode. The AOP approach is correct only in the single scattering limit. For a physically correct TOA reflectance simulation, we create linear combinations of the Inherent Optical Properties (IOPs) of small and large particle modes, in which the weighting parameter is defined as the fraction of the number density attributed to the small particle mode. We use these IOPs as inputs to an accurate multiple scattering radiative transfer model. We show that the use of accurate radiative transfer simulations and weighting parameters as used in the IOP approach yields more satisfactory results for the retrieved aerosol optical depth and the size parameters.
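    The contrast between adding AOPs and adding IOPs can be made concrete. Below is a hedged sketch of IOP mixing in which the weighting parameter scales the per-mode optical depths (a simplification of the number-density weighting described in the abstract; the asymmetry parameter stands in for the full phase function, and all numbers are invented):

```python
import numpy as np

def mix_iops(eta, tau_s, omega_s, g_s, tau_l, omega_l, g_l):
    """Linear combination of small/large-mode inherent optical properties."""
    tau = eta * tau_s + (1 - eta) * tau_l          # extinction adds directly
    # single-scattering albedo: weight by optical depth
    omega = (eta * tau_s * omega_s + (1 - eta) * tau_l * omega_l) / tau
    # asymmetry parameter: weight by scattering (tau * omega)
    g = (eta * tau_s * omega_s * g_s
         + (1 - eta) * tau_l * omega_l * g_l) / (tau * omega)
    return tau, omega, g

# The mixed IOPs would feed a multiple-scattering radiative transfer model
tau, omega, g = mix_iops(0.4, 0.10, 0.95, 0.60, 0.25, 0.90, 0.75)
print(tau, omega, g)
```

    Note that the mixed albedo and asymmetry fall between the two mode values, as they must; an AOP (reflectance) combination has no such guarantee once multiple scattering matters.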

  6. Rapid Bayesian point source inversion using pattern recognition --- bridging the gap between regional scaling relations and accurate physical modelling

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.

    2014-12-01

    Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain from first observations to potential end users and decision makers is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d) --- the distribution of source parameters m given observations d --- which can be evaluated quickly for new data. Owing to the flexibility of the pattern
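    The train-once, evaluate-fast idea can be illustrated with a toy forward model and a nearest-neighbour lookup standing in for the supervised learner (the waveform model and the single source parameter are invented; the paper's actual learner and Green's-function database are far richer):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 64)

def synth(m):
    # Hypothetical forward model: a waveform controlled by one source parameter
    return np.sin(2 * np.pi * (1 + m) * t) * np.exp(-3 * t)

# One-off "training": precompute noisy synthetics over the parameter range
m_train = rng.uniform(0, 1, 500)
X = np.array([synth(m) for m in m_train]) + 0.01 * rng.standard_normal((500, 64))

# Fast evaluation for new data: pick the nearest synthetic waveform
d_new = synth(0.37) + 0.01 * rng.standard_normal(64)
m_hat = m_train[np.argmin(((X - d_new) ** 2).sum(axis=1))]
print(m_hat)
```

    A Bayesian version would return a full posterior over m rather than a point estimate, which is the key advantage the abstract describes.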

  7. Six axis force feedback input device

    NASA Technical Reports Server (NTRS)

    Ohm, Timothy (Inventor)

    1998-01-01

    The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.

  8. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  9. Synchronization phenomena in pulse-coupled networks driven by spike-train inputs.

    PubMed

    Torikai, Hiroyuki; Saito, Toshimichi

    2004-03-01

    We present a pulse-coupled network (PCN) of spiking oscillators (SOCs) which can be implemented as a simple electrical circuit. The SOC has a periodic reset level that can realize rich dynamics represented by chaotic spike-trains. Applying a spike-train input, the PCN can exhibit the following interesting phenomena. 1) Each SOC synchronizes with a part of the input without overlapping, i.e., the input is decomposed. 2) Some SOCs synchronize with a part of the input with overlapping, i.e., the input is decomposed and the SOCs are clustered. The PCN has multiple synchronization phenomena and exhibits one of them depending on the initial state. We clarify the numbers of the synchronization phenomena and the parameter regions in which these phenomena can be observed. Also stability of the synchronization phenomena is clarified. Presenting a simple test circuit, typical phenomena are confirmed experimentally.
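    A much-simplified integrate-and-fire sketch of synchronization to a spike-train input (this is not the paper's SOC circuit with a periodic reset level; the periods and thresholds are invented to show entrainment only):

```python
import numpy as np

dt, T = 0.001, 2.0                     # s
natural_period = 0.4                   # s, free-running firing period
input_period = 0.33                    # s, faster input spike train
spikes_in = {round(k * input_period / dt)
             for k in range(1, int(T / input_period) + 1)}

x, thresh = 0.0, 1.0
out_times = []
for n in range(int(T / dt)):
    x += dt / natural_period           # linear ramp toward threshold
    if x >= thresh or n in spikes_in:  # input spikes also reset the state
        if x >= 0.7:                   # fire only if sufficiently charged
            out_times.append(n * dt)
        x = 0.0

isi = np.diff(out_times)
print(isi)                             # output locks to the input period
```

    Because the input arrives before the natural firing time, every output spike is triggered by an input spike and the inter-spike intervals lock to the input period.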

  10. Accurate measurement of the pulse wave delay with imaging photoplethysmography

    PubMed Central

    Kamshilin, Alexei A.; Sidorov, Igor S.; Babayan, Laura; Volynsky, Maxim A.; Giniatullin, Rashid; Mamontov, Oleg V.

    2016-01-01

    Assessment of the cardiovascular parameters using noncontact video-based or imaging photoplethysmography (IPPG) is usually considered as inaccurate because of strong influence of motion artefacts. To optimize this technique we performed a simultaneous recording of electrocardiogram and video frames of the face for 36 healthy volunteers. We found that signal disturbances originate mainly from the stochastically enhanced dicrotic notch caused by endogenous cardiovascular mechanisms, with smaller contribution of the motion artefacts. Our properly designed algorithm allowed us to increase accuracy of the pulse-transit-time measurement and visualize propagation of the pulse wave in the facial region. Thus, the accurate measurement of the pulse wave parameters with this technique suggests a sensitive approach to assess local regulation of microcirculation in various physiological and pathological states. PMID:28018731
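    Pulse-transit-time estimation of the kind described can be sketched as a cross-correlation delay measurement between a reference waveform and a delayed, noisy copy (the waveform shape, frame rate, and delay below are synthetic):

```python
import numpy as np

fs = 100                                     # Hz, hypothetical camera frame rate
t = np.arange(0, 10, 1 / fs)
pulse = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0) ** 3  # pulse-like waveform

true_delay = 0.12                            # s, later arrival at the face
ecg_ref = pulse
ppg = np.interp(t - true_delay, t, pulse)    # delayed copy of the reference
ppg = ppg + 0.02 * np.random.default_rng(5).standard_normal(len(t))

# Delay = lag of the cross-correlation peak
lags = np.arange(-len(t) + 1, len(t))
xc = np.correlate(ppg - ppg.mean(), ecg_ref - ecg_ref.mean(), mode="full")
delay = lags[np.argmax(xc)] / fs
print(delay)
```

    With `mode="full"`, the lag of the correlation peak directly gives the delay of the first signal relative to the second in samples.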

  11. Sensitive and accurate identification of protein–DNA binding events in ChIP-chip assays using higher order derivative analysis

    PubMed Central

    Barrett, Christian L.; Cho, Byung-Kwan

    2011-01-01

    Immuno-precipitation of protein–DNA complexes followed by microarray hybridization is a powerful and cost-effective technology for discovering protein–DNA binding events at the genome scale. It is still an unresolved challenge to comprehensively, accurately and sensitively extract binding event information from the produced data. We have developed a novel strategy composed of an information-preserving signal-smoothing procedure, higher order derivative analysis and application of the principle of maximum entropy to address this challenge. Importantly, our method does not require any input parameters to be specified by the user. Using genome-scale binding data of two Escherichia coli global transcription regulators for which a relatively large number of experimentally supported sites are known, we show that ∼90% of known sites were resolved to within four probes, or ∼88 bp. Over half of the sites were resolved to within two probes, or ∼38 bp. Furthermore, we demonstrate that our strategy delivers significant quantitative and qualitative performance gains over available methods. Such accurate and sensitive binding site resolution has important consequences for accurately reconstructing transcriptional regulatory networks, for motif discovery, for furthering our understanding of local and non-local factors in protein–DNA interactions and for extending the usefulness horizon of the ChIP-chip platform. PMID:21051353
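    The smoothing-plus-derivative idea can be sketched with a moving average and numerical derivatives (a crude stand-in for the paper's information-preserving smoothing and maximum-entropy steps; the probe signal is synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(500)
# Synthetic probe-level signal: two binding "peaks" plus noise
signal = (np.exp(-(x - 150) ** 2 / 200) + 0.8 * np.exp(-(x - 340) ** 2 / 200)
          + 0.02 * rng.standard_normal(500))

smooth = np.convolve(signal, np.ones(9) / 9, mode="same")

# Peak calls: first derivative crosses zero downward with a negative
# second derivative, above a minimal enrichment threshold
d1 = np.gradient(smooth)
d2 = np.gradient(d1)
peaks = [i for i in range(1, len(x) - 1)
         if d1[i - 1] > 0 >= d1[i] and d2[i] < 0 and smooth[i] > 0.3]
print(peaks)
```

    The paper's contribution is doing this without user-set thresholds; the `0.3` cutoff here is exactly the kind of parameter their maximum-entropy step removes.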

  12. Decontextualized language input and preschoolers' vocabulary development.

    PubMed

    Rowe, Meredith L

    2013-11-01

    This article discusses the importance of using decontextualized language, or language that is removed from the here and now including pretend, narrative, and explanatory talk, with preschool children. The literature on parents' use of decontextualized language is reviewed and results of a longitudinal study of parent decontextualized language input in relation to child vocabulary development are explained. The main findings are that parents who provide their preschool children with more explanations and narrative utterances about past or future events in the input have children with larger vocabularies 1 year later, even with quantity of parent input and child prior vocabulary skill controlled. Recommendations for how to engage children in decontextualized language conversations are provided.

  13. On Markov parameters in system identification

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Longman, Richard W.

    1991-01-01

    A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
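    The relation described, that sampled pulse-response values equal the Markov parameters Y_0 = D and Y_k = C A^(k-1) B for k >= 1, can be checked directly; the 2-state model below is an arbitrary example:

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])

def markov_parameters(A, B, C, D, n):
    # Y_0 = D, Y_k = C A^(k-1) B for k >= 1
    Y, Ak = [D.item()], np.eye(A.shape[0])
    for _ in range(n - 1):
        Y.append((C @ Ak @ B).item())
        Ak = A @ Ak
    return Y

def pulse_response(A, B, C, D, n):
    # Response to a unit pulse at k = 0 from zero initial state
    x, y = np.zeros((A.shape[0], 1)), []
    for k in range(n):
        u = 1.0 if k == 0 else 0.0
        y.append((C @ x + D * u).item())
        x = A @ x + B * u
    return y

print(markov_parameters(A, B, C, D, 6))
print(pulse_response(A, B, C, D, 6))   # identical sequences
```

    The observer/Kalman filter Markov parameters discussed in the paper generalize this same sequence to closed-loop (observer) dynamics.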

  14. Effects of input uncertainty on cross-scale crop modeling

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
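    The kind of error-propagation estimate attributed to Fodor & Kovacs can be mimicked with a Monte Carlo sketch on a toy yield model (the response function below is invented; APSIM and LPJmL are far more complex):

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_yield(temp, rad, precip):
    # Hypothetical response: multiplicative in radiation/precipitation,
    # Gaussian temperature optimum at 25 deg C
    return rad * precip * np.exp(-((temp - 25.0) / 8.0) ** 2)

base = toy_yield(24.0, 20.0, 600.0)

n = 20000
temp = 24.0 + rng.uniform(-0.2, 0.2, n)             # +/- 0.2 deg C
rad = 20.0 * (1 + rng.uniform(-0.02, 0.02, n))      # +/- 2 %
precip = 600.0 * (1 + rng.uniform(-0.03, 0.03, n))  # +/- 3 %

rel_uncertainty = np.std(toy_yield(temp, rad, precip)) / base
print(rel_uncertainty)
```

    For this toy model the quoted measurement errors propagate to a yield uncertainty of a few per cent, the same order as the 5-7 % cited above.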

  15. How accurately can 21cm tomography constrain cosmology?

    NASA Astrophysics Data System (ADS)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk≈0.0002 and Δmν≈0.007eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  16. An update of input instructions to TEMOD

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The theory and operation of a FORTRAN 4 computer code, designated as TEMOD, used to calculate tubular thermoelectric generator performance is described in WANL-TME-1906. The original version of TEMOD was developed in 1969. A description is given of additions to the mathematical model and an update of the input instructions to the code. Although the basic mathematical model described in WANL-TME-1906 has remained unchanged, a substantial number of input/output options were added to allow completion of module performance parametrics as required in support of the compact thermoelectric converter system technology program.

  17. Prediction of States of Discrete Systems with Unknown Input of the Model Using Compensation

    NASA Astrophysics Data System (ADS)

    Smagin, V. I.

    2017-01-01

    The problem of state prediction for linear dynamic systems with discrete time is considered in the presence of unknown input and inaccurately specified parameters in the model. An algorithm with compensation for the constant component and estimation of the unknown variable input component by the least squares method is suggested. Results of statistical simulation are presented. The algorithm can be used for solving problems of processing information obtained as a result of observations over physical processes.
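    The compensation idea, estimating the constant unknown input from one-step residuals by least squares and adding it back into the predictor, reduces in the scalar case to a short sketch (the system and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
a, q_true = 0.9, 2.0                  # known dynamics, unknown constant input

x = [0.0]
for _ in range(200):
    x.append(a * x[-1] + q_true + 0.05 * rng.standard_normal())
x = np.array(x)

# Least-squares estimate of the unknown input from one-step residuals
q_hat = np.mean(x[1:] - a * x[:-1])

# Compensated one-step prediction
pred = a * x[:-1] + q_hat
rmse = np.sqrt(np.mean((pred - x[1:]) ** 2))
print(q_hat, rmse)
```

    Without the compensation term the predictor would carry a persistent bias of roughly q/(1-a); with it, the prediction error is reduced to the process-noise level.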

  18. Effects of bias in solar radiation inputs on ecosystem model performance

    NASA Astrophysics Data System (ADS)

    Asao, Shinichi; Sun, Zhibin; Gao, Wei

    2015-09-01

    Solar radiation inputs drive many processes in terrestrial ecosystem models. The processes (e.g. photosynthesis) account for most of the fluxes of carbon and water cycling in the models. It is thus clear that errors in solar radiation inputs cause key model outputs to deviate from observations, parameters to become suboptimal, and model predictions to lose credibility. However, errors in solar radiation inputs are unavoidable for most model predictions since models are often run with observations with spatial and/or temporal gaps. As modeled processes are non-linear and interacting with each other, it is unclear how much confidence most model predictions merit without examining the effects of those errors on the model performance. In this study, we examined the effects using a terrestrial ecosystem model, DayCent. DayCent was parameterized for annual grassland in California with six years of daily eddy covariance data totaling 15,337 data points. Using observed solar radiation values, we introduced bias at four different levels. We then simultaneously calibrated 48 DayCent parameters through inverse modeling using the PEST parameter estimation software. The bias in solar radiation inputs affected the calibration only slightly and preserved model performance. Bias slightly worsened simulations of water flux, but did not affect simulations of CO2 fluxes. This arose from a distinct parameter set for each bias level, and the parameter sets were surprisingly unconstrained by the extensive observations. We conclude that ecosystem models perform relatively well even with substantial bias in solar radiation inputs. However, model parameters and predictions warrant skepticism because model parameters can accommodate biases in input data despite extensive observations.
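    The conclusion that calibrated parameters can absorb input bias is easy to reproduce in miniature with a one-parameter linear "model" (this is an illustration, not DayCent):

```python
import numpy as np

rng = np.random.default_rng(3)
radiation = rng.uniform(5, 30, 365)                          # true daily radiation
p_true = 0.8
flux = p_true * radiation + 0.1 * rng.standard_normal(365)   # "observations"

for bias in (0.0, 0.1, 0.2):                          # fractional input bias
    r_biased = radiation * (1 + bias)
    p_hat = (r_biased @ flux) / (r_biased @ r_biased)  # least-squares slope
    rmse = np.sqrt(np.mean((p_hat * r_biased - flux) ** 2))
    print(bias, round(p_hat, 3), round(rmse, 3))
```

    The fit quality (RMSE) stays essentially unchanged while the calibrated parameter shifts to roughly p/(1+bias): the parameter soaks up the input bias, exactly the behaviour the study reports.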

  19. An efficient and accurate model of the coax cable feeding structure for FEM simulations

    NASA Technical Reports Server (NTRS)

    Gong, Jian; Volakis, John L.

    1995-01-01

    An efficient and accurate coax cable feed model is proposed for microstrip or cavity-backed patch antennas in the context of a hybrid finite element method (FEM). A TEM mode at the cavity-cable junction is assumed for the FEM truncation and system excitation. Of importance in this implementation is that the cavity unknowns are related to the model fields by enforcing an equipotential condition rather than field continuity. This scheme proved quite accurate and may be applied to other decomposed systems as a connectivity constraint. Comparisons of our predictions with input impedance measurements are presented and demonstrate the substantially improved accuracy of the proposed model.

  20. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  1. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  2. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    NASA Astrophysics Data System (ADS)

    Liao, Qifeng; Lin, Guang

    2016-07-01

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
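    The decomposition of a high-dimensional input into unions of low-dimensional ones can be illustrated with a first-order anchored ANOVA (cut-HDMR) sketch; this omits the collocation and reduced-basis machinery of the paper, and the test function is invented:

```python
import numpy as np

d = 6
anchor = np.full(d, 0.5)                     # anchor point in [0,1]^d

def f(x):
    # Hypothetical model output over 6 random inputs
    return np.exp(0.3 * np.sum(x[:3])) + np.sin(x[4])

f0 = f(anchor)

def f_i(i, xi):
    # First-order ANOVA term: vary one input, others fixed at the anchor
    x = anchor.copy()
    x[i] = xi
    return f(x) - f0

rng = np.random.default_rng(4)
xt = rng.uniform(0, 1, d)
approx = f0 + sum(f_i(i, xt[i]) for i in range(d))
print(approx, f(xt))                         # close, using only 1-D evaluations
```

    Each term requires only one-dimensional sampling, which is what makes the subsequent stochastic collocation tractable in high dimensions.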

  3. Investigation of Input Signal Curve Effect on Formed Pulse of Hydraulic-Powered Pulse Machine

    NASA Astrophysics Data System (ADS)

    Novoseltseva, M. V.; Masson, I. A.; Pashkov, E. N.

    2016-04-01

    Well drilling machines should have as high an efficiency factor as possible. This work examines factors that are affected by changes in the input signal pulse curve. A series of runs is conducted on a mathematical model of a hydraulic-powered pulse machine, created in Simulink Matlab. From these runs, interrelations between the input pulse curve and construction parameters are found. Keywords - mathematical modelling; impact machine; output signal amplitude; input signal curve.

  4. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    SciTech Connect

    Liao, Qifeng; Lin, Guang

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  5. Field measurement of moisture-buffering model inputs for residential buildings

    SciTech Connect

    Woods, Jason; Winkler, Jon

    2016-02-05

    Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term—the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
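    The "measure every term, solve for the one unmeasured term" step is a simple mass balance; with hypothetical hourly numbers (kg of water vapor):

```python
# Whole-house moisture balance over one hour (all values hypothetical):
# storage change = infiltration + generation - condensate - sorption
infiltration = 0.30   # kg brought in by air exchange (measured)
generation = 0.20     # kg from internal sources (measured)
condensate = 0.25     # kg removed by the air conditioner (measured)
storage = -0.05       # kg change of moisture in indoor air (from RH/T sensors)

# Solve for the only unmeasured term: sorption into materials
sorption = infiltration + generation - condensate - storage
print(sorption)       # kg absorbed by building materials this hour
```

    Repeating this under the square-wave relative-humidity profile yields the sorption time series from which the EMPD model inputs are fitted.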

  7. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  8. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.

  9. A new and accurate continuum description of moving fronts

    NASA Astrophysics Data System (ADS)

    Johnston, S. T.; Baker, R. E.; Simpson, M. J.

    2017-03-01

    Processes that involve moving fronts of populations are prevalent in ecology and cell biology. A common approach to describe these processes is a lattice-based random walk model, which can include mechanisms such as crowding, birth, death, movement and agent–agent adhesion. However, these models are generally analytically intractable and it is computationally expensive to perform sufficiently many realisations of the model to obtain an estimate of average behaviour that is not dominated by random fluctuations. To avoid these issues, both mean-field (MF) and corrected mean-field (CMF) continuum descriptions of random walk models have been proposed. However, both continuum descriptions are inaccurate outside of limited parameter regimes, and CMF descriptions cannot be employed to describe moving fronts. Here we present an alternative description in terms of the dynamics of groups of contiguous occupied lattice sites and contiguous vacant lattice sites. Our description provides an accurate prediction of the average random walk behaviour in all parameter regimes. Critically, our description accurately predicts the persistence or extinction of the population in situations where previous continuum descriptions predict the opposite outcome. Furthermore, unlike traditional MF models, our approach provides information about the spatial clustering within the population and, subsequently, the moving front.

  10. Accurate measurement of streamwise vortices using dual-plane PIV

    NASA Astrophysics Data System (ADS)

    Waldman, Rye M.; Breuer, Kenneth S.

    2012-11-01

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles, which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions from such measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight is a major challenge for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominantly out-of-plane flow, which requires thick laser sheets and short inter-frame times that increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we present a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurement of the aerodynamic forces via a load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.

  11. An Accurate and Efficient Method of Computing Differential Seismograms

    NASA Astrophysics Data System (ADS)

    Hu, S.; Zhu, L.

    2013-12-01

    Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P wave velocity, shear wave velocity and density). We then derived the partial derivatives of surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient and the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of results by comparing with differential seismograms computed using the finite differences method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
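
The verification step described above, comparing an analytical derivative against a numerical one, can be sketched generically. This is an illustrative helper, not the authors' code; the function being differentiated stands in for a seismogram as a function of one layer parameter.

```python
def central_difference(f, x, h=1e-6):
    """Central finite difference: a numerical check on an analytical
    partial derivative, e.g. d(seismogram)/d(layer velocity).
    Second-order accurate in h."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

An analytical derivative that agrees with this estimate as h shrinks is strong evidence that the derivation is correct, which is exactly the check the authors perform.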

  12. Comprehensible Input and Second Language Acquisition: What Is the Relationship?

    ERIC Educational Resources Information Center

    Loschky, Lester

    1994-01-01

    Examined the influence of input and interactional modifications on second-language acquisition, assigning 41 learners of Japanese to 1 of 3 experimental groups: (1) unmodified input with no interaction; (2) premodified input with no interaction; and (3) unmodified input with the chance for negotiated input. Results indicated that comprehension was…

  13. Experimental Studies of Nuclear Physics Input for γ-Process Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Scholz, Philipp; Heim, Felix; Mayer, Jan; Netterdon, Lars; Zilges, Andreas

    The predictions of reaction rates for the γ-process in the scope of the Hauser-Feshbach statistical model crucially depend on nuclear physics input parameters such as optical-model potentials (OMP) or γ-ray strength functions. Precise cross-section measurements at astrophysically relevant energies help to constrain the adopted models and, therefore, to reduce the uncertainties in the theoretically predicted reaction rates. During the last years, several cross sections of charged-particle induced reactions on heavy nuclei have been measured at the University of Cologne. Either by means of the in-beam method at the HORUS γ-ray spectrometer or the activation technique using the Cologne Clover Counting Setup, total and partial cross sections could be used to further constrain different models for nuclear physics input parameters. It could be shown that modifications of the α-OMP in the case of the 112Sn(α, γ) reaction also improve the description of the recently measured cross sections of the 108Cd(α, γ) and 108Cd(α, n) reactions, as well as of other reactions. Partial cross sections of the 92Mo(p, γ) reaction were used to improve the γ-strength function model in 93Tc, in the same way as was done for the 89Y(p, γ) reaction.

  14. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of the sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.

  15. Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling

    SciTech Connect

    Hamrick, Todd

    2011-01-01

    Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
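
The conventional MSE equation referred to above is usually written, following Teale, as an axial thrust term plus a rotary work term. A minimal sketch in common field units (variable names, units, and the field-unit constant are our assumptions about the conventional form, not this work's rewritten single-parameter equation):

```python
import math

def mechanical_specific_energy(wob, torque, rpm, rop, bit_area):
    """Teale-style MSE in field units (psi):
    wob in lbf, torque in ft-lbf, rpm in rev/min,
    rop in ft/hr, bit_area in in^2."""
    axial = wob / bit_area                                        # thrust per unit bit area
    rotary = (120.0 * math.pi * rpm * torque) / (bit_area * rop)  # rotational work per unit volume
    return axial + rotary
```

Because the rotary term dominates in practice, small gains in Rate of Penetration at fixed torque and RPM reduce MSE sharply, which is why minimizing MSE and maximizing penetration rate coincide.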

  16. Input Shaping enhanced Active Disturbance Rejection Control for a twin rotor multi-input multi-output system (TRMS).

    PubMed

    Yang, Xiaoyan; Cui, Jianwei; Lao, Dazhong; Li, Donghai; Chen, Junhui

    2016-05-01

    In this paper, a composite control based on Active Disturbance Rejection Control (ADRC) and Input Shaping is presented for a TRMS with two degrees of freedom (DOF). The control tasks consist of accurately tracking desired trajectories and obtaining disturbance rejection in both horizontal and vertical planes. Due to unmeasurable states as well as uncertainties stemming from modeling error and unknown disturbance torques, ADRC is employed, and feed-forward Input Shaping is used to improve the dynamical response. In the proposed approach, because the coupling effects are maintained in the controller derivation, there is no requirement to decouple the TRMS into horizontal and vertical subsystems, as is usually done in the literature. Finally, the proposed method is implemented on the TRMS platform, and the results are compared with those of PID and ADRC in a similar structure. The experimental results demonstrate the effectiveness of the proposed method: the controller achieves excellent set-point tracking and disturbance rejection in the presence of system nonlinearity and complex coupling.

  17. Input and Intake in Language Acquisition

    ERIC Educational Resources Information Center

    Gagliardi, Ann C.

    2012-01-01

    This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…

  18. 7 CFR 3430.907 - Stakeholder input.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... ADMINISTRATIVE PROVISIONS New Era Rural Technology Competitive Grants Program § 3430.907 Stakeholder input. NIFA...: (a) Community college(s). (b) Advanced technological center(s), located in rural area, for...

  19. 7 CFR 3430.907 - Stakeholder input.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... ADMINISTRATIVE PROVISIONS New Era Rural Technology Competitive Grants Program § 3430.907 Stakeholder input. NIFA...: (a) Community college(s). (b) Advanced technological center(s), located in rural area, for...

  20. 7 CFR 3430.907 - Stakeholder input.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... ADMINISTRATIVE PROVISIONS New Era Rural Technology Competitive Grants Program § 3430.907 Stakeholder input. NIFA...: (a) Community college(s). (b) Advanced technological center(s), located in rural area, for...

  1. Multiple Input Microcantilever Sensor with Capacitive Readout

    SciTech Connect

    Britton, C.L., Jr.; Brown, G.M.; Bryan, W.L.; Clonts, L.G.; DePriest, J.C.; Emergy, M.S.; Ericson, M.N.; Hu, Z.; Jones, R.L.; Moore, M.R.; Oden, P.I.; Rochelle, J.M.; Smith, S.F.; Threatt, T.D.; Thundat, T.; Turner, G.W.; Warmack, R.J.; Wintenberg, A.L.

    1999-03-11

    A surface-micromachined MEMS process has been used to demonstrate multiple-input chemical sensing using selectively coated cantilever arrays. Combined hydrogen and mercury-vapor detection was achieved with a palm-sized, self-powered module with spread-spectrum telemetry reporting.

  2. Input-Based Incremental Vocabulary Instruction

    ERIC Educational Resources Information Center

    Barcroft, Joe

    2012-01-01

    This fascinating presentation of current research undoes numerous myths about how we most effectively learn new words in a second language. In clear, reader-friendly text, the author details the successful approach of IBI vocabulary instruction, which emphasizes the presentation of target vocabulary as input early on and the incremental (gradual)…

  3. Treatments of Precipitation Inputs to Hydrologic Models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models is dependent on correct parameterization; the most sensitive being the rainfall input time series. These records can come from land-based ...

  4. Soil Organic Carbon Input from Urban Turfgrasses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Turfgrass is a major vegetation type in the urban and suburban environment. Management practices such as species selection, irrigation, and mowing may affect carbon input and storage in these systems. Research was conducted to determine the rate of soil organic carbon (SOC) changes, soil carbon sequ...

  6. Young Children's Use of Microcomputer Input Devices.

    ERIC Educational Resources Information Center

    King, John; Alloway, Nola

    1993-01-01

    Reports on a study of the ability of preschoolers and first, second, and third graders to use three computer input devices: a joystick, a mouse, and a keyboard. For all grade levels, the mouse offered the greatest ease of use in manipulating icons, followed by the joystick and the keyboard. No effect for gender was found. (Contains 30 references.)…

  7. Preschooler's Use of Microcomputers and Input Devices.

    ERIC Educational Resources Information Center

    King, John; Alloway, Nola

    1992-01-01

    Describes a study that measured preschoolers' use of microcomputers in the following areas: (1) efficiency of use of input devices, including the keyboard, the joystick, and the mouse; (2) use during free-play activities, including interaction with the microcomputer and with each other; and (3) gender differences. (40 references) (LRW)

  8. Input, Interaction and Output: An Overview

    ERIC Educational Resources Information Center

    Gass, Susan; Mackey, Alison

    2006-01-01

    This paper presents an overview of what has come to be known as the "Interaction Hypothesis," the basic tenet of which is that through input and interaction with interlocutors, language learners have opportunities to notice differences between their own formulations of the target language and the language of their conversational…

  9. Adaptive random testing with combinatorial input domain.

    PubMed

    Huang, Rubing; Chen, Jinfu; Lu, Yansheng

    2014-01-01

    Random testing (RT) is a fundamental testing technique for assessing software reliability, by simply selecting test cases in a random manner from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for programs with a combinatorial input domain (i.e., the set of categorical data). To extend ART to testing over a combinatorial input domain, we have adopted different similarity measures that are widely used for categorical data in data mining and have proposed two similarity measures based on interaction coverage. We then propose a new version, named ART-CID, as an extension of ART in combinatorial input domains, which selects an element from the categorical data as the next test case such that it has the lowest similarity against already generated test cases. Experimental results show that ART-CID generally performs better than RT with respect to different evaluation metrics.
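
The selection rule described above, picking the candidate least similar to everything already executed, can be sketched with a simple Hamming-style measure over categorical fields. This is a generic ART sketch; the paper's own measures, including the interaction-coverage-based ones, are more elaborate.

```python
import random

def dissimilarity(a, b):
    """Fraction of categorical fields on which two test cases differ
    (a simple Hamming-style measure; a stand-in for the paper's)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def select_next(executed, candidates):
    """ART step: among candidate test cases, pick the one with the
    lowest similarity (greatest minimum distance) to every executed case."""
    return max(candidates, key=lambda c: min(
        (dissimilarity(c, e) for e in executed), default=1.0))

def random_candidates(domain, k, rng=random):
    """Draw k random test cases from a categorical input domain,
    given as a list of per-field value sets."""
    return [tuple(rng.choice(values) for values in domain) for _ in range(k)]
```

Each test run would draw a fresh candidate set with `random_candidates`, execute the case returned by `select_next`, and append it to `executed`, so successive cases spread across the categorical space instead of clustering.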

  10. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  11. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  12. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
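
The second-order temperature dependence reported above can be sketched as a simple polynomial correction of a channel's relative output. The functional form follows the abstract; the coefficient values here are purely illustrative, not measured data.

```python
def relative_flux(t_junction, t_ref=25.0, c1=-2.0e-3, c2=-1.0e-5):
    """Relative output of one LED channel versus junction temperature,
    modeled to second order: phi/phi_ref = 1 + c1*dT + c2*dT^2.
    Coefficients c1, c2 are illustrative assumptions."""
    dt = t_junction - t_ref
    return 1.0 + c1 * dt + c2 * dt * dt
```

A colorimetric feedback controller would invert such per-channel models to re-balance the RGB drive currents as the junction temperature drifts, holding chromaticity constant.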

  14. Calibration of a distributed flood forecasting model with input uncertainty using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Li, Mingliang; Yang, Dawen; Chen, Jinsong; Hubbard, Susan S.

    2012-08-01

    In the process of calibrating distributed hydrological models, accounting for input uncertainty is important, yet challenging. In this study, we develop a Bayesian model to estimate parameters associated with a geomorphology-based hydrological model (GBHM). The GBHM uses geomorphic characteristics to simplify model structure and physically based methods to represent hydrological processes. We divide the observed discharge into low- and high-flow data, and use a first-order autoregressive model to describe their temporal dependence. We consider relative errors in rainfall as spatially distributed variables and estimate them jointly with the GBHM parameters. The joint posterior probability distribution is explored using Markov chain Monte Carlo methods, which include Metropolis-Hastings, delayed rejection adaptive Metropolis, and Gibbs sampling. We evaluate the Bayesian model using both synthetic and field data sets. The synthetic case study demonstrates that the developed method generally is effective in calibrating GBHM parameters and in estimating their associated uncertainty. Calibration that ignores input errors has lower accuracy and lower reliability than calibration that includes estimation of the input errors, especially under model structure uncertainty. The field case study shows that calibration of GBHM parameters under complex field conditions remains a challenge. Although jointly estimating input errors and GBHM parameters improves the continuous ranked probability score and the consistency of the predictive distribution with the observed data, the improvement is incremental. To better calibrate parameters in a distributed model such as the GBHM used here, we need to develop a more complex model and incorporate much more information.
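
Of the samplers listed above, random-walk Metropolis is the simplest to sketch. This is a generic one-dimensional illustration of the accept/reject step, not the study's GBHM-specific likelihood, priors, or multi-chain scheme.

```python
import math
import random

def metropolis(log_post, x0, n, step=0.5, rng=None):
    """Random-walk Metropolis sampler: draws n samples from a target
    whose log posterior density is log_post (up to a constant)."""
    rng = rng or random.Random()
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        proposal = x + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, exp(lp_prop - lp))
        if lp_prop - lp >= 0 or rng.random() < math.exp(lp_prop - lp):
            x, lp = proposal, lp_prop
        chain.append(x)
    return chain
```

In a calibration setting, `log_post` would combine the hydrological model's likelihood with the priors on both the GBHM parameters and the spatially distributed rainfall errors.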

  15. An Accurate, Simplified Model Intrabeam Scattering

    SciTech Connect

    Bane, Karl LF

    2002-05-23

    Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS) we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η²x,y/βx,y has been replaced by ℋx,y) asymptotically approaches the result of Bjorken-Mtingwa.

  16. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  17. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  18. Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2001-01-01

    The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be

  19. Supergranular Parameters

    NASA Astrophysics Data System (ADS)

    Udayashankar, Paniveni

    2016-07-01

    I study the complexity of supergranular cells using intensity patterns from the Kodaikanal solar observatory. The chaotic and turbulent aspect of solar supergranulation can be studied by examining the interrelationships amongst the parameters characterizing supergranular cells, namely size, horizontal flow field, lifetime, and the fractal dimension deduced from the size data. The data consist of visually identified supergranular cells, from which a fractal dimension 'D' for supergranulation is obtained according to the relation P ∝ A^(D/2), where 'A' is the area and 'P' is the perimeter of the supergranular cells. I find a fractal dimension close to 1.3, which is consistent with that for isobars and suggests a possible turbulent origin. The cell circularity shows a dependence on the perimeter, with a peak around (1.1-1.2) × 10^5 m. The findings are supportive of Kolmogorov's theory of turbulence.
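    The relation P ∝ A^(D/2) implies that D can be estimated as twice the slope of a log-log regression of perimeter on area. A minimal sketch with synthetic cells (not the Kodaikanal data) follows:

```python
import numpy as np

# Hypothetical illustration of P ∝ A^(D/2): given measured cell areas A and
# perimeters P, the fractal dimension D is twice the slope of log P vs log A.
def fractal_dimension(areas, perimeters):
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

# Synthetic check: generate cells obeying a known D = 1.3
rng = np.random.default_rng(0)
A = rng.uniform(1e14, 1e16, size=200)   # cell areas, m^2
P = 3.5 * A ** (1.3 / 2)                # perimeters obeying P ∝ A^(D/2)
D = fractal_dimension(A, P)
```

On noisy real measurements the regression slope would carry an uncertainty, which `np.polyfit` can also report via its covariance output.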

  20. Application of Voice Recognition Input to Decision Support Systems

    DTIC Science & Technology

    1988-12-01

    Keywords: Group Decision Support System (GDSS), Talkwriter, Human Computer Interface, Voice Input, Individual Decision Support System (IDSS), Voice Input/Output, Man Machine Voice Interface, Voice Processing, Natural Language Voice Input, Voice Recognition, Natural Language Accessed Voice Recognizer, Speech Entry, Voice Vocabulary.

  1. CADDIS Volume 2. Sources, Stressors and Responses: Urbanization - Wastewater Inputs

    EPA Pesticide Factsheets

    Intro to wastewater inputs associated with urbanization, overview of combined sewer overflows, overview of how wastewater inputs can contribute to enrichment or eutrophication, overview of how wastewater inputs can affect reproduction by stream fauna.

  2. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    NASA Astrophysics Data System (ADS)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  3. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and the errors inherent in the Earth's magnetic field model. The Kalman filter accounts for the random component of these errors by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
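    The exponentially-correlated noise model mentioned above can be sketched as a discrete first-order Gauss-Markov state. The time constant and amplitude below are illustrative assumptions, not CGRO tuning values:

```python
import numpy as np

# Sketch of an exponentially-correlated (first-order Gauss-Markov) noise state,
# the kind used to represent systematic field-model error inside a filter.
# tau and sigma are assumed illustrative values, not flight tuning parameters.
def propagate_gauss_markov(x, dt, tau, sigma, rng):
    phi = np.exp(-dt / tau)               # state transition factor
    q = sigma**2 * (1.0 - phi**2)         # discrete process-noise variance
    return phi * x + rng.normal(0.0, np.sqrt(q))

rng = np.random.default_rng(1)
x = 0.0
samples = []
for _ in range(50000):
    x = propagate_gauss_markov(x, dt=1.0, tau=50.0, sigma=2.0, rng=rng)
    samples.append(x)
# The stationary standard deviation approaches sigma
```

In a Kalman filter this state is appended to the state vector, with `phi` entering the transition matrix and `q` the process-noise matrix.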

  4. Spectroscopic parameters for solar-type stars with moderate-to-high rotation. New parameters for ten planet hosts

    NASA Astrophysics Data System (ADS)

    Tsantaki, M.; Sousa, S. G.; Santos, N. C.; Montalto, M.; Delgado-Mena, E.; Mortier, A.; Adibekyan, V.; Israelian, G.

    2014-10-01

    Context. Planetary studies demand precise and accurate stellar parameters as input for inferring the planetary properties. Different methods often provide different results that could lead to biases in the planetary parameters. Aims: In this work, we present a refinement of the spectral synthesis technique designed to treat fast rotating stars better. This method is used to derive precise stellar parameters, namely effective temperature, surface gravity, metallicity, and rotational velocity. The procedure is tested for FGK stars with low and moderate-to-high rotation rates. Methods: The spectroscopic analysis is based on the spectral synthesis package Spectroscopy Made Easy (SME), which assumes Kurucz model atmospheres in LTE. The line list on which the synthesis is conducted comprises iron lines, and the atomic data are derived after solar calibration. Results: The comparison of our stellar parameters shows good agreement with literature values, both for slowly and for fast rotating stars. In addition, our results are on the same scale as the parameters derived from the iron ionization and excitation method presented in our previous works. We present new atmospheric parameters for 10 transiting planet hosts as an update to the SWEET-Cat catalog. We also re-analyze their transit light curves to derive new updated planetary properties. Based on observations collected at the La Silla Observatory, ESO (Chile) with the FEROS spectrograph at the 2.2 m telescope (ESO runs ID 089.C-0444(A), 088.C-0892(A)) and with the HARPS spectrograph at the 3.6 m telescope (ESO runs ID 072.C-0488(E), 079.C-0127(A)); at the Observatoire de Haute-Provence (OHP, CNRS/OAMP), France, with the SOPHIE spectrograph at the 1.93 m telescope and at the Observatoire Midi-Pyrénées (CNRS), France, with the NARVAL spectrograph at the 2 m Bernard Lyot Telescope (Run ID L131N11). Appendix A is available in electronic form at http://www.aanda.org

  5. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    SciTech Connect

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil; Abhyankar, S.; Ghosh, Donetta L.; Smith, Barry; Huang, Zhenyu; Tartakovsky, Alexandre M.

    2015-09-22

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
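    A minimal sketch of a time-correlated random power input of the kind the PDF method targets, here an Ornstein-Uhlenbeck process advanced with Euler-Maruyama. All parameter values are illustrative assumptions, not from the paper:

```python
import numpy as np

# Sketch: an Ornstein-Uhlenbeck process as a time-correlated random power
# input P(t); white noise is recovered in the limit of vanishing tau.
def ou_step(p, p_mean, tau, sigma, dt, rng):
    # Euler-Maruyama step for dP = -(P - p_mean)/tau dt + sigma dW
    return p + (p_mean - p) * dt / tau + sigma * np.sqrt(dt) * rng.normal(size=p.shape)

rng = np.random.default_rng(2)
paths = np.zeros(5000)                    # ensemble of Monte Carlo realizations
for _ in range(2000):                     # integrate to t = 20 with dt = 0.01
    paths = ou_step(paths, p_mean=1.0, tau=5.0, sigma=0.1, dt=0.01, rng=rng)
ensemble_mean = paths.mean()              # relaxes toward p_mean
```

The PDF method replaces such ensembles with a single deterministic PDE for the joint density; the Monte Carlo ensemble above is the reference it is validated against.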

  6. Program BTLS Technical Description Selection of Input Data, and Running Instructions

    DTIC Science & Technology

    1982-01-01

    OCR fragment of the report documentation page (DD Form 1473). Recoverable contents listing: Figures include Range Curves; Ray Path for Total Internal Reflection; Ray Path Showing Angle Designations; and a Diagram Showing Range, Depth, and Apparent Frequency Dependence. Tables include Geoparameter Inputs; Inversion Parameter Inputs; and Sound Speed Profile Parameters from Hamilton.

  7. Visual parameter optimisation for biomedical image processing

    PubMed Central

    2015-01-01

    Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538

  8. APPLICABILITY OF AN ACCUMULATED DAMAGE PARAMETER METHOD ON SOIL LIQUEFACTION DUE TO SEVERAL EARTHQUAKES

    NASA Astrophysics Data System (ADS)

    Izawa, Jun; Tanoue, Kazuya; Murono, Yoshitaka

    Severe soil liquefaction due to a long-duration earthquake with low acceleration occurred in the Tokyo Bay area during the 2011 off the Pacific coast of Tohoku Earthquake. This phenomenon clearly shows that soil liquefaction is affected by the properties of input waves. This paper describes the effect of earthquake wave properties on liquefaction using effective stress analysis with several earthquakes. The analytical results showed that almost the same pore water pressure was observed for both a long-duration earthquake with a maximum acceleration of 150 Gal and a typical inland active-fault earthquake with 891 Gal. Additionally, liquefaction potentials for each earthquake were evaluated by a simple judgment with an accumulated damage parameter, which is used for the design of railway structures in Japan. As a result, it was found that an accurate liquefaction resistance in the large-cycle region is necessary to evaluate the liquefaction potential due to a long-duration earthquake with low acceleration when using the simple judgment with an accumulated damage parameter.

  9. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
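    The contrast between AD and divided differences can be illustrated with a toy forward-mode (dual-number) differentiator. This sketch is unrelated to the actual Navier-Stokes code; it only shows why AD derivatives are exact while DD derivatives carry step-size error:

```python
# Minimal forward-mode automatic differentiation via dual numbers: each value
# carries its derivative, and arithmetic propagates both exactly.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1          # f'(x) = 6x + 2

x = Dual(2.0, 1.0)                        # seed the input derivative
exact = f(x).der                          # AD: exactly 14.0
h = 1e-6                                  # divided-difference step
dd = (f(Dual(2.0 + h)).val - f(Dual(2.0 - h)).val) / (2 * h)
```

Real AD tools apply the same chain-rule propagation to every statement of a large solver rather than to a toy polynomial.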

  10. SimSphere model sensitivity analysis towards establishing its use for deriving key parameters characterising land surface interactions

    NASA Astrophysics Data System (ADS)

    Petropoulos, G. P.; Griffiths, H. M.; Carlson, T. N.; Ioannou-Katidis, P.; Holt, T.

    2014-09-01

    Being able to accurately estimate parameters characterising land surface interactions is currently a key scientific priority due to their central role in the Earth's global energy and water cycle. To this end, some approaches have been based on utilising the synergies between land surface models and Earth observation (EO) data to retrieve relevant parameters. One such model is SimSphere, the use of which is currently expanding, either as a stand-alone application or synergistically with EO data. The present study aimed at exploring the effect of changing the atmospheric sounding profile on the sensitivity of key variables predicted by this model assuming different probability distribution functions (PDFs) for its inputs/outputs. To satisfy this objective and to ensure consistency and comparability to analogous studies conducted previously on the model, a sophisticated, cutting-edge sensitivity analysis (SA) method adopting Bayesian theory was implemented on SimSphere. Our results did not show dramatic changes in the nature or ranking of influential model inputs in comparison to previous studies. Model outputs examined using SA were sensitive to a small number of the inputs; a significant amount of first-order interactions between the inputs was also found, suggesting strong model coherence. Results showed that the assumption of different PDFs for the model inputs/outputs did not have an important bearing on mapping the most responsive model inputs and interactions, but only the absolute SA measures. This study extends our understanding of SimSphere's structure and further establishes its coherence and correspondence to that of a natural system's behaviour. Consequently, the present work represents a significant step forward in the global efforts on SimSphere verification, especially those focusing on the development of global operational products from the model synergy with EO data.

  11. Modal Parameter Identification of a Flexible Arm System

    NASA Technical Reports Server (NTRS)

    Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard

    1998-01-01

    In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide the input signal and an oscilloscope to save input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data to obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
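    The least-squares step can be sketched as a linear fit of the steady-state response at the known excitation frequency. The formulation and signal below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

# Sketch: for a known excitation frequency w, fit the steady-state response
# y(t) ≈ a*sin(w t) + b*cos(w t) by linear least squares and recover the
# response amplitude and phase for that mode.
def fit_sine_response(t, y, w):
    A = np.column_stack([np.sin(w * t), np.cos(w * t)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)    # amplitude, phase

t = np.linspace(0.0, 10.0, 2000)
w = 2 * np.pi * 1.5                            # 1.5 Hz excitation
y = 0.7 * np.sin(w * t + 0.4)                  # synthetic "measured" output
amp, ph = fit_sine_response(t, y, w)
```

Repeating the fit at several frequencies near resonance gives the frequency-response points from which natural frequency and damping can be identified.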

  12. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
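    The median function mentioned above can be written compactly through the minmod function, a form commonly used for slope constraints; the paper's exact constraint may differ from this generic sketch:

```python
# Sketch of the median (middle-of-three) function via the identity
# median(a, b, c) = a + minmod(b - a, c - a), often used to express
# monotonicity constraints on reconstructed slopes without branching logic.
def minmod(x, y):
    # Zero if the signs differ; otherwise the argument of smaller magnitude
    if x * y <= 0.0:
        return 0.0
    return x if abs(x) < abs(y) else y

def median(a, b, c):
    return a + minmod(b - a, c - a)

# Illustrative limiter: clip a candidate slope s between 0 and a monotone
# bound m, so the reconstruction cannot create new extrema.
def limited(s, m):
    return median(0.0, s, m)
```

Writing the constraint as a median makes the "no new extrema" condition a single expression, which is what simplifies both the concept and the coding.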

  13. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. Comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of a rapidly changing fluid temperature is possible thanks to the thermometer's low inertia and the fast space marching method applied to solve the inverse heat conduction problem.
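    The first-order inertia correction mentioned above amounts to T_fluid = T_indicated + τ dT_indicated/dt. The time constant and temperatures below are assumed illustrative values, not the paper's calibration:

```python
import numpy as np

# Sketch of the first-order inertia correction: a thermometer with time
# constant tau obeys tau*dT/dt + T = T_fluid, so the fluid temperature can be
# recovered as T_fluid = T + tau*dT/dt. tau here is an assumed value.
def correct_first_order(t, T_ind, tau):
    dTdt = np.gradient(T_ind, t)           # numerical time derivative
    return T_ind + tau * dTdt

# Synthetic test: a thermometer at 20 °C plunged into 100 °C water
tau = 3.0                                  # s, assumed time constant
t = np.linspace(0.0, 10.0, 1001)
T_true = 100.0
T_ind = T_true + (20.0 - T_true) * np.exp(-t / tau)   # first-order response
T_rec = correct_first_order(t, T_ind, tau)            # ≈ 100 °C throughout
```

In practice the derivative of a noisy signal must be smoothed, which is one reason the inverse space marching method outperforms this simple correction.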

  14. A More Accurate Measurement of the {sup 28}Si Lattice Parameter

    SciTech Connect

    Massa, E. Sasso, C. P.; Mana, G.; Palmisano, C.

    2015-09-15

    In 2011, a discrepancy between the values of the Planck constant measured by counting Si atoms and by comparing mechanical and electrical powers prompted a review, among others, of the measurement of the spacing of {sup 28}Si (220) lattice planes, either to confirm the measured value and its uncertainty or to identify errors. This exercise confirmed the result of the previous measurement and yields the additional value d{sub 220} = 192 014 711.98(34) am having a reduced uncertainty.

  15. Accurate Zero Parameter Correlation Energy Functional Obtained from the Homogeneous Electron Gas with an Energy Gap

    NASA Astrophysics Data System (ADS)

    Krieger, J. B.; Chen, Jiqiang; Iafrate, G. J.; Savin, A.

    1998-03-01

    We have obtained an analytic approximation to E_c(r_s, ζ, G), where G is an energy gap separating the occupied and unoccupied states of a homogeneous electron gas, for ζ = 0 and ζ = 1. When G = 0, E_c(r_s, ζ) reduces to the usual LSD result. This functional is employed in calculating correlation energies for unpolarized atoms and ions for Z ≤ 18 by taking G[n] = (1/8)|∇ ln n|², which reduces to the ionization energy in the large-r limit in an exact Kohn-Sham (KS) theory. The resulting functional is self-interaction-corrected, employing a method which is invariant under a unitary transformation. We find that the application of this approach to the calculation of the E_c functional reduces the error in the LSD result by more than 95%. When the value of G is approximately corrected to include the effect of higher-lying unoccupied localized states, the resulting values of E_c are within a few percent of the exact results.

  16. Virtual input device with diffractive optical element

    NASA Astrophysics Data System (ADS)

    Wu, Ching Chin; Chu, Chang Sheng

    2005-02-01

    For portable devices such as PDAs and cell phones, a small built-in virtual input device is more convenient for complex input demands. A few years ago a creative idea called the 'virtual keyboard' was announced, but up to now there is still no mass-production method for this idea. In this paper we show the whole procedure of making a virtual keyboard. First comes the HOE (Holographic Optical Element) design of the keyboard image, which yields a fan angle of about 30 degrees; the electron-forming method is then used to copy this pattern with high precision. Finally, the element can be produced by injection molding. With an adaptive lens design we obtain a keyboard image that is well corrected in distortion and a wider fan angle of about 70 degrees. With better alignment of the HOE pattern lithography, we are sure to get higher diffraction efficiency.

  17. XBox Input -Version 1.0

    SciTech Connect

    2012-10-03

    Contains a class for connecting to the Xbox 360 controller, displaying the user inputs (buttons, triggers, analog sticks), and controlling the rumble motors. Also contains classes for converting the raw Xbox 360 controller inputs into meaningful commands for the following objects: • Robot arms - Provides joint control and several tool control schemes • UGVs - Provides translational and rotational commands for "skid-steer" vehicles • Pan-tilt units - Provides several modes of control including velocity, position, and point-tracking • Head-mounted displays (HMD) - Controls the viewpoint of an HMD • Umbra frames - Controls the position and orientation of an Umbra posrot object • Umbra graphics window - Provides several modes of control for the Umbra OSG window viewpoint including free-fly, cursor-focused, and object following.

  18. Multimodal interfaces with voice and gesture input

    SciTech Connect

    Milota, A.D.; Blattner, M.M.

    1995-07-20

    The modalities of speech and gesture have different strengths and weaknesses, but combined they create synergy, with each modality correcting the weaknesses of the other. We believe that a multimodal system such as one intertwining speech and gesture must start from a different foundation than systems based solely on pen input. In order to provide a basis for the design of a speech and gesture system, we have examined the research in other disciplines such as anthropology and linguistics. The result of this investigation was a taxonomy that gave us material for the incorporation of gestures whose meanings are largely transparent to the users. This study describes the taxonomy and gives examples of applications to pen input systems.

  19. Neuroprosthetics and the science of patient input.

    PubMed

    Benz, Heather L; Civillico, Eugene F

    2017-01-01

    Safe and effective neuroprosthetic systems are of great interest to both DARPA and CDRH, due to their innovative nature and their potential to aid severely disabled populations. By expanding what is possible in human-device interaction, these devices introduce new potential benefits and risks. Therefore patient input, which is increasingly important in weighing benefits and risks, is particularly relevant for this class of devices. FDA has been a significant contributor to an ongoing stakeholder conversation about the inclusion of the patient voice, working collaboratively to create a new framework for a patient-centered approach to medical device development. This framework is evolving through open dialogue with researcher and patient communities, investment in the science of patient input, and policymaking that is responsive to patient-centered data throughout the total product life cycle. In this commentary, we will discuss recent developments in patient-centered benefit-risk assessment and their relevance to the development of neural prosthetic systems.

  20. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  1. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually simple, yet practically complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields.
I finally present photometry for the Wolf-Rayet binary WR 20a

  2. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  3. Extracting accurate temperatures of molten basalts from non-contact thermal infrared radiance data

    NASA Astrophysics Data System (ADS)

    Fontanella, N. R.; Ramsey, M. S.; Lee, R.

    2013-12-01

    The eruptive and emplacement temperature of a lava flow conveys important information about parameters such as the composition, rheology, and emplacement processes. It can also serve as a critical input into flow cooling and propagation models used for hazard prediction. One of the most common ways to determine temperatures of active lava flows is to use non-contact thermal infrared (TIR) measurements, either from ground-based radiometers and cameras or air- and space-based remote sensing instruments. These temperature measurements assume a fixed value for the lava emissivity in order to solve the Planck equation for temperature. The research presented here examines the possibility of variable emissivity in a material's molten state and the effect it has on deriving accurate surface temperature. Emplacement of a pahoehoe lava lobe at Kilauea volcano, Hawaii, was captured with high spatial resolution/high frame rate TIR video in order to study this phenomenon. The data show the appearance of molten lava at a breakout point until it cools to form a glassy crust that begins to fold. Emissivity was adjusted sequentially along linear transects from a starting value of 1.0 to lower values until the TIR temperature matched the known temperature measured with a thermocouple. Below an emissivity of ~0.89, temperatures of the molten lava rose above the known lava temperature. This value suggests a decrease in emissivity with a change of state and is likely due to changes in the atomic bond structure of the melt. We have also recently completed the first-ever calibrated laboratory-based emissivity measurements of molten basalts, and these high spectral resolution data confirm the field-based estimates. In contrast to rhyolites, basalts appear to display a less dramatic change between their glassy and molten spectra due to their higher melting and glass transition temperatures and the quick formation time of the crust. Therefore, the change in emissivity for molten rhyolite could
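    The emissivity correction described above amounts to inverting Planck's law. The following is a minimal illustration, not the study's processing chain; the 10 µm wavelength and the test temperatures are generic assumptions:

    ```python
    import math

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

    def planck_radiance(T, wavelength):
        """Blackbody spectral radiance at temperature T (K) and wavelength (m)."""
        return (2 * H * C**2 / wavelength**5) / (math.exp(H * C / (wavelength * KB * T)) - 1)

    def kinetic_temperature(T_bright, emissivity, wavelength=10e-6):
        """Invert L_obs = emissivity * B(T_kin): recover the kinetic temperature
        from a brightness temperature retrieved assuming emissivity = 1.0."""
        L = planck_radiance(T_bright, wavelength) / emissivity
        return (H * C / (wavelength * KB)) / math.log(1 + 2 * H * C**2 / (wavelength**5 * L))
    ```

    Lowering the assumed emissivity raises the retrieved kinetic temperature, which is why an emissivity near 0.89 was needed to reconcile the TIR retrievals with the thermocouple readings.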

  4. Sensory synergy as environmental input integration

    PubMed Central

    Alnajjar, Fady; Itkonen, Matti; Berenz, Vincent; Tournier, Maxime; Nagai, Chikara; Shimoda, Shingo

    2015-01-01

    The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with nine healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. Changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler. PMID:25628523

  5. Generalized Input-Output Inequality Systems

    SciTech Connect

    Liu, Yingfan; Zhang, Qinghong

    2006-09-15

    In this paper two types of generalized Leontief input-output inequality systems are introduced. The minimax properties for a class of functions associated with the inequalities are studied. Sufficient and necessary conditions for the inequality systems to have solutions are obtained in terms of the minimax value. Stability analysis for the solution set is provided in terms of upper semi-continuity and hemi-continuity of set-valued maps.
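    For readers unfamiliar with Leontief systems: the classical (equality) input-output model solves x = Ax + d for gross output x given a technology matrix A and final demand d, and the inequality systems above generalize this setting. A minimal sketch with hypothetical coefficients, using plain Gaussian elimination to stay self-contained:

    ```python
    def solve(M, b):
        """Gauss-Jordan elimination with partial pivoting for a small dense system."""
        n = len(M)
        aug = [row[:] + [b[i]] for i, row in enumerate(M)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
            aug[col], aug[piv] = aug[piv], aug[col]
            for r in range(n):
                if r != col and aug[r][col] != 0.0:
                    f = aug[r][col] / aug[col][col]
                    aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
        return [aug[i][n] / aug[i][i] for i in range(n)]

    def leontief_output(A, d):
        """Gross output x solving the classical Leontief system (I - A) x = d."""
        n = len(A)
        I_minus_A = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
        return solve(I_minus_A, d)
    ```

    The returned x satisfies x = Ax + d term by term, which is the fixed point the inequality generalizations relax.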

  6. Aortic Input Impedance during Nitroprusside Infusion

    PubMed Central

    Pepine, Carl J.; Nichols, W. W.; Curry, R. C.; Conti, C. Richard

    1979-01-01

    Beneficial effects of nitroprusside infusion in heart failure are purportedly a result of decreased afterload through “impedance” reduction. To study the effect of nitroprusside on vascular factors that determine the total load opposing left ventricular ejection, the total aortic input impedance spectrum was examined in 12 patients with heart failure (cardiac index <2.0 liters/min per m2 and left ventricular end diastolic pressure >20 mm Hg). This input impedance spectrum expresses both mean flow (resistance) and pulsatile flow (compliance and wave reflections) components of vascular load. Aortic root blood flow velocity and pressure were recorded continuously with a catheter-tip electromagnetic velocity probe in addition to left ventricular pressure. Small doses of nitroprusside (9-19 μg/min) altered the total aortic input impedance spectrum as significant (P < 0.05) reductions in both mean and pulsatile components were observed within 60-90 s. With these acute changes in vascular load, left ventricular end diastolic pressure declined (44%) and stroke volume increased (20%, both P < 0.05). Larger nitroprusside doses (20-38 μg/min) caused additional alteration in the aortic input impedance spectrum with further reduction in left ventricular end diastolic pressure and increase in stroke volume but no additional changes in the impedance spectrum or stroke volume occurred with 39-77 μg/min. Improved ventricular function persisted when aortic pressure was restored to control values with simultaneous phenylephrine infusion in three patients. These data indicate that nitroprusside acutely alters both the mean and pulsatile components of vascular load to effect improvement in ventricular function in patients with heart failure. The evidence presented suggests that it may be possible to reduce vascular load and improve ventricular function independent of aortic pressure reduction. PMID:457874
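    Input impedance of this kind is conventionally computed as the ratio of pressure to flow harmonics over one cardiac cycle. A schematic illustration with synthetic waveforms (not patient data, and not the authors' exact processing):

    ```python
    import cmath
    import math

    def input_impedance(pressure, flow, tol=1e-9):
        """Impedance moduli |Z_n| = |P_n| / |Q_n| from pressure and flow sampled
        uniformly over one cardiac cycle; harmonics with negligible flow are skipped."""
        N = len(pressure)
        def fourier(x, n):  # n-th normalized DFT coefficient
            return sum(x[k] * cmath.exp(-2j * math.pi * n * k / N) for k in range(N)) / N
        Z = {}
        for n in range(N // 2):
            q = abs(fourier(flow, n))
            if q > tol:
                Z[n] = abs(fourier(pressure, n)) / q
        return Z
    ```

    The zeroth (DC) term is the mean-flow resistance; the higher harmonics carry the pulsatile (compliance and wave-reflection) components that nitroprusside reduced.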

  7. Minimizing structural vibrations with Input Shaping (TM)

    NASA Technical Reports Server (NTRS)

    Singhose, Bill; Singer, Neil

    1995-01-01

    A new method for commanding machines to move with increased dynamic performance was developed. This method is an enhanced version of input shaping, a patented vibration suppression algorithm. The technique intercepts a command input to a system and reshapes it into a command that moves the mechanical system with increased performance and reduced residual vibration. This document describes many advanced methods for generating highly optimized shaping sequences which are tuned to particular systems. The shaping sequence is important because it determines the trade-off between the move/settle time of the system and the insensitivity of the input shaping algorithm to variations or uncertainties in the machine being controlled. For example, a system with a 5 Hz resonance that takes 1 second to settle can be improved to settle instantaneously using a 0.2 second shaping sequence (thus improving settle time by a factor of 5). This system could vary by plus or minus 15% in its natural frequency and still have no apparent vibration. However, the same system shaped with a 0.3 second shaping sequence could tolerate plus or minus 40% or more variation in natural frequency. This document describes how to generate sequences that maximize performance, sequences that maximize insensitivity, and sequences that trade off between the two. Several software tools are documented and included.
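    The simplest member of this family of shapers is the two-impulse Zero-Vibration (ZV) shaper; convolving the reference command with its impulses cancels residual vibration at the modeled frequency. A generic textbook sketch, not the patented tooling described above:

    ```python
    import math

    def zv_shaper(freq_hz, zeta):
        """Two-impulse Zero-Vibration (ZV) shaper for a mode with natural
        frequency freq_hz (Hz) and damping ratio zeta; returns (time s, amplitude)
        pairs to convolve with the reference command."""
        wd = 2 * math.pi * freq_hz * math.sqrt(1 - zeta**2)   # damped frequency
        K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
        return [(0.0, 1 / (1 + K)), (math.pi / wd, K / (1 + K))]
    ```

    For an undamped 5 Hz mode the second impulse lands at 0.1 s (half the vibration period); longer sequences such as ZVD trade settle time for robustness to frequency error, the trade-off the abstract describes.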

  8. Light inputs shape the Arabidopsis circadian system.

    PubMed

    Wenden, Bénédicte; Kozma-Bognár, László; Edwards, Kieron D; Hall, Anthony J W; Locke, James C W; Millar, Andrew J

    2011-05-01

    The circadian clock is a fundamental feature of eukaryotic gene regulation that is emerging as an exemplar genetic sub-network for systems biology. The circadian system in Arabidopsis plants is complex, in part due to its phototransduction pathways, which are themselves under circadian control. We therefore analysed two simpler experimental systems. Etiolated seedlings entrained by temperature cycles showed circadian rhythms in the expression of genes that are important for the clock mechanism, but only a restricted set of downstream target genes were rhythmic in microarray assays. Clock control of phototransduction pathways remained robust across a range of light inputs, despite the arrhythmic transcription of light-signalling genes. Circadian interactions with light signalling were then analysed using a single active photoreceptor. Phytochrome A (phyA) is expected to be the only active photoreceptor that can mediate far-red (FR) light input to the circadian clock. Surprisingly, rhythmic gene expression was profoundly altered under constant FR light, in a phyA-dependent manner, resulting in high expression of evening genes and low expression of morning genes. Dark intervals were required to allow high-amplitude rhythms across the transcriptome. Clock genes involved in this response were identified by mutant analysis, showing that the EARLY FLOWERING 4 gene is a likely target and mediator of the FR effects. Both experimental systems illustrate how profoundly the light input pathways affect the plant circadian clock, and provide strong experimental manipulations to understand critical steps in the plant clock mechanism.

  9. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation from -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon was presented (kc) and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models.
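    The single-phase FOD calculation and the tonnage-weighted averaging of (L0, k) evaluated above can be sketched as follows; function names, units, and the example numbers are illustrative, not values from the study:

    ```python
    import math

    def fod_methane(waste_by_year, L0, k, year):
        """Single-phase first-order decay: CH4 generated (m^3/yr) in `year` from
        tonnes landfilled in each earlier year (waste_by_year: year -> tonnes)."""
        q = 0.0
        for placed, tonnes in waste_by_year.items():
            age = year - placed
            if age >= 0:
                q += k * L0 * tonnes * math.exp(-k * age)
        return q

    def weighted_params(streams):
        """Collapse multiphase streams [(tonnes, L0, k), ...] into single
        tonnage-weighted parameters, as assessed in the comparison above."""
        total = sum(m for m, _, _ in streams)
        L0 = sum(m * l for m, l, _ in streams) / total
        k = sum(m * kk for m, _, kk in streams) / total
        return L0, k
    ```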

  10. Dimensionless parameters for lidar performance characterization

    NASA Astrophysics Data System (ADS)

    Comerón, Adolfo; Agishev, Ravil R.

    2014-10-01

    A set of three dimensionless parameters is proposed to characterize lidar systems. Two of them are based on an asymptotic approximation of the output signal-to-noise ratio as a function of the input optical power reaching the photoreceiver when there is no background radiation. Of these, one is defined as the ratio between the input signal power level coming from a reference range in a reference atmosphere (reference power level) and the input power level that would produce a reference output signal-to-noise ratio if the photoreceiver operated always in signal-shot noise limited regime. The other is defined as the ratio between the reference power level and the input power level for which the signal-induced shot noise power equals the receiver noise power. A third parameter, defined as the ratio between the background optical power at the photoreceiver input and the reference power level, quantifies the effect of background radiation. With these three parameters a good approximation to the output signal-to-noise ratio of the lidar can be calculated as a function of the power reduction with respect to the power reaching the photodetector in the reference situation. These parameters can also be used to compare and rank the performance of different systems.
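    The paper's exact parameter definitions are not reproduced here, but the photoreceiver SNR structure they normalize is, schematically, signal power over the quadrature sum of signal-shot, background-shot, and receiver noise terms, all expressed in common equivalent-power units:

    ```python
    import math

    def snr(P_s, P_b, P_r):
        """Schematic photoreceiver SNR: signal power over the quadrature sum of
        signal-shot, background-shot, and receiver noise contributions, with all
        arguments in the same equivalent optical-power units (illustrative only)."""
        return P_s / math.sqrt(P_s + P_b + P_r)
    ```

    In the shot-limited regime the SNR grows like the square root of the signal power; when receiver noise dominates it grows linearly, and the asymptotic approximation described above interpolates between these regimes.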

  11. Accurate taxonomic assignment of short pyrosequencing reads.

    PubMed

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy-dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
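    The best-precision-and-recall mapping can be illustrated with a naive search over ancestors of the matching sequences; the paper's suffix-array algorithm achieves this in linear time, whereas this sketch is unoptimized and purely illustrative:

    ```python
    def assign_read(parent, leaves_under, matching_leaves):
        """Map a read to the taxonomy node with the best F-measure (harmonic mean
        of precision and recall) over the matching leaves.
        parent: child -> parent (root maps to None);
        leaves_under: node -> set of leaf sequences in its subtree."""
        matches = set(matching_leaves)
        candidates = set()
        for leaf in matches:                # collect ancestors of matching leaves
            node = leaf
            while node is not None:
                candidates.add(node)
                node = parent.get(node)
        best, best_f = None, -1.0
        for node in candidates:
            tp = len(matches & leaves_under[node])
            precision = tp / len(leaves_under[node])
            recall = tp / len(matches)
            f = 2 * precision * recall / (precision + recall)
            if f > best_f:
                best, best_f = node, f
        return best
    ```

    Unlike the lowest-common-ancestor rule, the chosen node is penalized for covering sequences that do not match the read (low precision) as well as for missing matches (low recall).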

  12. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finite number of source photons. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  13. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  14. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  15. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    SciTech Connect

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

    Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of coding for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, there has been increased emphasis on the development of automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.

  16. Extremely accurate sequential verification of RELAP5-3D

    SciTech Connect

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides the ability to test that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.

  17. Extremely accurate sequential verification of RELAP5-3D

    DOE PAGES

    Mesina, George L.; Aumiller, David L.; Buschman, Francis X.

    2015-11-19

    Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides the ability to test that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.

  18. Visualization of parameter space for image analysis.

    PubMed

    Pretorius, A Johannes; Bray, Mark-Anthony P; Carpenter, Anne E; Ruddle, Roy A

    2011-12-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step (initialization of sampling) and the last step (visual analysis of output). This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler, a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach.
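    The sampling step can be as simple as a Cartesian sweep over user-chosen parameter values, which is the kind of input a Paramorama-style visual analysis consumes; the parameter names below are made up for illustration:

    ```python
    from itertools import product

    def sample_parameters(grid):
        """Exhaustive Cartesian sweep over a dict: parameter name -> candidate values."""
        names = sorted(grid)
        for combo in product(*(grid[name] for name in names)):
            yield dict(zip(names, combo))

    # Hypothetical image-analysis parameters: 2 x 2 = 4 settings to run and inspect.
    settings = list(sample_parameters({"threshold": [0.2, 0.5], "radius": [1, 3]}))
    ```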

  19. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay

    2005-01-01

    The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.

  20. Augmented input reveals word deafness in a man with frontotemporal dementia

    PubMed Central

    Gibbons, Chris; Oken, Barry; Fried-Oken, Melanie

    2012-01-01

    We describe a 57-year-old, right-handed, English-speaking man initially diagnosed with progressive aphasia. Language assessment revealed inconsistent performance in key areas. Expressive language was reduced to a few short, perseverative phrases. Speech was severely apraxic. Primary modes of communication included gesture, pointing, gaze, physical touch and leading. Responses were 100% accurate when he was provided with written words, with random or inaccurate responses for strictly auditory/verbal input. When instructions to subsequent neuropsychological tests were written instead of spoken, performance improved markedly. A comprehensive audiology assessment revealed no hearing impairment. Neuroimaging was unremarkable. Neurobehavioral evaluation utilizing written input led to diagnoses of word deafness and frontotemporal dementia, resulting in very different management. We highlight the need for alternative modes of language input for assessment and treatment of patients with language comprehension symptoms. PMID:22425725

  1. Augmented input reveals word deafness in a man with frontotemporal dementia.

    PubMed

    Gibbons, Chris; Oken, Barry; Fried-Oken, Melanie

    2012-01-01

    We describe a 57-year-old, right-handed, English-speaking man initially diagnosed with progressive aphasia. Language assessment revealed inconsistent performance in key areas. Expressive language was reduced to a few short, perseverative phrases. Speech was severely apraxic. Primary modes of communication included gesture, pointing, gaze, physical touch and leading. Responses were 100% accurate when he was provided with written words, with random or inaccurate responses for strictly auditory/verbal input. When instructions to subsequent neuropsychological tests were written instead of spoken, performance improved markedly. A comprehensive audiology assessment revealed no hearing impairment. Neuroimaging was unremarkable. Neurobehavioral evaluation utilizing written input led to diagnoses of word deafness and frontotemporal dementia, resulting in very different management. We highlight the need for alternative modes of language input for assessment and treatment of patients with language comprehension symptoms.

  2. How the type of input function affects the dynamic response of conducting polymer actuators

    NASA Astrophysics Data System (ADS)

    Xiang, Xingcan; Alici, Gursel; Mutlu, Rahim; Li, Weihua

    2014-10-01

    There has been a growing interest in smart actuators typified by conducting polymer actuators, especially in their (i) fabrication, modeling and control with minimum external data and (ii) applications in bio-inspired devices, robotics and mechatronics. Their control is a challenging research problem due to the complex and nonlinear properties of these actuators, which cannot be predicted accurately. Based on an input-shaping technique, we propose a new method to improve the conducting polymer actuators’ command-following ability, while minimizing their electric power consumption. We applied four input functions with smooth characteristics to a trilayer conducting polymer actuator to experimentally evaluate its command-following ability under an open-loop control strategy and a simulated feedback control strategy, and, more importantly, to quantify how the type of input function affects the dynamic response of this class of actuators. We have found that the four smooth inputs consume less electrical power than sharp inputs such as a step input with discontinuous higher-order derivatives. We also obtained an improved transient response performance from the smooth inputs, especially under the simulated feedback control strategy, which we have proposed previously [X. Xiang, R. Mutlu, G. Alici, and W. Li, 2014, "Control of conducting polymer actuators without physical feedback: simulated feedback control approach with particle swarm optimization", Smart Materials and Structures, 23]. The idea of using a smooth input command, which results in lower power consumption and better control performance, can be extended to other smart actuators. Consuming less electrical energy or power will have a direct effect on enhancing the operational life of these actuators.
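    A typical smooth input of the kind compared above is a minimum-jerk profile, whose velocity and acceleration are continuous, unlike a step input's. A minimal sketch (the specific profile is a common choice, not necessarily one of the paper's four):

    ```python
    def minimum_jerk(t, T):
        """Smooth 0 -> 1 command over duration T with continuous velocity and
        acceleration, in contrast to a step input's discontinuous derivatives."""
        s = min(max(t / T, 0.0), 1.0)   # clamp normalized time to [0, 1]
        return 10 * s**3 - 15 * s**4 + 6 * s**5
    ```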

  3. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    PubMed

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
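    A stripped-down stand-in for this threshold mechanism (not the authors' Generalized Integrate-and-Fire model) is a leaky integrate-and-fire neuron whose threshold jumps at each spike and relaxes back, producing spike-frequency adaptation; all constants below are generic assumptions:

    ```python
    def lif_adaptive_threshold(I, dt=1e-4, tau_m=0.02, R=1e8, v_rest=-0.07,
                               tau_th=0.05, th0=-0.05, dth=0.01):
        """Euler-integrated leaky integrate-and-fire neuron whose firing threshold
        jumps by dth at each spike and relaxes back to th0 (spike-frequency
        adaptation). I: input current (A) per time step; returns spike times (s)."""
        v, th, spikes = v_rest, th0, []
        for k, i in enumerate(I):
            v += dt * (-(v - v_rest) + R * i) / tau_m   # membrane integration
            th += dt * (th0 - th) / tau_th              # threshold relaxation
            if v >= th:
                spikes.append(k * dt)
                v = v_rest                              # voltage reset
                th += dth                               # threshold jump
        return spikes
    ```

    In this toy model, as in the data above, the moving threshold makes the neuron respond preferentially to rapid input changes rather than to the sustained input level.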

  4. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons

    PubMed Central

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-01-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter—describing somatic integration—and the spike-history filter—accounting for spike-frequency adaptation—dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations. PMID:26907675

  5. Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters

    PubMed Central

    Liu, Fei; Heiner, Monika; Yang, Ming

    2016-01-01

    Stochastic Petri nets (SPNs) have been widely used to model randomness, which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs, which require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830
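
    The α-cut idea behind such analyses can be sketched as follows: a triangular fuzzy rate parameter is cut at a membership level to give an interval, and the stochastic model is simulated at the interval endpoints. This toy example (a single-transition decay net with invented rates) only sketches the simulation-based approach, not the authors' algorithm:

```python
import random

def triangular_alpha_cut(a, b, c, alpha):
    """Interval of the triangular fuzzy number (a, b, c) at membership level alpha."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def gillespie_decay(k, x0=100, t_end=5.0, seed=0):
    """Exact stochastic simulation of a one-transition net: X decays at rate k*X."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while x > 0:
        t += rng.expovariate(k * x)   # waiting time to the next firing
        if t > t_end:
            break
        x -= 1
    return x

# At membership level alpha = 0.5, the fuzzy rate (0.2, 0.5, 0.8) becomes an
# interval; simulating at its endpoints brackets the model output.
k_lo, k_hi = triangular_alpha_cut(0.2, 0.5, 0.8, 0.5)
x_fast = gillespie_decay(k_hi)   # larger rate, fewer tokens remain
x_slow = gillespie_decay(k_lo)
```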

  6. Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters.

    PubMed

    Liu, Fei; Heiner, Monika; Yang, Ming

    2016-01-01

    Stochastic Petri nets (SPNs) have been widely used to model randomness, which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs, which require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information.

  7. Simple PID parameter tuning method based on outputs of the closed loop system

    NASA Astrophysics Data System (ADS)

    Han, Jianda; Zhu, Zhiqiang; Jiang, Ziya; He, Yuqing

    2016-05-01

    Most existing PID parameter tuning methods are effective only when an accurate system model is known in advance, which often requires strict identification experiments and is thus infeasible for many complicated systems. Actually, in most practical engineering applications, it is desirable for the PID tuning scheme to be directly based on the input-output response of the closed-loop system. Thus, a new parameter tuning scheme for PID controllers without an explicit mathematical model is developed in this paper. The paper begins with a new frequency domain properties analysis of the PID controller. After that, the definition of characteristic frequency for the PID controller is given in order to study the mathematical relationship between the PID parameters and the open-loop frequency properties of the controlled system. Then, the concepts of M-field and θ-field are introduced, which are then used to explain how the PID control parameters influence the closed-loop frequency-magnitude property and its time responses. Subsequently, the new PID parameter tuning scheme, i.e., a group of tuning rules, is proposed based on the preceding analysis. Finally, both simulations and experiments are conducted, and the results verify the feasibility and validity of the proposed methods.
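
    As a minimal illustration of judging tuning quality directly from the closed-loop output, the sketch below runs a discrete PID loop around an assumed first-order plant; gains and time constants are invented for the demo and are unrelated to the paper's tuning rules:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Discrete PID loop around a first-order plant tau*dy/dt = -y + u.

    The final output is inspected directly, in the spirit of tuning from
    the closed-loop response rather than from an explicit plant model.
    """
    tau = 0.5
    y, integ, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt                  # integral of the error
        deriv = (err - prev_err) / dt      # backward-difference derivative
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau           # forward-Euler plant update
    return y

final = simulate_pid(kp=2.0, ki=1.5, kd=0.05)   # settles near the setpoint
```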

  8. Integrative neural networks model for prediction of sediment rating curve parameters for ungauged basins

    NASA Astrophysics Data System (ADS)

    Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.

    2015-12-01

    One of the most uncertain modeling tasks in hydrology is the prediction of sediment load and concentration statistics for ungauged streams. This study presents integrated artificial neural network (ANN) models for prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve prediction accuracy; the inputs include soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on a randomly selected 2/3 of a dataset of 94 gauged streams in Ontario, Canada, and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. The ANN model for the rating coefficient α is directly proportional to the rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, whereas it is inversely proportional to vegetation cover and mean annual snowfall. The ANN model for the rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, and standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, concentration-duration curves (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality in ungauged basins.
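
    The rating curve itself is the power law C = αQ^β; for gauged basins the two parameters are typically fit by log-log least squares, and the ANN models then learn to predict them from basin attributes. A sketch of the fit with synthetic data (all numbers invented):

```python
import numpy as np

# Sediment rating curve: concentration C = alpha * Q**beta. On gauged
# basins alpha and beta come from a least-squares fit in log space;
# synthetic samples below stand in for real gauge records.
rng = np.random.default_rng(42)
Q = rng.uniform(1.0, 100.0, 200)                               # discharge samples
alpha_true, beta_true = 0.8, 1.4
C = alpha_true * Q**beta_true * rng.lognormal(0.0, 0.05, 200)  # noisy concentrations

beta_hat, log_alpha_hat = np.polyfit(np.log(Q), np.log(C), 1)  # slope, intercept
alpha_hat = np.exp(log_alpha_hat)
```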

  9. Aerodynamic Parameter Estimation for the X-43A (Hyper-X) from Flight Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Derry, Stephen D.; Smith, Mark S.

    2005-01-01

    Aerodynamic parameters were estimated based on flight data from the third flight of the X-43A hypersonic research vehicle, also called Hyper-X. Maneuvers were flown using multiple orthogonal phase-optimized sweep inputs applied as simultaneous control surface perturbations at Mach 8, 7, 6, 5, 4, and 3 during the vehicle descent. Aerodynamic parameters, consisting of non-dimensional longitudinal and lateral stability and control derivatives, were estimated from flight data at each Mach number. Multi-step inputs at nearly the same flight conditions were also flown to assess the prediction capability of the identified models. Prediction errors were found to be comparable in magnitude to the modeling errors, which indicates accurate modeling. Aerodynamic parameter estimates were plotted as a function of Mach number, and compared with estimates from the pre-flight aerodynamic database, which was based on wind-tunnel tests and computational fluid dynamics. Agreement between flight estimates and values computed from the aerodynamic database was excellent overall.
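
    A simplified stand-in for this kind of estimation is the equation-error method: simulate a one-degree-of-freedom pitch model and recover its stability and control derivatives by linear least squares. All values below are invented for the demo and are not X-43A quantities:

```python
import numpy as np

# Toy pitch model: q_dot = M_alpha*alpha + M_q*q + M_de*delta_e.
# Simulate it, then recover the derivatives from "measured" data.
rng = np.random.default_rng(0)
M_alpha, M_q, M_de = -4.0, -0.8, -6.0     # "true" derivatives (invented)
n, dt = 2000, 0.01
t = np.arange(n) * dt
alpha = 0.05 * np.sin(2 * np.pi * 0.5 * t)             # angle-of-attack excitation
delta_e = 0.02 * np.sign(np.sin(2 * np.pi * 0.2 * t))  # multi-step elevator input
q = np.zeros(n)
for k in range(n - 1):                                 # forward-Euler simulation
    q[k + 1] = q[k] + dt * (M_alpha * alpha[k] + M_q * q[k] + M_de * delta_e[k])

q_dot = np.gradient(q, dt) + rng.normal(0.0, 1e-4, n)  # noisy rate derivative
X = np.column_stack([alpha, q, delta_e])               # regressors
est, *_ = np.linalg.lstsq(X, q_dot, rcond=None)        # equation-error estimates
```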

  10. Multiphoton catalysis with coherent state input: nonclassicality and decoherence

    NASA Astrophysics Data System (ADS)

    Hu, Li-Yun; Wu, Jia-Ni; Liao, Zeyang; Zubairy, M. Suhail

    2016-09-01

    We propose a scheme to generate a new kind of non-Gaussian state—the Laguerre polynomial excited coherent state (LPECS)—by using multiphoton catalysis with coherent state input. The nonclassical properties of the LPECS are studied in terms of nonclassical depth, Mandel’s parameter, second-order correlation, quadrature squeezing, and the negativity of the Wigner function (WF). It is found that the LPECS is highly nonclassical and its nonclassicality depends on the amplitude of the coherent state, the catalysis photon number, and the parameters of the unbalanced beam splitter (BS). In particular, the maximum degree of squeezing can be enhanced by increasing the catalysis photon number. In addition, we examine the effect of decoherence using the WF, which shows that the negative region, the characteristic time of decoherence, and the structure of the WF are affected by catalysis photon number and the parameters of the unbalanced BS. Our work provides general analysis on how to prepare polynomial quantum states, which may be useful in the fields of quantum information and quantum computation.
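
    One of the nonclassicality measures cited, Mandel's Q parameter, is easy to compute from a photon-number distribution: Q < 0 signals sub-Poissonian (nonclassical) light, and for a coherent state (Poissonian statistics) Q = 0. A small sketch:

```python
import math

def mandel_q(probs):
    """Mandel Q from a photon-number distribution p(n), n = 0, 1, ...:
    Q = (<n^2> - <n>^2) / <n> - 1."""
    mean = sum(n * p for n, p in enumerate(probs))
    mean_sq = sum(n * n * p for n, p in enumerate(probs))
    return (mean_sq - mean ** 2) / mean - 1.0

def coherent_probs(alpha2, nmax=60):
    """Poissonian photon statistics of a coherent state with |alpha|^2 = alpha2
    (truncated at nmax, which is ample for the mean used here)."""
    return [math.exp(-alpha2) * alpha2 ** n / math.factorial(n) for n in range(nmax)]

q_coherent = mandel_q(coherent_probs(4.0))   # Poissonian benchmark: Q = 0
```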

  11. Robust tracking control for an air-breathing hypersonic vehicle with input constraints

    NASA Astrophysics Data System (ADS)

    Gao, Gang; Wang, Jinzhi; Wang, Xianghua

    2014-12-01

    The focus of this paper is on the design and simulation of robust tracking control for an air-breathing hypersonic vehicle (AHV), which is affected by high nonlinearity, uncertain parameters and input constraints. The linearisation method is employed for the longitudinal AHV model about a specific trim condition, and then considering the additive uncertainties of three parameters, the linearised model is just in the form of affine parameter dependence. From this point, the linear parameter-varying method is applied to design the desired controller. The poles for the closed-loop system of the linearised model are placed into a desired vertical strip, and the quadratic stability of the closed-loop system is guaranteed. Input constraints of the AHV are addressed by additional linear matrix inequalities. Finally, the designed controller is evaluated on the nonlinear AHV model and simulation results demonstrate excellent tracking performance with good robustness.

  12. Inhibitory control in mind and brain 2.0: Blocked-input models of saccadic countermanding

    PubMed Central

    Logan, Gordon D.; Yamaguchi, Motonori; Schall, Jeffrey D.; Palmeri, Thomas J.

    2015-01-01

    The interactive race model of saccadic countermanding assumes that response inhibition results from an interaction between a go unit, identified with gaze-shifting neurons, and a stop unit, identified with gaze-holding neurons, in which activation of the stop unit inhibits the growth of activation in the go unit to prevent it from reaching threshold. The interactive race model accounts for behavioral data and predicts physiological data in monkeys performing the stop-signal task. We propose an alternative model that assumes that response inhibition results from blocking the input to the go unit. We show that the blocked-input model accounts for behavioral data as accurately as the original interactive race model and predicts aspects of the physiological data more accurately. We extend the models to address the steady-state fixation period before the go stimulus is presented and find that the blocked-input model fits better than the interactive race model. We consider a model in which fixation activity is boosted when a stop signal occurs and find that it fits as well as the blocked input model but predicts very high steady-state fixation activity after the response is inhibited. We discuss the alternative linking propositions that connect computational models to neural mechanisms, the lessons to be learned from model mimicry, and generalization from countermanding saccades to countermanding other kinds of responses. PMID:25706403
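
    The contrast between the two models can be sketched with a toy stop-signal simulation in which a leaky go unit either has its input cut (blocked input) or is inhibited by a stop unit (interactive race). The rates and thresholds below are arbitrary demo values, not the paper's fitted parameters:

```python
import random

def countermand(ssd, mode="blocked", rng=None, dt=1.0, thresh=100.0,
                go_rate=3.0, leak=0.02, noise=2.0, inhib=5.0, t_max=600):
    """One trial of a toy stop-signal race (times in ms).

    'blocked' cuts the go unit's input to zero after the stop-signal delay
    (SSD); 'interactive' lets a stop unit subtract from the go drive instead.
    """
    rng = rng or random.Random()
    a, t = 0.0, 0.0
    while t < t_max:
        drive = go_rate
        if t >= ssd:
            drive = 0.0 if mode == "blocked" else go_rate - inhib
        a = max(0.0, a + dt * (drive - leak * a) + rng.gauss(0.0, noise))
        if a >= thresh:
            return True      # the response escaped inhibition
        t += dt
    return False             # successfully countermanded

rng = random.Random(1)
p_short = sum(countermand(20, rng=rng) for _ in range(200)) / 200   # early stop signal
p_long = sum(countermand(300, rng=rng) for _ in range(200)) / 200   # late stop signal
```

An early stop signal almost always wins the race; a late one almost never does, reproducing the basic inhibition function of the task.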

  13. Inhibitory control in mind and brain 2.0: blocked-input models of saccadic countermanding.

    PubMed

    Logan, Gordon D; Yamaguchi, Motonori; Schall, Jeffrey D; Palmeri, Thomas J

    2015-04-01

    The interactive race model of saccadic countermanding assumes that response inhibition results from an interaction between a go unit, identified with gaze-shifting neurons, and a stop unit, identified with gaze-holding neurons, in which activation of the stop unit inhibits the growth of activation in the go unit to prevent it from reaching threshold. The interactive race model accounts for behavioral data and predicts physiological data in monkeys performing the stop-signal task. We propose an alternative model that assumes that response inhibition results from blocking the input to the go unit. We show that the blocked-input model accounts for behavioral data as accurately as the original interactive race model and predicts aspects of the physiological data more accurately. We extend the models to address the steady-state fixation period before the go stimulus is presented and find that the blocked-input model fits better than the interactive race model. We consider a model in which fixation activity is boosted when a stop signal occurs and find that it fits as well as the blocked input model but predicts very high steady-state fixation activity after the response is inhibited. We discuss the alternative linking propositions that connect computational models to neural mechanisms, the lessons to be learned from model mimicry, and generalization from countermanding saccades to countermanding other kinds of responses.

  14. Efficient determination of accurate atomic polarizabilities for polarizable embedding calculations

    PubMed Central

    Schröder, Heiner

    2016-01-01

    We evaluate embedding potentials, obtained via various methods, used for polarizable embedding computations of excitation energies of para-nitroaniline in water and organic solvents as well as of the green fluorescent protein. We found that isotropic polarizabilities derived from DFT-D3 dispersion coefficients correlate well with those obtained via the LoProp method. We show that these polarizabilities in conjunction with appropriately derived point charges are in good agreement with calculations employing static multipole moments up to quadrupoles and anisotropic polarizabilities for both systems studied. The (partial) use of these easily accessible parameters drastically reduces the computational effort to obtain accurate embedding potentials, especially for proteins. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:27317509

  15. Fast and accurate automated cell boundary determination for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  16. Quality metric for accurate overlay control in <20nm nodes

    NASA Astrophysics Data System (ADS)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, the accuracy of each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging based OVL (IBO) targets, which is obtained on the fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  17. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
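
    The core area-normalization step can be sketched as the standard first-order conversion from radar brightness β0 to σ0 using the DEM-derived local incidence angle. This simplification omits the antenna-pattern and aircraft-motion corrections described above, and the numbers are invented:

```python
import math

def sigma0_from_beta0(beta0_db, local_incidence_deg):
    """Convert radar brightness (beta0, dB) to the backscattering
    coefficient sigma0 (dB) via sigma0 = beta0 * sin(theta_loc),
    the first-order area normalization for terrain-corrected SAR."""
    beta0 = 10.0 ** (beta0_db / 10.0)                          # dB -> linear
    sigma0 = beta0 * math.sin(math.radians(local_incidence_deg))
    return 10.0 * math.log10(sigma0)                           # linear -> dB

flat = sigma0_from_beta0(-8.0, 45.0)    # moderate local incidence
slope = sigma0_from_beta0(-8.0, 20.0)   # foreshortened, steeper-facing cell
```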

  18. Accurate colon residue detection algorithm with partial volume segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.

    2004-05-01

    Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make it very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, which is also a kind of hard segmentation method. As mentioned in their paper, some artifacts were produced, which might affect the accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are in the specific voxel. Parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types have been precisely detected. Meanwhile, the residue has been electronically removed and a very smooth and clean interface along the colon wall has been obtained.
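
    A heavily simplified sketch of the mixture idea: plain EM for a two-component 1D Gaussian mixture, where the E-step posteriors play the role of per-voxel partial-volume fractions. There is no MRF spatial prior here, and the intensities are synthetic:

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """Plain EM for a two-component 1D Gaussian mixture; the E-step
    responsibilities act as soft tissue fractions per sample."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in data:                       # E-step: posterior per component
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        for k in range(2):                   # M-step: update mixture parameters
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, w

rng = random.Random(0)
data = ([rng.gauss(0.0, 1.0) for _ in range(300)]      # "tissue A" intensities
        + [rng.gauss(6.0, 1.0) for _ in range(300)])   # "tissue B" intensities
mu, var, w = em_two_gaussians(data)
```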

  19. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  20. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  1. LSM: perceptually accurate line segment merging

    NASA Astrophysics Data System (ADS)

    Hamid, Naila; Khan, Nazar

    2016-11-01

    Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.
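
    A bare-bones version of the grouping step might look like the following, with fixed angular and gap thresholds standing in for the paper's adaptive mergeability criteria:

```python
import math

def can_merge(s1, s2, max_angle_deg=5.0, max_gap=10.0):
    """Decide whether two segments ((x1, y1), (x2, y2)) are angularly and
    spatially close enough to merge; thresholds are illustrative only."""
    def angle(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1) % math.pi   # orientation mod 180 deg
    da = abs(angle(s1) - angle(s2))
    da = min(da, math.pi - da)                          # wrap-around difference
    if math.degrees(da) > max_angle_deg:
        return False
    gap = min(math.dist(p, q) for p in s1 for q in s2)  # closest endpoint pair
    return gap <= max_gap

def merge(s1, s2):
    """Replace two mergeable segments by their farthest-apart endpoint pair."""
    pts = list(s1) + list(s2)
    return max(((p, q) for p in pts for q in pts),
               key=lambda pq: math.dist(pq[0], pq[1]))

a = ((0.0, 0.0), (50.0, 0.0))
b = ((55.0, 0.5), (100.0, 1.0))    # nearly collinear, small gap
merged = can_merge(a, b) and merge(a, b)
```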

  2. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  3. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
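
    The MUSCL differencing included for comparison can be sketched in a few lines: a minmod-limited linear reconstruction of left/right states at cell faces, which drops to first order at discontinuities. This is a generic 1D sketch, not the paper's 2D implementation:

```python
def minmod(a, b):
    """Slope limiter: the smaller-magnitude slope when signs agree, else zero."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(u, dx=1.0):
    """Second-order MUSCL reconstruction of left/right states at the
    interior faces of a 1D array of cell averages (zero slope at ends)."""
    n = len(u)
    slopes = [0.0] * n
    for i in range(1, n - 1):
        slopes[i] = minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
    left = [u[i] + 0.5 * dx * slopes[i] for i in range(n - 1)]
    right = [u[i + 1] - 0.5 * dx * slopes[i + 1] for i in range(n - 1)]
    return left, right

# Near a discontinuity the limiter returns zero slope (first order),
# which is what prevents spurious oscillations at shocks.
left, right = muscl_faces([0.0, 0.0, 1.0, 1.0])
```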

  4. Obtaining accurate translations from expressed sequence tags.

    PubMed

    Wasmuth, James; Blaxter, Mark

    2009-01-01

    The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.
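
    The simplest fallback tier of such a pipeline is six-frame translation followed by keeping the longest open reading frame; prot4EST layers several more sophisticated methods on top of this. The sketch below uses a deliberately truncated codon table, so only fully decoded chunks are considered:

```python
# Partial codon table, limited to the codons used in the demo sequence.
CODONS = {"ATG": "M", "GCT": "A", "AAA": "K", "TGA": "*", "TAA": "*",
          "TAG": "*", "CAT": "H", "AGC": "S", "TTT": "F", "TTA": "L"}

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(seq):
    """Codon-by-codon translation; unknown codons become 'X'."""
    return "".join(CODONS.get(seq[i:i + 3], "X") for i in range(0, len(seq) - 2, 3))

def longest_orf(seq):
    """Longest stop-free peptide over all six reading frames.

    Chunks containing 'X' are skipped because the demo codon table is partial.
    """
    best = ""
    for s in (seq, revcomp(seq)):
        for frame in range(3):
            for chunk in translate(s[frame:]).split("*"):
                if "X" not in chunk and len(chunk) > len(best):
                    best = chunk
    return best

peptide = longest_orf("ATGGCTAAATTTTGAGCT")
```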

  5. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

    The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron accurate measurement. This report discusses past research on the phenomenon and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of Fresnel diffraction systems are also described.

  6. Accurate radio positions with the Tidbinbilla interferometer

    NASA Technical Reports Server (NTRS)

    Batty, M. J.; Gulkis, S.; Jauncey, D. L.; Rayner, P. T.

    1979-01-01

    The Tidbinbilla interferometer (Batty et al., 1977) is designed specifically to provide accurate radio position measurements of compact radio sources in the Southern Hemisphere with high sensitivity. The interferometer uses the 26-m and 64-m antennas of the Deep Space Network at Tidbinbilla, near Canberra. The two antennas are separated by 200 m on a north-south baseline. By utilizing the existing antennas and the low-noise traveling-wave masers at 2.29 GHz, it has been possible to produce a high-sensitivity instrument with a minimum of capital expenditure. The north-south baseline ensures that a good range of UV coverage is obtained, so that sources lying in the declination range between about -80 and +30 deg may be observed with nearly orthogonal projected baselines of no less than about 1000 lambda. The instrument also provides high-accuracy flux density measurements for compact radio sources.

  7. Magnetic ranging tool accurately guides replacement well

    SciTech Connect

    Lane, J.B.; Wesson, J.P.

    1992-12-21

    This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.

  8. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  9. How many dark energy parameters?

    SciTech Connect

    Linder, Eric V.; Huterer, Dragan

    2005-05-16

    For exploring the physics behind the accelerating universe a crucial question is how much we can learn about the dynamics through next generation cosmological experiments. For example, in defining the dark energy behavior through an effective equation of state, how many parameters can we realistically expect to tightly constrain? Through both general and specific examples (including new parametrizations and principal component analysis) we argue that the answer is 42 - no, wait, two. Cosmological parameter analyses involving a measure of the equation of state value at some epoch (e.g., w_0) and a measure of the change in equation of state (e.g., w') are therefore realistic in projecting dark energy parameter constraints. More elaborate parametrizations could have some uses (e.g., testing for bias or comparison with model features), but do not lead to accurately measured dark energy parameters.
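The two-parameter description argued for above (a present-day value plus a measure of evolution) is commonly written in the following equation-of-state form; the `(w_0, w_a)` notation is the widely used convention and is an assumption about the authors' exact parametrization:

```latex
% Standard two-parameter dark energy equation of state, with a the
% cosmic scale factor: w_0 is the value today (a = 1) and w_a (often
% written w') measures the change in the equation of state.
w(a) = w_0 + w_a\,(1 - a)
```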

  10. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Despite the development of camera networks dedicated to meteor observation, an important discrepancy remains between the measured orbits of meteoroids and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we analyze different methods currently used to compute the velocities and trajectories of meteors: the intersecting-planes method developed by Ceplecha (1987), the least-squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to compare the performance of these techniques objectively, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the trajectory geometry on the result is also presented. We will present the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that, while the MPF is by far the best method for solving a meteor's trajectory and velocity, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy data.
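As a minimal sketch of the multi-parameter-fitting idea (fitting all position samples along the track at once, rather than differencing successive frames), assuming a simple constant-deceleration propagation model instead of the Gural (2012) models actually used in the paper, with made-up values throughout:

```python
import numpy as np

# Hypothetical illustration: estimate a meteor's entry speed by fitting all
# along-track position samples simultaneously, here with the assumed model
#   s(t) = v0*t - 0.5*a*t^2   (constant deceleration; not the paper's model).
rng = np.random.default_rng(0)
v0_true, a_true = 40.0, 6.0                  # km/s, km/s^2 (illustrative)
t = np.linspace(0.0, 1.0, 50)                # one second of observation
s_noisy = v0_true * t - 0.5 * a_true * t**2 \
    + rng.normal(0.0, 0.05, t.size)          # add measurement noise

# The model is linear in (v0, a), so a single linear least-squares solve
# over the design matrix [t, -t^2/2] recovers both parameters at once.
A = np.column_stack([t, -0.5 * t**2])
(v0_est, a_est), *_ = np.linalg.lstsq(A, s_noisy, rcond=None)
print(v0_est, a_est)
```

With a nonlinear propagation model (e.g., exponential deceleration) the same one-shot fit applies, but requires an iterative optimizer, which is where the ill-conditioning discussed in the abstract enters.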

  11. Accurate energy bands calculated by the hybrid quasiparticle self-consistent GW method implemented in the ecalj package

    NASA Astrophysics Data System (ADS)

    Deguchi, Daiki; Sato, Kazunori; Kino, Hiori; Kotani, Takao

    2016-05-01

    We have recently implemented a new version of the quasiparticle self-consistent GW (QSGW) method in the ecalj package released at http://github.com/tkotani/ecalj. Since the new version of the ecalj package is numerically stable and more accurate than the previous versions, we can perform calculations easily without being bothered with tuning input parameters. Here we examine its ability to describe energy band properties, e.g., band-gap energy, eigenvalues at special points, and effective mass, for a variety of semiconductors and insulators. We treat C, Si, Ge, Sn, SiC (in 2H, 3C, and 4H structures), (Al, Ga, In) × (N, P, As, Sb), (Zn, Cd, Mg) × (O, S, Se, Te), SiO2, HfO2, ZrO2, SrTiO3, PbS, PbTe, MnO, NiO, and HgO. We propose that a hybrid QSGW method, where we mix 80% of QSGW and 20% of LDA, gives universally good agreement with experiments for these materials.
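The 80%/20% mixing described above can be summarized schematically as follows; this is a sketch of the mixing idea, on the assumption that the quantity being mixed is the QSGW one-body potential versus the LDA exchange-correlation potential:

```latex
% Hybrid QSGW scheme: mix the QSGW effective potential with LDA,
% using alpha = 0.8 for the materials treated here (schematic form).
V^{\mathrm{hyb}} = \alpha\, V^{\mathrm{QSGW}} + (1 - \alpha)\, V^{\mathrm{LDA}}_{xc},
\qquad \alpha = 0.8
```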

  12. Research on input shaping algorithm for rapid positioning of ultra-precision dual-stage

    NASA Astrophysics Data System (ADS)

    Song, Fazhi; Wang, Yan; Chen, Xinglin; He, Ping

    2015-08-01

As a high-precision servo motion platform, the dual-stage lithographic system uses multiple long-stroke air-bearing linear motors to achieve rapid positioning. Residual vibration, resulting from direct drive, near-zero damping, the parallel decoupled structure, and high velocity, leads to excessively long settling times and is one of the key factors limiting positioning speed. To suppress the residual vibration and achieve high positioning precision in a shorter settling time, this paper designs a feedforward controller with an input shaping algorithm for the rotary motor. A traditional input shaper is sensitive to the system model, and its parameters are difficult to obtain; a parameter self-learning method based on PSO (Particle Swarm Optimization) is therefore proposed in this paper. The system is simulated in MATLAB/Simulink. The experimental results indicate that the input shaping algorithm proposed in this paper brings about a significant reduction in the positioning time of the dual-stage.
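The paper does not give its shaper design, so as a hedged sketch of what an input shaper computes, here is the textbook two-impulse zero-vibration (ZV) shaper for a single second-order mode; the frequency and damping values are illustrative only:

```python
import math

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) input shaper for one flexible mode.

    wn   : natural frequency of the mode [rad/s]
    zeta : damping ratio (0 <= zeta < 1)
    Returns (amplitudes, times); convolving the reference command with
    these impulses cancels the mode's residual vibration (for exact wn, zeta).
    """
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    wd = wn * math.sqrt(1.0 - zeta**2)        # damped natural frequency
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]   # impulse amplitudes (sum to 1)
    times = [0.0, math.pi / wd]               # second impulse half a period later
    return amps, times

# Example mode: 10 Hz, nearly undamped (made-up values).
amps, times = zv_shaper(2 * math.pi * 10.0, 0.01)
print(amps, times)
```

The sensitivity mentioned in the abstract is visible here: `amps` and `times` depend directly on `wn` and `zeta`, which is what motivates learning them, e.g. with PSO, rather than fixing them from an uncertain model.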

  13. A circuit design for multi-inputs stateful OR gate

    NASA Astrophysics Data System (ADS)

    Chen, Qiao; Wang, Xiaoping; Wan, Haibo; Yang, Ran; Zheng, Jian

    2016-09-01

In situ logic operation on memristor memory has attracted researchers' attention. In this brief, a new circuit structure that performs a stateful OR logic operation is proposed. When our OR logic is operated in series with other logic operations (IMP, AND), only two voltages need to be changed, whereas three voltages are necessary in the previous one-step OR logic operation. In addition, this circuit structure can be extended to a multi-input OR operation, completing the family of logic operations on memristive memory in nanocrossbar-based networks. The proposed OR gate enables fast logic operation and reduces the number of required memristors and of sequential steps. Through analysis and simulation, the feasibility of the OR operation is demonstrated and appropriate parameters are obtained.

  14. Modelling cool star spectra with inadequate input physics

    NASA Astrophysics Data System (ADS)

    Lind, Karin

    2015-08-01

The analysis of cool star spectra has a century-long successful history, but the recent explosion in the quality and quantity of spectra has made it clear that state-of-the-art analyses do not do justice to the information content of the data and cannot extract stellar parameters and element abundances to the accuracy that the broader context requires. To progress, the long-standing assumption of local thermodynamic equilibrium in line formation must be lifted, a development that is hindered by gaps in our knowledge of radiative and collisional transition rates. I will exemplify how stellar abundances are affected by missing input physics and discuss various calibration techniques that have been used to circumvent the problem.

  15. Load support system analysis high speed input pinion configuration

    NASA Technical Reports Server (NTRS)

    Gassel, S. S.; Pirvics, J.

    1979-01-01

An analysis and a series of computerized calculations were carried out to explore competing prototype design concepts for a system of a shaft and two tapered-roller bearings supporting the high-speed input pinion of an advanced commercial helicopter transmission. The results were used to evaluate designs both for a straddle arrangement, where the pinion gear is located between the bearings, and for a cantilever arrangement, where the pinion is outboard of the two bearings. The effects of varying parameters, including applied gear load, preload, wall thickness, interference fits, bearing spacing, and pinion gear location, on system rigidity, load distribution, and bearing rating life were assessed. A comparison of the bearing load distributions for these designs demonstrated that the straddle arrangement more equally distributes both radial and axial loads. The performance of these designs over a range of shaft rotational speeds, with lubrication and friction effects included, is also discussed.

  16. Influence of proprioceptive input on parkinsonian tremor.

    PubMed

    Spiegel, Jörg; Fuss, Gerhard; Krick, Christoph; Schimrigk, Klaus; Dillmann, Ulrich

    2002-01-01

    Previous studies have shown a modification of parkinsonian tremor (PT) by proprioceptive input induced by passive joint movements. The authors investigated the impact of electrically evoked proprioceptive input on PT. In eight patients with PT they recorded surface EMG from the opponens pollicis muscle, and forearm extensors and flexors. Rhythmic electrical stimulation was applied to the ipsilateral median nerve at the wrist using a submaximal stimulus intensity and stimulus frequencies between two stimuli per second and five stimuli per second. The tremor frequency did not adapt to the stimulus frequency. Tremor frequency of parkinsonian resting tremor increased significantly in the directly stimulated opponens pollicis muscle (mean +/- standard deviation, 4.35 +/- 0.64 Hz without stimulation versus 4.53 +/- 0.68 Hz with stimulation; P < 0.05, paired t-test), the not directly stimulated forearm muscles (4.90 +/- 0.72 Hz versus 5.18 +/- 0.73 Hz, P < 0.001), and the upper arm muscles (5.13 +/- 0.61 Hz versus 5.36 +/- 0.68 Hz, P < 0.01). Furthermore, the parkinsonian postural tremor accelerated significantly during ipsilateral median nerve stimulation (5.31 +/- 0.99 Hz versus 5.44 +/- 1.03 Hz, P < 0.05). Parkinsonian resting tremor in the forearm muscles also accelerated significantly during ipsilateral ulnar nerve stimulation (4.85 +/- 0.57 Hz versus 5.05 +/- 0.65 Hz, P < 0.05). Contralateral median nerve stimulation had no significant effect. These results suggest a close interaction between proprioceptive input and PT generation.
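The without/with-stimulation comparisons above use a paired t-test on matched tremor frequencies. As a small sketch of that computation (the subject values below are made up and are not the study's data):

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic: mean of the per-subject differences divided by
    the standard error of those differences (df = n - 1)."""
    d = [b - a for a, b in zip(x, y)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# Illustrative tremor frequencies (Hz) for 8 subjects, without and with
# median nerve stimulation (hypothetical numbers).
without = [4.1, 4.5, 3.9, 5.2, 4.4, 4.0, 4.8, 4.6]
with_stim = [4.3, 4.6, 4.1, 5.4, 4.5, 4.2, 5.0, 4.8]
print(paired_t(without, with_stim))
```

A consistent per-subject shift, even a small one like the ~0.2 Hz reported in the study, produces a large paired t statistic because the pairing removes between-subject variability.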

  17. Using Focused Regression for Accurate Time-Constrained Scaling of Scientific Applications

    SciTech Connect

    Barnes, B; Garren, J; Lowenthal, D; Reeves, J; de Supinski, B; Schulz, M; Rountree, B

    2010-01-28

Many large-scale clusters now have hundreds of thousands of processors, and processor counts will be over one million within a few years. Computational scientists must scale their applications to exploit these new clusters. Time-constrained scaling, which is often used, tries to hold total execution time constant while increasing the problem size along with the processor count. However, complex interactions between parameters, the processor count, and execution time complicate determining the input parameters that achieve this goal. In this paper we develop a novel gray-box, focused regression-based approach that assists the computational scientist with maintaining constant run time on increasing processor counts. Combining application-level information from a small set of training runs, our approach allows prediction of the input parameters that result in similar per-processor execution time at larger scales. Our experimental validation across seven applications showed that median prediction errors are less than 13%.
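The time-constrained-scaling workflow can be sketched as follows: fit a performance model from a few small training runs, then invert it to choose the problem size that holds run time constant at a larger processor count. This is a toy sketch under an assumed model `T = c*n/p`, not the paper's gray-box regression:

```python
import numpy as np

# Made-up training runs: (problem size n, processors p, measured time T [s]).
train = np.array([
    (1_000_000,  64, 31.4),
    (2_000_000, 128, 31.0),
    (4_000_000, 256, 31.9),
])
n, p, T = train.T

# Fit the single coefficient of the assumed model T = c * n / p.
c = np.mean(T * p / n)

# Invert the model: pick n so that T stays ~31 s at 1024 processors.
target_T, big_p = 31.0, 1024
n_suggested = target_T * big_p / c
print(int(n_suggested))
```

The real approach replaces the one-coefficient model with a regression over many application-level parameters, which is what makes the inversion step non-trivial.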

  18. Lattices of processes in graphs with inputs

    SciTech Connect

    Shakhbazyan, K.V.

    1995-09-01

This article is a continuation of earlier work, presenting a detailed analysis of finite lattices of processes in graphs with input nodes. Lattices of processes in such graphs are studied by representing the lattices in the form of an algebra of pairs. We define the algebra of pairs, somewhat generalizing the earlier definition. Let K and D be bounded distributive lattices. A sublattice δ ⊆ K × D is called an algebra of pairs if for all k ∈ K we have (k, 1_D) ∈ δ and for all d ∈ D we have (0_K, d) ∈ δ.

  19. Input data to run Landis-II

    USGS Publications Warehouse

    DeJager, Nathan R.

    2017-01-01

    The data are input data files to run the forest simulation model Landis-II for Isle Royale National Park. Files include: a) Initial_Comm, which includes the location of each mapcode, b) Cohort_ages, which includes the ages for each tree species-cohort within each mapcode, c) Ecoregions, which consist of different regions of soils and climate, d) Ecoregion_codes, which define the ecoregions, and e) Species_Params, which link the potential establishment and growth rates for each species with each ecoregion.

  20. Intelligent Graph Layout Using Many Users' Input.

    PubMed

    Yuan, Xiaoru; Che, Limei; Hu, Yifan; Zhang, Xin

    2012-12-01

In this paper, we propose a new strategy for graph drawing utilizing layouts of many subgraphs supplied by a large group of people in a crowdsourcing manner. We developed an algorithm based on Laplacian-constrained distance embedding to merge subgraphs submitted by different users, while attempting to maintain the topological information of the individual input layouts. To facilitate the collection of layouts from many people, a lightweight interactive system has been designed to enable convenient dynamic viewing, modification, and traversal between layouts. Compared with other existing graph layout algorithms, our approach can achieve more aesthetic and meaningful layouts with high user preference.
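As a toy sketch of one interpretation of Laplacian-constrained placement (not the paper's actual algorithm): treat user-supplied positions as anchors and solve the graph Laplacian system for the remaining nodes, which places each free node at the average of its neighbours:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]   # a 4-node path graph
anchors = {0: 0.0, 3: 1.0}         # user-fixed 1-D positions (illustrative)
n = 4

# Build the graph Laplacian L = D - A.
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

# Partition into free and fixed nodes and solve L_ff x_f = -L_fc x_c.
free = [i for i in range(n) if i not in anchors]
fixed = list(anchors)
b = -L[np.ix_(free, fixed)] @ np.array([anchors[i] for i in fixed])
x_free = np.linalg.solve(L[np.ix_(free, free)], b)
print(dict(zip(free, x_free)))
```

On this path graph the free nodes land at 1/3 and 2/3, evenly interpolating the anchors; merging many users' layouts amounts to combining many such anchor sets over shared nodes.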