Accurate Stellar Parameters for Exoplanet Host Stars
NASA Astrophysics Data System (ADS)
Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.
2015-01-01
A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High-resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors, or analyzed using different techniques, often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity, which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high-resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list with carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.
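The gravity comparison described above amounts to computing a mean offset and an rms scatter between two sets of log g values; a minimal sketch with synthetic data (not the paper's actual 42-star sample):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 42-star sample: asteroseismic log g (dex)
logg_seismic = rng.uniform(3.8, 4.6, size=42)
# Spectroscopic values with an assumed systematic offset and scatter
logg_spec = logg_seismic + 0.048 + rng.normal(0.0, 0.05, size=42)

residuals = logg_spec - logg_seismic
offset = residuals.mean()                            # systematic offset (dex)
rms = np.sqrt(np.mean((residuals - offset) ** 2))    # rms scatter about it
print(f"offset = {offset:+.3f} dex, rms scatter = {rms:.3f} dex")
```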
Fundamentals, accuracy and input parameters of frost heave prediction models
NASA Astrophysics Data System (ADS)
Schellekens, Fons Jozef
In this thesis, the frost heave knowledge of physical geographers and soil physicists, a detailed description of the frost heave process, methods to determine soil parameters, and analysis of the spatial variability of these soil parameters are connected to the expertise of civil engineers and mathematicians in the (computer) modelling of the process. A description is given of observations of frost heave in laboratory experiments and in the field. Frost heave modelling is made accessible by a detailed description of the main principles of frost heave modelling in a language which can be understood by persons who do not have a thorough mathematical background. Two examples of practical one-dimensional frost heave prediction models are described: a model developed by Wang (1994) and a model developed by Nixon (1991). Advantages, limitations and some improvements of these models are described. It is suggested that conventional frost heave prediction using estimated extreme input parameters may be improved by using locally measured input parameters. The importance of accurate input parameters in frost heave prediction models is demonstrated in a case study using the frost heave models developed by Wang and Nixon. Methods to determine the input parameters are discussed, concluding with a suite of methods, some of which are new, to determine the input parameters of frost heave prediction models from very basic grain size parameters. The spatial variability of the required input parameters is analysed using data obtained along the Norman Wells-Zama oil pipeline at Norman Wells, NWT, located in the transition between discontinuous and continuous permafrost regions at the northern end of Canada's northernmost oil pipeline. A method based on spatial variability analysis of the input parameters in frost heave models is suggested to optimize the improvement that arises from adequate sampling, while minimizing the costs of obtaining field data. A series of frost heave
Cui, Yunfeng; Bai, Jing
2005-01-01
Liver kinetic study of [^{18}F]2-fluoro-2-deoxy-D-glucose (FDG) metabolism in the human body is an important tool for functional modeling and glucose metabolic rate estimation. In general, the arterial blood time-activity curve (TAC) and the tissue TAC are required as the input and output functions for the kinetic model. For liver studies, however, the arterial input may not be consistent with the actual model input because the liver has a dual blood supply from the hepatic artery (HA) and the portal vein (PV). In this study, the result of model parameter estimation using a dual-input function is compared with that using an arterial-input function. First, a dynamic positron emission tomography (PET) experiment is performed after injection of FDG into the human body. The TACs of aortic blood, PV blood, and five regions of interest (ROIs) in the liver are obtained from the PET image. Then, the dual-input curve is generated by calculating a weighted sum of the arterial and PV input curves. Finally, the five liver ROIs' kinetic parameters are estimated with arterial-input and dual-input functions respectively. The results indicate that the two methods provide different parameter estimations and the dual-input function may lead to more accurate parameter estimation.
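The dual-input construction, a flow-weighted sum of the arterial and portal-vein TACs, can be sketched as follows; the curves and the hepatic-artery fraction are illustrative assumptions, not values from the study:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 121)            # minutes after injection
# Hypothetical time-activity curves (arbitrary units), not the PET data:
tac_artery = t * np.exp(-t / 10.0)         # aortic (HA surrogate) input
tac_pv = 0.8 * t * np.exp(-t / 15.0)       # portal-vein input, more dispersed

f_ha = 0.25   # assumed hepatic-artery fraction of total liver blood supply
dual_input = f_ha * tac_artery + (1.0 - f_ha) * tac_pv
```

Because the dual input is a convex combination, it always lies between the two component curves at every time point.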
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.
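Rank-based model evaluation of the kind reported (Spearman correlations > 0.6) can be reproduced in a few lines; the field values below are synthetic, not the Amsterdam measurements:

```python
import numpy as np

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks
    # (minimal version; assumes no tied values)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(42)
measured = rng.lognormal(mean=-1.0, sigma=0.8, size=50)   # field strength (V/m), synthetic
modelled = measured * rng.lognormal(0.0, 0.4, size=50)    # model with multiplicative error
rho = spearman(modelled, measured)
print(f"Spearman rho = {rho:.2f}")
```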
Uniform electron gas for transition metals: Input parameters
Rose, J. H.; Shore, H. B.
1993-12-15
Input parameters are reported for the theory of ideal metals, a uniform electron-gas model of the elemental transition metals. These input parameters, the electron density and the ''bonding valence,'' have been given previously for the 3d and 4d series of transition metals. Here, we extend our work, based on recent calculations of Sigalas et al. [Phys. Rev. B 45, 5777 (1992)], to include the 5d series. We have also calculated the cohesive energies of the 5d transition metals using the theory of ideal metals with these parameters. The calculations agree with experiment to within ±25%.
Agricultural and Environmental Input Parameters for the Biosphere Model
Kaylie Rasmuson; Kurt Rautenstrauch
2003-06-20
This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.
Accurate Parameter Estimation for Unbalanced Three-Phase System
Chen, Yuan
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
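The αβ (Clarke) transformation step that converts the three unbalanced phase voltages into the orthogonal signal pair used by the NLS estimator can be sketched as follows; the sampling rate and amplitudes are illustrative assumptions:

```python
import numpy as np

def clarke_transform(va, vb, vc):
    """Power-invariant alpha-beta (Clarke) transformation."""
    alpha = np.sqrt(2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta = np.sqrt(2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (vb - vc)
    return alpha, beta

fs, f0 = 5000.0, 50.0                 # assumed sampling rate and nominal frequency
t = np.arange(0.0, 0.2, 1.0 / fs)
va = 1.0 * np.cos(2 * np.pi * f0 * t)
vb = 0.9 * np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)   # amplitude unbalance
vc = 1.1 * np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)
alpha, beta = clarke_transform(va, vb, vc)
```

For a balanced set the resulting (α, β) pair traces a circle of constant magnitude; unbalance distorts it into an ellipse, which is what the NLS estimator then fits.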
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter
2016-04-01
Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. BD, which corrects the output rather than the input, inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.
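The rainfall-multiplier baseline mentioned above can be sketched with a toy example (synthetic hyetographs, not the study's catchment data): in the idealized case where the input error is a pure storm-wise scale factor, multipliers recover the true rainfall exactly; SIP is needed precisely when that assumption fails, i.e. when the temporal pattern is wrong or rain goes undetected.

```python
import numpy as np

rng = np.random.default_rng(1)
n_storms, n_steps = 4, 24
true_rain = rng.gamma(2.0, 1.5, size=(n_storms, n_steps))  # "true" hyetographs (mm/h)
mult = np.exp(rng.normal(0.0, 0.3, size=n_storms))         # storm-wise error factors

observed = true_rain / mult[:, None]   # gauge systematically mis-catches each storm
corrected = observed * mult[:, None]   # multipliers undo a pure scale error exactly
```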
Environmental Transport Input Parameters for the Biosphere Model
M. Wasiolek
2004-09-10
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).
Sensitivity of acoustic predictions to variation of input parameters
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Burley, Casey L.; Marcolini, Michael A.
1994-01-01
Rotor noise prediction codes predict the thickness and loading noise produced by a helicopter rotor, given the blade motion, rotor operating conditions, and fluctuating force distribution over the blade surface. However, the criticality of these various inputs, and their respective effects on the predicted acoustic field, have never been fully addressed. This paper examines the importance of these inputs, and the sensitivity of the acoustic predictions to a variation of each parameter. The effects of collective and cyclic pitch, as well as coning and cyclic flapping, are presented. Blade loading inputs are examined to determine the necessary spatial and temporal resolution, as well as the importance of the chordwise distribution. The acoustic predictions show regions in the acoustic field where significant errors occur when simplified blade motions or blade loadings are used. An assessment of the variation in the predicted acoustic field is balanced by a consideration of Central Processing Unit (CPU) time necessary for the various approximations.
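A one-at-a-time sensitivity sweep of the sort performed in such studies can be sketched generically; the stand-in prediction function, parameter names, and coefficients below are hypothetical, not the actual rotor-noise code's inputs:

```python
import numpy as np

def predicted_level(params):
    """Hypothetical stand-in for a rotor-noise prediction code: maps
    collective pitch (deg), coning angle (deg), and chordwise loading
    resolution to a scalar noise level (dB). Invented functional form."""
    collective, coning, n_chord = params
    return 100.0 + 2.0 * collective - 0.5 * coning - 5.0 / np.sqrt(n_chord)

base = np.array([8.0, 3.0, 16.0])     # assumed baseline inputs
base_level = predicted_level(base)

deltas = {}
for i, name in enumerate(["collective", "coning", "n_chord"]):
    p = base.copy()
    p[i] *= 1.10                      # perturb one input by 10%, others fixed
    deltas[name] = predicted_level(p) - base_level
    print(f"{name:10s}: dLevel = {deltas[name]:+.3f} dB")
```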
Sensitivity of acoustic predictions to variation of input parameters
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Marcolini, Michael A.; Burley, Casey L.
1991-01-01
The noise prediction code WOPWOP predicts the thickness and loading noise produced by a helicopter rotor, given the blade motion, rotor operating conditions, and fluctuating force distribution over the blade surface. However, the criticality of these various inputs, and their respective effects on the predicted acoustic field, have never been fully addressed. This paper examines the importance of these inputs, and the sensitivity of the acoustic predictions to a variation of each parameter. The effects of collective and cyclic pitch, as well as coning and flapping, are presented. Blade loading inputs are examined to determine the necessary spatial and temporal resolution, as well as the importance of the chordwise distribution. The acoustic predictions show regions in the acoustic field where significant errors occur when simplified blade motions or blade loadings are used. An assessment of the variation in the predicted acoustic field is balanced by a consideration of CPU time necessary for the various approximations.
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C_{7}H_{10}O_{2}, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
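The Δ-learning flavour of this idea, using ML to absorb the systematic part of a cheap method's error against an accurate reference, can be sketched with plain ridge regression on synthetic descriptors. This illustrates the concept only; it is not the paper's ML-SQC parameter-tuning scheme, and all data below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical molecular descriptors and the error of a cheap method
# relative to an accurate reference (kcal/mol); all synthetic.
X = rng.normal(size=(500, 8))
true_w = rng.normal(size=8)
err_cheap = X @ true_w + rng.normal(0.0, 0.5, size=500)

# Ridge regression learns the systematic, descriptor-dependent part of
# the error; what remains after correction is mostly the random part.
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ err_cheap)
residual = err_cheap - X @ w
print("MAE before:", np.abs(err_cheap).mean(), "after:", np.abs(residual).mean())
```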
Agricultural and Environmental Input Parameters for the Biosphere Model
K. Rasmuson; K. Rautenstrauch
2004-09-14
This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.
Inhalation Exposure Input Parameters for the Biosphere Model
K. Rautenstrauch
2004-09-10
This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
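The air-submodel relation described above is essentially a product: activity concentration in air equals mass loading times the activity concentration of the resuspended soil. A sketch with assumed illustrative values (not values developed in the report):

```python
# Air submodel relation, Ca = S * Cs, with assumed illustrative values:
S = 6.0e-8      # mass loading of resuspended particles (kg soil per m^3 air)
Cs = 500.0      # radionuclide activity concentration in soil (Bq per kg)
Ca = S * Cs     # activity concentration in air (Bq per m^3)
print(f"Ca = {Ca:.1e} Bq/m^3")
```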
Arterial Input Function Placement for Accurate CT Perfusion Map Construction in Acute Stroke
Ferreira, Rafael M.; Lev, Michael H.; Goldmakher, Gregory V.; Kamalian, Shahmir; Schaefer, Pamela W.; Furie, Karen L.; Gonzalez, R. Gilberto; Sanelli, Pina C.
2013-01-01
OBJECTIVE The objective of our study was to evaluate the effect of varying arterial input function (AIF) placement on the qualitative and quantitative CT perfusion parameters. MATERIALS AND METHODS Retrospective analysis of CT perfusion data was performed on 14 acute stroke patients with a proximal middle cerebral artery (MCA) clot. Cerebral blood flow (CBF), cerebral blood volume (CBV), and mean transit time (MTT) maps were constructed using a systematic method by varying only the AIF placement in four positions relative to the MCA clot, including proximal and distal to the clot in the ipsilateral and contralateral hemispheres. Two postprocessing software programs were used to evaluate the effect of AIF placement on perfusion parameters using a delay-insensitive deconvolution method compared with a standard deconvolution method. RESULTS One hundred sixty-eight CT perfusion maps were constructed for each software package. Both software programs generated a mean CBF at the infarct core of < 12 mL/100 g/min and a mean CBV of < 2 mL/100 g for AIF placement proximal to the clot in the ipsilateral hemisphere and proximal and distal to the clot in the contralateral hemisphere. For AIF placement distal to the clot in the ipsilateral hemisphere, the mean CBF significantly increased to 17.3 mL/100 g/min with delay-insensitive software and to 19.4 mL/100 g/min with standard software (p < 0.05). The mean MTT was significantly decreased for this AIF position. Furthermore, this AIF position yielded qualitatively different parametric maps, being most pronounced with MTT and CBF. Overall, CBV was least affected by AIF location. CONCLUSION For postprocessing of accurate quantitative CT perfusion maps, laterality of the AIF location is less important than avoiding AIF placement distal to the clot as detected on CT angiography. This pitfall is less severe with deconvolution-based software programs using a delay-insensitive technique than with those using a standard deconvolution method.
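CT perfusion maps of this kind come from deconvolving the tissue curve with the AIF. A minimal noiseless sketch with invented curves follows; clinical data require a regularized, delay-insensitive deconvolution rather than the direct solve used here:

```python
import numpy as np

dt = 1.0                                    # seconds between dynamic frames
t = np.arange(0.0, 60.0, dt)
aif = (t + 1.0) * np.exp(-t / 3.0)          # hypothetical arterial input function
residue = np.exp(-t / 8.0)                  # tissue residue function (MTT ~ 8 s)
cbf_true = 0.4                              # flow scale factor (arbitrary units)
tissue = cbf_true * dt * np.convolve(aif, residue)[: t.size]

# Causal convolution written as a lower-triangular Toeplitz matrix,
# so that tissue = A @ (cbf * residue):
idx = np.subtract.outer(np.arange(t.size), np.arange(t.size))
A = dt * np.tril(aif[np.abs(idx)])

# Noiseless demo: direct solve. Noisy clinical data needs a regularized
# (e.g., truncated-SVD, delay-insensitive) deconvolution instead.
k = np.linalg.solve(A, tissue)
cbf_est = k.max()                           # CBF estimate = peak of k(t)
```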
A convenient and accurate parallel Input/Output USB device for E-Prime.
Canto, Rosario; Bufalari, Ilaria; D'Ausilio, Alessandro
2011-03-01
Psychological and neurophysiological experiments require the accurate control of timing and synchrony for Input/Output signals. For instance, a typical Event-Related Potential (ERP) study requires an extremely accurate synchronization of stimulus delivery with recordings. This is typically done via computer software such as E-Prime, and fast communications are typically assured by the Parallel Port (PP). However, the PP is an old and disappearing technology that, for example, is no longer available on portable computers. Here we propose a convenient USB device enabling parallel I/O capabilities. We tested this device against the PP on both a desktop and a laptop machine in different stress tests. Our data demonstrate the accuracy of our system, which suggests that it may be a good substitute for the PP with E-Prime.
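A rough software-side analogue of such stress tests, timestamping repeated trigger calls and inspecting the spread, might look like this; the trigger itself is a placeholder, not the actual USB device's API, and real validation would use external hardware timing:

```python
import time
import statistics

def trigger():
    """Placeholder for a port write (e.g., sending a byte to the I/O
    device); not the actual USB device's API."""
    pass

stamps = []
for _ in range(10000):
    t0 = time.perf_counter()
    trigger()
    stamps.append(time.perf_counter() - t0)

print(f"median {statistics.median(stamps) * 1e6:.2f} us, "
      f"max {max(stamps) * 1e6:.2f} us")
```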
Direct computation of parameters for accurate polarizable force fields
Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.
2014-11-21
We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
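In the fluctuating-charge limit mentioned above, the model reduces to a constrained quadratic minimization: equilibrate site charges under a total-charge constraint via a Lagrange multiplier. A sketch with an invented 3-site hardness matrix (illustrative numbers, not ACKS2 parameters):

```python
import numpy as np

# Hypothetical 3-site fluctuating-charge model: minimize
# E(q) = chi.q + 0.5 q^T H q  subject to sum(q) = Q.
chi = np.array([5.0, 7.5, 7.5])          # site electronegativities (eV)
H = np.array([[10.0, 3.0, 3.0],          # hardness / interaction matrix (eV)
              [3.0, 12.0, 2.0],
              [3.0, 2.0, 12.0]])
Q = 0.0                                  # total molecular charge

# Stationarity + constraint as one linear system (Lagrange multiplier):
n = chi.size
M = np.zeros((n + 1, n + 1))
M[:n, :n] = H
M[:n, n] = M[n, :n] = 1.0
rhs = np.concatenate([-chi, [Q]])
q = np.linalg.solve(M, rhs)[:n]          # equilibrated partial charges
print("charges:", np.round(q, 4))
```

At the solution the electronic chemical potential H q + χ is the same on every site, which is the electronegativity-equalization condition.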
Environmental Transport Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-06-27
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values
Inhalation Exposure Input Parameters for the Biosphere Model
M. Wasiolek
2006-06-05
This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the
Inhalation Exposure Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-09-24
This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the
Soil-related Input Parameters for the Biosphere Model
A. J. Smith
2003-07-02
This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash
Soil-Related Input Parameters for the Biosphere Model
A. J. Smith
2004-09-09
This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing assessment of the often non
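The per-iteration coupling described above can be sketched as follows. This is an illustrative toy under stated assumptions, not the authors' implementation: each iteration runs a standard particle swarm update and then polishes the incumbent best with a few finite-difference gradient descent steps (the paper's gradient-based component is a trust-region-reflective solver; plain projected gradient descent is substituted here to keep the sketch self-contained, and the function name and parameter values are hypothetical).

```python
import numpy as np

def hybrid_fit(residual, bounds, n_particles=20, n_iter=30, rng=0):
    """Toy hybrid optimizer: PSO update + gradient polish per iteration.

    residual: callable p -> residual vector; cost is sum of squares.
    bounds: (lower, upper) parameter bound arrays.
    """
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    cost = lambda p: np.sum(residual(p) ** 2)
    # initialize swarm uniformly within bounds
    x = lo + rng.random((n_particles, dim)) * (hi - lo)
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_c = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_c)].copy()
    for _ in range(n_iter):
        # standard PSO velocity/position update (w=0.7, c1=c2=1.5)
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_c
        pbest[improved], pbest_c[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pbest_c)].copy()
        # gradient polish of the incumbent best (forward differences),
        # accepting a step only if it lowers the cost
        for _ in range(5):
            eps = 1e-6 * (hi - lo)
            grad = np.array([(cost(g + eps[j] * np.eye(dim)[j]) - cost(g)) / eps[j]
                             for j in range(dim)])
            step = np.clip(g - 1e-2 * grad / (np.linalg.norm(grad) + 1e-12)
                           * (hi - lo), lo, hi)
            if cost(step) < cost(g):
                g = step
    return g
```

For instance, fitting a two-parameter exponential decay y = a*exp(-b*t) from a cold start approximately recovers (a, b) in a single run.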
A Study on the Effect of Input Parameters on Springback Prediction Accuracy
NASA Astrophysics Data System (ADS)
Han, Y. S.; Yang, W. H.; Choi, K. Y.; Kim, B. H.
2011-08-01
In this study, the input parameters affecting springback prediction in Pamstamp2G are investigated for a member part using Taguchi's method as a six-sigma tool, on the basis of experiments, in order to obtain more accurate springback predictions. The best combination of input parameters for higher springback prediction accuracy, determined for the member part, is then applied to a fender part. Cracks and wrinkles in the drawing and flanging operations must be removed before springback can be predicted accurately. Springback compensation on the basis of the simulation was carried out, and it is concluded that 95% dimensional accuracy of the springback prediction is achieved in comparison with the tryout panel.
A generalized multiple-input, multiple-output modal parameter estimation algorithm
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Blair, M. A.
1984-01-01
A new method for experimental determination of the modal parameters of a structure is presented. The method allows for multiple input forces to be applied simultaneously, and for an arbitrary number of acceleration response measurements to be employed. These data are used to form the equations of motion for a damped linear elastic structure. The modal parameters are then obtained through an eigenvalue technique. In conjunction with the development of the equations, an extensive computer simulation study was performed. The results of the study show a marked improvement in the mode shape identification for closely-spaced modes as the number of applied forces is increased. Also demonstrated is the influence of noise on the method's ability to identify accurate modal parameters. Here again, an increase in the number of exciters leads to a significant improvement in the identified parameters.
Cheng, C. L.; Gragg, M. J.; Perfect, E.; White, Mark D.; Lemiszki, P. J.; McKay, L. D.
2013-08-24
Numerical simulations are widely used in feasibility studies for geologic carbon sequestration. Accurate estimates of petrophysical parameters are needed as inputs for these simulations. However, relatively few experimental values are available for CO2-brine systems. Hence, a sensitivity analysis was performed using the STOMP numerical code for supercritical CO2 injected into a model confined deep saline aquifer. The intrinsic permeability, porosity, pore compressibility, and capillary pressure-saturation/relative permeability parameters (residual liquid saturation, residual gas saturation, and van Genuchten alpha and m values) were varied independently. Their influence on CO2 injection rates and costs was determined, and the parameters were ranked based on normalized coefficients of variation. The simulations resulted in differences of up to tens of millions of dollars over the life of the project (i.e., the time taken to inject 10.8 million metric tons of CO2). The two most influential parameters were the intrinsic permeability and the van Genuchten m value. Two other parameters, the residual gas saturation and the residual liquid saturation, ranked above the porosity. These results highlight the need for accurate estimates of capillary pressure-saturation/relative permeability parameters for geologic carbon sequestration simulations in addition to measurements of porosity and intrinsic permeability.
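For reference, the capillary pressure-saturation/relative permeability parameters listed above enter through the standard van Genuchten and Mualem-van Genuchten closed forms; a minimal sketch (any parameter values in a call are hypothetical, not the study's):

```python
import numpy as np

def effective_saturation(Sl, Slr, Sgr):
    """Effective saturation from liquid saturation Sl, given residual
    liquid (Slr) and residual gas (Sgr) saturations."""
    return (Sl - Slr) / (1.0 - Slr - Sgr)

def van_genuchten_pc(Se, alpha, m):
    """Capillary pressure from the van Genuchten retention model
    Se = [1 + (alpha*Pc)^n]^(-m), with n = 1/(1-m); alpha in 1/Pa."""
    n = 1.0 / (1.0 - m)
    return (Se ** (-1.0 / m) - 1.0) ** (1.0 / n) / alpha

def mualem_krl(Se, m):
    """Mualem-van Genuchten relative permeability of the wetting phase."""
    return np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2
```

Note that Pc decreases and krl increases monotonically with Se, and krl reaches 1 at full effective saturation.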
Clinically accurate fetal ECG parameters acquired from maternal abdominal sensors
CLIFFORD, Gari; SAMENI, Reza; WARD, Mr. Jay; ROBINSON, Julian; WOLFBERG, Adam J.
2011-01-01
OBJECTIVE To evaluate the accuracy of a novel system for measuring fetal heart rate (FHR) and ST-segment changes using non-invasive electrodes on the maternal abdomen. STUDY DESIGN Fetal ECGs were recorded using abdominal sensors from 32 term laboring women who had a fetal scalp electrode (FSE) placed for a clinical indication. RESULTS Good quality data for FHR estimation were available in 91.2% of the FSE segments and 89.9% of the abdominal electrode segments. The root mean square (RMS) error between the FHR data calculated by both methods over all processed segments was 0.36 beats per minute. ST deviation from the isoelectric point ranged from 0 to 14.2% of R-wave amplitude. The RMS error between the ST change calculated by both methods averaged over all processed segments was 3.2%. CONCLUSION FHR and ST change acquired from the maternal abdomen are highly accurate and on average are clinically indistinguishable from FHR and ST change calculated using FSE data. PMID:21514560
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Sprung, J.L.; Jow, H.-N.; Rollstin, J.A.; Helton, J.C.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Evaluation of severe accident risks: Quantification of major input parameters
Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D.; Amos, C.N.; Helton, J.; Boyd, G.
1992-06-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but on determining the distribution of risk and assessing the uncertainties that account for the breadth of this distribution. Off-site risk is initiated by events both internal and external to the power station. Much of this important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Source Term Panel.
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control be moved at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications that demonstrate the quality and expanded capabilities of the resulting input designs. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
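As described, the two-pass procedure maps directly onto a short routine; the sketch below follows the abstract's logic, with the function name, argument shapes, and return convention being illustrative assumptions:

```python
def validate_scan(inputs, tol, last_valid):
    """Two-pass sensor validation sketch.

    inputs: sensor readings from one scan.
    tol: preset tolerance used in both deviation checks.
    last_valid: last validated measurement, used on a validation fault.
    Returns (value, fault): the displayed value and a fault flag.
    """
    # first pass: average all inputs, flag outliers as suspect
    first_avg = sum(inputs) / len(inputs)
    good = [x for x in inputs if abs(x - first_avg) <= tol]
    if len(good) < 2:
        # validation fault: display the input closest to the last
        # validated measurement
        return min(inputs, key=lambda x: abs(x - last_valid)), True
    # second pass: re-average the good inputs and re-check them
    second_avg = sum(good) / len(good)
    if any(abs(x - second_avg) > tol for x in good):
        return min(inputs, key=lambda x: abs(x - last_valid)), True
    return second_avg, False
```

For example, a scan of [10.0, 10.2, 9.8, 25.0] with a tolerance of 5.0 flags the 25.0 reading as suspect and validates the average of the remaining three.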
Optimal input design for aircraft parameter estimation using dynamic programming principles
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Morelli, Eugene A.
1990-01-01
A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.
The application of input shaping to a system with varying parameters
NASA Technical Reports Server (NTRS)
Magee, David P.; Book, Wayne J.
1991-01-01
The original input shaping technique, which was developed in earlier work, is summarized, and a different definition of residual vibration is proposed. The new definition gives better insight into the ability of the input shaping method to reduce vibration. The extension of input shaping to a system with varying parameters, e.g., natural frequency, is discussed, and the effect of these variations is shown to induce vibration. A modified command shaping technique is developed to eliminate this unwanted motion.
A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns
NASA Astrophysics Data System (ADS)
Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2004-05-01
Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation due to the time-consuming procedure to extract the required resist parameters and the uncertainty in the measurement of some parameters. Weiss suggested a simplified development model that does not require complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so they must be optimized to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, FiRM from Sigma-C is utilized as a resist parameter optimization program. According to our study, the illumination shape, the aberration, and the pupil mesh point have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. The simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80-nm device pattern simulation.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
NASA Astrophysics Data System (ADS)
Faybishenko, B.; McCurley, R. D.; Wang, J. Y.
2004-12-01
To assess, via numerical simulation, the effect of 12 uncertain input parameters (characterizing soil and rock properties and boundary [meteorological] conditions), on net infiltration uncertainty, the Latin Hypercube Sampling (LHS) technique (a modified Monte Carlo approach using a form of stratified sampling) was used. Each uncertain input parameter is presented using a probability distribution function, characterizing the epistemic uncertainty (which arises from the lack of knowledge about parameters, an uncertainty that can be reduced as new information becomes available). One hundred LHS realizations (using the code LHS V2.50 developed at Sandia National Laboratories) of the uncertain input parameters were used to simulate the net infiltration over the Yucca Mountain repository footprint. Simulations were carried out using the code INFIL VA-2.a1 (a modified USGS code INFIL V2.0). The results of simulations were then used to determine the net infiltration probability distribution function. According to theoretical considerations, for 12 uncertain input parameters, from 15 to 36 realizations using the LHS technique should be sufficient to get meaningful results. In this presentation, we will show that the theoretical considerations may significantly underestimate the required number of realizations for the evaluation of the correlation between the net infiltration and uncertain input parameters. We will demonstrate that the calculated net infiltration rate (presented as a probability distribution function) oscillates as a function of simulation runs, and that the correlation between net infiltration rate and the uncertain input parameters depends on the number of simulation runs. For example, the correlation coefficient between the soil (or rock) permeability and net infiltration stabilizes only after 60-80 realizations. The results of the correlation analysis show that the correlation to net infiltration is highest for precipitation, bedrock permeability
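For orientation, the stratified-sampling idea behind LHS (the idea only, not the Sandia LHS V2.50 code itself) can be sketched in a few lines: each parameter's [0, 1) range is divided into as many equal strata as there are samples, one point is drawn per stratum, and the strata are permuted independently per parameter.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Latin Hypercube Sampling on the unit hypercube: one draw per
    stratum per parameter, with strata shuffled independently."""
    rng = np.random.default_rng(rng)
    # one uniform draw inside each of the n_samples strata per column
    u = (np.arange(n_samples)[:, None]
         + rng.random((n_samples, n_params))) / n_samples
    # decorrelate parameters by permuting the strata of each column
    for j in range(n_params):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u

samples = latin_hypercube(100, 12, rng=0)
```

Each column of the result contains exactly one sample per stratum; samples on [0, 1) would then be mapped through each parameter's inverse CDF to impose its probability distribution.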
Kinion, D; Clarke, J
2008-01-24
The scattering parameters of an amplifier based on a dc Superconducting QUantum Interference Device (SQUID) are directly measured at 4.2 K. The results can be described using an equivalent circuit model of the fundamental resonance of the microstrip resonator which forms the input of the amplifier. The circuit model is used to determine the series capacitance required for critical coupling of the microstrip to the input circuit.
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
A new algorithm for importance analysis of the inputs with distribution parameter uncertainty
NASA Astrophysics Data System (ADS)
Li, Luyi; Lu, Zhenzhou
2016-10-01
Importance analysis is aimed at finding the contributions by the inputs to the uncertainty in a model output. For structural systems involving inputs with distribution parameter uncertainty, the contributions by the inputs to the output uncertainty are governed by both the variability and parameter uncertainty in their probability distributions. A natural and consistent way to arrive at importance analysis results in such cases would be a three-loop nested Monte Carlo (MC) sampling strategy, in which the parameters are sampled in the outer loop and the inputs are sampled in the inner nested double-loop. However, the computational effort of this procedure is often prohibitive for engineering problems. This paper therefore proposes a new, efficient algorithm for importance analysis of the inputs in the presence of parameter uncertainty. By introducing a 'surrogate sampling probability density function (SS-PDF)' and incorporating the single-loop MC theory into the computation, the proposed algorithm can reduce the original three-loop nested MC computation into a single-loop one in terms of model evaluation, which requires substantially less computational effort. Methods for choosing proper SS-PDF are also discussed in the paper. The efficiency and robustness of the proposed algorithm have been demonstrated by results of several examples.
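To make the cost structure concrete, the three-loop baseline that the single-loop algorithm replaces can be sketched generically (this illustrates the nested sampling pattern only, not the paper's SS-PDF method; the function and argument names are hypothetical):

```python
import numpy as np

def three_loop_importance(model, sample_theta, sample_xi, sample_rest,
                          n_theta=20, n_xi=40, n_rest=40, rng=0):
    """Three-loop nested MC estimate of the first-order importance of
    one input x_i whose distribution parameters theta are uncertain:
    Var_{theta, x_i}( E[Y | theta, x_i] ) / Var(Y).
    Cost: n_theta * n_xi * n_rest model evaluations."""
    rng = np.random.default_rng(rng)
    cond_means, all_y = [], []
    for _ in range(n_theta):                 # outer: distribution parameters
        theta = sample_theta(rng)
        for _ in range(n_xi):                # middle: the input of interest
            xi = sample_xi(theta, rng)
            ys = [model(xi, sample_rest(rng))    # inner: remaining inputs
                  for _ in range(n_rest)]
            cond_means.append(np.mean(ys))
            all_y.extend(ys)
    return np.var(cond_means) / np.var(all_y)
```

For Y = x1 + x2 with x1 ~ N(mu, 1), mu ~ N(0, 0.5^2), and x2 ~ N(0, 1), the importance of x1 is (1 + 0.25)/2.25, approximately 0.56, and the estimate above converges to it; the 32,000 model evaluations required even for this toy case show why a single-loop reformulation matters.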
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
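The reported scaling behavior follows directly from the linearity of the standard Tofts model: scaling the AIF by a factor s while dividing Ktrans by s leaves the tissue curve, and hence kep, unchanged. A minimal numerical illustration with a toy AIF shape (all values hypothetical):

```python
import numpy as np

def tofts_ct(t, aif, ktrans, kep):
    """Standard Tofts tissue concentration:
    Ct(t) = Ktrans * convolution( AIF, exp(-kep * t) )."""
    dt = t[1] - t[0]
    return ktrans * dt * np.convolve(aif, np.exp(-kep * t))[: t.size]

t = np.linspace(0.0, 5.0, 501)          # time (min)
aif = 5.0 * t * np.exp(-2.0 * t)        # toy AIF shape (mM)
ct = tofts_ct(t, aif, ktrans=0.25, kep=1.0)
# Doubling the AIF amplitude while halving Ktrans reproduces the same
# tissue curve: a fitted Ktrans is inversely proportional to the assumed
# AIF amplitude, while kep is insensitive to AIF scale.
ct_scaled = tofts_ct(t, 2.0 * aif, ktrans=0.125, kep=1.0)
```

Because Ktrans enters only multiplicatively, the two curves are identical, which is why AIF amplitude uncertainty propagates into Ktrans (and ve) but not into kep.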
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
NASA Astrophysics Data System (ADS)
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been widely used in clinical applications. Pharmacokinetic modeling of DCE-MRI data, which extracts quantitative contrast reagent (CR)/tissue-specific model parameters, is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling to better represent the blood CR concentration time-courses. Empirical approaches such as blinded AIF estimation or reference-tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV); however, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the correct scaling factor for reconstructing the ground-truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water-exchange-sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
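The AIF-scaling behavior described above can be reproduced with a toy FXL Tofts model. This is an illustrative sketch, not the authors' code: the AIF shape, parameter values, and brute-force grid fit are all invented for demonstration.

```python
import numpy as np

# Sketch of the FXL Tofts model: rescaling the AIF rescales the fitted
# Ktrans and ve together, so kep = Ktrans/ve is insensitive to AIF scaling.
# All values below are illustrative, not the paper's.

def tofts_ct(t, ktrans, ve, aif):
    """Tissue CR concentration: Ct(t) = Ktrans * (Cp conv exp(-kep*t))."""
    kep = ktrans / ve
    dt = t[1] - t[0]
    return ktrans * np.convolve(aif, np.exp(-kep * t))[: len(t)] * dt

t = np.linspace(0.0, 5.0, 300)                  # minutes
aif_true = 5.0 * t * np.exp(-t)                 # toy gamma-variate AIF (mM)
ct = tofts_ct(t, 0.25, 0.40, aif_true)          # "measured" tissue curve

def fit(aif):
    """Brute-force grid fit, adequate for this two-parameter toy problem."""
    grid = np.linspace(0.05, 1.0, 96)
    best = (np.inf, None, None)
    for k in grid:
        for ve in grid:
            err = np.sum((tofts_ct(t, k, ve, aif) - ct) ** 2)
            if err < best[0]:
                best = (err, k, ve)
    return best[1], best[2]

k1, v1 = fit(aif_true)          # fit with the correct AIF
k2, v2 = fit(2.0 * aif_true)    # AIF amplitude mis-scaled by a factor of 2
print(k1, v1, k1 / v1)          # Ktrans, ve, kep with the correct AIF
print(k2, v2, k2 / v2)          # Ktrans and ve shift; kep stays put
```

Running the fit with the mis-scaled AIF shifts Ktrans and ve together while leaving their ratio kep essentially unchanged, mirroring the paper's qualitative finding.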
NASA Astrophysics Data System (ADS)
Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.
2015-09-01
Selection or calibration of particle property input parameters is one of the key problematic aspects of implementing the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat-bottom cylindrical container onto a plate. In this case study, particle properties such as Young's modulus, friction parameters, and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both the final angle of repose and the FR, followed by the inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross-correlated. The proposed approach provides a systematic method for establishing the importance of specific DEM input parameters for a given system, potentially facilitating their selection or calibration. It is concluded that shortening the process of input parameter selection and calibration can help in the implementation of DEM.
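The ranking logic of a one-parameter-at-a-time sensitivity study can be sketched as follows. The "bulk response" function here is a made-up stand-in for a full DEM discharge simulation, with coefficients chosen only to mimic the paper's qualitative finding.

```python
import math

# One-parameter-at-a-time sensitivity ranking over multi-level sweeps.
# repose_angle() is a toy surrogate, NOT a DEM model: strong in static
# friction, weaker in rolling friction, nearly flat in restitution/stiffness.

def repose_angle(static_mu, rolling_mu, restitution, youngs):
    return (20 + 25 * math.tanh(2 * static_mu) + 8 * rolling_mu
            + 0.3 * restitution + 1e-10 * youngs)

baseline = dict(static_mu=0.5, rolling_mu=0.1, restitution=0.6, youngs=5e6)
levels = {"static_mu": [0.1, 0.5, 0.9],
          "rolling_mu": [0.01, 0.1, 0.3],
          "restitution": [0.2, 0.6, 0.9],
          "youngs": [1e6, 5e6, 1e7]}

spread = {}
for name, vals in levels.items():
    outs = []
    for v in vals:
        p = dict(baseline)
        p[name] = v                      # vary one parameter at a time
        outs.append(repose_angle(**p))
    spread[name] = max(outs) - min(outs)  # output range per parameter

ranking = sorted(spread, key=spread.get, reverse=True)
print(ranking)
```

In a real study each `repose_angle` evaluation would be a full discharge simulation; the bookkeeping around it is the same.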
Input parameters to codes which analyze LMFBR wire-wrapped bundles
Hawley, J.T.; Chan, Y.N.; Todreas, N.E.
1980-12-01
This report provides a current summary of recommended values of key input parameters required by ENERGY code analysis of LMFBR wire-wrapped bundles. These data are based on the interpretation of experimental results from the MIT and other available laboratory programs.
NASA Astrophysics Data System (ADS)
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains
NASA Astrophysics Data System (ADS)
Smith, Z. K.; Steenburgh, R.; Fry, C. D.; Dryer, M.
2009-12-01
Predictions of interplanetary shock arrivals at Earth are important to space weather because they are often followed by geomagnetic disturbances that disrupt human technologies. The success of numerical simulation predictions depends on the codes and on the inputs obtained from solar observations. The inputs are usually divided into the more slowly varying background solar wind, onto which short-duration solar transient events are superposed. This paper examines the dependence of the prediction success on the range of values of the solar transient inputs. These input parameters are common to many 3-D MHD codes. The predictions of the Hakamada-Akasofu-Fry version 2 (HAFv2) model were used because its predictions of shock arrivals were tested, informally in the operational environment, from 1997 to 2006. The events list and HAFv2's performance were published in a series of three papers. The third event set is used to investigate the success and accuracy of the predictions in terms of the input parameter ranges (considered individually). By defining three thresholds for the input speed, duration, and X-ray class, it is possible to categorize the prediction outcomes by these input ranges. The X-ray class gives the most successful classification. Above the highest threshold, 89% of the predictions were successful while below the lowest threshold, only 40% were successful. The accuracy, measured in terms of the time differences between the observed and predicted shock arrivals, also shows largest improvement for the X-ray class. Guidelines are presented for space weather forecasters using the HAFv2 or other interplanetary simulation models.
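The threshold-based categorization can be illustrated with fabricated events: bin the predictions by an input parameter (here the flare X-ray flux) and compute the success rate per bin. The thresholds and events below are invented, not the paper's data.

```python
# Toy illustration of binning shock-arrival predictions by X-ray class.
# Each event is (peak X-ray flux in W/m^2, was the prediction a hit?).
# All numbers are fabricated for demonstration.

events = [
    (8.2e-4, True), (3.1e-4, True), (1.5e-4, True), (9.0e-5, False),
    (6.0e-5, True), (4.0e-5, True), (2.0e-5, False), (8.0e-6, False),
    (5.0e-6, True), (2.0e-6, False),
]

def success_rate(events, lo, hi):
    """Fraction of successful predictions with flux in [lo, hi)."""
    sel = [hit for flux, hit in events if lo <= flux < hi]
    return sum(sel) / len(sel) if sel else None

# three illustrative flux thresholds: strong / medium / weak drivers
print(success_rate(events, 1e-4, 1e0),   # above the highest threshold
      success_rate(events, 1e-5, 1e-4),  # middle range
      success_rate(events, 0.0, 1e-5))   # below the lowest threshold
```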
Subjective-probability-based scenarios for uncertain input parameters: Stratospheric ozone depletion
Hammitt, J.K.
1990-04-01
Risk analysis often depends on complex, computer-based models to describe links between policies (e.g., required emission-control equipment) and consequences (e.g., probabilities of adverse health effects). Appropriate specification of many model aspects is uncertain, including details of the model structure; transport, reaction-rate, and other parameters; and application-specific inputs such as pollutant-release rates. Because these uncertainties preclude calculation of the precise consequences of a policy, it is important to characterize the plausible range of effects. In principle, a probability distribution function for the effects can be constructed using Monte Carlo analysis, but the combinatorics of multiple uncertainties and the often high cost of model runs quickly exhaust available resources. A method to choose sets of input conditions (scenarios) that efficiently represent knowledge about the joint probability distribution of inputs is presented and applied. A simple score function approximately relating inputs to a policy-relevant output, in this case, globally averaged stratospheric ozone depletion, is developed. The probability density function for the score-function value is analytically derived from a subjective joint probability density for the inputs. Scenarios are defined by selected quantiles of the score function. Using this method, scenarios can be systematically selected in terms of the approximate probability distribution function for the output of concern, and probability intervals for the joint effect of the inputs can be readily constructed.
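The scenario-selection procedure can be sketched in a few lines: Monte Carlo sample the subjective joint input distribution, evaluate a cheap score function, and return the input vectors whose scores sit at chosen quantiles. The score function and input distributions below are invented for illustration.

```python
import numpy as np

# Quantile-based scenario selection via a cheap score function.
# Inputs and score are toy stand-ins for the paper's ozone-depletion model.

rng = np.random.default_rng(0)
n = 100_000
# subjective joint distribution of three uncertain inputs (toy lognormals)
x = rng.lognormal(mean=[0.0, 0.2, -0.1], sigma=0.3, size=(n, 3))
# cheap score approximately relating inputs to the policy-relevant output
score = 1.5 * x[:, 0] + 0.8 * x[:, 1] - 0.5 * x[:, 2]

# scenarios = the sampled input vectors closest to selected score quantiles
quantiles = [0.05, 0.50, 0.95]
idx = [np.abs(score - np.quantile(score, q)).argmin() for q in quantiles]
scenarios = x[idx]
print(scenarios)   # three input vectors spanning the output distribution
```

Each selected scenario would then be run through the full (expensive) model, giving approximate probability intervals for the output at a handful of model runs instead of a full Monte Carlo.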
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
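A simplified flavor of the equation-error idea can be shown for a first-order model. Note the hedge: the paper's formulation additionally eliminates unknown initial/boundary terms, which this sketch does not; here the endpoint values of y are taken directly from the (noisy) data.

```python
import numpy as np

# Equation-error least squares without differentiating the data:
# for tau*y' + y = K*u, integrating over a window [a, b] gives
#   tau*(y(b) - y(a)) + int_a^b y dt = K * int_a^b u dt,
# a relation linear in (tau, K). Many windows -> linear least squares.

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2001)
dt = t[1] - t[0]
u = np.sign(np.sin(1.3 * t))                 # square-wave input
tau_true, K_true = 0.8, 2.0

# simulate tau*y' + y = K*u by forward Euler, then add measurement noise
y = np.zeros_like(t)
for i in range(len(t) - 1):
    y[i + 1] = y[i] + dt * (K_true * u[i] - y[i]) / tau_true
y += rng.normal(scale=0.01, size=y.shape)

# build the regression: [dy, -int u] @ [tau, K] = -int y, per window
A, rhs = [], []
for a in range(0, len(t) - 200, 100):
    b = a + 200
    A.append([y[b] - y[a], -dt * np.sum(u[a:b])])
    rhs.append(-dt * np.sum(y[a:b]))
tau_est, K_est = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)[0]
print(tau_est, K_est)
```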
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
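Mutually orthogonal square-wave inputs can be built, for example, from rows of a Sylvester Hadamard matrix (Walsh-type functions); the specific sequences flown in the test are not reproduced here.

```python
import numpy as np

# Walsh-type mutually orthogonal square waves for de-correlating control
# surface inputs: each row of a Hadamard matrix, held piecewise-constant,
# is a +/-1 square wave orthogonal to every other row.

def walsh(n_funcs, length):
    """First n_funcs Hadamard rows sampled at `length` points."""
    size = 1
    while size < n_funcs:
        size *= 2
    H = np.array([[1]])
    while H.shape[0] < size:                 # Sylvester construction
        H = np.block([[H, H], [H, -H]])
    reps = length // size
    return np.repeat(H[:n_funcs], reps, axis=1).astype(float)

inputs = walsh(4, 64)        # e.g. 4 control surfaces, 64 time samples
gram = inputs @ inputs.T     # diagonal Gram matrix => zero cross-correlation
print(gram)
```

Because the Gram matrix is diagonal, a least-squares parameter estimator sees no correlation between the individual surface effectiveness terms, which is the de-correlation property exploited in the flight test.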
NASA Astrophysics Data System (ADS)
Peng, Liang-You; Gong, Qihuang
2010-12-01
The accurate computation of hydrogenic continuum wave functions is very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields. Although various algorithms and codes already exist, most of them are reliable only in certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances, and/or large angular momentum numbers. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and the corresponding Coulomb phase shifts over a wide range of parameters. Without any essential restriction on the angular momentum number, the present code is able to give reliable results over the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code to be very efficient; it should find numerous applications in fields such as strong-field physics.
Program summary:
Program title: HContinuumGautchi
Catalogue identifier: AEHD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1233
No. of bytes in distributed program, including test data, etc.: 7405
Distribution format: tar.gz
Programming language: Fortran90 in fixed format
Computer: AMD Processors
Operating system: Linux
RAM: 20 MBytes
Classification: 2.7, 4.5
Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong-field physics and cold atom physics. Although various algorithms and codes already exist, most are applicable and reliable only in a certain range of parameters. We present here an accurate FORTRAN program for
A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model
Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y
2011-10-27
Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but it can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
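The three indirect metrics compared in the study are simple to compute; here they are sketched on toy intensity samples (the white-matter/gray-matter values are synthetic, not MR data). Lower values indicate better uniformity correction.

```python
import numpy as np

# CVWM / CVGM: coefficient of variation within one tissue class.
# CJV: coefficient of joint variation between white and gray matter.

def cv(x):
    return np.std(x) / np.mean(x)

def cjv(wm, gm):
    return (np.std(wm) + np.std(gm)) / abs(np.mean(wm) - np.mean(gm))

rng = np.random.default_rng(2)
# synthetic WM/GM intensities before and after a hypothetical INU correction:
# residual bias inflates the within-class spread in the uncorrected image
wm_bad = rng.normal(150, 25, 10_000)
gm_bad = rng.normal(100, 25, 10_000)
wm_good = rng.normal(150, 10, 10_000)
gm_good = rng.normal(100, 10, 10_000)

print(cv(wm_bad), cv(wm_good))               # CVWM drops after correction
print(cjv(wm_bad, gm_bad), cjv(wm_good, gm_good))  # so does the CJV
```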
Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid
2016-01-01
A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite's sensor and the scene is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, so accurately identifying the motion blur direction and length is crucial for constructing the PSF and restoring the image precisely. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the heavy noise present in actual remote sensing images often obscures the stripes, making the parameters difficult to calculate and the results error-prone. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive, graph-theory-based image segmentation method, GrabCut, is adopted to effectively extract the edge of the bright central region in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a whole-column statistics method is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
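The spectral principle behind blur-length estimation can be shown in one dimension: a uniform motion blur of length L imposes sinc-shaped nulls spaced N/L apart in the spectrum, so the null positions give L. The 2-D direction-estimation (Radon transform) and GrabCut segmentation steps are omitted from this sketch.

```python
import numpy as np

# 1-D sketch: recover the motion-blur length from the first spectral null.
# A length-L uniform blur kernel has |FFT| zeros at multiples of k = N/L,
# and those zeros survive multiplication by the scene spectrum.

rng = np.random.default_rng(3)
N, L = 1024, 16
signal = rng.normal(size=N)                  # toy "scene"
psf = np.zeros(N)
psf[:L] = 1.0 / L                            # uniform motion blur kernel
blurred = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)).real

spec = np.abs(np.fft.fft(blurred))
# first index (excluding DC) where the spectrum is numerically zero
k_null = int(np.argmax(spec[1 : N // 2] < 1e-8)) + 1
L_est = round(N / k_null)
print(L_est)
```

In 2-D, the same nulls appear as the dark stripes mentioned in the abstract, and the Radon transform is what locates their orientation before the spacing is measured.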
Impacts of input parameter spatial aggregation on an agricultural nonpoint source pollution model
NASA Astrophysics Data System (ADS)
FitzHugh, T. W.; Mackay, D. S.
2000-09-01
The accuracy of agricultural nonpoint source pollution models depends in part on how well model input parameters describe the relevant characteristics of the watershed. The spatial extent of input parameter aggregation has previously been shown to have a substantial impact on model output. This study investigates this problem using the Soil and Water Assessment Tool (SWAT), a distributed-parameter agricultural nonpoint source pollution model. The primary question addressed here is: how does the size or number of subwatersheds used to partition the watershed affect model output, and what are the processes responsible for model behavior? SWAT was run on the Pheasant Branch watershed in Dane County, WI, using eight watershed delineations, each with a different number of subwatersheds. Model runs were conducted for the period 1990-1996. Streamflow and outlet sediment predictions were not seriously affected by changes in subwatershed size. The lack of change in outlet sediment is due to the transport-limited nature of the Pheasant Branch watershed and the stable transport capacity of the lower part of the channel network. This research identifies the importance of channel parameters in determining the behavior of SWAT's outlet sediment predictions. Sediment generation estimates do change substantially, dropping by 44% between the coarsest and the finest watershed delineations. This change is primarily due to the sensitivity of the runoff term in the Modified Universal Soil Loss Equation to the area of hydrologic response units (HRUs). This sensitivity likely occurs because SWAT was implemented in this study with a very detailed set of HRUs. In order to provide some insight on the scaling behavior of the model two indexes were derived using the mathematics of the model. The indexes predicted SWAT scaling behavior from the data inputs without a need for running the model. Such indexes could be useful for model users by providing a direct way to evaluate alternative models
Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters
NASA Astrophysics Data System (ADS)
Falkenberg, T. V.; Vršnak, B.; Taktakishvili, A.; Odstrcil, D.; MacNeice, P.; Hesse, M.
2010-06-01
Understanding space weather is not only important for satellite operations and human exploration of the solar system but also to phenomena here on Earth that may potentially disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections (CMEs), but in order to predict the caused effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output are the input parameters of upper limit for ambient solar wind velocity, CME density, and elongation factor, regardless of whether one's main interest is arrival time, signal shape, or signal amplitude of the ICME. We find that though ENLILv2.5b currently does not include the magnetic cloud of the ICME, it replicates the signal at L1 well in the studied event. The arrival time difference between satellite data and the ENLILv2.5b baseline run of this study is less than 30 min.
Zhang, Xuesong; Liang, Faming; Yu, Beibei; Zong, Ziliang
2011-11-09
Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision-making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data through rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of the different uncertainty sources and including output error in the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.
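The input-uncertainty idea can be sketched with a minimal Metropolis sampler, using a deliberately trivial linear stand-in for the hydrologic model rather than a neural network: a rainfall multiplier m is sampled jointly with the model parameter, so systematic input error is absorbed by m rather than biasing the parameter alone.

```python
import numpy as np

# Minimal Metropolis MCMC with a rainfall multiplier m on the input.
# "Model": flow = theta * (m * rain_obs). All data are synthetic.

rng = np.random.default_rng(4)
rain_true = rng.uniform(0, 10, 50)
rain_obs = 0.8 * rain_true                       # gauges under-catch by 20%
flow_obs = 1.5 * rain_true + rng.normal(0, 0.2, 50)

def log_post(theta, m):
    if theta <= 0 or m <= 0:
        return -np.inf
    resid = flow_obs - theta * (m * rain_obs)
    # Gaussian likelihood + weak N(1, 0.5) prior on the multiplier
    return -0.5 * np.sum(resid ** 2) / 0.2 ** 2 - 0.5 * ((m - 1) / 0.5) ** 2

theta, m = 1.0, 1.0
lp = log_post(theta, m)
samples = []
for _ in range(20_000):
    t2, m2 = theta + rng.normal(0, 0.05), m + rng.normal(0, 0.05)
    lp2 = log_post(t2, m2)
    if np.log(rng.uniform()) < lp2 - lp:         # Metropolis accept/reject
        theta, m, lp = t2, m2, lp2
    samples.append((theta, m))

post = np.array(samples[5000:])                  # discard burn-in
# only the product theta*m is pinned by the data; it should sit near
# 1.5 / 0.8 = 1.875, while the prior on m splits credit along the ridge
print((post[:, 0] * post[:, 1]).mean())
```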
MING Parameter Input: EMMA Model Redox Half Reaction Equation ΔG Corrections for pH
D.M. Jolley
1998-07-23
The purpose of this calculation is to provide appropriate input parameters for use in MING V 1.0 (CSCI 300 18 V 1.0). This calculation corrects the Grogan and McKinley (1990) values for ΔG so that the data will function in the MING model. The Grogan and McKinley (1990) ΔG data are presented for a pH of 12, whereas the MING model requires that ΔG be reported at standard conditions (i.e., a pH of 0).
Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Burger, A.; Gorgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.
2014-09-03
Here, the proton spectrum from the ⁵⁷Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant-temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, using a spin cutoff parameter with a much weaker excitation-energy dependence than predicted by the Fermi-gas model.
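How the spin cutoff parameter enters the level density can be sketched schematically: the total level density is distributed over spins J by a Gaussian-like factor controlled by σ, so the assumed energy dependence of σ directly shifts the population of high-spin levels seen by a Hauser-Feshbach calculation. Formulas are in standard textbook form; all numerical values are illustrative, not the paper's fit results.

```python
import math

# Spin distribution of the nuclear level density for spin cutoff sigma,
# and a schematic back-shifted Fermi-gas total level density (the full
# expression also carries a 1/sigma factor, omitted here for simplicity).

def spin_distribution(J, sigma):
    """Fraction of levels with spin J for spin cutoff parameter sigma."""
    return ((2 * J + 1) / (2 * sigma ** 2)
            * math.exp(-(J + 0.5) ** 2 / (2 * sigma ** 2)))

def rho_fermi_gas(E, a=6.0, delta=0.0):
    """Schematic Fermi-gas total level density (MeV^-1)."""
    U = E - delta
    return (math.exp(2 * math.sqrt(a * U))
            / (12 * math.sqrt(2) * a ** 0.25 * U ** 1.25))

sigma_fg = 4.0    # illustrative Fermi-gas sigma at some excitation energy
sigma_weak = 3.0  # illustrative weaker-energy-dependence value, same energy
# a smaller sigma suppresses high-spin (here J = 8) level populations:
print(spin_distribution(8, sigma_fg), spin_distribution(8, sigma_weak))
print(rho_fermi_gas(5.0))
```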
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely the tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz-assumption-based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is
Gauthier, Marianne; Pitre-Champagnat, Stéphanie; Tabarout, Farid; Leguerney, Ingrid; Polrot, Mélanie; Lassau, Nathalie
2012-01-01
AIM: To evaluate the sources of variation influencing the microvascularization parameters measured by dynamic contrast-enhanced ultrasonography (DCE-US). METHODS: Firstly, we evaluated, in vitro, the impact of the manual repositioning of the ultrasound probe and the variations in flow rates. Experiments were conducted using a custom-made phantom setup simulating a tumor and its associated arterial input. Secondly, we evaluated, in vivo, the impact of multiple contrast agent injections and of examination day, as well as the influence of the size of region of interest (ROI) associated with the arterial input function (AIF). Experiments were conducted on xenografted B16F10 female nude mice. For all of the experiments, an ultrasound scanner along with a linear transducer was used to perform pulse inversion imaging based on linear raw data throughout the experiments. Semi-quantitative and quantitative analyses were performed using two signal-processing methods. RESULTS: In vitro, no microvascularization parameters, whether semi-quantitative or quantitative, were significantly correlated (P values from 0.059 to 0.860) with the repositioning of the probe. In addition, all semi-quantitative microvascularization parameters were correlated with the flow variation while only one quantitative parameter, the tumor blood flow, exhibited P value lower than 0.05 (P = 0.004). In vivo, multiple contrast agent injections had no significant impact (P values from 0.060 to 0.885) on microvascularization parameters. In addition, it was demonstrated that semi-quantitative microvascularization parameters were correlated with the tumor growth while among the quantitative parameters, only the tissue blood flow exhibited P value lower than 0.05 (P = 0.015). Based on these results, it was demonstrated that the ROI size of the AIF had significant influence on microvascularization parameters: in the context of larger arterial ROI (from 1.17 ± 0.6 mm3 to 3.65 ± 0.3 mm3), tumor blood flow and
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
The simulations of dynamic, spatially distributed non-linear models are affected by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000) with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations occurred in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and in crops whose cycles partly coincide with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results
Comparisons of CAP88PC version 2.0 default parameters to site specific inputs
Lehto, M. A.; Courtney, J. C.; Charter, N.; Egan, T.
2000-03-02
The effects of varying the input for the CAP88PC Version 2.0 program on the total effective dose equivalents (TEDEs) were determined for hypothetical releases from the Hot Fuel Examination Facility (HFEF) located at the Argonne National Laboratory site on the Idaho National Engineering and Environmental Laboratory (INEEL). Values for site specific meteorological conditions and agricultural production parameters were determined for the 80 km radius surrounding the HFEF. Four nuclides, ³H, ⁸⁵Kr, ¹²⁹I, and ¹³⁷Cs (with its short-lived progeny, ¹³⁷ᵐBa), were selected for this study; these are the radioactive materials most likely to be released from HFEF under normal or abnormal operating conditions. Use of site specific meteorological parameters of annual precipitation, average temperature, and the height of the inversion layer decreased the TEDE from ¹³⁷Cs-¹³⁷ᵐBa by up to 36%; reductions for other nuclides were less than 3%. Use of the site specific agricultural parameters reduced TEDE values between 7% and 49%, depending on the nuclide. Reductions are associated with decreased committed effective dose equivalents (CEDEs) from the ingestion pathway. This is not surprising since the HFEF is located well within the INEEL exclusion area, and the surrounding area closest to the release point is a high desert with limited agricultural diversity. Livestock and milk production are important in some counties at distances greater than 30 km from the HFEF.
NASA Astrophysics Data System (ADS)
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and no generic implementation is available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random the interferograms, the calibrator stars, and the errors on their diameters, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
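The bootstrap-over-calibration idea can be sketched in a few lines. This is a toy version, not the PIONIER pipeline: hypothetical squared visibilities for a science target and one calibrator are resampled with replacement, the calibrator diameter model is perturbed within its quoted error, and the calibrated observable is recomputed each time, yielding an empirical sampling of p(O).

```python
import random
import statistics

random.seed(1)

# Hypothetical raw squared visibilities for a science target and a calibrator
# (the calibrator's model V^2, from its assumed diameter, is 0.90 +/- 0.02).
sci_v2 = [0.52, 0.50, 0.55, 0.49, 0.53, 0.51]
cal_v2 = [0.44, 0.46, 0.45, 0.43, 0.47]
cal_model_v2, cal_model_sigma = 0.90, 0.02

def bootstrap_calibrated(n_boot=5000):
    """Resample interferograms and perturb the calibrator diameter model,
    then calibrate: V2 = V2_sci / (V2_cal_obs / V2_cal_model).
    Returns samples of the calibrated observable, i.e. a sampling of p(O)."""
    out = []
    for _ in range(n_boot):
        s = statistics.mean(random.choices(sci_v2, k=len(sci_v2)))
        c = statistics.mean(random.choices(cal_v2, k=len(cal_v2)))
        model = random.gauss(cal_model_v2, cal_model_sigma)  # diameter error
        out.append(s / (c / model))
    return out

samples = bootstrap_calibrated()
print(round(statistics.mean(samples), 2), round(statistics.stdev(samples), 3))
```

The ratio construction makes the calibrated samples non-Gaussian and correlated across observables sharing a calibrator, which is exactly why storing the sampled PDF rather than a single error bar is attractive.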
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations, which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of the linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.
Accurate parameters for HD 209458 and its planet from HST spectrophotometry
NASA Astrophysics Data System (ADS)
del Burgo, C.; Allende Prieto, C.
2016-08-01
We present updated parameters for the star HD 209458 and its transiting giant planet. The stellar angular diameter θ=0.2254±0.0017 mas is obtained from the average ratio between the absolute flux observed with the Hubble Space Telescope and that of the best-fitting Kurucz model atmosphere. This angular diameter represents an improvement in precision of more than four times compared to available interferometric determinations. The stellar radius R⋆=1.20±0.05 R⊙ is ascertained by combining the angular diameter with the Hipparcos trigonometric parallax, which is the main contributor to its uncertainty, and therefore the radius accuracy should be significantly improved with Gaia's measurements. The radius of the exoplanet Rp=1.41±0.06 RJ is derived from the corresponding transit depth in the light curve and our stellar radius. From the model fitting, we accurately determine the effective temperature, Teff=6071±20 K, which is in perfect agreement with the value of 6070±24 K calculated from the angular diameter and the integrated spectral energy distribution. We also find precise values from recent Padova Isochrones, such as R⋆=1.20±0.06 R⊙ and Teff=6099±41 K. We arrive at a consistent picture from these methods and compare the results with those from the literature.
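The radius determination above combines the angular diameter with the parallax via R = (θ/2)·d, d = 1/π. A minimal sketch; the parallax value used here, 20.15 mas, is an assumed Hipparcos-like figure chosen so the numbers are illustrative, not a value quoted in the abstract:

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)  # milliarcseconds to radians
PC_TO_M = 3.0857e16                          # parsec in meters
R_SUN = 6.957e8                              # solar radius in meters

def stellar_radius(theta_mas, parallax_mas):
    """Linear radius (in solar radii) from an angular diameter theta and a
    trigonometric parallax: R = (theta/2) * d, with d = 1000/parallax pc."""
    d_m = (1000.0 / parallax_mas) * PC_TO_M
    return 0.5 * theta_mas * MAS_TO_RAD * d_m / R_SUN

# theta = 0.2254 mas from the paper; 20.15 mas is an assumed parallax.
print(round(stellar_radius(0.2254, 20.15), 2))  # ~1.20 R_sun
```

Since θ is now known to better than 1%, the parallax dominates the radius error budget, which is why the abstract anticipates a large improvement from Gaia.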
Hydrological Relevant Parameters from Remote Sensing - Spatial Modelling Input and Validation Basis
NASA Astrophysics Data System (ADS)
Hochschild, V.
2012-12-01
This keynote paper demonstrates how multisensor remote sensing data are used as spatial input for mesoscale hydrological modeling as well as for sophisticated validation purposes. The tasks of water resources management are addressed, as is the role of remote sensing in regional catchment modeling. Parameters derived from remote sensing discussed in this presentation include land cover, topographical information from digital elevation models, biophysical vegetation parameters, surface soil moisture, evapotranspiration estimates, lake level measurements, determination of snow covered area, lake ice cycles, soil erosion type, mass wasting monitoring, sealed area, and flash flood estimation. The current capabilities of satellite and airborne systems are discussed, and data integration into GIS and hydrological modeling, scaling issues, and quality assessment are considered. The presentation provides an overview of our own research examples from Germany, Tibet and Africa (Ethiopia, South Africa) as well as other international research activities. Finally, the paper gives an outlook on upcoming sensors and summarizes the possibilities of remote sensing in hydrology.
Covey, Curt; Lucas, Donald D.; Tannahill, John; Garaizar, Xabier; Klein, Richard
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM's behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT's ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
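The Morris screening idea can be sketched compactly: walk a random trajectory through parameter space, perturb one input at a time, and average the absolute "elementary effects" per input. This is a toy on a hypothetical three-parameter function, not CAM; it shows why the cost grows linearly in the number of parameters (one extra model run per parameter per trajectory).

```python
import random

random.seed(2)

def f(x):
    """Toy model with a linear term, a quadratic term, and an interaction."""
    return x[0] + 2.0 * x[1] ** 2 + x[0] * x[2]

def morris_mu_star(func, dim, n_traj=50, delta=0.5):
    """Morris one-at-a-time screening: along each random trajectory, perturb
    one input at a time and record the absolute elementary effect
    |f(x + delta*e_i) - f(x)| / delta. Returns the mean |effect| per input."""
    effects = [[] for _ in range(dim)]
    for _ in range(n_traj):
        x = [random.random() for _ in range(dim)]
        base = func(x)
        for i in random.sample(range(dim), dim):  # random perturbation order
            x[i] += delta
            new = func(x)
            effects[i].append(abs(new - base) / delta)
            base = new
    return [sum(e) / len(e) for e in effects]

mu_star = morris_mu_star(f, 3)
print([round(m, 2) for m in mu_star])  # the quadratic input ranks highest
```

Unlike EOAT, each elementary effect is evaluated at a different random point, so nonlinear and interaction effects (like the two convection parameters mentioned above) inflate the mean absolute effect instead of being missed.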
NASA Astrophysics Data System (ADS)
Bruntt, H.; Basu, S.; Smalley, B.; Chaplin, W. J.; Verner, G. A.; Bedding, T. R.; Catala, C.; Gazzano, J.-C.; Molenda-Żakowicz, J.; Thygesen, A. O.; Uytterhoeven, K.; Hekker, S.; Huber, D.; Karoff, C.; Mathur, S.; Mosser, B.; Appourchaux, T.; Campante, T. L.; Elsworth, Y.; García, R. A.; Handberg, R.; Metcalfe, T. S.; Quirion, P.-O.; Régulo, C.; Roxburgh, I. W.; Stello, D.; Christensen-Dalsgaard, J.; Kawaler, S. D.; Kjeldsen, H.; Morris, R. L.; Quintana, E. V.; Sanderfer, D. T.
2012-06-01
We present a detailed spectroscopic study of 93 solar-type stars that are targets of the NASA/Kepler mission and provide the detailed chemical composition of each target. We find that the overall metallicity is well represented by Fe lines. Relative abundances of light elements (CNO) and α elements are generally higher for low-metallicity stars. Our spectroscopic analysis benefits from the accurately measured surface gravity from the asteroseismic analysis of the Kepler light curves. The asteroseismic log g is accurate to better than 0.03 dex and is held fixed in the analysis. We compare our Teff determination with a recent colour calibration of VT-KS [TYCHO V magnitude minus Two Micron All Sky Survey (2MASS) KS magnitude] and find very good agreement, with a scatter of only 80 K, showing that this index can be used for other nearby Kepler targets. The asteroseismic log g values agree very well with the classical determination using the Fe I-Fe II balance, although we find a small systematic offset of 0.08 dex (asteroseismic log g values are lower). The abundance patterns of metals, α elements and the light elements (CNO) show that a simple scaling by [Fe/H] is adequate to represent the metallicity of the stars, except for stars with metallicity below -0.3, where α-enhancement becomes important. However, this is only important for a very small fraction of the Kepler sample. We therefore recommend that a simple scaling with [Fe/H] be employed in the asteroseismic analyses of large ensembles of solar-type stars.
Ralph, Duncan K.; Matsen, Frederick A.
2016-01-01
VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM “factorization” strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM. PMID:26751373
NASA Astrophysics Data System (ADS)
Kaiser, Andreas; Buchholz, Arno; Neugirg, Fabian; Schindewolf, Marcus
2016-04-01
Calanchi landscapes in central Italy have been subject to geoscientific research for many years, not exclusively but especially for questions regarding soil erosion and land degradation. Seasonal dynamics play an important role in morphological processes within the Calanchi. As in most Mediterranean landscapes, at the research site in Val d'Orcia long, dry summers are ended by heavy rainfall events in autumn. The latter contribute most of the annual sediment output of the incised hollows and can cause damage to agricultural land and infrastructure. While research toward understanding Calanco development is of high importance, the complex morphology and thus limited accessibility impedes in situ work. To improve the understanding of morphodynamics without unnecessarily disturbing natural conditions, a remote sensing and erosion modelling approach was carried out in the presented work. UAV- and LiDAR-based very high resolution digital surface models were produced and served as input for the raster-based, physically based soil erosion model EROSION3D. Additionally, data on infiltration, runoff generation and sediment detachment were generated with artificial rainfall simulations, the most invasive but unavoidable method. To increase the 1 m plot length virtually to around 20 m, the sediment-loaded runoff water was reintroduced to the plot by a reflux system. Rather elaborate logistics were required to set up the simulator on strongly inclined slopes, to establish sufficient water supply and to secure the simulator on the slope, but the experiments produced plausible results and valuable input data for modelling. The model results are then compared to the repeated UAV and LiDAR campaigns and the resulting digital elevation models of difference. By simulating different rainfall and moisture scenarios and implementing in situ measured weather data, runoff induced processes can be distinguished from gravitational slides and rockfall.
Effective Temperatures of Selected Main-Sequence Stars with the Most Accurate Parameters
NASA Astrophysics Data System (ADS)
Soydugan, F.; Eker, Z.; Soydugan, E.; Bilir, S.; Gökçe, E. Y.; Steer, I.; Tüysüz, M.; Šenyüz, T.; Demircan, O.
2015-07-01
In this study we investigate the distributions of the properties of detached double-lined binaries (DBs) in the mass-luminosity, mass-radius, and mass-effective temperature diagrams. We have improved the classical mass-luminosity relation based on the database of DBs by Eker et al. (2014a). Based on the accurate observational data available to us we propose a method for improving the effective temperatures of eclipsing binaries with accurate mass and radius determinations.
NASA Astrophysics Data System (ADS)
Mellinger, Philippe; Döhler, Michael; Mevel, Laurent
2016-09-01
An important step in the operational modal analysis of a structure is to infer its dynamic behavior through its modal parameters. They can be estimated by various modal identification algorithms that fit a theoretical model to measured data. When output-only data is available, i.e. measured responses of the structure, frequencies, damping ratios and mode shapes can be identified assuming that ambient sources like wind or traffic excite the system sufficiently. When input data is also available, i.e. signals used to excite the structure, input/output identification algorithms are used. The use of input information usually provides better modal estimates in a desired frequency range. While the identification of the modal mass is not considered in this paper, we focus on the estimation of the frequencies, damping ratios and mode shapes, relevant for example for modal analysis during in-flight monitoring of aircraft. When identifying the modal parameters from noisy measurement data, the information on their uncertainty is most relevant. In this paper, new variance computation schemes for modal parameters are developed for four subspace algorithms, including output-only and input/output methods, as well as data-driven and covariance-driven methods. For the input/output methods, the known inputs are considered as realizations of a stochastic process. Based on Monte Carlo validations, the quality of identification, accuracy of variance estimations and sensor noise robustness are discussed. Finally, these algorithms are applied to real measured data obtained during vibration tests of an aircraft.
Butcher, B.M.
1997-08-01
A summary of the input parameter values used in final predictions of closure and waste densification in the Waste Isolation Pilot Plant disposal room is presented, along with supporting references. These predictions are referred to as the final porosity surface data and will be used for WIPP performance calculations supporting the Compliance Certification Application to be submitted to the U.S. Environmental Protection Agency. The report includes tables that list all of the input parameter values, references citing their sources, and in some cases references to more complete descriptions of the considerations leading to the selection of values.
NASA Astrophysics Data System (ADS)
Miyasato, Yoshihiko
The problem of constructing model reference adaptive H∞ control for distributed parameter systems of hyperbolic type preceded by an unknown input nonlinearity, such as a dead zone or backlash, is considered in this paper. Distributed parameter systems are infinite dimensional processes, but the proposed control scheme is constructed from finite dimensional controllers. An adaptive inverse model is introduced to estimate and compensate for the input nonlinearity. A stabilizing control signal is added to regulate the effect of spill-over terms, and it is derived as the solution of a certain H∞ control problem in which the residual part of the inverse model and the spill-over term are treated as external disturbances to the process.
NASA Astrophysics Data System (ADS)
Haller, Julian; Wilkens, Volker
2012-11-01
For power levels up to 200 W and sonication times up to 60 s, the electrical power, the voltage and the electrical impedance (more exactly: the ratio of RMS voltage to RMS current) have been measured for a piezocomposite high intensity therapeutic ultrasound (HITU) transducer with an integrated matching network, two piezoceramic HITU transducers with external matching networks, and a passive dummy 50 Ω load. The electrical power and the voltage were measured during high power application with an inline power meter and an RMS voltage meter, respectively, and the complex electrical impedance was measured indirectly with a current probe, a 100:1 voltage probe and a digital scope. The results clearly show that the input RMS voltage and the input RMS power change unequally during the application. Hence, specifying only the electrical input power or only the voltage as the input parameter may not be sufficient in some cases for reliable characterization of ultrasound transducers in high power applications.
NASA Astrophysics Data System (ADS)
Ghezzi, Luan; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji; Paegert, Martin
2014-12-01
Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature Teff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ~ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for Teff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An additional test was
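The index-calibration step amounts to regressing a known stellar parameter against measured equivalent widths and then inverting that relation for new stars. A minimal sketch with entirely hypothetical numbers (the real calibrations use 309 FEROS stars and many indices; here one index and six fake stars stand in):

```python
import statistics

# Hypothetical calibration set: equivalent widths (EW, in angstroms) of one
# temperature-sensitive index vs. Teff known from high-resolution analysis.
teff_ref = [5300, 5500, 5700, 5900, 6100, 6300]
ew_ref = [1.90, 1.72, 1.55, 1.41, 1.24, 1.10]  # EW shrinks as Teff rises

def fit_linear(xs, ys):
    """Ordinary least squares y = a + b*x: the simplest form of an
    EW -> parameter calibration."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

a, b = fit_linear(ew_ref, teff_ref)

# Apply the calibration to a new moderate-resolution EW measurement.
ew_new = 1.50
print(round(a + b * ew_new))  # estimated Teff for the new star
```

In practice many indices with different parameter sensitivities are combined, which is what lets the method disentangle Teff, [Fe/H], and log g despite the blending at R ~ 12,000.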
Ghezzi, Luan; Da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; Ge, Jian; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Cargile, Phillip; Pepper, Joshua; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Wang, Ji; and others
2014-12-01
Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature T_eff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, a powerful technique, but one that relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ∼ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from high-resolution spectra and a spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for T_eff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An additional test was
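The index-based calibration described above can be sketched as a simple least-squares problem: equivalent widths of spectral indices from a calibration sample are regressed against known stellar parameters, and the fitted relation is then applied to new stars. The sketch below uses invented coefficients and synthetic data and is not the MARVELS pipeline.

```python
import numpy as np

# Hypothetical sketch: relate spectral-index equivalent widths (EWs) to
# stellar parameters via a linear calibration, in the spirit of the
# index-based method described above. Coefficients and data are invented.

rng = np.random.default_rng(0)

# Calibration sample: n stars, m spectral indices with measured EWs,
# plus known parameters (Teff, [Fe/H], log g) from high-resolution spectra.
n, m = 309, 5
ews = rng.uniform(20.0, 200.0, size=(n, m))           # EWs in mA (fake)
true_coeff = rng.uniform(-5.0, 5.0, size=(m + 1, 3))  # fake linear relation
X = np.hstack([np.ones((n, 1)), ews])                 # design matrix with intercept
params = X @ true_coeff                               # Teff, [Fe/H], log g columns

# Fit the calibration: least-squares solution mapping EWs -> parameters.
coeff, *_ = np.linalg.lstsq(X, params, rcond=None)

# Apply to a "new" star: predict its parameters from its measured EWs.
new_ews = rng.uniform(20.0, 200.0, size=m)
pred = np.hstack([1.0, new_ews]) @ coeff
print(pred.shape)  # three predicted parameters
```

Real calibrations of this kind are usually nonlinear and per-index, but the structure (calibration sample, fitted relation, application to survey targets) is the same.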
Damon, Bruce M; Heemskerk, Anneriet M; Ding, Zhaohua
2012-06-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor magnetic resonance imaging fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image data sets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8 and 15.3 m^-1), signal-to-noise ratio (50, 75, 100 and 150) and voxel geometry (13.8- and 27.0-mm^3 voxel volume with isotropic resolution; 13.5-mm^3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to second-order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ = 15.3 m^-1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation.
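The polynomial-fitting idea lends itself to a short sketch: fit noisy points sampled along a tract with a second-order polynomial and evaluate the curvature analytically from the fit. Below, a planar "tract" is sampled from a circular arc of known curvature; the data, noise level, and 2D simplification are invented (the study works with 3D tracts).

```python
import numpy as np

# Sketch of the curvature-from-polynomial idea described above (not the
# authors' code): fit noisy 2D fiber-tract points with a second-order
# polynomial, then evaluate curvature analytically from the fit.

def curvature_from_poly(x, y, x_eval):
    """Fit y(x) = a*x^2 + b*x + c and return curvature at x_eval.

    For a plane curve y(x), kappa = |y''| / (1 + y'^2)^(3/2).
    """
    a, b, _c = np.polyfit(x, y, 2)
    slope = 2.0 * a * x_eval + b
    return abs(2.0 * a) / (1.0 + slope**2) ** 1.5

# Synthetic tract: an arc of a circle of radius 0.1 m (kappa = 10 m^-1),
# sampled with small "tracking noise".
rng = np.random.default_rng(1)
R = 0.1
theta = np.linspace(-0.3, 0.3, 50)
x = R * np.sin(theta)
y = R * (1.0 - np.cos(theta)) + rng.normal(0.0, 1e-4, theta.size)

kappa = curvature_from_poly(x, y, 0.0)
print(kappa)  # close to the true 10 m^-1
```

The fit averages out point-to-point noise that would otherwise dominate a finite-difference curvature estimate, which is the effect the study exploits.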
Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to determine the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. Using the above results, suitable anchor and probe lines can be identified in the considered systems for studies of a possible variation of the fine-structure constant.
Accurate nuclear masses from a three parameter Kohn-Sham DFT approach (BCPM)
Baldo, M.; Robledo, L. M.; Schuck, P.; Vinas, X.
2012-10-20
Given the promising features of the recently proposed Barcelona-Catania-Paris (BCP) functional [1], the purpose of this work is to improve on it further. It is shown, for instance, that the number of open parameters can be reduced from 4-5 to 2-3, i.e., by practically a factor of two, without deteriorating the results.
Accurate parameters of the oldest known rocky-exoplanet hosting system: Kepler-10 revisited
Fogtmann-Schulz, Alexandra; Hinrup, Brian; Van Eylen, Vincent; Christensen-Dalsgaard, Jørgen; Kjeldsen, Hans; Silva Aguirre, Víctor; Tingley, Brandon
2014-02-01
Since the discovery of Kepler-10, the system has received considerable interest because it contains a small, rocky planet which orbits the star in less than a day. The system's parameters, announced by the Kepler team and subsequently used in further research, were based on only five months of data. We have reanalyzed this system using the full span of 29 months of Kepler photometric data and obtained improved information about its star and planets. A detailed asteroseismic analysis of the extended time series provides a significant improvement on the stellar parameters: not only can we state that Kepler-10 is the oldest known rocky-planet-harboring system at 10.41 ± 1.36 Gyr, but these parameters, combined with improved planetary parameters from new transit fits, give us the radius of Kepler-10b to within just 125 km. A new analysis of the full planetary phase curve leads to new estimates of the planetary temperature and albedo, which remain degenerate in the Kepler band. Our modeling suggests that the flux level during the occultation is slightly lower than at the transit wings, which would imply that the nightside of this planet has a non-negligible temperature.
Antonioli, E G; Baggioni, F G; Grassi, G
1980-01-01
Small-surface-area electrodes have been blamed for sensing defects related to the alterations they induce in endocardial electrograms. Since several factors affect the cardiac signal travelling from electrode to sensing circuit, i.e. electrode surface area, electrode-tissue interface, pacemaker input impedance, and sensing amplifier pass-band, the authors present their studies performed on 252 implanted electrodes of various types. The study was carried out by connecting a variable resistor in parallel to the recorder in order to simulate different pacer input impedances. The results showed a significant reduction in RS amplitude when the recorder was paralleled with resistor values lower than 40 K. Slew rates showed a similar behaviour, since the steep tract of the RS did not change its duration with load, while total QRS duration was reduced. High-speed analysis showed that the RS segment is not linear in about 40% of cases; the main tract was used for calculations. The most significant attenuations and distortions of the endocardial electrogram were observed with the smallest electrodes and the lowest parallel-connected resistances; in these cases the sensing impedance at the electrode-tissue interface appears to be between 3 and 5 K ohms. The results suggest that most of the alleged sensing faults experienced in the past were probably due to small-tip electrodes connected to low-input-impedance generators, or to impending failure situations. The authors conclude that the main issue is not true electrode inefficiency but a wrongly chosen pacemaker-electrode combination, i.e. a small-tip electrode connected to an old generator model. To avoid this evaluation error, it would be helpful for pacemaker manufacturers to specify the input characteristics of their generators. The implanting clinician would then be able to evaluate the true signal arriving at the sensing circuit by connecting in parallel with the recorder input a resistor whose value approximates the input
Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement which is an indicator of live green vegetation in a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions. PMID:25794106
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-01-01
With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS and INS sensors are applied to measure vehicle stability parameters by fusing data from the two sensor systems. A Kalman filter is usually used to fuse data from multiple sensors, although prior model parameters must be known. In this paper, a robust, intelligent, and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment were carried out to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
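The predict/update structure behind such sensor fusion can be shown with a scalar toy Kalman filter: the INS drives the prediction step and GPS-derived measurements correct it. This is only a minimal sketch of the general idea; the paper's two-stage filter and vehicle dynamic model are far richer, and all values below are invented.

```python
import numpy as np

# Minimal sketch of sensor fusion with a Kalman filter (a scalar toy model,
# not the paper's two-stage filter). State: yaw rate. The INS increment
# drives the prediction; GPS-derived measurements correct it.

def kalman_fuse(ins_rates, gps_meas, q=0.01, r=0.25, dt=0.1):
    """Fuse INS-predicted yaw-rate increments with noisy GPS measurements."""
    x, p = 0.0, 1.0          # state estimate and its variance
    out = []
    for u, z in zip(ins_rates, gps_meas):
        # Predict: apply the INS increment, inflate the uncertainty.
        x = x + u * dt
        p = p + q
        # Update: blend in the GPS measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
true_rate = 0.5                      # rad/s, constant for the toy example
n = 200
ins = np.zeros(n)                    # INS reports no rate change
gps = true_rate + rng.normal(0.0, 0.5, n)
est = kalman_fuse(ins, gps)
print(est[-1])  # settles near the true 0.5 rad/s
```

The gain k automatically weights the two sources by their assumed noise levels (q for the process, r for the measurement), which is what makes the filter a natural fusion mechanism.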
Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C
2015-09-01
Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicates that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the
Hansen, D Flemming; Westler, William M; Kunze, Micha B A; Markley, John L; Weinhold, Frank; Led, Jens J
2012-03-14
A natural bond orbital (NBO) analysis of unpaired electron spin density in metalloproteins is presented, which allows a fast and robust calculation of paramagnetic NMR parameters. Approximately 90% of the unpaired electron spin density occupies metal-ligand NBOs, allowing the majority of the density to be modeled by only a few NBOs that reflect the chemical bonding environment. We show that the paramagnetic relaxation rate of protons can be calculated accurately using only the metal-ligand NBOs and that these rates are in good agreement with corresponding rates measured experimentally. This holds, in particular, for protons of ligand residues where the point-dipole approximation breaks down. To describe the paramagnetic relaxation of heavy nuclei, the electron spin density in the local orbitals must also be taken into account. Geometric distance restraints for 15N can be derived from the paramagnetic relaxation enhancement and the Fermi contact shift when local NBOs are included in the analysis. Thus, the NBO approach allows us to include experimental paramagnetic NMR parameters of 15N nuclei as restraints in a structure optimization protocol. We performed a molecular dynamics simulation and structure determination of oxidized rubredoxin using the experimentally obtained paramagnetic NMR parameters of 15N. The corresponding structures obtained are in good agreement with the crystal structure of rubredoxin. Thus, the NBO approach allows an accurate description of the geometric structure and dynamics of metalloproteins when NMR parameters are available for nuclei in the immediate vicinity of the metal site.
Rosen, I G; Luczak, Susan E; Weiss, Jordan
2014-03-15
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.
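The Hodrick-Prescott filter mentioned for episode identification extracts a smooth trend τ from a series y by minimizing Σ(y-τ)² + λΣ(Δ²τ)², which reduces to solving the linear system (I + λD'D)τ = y with D the second-difference operator. The sketch below applies it to a synthetic TAC-like trace (the signal, threshold, and λ are invented, not the paper's values).

```python
import numpy as np

# Sketch of a Hodrick-Prescott trend filter, the tool the abstract mentions
# for identifying drinking episodes in TAC data. The signal here is synthetic.

def hp_filter(y, lam=1600.0):
    """Return the smooth HP trend of series y: solve (I + lam*D'D) tau = y."""
    n = len(y)
    # D is the (n-2) x n second-difference operator.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)

# Synthetic TAC-like trace: two bumps ("episodes") plus sensor noise.
t = np.linspace(0, 10, 300)
rng = np.random.default_rng(3)
signal = np.exp(-((t - 3) ** 2)) + 0.6 * np.exp(-((t - 7) ** 2) / 0.5)
y = signal + rng.normal(0.0, 0.05, t.size)

trend = hp_filter(y, lam=100.0)
# Episodes: contiguous regions where the smooth trend exceeds a threshold.
above = trend > 0.3
episodes = np.flatnonzero(np.diff(above.astype(int)) == 1).size
print(episodes)  # number of threshold up-crossings
```

Thresholding the smooth trend rather than the raw trace avoids counting every noise spike as a separate episode, which is presumably why a trend filter is useful here.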
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-01-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. PMID:25883146
NASA Astrophysics Data System (ADS)
Lamouroux, Julien; Gamache, Robert R.
2013-06-01
A model for the prediction of the vibrational dependence of CO_2 half-widths and line shifts for several broadeners, based on a modification of the model proposed by Gamache and Hartmann, is presented. This model allows the half-widths and line shifts for a ro-vibrational transition to be expressed in terms of the number of vibrational quanta exchanged in the transition raised to a power p and a reference ro-vibrational transition. Complex Robert-Bonamy calculations were made for 24 bands for lower rotational quantum numbers J'' from 0 to 160 for N_2-, O_2-, air-, and self-collisions with CO_2. In the model a Quantum Coordinate is defined by (c_1 Δν_1 + c_2 Δν_2 + c_3 Δν_3)^p, and a linear least-squares fit of the data to the model expression is made. The model allows the determination of the slope and intercept as a function of rotational transition, broadening gas, and temperature. From these fit data, the half-width, line shift, and the temperature dependence of the half-width can be estimated for any ro-vibrational transition, allowing spectroscopic CO_2 databases to have complete information for the line shape parameters. R. R. Gamache, J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transfer 83 (2004), 119. R. R. Gamache, J. Lamouroux, J. Quant. Spectrosc. Radiat. Transfer 117 (2013), 93.
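The fitting step described above is an ordinary linear regression of a line-shape parameter against the Quantum Coordinate. The toy sketch below fits fabricated half-widths against QC = (c_1 Δν_1 + c_2 Δν_2 + c_3 Δν_3)^p for a single rotational transition; every number (weights c_i, power p, band data, slope, intercept) is invented for illustration.

```python
import numpy as np

# Toy sketch of the linear model described above: half-widths for a fixed
# rotational transition fitted against a "Quantum Coordinate"
# QC = (c1*dv1 + c2*dv2 + c3*dv3)**p. All coefficients and data are invented.

c = np.array([0.5, 0.3, 1.0])    # c1, c2, c3 (assumed weights)
p = 1.0                          # assumed power

# Vibrational quanta exchanged (dv1, dv2, dv3) for a few fake bands,
# with half-widths gamma constructed to follow slope*QC + intercept.
dv = np.array([[0, 0, 1], [1, 0, 1], [0, 2, 1], [1, 2, 1], [2, 0, 1]])
qc = (dv @ c) ** p
gamma = 0.070 - 0.002 * qc       # cm^-1 atm^-1, fabricated

slope, intercept = np.polyfit(qc, gamma, 1)
# With slope and intercept in hand, gamma can be estimated for any band:
gamma_new = slope * ((np.array([2, 1, 1]) @ c) ** p) + intercept
print(round(slope, 4), round(intercept, 3))  # -> -0.002 0.07
```

In the actual model the slope and intercept are tabulated per rotational transition, broadener, and temperature; the sketch shows only the algebraic shape of one such fit.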
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-07-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il.
NASA Astrophysics Data System (ADS)
Martínez, M. J.; Marco, F. J.; López, J. A.
2009-02-01
The Hipparcos catalog provides a reference frame at optical wavelengths for the new International Celestial Reference System (ICRS). This new reference system was adopted following the resolution agreed at the 23rd IAU General Assembly held in Kyoto in 1997. Differences in the Hipparcos system of proper motions and the previous materialization of the reference frame, the FK5, are expected to be caused only by the combined effects of the motion of the equinox of the FK5 and the precession of the equator and the ecliptic. Several authors have pointed out an inconsistency between the differences in proper motion of the Hipparcos-FK5 and the correction of the precessional values derived from VLBI and lunar laser ranging (LLR) observations. Most of them have claimed that these discrepancies are due to slightly biased proper motions in the FK5 catalog. The different mathematical models that have been employed to explain these errors have not fully accounted for the discrepancies in the correction of the precessional parameters. Our goal here is to offer an explanation for this fact. We propose the use of independent parametric and nonparametric models. The introduction of a nonparametric model, combined with the inner product in the square integrable functions over the unitary sphere, would give us values which do not depend on the possible interdependencies existing in the data set. The evidence shows that zonal studies are needed. This would lead us to introduce a local nonparametric model. All these models will provide independent corrections to the precessional values, which could then be compared in order to study the reliability in each case. Finally, we obtain values for the precession corrections that are very consistent with those that are currently adopted.
Methods to Register Models and Input/Output Parameters for Integrated Modeling
Droppo, James G.; Whelan, Gene; Tryby, Michael E.; Pelton, Mitchell A.; Taira, Randal Y.; Dorow, Kevin E.
2010-07-10
Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework's functionality to bridge gaps in the user's knowledge of the components being linked. The framework, therefore, needs the capability to assimilate a wide range of model-specific input/output requirements as well as their associated assumptions and constraints. The process of assimilating such disparate components into an integrated modeling framework varies in complexity and difficulty. Several factors influence the relative ease of assimilating components, including, but not limited to, familiarity with the components being assimilated, familiarity with the framework and its tools that support the assimilation process, the level of documentation associated with the components and the framework, and the design structure of the components and framework. This initial effort reviews different approaches for assimilating models and their model-specific input/output requirements: 1) modifying component models to directly communicate with the framework (i.e., through an Application Programming Interface), 2) developing model-specific external wrappers such that no component model modifications are required, 3) using parsing tools to visually map pre-existing input/output files, and 4) describing and linking models as dynamic link libraries. Most of these approaches are illustrated using the widely distributed modeling system called Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES). The review concludes that each approach has its strengths and weaknesses, and that these factors determine which approach works best in a given application.
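Approach 2 above (an external wrapper, with no changes to the component model) can be sketched in a few lines: the wrapper translates between the framework's interface and the model's native input/output format. Everything below is hypothetical; it illustrates the pattern, not FRAMES itself, and JSON merely stands in for whatever format a legacy model actually reads and writes.

```python
import json

# Hypothetical sketch of approach 2 above: an external wrapper that adapts a
# legacy model's file-based input/output to a framework-defined interface,
# without modifying the model itself. All names are invented.

class FrameworkComponent:
    """Interface the framework expects: dict in, dict out."""
    def run(self, inputs: dict) -> dict:
        raise NotImplementedError

class LegacyModelWrapper(FrameworkComponent):
    """Translate framework dicts to and from the legacy model's own format."""
    def __init__(self, model_fn):
        self.model_fn = model_fn  # stands in for invoking the external model

    def run(self, inputs: dict) -> dict:
        # Serialize to the model-specific input format (JSON as a stand-in).
        model_input = json.dumps(inputs)
        # Invoke the model and parse its output back into framework terms.
        model_output = self.model_fn(model_input)
        return json.loads(model_output)

# A fake "legacy model" that doubles a concentration value.
def legacy_model(text: str) -> str:
    data = json.loads(text)
    return json.dumps({"concentration_out": 2 * data["concentration_in"]})

component = LegacyModelWrapper(legacy_model)
print(component.run({"concentration_in": 3.5}))  # -> {'concentration_out': 7.0}
```

The design choice mirrors the review's point: the wrapper concentrates all format knowledge in one place, so the legacy model needs no modification, at the cost of writing and maintaining one adapter per component.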
NASA Astrophysics Data System (ADS)
Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.
2016-03-01
An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer with a fluorescence microscope. High-frequency ultrasound with a center frequency over 150 MHz can focus the acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell-level imaging was performed to investigate the behavior of a targeted single cell after acoustic transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied peak-to-peak voltages and pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that can induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the intensity of PI fluorescence increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver the pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.
NASA Astrophysics Data System (ADS)
Filioglou, M.; Balis, D.; Siomos, N.; Poupkou, A.; Dimopoulos, S.; Chaikovsky, A.
2016-06-01
A targeted sensitivity study of the LIRIC algorithm was considered necessary to estimate the uncertainty introduced into the volume concentration profiles by the arbitrary selection of user-defined input parameters. For this purpose three different tests were performed using Thessaloniki's lidar data: a test of the selection of the regularization parameters, an upper-limit test, and a lower-limit test. The sensitivity tests were applied to two cases with different predominant aerosol types, a dust episode and a typical urban case.
Breeding, R.J.; Harper, F.T.; Brown, T.D.; Gregory, J.J.; Payne, A.C.; Gorham, E.D.; Murfin, W.; Amos, C.N.
1992-03-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom, and Grand Gulf. The emphasis in this risk analysis was not on determining a so-called point estimate of risk. Rather, it was to determine the distribution of risk and to discover the uncertainties that account for the breadth of this distribution. Off-site risk from initiating events both internal and external to the power station was assessed. Much of the important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Structural Response Panel.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed.
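The two-parameter Weibull saccharification model described above can be sketched and fitted as follows. This is an illustrative reconstruction under the usual Weibull form y(t) = 1 − exp(−(t/λ)^n); the time points and parameter values are synthetic stand-ins, not the paper's 96 datasets. Linearization via ln(−ln(1−y)) = n·ln t − n·ln λ turns the fit into a simple linear regression.

```python
import numpy as np

# Synthetic saccharification yields from an assumed Weibull curve:
#   y(t) = 1 - exp(-(t/lam)**n)
t = np.array([2.0, 6.0, 12.0, 24.0, 48.0, 72.0])   # hours (illustrative)
lam_true, n_true = 20.0, 0.8                        # characteristic time, shape
y = 1.0 - np.exp(-(t / lam_true) ** n_true)

# Linearize: ln(-ln(1-y)) = n*ln(t) - n*ln(lam), then fit a line.
X = np.log(t)
Y = np.log(-np.log(1.0 - y))
n_fit, intercept = np.polyfit(X, Y, 1)              # slope = n
lam_fit = np.exp(-intercept / n_fit)                # recover lambda
```

The fitted λ then serves as the single summary statistic of overall saccharification performance, as the abstract suggests: a smaller λ means the system reaches a given fractional conversion sooner.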
Ajami, N K; Duan, Q; Sorooshian, S
2006-05-05
This paper presents a new technique, the Integrated Bayesian Uncertainty Estimator (IBUNE), to account explicitly for the major uncertainties of hydrologic rainfall-runoff predictions. The uncertainties from the input (forcing) data, mainly the precipitation observations, and from the model parameters are reduced through a Markov Chain Monte Carlo (MCMC) scheme named the Shuffled Complex Evolution Metropolis (SCEM) algorithm, which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model outputs. A series of case studies using three rainfall-runoff models to predict streamflow in the Leaf River basin, Mississippi, is used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing errors or model structural uncertainty will lead to unrealistic model simulations, with associated uncertainty bounds that do not consistently capture and represent the real-world behavior of the watershed.
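The BMA combination step mentioned above can be sketched in a few lines. This is a generic illustration of Bayesian Model Averaging, not IBUNE itself: the three "model" prediction vectors, the weights, and the per-model error variances are made-up placeholders (in practice the weights and variances are estimated, e.g. by expectation-maximization).

```python
import numpy as np

# Ensemble predictions from K=3 rainfall-runoff models at 3 time steps.
preds = np.array([[10.0, 12.0, 11.0],    # model 1 streamflow (illustrative)
                  [ 9.0, 13.0, 10.0],    # model 2
                  [11.0, 11.5, 12.0]])   # model 3
w = np.array([0.5, 0.2, 0.3])            # BMA weights (sum to 1)
sigma2 = np.array([1.0, 2.0, 1.5])       # per-model error variances

# BMA predictive mean: weight-averaged prediction at each time step.
bma_mean = w @ preds

# BMA predictive variance: between-model spread + within-model variance.
bma_var = w @ (preds - bma_mean) ** 2 + w @ sigma2
```

The variance decomposition is the key point: even if each model were individually well calibrated, ignoring the between-model term (structural uncertainty) understates the total predictive uncertainty, which is the failure mode the abstract warns about.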
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
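The contrast between interval bounds and probabilistic analysis can be made concrete with a brute-force Monte Carlo sketch (the reference approach against which the paper's more efficient hybrid reliability method would be compared). The plant, the uncertain parameter, and its distribution below are illustrative assumptions: a second-order system with a normally distributed damping ratio, propagated through the closed-form percent-overshoot formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertain damping ratio: a distribution, not just an interval [lo, hi].
zeta = rng.normal(loc=0.5, scale=0.05, size=100_000)
zeta = np.clip(zeta, 0.05, 0.99)          # keep the system underdamped

# Percent overshoot of a second-order step response:
#   OS = 100 * exp(-pi*zeta / sqrt(1 - zeta^2))
overshoot = 100.0 * np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta ** 2))

mean_os = overshoot.mean()                # most-likely performance
p95_os = np.quantile(overshoot, 0.95)     # probable worst case, with likelihood
```

An interval analysis would report only the extreme overshoot at the interval endpoints; the probabilistic view adds how likely each level of performance is, which is the extra information the abstract claims for the hybrid method.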
NASA Astrophysics Data System (ADS)
Subramanian, Swetha; Mast, T. Douglas
2015-09-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
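The inverse-solver idea above can be sketched with a scalar unscented-Kalman-filter update. Everything here is a toy stand-in: the "state" is a single unknown tissue parameter (a placeholder thermal conductivity k), and the forward model h(k) is an invented one-line surrogate, not the authors' finite-element RFA model. The sketch shows the mechanics of the unscented transform: sigma points through the nonlinear forward model, then a Kalman gain built from the resulting covariances.

```python
import numpy as np

def h(k):
    """Toy forward model: simulated ablation width grows with conductivity."""
    return 4.0 * np.sqrt(k)

x, P, R = 0.40, 0.04, 1e-4        # prior mean/variance of k; measurement noise
z_meas = h(0.52)                  # "measured" ablation width (true k = 0.52)

kappa = 2.0                       # sigma-point spread parameter
for _ in range(20):               # iterate the measurement update to converge
    s = np.sqrt((1 + kappa) * P)
    sig = np.array([x, x + s, x - s])               # sigma points
    w = np.array([kappa, 0.5, 0.5]) / (1 + kappa)   # sigma-point weights
    zs = h(sig)                                     # propagate through model
    z_hat = w @ zs                                  # predicted measurement
    Pzz = w @ (zs - z_hat) ** 2 + R                 # innovation covariance
    Pxz = w @ ((sig - x) * (zs - z_hat))            # cross covariance
    K = Pxz / Pzz                                   # Kalman gain
    x = x + K * (z_meas - z_hat)                    # updated parameter
    P = P - K * Pzz * K                             # updated variance
```

The recovered parameter is then plugged back into the forward model, mirroring how the paper feeds UKF-optimized tissue properties back into the finite-element simulation.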
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
Identification of the battery state-of-health parameter from input-output pairs of time series data
NASA Astrophysics Data System (ADS)
Li, Yue; Chattopadhyay, Pritthi; Ray, Asok; Rahn, Christopher D.
2015-07-01
As a paradigm of dynamic data-driven application systems (DDDAS), this paper addresses real-time identification of the State of Health (SOH) parameter over the life span of a battery that is subjected to approximately repeated cycles of discharging/recharging current. In the proposed method, finite-length data of interest are selected via wavelet-based segmentation from the time series of synchronized input-output (i.e., current-voltage) pairs in the respective two-dimensional space. Then, symbol strings are generated by partitioning the selected segments of the input-output time series to construct a special class of probabilistic finite state automata (PFSA), called D-Markov machines. Pertinent features of the statistics of battery dynamics are extracted as the state emission matrices of these PFSA. This real-time method of SOH parameter identification relies on the divergence between extracted features. The underlying concept has been validated on (approximately periodic) experimental data, generated from a commercial-scale lead-acid battery. It is demonstrated by real-time analysis of the acquired current-voltage data on in-situ computational platforms that the proposed method is capable of distinguishing battery current-voltage dynamics at different aging stages, as an alternative to computation-intensive and electrochemistry-dependent analysis via physics-based modeling.
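The symbolization step behind a D-Markov machine can be sketched as follows. This is a generic depth-1 illustration under assumed choices (equal-frequency partitioning, a four-symbol alphabet, a synthetic quasi-periodic signal), not the paper's wavelet-segmented current-voltage pipeline: the time series is quantized into symbols, and the symbol-pair transition statistics form the extracted feature matrix.

```python
import numpy as np

def symbolize(x, n_symbols=4):
    """Quantize a signal into symbols via equal-frequency partitioning."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)            # symbols in {0, ..., n_symbols-1}

def transition_matrix(symbols, n_symbols=4):
    """Depth-1 (D = 1) Markov transition statistics of the symbol string."""
    M = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        M[a, b] += 1.0
    M /= np.maximum(M.sum(axis=1, keepdims=True), 1.0)  # row-stochastic
    return M

# Synthetic quasi-periodic "discharge/recharge" signal with mild noise.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 40 * np.pi, 5000)) + 0.05 * rng.standard_normal(5000)

sym = symbolize(signal)
Pmat = transition_matrix(sym)
```

In the SOH application, matrices like `Pmat` extracted at different aging stages are compared via a divergence measure; growth of that divergence over the battery's life is the health indicator.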
NASA Astrophysics Data System (ADS)
Verbrugge, S.; Colle, D.; Pickavet, M.; Demeester, P.; Pasqualini, S.; Iselt, A.; Kirstädter, A.; Hülsermann, R.; Westphal, F.-J.; Jäger, M.
2006-06-01
The availability requirements for today's networks are very high. Higher availability often comes with a higher cost. We describe several steps required for estimating the costs of realistic network scenarios. Capital expenditures (CapEx) and operational expenditures (OpEx) are classified. An activity-based approach is used to quantify the cost of the event-driven operational processes such as repair and service provisioning. We discuss activity duration and availability parameters as required input data, which are necessary for calculating the processes' costs for realistic network scenarios. The relevant availability measures for an IP-over-Optical network are described using a triplet representation with optimistic, nominal, and conservative values. The model is applied to a reference German network scenario.
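The activity-based costing of an event-driven repair process, together with the triplet representation of input parameters, can be sketched as follows. All numbers are illustrative placeholders, not the German reference-network data: failure frequency and repair duration each carry an optimistic, nominal, and conservative value, and the cost and unavailability follow directly.

```python
# Triplet (optimistic, nominal, conservative) input parameters, illustrative.
failures_per_year = {"optimistic": 2.0, "nominal": 5.0, "conservative": 9.0}
repair_hours = {"optimistic": 1.5, "nominal": 3.0, "conservative": 6.0}
technician_rate = 80.0   # cost units per technician-hour (assumed)

# Activity-based OpEx of the repair process: events/yr * duration * rate.
opex_repair = {
    case: failures_per_year[case] * repair_hours[case] * technician_rate
    for case in ("optimistic", "nominal", "conservative")
}

# Unavailability contributed by repair downtime, as a fraction of a year.
HOURS_PER_YEAR = 8760.0
unavailability = {
    case: failures_per_year[case] * repair_hours[case] / HOURS_PER_YEAR
    for case in failures_per_year
}
```

Evaluating every process at all three triplet values brackets both the expenditure estimate and the availability figure, which is how the triplet representation propagates input uncertainty into the network cost model.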
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process, contributing significantly to the uncertainty of hydrological predictions, is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares or SLS) introduces noise into the estimation of the parameters. The main sources of this noise are the input errors and the hydrological model structural deficiencies. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes: the calibrated hydrological model works well, but not for the right reasons. Besides, an unsuitable error model yields a non-reliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As the hydrological model, a conceptual distributed model called TETIS has been used, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the
NASA Astrophysics Data System (ADS)
Rezaei, Meisam; Seuntjens, Piet; Shahidi, Reihaneh; Joris, Ingeborg; Boënne, Wesley; Cornelis, Wim
2016-04-01
Soil hydraulic parameters, which can be derived from in situ and/or laboratory experiments, are key input parameters for modeling water flow in the vadose zone. In this study, we measured soil hydraulic properties with typical laboratory measurements and field tension infiltration experiments using Wooding's analytical solution and inverse optimization along the vertical direction within two typical podzol profiles with sand texture in a potato field. The objective was to identify proper sets of hydraulic parameters and to evaluate their relevance on hydrological model performance for irrigation management purposes. Tension disc infiltration experiments were carried out at five different depths for both profiles at consecutive negative pressure heads of 12, 6, 3 and 0.1 cm. At the same locations and depths undisturbed samples were taken to determine the water retention curve with hanging water column and pressure extractors and lab saturated hydraulic conductivity with the constant head method. Both approaches allowed us to determine the Mualem-van Genuchten (MVG) hydraulic parameters (residual water content θr, saturated water content θs, shape parameters α and n, and field or lab saturated hydraulic conductivity Kfs and Kls). Results demonstrated horizontal differences and vertical variability of hydraulic properties. Inverse optimization resulted in excellent matches between observed and fitted infiltration rates in combination with final water content at the end of the experiment, θf, using Hydrus 2D/3D. It also resulted in close correspondence of Kfs with values from Logsdon and Jaynes' (1993) solution of Wooding's equation. The MVG parameters Kfs and α estimated from the inverse solution (θr set to zero) were relatively similar to values from Wooding's solution, which were used as initial values, and the estimated θs corresponded to the (effective) field saturated water content θf. We found the Gardner parameter αG to be related to the optimized van
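The Mualem-van Genuchten functions named above can be evaluated directly once the parameters are fitted. The sketch below uses illustrative values for a sandy soil, not the measured podzol-profile parameters: θ(h) from the van Genuchten retention curve and K(h) from Mualem's conductivity model.

```python
import numpy as np

def mvg_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten water content at pressure head h (unsaturated: h < 0)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def mvg_K(h, Ks, alpha, n, L=0.5):
    """Mualem unsaturated hydraulic conductivity."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
    return Ks * Se ** L * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative sandy-soil parameters (not the paper's fitted values).
theta = mvg_theta(h=-100.0, theta_r=0.05, theta_s=0.40, alpha=0.03, n=2.0)
K = mvg_K(h=-100.0, Ks=50.0, alpha=0.03, n=2.0)      # e.g. cm/day
```

These two functions are exactly what a vadose-zone model such as Hydrus needs as input, which is why the MVG parameter set is the natural target of both the laboratory fits and the inverse optimization.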
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficient for analysis of waste package and drip shield damage due to vibratory ground motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the
Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong; Sacks, William J.; Lei, Huimin; Leung, Lai-Yung R.
2013-09-16
Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.
NASA Astrophysics Data System (ADS)
Kazemi, Mohsen; Aghakhani, Masood; Haghshenas-Jazi, Ehsan; Behmaneshfar, Ali
2016-02-01
The aim of this paper is to optimize the depth of penetration with regard to the effect of MgO nanoparticles and welding input parameters. For this purpose, response surface methodology (RSM) with central composite rotatable design (CCRD) was used. The welding current, arc voltage, nozzle-to-plate distance, welding speed, and thickness of MgO nanoparticles were determined as the factors, and depth of penetration was considered as the response. A quadratic polynomial model was used for determining the relationship between the response and factors. A reduced model was obtained from the data, for which the values of R², R²(pred), and R²(adj) were 92.05, 69.05, and 86.31 pct, respectively. Thus, this model was suitable, and it was used to determine the optimum levels of factors. The results show that the welding current, arc voltage, and nozzle-to-plate distance factors should be adjusted at a high level, and the welding speed and thickness of MgO nanoparticles factors should be adjusted at a low level.
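The RSM fitting step can be sketched with ordinary least squares on a quadratic polynomial in coded factor levels. This is a generic two-factor illustration with synthetic data standing in for current (x1) and voltage (x2); the paper's actual model has five factors and experimentally measured penetration depths.

```python
import numpy as np

# Synthetic responses from an assumed quadratic surface on coded levels:
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 40)
x2 = rng.uniform(-1, 1, 40)
y = 4.0 + 1.2 * x1 + 0.5 * x2 - 0.8 * x1**2 + 0.3 * x1 * x2

# Design matrix for the full quadratic model, fitted by least squares.
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination R^2 of the fitted surface.
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
```

With real experimental data, insignificant terms would be dropped to obtain a reduced model (as the abstract reports), and the fitted polynomial would then be optimized over the coded factor region to locate the best factor levels.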
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus, gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields is small, in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
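The reported resolution constraints translate directly into a timestep and mesh size given a collision environment. The sketch below uses an illustrative neutral density, cross section, and electron speed (not the paper's exact 1000 Td conditions): the mean free path is λ = 1/(nσ), the mean collision time is τ = λ/v, and the constraints follow as τ/100 and λ/25.

```python
# Illustrative plasma/neutral parameters (assumptions, not the paper's values).
n_neutral = 2.5e24        # neutral density [m^-3]
sigma = 1.0e-19           # electron-neutral collision cross section [m^2]
v_e = 1.0e6               # characteristic electron speed [m/s]

# Mean free path and mean time between collisions.
mfp = 1.0 / (n_neutral * sigma)   # lambda = 1/(n*sigma)  [m]
tau = mfp / v_e                   # tau = lambda/v        [s]

# Resolution constraints reported in the abstract.
dx_max = mfp / 25.0               # mesh size must resolve sub-mfp gradients
dt_max = tau / 100.0              # timestep ~ 1/100 of collision time
```

Because both constraints scale inversely with neutral density, high-pressure breakdown simulations become rapidly more expensive, which is the practical consequence of the finding.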
NASA Astrophysics Data System (ADS)
Brawand, Nicholas; Vörös, Márton; Govoni, Marco; Galli, Giulia
The accurate prediction of optoelectronic properties of molecules and solids is a persisting challenge for current density functional theory (DFT) based methods. We propose a hybrid functional where the mixing fraction of exact and local exchange is determined by a non-empirical, system dependent function. This functional yields ionization potentials, fundamental and optical gaps of many diverse systems in excellent agreement with experiments, including organic and inorganic molecules and nanocrystals. We further demonstrate that the newly defined hybrid functional gives the correct alignment between the energy levels of the exemplary TTF-TCNQ donor-acceptor system. DOE-BES: DE-FG02-06ER46262.
Jiang, Bin; Guo, Hua
2016-08-01
In search of an accurate description of the dissociative chemisorption of water on the Ni(111) surface, we report a new nine-dimensional potential energy surface (PES) based on a large number of density functional theory points using the RPBE functional. Seven-dimensional quantum dynamical calculations have been carried out on the RPBE PES, followed by site averaging and lattice effect corrections, yielding sticking probabilities that are compared with both the previous theoretical results based on a PW91 PES and experiment. It is shown that the RPBE functional increases the reaction barrier, but has otherwise a minor impact on the PES topography. Better agreement with experimental results is obtained with the new PES, but the agreement is still not quantitative. Possible sources of the remaining discrepancies are discussed.
NASA Astrophysics Data System (ADS)
Marin, Andrew T.; Musselman, Kevin P.; MacManus-Driscoll, Judith L.
2013-04-01
This work shows that when a Schottky barrier is present in a photovoltaic device, such as in a device with an ITO/ZnO contact, equivalent circuit analysis must be performed with admittance spectroscopy to accurately determine the pn junction interface recombination parameters (i.e., capture cross section and density of trap states). Without equivalent circuit analysis, a Schottky barrier can produce an error of ~4 orders of magnitude in the capture cross section and ~50% error in the measured density of trap states. Using a solution-processed ZnO/Cu2O photovoltaic test system, we apply our analysis to clearly separate the contributions of interface states at the pn junction from the Schottky barrier at the ITO/ZnO contact so that the interface state recombination parameters can be accurately characterized. This work is widely applicable to the multitude of photovoltaic devices that use ZnO adjacent to ITO.
Kostylev, Maxim; Wilson, David
2014-01-01
Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass degrading enzymes from a cellulolytic bacterium Thermobifida fusca we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates and where a steady state is not achievable. PMID:23837567
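The fractal-like kinetics underlying the two-parameter model above can be sketched in closed form. Under Kopelman's formulation, the apparent rate coefficient decays as k(t) = k0·t^(−h), so a pseudo-first-order digestion dS/dt = −k(t)·S integrates to S(t) = S0·exp(−k0·t^(1−h)/(1−h)). The parameter values below are illustrative, not fitted T. fusca or T. reesei data; k0 plays the role of a total activity coefficient and h reflects substrate recalcitrance (h = 0 recovers classical first-order kinetics).

```python
import numpy as np

k0, h, S0 = 0.30, 0.40, 100.0   # illustrative: activity, fractal exponent, substrate

def remaining(t):
    """Undigested substrate under fractal-like first-order kinetics:
    dS/dt = -k0 * t**(-h) * S  =>  S(t) = S0 * exp(-k0 * t**(1-h) / (1-h))."""
    return S0 * np.exp(-k0 * t ** (1.0 - h) / (1.0 - h))

t = np.linspace(1.0, 48.0, 200)          # hours
digested = S0 - remaining(t)             # product formed over time
```

The slowing of the apparent rate with time is exactly the behavior that defeats classical steady-state Michaelis-Menten fits on heterogeneous cellulose, and the two parameters (k0, h) summarize enzyme activity and substrate recalcitrance separately, as the abstract argues.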
NASA Astrophysics Data System (ADS)
Tao, Liang; Xinzhang, Jia; Junfeng, Chen
2009-11-01
Techniques for constructing metamodels of device parameters at BSIM3v3-level accuracy are presented to improve knowledge-based circuit sizing optimization. Based on the analysis of the prediction error of analytical performance expressions, operating point driven (OPD) metamodels of MOSFETs are introduced to capture the circuit's characteristics precisely. In the algorithm of metamodel construction, radial basis functions are adopted to interpolate the scattered multivariate data obtained from a well tailored data sampling scheme designed for MOSFETs. The OPD metamodels can be used to automatically bias the circuit at a specific DC operating point. Analytical performance expressions composed of the OPD metamodels show obvious improvement for most small-signal performances compared with simulation-based models. Both operating-point variables and transistor dimensions can be optimized in our nested-loop optimization formulation to maximize design flexibility. The method is successfully applied to a low-voltage low-power amplifier.
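The radial-basis-function interpolation step can be sketched with SciPy's general-purpose RBF interpolator. The sampled function below is a toy smooth response surface standing in for BSIM3v3-level device simulations, and the two input dimensions are placeholders for operating-point variables; the paper's actual sampling scheme and basis choice may differ.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

# Scattered sample points in a 2-D coded design space (illustrative).
X = rng.uniform(-1, 1, size=(200, 2))

# "Simulated" device responses at the sample points (toy smooth surface).
y = np.sin(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# Build the RBF metamodel from the scattered multivariate data.
meta = RBFInterpolator(X, y, kernel="thin_plate_spline")

# Cheap metamodel evaluation at a new operating point.
x_new = np.array([[0.2, -0.3]])
y_pred = meta(x_new)[0]
y_true = np.sin(0.4) + 0.5 * 0.09
```

Once built, the metamodel replaces expensive circuit simulations inside the optimization loop, which is what makes the nested optimization over operating points and transistor dimensions tractable.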
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation using the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that different parts of a model dataset can be entered using the most suitable program.
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution is proposed to predict various propagation parameters of graded-index fibers accurately, with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U, which is usually uncertain, noisy, or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that requires no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. The results of the proposed solution match the numerical results closely over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
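The Nelder-Mead simplex search named above can be written compactly. This is a minimal textbook version (reflection, expansion, contraction, shrink with coefficients 1, 2, 0.5, 0.5) applied to an invented quadratic objective standing in for the modal-field merit function; production implementations add adaptive coefficients and restarts.

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    """Minimal derivative-free Nelder-Mead downhill-simplex minimizer."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                  # initial simplex: x0 plus axis steps
        p = list(x0); p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        c = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]  # centroid
        xr = [c[i] + (c[i] - worst[i]) for i in range(n)]            # reflect
        if f(xr) < f(best):
            xe = [c[i] + 2.0 * (c[i] - worst[i]) for i in range(n)]  # expand
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = [c[i] + 0.5 * (worst[i] - c[i]) for i in range(n)]  # contract
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                        # shrink toward the best vertex
                simplex = [best] + [[(p[i] + best[i]) / 2.0 for i in range(n)]
                                    for p in simplex[1:]]
    return min(simplex, key=f)

# toy objective with its minimum at (1, 2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
opt = nelder_mead(f, [0.0, 0.0])
print(opt)
```

Because only function values are compared, the method tolerates the noisy or discontinuous objectives the abstract mentions, at the cost of slower convergence than gradient methods on smooth problems.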
NASA Astrophysics Data System (ADS)
Ryu, Jaiyoung; Hu, Xiao; Shadden, Shawn C.
2014-11-01
The cerebral circulation is unique in its ability to maintain blood flow to the brain under widely varying physiologic conditions. Incorporating this autoregulatory response is critical to cerebral blood flow modeling, as well as to investigations of pathological conditions. We discuss a one-dimensional nonlinear model of blood flow in the cerebral arteries that includes coupling of autoregulatory lumped parameter networks. The model is tested by reproducing a common clinical test of autoregulatory function, the carotid artery compression test. The change in flow velocity at the middle cerebral artery (MCA) during carotid compression and release demonstrated strong agreement with published measurements. The model is then used to investigate vasospasm of the MCA, a common clinical concern following subarachnoid hemorrhage. Vasospasm was modeled by prescribing vessel area reduction in the middle portion of the MCA. Our model showed similar increases in velocity for moderate vasospasm; however, for severe vasospasm (~90% area reduction), the blood flow velocity decreased due to blood flow rerouting. This demonstrates a potentially important phenomenon that could lead to false-negative decisions on clinical vasospasm if not properly anticipated.
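The rerouting effect can be reproduced qualitatively with a three-resistor sketch: a proximal resistance feeds the MCA (Poiseuille scaling, R ∝ 1/A²) in parallel with a collateral path. All resistances, the driving pressure, and the lumped topology are illustrative; the paper uses a full one-dimensional nonlinear model with autoregulatory networks.

```python
def mca_velocity(area_frac, p=100.0, r_prox=5.0, r_coll=20.0, r_mca0=1.0):
    """Mean MCA velocity in a toy resistive network (arbitrary units).
    area_frac is the remaining fraction of the healthy lumen area."""
    r_mca = r_mca0 / area_frac ** 2               # Poiseuille: R ~ 1/A**2
    r_par = r_mca * r_coll / (r_mca + r_coll)     # MCA parallel to collateral
    q_total = p / (r_prox + r_par)
    q_mca = q_total * r_coll / (r_mca + r_coll)   # current-divider split
    return q_mca / area_frac                      # velocity = flow / area

v_base = mca_velocity(1.0)   # healthy vessel
v_mod = mca_velocity(0.5)    # moderate vasospasm (50% area reduction)
v_sev = mca_velocity(0.1)    # severe vasospasm (90% area reduction)
print(v_base, v_mod, v_sev)
```

Velocity rises for the moderate narrowing but falls below baseline for the severe one, because flow reroutes through the collateral path faster than the shrinking area can concentrate it; this is the false-negative mechanism the abstract warns about.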
Coffield, T; Patricia Lee, P
2007-01-31
The purpose of this report is to update parameters used in human health exposure calculations and bioaccumulation transfer factors used at SRS for performance assessment modeling. The update draws on more recently issued information, validates information currently in use, and corrects minor inconsistencies between modeling efforts performed in the heavily industrialized central site area known as the General Separations Area (GSA) and contiguous SRS areas. The SRS parameters were compared with those of a number of other DOE facilities and with generic national and global references to establish the relevance of the selected parameters and to verify the regional differences of the southeastern USA. The parameters were specifically chosen to be expected values, with an identified range, rather than overly conservative values for estimating an annual dose to the maximally exposed individual (MEI). The end use is to establish a standardized, up-to-date source for these parameters and to maintain it by reviewing future national references to evaluate the need for changes as new information is released. These reviews are to be added to this document by revision.
Faulkner, William B; Shaw, Bryan W; Grosch, Tom
2008-10-01
As of December 2006, the American Meteorological Society/U.S. Environmental Protection Agency (EPA) Regulatory Model with Plume Rise Model Enhancements (AERMOD-PRIME; hereafter AERMOD) replaced the Industrial Source Complex Short Term Version 3 (ISCST3) as the EPA-preferred regulatory model. The change from ISCST3 to AERMOD will affect Prevention of Significant Deterioration (PSD) increment consumption as well as permit compliance in states where regulatory agencies limit property line concentrations using modeling analysis. Because of differences in model formulation and the treatment of terrain features, one cannot predict a priori whether ISCST3 or AERMOD will predict higher or lower pollutant concentrations downwind of a source. The objectives of this paper were to determine the sensitivity of AERMOD to various inputs and compare the highest downwind concentrations from a ground-level area source (GLAS) predicted by AERMOD to those predicted by ISCST3. Concentrations predicted using ISCST3 were sensitive to changes in wind speed, temperature, solar radiation (as it affects stability class), and mixing heights below 160 m. Surface roughness also affected downwind concentrations predicted by ISCST3. AERMOD was sensitive to changes in albedo, surface roughness, wind speed, temperature, and cloud cover. Bowen ratio did not affect the results from AERMOD. These results demonstrate AERMOD's sensitivity to small changes in wind speed and surface roughness. When AERMOD is used to determine property line concentrations, small changes in these variables may affect the distance within which concentration limits are exceeded by several hundred meters. PMID:18939775
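The wind-speed sensitivity discussed above follows directly from the classic Gaussian plume formula, in which the ground-level centerline concentration scales as 1/u. The linear dispersion-coefficient growth below is a crude stand-in for the stability-class curves the regulatory models use; emission rate, distance, and coefficients are invented.

```python
import math

def plume_centerline(q, u, x, stability=(0.08, 0.06)):
    """Ground-level centerline concentration from a ground-level point
    source: C = Q / (pi * u * sigma_y * sigma_z), with sigma_y and
    sigma_z growing linearly with downwind distance x (a simplification
    of the stability-class dispersion curves)."""
    a, b = stability
    sigma_y, sigma_z = a * x, b * x
    return q / (math.pi * u * sigma_y * sigma_z)

# halving the wind speed doubles the predicted concentration
c_fast = plume_centerline(q=10.0, u=4.0, x=500.0)
c_slow = plume_centerline(q=10.0, u=2.0, x=500.0)
print(c_slow / c_fast)  # 2.0
```

This inverse dependence on wind speed is why small changes in the meteorological inputs can move the predicted exceedance distance by hundreds of meters.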
Hermosilla, Laura; Prampolini, Giacomo; Calle, Paloma; García de la Vega, José Manuel; Brancato, Giuseppe; Barone, Vincenzo
2015-01-01
A computational strategy that combines both time-dependent and time-independent approaches is exploited to accurately model molecular dynamics and solvent effects on the isotropic hyperfine coupling constants of the DMPO-H nitroxide. Our recent general force field for nitroxides, derived from AMBER ff99SB, is further extended to systems involving hydrogen atoms in β-positions with respect to NO. The resulting force field has been employed in a series of classical molecular dynamics simulations, comparing the computed EPR parameters from selected molecular configurations to the corresponding experimental data in different solvents. The effect of vibrational averaging on the spectroscopic parameters is also taken into account by second-order vibrational perturbation theory involving semi-diagonal third energy derivatives together with first and second property derivatives. PMID:26584116
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod.
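The mean-shift step at the heart of the CF estimation can be sketched in one dimension: the estimate is repeatedly moved to the kernel-weighted mean of the data, converging to a local mode. This is the generic algorithm only; RACE's two-direction-combination strategy and the SinMod spectral details are not reproduced, and the sample data are invented.

```python
import math

def mean_shift_mode(data, x0, bandwidth=1.0, iters=50):
    """1-D mean shift with a Gaussian kernel: iterate the weighted mean
    until it settles on a local mode of the sample density."""
    x = x0
    for _ in range(iters):
        w = [math.exp(-((d - x) / bandwidth) ** 2 / 2.0) for d in data]
        x = sum(wi * di for wi, di in zip(w, data)) / sum(w)
    return x

# cluster of "spectral energy" locations around 5.0 plus two outliers
data = [4.8, 5.0, 5.1, 5.2, 4.9, 5.0, 9.5, 0.5]
mode = mean_shift_mode(data, x0=4.0)
print(round(mode, 1))
```

The outliers contribute negligible kernel weight, which is what makes mean shift a robust mode (here, center-frequency) estimator compared with a plain average.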
NASA Astrophysics Data System (ADS)
Ungureanu, Constantin; Rayavarapu, Raja Gopal; Manohar, Srirang; van Leeuwen, Ton G.
2009-05-01
Gold nanorods have interesting optical properties due to surface plasmon resonance effects. A variety of biomedical applications of these particles have been envisaged, and feasibilities have been demonstrated in imaging, sensing, and therapy based on the interactions of light with these particles. In order to correctly interpret experimental data and tailor the nanorods and their environments for optimal use in these applications, simulations of the optical properties of the particles under various conditions are essential. Of the various numerical methods available, the discrete dipole approximation (DDA) approach implemented in the publicly available DDSCAT code is a powerful method that has proved popular for studying gold nanorods. However, there is as yet no universal agreement on the shape used to represent the nanorods or on the dielectric function of gold required for the simulations. We systematically study the influence of these parameters on simulated results. We find large variations in the position of plasmon resonance peaks, their amplitudes, and the shapes of the spectra depending on the choice of the parameters. We discuss these in the light of experimental optical extinction spectra of gold nanorods synthesized in our laboratory. We show that much care and prudence should be applied before DDA results are used to interpret experimental data and to help characterize synthesized nanoparticles.
NASA Technical Reports Server (NTRS)
Boothroyd, Arnold I.; Sackmann, I.-Juliana
2001-01-01
Helioseismic frequency observations provide an extremely accurate window into the solar interior; frequencies from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) spacecraft enable the adiabatic sound speed and adiabatic index to be inferred with an accuracy of a few parts in 10(exp 4) and the density with an accuracy of a few parts in 10(exp 3). This has become a serious challenge to theoretical models of the Sun. Therefore, we have undertaken a self-consistent, systematic study of the sources of uncertainties in the standard solar models. We found that the largest effect on the interior structure arises from the observational uncertainties in the photospheric abundances of the elements, which affect the sound speed profile at the level of 3 parts in 10(exp 3). The estimated 4% uncertainty in the OPAL opacities could lead to effects of 1 part in 10(exp 3); the approximately 5% uncertainty in the basic pp nuclear reaction rate would have a similar effect, as would uncertainties of approximately 15% in the diffusion constants for the gravitational settling of helium. The approximately 50% uncertainties in diffusion constants for the heavier elements would have nearly as large an effect. Different observational methods for determining the solar radius yield results differing by as much as 7 parts in 10(exp 4); we found that this leads to uncertainties of a few parts in 10(exp 3) in the sound speed in the solar convective envelope, but has negligible effect on the interior. Our reference standard solar model yielded a convective envelope position of 0.7135 solar radius, in excellent agreement with the observed value of 0.713 +/- 0.001 solar radius, and was significantly affected only by Z/X, the pp rate, and the uncertainties in helium diffusion constants. Our reference model also yielded an envelope helium abundance of 0.2424, in good agreement with the approximate range of 0.24 to 0.25 inferred from helioseismic observations; only
Optimal input design for aircraft instrumentation systematic error estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1991-01-01
A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed input duration. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.
NASA Astrophysics Data System (ADS)
Katiyatiya, C. L. F.; Muchenje, V.; Mushunje, A.
2014-08-01
Seasonal variations in hair length, tick loads, cortisol levels, haematological parameters (HP) and temperature-humidity index (THI) in Nguni cows of different colours raised in two low-input farms and a commercial stud were determined. The sites were chosen based on their production systems, climatic characteristics and geographical locations. Zazulwana and Komga are low-input, humid-coastal areas, while Honeydale is a high-input, dry-inland Nguni stud farm. A total of 103 cows, grouped according to parity, location and coat colour, were used in the study. The effects of location, coat colour, hair length and season were used to determine tick loads on different body parts, cortisol levels and HP in blood from Nguni cows. The highest tick loads were recorded under the tail and the lowest on the head of each of the animals (P < 0.05). Zazulwana cows recorded the highest tick loads under the tails of all the cows used in the study from the three farms (P < 0.05). High tick loads were recorded for cows with long hairs. Hair lengths were longest during the winter season in the coastal areas of Zazulwana and Honeydale (P < 0.05). White and brown-white patched cows had significantly longer (P < 0.05) hair strands than those having a combination of red, black and white colour. Cortisol and THI were significantly lower (P < 0.05) in the summer season. Red blood cells, haemoglobin, haematocrit, mean cell volumes, white blood cells, neutrophils, lymphocytes, eosinophils and basophils were significantly different (P < 0.05), with some differences associated with age across all seasons and correlated with THI. It was concluded that location, coat colour and season had effects on hair length, cortisol levels, THI, HP, tick loads on different body parts and heat stress in Nguni cows.
NASA Technical Reports Server (NTRS)
Reddy C. J.
1998-01-01
Model-Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation for the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded as a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational-function approximation, the electric field is obtained over a frequency range; from the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and solutions computed at individual frequencies is observed.
Hansen, Steffi; Henning, Andreas; Naegel, Arne; Heisig, Michael; Wittum, Gabriel; Neumann, Dirk; Kostka, Karl-Heinz; Zbytovska, Jarmila; Lehr, Claus-Michael; Schaefer, Ulrich F
2008-02-01
Mathematical modeling of skin transport is considered a valuable alternative to in-vitro and in-vivo investigations, especially considering ethical and economical questions. Mechanistic diffusion models describe skin transport by solving Fick's 2nd law of diffusion in time and space; however, models relying entirely on a consistent experimental data set have been missing. For a two-dimensional model membrane consisting of a biphasic stratum corneum (SC) and a homogeneous epidermal/dermal compartment (DSL), methods are presented to determine all relevant input parameters. The data were generated for flufenamic acid (M(W) 281.24g/mol; logK(Oct/H2O) 4.8; pK(a) 3.9) and caffeine (M(W) 194.2g/mol; logK(Oct/H2O) -0.083; pK(a) 1.39) using female abdominal skin. K(lip/don) (lipid-donor partition coefficient) was determined in equilibration experiments with human SC lipids. K(cor/lip) (corneocyte-lipid) and K(DSL/lip) (DSL-lipid) were derived from easily available experimental data, i.e. K(SC/don) (SC-donor), K(lip/don) and K(SC/DSL) (SC-DSL), considering realistic volume fractions of the lipid and corneocyte phases. Lipid and DSL diffusion coefficients D(lip) and D(DSL) were calculated based on steady state flux. The corneocyte diffusion coefficient D(cor) is not accessible experimentally and needs to be estimated by simulation. Based on these results, time-dependent stratum corneum concentration-depth profiles were simulated and compared to experimental profiles in an accompanying study.
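The core numerical task, solving Fick's 2nd law in time and space, can be sketched with an explicit finite-difference scheme for a single homogeneous layer with a constant donor concentration on one face and a perfect sink on the other. The one-layer geometry and all parameter values are simplifications for illustration; the paper's membrane is a biphasic SC plus a DSL compartment.

```python
def diffuse(d, length, c0, nx=51, t_end=20.0):
    """Explicit finite-difference solution of dc/dt = D * d2c/dx2 with
    c(0) = c0 (donor) and c(L) = 0 (sink), in arbitrary units."""
    dx = length / (nx - 1)
    dt = 0.4 * dx * dx / d          # below the explicit stability limit 0.5
    c = [0.0] * nx
    c[0] = c0                       # donor boundary
    for _ in range(int(t_end / dt)):
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + d * dt / (dx * dx) * (c[i + 1] - 2 * c[i] + c[i - 1])
        c = new
    return c

profile = diffuse(d=0.1, length=1.0, c0=1.0)
# once transients decay the concentration-depth profile becomes linear,
# so the midpoint value approaches c0/2
print(round(profile[25], 2))
```

Concentration-depth profiles like `profile`, computed at successive times, are the quantities compared against the experimental SC depth profiles in the accompanying study.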
NASA Technical Reports Server (NTRS)
Green, Sheldon; Boissoles, J.; Boulet, C.
1988-01-01
The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close coupling, i.e., numerically exact values, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines to R(10). IOS values are less accurate, but, owing to their simplicity, may nonetheless prove useful as has been recently demonstrated.
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective: a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
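The principle of choosing inputs for maximum estimate accuracy can be illustrated with the Cramér-Rao bound for a deliberately simple, invented model, y = a·u + b·u² plus noise, rather than the paper's aircraft dynamics. Under a fixed input-energy constraint, an amplitude-rich input keeps the information matrix well conditioned, while a constant input of the same energy cannot separate the two parameters at all.

```python
import numpy as np

def crlb_trace(u, sigma=1.0):
    """Sum of parameter variances (Cramer-Rao bound) for least-squares
    estimation of (a, b) in y = a*u + b*u**2 + noise.  The information
    matrix is X.T @ X / sigma**2 with regressor columns [u, u**2]."""
    X = np.column_stack([u, u ** 2])
    info = X.T @ X / sigma ** 2
    return float(np.trace(np.linalg.inv(info)))

# two candidate test inputs with identical energy sum(u**2) = 10
u_rich = np.array([0.5, 1.5, -0.5, -1.5, 0.5, 1.5, -0.5, -1.5])
u_flat = np.full(8, np.sqrt(1.25))

print(crlb_trace(u_rich))   # finite: both parameters identifiable
# u_flat makes the regressor matrix rank-1, so its information matrix is
# singular and the bound is infinite: the parameters cannot be separated
```

Optimal input design amounts to minimizing a scalar like this trace over all inputs satisfying the energy (response-limit) constraint.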
Liebman, Matt; Wander, Michelle M.
2016-01-01
Plant-soil relations may explain why low-external input (LEI) diversified cropping systems are more efficient than their conventional counterparts. This work sought to identify links between management practices, soil quality changes, and root responses in a long-term cropping systems experiment in Iowa where grain yields of 3-year and 4-year LEI rotations have matched or exceeded yield achieved by a 2-year maize (Zea mays L.) and soybean (Glycine max L.) rotation. The 2-year system was conventionally managed and chisel-ploughed, whereas the 3-year and 4-year systems received plant residues and animal manures and were periodically moldboard ploughed. We expected changes in soil quality to be driven by organic matter inputs, and root growth to reflect spatial and temporal fluctuations in soil quality resulting from those additions. We constructed a carbon budget and measured soil quality indicators (SQIs) and rooting characteristics using samples taken from two depths of all crop-phases of each rotation system on multiple dates. Stocks of particulate organic matter carbon (POM-C) and potentially mineralizable nitrogen (PMN) were greater and more evenly distributed in the LEI than conventional systems. Organic C inputs, which were 58% and 36% greater in the 3-year rotation than in the 4-year and 2-year rotations, respectively, did not account for differences in SQI abundance or distribution. Surprisingly, SQIs did not vary with crop-phase or date. All biochemical SQIs were more stratified (p<0.001) in the conventionally-managed soils. While POM-C and PMN in the top 10 cm were similar in all three systems, stocks in the 10–20 cm depth of the conventional system were less than half the size of those found in the LEI systems. This distribution was mirrored by maize root length density, which was also concentrated in the top 10 cm of the conventionally managed plots and evenly distributed between depths in the LEI systems. The plow-down of organic amendments and
NASA Astrophysics Data System (ADS)
Suchomska, K.; Graczyk, D.; Smolec, R.; Pietrzyński, G.; Gieren, W.; Stȩpień, K.; Konorski, P.; Pilecki, B.; Villanova, S.; Thompson, I. B.; Górski, M.; Karczmarek, P.; Wielgórski, P.; Anderson, R. I.
2015-07-01
We have analyzed the double-lined eclipsing binary system ASAS J180057-2333.8 from the All Sky Automated Survey (ASAS) catalogue. We measure absolute physical and orbital parameters for this system based on archival V-band and I-band ASAS photometry, as well as on high-resolution spectroscopic data obtained with the ESO 3.6 m/HARPS and CORALIE spectrographs. The physical and orbital parameters of the system were derived with an accuracy of about 0.5-3 per cent. The system is a very rare configuration of two bright well-detached giants of spectral types K1 and K4 and luminosity class II. The radii of the stars are R1 = 52.12 ± 1.38 and R2 = 67.63 ± 1.40 R⊙ and their masses are M1 = 4.914 ± 0.021 and M2 = 4.875 ± 0.021 M⊙. The exquisite accuracy of 0.5 per cent obtained for the masses of the components is one of the best mass determinations for giants. We derived a precise distance to the system of 2.14 ± 0.06 kpc (stat.) ± 0.05 (syst.) which places the star in the Sagittarius-Carina arm. The Galactic rotational velocity of the star is Θs = 258 ± 26 km s-1 assuming Θ0 = 238 km s-1. A comparison with PARSEC isochrones places the system at the early phase of core helium burning with an age slightly greater than 100 million years. The effect of overshooting on stellar evolutionary tracks was explored using the MESA star code.
Badran, Yasser Ali; Abdelaziz, Alsayed Saad; Shehab, Mohamed Ahmed; Mohamed, Hazem Abdelsabour Dief; Emara, Absel-Aziz Ali; Elnabtity, Ali Mohamed Ali; Ghanem, Maged Mohammed; ELHelaly, Hesham Abdel Azim
2016-01-01
Objective: The objective was to determine predictors of success of shock wave lithotripsy (SWL) using a combination of computed-tomography-based metric parameters to improve the treatment plan. Patients and Methods: 180 consecutive patients with symptomatic upper urinary tract calculi of 20 mm or less who underwent extracorporeal SWL were enrolled in our study and divided into two main groups according to stone size: Group A (92 patients with stones ≤10 mm) and Group B (88 patients with stones >10 mm). Both groups were evaluated according to skin-to-stone distance (SSD) and Hounsfield units (≤500, 500–1000 and >1000 HU). Results: Both groups were comparable in baseline data and stone characteristics. About 92.3% of Group A were rendered stone-free, whereas 77.2% were stone-free in Group B (P = 0.001). Furthermore, in both groups the SWL success rate was significantly higher for stones with lower attenuation (<830 HU) than for stones >830 HU (P < 0.034). SSD showed statistically significant differences in SWL outcome (P < 0.02). Considering the three parameters stone size, stone attenuation value, and SSD simultaneously, we found that the stone-free rate (SFR) was 100% for stones with attenuation <830 HU, whether <10 mm or >10 mm, but the total number of SWL sessions and shock waves required for the larger-stone group was higher than for the smaller-stone group (P < 0.01). Furthermore, SFR was 83.3% and 37.5% for stones <10 mm with mean HU >830 and SSD <90 mm and SSD >120 mm, respectively. On the other hand, SFR was 52.6% and 28.57% for stones >10 mm with mean HU >830 and SSD <90 mm and SSD >120 mm, respectively. Conclusion: Stone size, stone density (HU), and SSD are simple to calculate and can be reported by radiologists; applying a combined score helps to augment the predictive power of SWL, reduce cost, and improve treatment strategies. PMID:27141192
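A combined score of the kind the conclusion proposes might look like the sketch below, assembled from the cut-offs reported in the abstract (10 mm size, 830 HU attenuation, roughly 90 mm SSD). The scoring rule itself is hypothetical and not a validated clinical instrument.

```python
def swl_favourability(stone_mm, hounsfield, ssd_mm):
    """Count how many of the three favourable criteria a stone meets:
    small (<=10 mm), low attenuation (<830 HU), short skin-to-stone
    distance (<90 mm).  3 = most favourable for SWL, 0 = least."""
    score = 0
    score += 1 if stone_mm <= 10 else 0
    score += 1 if hounsfield < 830 else 0
    score += 1 if ssd_mm < 90 else 0
    return score

print(swl_favourability(8, 600, 85))    # 3: all criteria favourable
print(swl_favourability(14, 950, 130))  # 0: none favourable
```

In the abstract's data, patients at the favourable extreme reached 100% stone-free rates while the unfavourable extreme fell below 30%, which is the gradient such a score is meant to capture.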
Prediction of foF2 Disturbances Above Tokyo Using Solar Wind Input to a Neural Network
NASA Astrophysics Data System (ADS)
Uchida, H. A.; Miyake, W.; Nakamura, M.
2015-12-01
Neural networks can learn an input-output relation from past data and are often used to produce empirical prediction models of space environment parameters. One operational model (Nakamura, 2008) used K-index input to predict foF2 variations and storms above Tokyo. It was expected that predictions during disturbed conditions would become more accurate when solar wind parameters are used as inputs. Recently, the span of solar wind parameters available from the Advanced Composition Explorer became long enough to cover one solar activity cycle. In this study, solar wind proton velocity and the IMF are used as inputs to predict foF2 disturbances above Tokyo (the SW input model). The K-index input model (Nakamura, 2008) was also recreated using the same data period as the SW input model. The SW input model tends to predict negative disturbances better, and it predicted rapid daytime variations more accurately than the K-index input model. A statistical comparison of the prediction ability of these models will be discussed, and the contribution of the solar wind input parameters to foF2 will be tested using artificial inputs.
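A minimal version of the learning setup, a small feed-forward network trained on past input-output pairs, can be sketched as follows. The two inputs and the target relation are invented stand-ins (the real model is trained on measured solar wind and foF2 data, with an architecture the abstract does not specify).

```python
import numpy as np

# synthetic stand-in: two scaled "solar wind" inputs -> a disturbance index
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.tanh(X[:, 0] - X[:, 1])           # invented target relation

# one-hidden-layer network trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    err = (h @ W2 + b2).ravel() - y      # prediction error
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = np.array([err.mean()])
    dh = (err[:, None] @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
mse = float(np.mean((pred - y) ** 2))
print(mse)  # training error shrinks as the network fits the relation
```

The "artificial input" test mentioned in the abstract corresponds to feeding such a trained network constructed input vectors and inspecting how each input channel moves the prediction.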
NASA Astrophysics Data System (ADS)
Joosten, S.; Pammler, K.; Silny, J.
2009-02-01
The problem of electromagnetic interference with electronic implants such as cardiac pacemakers has been well known for many years. An increasing number of field sources in everyday life and the occupational environment leads unavoidably to an increased risk for patients with electronic implants. However, no obligatory national or international safety regulations exist for the protection of this patient group. The aim of this study is to find the anatomical and physiological worst-case conditions for patients with an implanted pacemaker adjusted to unipolar sensing in external time-varying electric fields. The results of this study with 15 volunteers show that, in electric fields, the interference voltage at the input of a cardiac pacemaker varies by up to 200% solely because of individual factors. These factors should be considered in human studies and in the setting of safety regulations.
NASA Astrophysics Data System (ADS)
Montes, D.; Caballero, J. A.; Alonso-Floriano, F. J.; Cortes Contreras, M.; Gonzalez-Alvarez, E.; Hidalgo, D.; Holgado, G.; Llamas, M.; Martinez-Rodriguez, H.; Sanz-Forcada, J.
2015-01-01
We are helping to compile the most comprehensive database of M dwarfs ever built: CARMENCITA, the CARMENES Cool dwarf Information and daTa Archive, which will be the CARMENES `input catalogue'. In addition to the science preparation with low- and high-resolution spectrographs and lucky imagers (see the other contributions in this volume), we compile a large body of public data on over 2100 M dwarfs and analyze them, mostly using virtual-observatory tools. Here we describe four specific actions carried out by master's and undergraduate students. They mine public archives for additional high-resolution spectroscopy (UVES, FEROS and HARPS), multi-band photometry (FUV-NUV-u-B-g-V-r-R-i-J-H-Ks-W1-W2-W3-W4), X-ray data (ROSAT, XMM-Newton and Chandra), periods, rotational velocities and Hα pseudo-equivalent widths. As described, there are many interdependences between all these data.
Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D.; Amos, C.N.; Helton, J.; Boyd, G.
1992-06-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but on determining the distribution of risk and assessing the uncertainties that account for the breadth of this distribution. Off-site risk is initiated by events both internal and external to the power station. Much of this important input to the logic models was generated by expert panels. This document presents the distributions, and the rationale supporting them, for the questions posed to the Source Term Panel.
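Determining a distribution of risk rather than a point estimate amounts to propagating the expert-supplied input distributions through the risk logic by Monte Carlo sampling. The sketch below invents a trivially simple risk chain and distribution parameters purely to show the mechanics; the NUREG-1150 logic models are vastly more detailed.

```python
import random

random.seed(42)

def risk_sample():
    """One risk draw from invented uncertain inputs: initiating-event
    frequency, conditional failure probability, and a consequence
    measure, combined multiplicatively."""
    freq = random.lognormvariate(-9, 1.0)         # events per year
    p_fail = random.betavariate(2, 50)            # conditional probability
    consequence = random.lognormvariate(2, 0.5)   # consequence measure
    return freq * p_fail * consequence

samples = sorted(risk_sample() for _ in range(20000))
median = samples[10000]
p95 = samples[int(0.95 * 20000)]
print(median, p95)  # the spread between quantiles is the point of the exercise
```

Reporting quantiles of `samples` instead of a single number is what lets the assessment state how broad the risk distribution is, which was the stated emphasis of the analysis.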
Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values of parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
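The conventional 3-2-1-1 maneuver mentioned above is a multistep input whose alternating pulses last 3, 2, 1, and 1 times a base pulse width. A minimal sketch of generating such a signal follows; the base width, amplitude, and sample rate are illustrative values, not those flown on the HARV:

```python
import numpy as np

def multistep_3211(base_width, amplitude=1.0, dt=0.02):
    """Conventional 3-2-1-1 multistep test input: four alternating pulses
    whose durations are 3, 2, 1, and 1 times a base pulse width (seconds)."""
    durations = [3 * base_width, 2 * base_width, base_width, base_width]
    signs = [1.0, -1.0, 1.0, -1.0]
    segments = [s * amplitude * np.ones(int(round(d / dt)))
                for s, d in zip(signs, durations)]
    return np.concatenate(segments)

# Example: 0.7 s base pulse, unit amplitude, 50 Hz sampling.
u = multistep_3211(base_width=0.7)
print(len(u), float(u.max()), float(u.min()))
```

The base width is normally tuned so the input's energy lands near the expected short-period frequency, which is exactly the tuning an optimal input design automates.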
Input to the PRAST computer code used in the SRS probabilistic risk assessment
Kearnaghan, D.P.
1992-10-15
The PRAST (Production Reactor Algorithm for Source Terms) computer code was developed by Westinghouse Savannah River Company and Science Applications International Corporation for the quantification of source terms for the Savannah River Site (SRS) Reactor Probabilistic Risk Assessment. PRAST requires as input a set of release fractions, decontamination factors, transfer fractions and source term characteristics that accurately reflect the conditions evaluated by PRAST. This document links the analyses which form the basis for the PRAST input parameters. In addition, it gives the distributions of the input parameters that are uncertain and considered important to the evaluation of the source terms to the environment.
Jannik, T.; Karapatakis, D.; Lee, P.; Farfan, E.
2010-08-06
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) Regulatory Guides. Within the regulatory guides, default values are provided for many of the dose model parameters but the use of site-specific values by the applicant is encouraged. A detailed survey of land and water use parameters was conducted in 1991 and is being updated here. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors to be used in human health exposure calculations at SRS are documented. Based on comparisons to the 2009 SRS environmental compliance doses, the following effects are expected in future SRS compliance dose calculations: (1) Aquatic all-pathway maximally exposed individual doses may go up about 10 percent due to changes in the aquatic bioaccumulation factors; (2) Aquatic all-pathway collective doses may go up about 5 percent due to changes in the aquatic bioaccumulation factors that offset the reduction in average individual water consumption rates; (3) Irrigation pathway doses to the maximally exposed individual may go up about 40 percent due to increases in the element-specific transfer factors; (4) Irrigation pathway collective doses may go down about 50 percent due to changes in food productivity and production within the 50-mile radius of SRS; (5) Air pathway doses to the maximally exposed individual may go down about 10 percent due to the changes in food productivity in the SRS area and to the changes in element-specific transfer factors; and (6
NASA Astrophysics Data System (ADS)
Jaunat, J.; Celle-Jeanton, H.; Huneau, F.; Dupuy, A.; Le Coustumer, P.
2013-07-01
A hydrochemical and isotopic survey of rainwater and groundwater was carried out over almost two years on the Ursuya Mount in the northern Basque Country (southwestern France), with the aim of better understanding the behaviour of this aquifer and, more particularly, the recharge mode of its groundwater. The input signal of the aquifer is characterized from 112 rainwater samples. The computed meteoric water line (δD = 7.3 δ18O + 5.1; r = 0.96) is consistent with that defined by the European IAEA/WMO network stations. The weighted mean deuterium excess of about 9.7‰ is very close to the value obtained for Atlantic precipitation and clearly demonstrates a predominantly oceanic origin. Computations on the chemical dataset show that the rainwater composition is controlled by four major factors: (1) a mixed source of anthropogenic pollution and crustal material; (2) a marine source; (3) an urban source; and (4) an acid source. Further, the quantification of neutralizing potentials clearly revealed below-cloud processes in which crustal and anthropogenic components are responsible for the neutralization of anions. Besides the predominantly Atlantic origin of the recharge water, the chemical and isotopic content of the samples, coupled with the corresponding air-mass back trajectories, revealed four major geographical origins of the components: (1) the northwestern and (2) the southwestern Atlantic Ocean, where the oceanic influence is highlighted by the stable-isotope content (weighted mean close to the Atlantic Ocean signature) and by chemical concentrations dominated by sea-salt elements; (3) northern Europe, whose continental influence is shown by a slight depletion of the isotopic signal relative to a purely oceanic origin and by higher concentrations of crustal and anthropogenic components; and (4) a southeastern area (southeastern Europe, northern Africa and the Mediterranean Sea) with an isotopic signature consistent with the
NASA Astrophysics Data System (ADS)
Wang, Yong; Goh, Wang Ling; Chai, Kevin T.-C.; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu
2016-04-01
The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially for those fabricated on silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above mentioned parasitic effects which are commonly observed in Lamb-wave resonators. It is a combination of interdigital capacitor of both plate capacitance and fringe capacitance, interdigital resistance, Ohmic losses in substrate, and the acoustic motional behavior of typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, strengthening the capability on characterizing both magnitude and phase of either Y11 or Y21. The accurate modelling on two-port Y-parameters makes the PiBVD model beneficial in the characterization of Lamb-wave resonators, providing accurate simulation to Lamb-wave resonators and oscillators.
NASA Astrophysics Data System (ADS)
de la Paz, Mercedes; Gómez-Parra, Abelardo; Forja, Jesús
2008-06-01
The main objective of the present study is to assess the temporal variability of the carbonate system, and the mechanisms driving that variability, in the Rio San Pedro, a tidal creek located in the Bay of Cadiz (SW Iberian Peninsula). This shallow tidal creek is affected by effluents of organic matter and nutrients from surrounding marine fish farms. In 2004, 11 tidal samplings, seasonally distributed, were carried out for the measurement of total alkalinity (TA), pH, dissolved oxygen and Chlorophyll-a (Chl-a) using a fixed station. In addition, several longitudinal samplings were carried out both in the tidal creek and in the adjacent waters of the Bay of Cadiz, in order to obtain a spatial distribution of the carbonate parameters. Tidal mixing is the main factor controlling the dissolved inorganic carbon (DIC) variability, showing almost conservative behaviour on a tidal time scale. The amplitude of the daily oscillations of DIC, pH and chlorophyll shows a high dependence on the spring-neap tide sequence, with the maximum amplitude associated with spring tides. Additionally, a marked seasonality has been found in the DIC, pH and oxygen concentrations. This seasonality seems to be related to the increase in metabolic rates with the temperature, the alternation of storm events and high evaporation rates, together with intense seasonal variability in the discharges from fish farms. In addition, the export of DIC from the Rio San Pedro to the adjacent coastal area has been evaluated using the tidal prism model, obtaining a net export of 1.05×10^10 g C yr^-1.
Input/output system identification - Learning from repeated experiments
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Horta, Lucas G.; Longman, Richard W.
1990-01-01
The paper describes three approaches and possible variations for the determination of the Markov parameters for forced response data using general inputs. It is shown that, when the parameters in the solution procedure are bootstrapped, the results can be obtained very efficiently, but the errors propagate throughout all parameters. By arranging the data in a different form and using singular value decomposition, the resulting identified parameters are more accurate, in the least number of successive experiments, at the expense of a large matrix singular value decomposition. When a recursive procedure is employed, the calculations can be performed very efficiently, but the number of repetitions of the experiments is much greater for a given accuracy than for any of the previous approaches. An alternative formulation is proposed to combine the advantages of each of the approaches.
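The data-arrangement idea above, stacking shifted copies of the input so the Markov parameters fall out of a single SVD-based least-squares solve, can be sketched as follows. The system, signal length, and number of parameters are illustrative assumptions, not the paper's examples:

```python
import numpy as np

# Hypothetical discrete-time SISO system: y[k] = sum_i h[i] * u[k-i].
# h_true holds the Markov parameters (impulse response) to be recovered.
h_true = np.array([1.0, 0.5, 0.25, 0.125])
m = len(h_true)

rng = np.random.default_rng(0)
u = rng.standard_normal(200)               # general (random) input
y = np.convolve(u, h_true)[:len(u)]        # forced response, zero initial state

# Arrange data so that y[k] = [u[k], u[k-1], ..., u[k-m+1]] @ h.
U = np.column_stack([np.concatenate([np.zeros(i), u[:len(u) - i]])
                     for i in range(m)])

# Solve with the SVD-based pseudo-inverse (robust to ill-conditioning).
h_est = np.linalg.pinv(U) @ y
print(np.round(h_est, 6))
```

With noisy data the same solve returns the least-squares Markov parameters, and truncating small singular values in the pseudo-inverse trades bias for noise rejection.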
Toward an inventory of nitrogen input to the United States
Accurate accounting of nitrogen inputs is increasingly necessary for policy decisions related to aquatic nutrient pollution. Here we synthesize available data to provide the first integrated estimates of the amount and uncertainty of nitrogen inputs to the United States. Abou...
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Albert, C.; Reichert, P.; Rieckermann, J.
2015-12-01
Rainfall is the main driver of hydrological systems. Unfortunately, it is highly variable in space and time and therefore difficult to observe accurately. This poses a serious challenge to correctly estimating the catchment-averaged precipitation, a key factor for hydrological models. As biased precipitation leads to biased parameter estimation and thus to biased runoff predictions, it is very important to have a realistic description of precipitation uncertainty. Rainfall multipliers (RM), which correct each observed storm with a random factor, provide a first step in this direction. Nevertheless, they often fail when the estimated input has a different temporal pattern from the true one or when a storm is not detected by the rain gauge. In this study we propose a more realistic input error model, which overcomes these challenges by better estimating both the model input and the model parameters. We formulate the average precipitation over the watershed as a stochastic input process (SIP). We suggest a transformed Gauss-Markov process, which is estimated in a Bayesian framework by using input (rainfall) and output (runoff) data. We tested the methodology in a 28.6 ha urban catchment represented by an accurate conceptual model. Specifically, we perform calibration and predictions with SIP and RM using accurate data from nearby rain gauges (R1) and inaccurate data from a distant gauge (R2). Results show that with SIP, the estimated model parameters are "protected" from the corrupting impact of inaccurate rainfall. Additionally, SIP can correct input biases during calibration (Figure) and reliably quantify rainfall and runoff uncertainties during both calibration (Figure) and validation. In our real-world application with non-trivial rainfall errors, this was not the case with RM. We therefore recommend SIP in all cases where the input is the predominant source of uncertainty. Furthermore, the high-resolution rainfall intensities obtained with this
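The transformed Gauss-Markov idea can be illustrated by sampling a discretized Ornstein-Uhlenbeck process and mapping it to nonnegative intensities. The correlation time, scale, and crude truncation transform below are arbitrary illustrative choices, not the calibrated quantities from the study:

```python
import numpy as np

def sample_sip(n, dt=1.0, tau=30.0, sigma=1.0, seed=0):
    """Sample a stationary Gauss-Markov (Ornstein-Uhlenbeck) process at n
    time steps and transform it to a nonnegative rainfall intensity."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)                   # one-step autocorrelation
    innov_scale = sigma * np.sqrt(1.0 - a**2)
    z = np.empty(n)
    z[0] = sigma * rng.standard_normal()    # start in the stationary law
    for k in range(1, n):
        z[k] = a * z[k - 1] + innov_scale * rng.standard_normal()
    return np.maximum(z, 0.0)               # crude truncation transform

rain = sample_sip(2000)
print(float(rain.min()), round(float(rain.mean()), 3))
```

In the Bayesian setting, the latent process values are inferred jointly with the hydrological model parameters, with the runoff data informing both.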
NASA Astrophysics Data System (ADS)
Huerta, E. A.; Gair, Jonathan R.; Brown, Duncan A.
2012-03-01
-4, and 10^-1, respectively. LISA should also be able to determine the location of the source in the sky and the SMBH spin orientation to within ~10^-4 steradians. Furthermore, we show that by including conservative corrections up to 2.5PN order, systematic errors no longer dominate over statistical errors. This shows that search templates that include small-body spin effects in the equations of motion up to 2.5PN order should allow us to perform accurate parameter extraction for IMRIs with typical signal-to-noise ratio ~1000.
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
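The core idea, scoring candidate inputs by the Fisher information of the data they would generate, can be sketched for a simple Thevenin-style equivalent-circuit model. The model structure, parameter values, noise level, and the two candidate current profiles below are illustrative assumptions only:

```python
import numpy as np

def simulate(params, current, dt=1.0):
    """Terminal voltage of a hypothetical first-order Thevenin battery model
    (series resistance R0 plus one R1-C1 relaxation branch, fixed OCV)."""
    r0, r1, c1 = params
    a = np.exp(-dt / (r1 * c1))
    v1, out = 0.0, np.empty(len(current))
    for k, ik in enumerate(current):
        out[k] = 3.7 - r0 * ik - v1
        v1 = a * v1 + r1 * (1.0 - a) * ik
    return out

def fisher(params, current, sigma=0.01):
    """Fisher information matrix for Gaussian output noise; output
    sensitivities obtained by central finite differences."""
    S = np.empty((len(current), len(params)))
    for j, p in enumerate(params):
        eps = 1e-6 * max(abs(p), 1.0)
        hi, lo = list(params), list(params)
        hi[j] += eps
        lo[j] -= eps
        S[:, j] = (simulate(hi, current) - simulate(lo, current)) / (2 * eps)
    return S.T @ S / sigma**2

theta = (0.05, 0.03, 2000.0)               # assumed R0 [ohm], R1 [ohm], C1 [F]
k = np.arange(600)                         # 600 one-second samples
flat = np.ones(600)                        # constant 1 A discharge
rich = np.sign(np.sin(2 * np.pi * k / 120.0))  # square wave: richer excitation

d_flat = np.linalg.det(fisher(theta, flat))
d_rich = np.linalg.det(fisher(theta, rich))
print(d_rich > d_flat)
```

A D-optimal input-shaping procedure would then search over a parameterized family of such profiles to maximize this determinant subject to current and voltage limits.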
Analysis of Stochastic Response of Neural Networks with Stochastic Input
1996-10-10
Software permits the user to extend the capability of his/her neural network to include probabilistic characteristics of input parameters. The user inputs the topology and weights associated with the neural network, along with distributional characteristics of the input parameters. Network response is provided via a cumulative density function of the network response variable.
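The described capability, propagating an input distribution through a fixed network to a CDF of the response, can be sketched by Monte Carlo sampling. The topology, weights, and input distributions below are made up for illustration:

```python
import numpy as np

# Fixed toy network: 2 inputs -> 3 tanh hidden units -> 1 linear output.
W1 = np.array([[0.8, -0.5], [0.3, 0.9], [-0.6, 0.4]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([1.0, -0.7, 0.5])
b2 = 0.05

def net(x):
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

# Monte Carlo propagation: sample the input distribution, evaluate the
# network, and form the empirical CDF of the response variable.
rng = np.random.default_rng(0)
samples = np.array([net(rng.normal([0.0, 1.0], [0.2, 0.1]))
                    for _ in range(5000)])
grid = np.sort(samples)
cdf = np.arange(1, len(grid) + 1) / len(grid)
print(round(float(grid[len(grid) // 2]), 3))   # approximate median response
```

Any quantile or exceedance probability of the network response can then be read directly off the (grid, cdf) pair.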
ERIC Educational Resources Information Center
Berliss-Vincent, Jane; Whitford, Gigi
2002-01-01
This article presents both the factors involved in successful speech input use and the potential barriers that may suggest that other access technologies could be more appropriate for a given individual. Speech input options that are available are reviewed and strategies for optimizing use of speech recognition technology are discussed. (Contains…
NASA Technical Reports Server (NTRS)
Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette
2006-01-01
This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
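The recursive least-squares step for the linear-predictor coefficients can be sketched as follows; the AR(2) test signal, forgetting factor, and initialization are illustrative choices rather than the report's speech data:

```python
import numpy as np

def rls(x, order, lam=0.999, delta=100.0):
    """Recursive least squares for a linear predictor:
    x[k] ~ w @ [x[k-1], ..., x[k-order]]. Returns the final coefficients."""
    w = np.zeros(order)
    P = np.eye(order) * delta                 # inverse-correlation estimate
    for k in range(order, len(x)):
        phi = x[k - order:k][::-1]            # most recent sample first
        e = x[k] - w @ phi                    # a priori prediction error
        g = P @ phi / (lam + phi @ P @ phi)   # gain vector
        w = w + g * e
        P = (P - np.outer(g, phi @ P)) / lam
    return w

# Synthetic AR(2) test signal with known coefficients (illustrative values).
rng = np.random.default_rng(1)
a_true = np.array([1.5, -0.7])
x = np.zeros(500)
for k in range(2, 500):
    x[k] = a_true @ x[k - 2:k][::-1] + 0.1 * rng.standard_normal()

print(np.round(rls(x, order=2), 3))
```

In the coding scheme described above, the same recursion would run jointly over the predictor coefficients and the codebook-quantized excitation rather than over a known test signal.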
Inferring Indel Parameters using a Simulation-based Approach
Levy Karin, Eli; Rabin, Avigayel; Ashkenazy, Haim; Shkedy, Dafna; Avram, Oren; Cartwright, Reed A.; Pupko, Tal
2015-01-01
In this study, we present a novel methodology to infer indel parameters from multiple sequence alignments (MSAs) based on simulations. Our algorithm searches for the set of evolutionary parameters describing indel dynamics which best fits a given input MSA. In each step of the search, we use parametric bootstraps and the Mahalanobis distance to estimate how well a proposed set of parameters fits input data. Using simulations, we demonstrate that our methodology can accurately infer the indel parameters for a large variety of plausible settings. Moreover, using our methodology, we show that indel parameters substantially vary between three genomic data sets: Mammals, bacteria, and retroviruses. Finally, we demonstrate how our methodology can be used to simulate MSAs based on indel parameters inferred from real data sets. PMID:26537226
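The scoring step, a parametric bootstrap combined with the Mahalanobis distance, can be sketched with a toy stand-in for the indel model (Poisson gap counts and two summary statistics; the real method uses MSA-specific summaries and a richer parameter set):

```python
import numpy as np

def summary_stats(gap_counts):
    """Toy summary statistics of an alignment: mean and variance of gap
    counts per column (stand-ins for real indel summaries)."""
    return np.array([gap_counts.mean(), gap_counts.var()])

def mahalanobis_score(observed, simulate, n_boot=500):
    """Parametric bootstrap: simulate n_boot datasets under the candidate
    parameters, then measure how far the observed statistics lie from the
    simulated cloud (smaller = better fit)."""
    rng = np.random.default_rng(0)
    sims = np.array([summary_stats(simulate(rng)) for _ in range(n_boot)])
    d = observed - sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothetical indel model: gap counts per column ~ Poisson(rate).
rng = np.random.default_rng(42)
obs = summary_stats(rng.poisson(2.0, size=200))   # "observed" data, rate 2

score_good = mahalanobis_score(obs, lambda r: r.poisson(2.0, size=200))
score_bad = mahalanobis_score(obs, lambda r: r.poisson(5.0, size=200))
print(round(score_good, 2), round(score_bad, 2))
```

A parameter search then simply minimizes this score over candidate parameter sets, which is the structure of the algorithm described in the abstract.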
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
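The baseline scheme the paper improves upon, a monotone piecewise-cubic Hermite interpolant with a slope limiter, can be sketched as follows. This is the standard Fritsch-Carlson-style construction (harmonic-mean limiter), not the paper's higher-order median-based algorithm:

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Monotone piecewise-cubic Hermite interpolant with a harmonic-mean
    slope limiter; tangents are flattened at strict local extrema, which
    is exactly where accuracy drops to second order."""
    h = np.diff(x)
    d = np.diff(y) / h                       # secant slopes
    m = np.empty_like(y)
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, len(x) - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0                       # local extremum: flat tangent
        else:
            m[i] = 2 * d[i - 1] * d[i] / (d[i - 1] + d[i])
    out = np.empty(len(xq))
    for j, xv in enumerate(xq):
        i = max(min(np.searchsorted(x, xv) - 1, len(x) - 2), 0)
        t = (xv - x[i]) / h[i]
        h00 = 2 * t**3 - 3 * t**2 + 1        # cubic Hermite basis
        h10 = t**3 - 2 * t**2 + t
        h01 = -2 * t**3 + 3 * t**2
        h11 = t**3 - t**2
        out[j] = (h00 * y[i] + h10 * h[i] * m[i]
                  + h01 * y[i + 1] + h11 * h[i] * m[i + 1])
    return out

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 1.0, 3.0, 4.0])      # monotone data with a flat step
xq = np.linspace(0.0, 4.0, 101)
yq = monotone_cubic(x, y, xq)
print(bool(np.all(np.diff(yq) >= -1e-12)))   # interpolant stays monotone
```

The harmonic mean keeps each tangent within twice the smaller neighboring secant slope, which is sufficient for monotonicity; the paper's contribution is relaxing such constraints near extrema to recover uniform third- and fourth-order accuracy.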
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
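Order of accuracy in this sense can be demonstrated with classical central-difference first-derivative stencils (these are the standard textbook coefficients, not the paper's single-step propagation schemes):

```python
import numpy as np

def deriv(f_vals, dx, coeffs):
    """Apply a central-difference stencil (offsets -k..k) for f'; returns
    the derivative on interior points untouched by the boundary."""
    k = len(coeffs) // 2
    n = len(f_vals)
    out = np.zeros(n - 2 * k)
    for j, c in enumerate(coeffs):
        out += c * f_vals[j:n - 2 * k + j]
    return out / dx

x = np.linspace(0.0, 2.0 * np.pi, 201)
dx = x[1] - x[0]
f = np.sin(x)
c2 = [-0.5, 0.0, 0.5]                                    # 2nd-order stencil
c6 = [-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60]           # 6th-order stencil
err2 = np.max(np.abs(deriv(f, dx, c2) - np.cos(x[1:-1])))
err6 = np.max(np.abs(deriv(f, dx, c6) - np.cos(x[3:-3])))
print(err2, err6)                      # higher order: far smaller error
```

At the fixed resolution of eight or more points per wavelength used here, the higher-order stencil's error is smaller by several orders of magnitude, which is why long-time propagation demands high order, high resolution, or both.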
Garrido, Nuno M; Jorge, Miguel; Queimada, António J; Gomes, José R B; Economou, Ioannis G; Macedo, Eugénia A
2011-10-14
The Gibbs energy of hydration is an important quantity to understand the molecular behavior in aqueous systems at constant temperature and pressure. In this work we review the performance of some popular force fields, namely TraPPE, OPLS-AA and Gromos, in reproducing the experimental Gibbs energies of hydration of several alkyl-aromatic compounds--benzene, mono-, di- and tri-substituted alkylbenzenes--using molecular simulation techniques. In the second part of the paper, we report a new model that is able to improve such hydration energy predictions, based on Lennard-Jones parameters from the recent TraPPE-EH force field and atomic partial charges obtained from natural population analysis of density functional theory calculations. We apply a scaling factor determined by fitting the experimental hydration energy of only two solutes, and then present a simple rule to generate atomic partial charges for different substituted alkyl-aromatics. This rule has the added advantages of eliminating the unnecessary assumption of fixed charge on every substituted carbon atom and providing a simple guideline for extrapolating the charge assignment to any multi-substituted alkyl-aromatic molecule. The point charges derived here yield excellent predictions of experimental Gibbs energies of hydration, with an overall absolute average deviation of less than 0.6 kJ mol(-1). This new parameter set can also give good predictive performance for other thermodynamic properties and liquid structural information.
Laumer, Bernhard; Schuster, Fabian; Stutzmann, Martin; Bergmaier, Andreas; Dollinger, Guenther; Eickhoff, Martin
2013-06-21
Zn{sub 1-x}Mg{sub x}O epitaxial films with Mg concentrations 0{<=}x{<=}0.3 were grown by plasma-assisted molecular beam epitaxy on a-plane sapphire substrates. Precise determination of the Mg concentration x was performed by elastic recoil detection analysis. The bandgap energy was extracted from absorption measurements with high accuracy taking electron-hole interaction and exciton-phonon complexes into account. From these results a linear relationship between bandgap energy and Mg concentration is established for x{<=}0.3. Due to alloy disorder, the increase of the photoluminescence emission energy with Mg concentration is less pronounced. An analysis of the lattice parameters reveals that the epitaxial films grow biaxially strained on a-plane sapphire.
Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data
NASA Astrophysics Data System (ADS)
Lavrentiev, M.; Shchemel, A.; Simonov, K.
problem can be formally stated as follows: a distribution over the various combinations of observed values is to be estimated. The totality of combinations is represented by a set of variables, and the observations provide a sample of outputs. Within this formulation, a continuous (and continuously differentiable) homomorphic mapping from the space of hidden parameters to the space of observed parameters must be found. This makes it possible to reconstruct missing input information when the number of inputs is not less than the number of hidden parameters, and to estimate the distribution when the information is insufficient for an unambiguous prediction of the unknown inputs. The following approach to building the approximation from the sample is suggested: the sample is supplemented with hidden parameters distributed uniformly in a bounded multidimensional space, and a correspondence between model and observed outputs is then sought, so that the best correspondence yields the most accurate approximation. In odd iterations the dependence between hidden inputs and outputs is optimized (as in the conventional problem); the correspondence between tasks is changed only when the error decreases while the distribution of inputs remains intact. A special transform is therefore applied to reduce the error at every iteration. If the measure of the distribution is held constant, the condition on the transformations is simplified; such transforms are known as "canonical" or "volume-invariant" transforms and are well studied. This approach is suggested for solving the main inverse task of the tsunami problem: estimating the parameters of the tsunami source from tsunami records at the coast and on the shelf.
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2001-01-01
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
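A minimal numpy-only sketch of input decimation: select a feature subset per class by correlation with the one-vs-rest label, train a decoupled base classifier on each subset (here a simple nearest-centroid scorer), and combine the scores. The synthetic data, subset size, and base learner are illustrative assumptions, not the article's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_classes = 900, 12, 3
labels = rng.integers(0, n_classes, n)
X = rng.standard_normal((n, d))
for c in range(n_classes):                 # class c shifts its own 4 features
    X[labels == c, 4 * c:4 * c + 4] += 1.5

def top_features(Xtr, y01, k):
    """Rank features by |correlation| with the one-vs-rest label."""
    corr = np.abs([np.corrcoef(Xtr[:, j], y01)[0, 1]
                   for j in range(Xtr.shape[1])])
    return np.argsort(corr)[-k:]

def centroid_score(Xtr, y01, Xte, feats):
    """One-vs-rest score from a nearest-centroid base classifier restricted
    to the decimated feature subset."""
    mu1 = Xtr[y01 == 1][:, feats].mean(axis=0)
    mu0 = Xtr[y01 == 0][:, feats].mean(axis=0)
    Z = Xte[:, feats]
    return np.linalg.norm(Z - mu0, axis=1) - np.linalg.norm(Z - mu1, axis=1)

half = n // 2
Xtr, ytr, Xte, yte = X[:half], labels[:half], X[half:], labels[half:]
scores = np.column_stack([
    centroid_score(Xtr, (ytr == c).astype(float), Xte,
                   top_features(Xtr, (ytr == c).astype(float), k=4))
    for c in range(n_classes)])
acc = float((scores.argmax(axis=1) == yte).mean())
print(round(acc, 3))
```

Because each base classifier sees a different, class-informative subset, their errors are less correlated than those of classifiers trained on all features, which is the mechanism ID exploits.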
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
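The underlying computation, estimating a frequency response from input-output records of a wide-band input, can be sketched with a Fourier-transform ratio on a simulated first-order system. The chirp input, the system, and the comparison band are illustrative assumptions, not the paper's aircraft data or its real-time multisine method:

```python
import numpy as np

dt = 0.02                                  # 50 Hz sampling
t = np.arange(0.0, 120.0, dt)

# Wide-band input: linear frequency sweep from 0.05 to 2 Hz, a stand-in
# for a wide-band piloted input.
f0, f1 = 0.05, 2.0
u = np.sin(2.0 * np.pi * (f0 + (f1 - f0) * t / (2.0 * t[-1])) * t)

# Hypothetical first-order system y' = (u - y)/tau, forward-Euler integrated.
tau = 0.5
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = y[k - 1] + dt * (u[k - 1] - y[k - 1]) / tau

# Frequency response estimate: ratio of output and input Fourier transforms.
U = np.fft.rfft(u)
Y = np.fft.rfft(y)
f = np.fft.rfftfreq(len(u), dt)
H_est = Y / U

# Compare with the analytic response 1/(1 + j*2*pi*f*tau) inside the band.
band = (f > 0.2) & (f < 1.2)
H_true = 1.0 / (1.0 + 2j * np.pi * f * tau)
err = np.max(np.abs(np.abs(H_est[band]) - np.abs(H_true[band])))
print(round(float(err), 4))
```

The estimate is only trustworthy where the input has energy, which is why the metric proposed in the paper for choosing which data points to include matters for piloted inputs.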
Measuring Input Thresholds on an Existing Board
NASA Technical Reports Server (NTRS)
Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.
2011-01-01
A critical PECL (positive emitter-coupled logic) to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in the worst case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way was needed to measure the exact input threshold of this device for 64 inputs on a flight board. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog-to-digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pull-down current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs, including the two that are probed directly. The data was combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
Accurate Optical Reference Catalogs
NASA Astrophysics Data System (ADS)
Zacharias, N.
2006-08-01
Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the roughly 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude at the 5 mas level.
Instrumentation for measuring energy inputs to implements
Tompkins, F.D.; Wilhelm, L.R.
1981-01-01
A microcomputer-based instrumentation system for monitoring tractor operating parameters and energy inputs to implements was developed and mounted on a tractor with a 75-kW power-takeoff rating. The instrumentation system, including sensors and data handling equipment, is discussed. 10 refs.
Selecting training inputs via greedy rank covering
Buchsbaum, A.L.; Santen, J.P.H. van
1996-12-31
We present a general method for selecting a small set of training inputs, the observations of which will suffice to estimate the parameters of a given linear model. We exemplify the algorithm in terms of predicting segmental duration of phonetic-segment feature vectors in a text-to-speech synthesizer, but the algorithm will work for any linear model and its associated domain.
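The greedy idea can be sketched as follows: rows of the design matrix are added one at a time, keeping only those that increase the rank of the selected set, until the rank of the full matrix is reached. This is a simplified illustration of rank covering, not the authors' exact algorithm:

```python
import numpy as np

def greedy_rank_cover(X):
    """Greedily pick rows of the design matrix X until their span
    reaches the rank of X (a sketch of the idea only)."""
    chosen = []
    target = np.linalg.matrix_rank(X)
    rank = 0
    for i in range(X.shape[0]):
        trial = X[chosen + [i], :]
        r = np.linalg.matrix_rank(trial)
        if r > rank:                  # keep row i only if it adds information
            chosen.append(i)
            rank = r
        if rank == target:
            break
    return chosen

X = np.array([[1., 0., 0.],
              [2., 0., 0.],          # redundant with row 0: skipped
              [0., 1., 0.],
              [1., 1., 1.]])
rows = greedy_rank_cover(X)          # → [0, 2, 3]
```

Observing only the selected rows then suffices to estimate the parameters of the linear model, which is the point of the paper.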
Houe, H; Østergaard, S; Thilsing-Hansen, T; Jørgensen, R J; Larsen, T; Sørensen, J T; Agger, J F; Blom, J Y
2001-01-01
The present review analyses the documentation on incidence, diagnosis, risk factors and effects of milk fever and subclinical hypocalcaemia. It is hereby evaluated whether the existing documentation seems sufficient for further modelling in a decision support system for selection of a control strategy. Several studies have been carried out revealing an incidence of milk fever most often in the level of 5-10%. Few studies indicate that the incidence of subclinical hypocalcaemia is several times higher than milk fever. The diagnosis based on clinical or laboratory methods or based on presence of risk factors is outlined. The clinical symptoms of milk fever are highly specific and the disease level may thus be determined from recording of treatments. Diagnosis of subclinical hypocalcaemia needs to include laboratory examinations or it may be determined by multiplying the incidence of milk fever by a certain factor. From the documentation on risk factors, it is very complex to predict the incidence from the exposure level of the risk factors. Due to uncertainty, sensitivity analyses over a wide range of values for each parameter are needed. The documentation of cow characteristics, nutrition, environment and management as risk factors are described. Among cow characteristics, parity or age, body condition and production level were found to be important. Risk factors associated with nutrition included most importantly dietary cation-anion difference and calcium level whereas the importance of general feeding related factors like type of feed stuff and feeding level were less clear. Environment and management included season, climate, housing, pasturing, exercise, length of dry period and prepartum milking. Several of the parameters on environment and management were confounded among each other and therefore firm conclusions on the importance were difficult. The documentation of the effect of milk fever includes the downer cows, reproductive disorders, occurrence of
Developing Accurate Spatial Maps of Cotton Fiber Quality Parameters
Technology Transfer Automated Retrieval System (TEKTRAN)
Awareness of the importance of cotton fiber quality (Gossypium, L. sps.) has increased as advances in spinning technology require better quality cotton fiber. Recent advances in geospatial information sciences allow an improved ability to study the extent and causes of spatial variability in fiber p...
NASA Astrophysics Data System (ADS)
The Arctic Research and Policy Act (Eos, June 26, 1984, p. 412) was signed into law by President Ronald Reagan this past July. One of its objectives is to develop a 5-year research plan for the Arctic. A request for input to this plan is being issued this week to nearly 500 people in science, engineering, and industry.To promote Arctic research and to recommend research policy in the Arctic, the new law establishes a five-member Arctic Research Commission, to be appointed by the President, and establishes an Interagency Arctic Research Policy Committee, to be composed of representatives from nearly a dozen agencies having interests in the region. The commission will make policy recommendations, and the interagency committee will implement those recommendations. The National Science Foundation (NSF) has been designated as the lead agency of the interagency committee.
Clarifying types of uncertainty: when are models accurate, and uncertainties small?
Cox, Louis Anthony Tony
2011-10-01
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.
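A toy Monte Carlo example (invented here, not from the comment) makes the first claim concrete: for a bounded, nonlinear output, widening the input distribution can narrow the output distribution.

```python
import numpy as np

# For y = exp(-x^2) with x ~ N(0, s^2), increasing s beyond about 1
# *reduces* the output variance, because y then piles up near 0.
rng = np.random.default_rng(0)

def output_var(s, n=200_000):
    x = rng.normal(0.0, s, n)
    return np.var(np.exp(-x**2))

v_small, v_large = output_var(1.0), output_var(5.0)
# v_large < v_small: more input uncertainty, less output uncertainty
```

The analytic values (about 0.11 at s = 1 versus 0.08 at s = 5) confirm what the simulation shows.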
Chaudhary, Naveed Ishtiaq; Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Aslam, Muhammad Saeed
2013-01-01
A novel algorithm is developed based on a fractional signal processing approach for parameter estimation of input nonlinear control autoregressive (INCAR) models. The design scheme consists of parameterization of INCAR systems to obtain linear-in-parameter models and use of the fractional least mean square algorithm (FLMS) for adaptation of unknown parameter vectors. The performance analyses of the proposed scheme are carried out with third-order Volterra least mean square (VLMS) and kernel least mean square (KLMS) algorithms based on convergence to the true values of INCAR systems. It is found that the proposed FLMS algorithm provides more accurate and convergent results than those of VLMS and KLMS under different scenarios and for low-to-high signal-to-noise ratios. PMID:23853538
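For context, a minimal sketch of the integer-order LMS baseline that FLMS extends is shown below; the fractional correction term of FLMS is omitted, and the true parameters, step size and noise level are invented for illustration:

```python
import numpy as np

# Standard LMS identification of a linear-in-parameter model
# y = w_true . x + noise. FLMS adds a fractional-order term to
# this weight update; only the baseline is sketched here.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 2.0])
w = np.zeros(3)
mu = 0.05                          # step size (assumed)

for _ in range(5000):
    x = rng.normal(size=3)         # regressor vector
    y = w_true @ x + 0.01 * rng.normal()
    e = y - w @ x                  # prediction error
    w += mu * e * x                # LMS update
```

After a few thousand updates the weight vector converges to the true parameters within the noise-induced misadjustment.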
Modeling and generating input processes
Johnson, M.E.
1987-01-01
This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate but much of the philosophy at least is relevant to univariate inputs as well. 14 refs.
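One common recipe in this spirit for multivariate simulation inputs is to generate correlated normal vectors via a Cholesky factor of the target covariance. The sketch below is illustrative, with an invented covariance, and is not the tutorial's specific guidance:

```python
import numpy as np

rng = np.random.default_rng(2)
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])       # target input covariance (invented)
L = np.linalg.cholesky(cov)

z = rng.standard_normal((100_000, 2))
x = z @ L.T                        # rows are correlated input vectors

emp = np.cov(x, rowvar=False)      # empirical covariance ≈ cov
```

The same trick underlies many multivariate input models: sample independent standard normals, then impose the desired dependence structure linearly.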
Input in Second Language Acquisition.
ERIC Educational Resources Information Center
Gass, Susan M., Ed.; Madden, Carolyn G., Ed.
This collection of conference papers includes: "When Does Teacher Talk Work as Input?"; "Cultural Input in Second Language Learning"; "Skilled Variation in a Kindergarten Teacher's Use of Foreigner Talk"; "Teacher-Pupil Interaction in Second Language Development"; "Foreigner Talk in the University Classroom"; "Input and Interaction in the…
Intensive Input in Language Acquisition.
ERIC Educational Resources Information Center
Trimino, Andy; Ferguson, Nancy
This paper discusses the role of input as one of the universals in second language acquisition theory. Considerations include how language instructors can best organize and present input and when certain kinds of input are more important. A self-administered program evaluation exercise using relevant theoretical and methodological contributions…
Estimating nonstationary input signals from a single neuronal spike train
NASA Astrophysics Data System (ADS)
Kim, Hideaki; Shinomoto, Shigeru
2012-11-01
Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic assumption that presynaptic activity is constant over time. Here, we propose tracking temporal variations in input parameters with a two-step analysis method. First, nonstationary firing characteristics comprising the firing rate and non-Poisson irregularity are estimated from a spike train using a computationally feasible state-space algorithm. Then, information about the firing characteristics is converted into likely input parameters over time using a transformation formula, which was constructed by inverting the neuronal forward transformation of the input current to output spikes. By analyzing spike trains recorded in vivo, we found that neuronal input parameters are similar in the primary visual cortex V1 and middle temporal area, whereas parameters in the lateral geniculate nucleus of the thalamus were markedly different.
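The first step of such analyses is a time-varying firing-rate estimate from the spike train. As a much simpler stand-in for the paper's state-space algorithm, this can be sketched with a fixed Gaussian kernel; the bandwidth and spike times below are invented:

```python
import numpy as np

def kernel_rate(spike_times, t_grid, bw=0.05):
    """Rate estimate (spikes/s) on t_grid from spike times (s),
    using a fixed-width Gaussian kernel."""
    d = t_grid[:, None] - spike_times[None, :]
    k = np.exp(-0.5 * (d / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
    return k.sum(axis=1)

spikes = np.array([0.10, 0.12, 0.15, 0.80])   # seconds
t = np.linspace(0.0, 1.0, 101)
rate = kernel_rate(spikes, t)
# the rate estimate integrates to roughly the number of spikes
```

The paper's method goes well beyond this by also estimating non-Poisson irregularity and then inverting a neuronal forward model to recover input parameters.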
Estimating nonstationary input signals from a single neuronal spike train.
Kim, Hideaki; Shinomoto, Shigeru
2012-11-01
Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic assumption that presynaptic activity is constant over time. Here, we propose tracking temporal variations in input parameters with a two-step analysis method. First, nonstationary firing characteristics comprising the firing rate and non-Poisson irregularity are estimated from a spike train using a computationally feasible state-space algorithm. Then, information about the firing characteristics is converted into likely input parameters over time using a transformation formula, which was constructed by inverting the neuronal forward transformation of the input current to output spikes. By analyzing spike trains recorded in vivo, we found that neuronal input parameters are similar in the primary visual cortex V1 and middle temporal area, whereas parameters in the lateral geniculate nucleus of the thalamus were markedly different.
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of greater than or equal to 40 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
Modeling the impact of common noise inputs on the network activity of retinal ganglion cells
Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W.; Kulkarni, Jayant; Litke, Alan M.; Chichilnisky, E. J.; Simoncelli, Eero; Paninski, Liam
2013-01-01
Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations. PMID:22203465
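The model structure described, a spike rate given by an exponential of filtered stimulus, filtered spike history and a shared noise source, can be sketched for a single cell. All filters and constants below are invented and this is not the authors' fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)
T, dt = 1000, 0.001                 # one second in 1 ms bins
stim = rng.normal(size=T)           # white-noise stimulus
k = np.array([0.8, 0.4, 0.2])       # stimulus filter (assumed)
h = np.array([-2.0, -1.0])          # spike-history filter (assumed, suppressive)
bias = 2.0                          # baseline log-rate (assumed)
common = 0.3 * rng.normal(size=T)   # shared "common noise" source

spikes = np.zeros(T, dtype=int)
for t in range(T):
    drive = stim[max(0, t - 2):t + 1][::-1] @ k[:min(t + 1, 3)]
    hist = spikes[max(0, t - 2):t][::-1] @ h[:min(t, 2)]
    rate = np.exp(bias + drive + hist + common[t])   # spikes/s
    spikes[t] = rng.poisson(rate * dt)
```

In the paper this structure is fitted to hundreds of cells at once, with the unobserved common-noise sources shared across cells to produce synchronized firing without direct coupling.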
Olivares, Alberto; Ruiz-Garcia, Gonzalo; Olivares, Gonzalo; Górriz, Juan Manuel; Ramirez, Javier
2013-01-01
Ellipsoid fitting algorithms are widely used to calibrate Magnetic Angular Rate and Gravity (MARG) sensors. These algorithms are based on the minimization of an error function that optimizes the parameters of a mathematical sensor model that is subsequently applied to calibrate the raw data. The convergence of this kind of algorithm to a correct solution is very sensitive to the input data. Input calibration datasets must be properly distributed in space so data can be accurately fitted to the theoretical ellipsoid model. Gathering a well distributed set is not an easy task, as it is difficult for the operator carrying out the maneuvers to keep a visual record of all the positions that have already been covered, as well as the remaining ones. It would then be desirable to have a system that gives feedback to the operator when the dataset is ready, or to enable the calibration process in auto-calibrated systems. In this work, we propose two different algorithms that analyze the goodness of the distributions by computing four different indicators. The first approach is based on a thresholding algorithm that uses only one indicator as its input and the second one is based on a Fuzzy Logic System (FLS) that estimates the calibration error for a given calibration set using a weighted combination of two indicators. Very accurate classification between valid and invalid datasets is achieved with an average Area Under Curve (AUC) of up to 0.98. PMID:24013490
Modeling the impact of common noise inputs on the network activity of retinal ganglion cells.
Vidne, Michael; Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W; Kulkarni, Jayant; Litke, Alan M; Chichilnisky, E J; Simoncelli, Eero; Paninski, Liam
2012-08-01
Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations.
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other. PMID:19229307
Evaluation of severe accident risks: Quantification of major input parameters
Harper, F.T.; Payne, A.C.; Breeding, R.J.; Gorham, E.D.; Brown, T.D.; Rightley, G.S.; Gregory, J.J. ); Murfin, W. ); Amos, C.N. )
1991-04-01
This report records part of the vast amount of information received during the expert judgment elicitation process that took place in support of the NUREG-1150 effort sponsored by the U.S. Nuclear Regulatory Commission. The results of the Containment Loads and Molten Core/Containment Interaction Expert Panel Elicitation are presented in this part of Volume 2 of NUREG/CR-4551. The Containment Loads Expert Panel considered seven issues: (1) hydrogen phenomena at Grand Gulf; (2) hydrogen burn at vessel breach at Sequoyah; (3) BWR reactor building failure due to hydrogen; (4) Grand Gulf containment loads at vessel breach; (5) pressure increment in the Sequoyah containment at vessel breach; (6) loads at vessel breach: Surry; and (7) pressure increment in the Zion containment at vessel breach. The report begins with a brief discussion of the methods used to elicit the information from the experts. The information for each issue is then presented in five sections: (1) a brief definition of the issue, (2) a brief summary of the technical rationale supporting the distributions developed by each of the experts, (3) a brief description of the operations that the project staff performed on the raw elicitation results in order to aggregate the distributions, (4) the aggregated distributions, and (5) the individual expert elicitation summaries. The Molten Core/Containment Interaction Panel considered three issues. The results of the following two of these issues are presented in this document: (1) Peach Bottom drywell shell meltthrough; and (2) Grand Gulf pedestal erosion. 89 figs., 154 tabs.
Waite, Anthony (SLAC)
2011-09-07
Serial Input/Output (SIO) is designed to be a long term storage format of a sophistication somewhere between simple ASCII files and the techniques provided by inter alia Objectivity and Root. The former tend to be low density, information lossy (floating point numbers lose precision) and inflexible. The latter require abstract descriptions of the data with all that that implies in terms of extra complexity. The basic building blocks of SIO are streams, records and blocks. Streams provide the connections between the program and files. The user can define an arbitrary list of streams as required. A given stream must be opened for either reading or writing. SIO does not support read/write streams. If a stream is closed during the execution of a program, it can be reopened in either read or write mode to the same or a different file. Records represent a coherent grouping of data. Records consist of a collection of blocks (see next paragraph). The user can define a variety of records (headers, events, error logs, etc.) and request that any of them be written to any stream. When SIO reads a file, it first decodes the record name and if that record has been defined and unpacking has been requested for it, SIO proceeds to unpack the blocks. Blocks are user provided objects which do the real work of reading/writing the data. The user is responsible for writing the code for these blocks and for identifying these blocks to SIO at run time. To write a collection of blocks, the user must first connect them to a record. The record can then be written to a stream as described above. Note that the same block can be connected to many different records. When SIO reads a record, it scans through the blocks written and calls the corresponding block object (if it has been defined) to decode it. Undefined blocks are skipped. Each of these categories (streams, records and blocks) have some characteristics in common. Every stream, record and block has a name with the condition that each
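The stream/record/block layering can be sketched in miniature. The class names, on-disk header format and API below are invented for illustration; the real SIO is a C++ library in which users write the pack/unpack code for their blocks:

```python
import tempfile

class Block:
    """User-provided object that does the real packing/unpacking."""
    def __init__(self, name):
        self.name = name
    def pack(self):                  # user-supplied in real SIO
        return b""
    def unpack(self, data):
        pass

class Record:
    """A coherent grouping of blocks (header, event, error log, ...)."""
    def __init__(self, name):
        self.name = name
        self.blocks = []             # the same Block may join many Records
    def connect(self, block):
        self.blocks.append(block)

class Stream:
    """Connection between the program and a file; read OR write, not both."""
    def __init__(self, path, mode):
        assert mode in ("r", "w")    # SIO has no read/write streams
        self.f = open(path, mode + "b")
    def write_record(self, record):
        for b in record.blocks:
            payload = b.pack()
            header = f"{record.name}:{b.name}:{len(payload)}\n".encode()
            self.f.write(header + payload)
    def close(self):
        self.f.close()

# Minimal usage: connect a block to a record, write the record to a stream.
path = tempfile.mkstemp()[1]
out = Stream(path, "w")
rec = Record("event")
rec.connect(Block("header"))
out.write_record(rec)
out.close()
```

On reading, the name in each header would drive the dispatch to the matching block's unpack method, with undefined blocks skipped, mirroring the behavior the text describes.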
Master control data handling program uses automatic data input
NASA Technical Reports Server (NTRS)
Alliston, W.; Daniel, J.
1967-01-01
General purpose digital computer program is applicable for use with analysis programs that require basic data and calculated parameters as input. It is designed to automate input data preparation for flight control computer programs, but it is general enough to permit application in other areas.
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
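The "linear straight line estimator" idea, input power modeled as a linear function of the digital AGC reading and temperature over a restricted range, can be sketched with a least-squares fit. The calibration data and coefficients below are invented, not the SCAN Testbed characterization:

```python
import numpy as np

rng = np.random.default_rng(4)
agc = rng.uniform(20, 60, 500)            # digital AGC reading (invented units)
temp = rng.uniform(10, 40, 500)           # temperature, deg C
# synthetic "characterization" data with a known linear law plus noise
p_in = -120 + 0.9 * agc + 0.05 * temp + 0.1 * rng.normal(size=500)

# least-squares fit of p_in ≈ a*agc + b*temp + c
A = np.column_stack([agc, temp, np.ones_like(agc)])
coef, *_ = np.linalg.lstsq(A, p_in, rcond=None)

p_est = A @ coef                          # estimated SDR input power
```

The other two algorithms in the paper generalize this: an adaptive filter that also uses the analog AGC, and a neural network for the full nonlinear range.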
SDR input power estimation algorithms
NASA Astrophysics Data System (ADS)
Briones, J. C.; Nappier, J. M.
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
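The recursive Fourier transform at the core of the method updates the running transform at each analysis frequency with one complex multiply-add per sample, rather than recomputing a full DFT. The frequencies, time step and test signal below are invented for illustration:

```python
import numpy as np

dt = 0.02                                    # sample interval, s
freqs = np.array([0.5, 1.0, 2.0])            # Hz, analysis band
omega = 2 * np.pi * freqs

X = np.zeros(len(freqs), dtype=complex)      # running Fourier transform
t = 0.0
signal = lambda t: np.sin(2 * np.pi * 1.0 * t)   # 1 Hz test "measurement"

for n in range(500):                         # 10 s of data
    # recursive update: add this sample's contribution at each frequency
    X += signal(t) * np.exp(-1j * omega * t) * dt
    t += dt

# |X| peaks at the 1 Hz component actually present in the signal
```

In the aircraft application the same update runs on measured inputs and outputs, and equation-error least squares in the frequency domain then yields the model parameters in real time.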
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1999-01-01
A method for real-time estimation of parameters in a linear dynamic state space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight for indirect adaptive or reconfigurable control. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle (HARV) were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than 1 cycle of the dominant dynamic mode natural frequencies, using control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements, and could be implemented aboard an aircraft in real time.
Accurate Molecular Polarizabilities Based on Continuum Electrostatics
Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.
2013-01-01
A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy relative to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. Now the National Metrology Institute of Japan at the National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
Accurate determination of characteristic relative permeability curves
NASA Astrophysics Data System (ADS)
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory-measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
Parametric analysis of parameters for electrical-load forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael
1997-04-01
Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
PREVIMER : Meteorological inputs and outputs
NASA Astrophysics Data System (ADS)
Ravenel, H.; Lecornu, F.; Kerléguer, L.
2009-09-01
PREVIMER is a pre-operational system aiming to provide a wide range of users, from private individuals to professionals, with short-term forecasts about the coastal environment along the French coastlines bordering the English Channel, the Atlantic Ocean, and the Mediterranean Sea. Observation data and digital modelling tools first provide 48-hour (probably 96-hour by summer 2009) forecasts of sea states, currents, sea water levels and temperatures. The follow-up of an increasing number of biological parameters will, in time, complete this overview of the coastal environment. Working in partnership with the French Naval Hydrographic and Oceanographic Service (Service Hydrographique et Océanographique de la Marine, SHOM), the French National Weather Service (Météo-France), the French public science and technology research institute (Institut de Recherche pour le Développement, IRD), the European Institute of Marine Studies (Institut Universitaire Européen de la Mer, IUEM) and many others, IFREMER (the French public institute for marine research) is supplying the technologies needed to ensure this pertinent information, available daily on the Internet at http://www.previmer.org, and stored at the Operational Coastal Oceanographic Data Centre. Since 2006, PREVIMER has published the results of demonstrators assigned to limited geographic areas and to specific applications. This system remains experimental. The following topics are covered: hydrodynamic circulation, sea states, follow-up of passive tracers, conservative or non-conservative (specifically of microbiological origin), biogeochemical state, and primary production. Lastly, PREVIMER provides researchers and R&D departments with modelling tools and access to the database, in which the observation data and the modelling results are stored, to undertake environmental studies on new sites. The communication will focus on meteorological inputs to and outputs from PREVIMER. It will draw the lessons from almost 3 years during
Input impedance of microstrip antennas
NASA Technical Reports Server (NTRS)
Deshpande, M. D.; Bailey, M. C.
1982-01-01
Using Richmond's reaction integral equation, an expression is derived for the input impedance of microstrip patch antennas excited by either a microstrip line or a coaxial probe. The effects of the finite substrate thickness, a dielectric protective cover, and associated surface waves are properly included by the use of the exact dyadic Green's function. Using the present formulation the input impedance of a rectangular microstrip antenna is determined and compared with experimental and earlier calculated results.
Nonlinear input-output systems
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Luksic, Mladen; Su, Renjeng
1987-01-01
Necessary and sufficient conditions are found for the nonlinear system dx/dt = f(x) + u g(x), y = h(x) to be locally feedback equivalent to the controllable linear system dξ/dt = Aξ + bv, y = Cξ having linear output. Only the single-input, single-output case is considered; however, the results generalize to multi-input, multi-output systems.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1997-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, J. A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, Maxwell's equations by the Finite Integration Algorithm (MAFIA). Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes, making it possible, for the first time, to design a complete TWT via computer simulation.
WORM: A general-purpose input deck specification language
Jones, T.
1999-07-01
Using computer codes to perform criticality safety calculations has become common practice in the industry. The vast majority of these codes use simple text-based input decks to represent the geometry, materials, and other parameters that describe the problem. However, the data specified in input files are usually processed results themselves. For example, input decks tend to require the geometry specification in linear dimensions and materials in atom or weight fractions, while the parameter of interest might be mass or concentration. The calculations needed to convert from the item of interest to the required parameter in the input deck are usually performed separately and then incorporated into the input deck. This process of calculating, editing, and renaming files to perform a simple parameter study is tedious at best. In addition, most computer codes require dimensions to be specified in centimeters, while drawings or other materials used to create the input decks might be in other units. This also requires additional calculation or conversion prior to composition of the input deck. These additional calculations, while extremely simple, introduce a source for error in both the calculations and transcriptions. To overcome these difficulties, WORM (Write One, Run Many) was created. It is an easy-to-use programming language to describe input decks and can be used with any computer code that uses standard text files for input. WORM is available, via the Internet, at worm.lanl.gov. A user's guide, tutorials, example models, and other WORM-related materials are also available at this Web site. Questions regarding WORM should be directed to worm{at}lanl.gov.
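To make the described drudgery concrete, here is a plain-Python stand-in for the kind of conversion-plus-templating a WORM deck automates: converting a parameter of interest (mass) into the linear dimension and units the code actually requires, then emitting one input deck per parameter-study point. The deck template, keywords, and helper are hypothetical; this is not WORM's actual syntax.

```python
# Hypothetical deck fragment: the code wants a radius in centimeters,
# but the quantity being studied is a mass in grams.
TEMPLATE = "sphere  r={radius_cm:.4f}  m=fuel\nfuel  wtfrac={wtfrac:.4f}\n"

def radius_cm_from_mass(mass_g, density_g_cm3):
    """Sphere radius (cm) for a given mass and density: the kind of
    side calculation usually done by hand before editing the deck."""
    volume = mass_g / density_g_cm3
    return (3.0 * volume / (4.0 * 3.141592653589793)) ** (1.0 / 3.0)

# A three-point parameter study over mass: write one deck per point
# instead of calculating, editing, and renaming files by hand.
decks = []
for mass in (1000.0, 2000.0, 4000.0):   # grams (illustrative values)
    decks.append(TEMPLATE.format(
        radius_cm=radius_cm_from_mass(mass, density_g_cm3=18.7),
        wtfrac=0.95))
```

A deck-specification language like WORM folds this conversion step into the deck itself, so the study is written once and run many times.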
Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A
2013-01-15
This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM10 emissions at the national level, using widely available sustainability and economic/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm, and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM10 emission data, collected through the Convention on Long-range Transboundary Air Pollution (CLRTAP) and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM10 emission up to two years ahead can be made successfully and accurately. The mean absolute error for two-year PM10 emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables.
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
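The aggregate-versus-ABC contrast described above can be made concrete with a toy calculation; every figure below is invented for illustration, not taken from the article.

```python
# Aggregate (ratio-of-cost-to-treatment) costing vs activity-based
# costing (ABC) for two hypothetical treatments.
overhead = 10_000.0                       # general overhead to allocate ($)
nurse_rate = 0.75                         # assumed $ per nursing minute
treatments = {                            # volumes and resource drivers
    "dressing_change": {"n": 400, "nurse_min": 15, "supplies": 8.0},
    "debridement":     {"n": 100, "nurse_min": 45, "supplies": 30.0},
}

# Aggregate: every treatment receives the same overhead share,
# regardless of the resources it actually consumes.
total_n = sum(t["n"] for t in treatments.values())
agg_cost = {k: overhead / total_n + t["supplies"] + t["nurse_min"] * nurse_rate
            for k, t in treatments.items()}

# ABC: overhead follows the nursing minutes actually consumed.
total_min = sum(t["n"] * t["nurse_min"] for t in treatments.values())
abc_cost = {k: overhead * t["nurse_min"] / total_min
               + t["supplies"] + t["nurse_min"] * nurse_rate
            for k, t in treatments.items()}
```

Under averaging, the resource-intensive debridement is undercosted, so a capitation bid based on the aggregate figure would be unprofitable.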
Modelling sediment input in large river basins
NASA Astrophysics Data System (ADS)
Scherer, U.
2012-04-01
Erosion and sediment redistribution play a pivotal role in the terrestrial ecosystem as they directly influence soil functions and water quality. In particular, surface waters are threatened by emissions of nutrients and contaminants via erosion. The sustainable management of sediments is thus a key challenge in river basin management. Besides the planning and implementation of mitigation measures, which typically focus on small and mesoscale catchments, knowledge of sediment emissions and associated substances in large drainage basins is of utmost importance for the water quality protection of large rivers and the seas. The objective of this study was thus to quantify the sediment input into the large drainage basins of Germany (Rhine, Elbe, Odra, Weser, Ems, Danube) as a basis for nutrient and contaminant emissions via erosion. The sediment input was quantified for all watersheds of Germany and added up along the flow paths of the river systems. Due to the large scale, sediment production within the watersheds was estimated based on the USLE for cultivated land and naturally covered areas and on specific erosion rates for mountainous areas without vegetation cover. To quantify the sediment delivery ratio, a model approach was developed using data on calculated sediment production rates and long-term sediment loads observed at monitoring stations of 13 watersheds located in different landscape regions of Germany. A variety of morphological parameters and catchment properties such as slope, drainage density, share of morphological sinks, hypsometric integral, flow distance between sediment source areas and the next stream as well as soil and land use properties were tested to explain the variation in the sediment delivery ratios for the 13 watersheds. The sediment input into streams is mainly controlled by the location of sediment source areas and the morphology along the flow pathways to surface waters. Thus, this complex interaction of spatially distributed catchment
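The USLE-based production step described above is a simple product of factors, scaled to a stream load by a sediment delivery ratio. A sketch with invented values (the helpers and all numbers are illustrative, not the study's calibration for the German basins):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: mean annual soil loss (t/ha/yr).
    R: rainfall erosivity, K: soil erodibility, LS: slope length and
    steepness, C: cover-management, P: support-practice factor."""
    return R * K * LS * C * P

def sediment_input(soil_loss_t_ha, area_ha, sdr):
    """Sediment delivered to the stream network: gross erosion in the
    watershed scaled by a sediment delivery ratio (SDR, 0..1), which
    the study models from morphological catchment properties."""
    return soil_loss_t_ha * area_ha * sdr

# Illustrative watershed: cultivated land with moderate slopes.
A = usle_soil_loss(R=58.0, K=0.32, LS=1.4, C=0.12, P=1.0)   # t/ha/yr
load = sediment_input(A, area_ha=2_500.0, sdr=0.15)          # t/yr
```

In the study such per-watershed loads are then accumulated along the river network's flow paths to give basin-scale sediment inputs.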
STRUCTURAL PARAMETERS OF GALAXIES IN CANDELS
Van der Wel, A.; Chang, Yu-Yen; Rix, H.-W.; Bell, E. F.; Haeussler, B.; Hartley, W.; McGrath, E. J.; Cheung, E.; Faber, S. M.; Kocevski, D. D.; Mozena, M.; McIntosh, D. H.; Barden, M.; Ferguson, H. C.; Grogin, N. A.; Koekemoer, A. M.; Lotz, J.; Galametz, A.; Kartaltepe, J. S.; and others
2012-12-15
We present global structural parameter measurements of 109,533 unique, H_F160W-selected objects from the CANDELS multi-cycle treasury program. Sersic model fits for these objects are produced with GALFIT in all available near-infrared filters (H_F160W, J_F125W and, for a subset, Y_F105W). The parameters of the best-fitting Sersic models (total magnitude, half-light radius, Sersic index, axis ratio, and position angle) are made public, along with newly constructed point-spread functions for each field and filter. Random uncertainties in the measured parameters are estimated for each individual object based on a comparison between multiple, independent measurements of the same set of objects. To quantify systematic uncertainties, we create a mosaic with simulated galaxy images with a realistic distribution of input parameters and then process and analyze the mosaic in an identical manner as the real data. We find that accurate and precise measurements (to 10% or better) of all structural parameters can typically be obtained for galaxies with H_F160W < 23, with comparable fidelity for basic size and shape measurements for galaxies to H_F160W ~ 24.5.
Strategy Guideline. Accurate Heating and Cooling Load Calculations
Burdick, Arlan
2011-06-01
This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.
Strategy Guideline: Accurate Heating and Cooling Load Calculations
Burdick, A.
2011-06-01
This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.
Guidance laws with input saturation and nonlinear robust H∞ observers.
Liao, Fei; Luo, Qiang; Ji, Haibo; Gai, Wen
2016-07-01
A novel three-dimensional guidance law based on input-to-state stability (ISS) and nonlinear robust H∞ filtering is proposed for interception of maneuvering targets in the presence of input saturation. A dead zone operator model is introduced to design an ISS-based guidance law to guarantee robust tracking of a maneuvering target. Input saturation and system stability are considered simultaneously, and global input-to-state stability has been ensured in theory. Since in practice the line-of-sight (LOS) rate is difficult for a pursuer to measure accurately, the nonlinear robust H∞ filtering method is utilized to estimate it. The stability analyses and the performed simulation results show that the presented approach is effective. PMID:27018143
Use of WRF result as meteorological input to DNDC model for greenhouse gas flux simulation
NASA Astrophysics Data System (ADS)
Grosz, B.; Horváth, L.; Gyöngyösi, A. Z.; Weidinger, T.; Pintér, K.; Nagy, Z.; André, K.
2015-12-01
The continuous evolution of biogeochemical models developed over the past decades enables increasingly accurate estimation of trace and greenhouse gas fluxes of soils. Due to the detailed meteorological, soil, biological and chemical processes, the modeled fluxes are getting closer and closer to the real values. For appropriate evaluation, models need a large amount of input data. In this paper we have investigated how to build an easily accessible meteorological input data source for biogeochemical models, as it is one of the most important input data sets that is either missing or difficult to get from meteorological networks. The DNDC ecological model was used for testing the WRF numerical weather prediction system as a potential data source. The reference dataset was built by numerical interpolation based on measured data. The average differences between the modeled output data using WRF and observed meteorological data in 2009 and 2010 are less than 3.98 ± 1.6, 8.68 ± 6.72, and 6.5 ± 2.17 per cent for CO2, N2O and CH4, respectively, for the test years. Generalization of the results for other regions is restricted; however, this work encourages others to examine the applicability of WRF data instead of observed climate parameters.
Robust fault-tolerant tracking control design for spacecraft under control input saturation.
Bustan, Danyal; Pariz, Naser; Sani, Seyyed Kamal Hosseini
2014-07-01
In this paper, a continuous globally stable tracking control algorithm is proposed for a spacecraft in the presence of unknown actuator failure, control input saturation, uncertainty in the inertia matrix and external disturbances. The design method is based on variable structure control and has the following properties: (1) fast and accurate response in the presence of bounded disturbances; (2) robustness to the partial loss of actuator effectiveness; (3) explicit consideration of control input saturation; and (4) robustness to uncertainty in the inertia matrix. In contrast to traditional fault-tolerant control methods, the proposed controller does not require knowledge of the actuator faults and is implemented without explicit fault detection and isolation processes. In the proposed controller, a single parameter is adjusted dynamically in such a way that it is possible to prove that both attitude and angular velocity errors will tend to zero asymptotically. The stability proof is based on a Lyapunov analysis and the properties of the singularity-free quaternion representation of spacecraft dynamics. Results of numerical simulations show that the proposed controller is successful in achieving high attitude performance in the presence of external disturbances, actuator failures, and control input saturation. PMID:24751476
A quick accurate model of nozzle backflow
NASA Technical Reports Server (NTRS)
Kuharski, R. A.
1991-01-01
Backflow from nozzles is a major source of contamination on spacecraft. If the craft contains any exposed high voltages, the neutral density produced by the nozzles in the vicinity of the craft needs to be known in order to assess the possibility of Paschen breakdown or the probability of sheath ionization around a region of the craft that collects electrons for the plasma. A model for backflow has been developed for incorporation into the Environment-Power System Analysis Tool (EPSAT) which quickly estimates both the magnitude of the backflow and the species makeup of the flow. By combining the backflow model with the Simons (1972) model for continuum flow it is possible to quickly estimate the density of each species from a nozzle at any position in space. The model requires only a few physical parameters of the nozzle and the gas as inputs and is therefore ideal for engineering applications.
Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms
NASA Astrophysics Data System (ADS)
Chen, J.; Kunkel, V.; Skov, T. M.
2015-12-01
Widespread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24-48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta (its structure and magnetic field vector in three dimensions) using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph
ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS
Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.
2013-08-01
We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk, 2 arcsec pixel^-1 resolution Dopplergrams was acquired in 2001, thanks to the high-rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10^6 individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees generates ridge characteristics, characteristics that do not correspond to the underlying mode characteristics. We used a sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe in detail this modeling and its validation. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix to the accuracy of the procedure has been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and discuss
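The multi-taper estimator mentioned above (averaging periodograms over many orthogonal tapers to suppress realization noise at the cost of spectral resolution) can be sketched with SciPy's DPSS windows. The taper count and time-bandwidth product below are arbitrary demo choices, not the values used for the MDI analysis.

```python
import numpy as np
from scipy.signal import windows

def multitaper_psd(x, n_tapers, nw):
    """Average the periodograms of x windowed by the first n_tapers
    DPSS (Slepian) tapers with time-bandwidth product nw. More tapers
    means lower realization noise but a broader spectral response."""
    tapers = windows.dpss(len(x), nw, Kmax=n_tapers)   # (n_tapers, M)
    specs = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return specs.mean(axis=0)

# Demo: a noisy sinusoid at 0.1 cycles/sample. The multitaper estimate
# is much less noisy than a single periodogram, with a broadened peak.
rng = np.random.default_rng(0)
t = np.arange(4096)
x = np.sin(2 * np.pi * 0.1 * t) + rng.standard_normal(t.size)
psd = multitaper_psd(x, n_tapers=8, nw=5.0)
```

When modes blend into ridges, as at high degree, the lost resolution is immaterial, which is why a large taper count pays off.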
Accurate documentation and wound measurement.
Hampton, Sylvie
This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.
A dual input DNA-based molecular switch.
Nesterova, Irina V; Elsiddieg, Siddieg O; Nesterov, Evgueni E
2014-11-01
We have designed and characterized a DNA-based molecular switch which processes two physiologically relevant inputs: pH (i.e. alkalinisation) and enzymatic activity, and generates a chemical output (in situ synthesized oligonucleotide). The design, based on allosteric interactions between i-motif and hairpin stem within the DNA molecule, addresses such critical physiological system parameters as molecular simplicity, tunability, orthogonality of the two input sensing domains, and compatibility with intracellular operation/delivery. PMID:25099914
Input-output dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Annoni, Jennifer; Jovanovic, Mihailo; Nichols, Joseph; Seiler, Peter
2015-11-01
The objective of this work is to obtain reduced-order models for fluid flows that can be used for control design. High-fidelity computational fluid dynamic models provide accurate characterizations of complex flow dynamics but are not suitable for control design due to their prohibitive computational complexity. A variety of methods, including proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD), can be used to extract the dominant flow structures and obtain reduced-order models. In this presentation, we introduce an extension to DMD that can handle problems with inputs and outputs. The proposed method, termed input-output dynamic mode decomposition (IODMD), utilizes a subspace identification technique to obtain models of low-complexity. We show that, relative to standard DMD, the introduction of the external forcing in IODMD provides robustness with respect to small disturbances and noise. We use the linearized Navier-Stokes equations in a channel flow to demonstrate the utility of the proposed approach and to provide a comparison with standard techniques for obtaining reduced-order dynamical representations. NSF Career Grant No. NSFCMMI-1254129.
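The standard DMD step that IODMD extends can be sketched in a few lines of NumPy (a minimal illustration under stated assumptions, not the authors' IODMD implementation; the snapshot data and rank handling below are invented for the demo):

```python
import numpy as np

def dmd_eigenvalues(X, Xp, r=None):
    """Estimate discrete-time DMD eigenvalues from snapshot pairs.

    X  : (n, m) matrix of states x_0 ... x_{m-1}
    Xp : (n, m) matrix of shifted states x_1 ... x_m
    r  : optional truncation rank
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    # Reduced operator A_tilde = U* X' V Sigma^-1, similar to the full operator
    A_tilde = U.conj().T @ Xp @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Demo on a known linear system x_{k+1} = A x_k with eigenvalues 0.9 and 0.5
A = np.array([[0.9, 0.0], [0.0, 0.5]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(10):
    x = A @ x
    snaps.append(x)
S = np.array(snaps).T
eigs = sorted(dmd_eigenvalues(S[:, :-1], S[:, 1:]).real)
# eigs ≈ [0.5, 0.9]
```

IODMD additionally records the input and output signals alongside the snapshots and identifies a state-space model; the sketch above covers only the unforced DMD baseline it is compared against.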
Solar wind-magnetosphere energy input functions
Bargatze, L.F.; McPherron, R.L.; Baker, D.N.
1985-01-01
A new formula for the solar wind-magnetosphere energy input parameter, P_i, is sought by applying the constraints imposed by dimensional analysis. Applying these constraints yields a general equation for P_i equal to ρV³ l_CF² F(M_A, θ), where ρV³ is the solar wind kinetic energy density and l_CF² is the scale size of the magnetosphere's effective energy "collection" region. The function F, which depends on M_A, the Alfven Mach number, and on θ, the interplanetary magnetic field clock angle, is included in the general equation for P_i in order to model the magnetohydrodynamic processes responsible for solar wind-magnetosphere energy transfer. By assuming the form of the function F, it is possible to further constrain the formula for P_i. This is accomplished by using solar wind data, geomagnetic activity indices, and simple statistical methods. It is found that P_i is proportional to (ρV²)^(1/6) V B G(θ), where ρV² is the solar wind dynamic pressure and V B G(θ) is a rectified version of the solar wind motional electric field. Furthermore, it is found that G(θ), the gating function which modulates the energy input to the magnetosphere, is well represented by a "leaky" rectifier function such as sin⁴(θ/2). This function allows for enhanced energy input when the interplanetary magnetic field is oriented southward, and for some energy input when it is oriented northward. 9 refs., 4 figs.
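The stated proportionality can be sketched directly (a hedged illustration: the proportionality constant k below is assumed, not given in the abstract, and units are left to the caller):

```python
import math

def gating(theta):
    """'Leaky' rectifier gating function G(theta) = sin^4(theta/2)."""
    return math.sin(theta / 2.0) ** 4

def energy_input(rho, V, B, theta, k=1.0):
    """Energy input parameter P_i ∝ (rho V^2)^(1/6) * V * B * G(theta).

    k is an illustrative proportionality constant (an assumption for this sketch).
    """
    return k * (rho * V**2) ** (1.0 / 6.0) * V * B * gating(theta)

# Southward IMF (theta = pi) passes full coupling; northward (theta = 0) passes none:
# gating(math.pi) == 1.0, gating(0.0) == 0.0, gating(math.pi / 2) == 0.25
```

The gating function captures the "leaky rectifier" behavior described in the abstract: maximal transfer for southward fields, small but nonzero transfer for northward fields.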
Moran, Robert F.; McKay, David; Pickard, Chris J.; Berry, Andrew J.; Griffin, John M.
2016-01-01
The structural chemistry of materials containing low levels of nonstoichiometric hydrogen is difficult to determine, and producing structural models is challenging where hydrogen has no fixed crystallographic site. Here we demonstrate a computational approach employing ab initio random structure searching (AIRSS) to generate a series of candidate structures for hydrous wadsleyite (β-Mg2SiO4 with 1.6 wt% H2O), a high-pressure mineral proposed as a repository for water in the Earth's transition zone. Aligning with previous experimental work, we solely consider models with Mg3 (over Mg1, Mg2 or Si) vacancies. We adapt the AIRSS method by starting with anhydrous wadsleyite, removing a single Mg2+ and randomly placing two H+ in a unit cell model, generating 819 candidate structures. 103 geometries were then subjected to more accurate optimisation under periodic DFT. Using this approach, we find the most favourable hydration mechanism involves protonation of two O1 sites around the Mg3 vacancy. The formation of silanol groups on O3 or O4 sites (with loss of stable O1–H hydroxyls) coincides with an increase in total enthalpy. Importantly, the approach we employ allows observables such as NMR parameters to be computed for each structure. We consider hydrous wadsleyite (∼1.6 wt%) to be dominated by protonated O1 sites, with O3/O4–H silanol groups present as defects, a model that maps well onto experimental studies at higher levels of hydration (J. M. Griffin et al., Chem. Sci., 2013, 4, 1523). The AIRSS approach adopted herein provides the crucial link between atomic-scale structure and experimental studies. PMID:27020937
RESRAD parameter sensitivity analysis
Cheng, J.J.; Yu, C.; Zielen, A.J.
1991-08-01
Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.
Incorporating uncertainty in RADTRAN 6.0 input files.
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
2010-02-01
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
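The core idea of distributing input parameters and sampling them into batch runs can be sketched as follows (a simplified stand-in using triangular distributions; the parameter names and ranges are hypothetical and this is not the MELCOR Uncertainty Engine's actual algorithm):

```python
import random

def sample_inputs(spec, n, seed=0):
    """Draw n parameter sets, one per batch run, from per-parameter
    (min, mode, max) triangular distributions."""
    rng = random.Random(seed)  # fixed seed for reproducible batches
    return [{name: rng.triangular(lo, hi, mode)
             for name, (lo, mode, hi) in spec.items()}
            for _ in range(n)]

# Hypothetical distributed inputs: (minimum, mode, maximum)
spec = {"release_fraction": (0.001, 0.01, 0.05),
        "deposition_velocity": (0.0001, 0.001, 0.01)}
batch = sample_inputs(spec, n=100)
assert all(0.001 <= s["release_fraction"] <= 0.05 for s in batch)
```

Each sampled dictionary would then be serialized into one entry of the batch input file; note that, as in the initial RADTRAN application described above, the parameters here are sampled independently (no coupling).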
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware was used to validate net heat input prediction models. The problem: net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor efficiency. Efficiency = electrical power output (measured) divided by net heat input (calculated). Efficiency is used to compare convertor designs and to trade technology advantages for mission planning.
Cerina, Federica; Zhu, Zhen; Chessa, Alessandro; Riccaboni, Massimo
2015-01-01
Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries. PMID:26222389
Analog Input Data Acquisition Software
NASA Technical Reports Server (NTRS)
Arens, Ellen
2009-01-01
DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.
The advanced LIGO input optics.
Mueller, Chris L; Arain, Muzammil A; Ciani, Giacomo; DeRosa, Ryan T; Effler, Anamaria; Feldbaum, David; Frolov, Valery V; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Kawabe, Keita; King, Eleanor J; Kokeyama, Keiko; Korth, William Z; Martin, Rodica M; Mullavey, Adam; Peold, Jan; Quetschke, Volker; Reitze, David H; Tanner, David B; Vorvick, Cheryl; Williams, Luke F; Mueller, Guido
2016-01-01
The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design. PMID:26827334
Input in an Institutional Setting.
ERIC Educational Resources Information Center
Bardovi-Harlig, Kathleen; Hartford, Beverly S.
1996-01-01
Investigates the nature of input available to learners in the institutional setting of the academic advising session. Results indicate that evidence for the realization of speech acts, positive evidence from peers and status unequals, the effect of stereotypes, and limitations of a learner's pragmatic and grammatical competence are influential…
NASA Technical Reports Server (NTRS)
Ozyazici, E. M.
1980-01-01
Module detects level changes in any of its 16 inputs, transfers changes to its outputs, and generates interrupts when changes are detected. Up to four changes-in-state per line are stored for later retrieval by controlling computer. Using standard TTL logic, module fits 19-inch rack-mounted console.
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering and that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.
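The "spread in answers" could be summarized as in the following sketch (the code names and k_eff values below are entirely hypothetical, for illustration only):

```python
from statistics import mean, pstdev

def keff_spread(results):
    """Summarize the spread in k_eff reported by different code packages
    for the same benchmark: (mean, population std dev, max-min range)."""
    vals = list(results.values())
    return mean(vals), pstdev(vals), max(vals) - min(vals)

# Hypothetical results from three code packages on one benchmark
results = {"codeA": 1.0002, "codeB": 0.9985, "codeC": 1.0011}
mu, sigma, spread = keff_spread(results)
# spread ≈ 0.0026, i.e. about 260 pcm between the extreme answers
```

A range of this size between packages on the same simple benchmark would point at differences in the thermal scattering models or libraries rather than in the geometry treatment.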
SPLASH: Accurate OH maser positions
NASA Astrophysics Data System (ADS)
Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney
2013-10-01
The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Systems and methods for reconfiguring input devices
NASA Technical Reports Server (NTRS)
Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)
2012-01-01
A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members are associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory coupled to the processor, and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution of the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.
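The reconfiguration idea in the claim can be sketched in miniature (a hedged illustration of the described behavior, not the patented implementation; member and function names are invented):

```python
class ReconfigurableInput:
    """Minimal sketch: input functions are assigned to input members,
    and remapped to a spare member when a member becomes inoperable."""

    def __init__(self, members):
        self.functions = {m: None for m in members}  # member -> function name
        self.inoperable = set()

    def assign(self, member, function):
        self.functions[member] = function

    def mark_inoperable(self, member):
        """Remove the failed member and move its function to a working,
        unassigned member, if one exists."""
        self.inoperable.add(member)
        func = self.functions.pop(member)
        if func is not None:
            spare = next((m for m, f in self.functions.items()
                          if f is None and m not in self.inoperable), None)
            if spare is not None:
                self.functions[spare] = func

panel = ReconfigurableInput(["btn1", "btn2", "btn3"])
panel.assign("btn1", "gear_up")
panel.assign("btn2", "gear_down")
panel.mark_inoperable("btn1")
# "gear_up" is now reassigned to the spare member btn3
```

In the patent's terms, the dictionary plays the role of the reconfiguration module stored in memory, and `mark_inoperable` is the reconfiguration step executed by the processor.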
Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs
Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.
2014-08-01
Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate), and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
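The least-squares step can be illustrated with a one-parameter toy version (an assumption-laden sketch, not the paper's three-parameter analytical solution: here the absorption curve is modeled as a single exponential with known plateau m_inf):

```python
import math

def fit_exponential_absorption(times, masses, m_inf):
    """Least-squares estimate of the time constant tau in
    m(t) = m_inf * (1 - exp(-t / tau)), by log-linearizing the residual.

    m_inf is assumed known (e.g. read off the plateau of the measured curve).
    """
    # ln(1 - m/m_inf) = -t / tau, so a regression through the origin
    # of ln(1 - m/m_inf) on t has slope -1/tau.
    xs, ys = [], []
    for t, m in zip(times, masses):
        frac = 1.0 - m / m_inf
        if frac > 0:
            xs.append(t)
            ys.append(math.log(frac))
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return -1.0 / slope

# Synthetic check with a known time constant of 6.0 hours
tau = 6.0
ts = [1.0 * k for k in range(1, 24)]
ms = [1.0 * (1 - math.exp(-t / tau)) for t in ts]
tau_hat = fit_exponential_absorption(ts, ms, 1.0)
# tau_hat ≈ 6.0
```

With real field data the fit would use the full analytical solution and all three buffering parameters, but the mechanics (fit a model response to the measured absorption curve) are the same.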
NASA Astrophysics Data System (ADS)
Liu, Yi; Ren, Liliang; Hong, Yang; Zhu, Ye; Yang, Xiaoli; Yuan, Fei; Jiang, Shanhu
2016-07-01
Reasonable input data selection is of great significance for accurate computation of drought indices. In this study, a comprehensive comparison is conducted on the sensitivity of two commonly used standardization procedures (SP) in drought indices to datasets, namely the probability distribution based SP and the self-calibrating Palmer SP. The standardized Palmer drought index (SPDI) and the self-calibrating Palmer drought severity index (SC-PDSI) are selected as representatives of the two SPs, respectively. Using meteorological observations (1961-2012) in the Yellow River basin, 23 sub-datasets with a length of 30 years are first generated with the moving window method. We then use the whole time series and the 23 sub-datasets to compute the two indices separately, and compare their spatiotemporal differences, as well as their performance in capturing drought areas. Finally, a systematic investigation in terms of changing climatic conditions and varied parameters in each SP is conducted. Results show that SPDI is less sensitive to data selection than SC-PDSI. SPDI series derived from different datasets are highly correlated, and consistent in drought area characterization. Sensitivity analysis shows that among the three parameters in the generalized extreme value (GEV) distribution, SPDI is most sensitive to changes in the scale parameter, followed by the location and shape parameters. For SC-PDSI, its inconsistent behavior among different datasets is primarily induced by the self-calibrated duration factors (p and q). In addition, it is found that the introduction of the self-calibrating procedure for duration factors further aggravates the dependence of the drought index on input datasets compared with the original empirical algorithm that Palmer uses, making SC-PDSI more sensitive to variations in the data sample. This study clearly demonstrates the impacts of dataset selection on sensitivity of drought index computation, which has significant implications for proper usage of drought
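The generic idea behind a probability-based standardization procedure can be sketched as follows (a simplified illustration using an empirical Weibull plotting position rather than the GEV fit used in the study; the moisture values are invented):

```python
from statistics import NormalDist

def standardize(values):
    """Map each value to a z-score via its empirical (Weibull) plotting
    position: rank the sample, convert rank to a non-exceedance
    probability, then invert the standard normal CDF."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        p = rank / (n + 1)            # Weibull plotting position, in (0, 1)
        z[i] = NormalDist().inv_cdf(p)
    return z

# Hypothetical moisture anomaly values for five periods
moisture = [42.0, 55.0, 38.0, 61.0, 47.0]
index = standardize(moisture)
# the lowest value receives the most negative index (driest period)
```

In the SPDI, the probability in the middle step comes from a fitted GEV distribution instead of the empirical ranks, which is precisely where the sensitivity to the scale, location, and shape parameters discussed above enters.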
Input states for quantum gates
Gilchrist, A.; White, A.G.; Munro, W.J.
2003-04-01
We examine three possible implementations of nondeterministic linear optical controlled NOT gates with a view to an in-principle demonstration in the near future. To this end we consider demonstrating the gates using currently available sources, such as spontaneous parametric down conversion and coherent states, and current detectors only able to distinguish between zero and many photons. The demonstration is possible in the coincidence basis and the errors introduced by the nonoptimal input states and detectors are analyzed.
Structural response and input identification
NASA Technical Reports Server (NTRS)
Shepard, G. D.; Callahan, J. C.; Mcelman, J. A.
1981-01-01
Three major goals were delineated: (1) to develop a general method for determining the response of a structure to combined base and acoustic random excitation; (2) to develop parametric relationships to aid in the design of plates which are subjected to random force or random base excitation; and (3) to develop a method to identify the individual acoustic and base inputs to a structure with only a limited number of measurement channels, when both types of excitation act simultaneously.
National hospital input price index.
Freeland, M S; Anderson, G; Schendler, C E
1979-01-01
The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 per cent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies. PMID:10309052
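The fixed-market-basket calculation behind such an index can be sketched as follows (an illustration only: the expenditure categories, weights, and price relatives below are hypothetical, not the article's estimated values):

```python
def fixed_basket_index(weights, prices_base, prices_now):
    """Laspeyres-style fixed market basket index: the cost of the same
    input bundle at current prices relative to base-period prices, x100."""
    base = sum(weights[c] * prices_base[c] for c in weights)
    now = sum(weights[c] * prices_now[c] for c in weights)
    return 100.0 * now / base

# Hypothetical expenditure-share weights and price relatives (base period = 1.0)
weights = {"labor": 0.6, "supplies": 0.3, "capital": 0.1}
p0 = {"labor": 1.00, "supplies": 1.00, "capital": 1.00}
p1 = {"labor": 1.10, "supplies": 1.05, "capital": 1.02}
idx = fixed_basket_index(weights, p0, p1)
# idx ≈ 107.7, i.e. a 7.7 percent input price increase over the base period
```

Because the weights are fixed, the index isolates pure price change in the inputs; dividing expense growth by it then yields the "real" service intensity growth discussed in the abstract.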
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that have undergone refractive surgery, such as LASIK, or eyes diagnosed with keratoconus, these equations may produce significant postoperative refractive error and hence poor satisfaction after cataract surgery. Although some methods have been proposed to address this problem, such as the Haigis-L equation[1] or using preoperative (pre-LASIK) data to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of a traditional regression formula, we adopt the exact measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and another post-LASIK patient agreed very well with their visual capacity after cataract surgery.
Signalling through mechanical inputs: a coordinated process.
Zhang, Huimin; Labouesse, Michel
2012-07-01
There is growing awareness that mechanical forces - in parallel to electrical or chemical inputs - have a central role in driving development and influencing the outcome of many diseases. However, we still have an incomplete understanding of how such forces function in coordination with each other and with other signalling inputs in vivo. Mechanical forces, which are generated throughout the organism, can produce signals through force-sensitive processes. Here, we first explore the mechanisms through which forces can be generated and the cellular responses to forces by discussing several examples from animal development. We then go on to examine the mechanotransduction-induced signalling processes that have been identified in vivo. Finally, we discuss what is known about the specificity of the responses to different forces, the mechanisms that might stabilize cells in response to such forces, and the crosstalk between mechanical forces and chemical signalling. Where known, we mention kinetic parameters that characterize forces and their responses. The multi-layered regulatory control of force generation, force response and force adaptation should be viewed as a well-integrated aspect in the greater biological signalling systems.
Dynamic Input Conductances Shape Neuronal Spiking
Franci, Alessio; Dethier, Julie; Sepulchre, Rodolphe
2015-01-01
Assessing the role of biophysical parameter variations in neuronal activity is critical to the understanding of modulation, robustness, and homeostasis of neuronal signalling. The paper proposes that this question can be addressed through the analysis of dynamic input conductances. Those voltage-dependent curves aggregate the concomitant activity of all ion channels in distinct timescales. They are shown to shape the current−voltage dynamical relationships that determine neuronal spiking. We propose an experimental protocol to measure dynamic input conductances in neurons. In addition, we provide a computational method to extract dynamic input conductances from arbitrary conductance-based models and to analyze their sensitivity to arbitrary parameters. We illustrate the relevance of the proposed approach for modulation, compensation, and robustness studies in a published neuron model based on data of the stomatogastric ganglion of the crab Cancer borealis. PMID:26464969
The IVS data input to ITRF2014
NASA Astrophysics Data System (ADS)
Nothnagel, Axel; Alef, Walter; Amagai, Jun; Andersen, Per Helge; Andreeva, Tatiana; Artz, Thomas; Bachmann, Sabine; Barache, Christophe; Baudry, Alain; Bauernfeind, Erhard; Baver, Karen; Beaudoin, Christopher; Behrend, Dirk; Bellanger, Antoine; Berdnikov, Anton; Bergman, Per; Bernhart, Simone; Bertarini, Alessandra; Bianco, Giuseppe; Bielmaier, Ewald; Boboltz, David; Böhm, Johannes; Böhm, Sigrid; Boer, Armin; Bolotin, Sergei; Bougeard, Mireille; Bourda, Geraldine; Buttaccio, Salvo; Cannizzaro, Letizia; Cappallo, Roger; Carlson, Brent; Carter, Merri Sue; Charlot, Patrick; Chen, Chenyu; Chen, Maozheng; Cho, Jungho; Clark, Thomas; Collioud, Arnaud; Colomer, Francisco; Colucci, Giuseppe; Combrinck, Ludwig; Conway, John; Corey, Brian; Curtis, Ronald; Dassing, Reiner; Davis, Maria; de-Vicente, Pablo; De Witt, Aletha; Diakov, Alexey; Dickey, John; Diegel, Irv; Doi, Koichiro; Drewes, Hermann; Dube, Maurice; Elgered, Gunnar; Engelhardt, Gerald; Evangelista, Mark; Fan, Qingyuan; Fedotov, Leonid; Fey, Alan; Figueroa, Ricardo; Fukuzaki, Yoshihiro; Gambis, Daniel; Garcia-Espada, Susana; Gaume, Ralph; Gaylard, Michael; Geiger, Nicole; Gipson, John; Gomez, Frank; Gomez-Gonzalez, Jesus; Gordon, David; Govind, Ramesh; Gubanov, Vadim; Gulyaev, Sergei; Haas, Ruediger; Hall, David; Halsig, Sebastian; Hammargren, Roger; Hase, Hayo; Heinkelmann, Robert; Helldner, Leif; Herrera, Cristian; Himwich, Ed; Hobiger, Thomas; Holst, Christoph; Hong, Xiaoyu; Honma, Mareki; Huang, Xinyong; Hugentobler, Urs; Ichikawa, Ryuichi; Iddink, Andreas; Ihde, Johannes; Ilijin, Gennadiy; Ipatov, Alexander; Ipatova, Irina; Ishihara, Misao; Ivanov, D. 
V.; Jacobs, Chris; Jike, Takaaki; Johansson, Karl-Ake; Johnson, Heidi; Johnston, Kenneth; Ju, Hyunhee; Karasawa, Masao; Kaufmann, Pierre; Kawabata, Ryoji; Kawaguchi, Noriyuki; Kawai, Eiji; Kaydanovsky, Michael; Kharinov, Mikhail; Kobayashi, Hideyuki; Kokado, Kensuke; Kondo, Tetsuro; Korkin, Edward; Koyama, Yasuhiro; Krasna, Hana; Kronschnabl, Gerhard; Kurdubov, Sergey; Kurihara, Shinobu; Kuroda, Jiro; Kwak, Younghee; La Porta, Laura; Labelle, Ruth; Lamb, Doug; Lambert, Sébastien; Langkaas, Line; Lanotte, Roberto; Lavrov, Alexey; Le Bail, Karine; Leek, Judith; Li, Bing; Li, Huihua; Li, Jinling; Liang, Shiguang; Lindqvist, Michael; Liu, Xiang; Loesler, Michael; Long, Jim; Lonsdale, Colin; Lovell, Jim; Lowe, Stephen; Lucena, Antonio; Luzum, Brian; Ma, Chopo; Ma, Jun; Maccaferri, Giuseppe; Machida, Morito; MacMillan, Dan; Madzak, Matthias; Malkin, Zinovy; Manabe, Seiji; Mantovani, Franco; Mardyshkin, Vyacheslav; Marshalov, Dmitry; Mathiassen, Geir; Matsuzaka, Shigeru; McCarthy, Dennis; Melnikov, Alexey; Michailov, Andrey; Miller, Natalia; Mitchell, Donald; Mora-Diaz, Julian Andres; Mueskens, Arno; Mukai, Yasuko; Nanni, Mauro; Natusch, Tim; Negusini, Monia; Neidhardt, Alexander; Nickola, Marisa; Nicolson, George; Niell, Arthur; Nikitin, Pavel; Nilsson, Tobias; Ning, Tong; Nishikawa, Takashi; Noll, Carey; Nozawa, Kentarou; Ogaja, Clement; Oh, Hongjong; Olofsson, Hans; Opseth, Per Erik; Orfei, Sandro; Pacione, Rosa; Pazamickas, Katherine; Petrachenko, William; Pettersson, Lars; Pino, Pedro; Plank, Lucia; Ploetz, Christian; Poirier, Michael; Poutanen, Markku; Qian, Zhihan; Quick, Jonathan; Rahimov, Ismail; Redmond, Jay; Reid, Brett; Reynolds, John; Richter, Bernd; Rioja, Maria; Romero-Wolf, Andres; Ruszczyk, Chester; Salnikov, Alexander; Sarti, Pierguido; Schatz, Raimund; Scherneck, Hans-Georg; Schiavone, Francesco; Schreiber, Ulrich; Schuh, Harald; Schwarz, Walter; Sciarretta, Cecilia; Searle, Anthony; Sekido, Mamoru; Seitz, Manuela; Shao, Minghui; Shibuya, Kazuo; Shu, 
Fengchun; Sieber, Moritz; Skjaeveland, Asmund; Skurikhina, Elena; Smolentsev, Sergey; Smythe, Dan; Sousa, Don; Sovers, Ojars; Stanford, Laura; Stanghellini, Carlo; Steppe, Alan; Strand, Rich; Sun, Jing; Surkis, Igor; Takashima, Kazuhiro; Takefuji, Kazuhiro; Takiguchi, Hiroshi; Tamura, Yoshiaki; Tanabe, Tadashi; Tanir, Emine; Tao, An; Tateyama, Claudio; Teke, Kamil; Thomas, Cynthia; Thorandt, Volkmar; Thornton, Bruce; Tierno Ros, Claudia; Titov, Oleg; Titus, Mike; Tomasi, Paolo; Tornatore, Vincenza; Trigilio, Corrado; Trofimov, Dmitriy; Tsutsumi, Masanori; Tuccari, Gino; Tzioumis, Tasso; Ujihara, Hideki; Ullrich, Dieter; Uunila, Minttu; Venturi, Tiziana; Vespe, Francesco; Vityazev, Veniamin; Volvach, Alexandr; Vytnov, Alexander; Wang, Guangli; Wang, Jinqing; Wang, Lingling; Wang, Na; Wang, Shiqiang; Wei, Wenren; Weston, Stuart; Whitney, Alan; Wojdziak, Reiner; Yatskiv, Yaroslav; Yang, Wenjun; Ye, Shuhua; Yi, Sangoh; Yusup, Aili; Zapata, Octavio; Zeitlhoefler, Reinhard; Zhang, Hua; Zhang, Ming; Zhang, Xiuzhong; Zhao, Rongbing; Zheng, Weimin; Zhou, Ruixian; Zubko, Nataliya
2015-01-01
Very Long Baseline Interferometry (VLBI) is a primary space-geodetic technique for determining precise coordinates on the Earth, for monitoring the variable Earth rotation and orientation with highest precision, and for deriving many other parameters of the Earth system. The International VLBI Service for Geodesy and Astrometry (IVS, http://ivscc.gsfc.nasa.gov/) is a service of the International Association of Geodesy (IAG) and the International Astronomical Union (IAU). The datasets published here are the results of individual Very Long Baseline Interferometry (VLBI) sessions in the form of normal equations in SINEX 2.0 format (http://www.iers.org/IERS/EN/Organization/AnalysisCoordinator/SinexFormat/sinex.html, the SINEX 2.0 description is attached as pdf) provided by IVS as the input for the next release of the International Terrestrial Reference System (ITRF): ITRF2014. This is a new version of the ITRF2008 release (Bockmann et al., 2009). For each session/file, the normal equation systems contain elements for the coordinate components of all stations having participated in the respective session as well as for the Earth orientation parameters (x-pole, y-pole, UT1 and its time derivatives plus offset to the IAU2006 precession-nutation components dX, dY (https://www.iau.org/static/resolutions/IAU2006_Resol1.pdf). The terrestrial part is free of datum. The data sets are the result of a weighted combination of the input of several IVS Analysis Centers. The IVS contribution for ITRF2014 is described in Bachmann et al. (2015); Schuh and Behrend (2012) provide a general overview of the VLBI method; details on the internal data handling can be found in Behrend (2013).
Monte Carlo parameter studies and uncertainty analyses with MCNP5
Brown, F. B.; Sweezy, J. E.; Hayes, R. B.
2004-01-01
A software tool called mcnp-pstudy has been developed to automate the setup, execution, and collection of results from a series of MCNP5 Monte Carlo calculations. This tool provides a convenient means of performing parameter studies, total uncertainty analyses, parallel job execution on clusters, stochastic geometry modeling, and other types of calculations where a series of MCNP5 jobs must be performed with varying problem input specifications. Monte Carlo codes are being used for a wide variety of applications today due to their accurate physical modeling and the speed of today's computers. In most applications for design work, experiment analysis, and benchmark calculations, it is common to run many calculations, not just one, to examine the effects of design tolerances, experimental uncertainties, or variations in modeling features. We have developed a software tool for use with MCNP5 to automate this process. The tool, mcnp-pstudy, is used to automate the operations of preparing a series of MCNP5 input files, running the calculations, and collecting the results. Using this tool, parameter studies, total uncertainty analyses, or repeated (possibly parallel) calculations with MCNP5 can be performed easily. Essentially no extra user setup time is required beyond that of preparing a single MCNP5 input file.
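The kind of automation such a tool provides can be sketched in a few lines: substitute each point of a parameter grid into an input-file template and collect one deck per case. The template text, parameter names, and case-naming scheme below are invented for illustration and do not reflect mcnp-pstudy's actual syntax.

```python
from itertools import product
from string import Template

# Hypothetical MCNP-style input template with two varied parameters.
deck_template = Template("""\
parameter study case:  radius=$radius  density=$density
1  1  -$density  -1   imp:n=1
""")

def generate_decks(radii, densities):
    """Return one input deck per (radius, density) combination."""
    decks = {}
    for radius, density in product(radii, densities):
        decks[f"case_r{radius}_d{density}"] = deck_template.substitute(
            radius=radius, density=density)
    return decks

decks = generate_decks(radii=[8.0, 8.5], densities=[18.7, 19.0])
print(len(decks))  # 4 decks from the 2 x 2 parameter grid
```

Running each generated deck and harvesting tallies would follow the same pattern, looping over the generated cases.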
Rapid Airplane Parametric Input Design (RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.
1995-01-01
RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulation. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are manifested by solving a fourth order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE equation for the lifting components and the fuselage definition equations. User specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool
Estimation of time-dependent input from neuronal membrane potential.
Kobayashi, Ryota; Shinomoto, Shigeru; Lansky, Petr
2011-12-01
The set of firing rates of the presynaptic excitatory and inhibitory neurons constitutes the input signal to the postsynaptic neuron. Estimation of the time-varying input rates from intracellularly recorded membrane potential is investigated here. For that purpose, the membrane potential dynamics must be specified. We consider the Ornstein-Uhlenbeck stochastic process, one of the most common single-neuron models, with time-dependent mean and variance. Assuming the slow variation of these two moments, it is possible to formulate the estimation problem by using a state-space model. We develop an algorithm that estimates the paths of the mean and variance of the input current by using the empirical Bayes approach. Then the input firing rates are directly available from the moments. The proposed method is applied to three simulated data examples: constant signal, sinusoidally modulated signal, and constant signal with a jump. For the constant signal, the estimation performance of the method is comparable to that of the traditionally applied maximum likelihood method. Further, the proposed method accurately estimates both continuous and discontinuous time-variable signals. In the case of the signal with a jump, which does not satisfy the assumption of slow variability, the robustness of the method is verified. It can be concluded that the method provides reliable estimates of the total input firing rates, which are not experimentally measurable. PMID:21919789
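A toy version of the setting described above, assuming a constant input signal: simulate an Ornstein-Uhlenbeck membrane potential and recover the input mean from the voltage trace by the method of moments. This is a deliberate simplification of the paper's empirical Bayes estimator, and all parameter values are invented.

```python
# Simulate an Ornstein-Uhlenbeck membrane potential with a constant input
# mean, then recover that mean from the voltage trace alone.
import random
import statistics

random.seed(0)
tau, mu, sigma, dt = 0.02, -60.0, 1.0, 1e-4   # hypothetical values (s, mV)
v, path = mu, []
for _ in range(100_000):
    v += (mu - v) / tau * dt + sigma * random.gauss(0.0, dt ** 0.5)
    path.append(v)

# For a stationary OU process, the sample mean of the path estimates the
# input mean mu; the sample variance would likewise yield the input variance.
mu_hat = statistics.fmean(path)
print(round(mu_hat, 1))
```

For time-varying signals, the paper's state-space approach replaces the single global mean with slowly varying mean and variance paths estimated window by window.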
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Remote sensing inputs to water demand modeling
NASA Technical Reports Server (NTRS)
Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.
1975-01-01
In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result it was determined that agricultural cropland inventories utilizing both high altitude photography and LANDSAT imagery can be conducted cost effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop-specific application rates are employed. The effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.
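The crop-specific demand estimate mentioned above is simple arithmetic: multiply each crop's acreage by its application rate and sum. The crops, acreages, and rates below are made up for illustration.

```python
# Crop names, acreages, and application rates are invented for illustration.
acreage = {"cotton": 12_000, "alfalfa": 8_000, "grapes": 5_000}  # acres
rate = {"cotton": 3.1, "alfalfa": 5.4, "grapes": 2.6}            # acre-feet per acre

demand_af = sum(acreage[crop] * rate[crop] for crop in acreage)
print(demand_af)   # total agricultural water demand in acre-feet
```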
Tanabe, Shuichi; Nakagawa, Hiroshi; Watanabe, Tomoyuki; Minami, Hidemi; Kano, Manabu; Urbanetz, Nora A
2016-09-10
Designing efficient, robust process parameters in drug product manufacturing is important to assure a drug's critical quality attributes. In this research, an efficient, novel procedure for setting coating process parameters was developed, which builds prediction models from prior manufacturing knowledge using partial least squares regression (PLSR) to select suitable input process parameters. In the proposed procedure, target values or ranges of the output parameters are first determined, including tablet moisture content, spray mist condition, and mechanical stress on tablets. Following the preparation of predictive models relating input process parameters to the corresponding output parameters, optimal input process parameters are determined using these models so that the output parameters stay within the target ranges. In predicting the exhaust air temperature output parameter, which reflects the tablets' moisture content, PLSR was employed based on prior measured data (such as batch records of other products rather than designed experiments), requiring minimal new experiments. The PLSR model proved more accurate at predicting the exhaust air temperature than a conventional semi-empirical thermodynamic model. A commercial-scale verification demonstrated that the proposed process parameter setting procedure assured the quality of tablet appearance without any trial-and-error experiments.
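A one-component PLS regression, written out with NumPy on synthetic data, sketches the PLSR idea used here: project the inputs onto a latent direction chosen for its covariance with the output, then regress the output on the resulting scores. The input variables and coefficients are invented stand-ins for real batch-record data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical centered inputs: e.g. spray rate, inlet temperature, airflow.
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=50)   # exhaust-temperature proxy

# NIPALS with one latent variable:
w = X.T @ y
w /= np.linalg.norm(w)        # weight vector: direction of maximal covariance
t = X @ w                     # scores (projection of each batch onto w)
q = (t @ y) / (t @ t)         # regress the output on the scores

y_hat = t * q
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(float(r2), 3))
```

Full PLSR deflates X and repeats this step to extract further components; one component suffices to show the mechanism.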
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
Estimating Photometric Redshifts with Artificial Neural Networks and Multi-Parameters
NASA Astrophysics Data System (ADS)
Li, Li-Li; Zhang, Yan-Xia; Zhao, Yong-Heng; Yang, Da-Wei
2007-06-01
We calculate photometric redshifts from the Sloan Digital Sky Survey Data Release 2 (SDSS DR2) Galaxy Sample using artificial neural networks (ANNs). Different input sets based on various parameters (e.g. magnitude, color index, flux information) are explored. Mainly, parameters from broadband photometry are utilized and their performances in redshift prediction are compared. While any parameter may be easily incorporated in the input, our results indicate that using the dereddened magnitudes often produces more accurate photometric redshifts than using the Petrosian magnitudes or model magnitudes as input, though the model magnitudes are superior to the Petrosian magnitudes. Performance also improves when more effective parameters are used in the training set. The method is tested on a sample of 79 346 galaxies from the SDSS DR2. When using 19 parameters based on the dereddened magnitudes, the rms error in redshift estimation is σz = 0.020184. The ANN is a highly competitive tool compared with traditional template-fitting methods when a large and representative training set is available.
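A minimal sketch of the ANN approach: a single-hidden-layer network trained by full-batch gradient descent on a synthetic "magnitudes to redshift" mapping. The data-generating rule, network size, and learning rate are all invented; the actual work trains on SDSS photometry.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(15.0, 22.0, size=(500, 5))              # 5 synthetic magnitudes
z = 0.05 * (X[:, 1] - X[:, 3]) + 0.01 * (X[:, 0] - 15)  # invented redshift rule
Xs = (X - X.mean(axis=0)) / X.std(axis=0)               # standardize inputs

W1 = 0.5 * rng.normal(size=(5, 8)); b1 = np.zeros(8)    # 8 hidden tanh units
W2 = 0.5 * rng.normal(size=8); b2 = 0.0
lr, n = 0.1, len(z)
for _ in range(3000):                                   # full-batch gradient descent
    H = np.tanh(Xs @ W1 + b1)
    err = H @ W2 + b2 - z
    gH = np.outer(err, W2) * (1.0 - H ** 2)             # backprop through tanh
    W1 -= lr * (Xs.T @ gH / n); b1 -= lr * gH.mean(axis=0)
    W2 -= lr * (H.T @ err / n); b2 -= lr * err.mean()

H = np.tanh(Xs @ W1 + b1)
rms = float(np.sqrt(np.mean((H @ W2 + b2 - z) ** 2)))
print(round(rms, 3))
```

The rms here plays the same role as the σz quoted in the abstract: a single scalar summary of the scatter between predicted and true redshift.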
Accurate SHAPE-directed RNA structure determination
Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.
2009-01-01
Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
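The pseudo-free-energy term itself is a one-line formula; Deigan et al. report ΔG_SHAPE(i) = m·ln(reactivity_i + 1) + b with slope m = 2.6 and intercept b = -0.8 kcal/mol, applied to each nucleotide that forms a base pair:

```python
import math

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Per-nucleotide pseudo-free-energy term (kcal/mol) added when the
    nucleotide forms a base pair: m * ln(reactivity + 1) + b."""
    return m * math.log(reactivity + 1.0) + b

# Unreactive positions (typically paired, rigid) get a negative term that
# favors pairing; highly reactive (flexible) positions are penalized.
print(round(shape_pseudo_energy(0.0), 2))   # -0.8 kcal/mol, favors pairing
print(round(shape_pseudo_energy(2.0), 2))   # positive, penalizes pairing
```

In the full method this term is added to the nearest-neighbor stack energies during free energy minimization, steering the predicted structure toward the experimental reactivities.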
NASA Astrophysics Data System (ADS)
Vörös, Jozef
2016-07-01
The paper deals with the parameter identification of cascade nonlinear dynamic systems with noninvertible piecewise linear input nonlinearities and backlash output nonlinearities. Application of the key term separation principle provides special expressions for the corresponding nonlinear model description that are linear in parameters. A least-squares-based iterative technique allows estimation of all the model parameters from measured input/output data. Simulation studies illustrate the feasibility of the proposed identification method.
Modeling the Meteoroid Input Function at Mid-Latitude Using Meteor Observations by the MU Radar
NASA Technical Reports Server (NTRS)
Pifko, Steven; Janches, Diego; Close, Sigrid; Sparks, Jonathan; Nakamura, Takuji; Nesvorny, David
2012-01-01
The Meteoroid Input Function (MIF) model has been developed with the purpose of understanding the temporal and spatial variability of the meteoroid impact in the atmosphere. This model includes the assessment of potential observational biases, namely through the use of empirical measurements to characterize the minimum detectable radar cross-section (RCS) for the particular High Power Large Aperture (HPLA) radar utilized. This RCS sensitivity threshold allows for the characterization of the radar system's ability to detect particles at a given mass and velocity. The MIF has been shown to accurately predict the meteor detection rate of several HPLA radar systems, including the Arecibo Observatory (AO) and the Poker Flat Incoherent Scatter Radar (PFISR), as well as the seasonal and diurnal variations of the meteor flux at various geographic locations. In this paper, the MIF model is used to predict several properties of the meteors observed by the Middle and Upper atmosphere (MU) radar, including the distributions of meteor areal density, speed, and radiant location. This study offers new insight into the accuracy of the MIF, as it addresses the ability of the model to predict meteor observations at middle geographic latitudes and for a radar operating frequency in the low VHF band. Furthermore, the interferometry capability of the MU radar allows for the assessment of the model's ability to capture information about the fundamental input parameters of meteoroid source and speed. This paper demonstrates that the MIF is applicable to a wide range of HPLA radar instruments and increases the confidence of using the MIF as a global model, and it shows that the model accurately considers the speed and sporadic source distributions for the portion of the meteoroid population observable by MU.
Aggregate input-output models of neuronal populations.
Saxena, Shreya; Schieber, Marc H; Thakor, Nitish V; Sarma, Sridevi V
2012-07-01
An extraordinary amount of electrophysiological data has been collected from various brain nuclei to help us understand how neural activity in one region influences another region. In this paper, we exploit the point process modeling (PPM) framework and describe a method for constructing aggregate input-output (IO) stochastic models that predict spiking activity of a population of neurons in the "output" region as a function of the spiking activity of a population of neurons in the "input" region. We first build PPMs of each output neuron as a function of all input neurons, and then cluster the output neurons using the model parameters. Output neurons that lie within the same cluster have the same functional dependence on the input neurons. We first applied our method to simulated data, and successfully uncovered the predetermined relationship between the two regions. We then applied our method to experimental data to understand the input-output relationship between motor cortical neurons and 1) somatosensory and 2) premotor cortical neurons during a behavioral task. Our aggregate IO models highlighted interesting physiological dependences including relative effects of inhibition/excitation from input neurons and extrinsic factors on output neurons.
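A compact sketch of the aggregate IO workflow on simulated spikes: fit each output neuron's dependence on the input population, then cluster output neurons by their fitted coefficients. A linear-rate model and a hand-rolled 2-means loop stand in for the paper's point-process models; all data and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_in = 400, 4
inputs = rng.poisson(3.0, size=(T, n_in))        # input spike counts per bin

# Six output neurons with two underlying response types.
true = np.array([[0.8, 0.1, 0.0, 0.0]] * 3 + [[0.0, 0.0, 0.5, 0.9]] * 3)
outputs = rng.poisson(inputs @ true.T + 1.0)     # output spike counts

# Fit each output neuron's coefficients on the input population.
coefs, *_ = np.linalg.lstsq(inputs, outputs, rcond=None)
coefs = coefs.T                                  # one row per output neuron

# 2-means on the coefficient vectors: neurons in the same cluster share the
# same functional dependence on the input region.
centers = coefs[[0, 3]]
for _ in range(10):
    dist = np.linalg.norm(coefs[:, None] - centers[None], axis=2)
    labels = dist.argmin(axis=1)
    centers = np.array([coefs[labels == k].mean(axis=0) for k in (0, 1)])
print(labels)
```

With the separations used here, the two predetermined response types are recovered as the two clusters, mirroring the simulated-data validation described in the abstract.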
NASA Technical Reports Server (NTRS)
Fox, Geoffrey C.; Ou, Chao-Wei
1997-01-01
The approach of this task was to apply leading parallel computing research to a number of existing techniques for data assimilation and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used in: 1) developing a parallel input/output system specifically for this application; 2) extracting the important input/output characteristics of data assimilation problems; and 3) building these characteristics, as parameters, into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.
Control of Nonholonomic Systems Using Discrete-valued Inputs
NASA Astrophysics Data System (ADS)
Ishikawa, Masato; Ashida, Shouji; Sugie, Toshiharu
This paper proposes a new control algorithm for a class of nonholonomic systems using ON/OFF-type discrete-valued control inputs. Our approach is based on second-order approximation of the principle of holonomy and its iteration with parameter updating, which is intended to be tolerant of severe inaccuracy of the control inputs. The proposed method is applied to attitude control of 2D and 3D free-flying robots and a wheeled mobile robot to demonstrate its effectiveness. Simulation results show its robustness against the model uncertainty and unmodeled dynamics such as non-Chaplygin type structure.
Identification of an object by input and output spectral characteristics
NASA Technical Reports Server (NTRS)
Redko, S. F.; Ushkalov, V. F.
1973-01-01
The problem is discussed of identifying a linear object of known structure whose motion is described by a system of differential equations of the type ẏ = Ay + Bu, where y is an n-dimensional output vector, u is an m-dimensional vector of stationary random disturbances (inputs), and A and B are matrices of unknown parameters with dimensions n × n and n × m, respectively. The spectral and cross-spectral densities of the inputs and outputs are used as the initial information on the object.
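With sampled data in hand, recovering the unknown matrices can be illustrated by a least-squares fit on a simulated trajectory. This is a stand-in for the spectral-density identification the paper actually uses; A, B, and the noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # true, "unknown" parameters
B = np.array([[1.0], [0.5]])
dt, T = 0.01, 2000

y = np.zeros(2)
Y, U, dY = [], [], []
for _ in range(T):
    u = rng.normal(size=1)                      # stationary random disturbance
    dy = A @ y + B @ u
    Y.append(y); U.append(u)
    dY.append(dy + 0.01 * rng.normal(size=2))   # noisy observed derivative
    y = y + dt * dy                             # Euler step

# Solve dy ~= [A B] [y; u] over all samples at once by least squares.
Z = np.hstack([np.array(Y), np.array(U)])
AB, *_ = np.linalg.lstsq(Z, np.array(dY), rcond=None)
A_hat, B_hat = AB.T[:, :2], AB.T[:, 2:]
print(np.round(A_hat, 2))
```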
Repositioning Recitation Input in College English Teaching
ERIC Educational Resources Information Center
Xu, Qing
2009-01-01
This paper discusses, on the basis of second language acquisition theory, how recitation input helps overcome negative influences, and confirms the important role that recitation input plays in improving college students' oral and written English.
Estimating Building Simulation Parameters via Bayesian Structure Learning
Edwards, Richard E; New, Joshua Ryan; Parker, Lynne Edwards
2013-01-01
Many key building design policies are made using sophisticated computer simulations such as EnergyPlus (E+), the DOE flagship whole-building energy simulation engine. E+ and other sophisticated computer simulations have two main problems: 1) gaps between the simulation model and the actual structure, and 2) limitations of the modeling engine's capabilities. Currently, these problems are addressed by having an engineer manually calibrate simulation parameters to real-world data or by using algorithmic optimization methods to adjust the building parameters. However, some simulation engines, like E+, are computationally expensive, which makes repeatedly evaluating them costly. This work explores addressing this issue by automatically discovering the simulation's internal input and output dependencies from 20 Gigabytes of E+ simulation data; future extensions will use 200 Terabytes of E+ simulation data. The model is validated by inferring building parameters for E+ simulations with known ground-truth building parameters. Our results indicate that the model represents parameter means accurately, with some deviation, but does not support inferring parameter values that lie on the distribution's tails.
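The inversion workflow (learn a forward model from building parameters to simulated output, then infer parameters from a new observation) can be sketched with a linear stand-in. The parameter, coefficients, and noise level below are invented, and the paper's actual model is a learned Bayesian network, not a linear fit.

```python
import numpy as np

rng = np.random.default_rng(4)
insulation = rng.uniform(1.0, 5.0, size=200)                  # hypothetical R-value
energy = 100.0 - 12.0 * insulation + rng.normal(0, 1.0, 200)  # simulated output

# Forward fit from simulation data: energy ~= a + b * insulation.
b, a = np.polyfit(insulation, energy, 1)

# Inverse inference: given an observed energy use, recover the parameter.
observed = 100.0 - 12.0 * 3.0                # building with true R-value of 3
insulation_hat = (observed - a) / b
print(round(float(insulation_hat), 1))
```

Inversion works well near the middle of the training distribution, which is consistent with the abstract's caveat about poor inference on the distribution's tails.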
Input Devices for Young Handicapped Children.
ERIC Educational Resources Information Center
Morris, Karen
The versatility of the computer can be expanded considerably for young handicapped children by using input devices other than the typewriter-style keyboard. Input devices appropriate for young children can be classified into four categories: alternative keyboards, contact switches, speech input devices, and cursor control devices. Described are…
Effects of Auditory Input in Individuation Tasks
ERIC Educational Resources Information Center
Robinson, Christopher W.; Sloutsky, Vladimir M.
2008-01-01
Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Lee, F. C.
1984-01-01
Problems caused by input filter interaction and conventional input filter design techniques are discussed. The concept of feedforward control is modeled with an input filter and a buck regulator. Experimental measurement and comparison to the analytical predictions is carried out. Transient response and the use of a feedforward loop to stabilize the regulator system is described. Other possible applications for feedforward control are included.
Textual Enhancement of Input: Issues and Possibilities
ERIC Educational Resources Information Center
Han, ZhaoHong; Park, Eun Sung; Combs, Charles
2008-01-01
The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…
7 CFR 3430.15 - Stakeholder input.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 15 2013-01-01 2013-01-01 false Stakeholder input. 3430.15 Section 3430.15... ADMINISTRATIVE PROVISIONS Pre-award: Solicitation and Application § 3430.15 Stakeholder input. Section 103(c)(2... programs. NIFA will provide instructions for submission of stakeholder input in the RFA. NIFA will...
7 CFR 3430.15 - Stakeholder input.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 15 2012-01-01 2012-01-01 false Stakeholder input. 3430.15 Section 3430.15... ADMINISTRATIVE PROVISIONS Pre-award: Solicitation and Application § 3430.15 Stakeholder input. Section 103(c)(2... programs. NIFA will provide instructions for submission of stakeholder input in the RFA. NIFA will...
7 CFR 3430.15 - Stakeholder input.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.15 Section 3430.15... Stakeholder input. Section 103(c)(2) of the Agricultural Research, Extension, and Education Reform Act of 1998... RFAs for competitive programs. CSREES will provide instructions for submission of stakeholder input...
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION... § 3430.607 Stakeholder input. CSREES shall seek and obtain stakeholder input through a variety of...
Biogenic inputs to ocean mixing.
Katija, Kakani
2012-03-15
Recent studies have evoked heated debate about whether biologically generated (or biogenic) fluid disturbances affect mixing in the ocean. Estimates of biogenic inputs have shown that their contribution to ocean mixing is of the same order as winds and tides. Although these estimates are intriguing, further study using theoretical, numerical and experimental techniques is required to obtain conclusive evidence of biogenic mixing in the ocean. Biogenic ocean mixing is a complex problem that requires detailed understanding of: (1) marine organism behavior and characteristics (i.e. swimming dynamics, abundance and migratory behavior), (2) mechanisms utilized by swimming animals that have the ability to mix stratified fluids (i.e. turbulence and fluid drift) and (3) knowledge of the physical environment to isolate contributions of marine organisms from other sources of mixing. In addition to summarizing prior work addressing the points above, observations on the effect of animal swimming mode and body morphology on biogenic fluid transport will also be presented. It is argued that to inform the debate on whether biogenic mixing can contribute to ocean mixing, our studies should focus on diel vertical migrators that traverse stratified waters of the upper pycnocline. Based on our understanding of mixing mechanisms, body morphologies, swimming modes and body orientation, combined with our knowledge of vertically migrating populations of animals, it is likely that copepods, krill and some species of gelatinous zooplankton and fish have the potential to be strong sources of biogenic mixing. PMID:22357597
Input calibration for negative originals
NASA Astrophysics Data System (ADS)
Tuijn, Chris
1995-04-01
One of the major challenges in the prepress environment consists of controlling the electronic color reproduction process such that a perfect match of any original can be realized. Whether this goal can be reached depends on many factors such as the dynamic range of the input device (scanner, camera), the color gamut of the output device (dye sublimation printer, ink-jet printer, offset), the color management software, etc. The characterization of the color behavior of the peripheral devices is therefore very important. Photographs and positive transparencies reflect the original scene fairly well; for negative originals, however, there is no obvious link to either the original scene or a particular print of the negative under consideration. In this paper, we establish a method to scan negatives and to convert the scanned data to a calibrated RGB space, which is known colorimetrically. This method is based on the reconstruction of the original exposure conditions (i.e., original scene) which generated the negative. Since the characteristics of negative film are quite diverse, a special calibration is required for each combination of scanner and film type.
Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.
Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza
2015-09-15
The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the models to monitor the ocean parameters even when data are missing or when regular measurement and monitoring are impossible.
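The abstract does not specify the wavelet family or network architecture, but the preprocessing idea behind a WNN can be sketched: decompose the input series into approximation and detail coefficients and feed both to the network. A minimal one-level Haar transform (an assumption for illustration; the study may have used a different wavelet and decomposition depth):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                          # pad to even length
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass (detail)
    return a, d

# toy signal: smooth trend plus noise, like a daily temperature series
t = np.arange(64)
signal = 20 + 2 * np.sin(2 * np.pi * t / 32) + 0.3 * np.random.randn(64)
approx, detail = haar_dwt(signal)
features = np.concatenate([approx, detail])  # inputs for the downstream ANN
print(features.shape)  # → (64,)
```

The decomposed features separate slow trends from rapid fluctuations, which is the usual rationale for WNN models outperforming plain ANNs on noisy series.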
Volgushev, Maxim; Ilin, Vladimir; Stevenson, Ian H.
2015-01-01
Accurately describing synaptic interactions between neurons and how interactions change over time are key challenges for systems neuroscience. Although intracellular electrophysiology is a powerful tool for studying synaptic integration and plasticity, it is limited by the small number of neurons that can be recorded simultaneously in vitro and by the technical difficulty of intracellular recording in vivo. One way around these difficulties may be to use large-scale extracellular recording of spike trains and apply statistical methods to model and infer functional connections between neurons. These techniques have the potential to reveal large-scale connectivity structure based on the spike timing alone. However, the interpretation of functional connectivity is often approximate, since only a small fraction of presynaptic inputs are typically observed. Here we use in vitro current injection in layer 2/3 pyramidal neurons to validate methods for inferring functional connectivity in a setting where input to the neuron is controlled. In experiments with partially-defined input, we inject a single simulated input with known amplitude on a background of fluctuating noise. In a fully-defined input paradigm, we then control the synaptic weights and timing of many simulated presynaptic neurons. By analyzing the firing of neurons in response to these artificial inputs, we ask 1) How does functional connectivity inferred from spikes relate to simulated synaptic input? and 2) What are the limitations of connectivity inference? We find that individual current-based synaptic inputs are detectable over a broad range of amplitudes and conditions. Detectability depends on input amplitude and output firing rate, and excitatory inputs are detected more readily than inhibitory. Moreover, as we model increasing numbers of presynaptic inputs, we are able to estimate connection strengths more accurately and detect the presence of connections more quickly. These results illustrate the
COSMIC/NASTRAN Free-field Input
NASA Technical Reports Server (NTRS)
Chan, G. C.
1984-01-01
A user's guide to the COSMIC/NASTRAN free-field input for the Bulk Data section of the NASTRAN program is presented. The free-field input is designed to be user friendly: input errors do not force the user out of the computer system. It is easy to use, with only a few simple rules to follow. A stand-alone version of the COSMIC/NASTRAN free-field input is also available. The use of free-field input is illustrated by a number of examples.
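As a rough illustration of what a free-field-to-fixed-field conversion involves, here is a toy Python sketch. The 10-field, 8-character small-field card layout is standard NASTRAN bulk data; the actual COSMIC free-field rules (continuations, replication, error recovery) are more involved and not reproduced here.

```python
def free_to_fixed(card, field_width=8, n_fields=10):
    """Convert a comma-separated free-field bulk data card to the
    fixed small-field format (10 fields of 8 characters each)."""
    fields = [f.strip() for f in card.split(",")]
    fields += [""] * (n_fields - len(fields))          # pad missing trailing fields
    too_long = [f for f in fields[:n_fields] if len(f) > field_width]
    if too_long:
        raise ValueError(f"field(s) too wide for small-field format: {too_long}")
    return "".join(f.ljust(field_width) for f in fields[:n_fields])

print(repr(free_to_fixed("GRID,2,,1.0,-2.0,3.0")))
```

An empty field between two commas becomes a blank 8-character field, which is how free-field input preserves the positional meaning of the fixed format.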
Accurate method of modeling cluster scaling relations in modified gravity
NASA Astrophysics Data System (ADS)
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.
A highly accurate heuristic algorithm for the haplotype assembly problem
2013-01-01
Background Single nucleotide polymorphisms (SNPs) are the most common form of genetic variation in human DNA. The sequence of SNPs in each of the two copies of a given chromosome in a diploid organism is referred to as a haplotype. Haplotype information has many applications such as gene disease diagnoses, drug design, etc. The haplotype assembly problem is defined as follows: Given a set of fragments sequenced from the two copies of a chromosome of a single individual, and their locations in the chromosome, which can be pre-determined by aligning the fragments to a reference DNA sequence, the goal here is to reconstruct two haplotypes (h1, h2) from the input fragments. Existing algorithms do not work well when the error rate of fragments is high. Here we design an algorithm that can give accurate solutions, even if the error rate of fragments is high. Results We first give a dynamic programming algorithm that can give exact solutions to the haplotype assembly problem. The time complexity of the algorithm is O(n × 2^t × t), where n is the number of SNPs, and t is the maximum coverage of a SNP site. The algorithm is slow when t is large. To solve the problem when t is large, we further propose a heuristic algorithm on the basis of the dynamic programming algorithm. Experiments show that our heuristic algorithm can give very accurate solutions. Conclusions We have tested our algorithm on a set of benchmark datasets. Experiments show that our algorithm can give very accurate solutions. It outperforms most of the existing programs when the error rate of the input fragments is high. PMID:23445458
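The objective such assembly algorithms typically minimize is the Minimum Error Correction (MEC) cost: the number of fragment entries that must be flipped so that every fragment is consistent with one of the two haplotypes. The paper's method is a dynamic program over SNP sites; the sketch below only illustrates the scoring objective, with '-' marking sites a fragment does not cover.

```python
def mec_score(fragments, h1, h2):
    """Minimum Error Correction cost: each fragment is charged the
    mismatch count against whichever haplotype it matches better."""
    total = 0
    for frag in fragments:
        mis1 = sum(1 for f, h in zip(frag, h1) if f != '-' and f != h)
        mis2 = sum(1 for f, h in zip(frag, h2) if f != '-' and f != h)
        total += min(mis1, mis2)
    return total

# toy instance: 5 SNP sites, complementary haplotypes, one sequencing error
fragments = ["01--0", "-110-", "010-0", "--101"]
print(mec_score(fragments, "01010", "10101"))  # → 1
```

Exhaustively minimizing this score over haplotype pairs is what makes the problem hard; the O(n × 2^t × t) dynamic program instead enumerates fragment-to-haplotype assignments per SNP column.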
Turn customer input into innovation.
Ulwick, Anthony W
2002-01-01
It's difficult to find a company these days that doesn't strive to be customer-driven. Too bad, then, that most companies go about the process of listening to customers all wrong--so wrong, in fact, that they undermine innovation and, ultimately, the bottom line. What usually happens is this: Companies ask their customers what they want. Customers offer solutions in the form of products or services. Companies then deliver these tangibles, and customers just don't buy. The reason is simple--customers aren't expert or informed enough to come up with solutions. That's what your R&D team is for. Rather, customers should be asked only for outcomes--what they want a new product or service to do for them. The form the solutions take should be up to you, and you alone. Using Cordis Corporation as an example, this article describes, in fine detail, a series of effective steps for capturing, analyzing, and utilizing customer input. First come in-depth interviews, in which a moderator works with customers to deconstruct a process or activity in order to unearth "desired outcomes." Addressing participants' comments one at a time, the moderator rephrases them to be both unambiguous and measurable. Once the interviews are complete, researchers then compile a comprehensive list of outcomes that participants rank in order of importance and degree to which they are satisfied by existing products. Finally, using a simple mathematical formula called the "opportunity calculation," researchers can learn the relative attractiveness of key opportunity areas. These data can be used to uncover opportunities for product development, to properly segment markets, and to conduct competitive analysis.
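The abstract names but does not state the "opportunity calculation"; in Ulwick's published work it is usually given as importance plus the satisfaction gap, with both rated on a 1-10 scale and the gap floored at zero. A sketch under that assumption, with hypothetical outcome data:

```python
def opportunity(importance, satisfaction):
    """Opportunity score in the commonly cited Ulwick form:
    outcomes that are important but poorly satisfied score highest."""
    return importance + max(importance - satisfaction, 0)

# hypothetical survey results: (mean importance, mean satisfaction), 1-10 scale
outcomes = {
    "minimize time to deploy the device": (9.1, 3.2),
    "minimize risk of complications":     (8.7, 6.5),
    "minimize procedure cost":            (6.0, 7.5),
}
for name, (imp, sat) in sorted(outcomes.items(),
                               key=lambda kv: -opportunity(*kv[1])):
    print(f"{name}: {opportunity(imp, sat):.1f}")   # highest opportunity first
```

Flooring the gap at zero prevents over-satisfied outcomes from dragging the score below the outcome's raw importance.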
An input shaping controller enabling cranes to move without sway
Singer, N.; Singhose, W.; Kriikku, E.
1997-06-01
A gantry crane at the Savannah River Technology Center was retrofitted with an Input Shaping controller. The controller intercepts the operator's pendant commands and modifies them in real time so that the crane is moved without residual sway in the suspended load. Mechanical components on the crane were modified to make the crane suitable for the anti-sway algorithm. This paper describes the required mechanical modifications to the crane, as well as a new form of Input Shaping that was developed for use on the crane. Experimental results are presented which demonstrate the effectiveness of the new process. Several practical considerations are discussed, including a novel (patent pending) approach for making small, accurate moves without residual oscillations.
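The paper's new form of Input Shaping is not detailed here, but the classic two-impulse Zero Vibration (ZV) shaper conveys the core idea: convolve the operator's command with a short impulse sequence so the second impulse cancels the sway excited by the first. A sketch for a single pendulum mode (the frequency, damping, and step command below are illustrative):

```python
import numpy as np

def zv_shaper(freq_hz, zeta, dt):
    """Two-impulse Zero Vibration shaper for a mode with natural
    frequency freq_hz (Hz) and damping ratio zeta."""
    wn = 2 * np.pi * freq_hz
    wd = wn * np.sqrt(1 - zeta**2)            # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)     # impulse amplitudes sum to 1
    times = np.array([0.0, np.pi / wd])       # second impulse at half damped period
    shaper = np.zeros(int(round(times[1] / dt)) + 1)
    for a, t in zip(amps, times):
        shaper[int(round(t / dt))] += a
    return shaper

# shape a step command for a 0.5 Hz, lightly damped pendulum mode
dt = 0.01
step = np.ones(500)
shaped = np.convolve(step, zv_shaper(0.5, 0.02, dt))[:500]
print(shaped[0], shaped[-1])   # starts at the first impulse (~0.52), settles at 1.0
```

Because the impulse amplitudes sum to one, the shaped command reaches the same final setpoint as the original; only the transient is staggered to cancel the oscillation.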
NASA Astrophysics Data System (ADS)
Linares, R.; Godinez, H. C.; Vittaldev, V.
2014-12-01
Recent events in space, including the collision of Russia's Cosmos 2251 satellite with Iridium 33 and China's Feng Yun 1C anti-satellite demonstration, have stressed the capabilities of the Space Surveillance Network and its ability to provide accurate and actionable impact probability estimates. In particular low-Earth orbiting satellites are heavily influenced by upper atmospheric density, due to drag, which is very difficult to model accurately. This work focuses on the generalized Polynomial Chaos (gPC) technique for Uncertainty Quantification (UQ) in physics-based atmospheric models. The advantage of the gPC approach is that it can efficiently model non-Gaussian probability distribution functions (pdfs). The gPC approach is used to perform UQ on future atmospheric conditions. A number of physics-based models are used as test cases, including GITM and TIE-GCM, and the gPC is shown to have good performance in modeling non-Gaussian pdfs. Los Alamos National Laboratory (LANL) has established a research effort, called IMPACT (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking), to improve impact assessment via improved physics-based modeling. A number of atmospheric models exist which can be classified as either empirical or physics-based. Physics-based models can be used to provide a forward prediction which is required for accurate collision assessments. As part of this effort, accurate and consistent UQ is required for the atmospheric models used. One of the primary sources of uncertainty is input parameter uncertainty. These input parameters, which include F10.7, AP, and solar wind parameters, are measured constantly. In turn, these measurements are used to provide a prediction for future parameter values. Therefore, the uncertainty of the atmospheric model forecast, due to potential error in the input parameters, must be correctly characterized to estimate orbital uncertainty. Internal model parameters that model how the atmosphere is
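As a minimal illustration of the gPC idea (not LANL's IMPACT implementation): expand a model output in orthogonal polynomials of a standard-normal input, here probabilists' Hermite polynomials with Gauss-Hermite quadrature for the coefficients; the truncated series is then a cheap surrogate that captures the non-Gaussian response pdf.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def gpc_coeffs(f, order, nquad=40):
    """Hermite chaos coefficients of f(X), X ~ N(0,1):
    c_k = E[f(X) He_k(X)] / k!  via Gauss-Hermite quadrature."""
    x, w = hermegauss(nquad)            # nodes/weights for weight exp(-x^2/2)
    w = w / sqrt(2 * pi)                # normalize to the Gaussian pdf
    coeffs = []
    for k in range(order + 1):
        Hk = hermeval(x, [0] * k + [1])
        coeffs.append(np.sum(w * f(x) * Hk) / factorial(k))
    return np.array(coeffs)

# non-Gaussian (lognormal-type) response of a Gaussian input: f(x) = exp(0.3 x)
c = gpc_coeffs(lambda x: np.exp(0.3 * x), order=6)
x_test = 0.7
surrogate = sum(ck * hermeval(x_test, [0] * k + [1]) for k, ck in enumerate(c))
print(surrogate, np.exp(0.3 * x_test))   # surrogate ≈ exact
```

For an atmospheric model the exact map f is an expensive simulation evaluated at the quadrature nodes; the surrogate then propagates input-parameter uncertainty (e.g. in F10.7 or AP) at negligible cost.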
Asymmetric focusing study from twin input power couplers using realistic rf cavity field maps
NASA Astrophysics Data System (ADS)
Gulliford, Colwyn; Bazarov, Ivan; Belomestnykh, Sergey; Shemelin, Valery
2011-03-01
Advanced simulation codes now exist that can self-consistently solve Maxwell’s equations for the combined system of an rf cavity and a beam bunch. While these simulations are important for a complete understanding of the beam dynamics in rf cavities, they require significant time and computing power. These techniques are therefore not readily included in real time simulations useful to the beam physicist during beam operations. Thus, there exists a need for a simplified algorithm which simulates realistic cavity fields significantly faster than self-consistent codes, while still incorporating enough of the necessary physics to ensure accurate beam dynamics computation. To this end, we establish a procedure for producing realistic field maps using lossless cavity eigenmode field solvers. This algorithm incorporates all relevant cavity design and operating parameters, including beam loading from a nonrelativistic beam. The algorithm is then used to investigate the asymmetric quadrupolelike focusing produced by the input couplers of the Cornell ERL injector cavity for a variety of beam and operating parameters.
NASA Astrophysics Data System (ADS)
Harbert, W.; Hammack, R.; Veloski, G.; Hodge, G.
2011-12-01
In this study, airborne magnetic data were collected by Fugro Airborne Surveys from a helicopter platform (Fig. 1) using the Midas II system over the 39 km2 NPR3 (Naval Petroleum Reserve No. 3) oilfield in east-central Wyoming. The Midas II system employs two Scintrex CS-2 cesium vapor magnetometers on opposite ends of a transversely mounted, 13.4-m long horizontal boom located amidships (Fig. 1). Each magnetic sensor had an in-flight sensitivity of 0.01 nT. Real-time compensation of the magnetic data for magnetic noise induced by maneuvering of the aircraft was accomplished using two fluxgate magnetometers mounted just inboard of the cesium sensors. The total area surveyed was 40.5 km2 (NPR3) near Casper, Wyoming. The purpose of the survey was to accurately locate wells that had been drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood to enhance oil recovery, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells that are missing from the well database and to provide accurate locations for all wells. The well location method combined an input dataset (for example, the leveled total magnetic field reduced to the pole) with its first and second horizontal spatial derivatives; these layers were then analyzed using focal statistics and finally merged using a fuzzy combination operation. Analytic signal and the Shi and Butt (2004) ZS attribute were also analyzed using this algorithm. An adjustable parameter controlled the sensitivity. Depending on the input dataset, 88% to 100% of the wells were located, with typical values being 95% to 99% for the NPR3 field site.
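The detection pipeline described (derivative grids analyzed and merged by a fuzzy combination) can be loosely sketched as follows. The gamma operator, min-max membership normalization, and the toy anomaly are illustrative assumptions, not the survey's actual processing parameters.

```python
import numpy as np

def fuzzy_gamma(memberships, gamma=0.7):
    """Fuzzy gamma combination of membership grids stacked along axis 0:
    (fuzzy algebraic sum)^gamma * (fuzzy algebraic product)^(1 - gamma)."""
    m = np.clip(np.asarray(memberships), 0.0, 1.0)
    fsum = 1.0 - np.prod(1.0 - m, axis=0)
    fprod = np.prod(m, axis=0)
    return fsum**gamma * fprod**(1.0 - gamma)

def normalize(grid):
    """Min-max rescale a grid to [0, 1] fuzzy membership values."""
    g = np.asarray(grid, dtype=float)
    return (g - g.min()) / (g.max() - g.min())

# toy magnetic grid with a single compact well-casing anomaly at (10, 10)
y, x = np.mgrid[0:21, 0:21]
field = 50.0 / (1.0 + (x - 10) ** 2 + (y - 10) ** 2)
gy, gx = np.gradient(field)
grad_mag = np.hypot(gx, gy)          # first horizontal derivative magnitude
combined = fuzzy_gamma([normalize(field), normalize(grad_mag)])
print(combined.shape)                # one detection-evidence grid per cell
```

The gamma operator blends the optimistic algebraic sum with the pessimistic product, so evidence layers reinforce each other without any single layer vetoing a detection.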
NEFDS contamination model parameter estimation of powder contaminated surfaces
NASA Astrophysics Data System (ADS)
Gibbs, Timothy J.; Messinger, David W.
2016-05-01
Hyperspectral signatures of powder-contaminated surfaces are challenging to characterize due to intimate mixing between materials. Most radiometric models have difficulties in recreating these signatures due to non-linear interactions between particles with different physical properties. The Nonconventional Exploitation Factors Data System (NEFDS) Contamination Model is capable of recreating longwave hyperspectral signatures at any contamination mixture amount, but only for a limited selection of materials currently in the database. A method has been developed to invert the NEFDS model and perform parameter estimation on emissivity measurements from a variety of powdered materials on substrates. This model was chosen for its potential to accurately determine contamination coverage density as a parameter in the inverted model. Emissivity data were measured using a Designs and Prototypes Fourier transform infrared spectrometer model 102 for different levels of contamination. Temperature-emissivity separation was performed to convert data from measured radiance to estimated surface emissivity. Emissivity curves were then input into the inverted model and parameters were estimated for each spectral curve. A comparison of measured data with extrapolated model emissivity curves using estimated parameter values assessed performance of the inverted NEFDS contamination model. This paper will present the initial results of the experimental campaign and the estimated surface coverage parameters.
The role of the input scale in parton distribution analyses
Pedro Jimenez-Delgado
2012-08-01
A first systematic study of the effects of the choice of the input scale in global determinations of parton distributions and QCD parameters is presented. It is shown that, although in principle the results should not depend on these choices, in practice a relevant dependence develops as a consequence of what is called procedural bias. This uncertainty should be considered in addition to other theoretical and experimental errors, and a practical procedure for its estimation is proposed. Possible sources of mistakes in the determination of QCD parameters from parton distribution analyses are pointed out.
MODFLOW-style parameters in underdetermined parameter estimation
D'Oria, M.; Fienen, M.N.
2012-01-01
In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW-2005 and MODFLOW-2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes. © 2011, National Ground Water Association.
NASA Technical Reports Server (NTRS)
Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.
2015-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources: reanalysis, reanalysis that is bias-corrected with observed climate, and a control dataset, and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.
Input estimation from measured structural response
Harvey, Dustin; Cross, Elizabeth; Silva, Ramon A; Farrar, Charles R; Bement, Matt
2009-01-01
This report focuses on the estimation of unmeasured dynamic inputs to a structure given a numerical model of the structure and measured response acquired at discrete locations. While the estimation of inputs has not received as much attention historically as state estimation, there are many applications where an improved understanding of the unmeasured input to a structure is vital (e.g., validating temporally and spatially varying load models for large structures such as buildings and ships). In this paper, the introduction contains a brief summary of previous input estimation studies. Next, an adjoint-based optimization method is used to estimate dynamic inputs to two experimental structures. The technique is evaluated in simulation and with experimental data, both on a cantilever beam and on a three-story frame structure. The performance and limitations of the adjoint-based input estimation technique are discussed.
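The report's adjoint-based optimization is not reproduced here; as a simpler stand-in, the same forward model (response equals impulse response convolved with input) can be inverted directly by least squares when the system is small and well conditioned. A toy sketch with an assumed short FIR impulse response:

```python
import numpy as np

def estimate_input(h, y):
    """Recover input u from measured response y = h * u by solving the
    lower-triangular Toeplitz system (direct least squares, not the
    adjoint method of the report)."""
    n = len(y)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - len(h) + 1), i + 1):
            H[i, j] = h[i - j]          # y[i] = sum_j h[i-j] u[j]
    u_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    return u_hat

h = np.array([1.0, 0.6, 0.25, 0.1])     # toy impulse response (minimum phase)
u_true = np.zeros(50)
u_true[10:15] = 1.0                      # unmeasured force pulse
y = np.convolve(h, u_true)[:50]          # simulated measurement
u_est = estimate_input(h, y)
print(np.allclose(u_est, u_true))        # → True
```

Direct inversion like this degrades quickly with noise and model error, which is one reason regularized, adjoint-based formulations are preferred for real structures.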
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases makes it possible to act before the symptoms occur, for example by taking drugs to avoid the symptoms or by activating medical alarms. The prediction horizon is in this case an important parameter in order to satisfy the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782
Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R
2014-01-01
We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.
Input apparatus for dynamic signature verification systems
EerNisse, Errol P.; Land, Cecil E.; Snelling, Jay B.
1978-01-01
The disclosure relates to signature verification input apparatus comprising a writing instrument and platen containing piezoelectric transducers which generate signals in response to writing pressures.
Input characterization of a shock test structure.
Hylok, J. E.; Groethe, M. A.; Maupin, R. D.
2004-01-01
Often in experimental work, measuring input forces and pressures is a difficult and sometimes impossible task. For one particular shock test article, its input sensitivity required a detailed measurement of the pressure input. This paper discusses the use of a surrogate mass mock test article to measure spatial and temporal variations of the shock input within and between experiments. Also discussed are the challenges and solutions in making some of the high-speed transient measurements. The current input characterization work is part of the second phase of an extensive model validation project. During the first phase, the system under analysis displayed sensitivities to the shock input's qualitative and quantitative (magnitude) characteristics. However, multiple shortcomings existed in the characterization of the input. First, the experimental measurements of the input were made only on a significantly simplified structure, and the spatial fidelity of the measurements was minimal. Second, the sensors used for the pressure measurement contained known errors that could not be fully quantified. Finally, the measurements examined only one input pressure path (from contact with the energetic material); airblast levels from the energetic materials were unknown. The result was a large discrepancy between the energy content in the analysis and the experiments.
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott; Galley, Chad; Hemberger, Daniel; Scheel, Mark; Schmidt, Patricia; Smith, Rory; SXS Collaboration
2016-03-01
We are now in the advanced detector era of gravitational wave astronomy, and the merger of two black holes (BHs) is one of the most promising sources of gravitational waves that could be detected on earth. To infer the BH masses and spins, the observed signal must be compared to waveforms predicted by general relativity for millions of binary configurations. Numerical relativity (NR) simulations can produce accurate waveforms, but are prohibitively expensive to use for parameter estimation. Other waveform models are fast enough but may lack accuracy in portions of the parameter space. Numerical relativity surrogate models attempt to rapidly predict the results of a NR code with a small or negligible modeling error, after being trained on a set of input waveforms. Such surrogate models are ideal for parameter estimation, as they are both fast and accurate, and have already been built for the case of non-spinning BHs. Using 250 input waveforms, we build a surrogate model for waveforms from the Spectral Einstein Code (SpEC) for a subspace of precessing systems.
NASA Astrophysics Data System (ADS)
Löw, Fabian; Duveiller, Grégory
2013-10-01
Mapping the spatial distribution of crops has become a fundamental input for agricultural production monitoring using remote sensing. However, the multi-temporality that is often necessary to accurately identify crops and to monitor crop growth generally comes at the expense of coarser observation supports, and can lead to increasingly erroneous class allocations caused by mixed pixels. For a given application like crop classification, the spatial resolution requirement (e.g. in terms of a maximum tolerable pixel size) differs considerably over different landscapes. To analyse the spatial resolution requirements for accurate crop identification via image classification, this study builds upon and extends a conceptual framework established in a previous work [1]. This framework allows quantitatively defining the spatial resolution requirements for crop monitoring by simulating how agricultural landscapes, and more specifically the fields covered by a crop of interest, are seen by instruments with increasingly coarser resolving power. The concept of crop-specific pixel purity, defined as the degree of homogeneity of the signal encoded in a pixel with respect to the target crop type, is used to analyse how mixed the pixels can be (as they become coarser) without undermining their capacity to describe the desired surface properties. In this case, the framework has been steered towards answering the question: "What is the spatial resolution requirement for crop identification via supervised image classification, in particular the minimum and coarsest acceptable pixel sizes, and how do these requirements change over different landscapes?" The framework is applied over four contrasting agro-ecological landscapes in Middle Asia. Inputs to the experiment were eight multi-temporal images from the RapidEye sensor; the simulated pixel sizes range from 6.5 m to 396.5 m. Constraining parameters for crop identification were defined by setting thresholds for classification
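The pixel-purity idea described above can be sketched numerically: given a fine-resolution binary crop mask, block-averaging simulates how a coarser sensor would see the same fields. This is a minimal illustration using simple block aggregation as a stand-in for the study's actual coarsening simulation; the mask and block size are invented for the example.

```python
import numpy as np

def pixel_purity(crop_mask, factor):
    """Fraction of each coarse pixel covered by the target crop, obtained by
    block-averaging a fine-resolution binary crop mask by `factor`."""
    h, w = crop_mask.shape
    h2, w2 = h - h % factor, w - w % factor  # trim to a multiple of factor
    blocks = crop_mask[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

# A 6x6 mask where the crop of interest occupies the left half.
mask = np.zeros((6, 6))
mask[:, :3] = 1.0
purity = pixel_purity(mask, 3)  # seen by a sensor with 3x coarser pixels
```

Thresholding such a purity map (e.g. keeping only pixels above 75% purity) is one way to quantify how many usable training/validation pixels survive at each simulated resolution.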
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a mathematical description of neuron activity, which helps us understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different acupuncture stimulus frequencies, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
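The LIF response model mentioned above can be illustrated with a minimal simulation. This is a generic textbook LIF neuron, not the paper's calibrated model; the membrane parameters and input values are placeholders chosen for the example.

```python
def lif_spike_count(i_input, duration=1.0, dt=1e-4, tau=0.02,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, dV/dt = (-(V - v_rest) + I) / tau.
    Integrates with Euler steps and returns the number of spikes fired
    for a constant input current over `duration` seconds."""
    v, spikes = v_rest, 0
    for _ in range(int(duration / dt)):
        v += dt * (-(v - v_rest) + i_input) / tau
        if v >= v_thresh:   # threshold crossing: emit spike, reset
            spikes += 1
            v = v_reset
    return spikes
```

With these units, a constant input below the threshold (here 1.0) never fires, while stronger inputs fire at increasing rates, which is the monotone input-to-rate mapping the reconstruction method inverts.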
An articulated statistical shape model for accurate hip joint segmentation.
Kainmueller, Dagmar; Lamecker, Hans; Zachow, Stefan; Hege, Hans-Christian
2009-01-01
In this paper we propose a framework for fully automatic, robust and accurate segmentation of the human pelvis and proximal femur in CT data. We propose a composite statistical shape model of femur and pelvis with a flexible hip joint, for which we extend the common definition of statistical shape models as well as the common strategy for their adaptation. We do not analyze the joint flexibility statistically, but model it explicitly by rotational parameters describing the bending in a ball-and-socket joint. A leave-one-out evaluation on 50 CT volumes shows that image-driven adaptation of our composite shape model robustly produces accurate segmentations of both proximal femur and pelvis. As a second contribution, we evaluate a fine-grained multi-object segmentation method based on graph optimization. It relies on accurate initializations of femur and pelvis, which our composite shape model can generate. Simultaneous optimization of both femur and pelvis yields more accurate results than separate optimizations of each structure. Shape model adaptation and graph-based optimization are embedded in a fully automatic framework. PMID:19964159
CIGALEMC: GALAXY PARAMETER ESTIMATION USING A MARKOV CHAIN MONTE CARLO APPROACH WITH CIGALE
Serra, Paolo; Amblard, Alexandre; Temi, Pasquale; Im, Stephen; Noll, Stefan
2011-10-10
We introduce a fast Markov Chain Monte Carlo (MCMC) exploration of the astrophysical parameter space using a modified version of the publicly available code Code Investigating GALaxy Emission (CIGALE). The original CIGALE builds a grid of theoretical spectral energy distribution (SED) models and fits to photometric fluxes from ultraviolet to infrared to put constraints on parameters related to both formation and evolution of galaxies. Such a grid-based method can lead to a long and challenging parameter extraction since the computation time increases exponentially with the number of parameters considered and results can be dependent on the density of sampling points, which must be chosen in advance for each parameter. MCMC methods, on the other hand, scale approximately linearly with the number of parameters, allowing a faster and more accurate exploration of the parameter space by using a smaller number of efficiently chosen samples. We test our MCMC version of the code CIGALE (called CIGALEMC) with simulated data. After checking the ability of the code to retrieve the input parameters used to build the mock sample, we fit theoretical SEDs to real data from the well-known and -studied Spitzer Infrared Nearby Galaxy Survey sample. We discuss constraints on the parameters and show the advantages of our MCMC sampling method in terms of accuracy of the results and optimization of CPU time.
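The scaling argument above — MCMC sampling instead of an exhaustive parameter grid — can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch, not CIGALEMC's actual sampler; the target posterior, step size, and burn-in length are invented for the example.

```python
import math
import random

def metropolis(log_post, x0, n_steps=20000, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler: Gaussian proposals,
    accepted with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        if math.log(rng.random()) < lp_new - lp:  # accept/reject
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Recover the mean of a unit-variance Gaussian "posterior" centred at 3.
draws = metropolis(lambda x: -0.5 * (x - 3.0) ** 2, x0=0.0)
post_burn = draws[5000:]                     # discard burn-in
mean = sum(post_burn) / len(post_burn)
```

The chain concentrates samples where the posterior is high, so the cost grows roughly linearly when more parameters are added, instead of exponentially as with a pre-defined grid.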
McLain, Jean E T; Williams, Clinton F
2008-09-01
As the reuse of municipal wastewater escalates worldwide as a means to extend increasingly limited water supplies, accurate monitoring of water quality parameters, including Escherichia coli (E. coli), increases in importance. Chromogenic media are often used for detection of E. coli in environmental samples, but the presence of unique levels of organic and inorganic compounds alters reclaimed water chemistry, potentially hindering E. coli detection using enzyme-based chromogenic technology. Over seven months, we monitored E. coli levels using m-Coli Blue 24 broth in a constructed wetland filled with tertiary-treated municipal effluent. No E. coli were isolated in the wetland source waters, but E. coli, total coliforms, and heterotrophic bacteria increased dramatically within the wetland on all sampling dates, most probably due to fecal inputs from resident wildlife populations. Confirmatory testing of isolates presumptive for E. coli revealed a 41% rate of false-positive identification using m-Coli Blue 24 broth over seven months. Seasonal differences were evident, as false-positive rates averaged 35% in summer, but rose sharply to 75% in the late fall and winter. Corrected E. coli levels were significantly correlated with electrical conductivity, indicating that water chemistry may be controlling bacterial survival within the wetland. This is the first study to report that accuracy of chromogenic media for microbial enumeration in reclaimed water may show strong seasonal differences, and highlights the importance of validation of microbiological results from chromogenic media for accurate analysis of reclaimed water quality.
Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator
Asaad, Sameh W.; Kapur, Mohit
2016-03-15
A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first number. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd, and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle-accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.
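The frequency relationship described above amounts to time-multiplexing the DUT memory's ports through the target memory's fewer ports within one DUT cycle. A toy calculation of the required clock ratio, with the function name and MHz units being my own illustration rather than anything specified in the disclosure:

```python
import math

def required_target_frequency(f_dut, dut_ports, target_ports):
    """Clock frequency the target memory needs so that every access the
    multi-port DUT memory would serve in one cycle can be serialized
    through the target memory's fewer ports within that same cycle."""
    cycles_needed = math.ceil(dut_ports / target_ports)
    return f_dut * cycles_needed

# A 4-port DUT memory emulated by a single-port target memory must run
# the target at 4x the DUT frequency to stay cycle-accurate.
ft = required_target_frequency(100, 4, 1)  # MHz
```

Doubling the target's port count halves the required frequency ratio, which is the basic trade-off an FPGA accelerator designer would tune.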
The Input Hypothesis: An Inside Look.
ERIC Educational Resources Information Center
Higgs, Theodore V.
1985-01-01
Summarizes and discusses Krashen's "input hypothesis" as presented in his "Principles and Practice in Second Language Acquisition." Suggests that the input hypothesis fails to account convincingly for arrested second language acquisition in an acquisition-rich environment and that it is not directly applicable to U.S. high school and university…
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 15 2014-01-01 2014-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 15 2012-01-01 2012-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or Web site), as well as through a notice in the Federal Register, from the following...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 15 2013-01-01 2013-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or Web site), as well as through a notice in the Federal Register, from the following...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND..., requests for input and/or Web site), as well as through a notice in the Federal Register, from...
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 15 2012-01-01 2012-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 15 2013-01-01 2013-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 15 2014-01-01 2014-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or Web site), as well as through a notice in the Federal Register, from the following...
Input Effects within a Constructionist Framework
ERIC Educational Resources Information Center
Boyd, Jeremy K.; Goldberg, Adele E.
2009-01-01
Constructionist approaches to language hypothesize that grammar can be learned from the input using domain-general mechanisms. This emphasis has engendered a great deal of research--exemplified in the present issue--that seeks to illuminate the ways in which input-related factors can both drive and constrain constructional acquisition. In this…
Managing Input during Assistive Technology Product Design
ERIC Educational Resources Information Center
Choi, Young Mi
2011-01-01
Many different sources of input are available to assistive technology innovators during the course of designing products. However, there is little information on which ones may be most effective or how they may be efficiently utilized within the design process. The aim of this project was to compare how three types of input--from simulation tools,…
Modality of Input and Vocabulary Acquisition
ERIC Educational Resources Information Center
Sydorenko, Tetyana
2010-01-01
This study examines the effect of input modality (video, audio, and captions, i.e., on-screen text in the same language as audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian…
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
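A hedged sketch of the approach: a generic bitstring genetic algorithm where each bit marks whether an input parameter is fed to the approximator. The toy fitness function below stands in for the study's actual approximator-accuracy fitness, which is not described here.

```python
import random

def ga_select_inputs(fitness, n_inputs, pop_size=30, n_gen=40,
                     p_mut=0.05, seed=0):
    """Simple GA over bitstrings. Keeps the top half each generation
    (truncation selection with elitism), fills the rest with one-point
    crossover children, and flips bits with probability p_mut."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_inputs)] for _ in range(pop_size)]
    for _ in range(n_gen):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_inputs)           # one-point crossover
            child = [bit ^ (rng.random() < p_mut)      # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: inputs 0 and 3 are informative; every extra input costs 0.5.
def fitness(mask):
    return 2 * mask[0] + 2 * mask[3] - 0.5 * sum(mask)

best = ga_select_inputs(fitness, n_inputs=6)
```

The penalty term mimics the practical pressure toward short input lists; the GA converges on the informative inputs without being told which ones they are.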
Statistical identification of effective input variables. [SCREEN
Vaurio, J.K.
1982-09-01
A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code, SCREEN, has been developed to implement the screening techniques. Its efficiency has been demonstrated by several examples, and it has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
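The first stage of such a screening procedure, ranking inputs by their correlation with the code output, can be sketched as follows. This is a plain Pearson-correlation ranking for illustration; SCREEN's stagewise regression and threshold tests are more elaborate, and the toy model below is invented.

```python
import random

def rank_inputs_by_correlation(runs, outputs):
    """Rank input variables by |Pearson correlation| with the output,
    most influential first. `runs` is a list of input vectors, one per
    computer run; `outputs` the corresponding scalar results."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)
    scores = [(abs(pearson([run[j] for run in runs], outputs)), j)
              for j in range(len(runs[0]))]
    return [j for _, j in sorted(scores, reverse=True)]

# Toy "code": output depends strongly on input 2, weakly on input 0,
# and not at all on input 1.
rng = random.Random(0)
runs = [[rng.random() for _ in range(3)] for _ in range(200)]
outputs = [1.0 * r[0] + 5.0 * r[2] for r in runs]
ranking = rank_inputs_by_correlation(runs, outputs)
```

Only the inputs that survive this cheap screen would then go into the more expensive regression stages, which is what keeps the overall run count small.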
Input, innateness, and induction in language acquisition.
Morgan, J L
1990-11-01
Input and innateness complement one another in language acquisition. Children exposed to different languages acquire different languages. Children's language experience, however, underdetermines the grammars that they acquire; the constraints that are not supplied by input must be available endogenously, and the ultimate origin of these endogenous contributions to acquisition may be traced to the biology of the mind. To the extent that assumptions of innateness encourage greater explicitness in the formulation of theories of acquisition, they should be welcomed. Excessively powerful assumptions of innateness may not be subject to empirical disconfirmation, however. Therefore, attention should be devoted to the development of a theory of language input, particularly with regard to identifying invariants of input. In combination with a linguistic theory providing an account of the endstate of acquisition, a theory of input would permit the deduction of properties of the mind that underlie the acquisition of language.
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
Input space-dependent controller for multi-hazard mitigation
NASA Astrophysics Data System (ADS)
Cao, Liang; Laflamme, Simon
2016-04-01
Semi-active and active structural control systems are advanced mechanical devices and systems capable of high damping performance, ideal for the mitigation of multi-hazards. The implementation of these devices within structural systems is still in its infancy because of the complexity of designing a robust closed-loop control system that can ensure reliable and high mitigation performance. Particular challenges in designing a controller for multi-hazard mitigation include: 1) very large uncertainties on dynamic parameters and unknown excitations; 2) limited measurements with probabilities of sensor failure; 3) immediate performance requirements; and 4) unavailable sets of input-output data during design. To facilitate the implementation of structural control systems, a new type of controller with high adaptive capabilities is proposed. It is based on real-time identification of an embedding that represents the essential dynamics found in the input space, i.e., in the sensor measurements. This type of controller is termed the input-space dependent controller (ISDC). In this paper, the principle of the ISDC is presented, its stability and performance are derived analytically for the case of harmonic inputs, and its performance is demonstrated for different types of hazards. Results show the promise of this new type of controller at mitigating multi-hazards by 1) relying only on local and limited sensors; 2) not requiring prior evaluation or training; and 3) adapting to system non-stationarities.
Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights
NASA Astrophysics Data System (ADS)
Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato
2016-07-01
Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activity driven by sensory stimuli, and both kinds of activity are reported to contribute to cognitive functions. We studied the external input response of a neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained under external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, a characteristic feature of the up-down state, was obtained under such conditions. From these results, we conclude that the model displays various evoked activities driven by the external input and is biologically plausible.
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
The Effects of Input-Based Practice on Pragmatic Development of Requests in L2 Chinese
ERIC Educational Resources Information Center
Li, Shuai
2012-01-01
This study examined the effects of input-based practice on developing accurate and speedy requests in second-language Chinese. Thirty learners from intermediate-level Chinese classes were assigned to an intensive training group (IT), a regular training group (RT), and a control group. The IT and the RT groups practiced using four Chinese…
Reactive nitrogen inputs to US lands and waterways: how certain are we about sources and fluxes?
An overabundance of reactive nitrogen (N) as a result of anthropogenic activities has led to multiple human health and environmental concerns. Efforts to address these concerns require an accurate accounting of N inputs. Here, we present a novel synthesis of data describing N inp...
Set Theory Applied to Uniquely Define the Inputs to Territorial Systems in Emergy Analyses
The language of set theory can be utilized to represent the emergy involved in all processes. In this paper we use set theory in an emergy evaluation to ensure an accurate representation of the inputs to territorial systems. We consider a generic territorial system and we describ...
Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture.
Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan
2015-09-22
Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.
Modified chemiluminescent NO analyzer accurately measures NOX
NASA Technical Reports Server (NTRS)
Summers, R. L.
1978-01-01
Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In test, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.
García, Paul S; Wright, Terrence M; Cunningham, Ian R; Calabrese, Ronald L
2008-09-01
Previously we presented a quantitative description of the spatiotemporal pattern of inhibitory synaptic input from the heartbeat central pattern generator (CPG) to segmental motor neurons that drive heartbeat in the medicinal leech and the resultant coordination of CPG interneurons and motor neurons. To begin elucidating the mechanisms of coordination, we explore intersegmental and side-to-side coordination in an ensemble model of all heart motor neurons and their known synaptic inputs and electrical coupling. Model motor neuron intrinsic properties were kept simple, enabling us to determine the extent to which input and electrical coupling acting together can account for observed coordination in the living system in the absence of a substantive contribution from the motor neurons themselves. The living system produces an asymmetric motor pattern: motor neurons on one side fire nearly in synchrony (synchronous), whereas on the other they fire in a rear-to-front progression (peristaltic). The model reproduces the general trends of intersegmental and side-to-side phase relations among motor neurons, but the match with the living system is not quantitatively accurate. Thus realistic (experimentally determined) inputs do not produce similarly realistic output in our model, suggesting that motor neuron intrinsic properties may contribute to their coordination. By varying parameters that determine electrical coupling, conduction delays, intraburst synaptic plasticity, and motor neuron excitability, we show that the most important determinant of intersegmental and side-to-side phase relations in the model was the spatiotemporal pattern of synaptic inputs, although phasing was influenced significantly by electrical coupling. PMID:18579654
NASA Astrophysics Data System (ADS)
Verma, Kuldeep; Hanasoge, Shravan; Bhattacharya, Jishnu; Antia, H. M.; Krishnamurthi, Ganapathy
2016-10-01
The advent of space-based observatories such as Convection, Rotation and planetary Transits (CoRoT) and Kepler has enabled the testing of our understanding of stellar evolution on thousands of stars. Evolutionary models typically require five input parameters, the mass, initial helium abundance, initial metallicity, mixing length (assumed to be constant over time), and the age to which the star must be evolved. Some of these parameters are also very useful in characterizing the associated planets and in studying Galactic archaeology. How to obtain these parameters from observations rapidly and accurately, specifically in the context of surveys of thousands of stars, is an outstanding question, one that has eluded straightforward resolution. For a given star, we typically measure the effective temperature and surface metallicity spectroscopically and low-degree oscillation frequencies through space observatories. Here we demonstrate that statistical learning, using artificial neural networks, is successful in determining the evolutionary parameters based on spectroscopic and seismic measurements. Our trained networks show robustness over a broad range of parameter space, and critically, are entirely computationally inexpensive and fully automated. We analyse the observations of a few stars using this method and the results compare well to inferences obtained using other techniques. This method is both computationally cheap and inferentially accurate, paving the way for analysing the vast quantities of stellar observations from past, current, and future missions.
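As a rough illustration of the learning step, the sketch below substitutes polynomial ridge regression for the paper's neural network and invents a toy forward model in place of real stellar evolution tracks; only the workflow (simulate observables, fit the inverse map, then infer cheaply per star) mirrors the abstract. All numerical relations here are made up for illustration.

```python
import numpy as np

# Toy forward model (purely illustrative, not a stellar evolution code):
# observables (Teff, large frequency separation) from (mass, age).
def observe(mass, age):
    teff = 5800 + 900 * (mass - 1.0) - 40 * age        # K
    dnu = 135 * mass ** 0.5 / (1 + 0.02 * age)         # muHz
    return np.stack([teff, dnu], axis=-1)

rng = np.random.default_rng(0)
mass = rng.uniform(0.8, 1.4, size=2000)
age = rng.uniform(1.0, 10.0, size=2000)
X = observe(mass, age)                 # "measured" quantities
Y = np.stack([mass, age], axis=-1)     # evolutionary parameters to learn

# Learn the inverse map with quadratic ridge regression, a cheap stand-in
# for the paper's neural network; both are fast once trained.
def features(X):
    t = (X[:, 0] - 5800) / 900
    d = (X[:, 1] - 135) / 20
    return np.stack([np.ones_like(t), t, d, t * d, t**2, d**2], axis=-1)

F = features(X)
lam = 1e-6                             # ridge regularization
W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)

# Inference for a new star is a single matrix product.
X_new = observe(np.array([1.1]), np.array([4.5]))
mass_hat, age_hat = features(X_new)[0] @ W
```

Once the regression (or network) is trained, characterizing each additional star costs only a feature evaluation and a matrix product, which is what makes the approach attractive for surveys of thousands of stars.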
Wireless, relative-motion computer input device
Holzrichter, John F.; Rosenbury, Erwin T.
2004-05-18
The present invention provides a system for controlling a computer display in a workspace using an input unit/output unit. A train of EM waves are sent out to flood the workspace. EM waves are reflected from the input unit/output unit. A relative distance moved information signal is created using the EM waves that are reflected from the input unit/output unit. Algorithms are used to convert the relative distance moved information signal to a display signal. The computer display is controlled in response to the display signal.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
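The tolerance convention described above ("5.25 +/- 0.01" appended to an existing field) is easy to prototype. The sketch below uses invented field names and uniform perturbations, so it only illustrates the idea, not the actual LAURA/HARA/FIAT harness:

```python
import random
import re

# Hypothetical input-file fragment: tolerances follow the "value +/- tol"
# convention from the abstract (the field names are made up here).
input_text = """
wall_temperature = 5.25 +/- 0.01
emissivity       = 0.80 +/- 0.05
inflow_velocity  = 7200.0
"""

TOL_RE = re.compile(r"(\w+)\s*=\s*([-\d.eE+]+)(?:\s*\+/-\s*([\d.eE+-]+))?")

def parse_tolerances(text):
    """Return {name: (nominal, tolerance)}; tolerance is 0.0 if absent."""
    params = {}
    for name, value, tol in TOL_RE.findall(text):
        params[name] = (float(value), float(tol) if tol else 0.0)
    return params

def monte_carlo_sample(params, rng):
    """Draw one input realization, perturbing each toleranced value."""
    return {name: nominal + rng.uniform(-tol, tol)
            for name, (nominal, tol) in params.items()}

params = parse_tolerances(input_text)
rng = random.Random(0)
sample = monte_carlo_sample(params, rng)
```

Because the tolerance annotation rides along inside the existing field, no knowledge of the host code's input grammar is needed, which is the portability argument the abstract makes.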
Network dynamics for optimal compressive-sensing input-signal recovery.
Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David
2014-10-01
By using compressive sensing (CS) theory, a broad class of static signals can be reconstructed through a sequence of very few measurements in the framework of a linear system. For networks with nonlinear and time-evolving dynamics, is it similarly possible to recover an unknown input signal from only a small number of network output measurements? We address this question for pulse-coupled networks and investigate the network dynamics necessary for successful input signal recovery. Determining the specific network characteristics that correspond to a minimal input reconstruction error, we are able to achieve high-quality signal reconstructions with few measurements of network output. Using various measures to characterize dynamical properties of network output, we determine that networks with highly variable and aperiodic output can successfully encode network input information with high fidelity and achieve the most accurate CS input reconstructions. For time-varying inputs, we also find that high-quality reconstructions are achievable by measuring network output over a relatively short time window. Even when network inputs change with time, the same optimal choice of network characteristics and corresponding dynamics apply as in the case of static inputs.
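The static recovery problem this abstract starts from can be illustrated with a generic compressive-sensing sketch (orthogonal matching pursuit, not the pulse-coupled-network method of the paper): a k-sparse signal is reconstructed from far fewer random measurements than its length.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected columns, then update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 3            # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                             # only m << n measurements
x_hat = omp(A, y, k)
```

The network setting in the paper replaces the fixed linear map A with nonlinear, time-evolving dynamics, which is exactly why the choice of network characteristics matters for reconstruction quality.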
Multi-input distributed classifiers for synthetic genetic circuits.
Kanakov, Oleg; Kotelnikov, Roman; Alsaedi, Ahmed; Tsimring, Lev; Huerta, Ramón; Zaikin, Alexey; Ivanchenko, Mikhail
2015-01-01
For practical construction of complex synthetic genetic networks able to perform elaborate functions, it is important to have a pool of relatively simple modules with different functionality which can be compounded together. To complement engineering of very different existing synthetic genetic devices such as switches, oscillators or logical gates, we propose and develop here a design of synthetic multi-input classifier based on a recently introduced distributed classifier concept. A heterogeneous population of cells acts as a single classifier, whose output is obtained by summarizing the outputs of individual cells. The learning ability is achieved by pruning the population, instead of tuning parameters of an individual cell. The present paper is focused on evaluating two possible schemes of multi-input gene classifier circuits. We demonstrate their suitability for implementing a multi-input distributed classifier capable of separating data which are inseparable for single-input classifiers, and characterize performance of the classifiers by analytical and numerical results. The simpler scheme implements a linear classifier in a single cell and is targeted at separable classification problems with simple class borders. A hard learning strategy is used to train a distributed classifier by removing from the population any cell answering incorrectly to at least one training example. The other scheme implements a circuit with a bell-shaped response in a single cell to allow potentially arbitrary shape of the classification border in the input space of a distributed classifier. Inseparable classification problems are addressed using a soft learning strategy, characterized by a probabilistic decision to keep or discard a cell at each training iteration. We expect that our classifier design contributes to the development of robust and predictable synthetic biosensors, with potential applications in many fields, including medicine and industry.
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
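The interpretation of sampled response data as Markov parameters can be checked numerically: for a discrete-time system, h_0 = D and h_k = C A^(k-1) B for k >= 1 are exactly the unit-pulse response. A minimal sketch with an arbitrary example system (the matrix values are chosen only for illustration):

```python
import numpy as np

# A small discrete-time state-space system (values are illustrative).
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 2.0]])
D = np.array([[0.3]])

def markov_parameters(A, B, C, D, count):
    """h_0 = D, h_k = C A^(k-1) B for k >= 1."""
    params = [D]
    Ak_B = B
    for _ in range(count - 1):
        params.append(C @ Ak_B)
        Ak_B = A @ Ak_B
    return params

def pulse_response(A, B, C, D, count):
    """Simulate x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k, u = unit pulse."""
    x = np.zeros((A.shape[0], 1))
    ys = []
    for k in range(count):
        u = np.array([[1.0]]) if k == 0 else np.array([[0.0]])
        ys.append(C @ x + D @ u)
        x = A @ x + B @ u
    return ys

h = markov_parameters(A, B, C, D, 6)
y = pulse_response(A, B, C, D, 6)
```

Since the two sequences coincide, sampled pulse-response data can be read directly as Markov parameters, which is the starting point for the realization and observer/Kalman filter identification methods the paper reviews.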
Can Appraisers Rate Work Performance Accurately?
ERIC Educational Resources Information Center
Hedge, Jerry W.; Laue, Frances J.
The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
Scaling of global input-output networks
NASA Astrophysics Data System (ADS)
Liang, Sai; Qi, Zhengling; Qu, Shen; Zhu, Ji; Chiu, Anthony S. F.; Jia, Xiaoping; Xu, Ming
2016-06-01
Examining scaling patterns of networks can help understand how structural features relate to the behavior of the networks. Input-output networks consist of industries as nodes and inter-industrial exchanges of products as links. Previous studies consider limited measures for node strengths and link weights, and also ignore the impact of dataset choice. We consider a comprehensive set of indicators in this study that are important in economic analysis, and also examine the impact of dataset choice, by studying input-output networks in individual countries and the entire world. Results show that Burr, Log-Logistic, Log-normal, and Weibull distributions can better describe scaling patterns of global input-output networks. We also find that dataset choice has limited impacts on the observed scaling patterns. Our findings can help examine the quality of economic statistics, estimate missing data in economic statistics, and identify key nodes and links in input-output networks to support economic policymaking.
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.
2014-12-01
Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain from first observations to potential end users and decision makers is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d), the distribution of source parameters m given observations d, which can be evaluated quickly for new data. Owing to the flexibility of the pattern
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight components, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS application. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
Stein's neuronal model with pooled renewal input.
Rajdl, Kamil; Lansky, Petr
2015-06-01
The input of Stein's model of a single neuron is usually described by using a Poisson process, which is assumed to represent the behaviour of spikes pooled from a large number of presynaptic spike trains. However, such a description of the input is not always appropriate as the variability cannot be separated from the intensity. Therefore, we create and study Stein's model with a more general input, a sum of equilibrium renewal processes. The mean and variance of the membrane potential are derived for this model. Using these formulas and numerical simulations, the model is analyzed to study the influence of the input variability on the properties of the membrane potential and the output spike trains. The generalized Stein's model is compared with the original Stein's model with Poissonian input using the relative difference of variances of membrane potential at steady state and the integral square error of output interspike intervals. Both of the criteria show large differences between the models for input with high variability. PMID:25910437
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Kelkar, S. S.; Lee, F. C.
1983-01-01
A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.
Input/output system for multiprocessors
Bernick, D.L.; Chan, K.K.; Chan, W.M.; Dan, Y.F.; Hoang, D.M.; Hussain, Z.; Iswandhi, G.I.; Korpi, J.E.; Sanner, M.W.; Zwangerman, J.A.
1989-04-11
A device controller is described, comprising: a first port-input/output controller coupled to a first input/output channel bus; and a second port-input/output controller coupled to a second input/output channel bus; each of the first and second port-input/output controllers having: a first ownership latch means for granting shared ownership of the device controller to a first host processor to provide a first data path on a first I/O channel through the first port I/O controller between the first host processor and any peripheral, and at least a second ownership latch means operative independently of the first ownership latch means for granting shared ownership of the device controller to a second host processor independently of the first port input/output controller to provide a second data path on a second I/O channel through the second port I/O controller between the second host processor and any peripheral devices coupled to the device controller.
Accurate and occlusion-robust multi-view stereo
NASA Astrophysics Data System (ADS)
Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.
2015-11-01
This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.
1997-09-23
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.
Videometric terminal guidance method and system for UAV accurate landing
NASA Astrophysics Data System (ADS)
Zhou, Xiang; Lei, Zhihui; Yu, Qifeng; Zhang, Hongliang; Shang, Yang; Du, Jing; Gui, Yang; Guo, Pengyu
2012-06-01
We present a videometric method and system to implement terminal guidance for Unmanned Aerial Vehicle (UAV) accurate landing. In the videometric system, two calibrated cameras attached to the ground are used, and a calibration method in which at least 5 control points are applied is developed to calibrate the inner and exterior parameters of the cameras. Cameras with 850 nm spectral filters are used to recognize an 850 nm LED target fixed on the UAV, which can highlight itself in images with complicated background. An NNLOG (normalized negative Laplacian of Gaussian) operator is developed for automatic target detection and tracking. Finally, the 3-D position of the UAV can be calculated with high accuracy and transferred to the control system to direct UAV accurate landing. The videometric system works at a rate of 50 Hz. Many real flight and static accuracy experiments demonstrate the correctness and veracity of the method proposed in this paper, and they also indicate the reliability and robustness of the system. The static accuracy experiment results show that the deviation is less than 10 cm when the target is far from the cameras and less than 2 cm within a 100 m range. The real flight experiment results show that the deviation from DGPS is less than 20 cm. The system implemented in this paper won the first prize in the AVIC Cup-International UAV Innovation Grand Prix, and it is the only one that achieved UAV accurate landing without GPS or DGPS.
Barrett, Christian L.; Cho, Byung-Kwan
2011-01-01
Immuno-precipitation of protein–DNA complexes followed by microarray hybridization is a powerful and cost-effective technology for discovering protein–DNA binding events at the genome scale. It is still an unresolved challenge to comprehensively, accurately and sensitively extract binding event information from the produced data. We have developed a novel strategy composed of an information-preserving signal-smoothing procedure, higher order derivative analysis and application of the principle of maximum entropy to address this challenge. Importantly, our method does not require any input parameters to be specified by the user. Using genome-scale binding data of two Escherichia coli global transcription regulators for which a relatively large number of experimentally supported sites are known, we show that ∼90% of known sites were resolved to within four probes, or ∼88 bp. Over half of the sites were resolved to within two probes, or ∼38 bp. Furthermore, we demonstrate that our strategy delivers significant quantitative and qualitative performance gains over available methods. Such accurate and sensitive binding site resolution has important consequences for accurately reconstructing transcriptional regulatory networks, for motif discovery, for furthering our understanding of local and non-local factors in protein–DNA interactions and for extending the usefulness horizon of the ChIP-chip platform. PMID:21051353
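The paper's exact smoothing and derivative pipeline is not specified in the abstract, but the general idea (smooth the probe-intensity track, then take peaks as downward zero-crossings of the derivative) can be sketched as follows, with all signal shapes, widths, and thresholds invented for illustration:

```python
import numpy as np

def smooth(signal, width):
    """Moving-average smoothing (a simple stand-in for the paper's
    information-preserving smoothing procedure)."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def peak_positions(signal, threshold):
    """Peaks = downward zero-crossings of the first difference,
    kept only where the smoothed signal exceeds a threshold."""
    d = np.diff(signal)
    return [i for i in range(1, len(d))
            if d[i - 1] > 0 and d[i] <= 0 and signal[i] >= threshold]

# Synthetic probe track: two binding events (Gaussian bumps) plus noise.
rng = np.random.default_rng(1)
x = np.arange(200)
track = (np.exp(-0.5 * ((x - 60) / 5.0) ** 2)
         + np.exp(-0.5 * ((x - 140) / 5.0) ** 2)
         + 0.05 * rng.normal(size=x.size))
peaks = peak_positions(smooth(track, 9), threshold=0.5)
```

The paper's contribution is doing this without user-specified parameters (the width and threshold above are exactly the kind of knobs it eliminates) and resolving sites to within a couple of probes.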
NASA Astrophysics Data System (ADS)
Udayashankar, Paniveni
2016-07-01
I study the complexity of supergranular cells using intensity patterns from the Kodaikanal solar observatory. The chaotic and turbulent aspect of solar supergranulation can be studied by examining the interrelationships amongst the parameters characterizing supergranular cells, namely size, horizontal flow field, lifetime and physical dimensions of the cells, and the fractal dimension deduced from the size data. The data consist of visually identified supergranular cells, from which a fractal dimension 'D' for supergranulation is obtained according to the relation P ∝ A^(D/2), where 'A' is the area and 'P' is the perimeter of the supergranular cells. I find a fractal dimension close to about 1.3, which is consistent with that for isobars and suggests a possible turbulent origin. The cell circularity shows a dependence on the perimeter with a peak around (1.1-1.2) × 10^5 m. The findings are supportive of Kolmogorov's theory of turbulence.
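The relation P ∝ A^(D/2) implies log P = const + (D/2) log A, so D follows from the slope of a log-log fit of perimeter against area. A sketch on synthetic cells with a known D (all numbers illustrative, not Kodaikanal data):

```python
import numpy as np

# Synthetic cell areas spanning supergranular scales, with perimeters
# obeying P = c * A**(D/2) for a known D, plus mild log-normal scatter.
rng = np.random.default_rng(0)
D_true, c = 1.3, 3.0
area = rng.uniform(2e8, 2e9, size=500)     # m^2, illustrative range
perim = c * area ** (D_true / 2) * np.exp(0.05 * rng.normal(size=area.size))

# log P = log c + (D/2) log A, so the fitted slope equals D/2.
slope, intercept = np.polyfit(np.log(area), np.log(perim), 1)
D_est = 2 * slope
```

A smooth boundary gives P ∝ A^(1/2), i.e. D = 1; measured slopes above 1/2, as here, indicate increasingly convoluted, possibly turbulence-shaped cell boundaries.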
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.
Puzzarini, Cristina
2015-11-25
The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed.
Optimizing Input/Output Using Adaptive File System Policies
NASA Technical Reports Server (NTRS)
Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.
1996-01-01
Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling
Hamrick, Todd
2011-01-01
Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
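The conventional MSE relation in the controllable parameters can be sketched as below (Teale's widely used formulation in consistent SI units; the abstract does not give the rewritten single-parameter form, and all numeric inputs here are hypothetical):

```python
# Sketch of the conventional MSE equation: an axial term WOB/A plus a
# rotary term 2*pi*N*T / (A * ROP), in consistent SI units (result in Pa).
import math

def mse_pa(wob_n, torque_nm, rpm, rop_m_per_hr, bit_area_m2):
    """Mechanical Specific Energy (Pa) from Weight on Bit (N),
    Torque (N*m), rotary speed (RPM), Rate of Penetration (m/hr),
    and bit cross-sectional area (m^2)."""
    n_rev_per_s = rpm / 60.0
    rop_m_per_s = rop_m_per_hr / 3600.0
    return (wob_n / bit_area_m2
            + 2.0 * math.pi * n_rev_per_s * torque_nm
              / (bit_area_m2 * rop_m_per_s))

# Hypothetical drilling state for a 216 mm (8.5 in) bit
area = math.pi * (0.216 / 2) ** 2
print(f"{mse_pa(5.0e4, 2.0e3, 120, 10.0, area):.3e} Pa")
```

The rotary term usually dominates, which is why the interdependence of Torque and Penetration per Revolution with Weight on Bit matters for the minimization described above.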
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
Preparation and accurate measurement of pure ozone.
Janssen, Christof; Simone, Daniela; Guinet, Mickaël
2011-03-01
Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
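The kind of performance prediction described can be illustrated with a toy 1-D partition cost model (the max-compute-plus-communication form and all cost constants are assumptions for illustration, not the paper's actual model):

```python
# Toy cost model for one timestep of a grid computation under a
# 1-D partition: the slowest processor's compute time dominates,
# plus a nearest-neighbour boundary exchange (two messages, unoverlapped).
def predict_step_time(cell_counts, t_cell, t_msg):
    """cell_counts[i] = grid cells assigned to processor i;
    t_cell = hypothetical per-cell compute cost,
    t_msg = hypothetical per-boundary message cost."""
    return max(cell_counts) * t_cell + 2 * t_msg

balanced = predict_step_time([100, 100, 100, 100], t_cell=1.0, t_msg=5.0)
skewed = predict_step_time([250, 50, 50, 50], t_cell=1.0, t_msg=5.0)
print(balanced, skewed)  # → 110.0 260.0
```

A remapping decision would then compare the predicted step time of the current partition against that of a rebalanced partition, net of the one-time cost of moving the data.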
Line gas sampling system ensures accurate analysis
Not Available
1992-06-01
Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. Accurate analysis requires that the composition of the sampled gas be representative of the whole and related to flow. When measurement and sampling techniques are thus married, gas volumes are accurately accounted for and adjustments to composition can be made.
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision in addition to modeling resist development. The optical model's imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
Accurate maser positions for MALT-45
NASA Astrophysics Data System (ADS)
Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven
2013-10-01
MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.
Accurate phase-shift velocimetry in rock.
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide valuable information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps, but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We present a complete method for the production of accurate phase-shift velocimetry maps in rocks and low-porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
Six axis force feedback input device
NASA Technical Reports Server (NTRS)
Ohm, Timothy (Inventor)
1998-01-01
The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be
NASA Astrophysics Data System (ADS)
Tsantaki, M.; Sousa, S. G.; Santos, N. C.; Montalto, M.; Delgado-Mena, E.; Mortier, A.; Adibekyan, V.; Israelian, G.
2014-10-01
Context. Planetary studies demand precise and accurate stellar parameters as input for inferring planetary properties. Different methods often provide different results that could lead to biases in the planetary parameters. Aims: In this work, we present a refinement of the spectral synthesis technique designed to better handle fast-rotating stars. This method is used to derive precise stellar parameters, namely effective temperature, surface gravity, metallicity, and rotational velocity. The procedure is tested for FGK stars with low and moderate-to-high rotation rates. Methods: The spectroscopic analysis is based on the spectral synthesis package Spectroscopy Made Easy (SME), which assumes Kurucz model atmospheres in LTE. The line list used for the synthesis comprises iron lines, and the atomic data are derived after solar calibration. Results: The comparison of our stellar parameters shows good agreement with literature values, for both slowly and fast rotating stars. In addition, our results are on the same scale as the parameters derived from the iron ionization and excitation method presented in our previous works. We present new atmospheric parameters for 10 transiting planet hosts as an update to the SWEET-Cat catalog. We also re-analyze their transit light curves to derive new updated planetary properties. Based on observations collected at the La Silla Observatory, ESO (Chile) with the FEROS spectrograph at the 2.2 m telescope (ESO runs ID 089.C-0444(A), 088.C-0892(A)) and with the HARPS spectrograph at the 3.6 m telescope (ESO runs ID 072.C-0488(E), 079.C-0127(A)); at the Observatoire de Haute-Provence (OHP, CNRS/OAMP), France, with the SOPHIE spectrograph at the 1.93 m telescope and at the Observatoire Midi-Pyrénées (CNRS), France, with the NARVAL spectrograph at the 2 m Bernard Lyot Telescope (Run ID L131N11). Appendix A is available in electronic form at http://www.aanda.org
Effects of bias in solar radiation inputs on ecosystem model performance
NASA Astrophysics Data System (ADS)
Asao, Shinichi; Sun, Zhibin; Gao, Wei
2015-09-01
Solar radiation inputs drive many processes in terrestrial ecosystem models. The processes (e.g. photosynthesis) account for most of the fluxes of carbon and water cycling in the models. It is thus clear that errors in solar radiation inputs cause key model outputs to deviate from observations, parameters to become suboptimal, and model predictions to lose credibility. However, errors in solar radiation inputs are unavoidable for most model predictions, since models are often run with observations containing spatial and/or temporal gaps. As modeled processes are non-linear and interact with each other, it is unclear how much confidence most model predictions merit without examining the effects of those errors on model performance. In this study, we examined the effects using a terrestrial ecosystem model, DayCent. DayCent was parameterized for annual grassland in California with six years of daily eddy covariance data totaling 15,337 data points. Using observed solar radiation values, we introduced bias at four different levels. We then simultaneously calibrated 48 DayCent parameters through inverse modeling using the PEST parameter estimation software. The bias in solar radiation inputs affected the calibration only slightly and preserved model performance. Bias slightly worsened simulations of water flux, but did not affect simulations of CO2 fluxes. This arose from a distinct parameter set for each bias level, and the parameter sets were surprisingly unconstrained by the extensive observations. We conclude that ecosystem models perform relatively well even with substantial bias in solar radiation inputs. However, model parameters and predictions warrant skepticism, because model parameters can accommodate biases in input data despite extensive observations.
Computer Generated Inputs for NMIS Processor Verification
J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly
2001-06-29
Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels with known correlation functions are compared to the output of the processor. These types of verifications have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in an NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999.
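The verification idea, driving a correlation processor with a generated input whose correlation function is known in advance and checking the output against it, can be sketched as follows (the pulse train, window length, and normalisation are hypothetical, not the actual NMIS test vectors):

```python
# Sketch of a built-in-self-test for a correlation processor: feed a known
# periodic pulse train and compare the computed correlation against the
# analytically expected values.
def cross_correlation(x, y, max_lag):
    """Circular cross-correlation of equal-length sequences, normalised
    by the sequence length, for lags 0..max_lag."""
    n = len(x)
    return [sum(x[i] * y[(i + lag) % n] for i in range(n)) / n
            for lag in range(max_lag + 1)]

# A pulse every `period` samples: the autocorrelation is 1/period at
# lag 0 and at multiples of the period, and 0 at other lags.
period = 4
pulses = [1 if i % period == 0 else 0 for i in range(64)]
acf = cross_correlation(pulses, pulses, period)
print(acf[0], acf[1], acf[period])  # → 0.25 0.0 0.25
```

In an actual BIST, any deviation of the processor's output from these precomputed values would flag a hardware fault, as in the VNIIEF case described above.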
Decontextualized language input and preschoolers' vocabulary development.
Rowe, Meredith L
2013-11-01
This article discusses the importance of using decontextualized language, or language that is removed from the here and now including pretend, narrative, and explanatory talk, with preschool children. The literature on parents' use of decontextualized language is reviewed and results of a longitudinal study of parent decontextualized language input in relation to child vocabulary development are explained. The main findings are that parents who provide their preschool children with more explanations and narrative utterances about past or future events in the input have children with larger vocabularies 1 year later, even with quantity of parent input and child prior vocabulary skill controlled. Recommendations for how to engage children in decontextualized language conversations are provided.
The input optics of Advanced LIGO
NASA Astrophysics Data System (ADS)
Tanner, D. B.; Arain, M. A.; Ciani, G.; Feldbaum, D.; Fulda, P.; Gleason, J.; Goetz, R.; Heintze, M.; Martin, R. M.; Mueller, C. L.; Williams, L. F.; Mueller, G.; Quetschke, V.; Korth, W. Z.; Reitze, D. H.; Derosa, R. T.; Effler, A.; Kokeyama, K.; Frolov, V. V.; Mullavey, A.; Poeld, J.
2016-03-01
The Input Optics (IO) of advanced LIGO will be described. The IO consists of all the optics between the laser and the power recycling mirror. The scope of the IO includes the following hardware: phase modulators, power control, input mode cleaner, an in-vacuum Faraday isolator, and mode matching telescopes. The IO group has developed and characterized RTP-based phase modulators capable of operation at 180 W cw input power. In addition, the Faraday isolator is compensated for depolarization and thermal lensing effects up to the same power and is capable of achieving greater than 40 dB isolation. This research has been supported by the NSF through Grants PHY-1205512 and PHY-1505598. LIGO-G1600067.
Effects of input uncertainty on cross-scale crop modeling
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils, and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy, since input accuracy, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
Are accurate equation of state parameters important in Richtmyer-Meshkov instabilities?
Cloutman, L D
1999-08-01
The Richtmyer-Meshkov instability is a classical fluid dynamical instability that has been extensively studied to help understand turbulent mixing. A recent numerical simulation of a shock tube experiment with an air-SF6 interface and a weak shock (Mach 1.2) used the ideal gas equation of state for air and an artificially low temperature as a surrogate for the correct SF6 gas physics. We have run a similar problem with both the correct gas physics and three versions of the air surrogate to understand the errors thereby introduced. We find that for the weakly driven single-mode case considered here, the instability amplitude is not affected, the interface location is affected only slightly, but the thermodynamic states are quite different. This result is not surprising because the flow far from the shock waves is essentially incompressible.
NASA Astrophysics Data System (ADS)
Krieger, J. B.; Chen, Jiqiang; Iafrate, G. J.; Savin, A.
1998-03-01
We have obtained an analytic approximation to E_c(r_g, ζ, G), where G is an energy gap separating the occupied and unoccupied states of a homogeneous electron gas, for ζ=0 and ζ=1. When G=0, E_c(r_g, ζ) reduces to the usual LSD result. This functional is employed in calculating correlation energies for unpolarized atoms and ions for Z <= 18 by taking G[n] = (1/8)|∇ ln n|^2, which reduces to the ionization energy in the large-r limit in an exact Kohn-Sham (KS) theory. The resulting functional is self-interaction-corrected employing a method which is invariant under a unitary transformation. We find that the application of this approach to the calculation of the E_c functional reduces the error in the LSD result by more than 95%. When the value of G is approximately corrected to include the effect of higher-lying unoccupied localized states, the resulting values of E_c are within a few percent of the exact results.
A More Accurate Measurement of the ^28Si Lattice Parameter
Massa, E.; Sasso, C. P.; Mana, G.; Palmisano, C.
2015-09-15
In 2011, a discrepancy between the values of the Planck constant measured by counting Si atoms and by comparing mechanical and electrical powers prompted a review, among others, of the measurement of the spacing of ^28Si (220) lattice planes, either to confirm the measured value and its uncertainty or to identify errors. This exercise confirmed the result of the previous measurement and yields the additional value d_220 = 192 014 711.98(34) am having a reduced uncertainty.
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
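The core idea of the extension, augmenting an input-only (moving average) model with past output values, can be sketched with a plain least-squares ARX fit. This is a simplified linear illustration without the Laguerre basis or nonlinear kernels of the paper; the function name and model orders are illustrative.

```python
import numpy as np

def fit_arx(u, y, p, q):
    """Least-squares fit of y[n] = sum_i a_i y[n-i] + sum_j b_j u[n-j].
    Including the past-output (autoregressive) terms is what lets a short
    parameter vector capture long memory; this is the motivation for moving
    from a moving-average Volterra-Wiener setup to an ARMA-type model."""
    N = len(y)
    rows, targets = [], []
    for n in range(max(p, q), N):
        past_y = [y[n - i] for i in range(1, p + 1)]
        past_u = [u[n - j] for j in range(1, q + 1)]
        rows.append(past_y + past_u)
        targets.append(y[n])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:p], theta[p:]  # AR coefficients, input coefficients

# Recover a known first-order system y[n] = 0.9 y[n-1] + 0.5 u[n-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for n in range(1, 500):
    y[n] = 0.9 * y[n - 1] + 0.5 * u[n - 1]
a, b = fit_arx(u, y, p=1, q=1)
```

On this noiseless example the coefficients are recovered essentially exactly; the paper's contribution is doing the analogous thing with far fewer basis functions for nonlinear physiological systems.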
NASA Astrophysics Data System (ADS)
Madhusudhan, Nikku; Freedman, R.; Tennyson, J.
2013-06-01
Recent advancements in exoplanet observations are placing unprecedented constraints on the physical and chemical properties of exoplanetary atmospheres. Statistically significant constraints have been placed on the abundances of atomic and molecular species, elemental abundance ratios, temperature profiles, energy circulation, presence of hazes/clouds, and non-equilibrium chemistry, in several exoplanetary atmospheres, including gas giants, ice giants, as well as super-Earths, over a wide temperature range. The chemical constraints have also motivated new paradigms for classifying exoplanets and new efforts to constraint their formation conditions. Central to all interpretations of exoplanet spectra, however, is the accuracy of fundamental inputs in the models, primarily, the atomic and molecular opacities, which are derived from laboratory experiments and/or ab initio numerical calculations. In this talk, we will review the state-of-the-art in atomic and molecular line-lists as applied to studies of exoplanetary atmospheres. We will discuss examples where advances in laboratory astrophysics, experimental and computational, have addressed important problems in the area of exoplanetary atmospheres, as well as outstanding questions requiring new experiments and/or theoretical calculations. For example, recent studies are suggesting that high-temperature line-lists of hydrocarbons (CH4, C2H2, HCN, etc.), and several metal hydrides, in addition to refined line-lists of several well-studied molecules, are important to accurately interpret exoplanetary spectra. We will highlight several fundamental questions in the area that require new efforts in laboratory astrophysics. Besides their importance in interpreting observations with current instruments, the refined parameters are also critical in the assessment of future facilities for exoplanet characterization, such as JWST, GMT, etc.
Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise
NASA Technical Reports Server (NTRS)
Sedlak, J.; Hashmall, J.
1997-01-01
Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
Accurate crop classification using hierarchical genetic fuzzy rule-based systems
NASA Astrophysics Data System (ADS)
Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.
2014-10-01
This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machines (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.
An update of input instructions to TEMOD
NASA Technical Reports Server (NTRS)
1973-01-01
The theory and operation of a FORTRAN 4 computer code, designated as TEMOD, used to calculate tubular thermoelectric generator performance is described in WANL-TME-1906. The original version of TEMOD was developed in 1969. A description is given of additions to the mathematical model and an update of the input instructions to the code. Although the basic mathematical model described in WANL-TME-1906 has remained unchanged, a substantial number of input/output options were added to allow completion of module performance parametrics as required in support of the compact thermoelectric converter system technology program.
Input/Output Subroutine Library Program
NASA Technical Reports Server (NTRS)
Collier, James B.
1988-01-01
Efficient, easy-to-use program moved easily to different computers. NAVIO, the Input/Output Subroutine Library, provides an input/output software package for FORTRAN programs that is portable, efficient, and easy to use. Implemented as hierarchy of libraries. At bottom is very small library containing only non-portable routines called "I/O Kernel." Design makes NAVIO easy to move from one computer to another, by simply changing kernel. NAVIO appropriate for software system of almost any size wherein different programs communicate through files.
Visual parameter optimisation for biomedical image processing
2015-01-01
Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538
Investigation of Input Signal Curve Effect on Formed Pulse of Hydraulic-Powered Pulse Machine
NASA Astrophysics Data System (ADS)
Novoseltseva, M. V.; Masson, I. A.; Pashkov, E. N.
2016-04-01
Well drilling machines should have as high an efficiency factor as possible. This work proposes factors that are affected by changes in the input signal pulse curve. A series of runs is conducted on a mathematical model of a hydraulic-powered pulse machine. From this experiment, interrelations between the input pulse curve and construction parameters are found. Results of the conducted experiment are obtained with the help of the mathematical model, which is created in Simulink/MATLAB. Keywords: mathematical modelling; impact machine; output signal amplitude; input signal curve.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
NASA Astrophysics Data System (ADS)
Liao, Qifeng; Lin, Guang
2016-07-01
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameters learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence capability that the output symbol error rate (SER) is always less than the input SER if the input SER is below a threshold, can be achieved. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter is to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and dual mode constant modulus algorithm, in terms of both convergence performance and SER performance, for nonlinear equalization.
NASA Astrophysics Data System (ADS)
Izawa, Jun; Tanoue, Kazuya; Murono, Yoshitaka
Severe soil liquefaction due to a long-duration earthquake with low acceleration occurred in the Tokyo Bay area in the 2011 off the Pacific coast of Tohoku Earthquake. This phenomenon clearly shows that soil liquefaction is affected by the properties of input waves. This paper describes the effect of earthquake wave properties on liquefaction using effective stress analysis with several earthquakes. Analytical results showed that almost the same pore water pressure was observed for both a long-duration earthquake with a maximum acceleration of 150 Gal and a typical inland active-fault earthquake with 891 Gal. Additionally, liquefaction potentials for each earthquake were evaluated by a simple judgment with an accumulated damage parameter, which is used for the design of railway structures in Japan. As a result, it was found that accurate liquefaction resistance in the large-cyclic region is necessary to evaluate the liquefaction potential due to a long-duration earthquake with low acceleration using the simple accumulated-damage-parameter judgment.
Field measurement of moisture-buffering model inputs for residential buildings
Woods, Jason; Winkler, Jon
2016-02-05
Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements, which we used to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
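The role of the median function in slope limiting can be illustrated with a minimal reconstruction-step sketch. This is the classical first-order minmod limiter written via the median identity minmod(x, y) = median(0, x, y), not Huynh's uniformly second-order constraint; the function names are illustrative.

```python
def median(a, b, c):
    """Middle value of three numbers."""
    return max(min(a, b), min(max(a, b), c))

def limited_slope(um, u0, up):
    """Monotonicity-preserving slope for a cell with value u0 and neighbours
    um, up: the minmod of the one-sided differences, which equals
    median(0, dl, dr). The slope vanishes at local extrema, so the piecewise
    linear reconstruction creates no new extrema."""
    dl, dr = u0 - um, up - u0
    return median(0.0, dl, dr)

# At a local extremum the slope is zeroed out:
print(limited_slope(1.0, 2.0, 1.0))  # 0.0
# On monotone data the smaller one-sided difference is taken:
print(limited_slope(0.0, 1.0, 3.0))  # 1.0
```

Huynh's contribution is a constraint that, unlike plain minmod, keeps uniform second-order accuracy at smooth extrema while still preserving monotonicity, together with a cheap test for where the constraint is redundant.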
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2016-07-01
In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.
Partially connected feedforward neural networks structured by input types.
Kang, Sanggil; Isik, Can
2005-01-01
This paper proposes a new method to model partially connected feedforward neural networks (PCFNNs) from the identified input type (IT), which refers to whether each input is coupled with or uncoupled from other inputs in generating output. The identification is done by analyzing input sensitivity changes as the magnitude of the inputs is amplified. The sensitivity changes of the uncoupled inputs are not correlated with the variation on any other input, while those of the coupled inputs are correlated with the variation on any one of the coupled inputs. According to the identified ITs, a PCFNN can be structured. Each uncoupled input does not share the neurons in the hidden layer with other inputs in order to contribute to output in an independent manner, while the coupled inputs share the neurons with one another. After deriving the mathematical input sensitivity analysis for each IT, several experiments, as well as a real example (blood pressure (BP) estimation), are described to demonstrate how well our method works.
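The coupled/uncoupled distinction can be checked numerically on a known function: an input is uncoupled when its sensitivity (partial derivative) is unaffected by varying the other inputs. This is a toy finite-difference illustration of the idea, not the paper's neural-network-based procedure; the example function is made up.

```python
def partial(f, x, i, h=1e-5):
    """Central finite-difference estimate of df/dx_i at point x."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

# x1 enters additively (uncoupled); x2 and x3 multiply each other (coupled).
f = lambda x1, x2, x3: x1 ** 2 + x2 * x3

# Sensitivity to the uncoupled input x1 does not move when x2 is varied...
s_a = partial(f, (1.0, 1.0, 2.0), 0)
s_b = partial(f, (1.0, 5.0, 2.0), 0)
# ...while the sensitivity to x2 shifts with x3, revealing the coupling.
t_a = partial(f, (1.0, 1.0, 2.0), 1)
t_b = partial(f, (1.0, 1.0, 4.0), 1)
```

In the PCFNN construction, x1 would then get its own private hidden neurons while x2 and x3 share theirs.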
The first accurate description of an aurora
NASA Astrophysics Data System (ADS)
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.
Accurate density functional thermochemistry for larger molecules.
Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.
1997-06-20
Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH° (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
Universality: Accurate Checks in Dyson's Hierarchical Model
NASA Astrophysics Data System (ADS)
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near β_c for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
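The linear-fitting step for the leading exponent can be sketched on synthetic data: for a pure power law χ ~ A (β_c − β)^(−γ), a straight-line fit of log χ against log(β_c − β) recovers γ as minus the slope. This is a generic illustration with made-up numbers, not the authors' high-accuracy procedure, which also fits the subleading correction governed by Δ.

```python
import numpy as np

# Synthetic susceptibility chi = A * (beta_c - beta)^(-gamma)
beta_c, gamma_true, A = 1.0, 1.2992, 2.0
beta = np.linspace(0.90, 0.999, 50)
chi = A * (beta_c - beta) ** (-gamma_true)

# Linear fit in log-log space: log chi = log A - gamma * log(beta_c - beta)
slope, intercept = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
gamma_est = -slope
```

On noiseless power-law data the fit is exact to machine precision; in practice the subleading term biases a naive two-parameter fit, which is why an independent estimate from the linearized renormalization group eigenvalues is a valuable cross-check.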
Three-input majority logic gate and multiple input logic circuit based on DNA strand displacement.
Li, Wei; Yang, Yang; Yan, Hao; Liu, Yan
2013-06-12
In biomolecular programming, the properties of biomolecules such as proteins and nucleic acids are harnessed for computational purposes. The field has gained considerable attention due to the possibility of exploiting the massive parallelism that is inherent in natural systems to solve computational problems. DNA has already been used to build complex molecular circuits, where the basic building blocks are logic gates that produce single outputs from one or more logical inputs. We designed and experimentally realized a three-input majority gate based on DNA strand displacement. One of the key features of a three-input majority gate is that the three inputs have equal priority, and the output will be true if any two of the inputs are true. Our design consists of a central, circular DNA strand with three unique domains between which are identical joint sequences. Before inputs are introduced to the system, each domain and half of each joint is protected by one complementary ssDNA that displays a toehold for subsequent displacement by the corresponding input. With this design the relationship between any two domains is analogous to the relationship between inputs in a majority gate. Displacing two or more of the protection strands will expose at least one complete joint and return a true output; displacing none or only one of the protection strands will not expose a complete joint and will return a false output. Further, we designed and realized a complex five-input logic gate based on the majority gate described here. By controlling two of the five inputs the complex gate can realize every combination of OR and AND gates of the other three inputs.
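Abstracting away the strand-displacement chemistry, the logical behaviour described above can be sketched in ordinary Boolean terms. The function names are hypothetical, and the five-input gate below is a plain truth-table analogue of the control-input trick, not the authors' molecular design.

```python
from itertools import product

def majority3(a: bool, b: bool, c: bool) -> bool:
    """True when at least two of the three equal-priority inputs are true."""
    return (a and b) or (b and c) or (a and c)

def gate5(x1, x2, x3, c1, c2):
    """Five-input majority. Fixing the two control inputs selects the logic
    applied to the other three: c1 = c2 = True yields OR(x1, x2, x3),
    c1 = c2 = False yields AND(x1, x2, x3)."""
    return sum([x1, x2, x3, c1, c2]) >= 3

# Exhaustive check of both behaviours over all 8 data-input combinations
for a, b, c in product([False, True], repeat=3):
    assert majority3(a, b, c) == (sum([a, b, c]) >= 2)
    assert gate5(a, b, c, True, True) == (a or b or c)
    assert gate5(a, b, c, False, False) == (a and b and c)
```

The exhaustive loop mirrors how the molecular gate was validated: every combination of protected/displaced strands corresponds to one row of this truth table.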
Yang, Xiaoyan; Cui, Jianwei; Lao, Dazhong; Li, Donghai; Chen, Junhui
2016-05-01
In this paper, a composite control based on Active Disturbance Rejection Control (ADRC) and Input Shaping is presented for TRMS with two degrees of freedom (DOF). The control tasks consist of accurately tracking desired trajectories and obtaining disturbance rejection in both horizontal and vertical planes. Due to un-measurable states as well as uncertainties stemming from modeling uncertainty and unknown disturbance torques, ADRC is employed, and feed-forward Input Shaping is used to improve the dynamical response. In the proposed approach, because the coupling effects are maintained in controller derivation, there is no requirement to decouple the TRMS into horizontal and vertical subsystems, which is usually performed in the literature. Finally, the proposed method is implemented on the TRMS platform, and the results are compared with those of PID and ADRC in a similar structure. The experimental results demonstrate the effectiveness of the proposed method. The operation of the controller allows for an excellent set-point tracking behavior and disturbance rejection with system nonlinearity and complex coupling conditions. PMID:26922492
NASA Astrophysics Data System (ADS)
Comparetto, Gary M.; Foose, William A.
The accuracy of the Gaussian approximation technique (GAT) is evaluated. The GAT represents the signal ensemble input to a nonlinear device, such as a hard limiter, as a Gaussian process. The results demonstrate that if the number of signals is greater than five and each of the Gaussian-approximated input signals contains less than 20 percent of the total power, then approximating the input signal ensemble by a Gaussian process accurately represents actual system performance. It is also shown that the GAT remains quite accurate as long as the ratio of Gaussian-approximated input signal power to total signal power is no more than 50 percent.
Extracting accurate temperatures of molten basalts from non-contact thermal infrared radiance data
NASA Astrophysics Data System (ADS)
Fontanella, N. R.; Ramsey, M. S.; Lee, R.
2013-12-01
The eruptive and emplacement temperature of a lava flow conveys important information about parameters such as composition, rheology, and emplacement processes. It can also serve as a critical input to the flow cooling and propagation models used for hazard prediction. One of the most common ways to determine temperatures of active lava flows is to use non-contact thermal infrared (TIR) measurements, either from ground-based radiometers and cameras or from air- and space-based remote sensing instruments. These temperature measurements assume a fixed value for the lava emissivity in order to solve the Planck equation for temperature. The research presented here examines the possibility of variable emissivity in a material's molten state and its effect on deriving accurate surface temperatures. Emplacement of a pahoehoe lava lobe at Kilauea volcano, Hawaii, was captured with high spatial resolution, high frame rate TIR video in order to study this phenomenon. The data show the appearance of molten lava at a breakout point until it cools to form a glassy crust that begins to fold. Emissivity was adjusted sequentially along linear transects from a starting value of 1.0 to lower values until the TIR temperature matched the known temperature measured with a thermocouple. Below an emissivity of ~0.89, temperatures of the molten lava rose above the known lava temperature. This value suggests a decrease in emissivity with a change of state, likely due to changes in the atomic bond structure of the melt. We have also recently completed the first calibrated laboratory-based emissivity measurements of molten basalts, and these high spectral resolution data confirm the field-based estimates. In contrast to rhyolites, basalts appear to display a less dramatic change between their glassy and molten spectra due to their higher melting and glass transition temperatures and the quick formation time of the crust. Therefore, the change in emissivity for molten rhyolite could
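The emissivity retrieval described above rests on Planck's law: measured radiance is emissivity times blackbody radiance, so a known (thermocouple) temperature fixes the emissivity. The sketch below uses a direct inversion rather than the paper's sequential adjustment along transects; the function names and the single-band formulation are illustrative simplifications.

```python
import math

# CODATA physical constants
H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light [m/s]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) [W m^-2 sr^-1 m^-1]."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (math.exp(b) - 1.0)

def brightness_temperature(radiance, wavelength_m, emissivity):
    """Invert L = emissivity * B(lambda, T) for temperature T."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return (H * C / (wavelength_m * KB)) / math.log(emissivity * a / radiance + 1.0)

def emissivity_from_known_temp(radiance, wavelength_m, temp_k):
    """Emissivity at which the TIR brightness temperature matches an
    independently measured (e.g. thermocouple) temperature."""
    return radiance / planck_radiance(wavelength_m, temp_k)
```

Note that lowering the assumed emissivity raises the retrieved temperature, which is the behavior reported in the abstract: below ~0.89, retrieved molten-lava temperatures exceeded the thermocouple value.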
Input Enhancement in Instructed SLA: Theoretical Bases.
ERIC Educational Resources Information Center
Smith, Michael Sharwood
1993-01-01
The concept of input to the language learner is examined with reference to some current theorizing about language processing and the idea of modular systems of knowledge. It is argued that exposure to a second language engages the learner in a whole battery of different processing mechanisms. (21 references) (Author/LB)
Multiple Input Microcantilever Sensor with Capacitive Readout
Britton, C.L., Jr.; Brown, G.M.; Bryan, W.L.; Clonts, L.G.; DePriest, J.C.; Emergy, M.S.; Ericson, M.N.; Hu, Z.; Jones, R.L.; Moore, M.R.; Oden, P.I.; Rochelle, J.M.; Smith, S.F.; Threatt, T.D.; Thundat, T.; Turner, G.W.; Warmack, R.J.; Wintenberg, A.L.
1999-03-11
A surface-micromachined MEMS process has been used to demonstrate multiple-input chemical sensing using selectively coated cantilever arrays. Combined hydrogen and mercury-vapor detection was achieved with a palm-sized, self-powered module with spread-spectrum telemetry reporting.
Input-Based Incremental Vocabulary Instruction
ERIC Educational Resources Information Center
Barcroft, Joe
2012-01-01
This fascinating presentation of current research undoes numerous myths about how we most effectively learn new words in a second language. In clear, reader-friendly text, the author details the successful approach of IBI vocabulary instruction, which emphasizes the presentation of target vocabulary as input early on and the incremental (gradual)…
Soil Organic Carbon Input from Urban Turfgrasses
Technology Transfer Automated Retrieval System (TEKTRAN)
Turfgrass is a major vegetation type in the urban and suburban environment. Management practices such as species selection, irrigation, and mowing may affect carbon input and storage in these systems. Research was conducted to determine the rate of soil organic carbon (SOC) changes, soil carbon sequ...
Input and Intake in Language Acquisition
ERIC Educational Resources Information Center
Gagliardi, Ann C.
2012-01-01
This dissertation presents an approach for a productive way forward in the study of language acquisition, sealing the rift between claims of an innate linguistic hypothesis space and powerful domain general statistical inference. This approach breaks language acquisition into its component parts, distinguishing the input in the environment from…
Treatments of Precipitation Inputs to Hydrologic Models
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models depends on correct parameterization; the most sensitive input is the rainfall time series. These records can come from land-based ...
Multichannel analyzers at high rates of input
NASA Technical Reports Server (NTRS)
Rudnick, S. J.; Strauss, M. G.
1969-01-01
Multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline-restoration, achieves good resolution at high rates of input. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.
The Contrast Theory of negative input.
Saxton, M
1997-02-01
Beliefs about whether or not children receive corrective input for grammatical errors depend crucially on how one defines the concept of correction. Arguably, previous conceptualizations do not provide a viable basis for empirical research (Gold, 1967; Brown & Hanlon, 1970; Hirsh-Pasek, Treiman & Schneiderman, 1984). Within the Contrast Theory of negative input, an alternative definition of negative evidence is offered, based on the idea that the unique discourse structure created in the juxtaposition of child error and adult correct form can reveal to the child the contrast, or conflict, between the two forms, and hence provide a basis for rejecting the erroneous form. A within-subjects experimental design was implemented for 36 children (mean age 5;0), in order to compare the immediate effects of negative evidence with those of positive input, on the acquisition of six novel irregular past tense forms. Children reproduced the correct irregular model more often, and persisted with fewer errors, following negative evidence rather than positive input.
"Thumball" Auxiliary Data-Input Device
NASA Technical Reports Server (NTRS)
Garner, H. Douglas; Busquets, Anthony M.; Hogge, Thomas W.; Parrish, Russell V.
1988-01-01
Track-ball-type device mounted on joystick and operated by thumb. Thumball designed to enable precise input of data about two different axes to autopilot, avionics computer, or other electronic device without need for operator to remove hands from joystick or other vehicle control levers.
Adaptive Random Testing with Combinatorial Input Domain
Lu, Yansheng
2014-01-01
Random testing (RT) is a fundamental testing technique for assessing software reliability, in which test cases are simply selected at random from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for programs with a combinatorial input domain (i.e., a set of categorical data). To extend ART to combinatorial input domains, we have adopted different similarity measures that are widely used for categorical data in data mining and have proposed two similarity measures based on interaction coverage. We then propose ART-CID, an extension of ART to the combinatorial input domain, which selects as the next test case the element of the categorical domain with the lowest similarity to the already generated test cases. Experimental results show that ART-CID generally performs better than RT with respect to different evaluation metrics. PMID:24772036
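The selection strategy can be sketched as follows. The Hamming-style similarity is a simple stand-in for the paper's interaction-coverage-based measures, and the candidate-set size is an illustrative assumption.

```python
import random

def hamming_similarity(tc1, tc2):
    """Fraction of categorical parameters with equal values (a simple
    similarity measure; the paper's interaction-coverage measures are
    more elaborate)."""
    return sum(1 for a, b in zip(tc1, tc2) if a == b) / len(tc1)

def next_test_case(executed, domain, n_candidates=10, rng=random):
    """ART-style selection for a combinatorial input domain: among random
    candidates, pick the one with the lowest maximum similarity to the
    already generated test cases."""
    candidates = [tuple(rng.choice(values) for values in domain)
                  for _ in range(n_candidates)]
    if not executed:
        return candidates[0]
    return min(candidates,
               key=lambda c: max(hamming_similarity(c, e) for e in executed))
```

Here `domain` lists the admissible values per categorical parameter, e.g. `[("a", "b"), ("x", "y", "z"), (0, 1)]`; each test case is one tuple drawn from that grid.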
Input, Interaction and Output: An Overview
ERIC Educational Resources Information Center
Gass, Susan; Mackey, Alison
2006-01-01
This paper presents an overview of what has come to be known as the "Interaction Hypothesis," the basic tenet of which is that through input and interaction with interlocutors, language learners have opportunities to notice differences between their own formulations of the target language and the language of their conversational…
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions about the morphologies of the galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finite number of source photons. This problem is particularly important for faint galaxies in space-based weak lensing measurements and for ground-based images with short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from source Poisson noise. Our noise treatment can be generalized to images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent-level accuracy even for images with a signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in ongoing and upcoming large-scale galaxy surveys.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Visualization of Parameter Space for Image Analysis
Pretorius, A. Johannes; Bray, Mark-Anthony P.; Carpenter, Anne E.; Ruddle, Roy A.
2013-01-01
Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step - initialization of sampling - and the last step - visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler - a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach. PMID:22034361
Eckhoff, P.A.
1994-12-31
Building downwash is a complex technical subject that has important ramifications in the field of air quality dispersion modeling. Building downwash algorithms have been incorporated into air quality dispersion models such as the Industrial Source Complex (ISC2) Models, short term and long term versions (ISCST2 and ISCLT2). Input data preparation for these algorithms must reflect Environmental Protection Agency (EPA) guidance on Good Engineering Practice (GEP) stack height and building downwash. The current guidance is complex and errors can easily be made in the input data preparation. A computer program called the Building Profile Input Program (BPIP) was written to alleviate errors caused during input data preparation and to provide a standardized method for calculating building height (BH) and projected building width (PBW) values for input to the ISC2 models that reflect EPA guidance.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure, and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right: checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter compares code calculations only between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, running multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D
George L Mesina; David Aumiller; Francis Buschman
2014-07-01
Computer programs that analyze light water reactor safety solve complex systems of governing, closure, and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of coding for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, increased emphasis has been placed on the development of automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
How many dark energy parameters?
Linder, Eric V.; Huterer, Dragan
2005-05-16
For exploring the physics behind the accelerating universe a crucial question is how much we can learn about the dynamics through next generation cosmological experiments. For example, in defining the dark energy behavior through an effective equation of state, how many parameters can we realistically expect to tightly constrain? Through both general and specific examples (including new parametrizations and principal component analysis) we argue that the answer is 42 - no, wait, two. Cosmological parameter analyses involving a measure of the equation of state value at some epoch (e.g., w_0) and a measure of the change in equation of state (e.g., w') are therefore realistic in projecting dark energy parameter constraints. More elaborate parametrizations could have some uses (e.g., testing for bias or comparison with model features), but do not lead to accurately measured dark energy parameters.
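The two-parameter description argued for above, a value w_0 and a measure of its change, is commonly written in the (w0, wa) form w(a) = w0 + wa(1 - a) introduced by Linder. A minimal sketch, with the standard dark energy density evolution that follows from it:

```python
import math

def w_cpl(a, w0=-1.0, wa=0.0):
    """Two-parameter dark energy equation of state w(a) = w0 + wa*(1 - a),
    where a is the cosmological scale factor (a = 1 today)."""
    return w0 + wa * (1.0 - a)

def de_density_ratio(a, w0=-1.0, wa=0.0):
    """Dark energy density relative to today, rho_DE(a)/rho_DE(1), for the
    w(a) above: a**(-3*(1 + w0 + wa)) * exp(-3*wa*(1 - a))."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))
```

For a cosmological constant (w0 = -1, wa = 0) the density ratio is 1 at every scale factor, as expected.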
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters.
Liu, Fei; Heiner, Monika; Yang, Ming
2016-01-01
Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830
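The combination described above, stochastic simulation with fuzzy kinetic parameters, can be sketched with a triangular fuzzy number and a minimal Gillespie simulation. The birth-death process stands in for the SPN (its state space is unbounded, like the yeast polarization model's), and evaluating the simulation at the endpoints of each alpha-cut is a simplification of the paper's simulation-based analysis.

```python
import random

def triangular_alpha_cut(lo, mode, hi, alpha):
    """Interval of a triangular fuzzy number (lo, mode, hi) at
    membership level alpha in [0, 1]."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def ssa_birth_death(k_birth, k_death, x0, t_end, rng):
    """Minimal Gillespie (SSA) run of a birth-death process with an
    unbounded state space; returns the state at time t_end."""
    t, x = 0.0, x0
    while t < t_end:
        rates = (k_birth, k_death * x)
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)          # time to next reaction
        x += 1 if rng.random() * total < rates[0] else -1
    return x
```

Sweeping alpha from 0 to 1 and simulating at each cut's endpoints brackets the output uncertainty induced by the fuzzy parameter, which is the essence of propagating fuzziness through the stochastic model.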
Simple PID parameter tuning method based on outputs of the closed loop system
NASA Astrophysics Data System (ADS)
Han, Jianda; Zhu, Zhiqiang; Jiang, Ziya; He, Yuqing
2016-05-01
Most existing PID parameter tuning methods are effective only with accurate, pre-identified system models, which often require strict identification experiments and are thus infeasible for many complicated systems. In most practical engineering applications, it is desirable for the PID tuning scheme to be based directly on the input-output response of the closed-loop system. Thus, a new parameter tuning scheme for PID controllers without an explicit mathematical model is developed in this paper. The paper begins with a new frequency-domain analysis of the PID controller. After that, the characteristic frequency of the PID controller is defined in order to study the mathematical relationship between the PID parameters and the open-loop frequency properties of the controlled system. Then, the concepts of M-field and θ-field are introduced and used to explain how the PID control parameters influence the closed-loop frequency-magnitude property and its time responses. Subsequently, the new PID parameter tuning scheme, i.e., a group of tuning rules, is proposed based on the preceding analysis. Finally, both simulations and experiments are conducted, and the results verify the feasibility and validity of the proposed methods.
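For reference, the controller being tuned has the standard parallel form below. The discrete implementation and the first-order test plant are generic illustrations; the paper's contribution is the tuning rules that choose (kp, ki, kd) from the closed-loop response, which are not reproduced here.

```python
class PID:
    """Discrete PID controller (parallel form, Euler integration).
    The gains must be supplied, e.g. by tuning rules such as those
    proposed in the paper."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Closing the loop around a simple first-order plant x' = -x + u with nonzero integral gain drives the tracking error to zero, the behavior any sensible tuning rule must preserve.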
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil; Abhyankar, S.; Ghosh, Donetta L.; Smith, Barry; Huang, Zhenyu; Tartakovsky, Alexandre M.
2015-09-22
Wind and solar power generators are commonly described by systems of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Fast and Accurate Construction of Confidence Intervals for Heritability.
Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran
2016-06-01
Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
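The idea of estimating the distribution of the heritability estimator and reading off accurate intervals can be sketched as a generic parametric bootstrap. This is only the underlying concept, not ALBI's algorithm (ALBI exploits the structure of REML estimators for efficiency); the toy simulator in the usage note, with its clipping to [0, 1], is a hypothetical stand-in for re-estimating heritability on resampled data.

```python
import random

def parametric_bootstrap_ci(estimate_fn, simulate_fn, h2_hat,
                            n_boot=1000, level=0.95, rng=random):
    """Percentile confidence interval from a parametric bootstrap:
    simulate datasets at the point estimate h2_hat, re-estimate on each,
    and take the central quantiles of the bootstrap estimates."""
    boots = sorted(estimate_fn(simulate_fn(h2_hat, rng))
                   for _ in range(n_boot))
    lo_idx = int((1.0 - level) / 2.0 * n_boot)
    hi_idx = int((1.0 + level) / 2.0 * n_boot) - 1
    return boots[lo_idx], boots[hi_idx]
```

With an estimator whose sampling distribution is clipped to the bounded parameter space [0, 1], the bootstrap distribution piles up at the zero boundary when the true heritability is small, which is exactly the regime in which the abstract reports that asymptotic intervals fail.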
Aerodynamic Parameter Estimation for the X-43A (Hyper-X) from Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Derry, Stephen D.; Smith, Mark S.
2005-01-01
Aerodynamic parameters were estimated based on flight data from the third flight of the X-43A hypersonic research vehicle, also called Hyper-X. Maneuvers were flown using multiple orthogonal phase-optimized sweep inputs applied as simultaneous control surface perturbations at Mach 8, 7, 6, 5, 4, and 3 during the vehicle descent. Aerodynamic parameters, consisting of non-dimensional longitudinal and lateral stability and control derivatives, were estimated from flight data at each Mach number. Multi-step inputs at nearly the same flight conditions were also flown to assess the prediction capability of the identified models. Prediction errors were found to be comparable in magnitude to the modeling errors, which indicates accurate modeling. Aerodynamic parameter estimates were plotted as a function of Mach number, and compared with estimates from the pre-flight aerodynamic database, which was based on wind-tunnel tests and computational fluid dynamics. Agreement between flight estimates and values computed from the aerodynamic database was excellent overall.
NASA Astrophysics Data System (ADS)
Atieh, M.; Mehltretter, S. L.; Gharabaghi, B.; Rudra, R.
2015-12-01
One of the most uncertain modeling tasks in hydrology is the prediction of sediment load and concentration statistics for ungauged streams. This study presents integrated artificial neural network (ANN) models for the prediction of sediment rating curve parameters (rating curve coefficient α and rating curve exponent β) for ungauged basins. The ANN models integrate a comprehensive list of input parameters to improve prediction accuracy; the inputs include soil, land use, topographic, climatic, and hydrometric data sets. The ANN models were trained on a randomly selected 2/3 of a dataset of 94 gauged streams in Ontario, Canada and validated on the remaining 1/3. The developed models have high correlation coefficients of 0.92 and 0.86 for α and β, respectively. In the ANN model, the rating coefficient α is directly proportional to the rainfall erosivity factor, soil erodibility factor, and apportionment entropy disorder index, and inversely proportional to vegetation cover and mean annual snowfall. The rating exponent β is directly proportional to mean annual precipitation, the apportionment entropy disorder index, main channel slope, and the standard deviation of daily discharge, and inversely proportional to the fraction of basin area covered by wetlands and swamps. Sediment rating curves are essential tools for the calculation of sediment load, concentration-duration curves (CDC), and concentration-duration-frequency (CDF) analysis for more accurate assessment of water quality in ungauged basins.
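A sediment rating curve relates concentration to discharge as C = αQ^β, so on gauged streams α and β can be recovered by least squares in log-log space. A minimal sketch on synthetic, noise-free data (the study itself predicts α and β for ungauged basins with ANNs, which is not reproduced here):

```python
import math

def fit_rating_curve(discharge, concentration):
    # Fit C = alpha * Q**beta by ordinary least squares on
    # log C = log alpha + beta * log Q.
    x = [math.log(q) for q in discharge]
    y = [math.log(c) for c in concentration]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = math.exp(my - beta * mx)
    return alpha, beta

# Synthetic data generated from alpha = 2.0, beta = 1.5 (illustrative).
q = [1.0, 2.0, 4.0, 8.0]
c = [2.0 * qi ** 1.5 for qi in q]
alpha, beta = fit_rating_curve(q, c)
print(alpha, beta)
```

On noise-free data the fit recovers the generating parameters exactly, up to floating-point rounding.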
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
Highly Accurate Inverse Consistent Registration: A Robust Approach
Reuter, Martin; Rosas, H. Diana; Fischl, Bruce
2010-01-01
The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289
Highly accurate articulated coordinate measuring machine
Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.
2003-12-30
Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
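The coordinate processing step of the patent, converting encoder output into cylindrical coordinates of the probe tip, can be sketched for a single revolute joint. The rigid-arm geometry and parameter names here are simplifying assumptions, not the disclosed multi-joint design:

```python
import math

def probe_tip_cylindrical(counts, marks_per_rev, arm_length, z_offset):
    # Convert encoder counts to a rotation angle about the joint axis,
    # then express the probe tip as (r, theta, z) in a reference
    # cylindrical coordinate system centered on that axis.
    theta = 2.0 * math.pi * counts / marks_per_rev   # radians
    return arm_length, theta, z_offset

r, theta, z = probe_tip_cylindrical(counts=1024, marks_per_rev=4096,
                                    arm_length=250.0, z_offset=30.0)
print(r, theta, z)  # theta is pi/2 for a quarter revolution
```

Angular resolution scales directly with the number of encoder marks, which is why a finely marked wheel is central to the claimed accuracy.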
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
Apparatus for accurately measuring high temperatures
Smith, Douglas D.
1985-01-01
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of airborne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Micron Accurate Absolute Ranging System: Range Extension
NASA Technical Reports Server (NTRS)
Smalley, Larry L.; Smith, Kely L.
1999-01-01
The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and the equipment requirements for extending the effective range of Fresnel diffraction systems are also described.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293
Accurate Thermal Stresses for Beams: Normal Stress
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Pilkey, Walter D.
2003-01-01
Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
Accurate Telescope Mount Positioning with MEMS Accelerometers
NASA Astrophysics Data System (ADS)
Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.
2014-08-01
This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field of view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be part of a telescope control system.
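The basic measurement underlying such mount positioning is that a static accelerometer reading of the gravity vector determines two tilt angles. A sketch under one common axis convention (the paper's own conventions and calibration steps are not reproduced here):

```python
import math

def tilt_from_accel(ax, ay, az):
    # Pitch and roll (radians) from a static 3-axis accelerometer
    # reading of the gravity vector. The sign and axis convention
    # below is a common one, assumed for illustration.
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Noise-free reading (in units of g) for a sensor pitched 30 degrees.
ax = -math.sin(math.radians(30.0))
ay, az = 0.0, math.cos(math.radians(30.0))
pitch, roll = tilt_from_accel(ax, ay, az)
print(math.degrees(pitch), math.degrees(roll))  # ≈ 30.0 and 0.0
```

Reaching subarcminute accuracy in practice additionally requires averaging over sensor noise and calibrating scale and offset errors, as the paper discusses.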
Parameters for burst detection
Bakkum, Douglas J.; Radivojevic, Milos; Frey, Urs; Franke, Felix; Hierlemann, Andreas; Takahashi, Hirokazu
2014-01-01
Bursts of action potentials within neurons and throughout networks are believed to serve roles in how neurons handle and store information, both in vivo and in vitro. Accurate detection of burst occurrences and durations is therefore crucial for many studies. A number of algorithms have been proposed to do so, but a standard method has not been adopted. This is due, in part, to many algorithms requiring the adjustment of multiple ad-hoc parameters and further post-hoc criteria in order to produce satisfactory results. Here, we broadly catalog existing approaches and present a new approach requiring the selection of only a single parameter: the number of spikes N comprising the smallest burst to consider. A burst was identified if N spikes occurred in less than T ms, where the threshold T was automatically determined from observing a probability distribution of inter-spike-intervals. Performance was compared with that of different classes of detectors on data gathered from in vitro neuronal networks grown over microelectrode arrays. Our approach offered a number of useful features including: a simple implementation, no need for ad-hoc or post-hoc criteria, and precise assignment of burst boundary time points. Unlike existing approaches, detection was not biased toward larger bursts, allowing identification and analysis of a greater range of neuronal and network dynamics. PMID:24567714
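The single-parameter rule described, a burst beginning wherever N spikes span less than T ms, can be sketched as follows. Here T is passed in directly, whereas the paper derives it automatically from the inter-spike-interval distribution; the extension rule for the burst tail is also a simplifying assumption:

```python
def detect_bursts(spike_times_ms, n_min, threshold_ms):
    # A burst begins wherever n_min consecutive spikes span less than
    # threshold_ms; it is then extended while inter-spike intervals
    # stay below threshold_ms.
    bursts, i = [], 0
    while i + n_min <= len(spike_times_ms):
        if spike_times_ms[i + n_min - 1] - spike_times_ms[i] < threshold_ms:
            j = i + n_min
            while (j < len(spike_times_ms)
                   and spike_times_ms[j] - spike_times_ms[j - 1] < threshold_ms):
                j += 1
            bursts.append((spike_times_ms[i], spike_times_ms[j - 1]))
            i = j
        else:
            i += 1
    return bursts

spikes = [0, 2, 4, 6, 100, 200, 202, 204, 206, 300]
print(detect_bursts(spikes, n_min=4, threshold_ms=10))
```

Each burst is reported by its boundary time points, mirroring the precise boundary assignment the paper emphasizes.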
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.
Accurate Weather Forecasting for Radio Astronomy
NASA Astrophysics Data System (ADS)
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
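The per-layer absorption and radiative-transfer step can be sketched for a single zenith path. The layer opacities and temperatures below are illustrative values, not output of Liebe's model or the 60-layer NWS profiles the system actually uses:

```python
import math

def zenith_opacity_and_tb(layer_opacities, layer_temps):
    # Sum per-layer opacities (nepers) and accumulate the downwelling
    # brightness temperature: each layer emits T_i * (1 - exp(-tau_i)),
    # attenuated by the opacity of the layers between it and the ground.
    tau_total, tb, tau_below = 0.0, 0.0, 0.0
    for tau_i, t_i in zip(layer_opacities, layer_temps):  # ground upward
        tb += t_i * (1.0 - math.exp(-tau_i)) * math.exp(-tau_below)
        tau_below += tau_i
        tau_total += tau_i
    return tau_total, tb

tau, tb = zenith_opacity_and_tb([0.01, 0.02, 0.005], [280.0, 260.0, 230.0])
print(tau, tb)  # total opacity in nepers, sky brightness in K
```

The resulting sky brightness is what adds to Tsys and hence to the observational noise at the affected wavelengths.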
The high cost of accurate knowledge.
Sutcliffe, Kathleen M; Weber, Klaus
2003-05-01
Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.
NASA Astrophysics Data System (ADS)
Graus, Richard R.; MacIntyre, Ian G.
1989-06-01
The computer model COREEF was used to simulate variations in the zonation patterns of Caribbean reefs in relation to parameters that affect the magnitude and distribution of wave and light energy. We first developed a simulated standard reef by exposing a simplified profile of the reef at Discovery Bay, Jamaica, to the known wave and light energy conditions to establish a reference coralgal and sedimentological zonation pattern. We then varied 13 parameters related to the wave and light energy input, bathymetric setting, and gross morphology of this reef to determine the effects of each parameter on the zonation pattern. Analysis of the simulation results indicates that submerging the reef or altering the wave or light energy input to the reef produces the greatest modifications of the zonation pattern. Morphological structures that alter a reef's horizontal dimensions only minimally affect the zonation pattern, but those structures that alter a reef's vertical dimensions (particularly steep-sided, wave-reflecting structures) can significantly modify the zonation of the structure itself and that of more leeward areas. The more seaward the location of a morphological structure, the more profoundly it can affect the overall reef zonation. If waves break at the reef crest, wave energy conditions in the back reef are greatly reduced and the bottom consists of lower wave energy zones than those found at the same depths in the fore reef. If waves do not break at the crest, the back reef is subjected to almost the same wave conditions that exist in the fore reef, and the zones tend to be similar. The zonation patterns of some existing reefs resemble those of our simulated reefs, but other zonation patterns cannot be reproduced accurately because our simulation experiments do not consider the interactions between multiple parameters found on many existing reefs.
A new approach to compute accurate velocity of meteors
NASA Astrophysics Data System (ADS)
Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William
2016-10-01
The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between measured meteoroid orbits and theoretical results. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore depends largely on the computation of the pre-atmospheric velocities. It is thus imperative to determine how to increase the precision of the velocity measurements. In this work, we analyze different methods currently used to compute meteor velocities and trajectories: the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). To compare the performance of these techniques objectively, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of many camera networks. Some fakeors are built following the propagation models studied by Gural (2012), and others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the one most suitable for solving the MPF, and the influence of the trajectory geometry on the result is also presented. We will present here the results of an improved implementation of the multi-parameter fitting that allows accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that, although the MPF is by far the best method for solving the trajectory and velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy
Huston, Thomas E; Farfán, Eduardo B; Bolch, W Emmett; Bolch, Wesley E
2003-11-01
An important aspect in model uncertainty analysis is the evaluation of input parameter sensitivities with respect to model outcomes. In previous publications, parameter uncertainties were examined for the ICRP-66 respiratory tract model. The studies were aided by the development and use of a computer code, LUDUC (Lung Dose Uncertainty Code), which allows probability density functions to be specified for all ICRP-66 model input parameters. These density functions are sampled using Latin hypercube techniques, with values subsequently propagated through the ICRP-66 model. In the present study, LUDUC has been used to perform a detailed parameter sensitivity analysis of the ICRP-66 model using input parameter density functions specified in previously published articles. The results suggest that most of the variability in the dose to a given target region is explained by only a few input parameters. For example, for particle diameters between 0.1 and 50 μm, about 50% of the variability in the total lung dose (weighted sum of target tissue doses) for 239PuO2 is due to variability in the dose to the alveolar-interstitial (AI) region. In turn, almost 90% of the variability in the dose to the AI region is attributable to uncertainties in only four parameters in the model: the ventilation rate, the AI deposition fraction, the clearance rate constant for slow-phase absorption of deposited material to the blood, and the clearance rate constant for particle transport from the AI2 to bb1 compartment. A general conclusion is that many input parameters do not significantly influence variability in final doses. As a result, future research can focus on improving density functions for those input variables that contribute the most to variability in final dose values. PMID:14571988
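The sampling-and-propagation workflow, drawing one stratified Latin hypercube sample per input dimension and pushing each parameter vector through the model, can be sketched as follows. The uniform ranges and the stand-in dose model are hypothetical placeholders, not ICRP-66 distributions or LUDUC itself:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    # One stratified draw per interval and dimension, shuffled
    # independently across dimensions (a minimal LHS).
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        cols.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*cols))  # rows: one parameter vector per model run

# Hypothetical ranges for two inputs: ventilation rate, AI deposition fraction.
samples = latin_hypercube(5, [(0.5, 1.5), (0.05, 0.25)])
doses = [vent * dep * 100.0 for vent, dep in samples]  # stand-in model
print(samples, doses)
```

Stratification guarantees that every interval of each input's range is sampled, which is what makes Latin hypercube designs efficient for sensitivity screening with few model runs.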
Precision cosmological parameter estimation
NASA Astrophysics Data System (ADS)
Fendt, William Ashton, Jr.
2009-09-01
methods. These techniques will help in the understanding of new physics contained in current and future data sets as well as benefit the research efforts of the cosmology community. Our idea is to shift the computationally intensive pieces of the parameter estimation framework to a parallel training step. We then provide a machine learning code that uses this training set to learn the relationship between the underlying cosmological parameters and the function we wish to compute. This code is very accurate and simple to evaluate. It can provide incredible speed-ups of parameter estimation codes. For some applications this provides the convenience of obtaining results faster, while in other cases this allows the use of codes that would be impossible to apply in the brute force setting. In this thesis we provide several examples where our method allows more accurate computation of functions important for data analysis than is currently possible. As the techniques developed in this work are very general, there are no doubt a wide array of applications both inside and outside of cosmology. We have already seen this interest as other scientists have presented ideas for using our algorithm to improve their computational work, indicating its importance as modern experiments push forward. In fact, our algorithm will play an important role in the parameter analysis of Planck, the next generation CMB space mission.
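The emulator idea, shifting expensive evaluations to an offline training step and replacing them online with a cheap learned approximation, can be sketched with grid training and piecewise-linear interpolation. The thesis trains a more sophisticated machine-learning model; this is only the pattern:

```python
import math
from bisect import bisect_left

def build_emulator(expensive_fn, lo, hi, n_train):
    # Offline "training": evaluate the expensive function once on a grid.
    xs = [lo + (hi - lo) * i / (n_train - 1) for i in range(n_train)]
    ys = [expensive_fn(x) for x in xs]

    def emulator(x):
        # Online evaluation: cheap piecewise-linear interpolation.
        j = min(max(bisect_left(xs, x), 1), len(xs) - 1)
        t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
        return ys[j - 1] + t * (ys[j] - ys[j - 1])

    return emulator

def slow(x):                # stand-in for an expensive likelihood piece
    return math.sin(x)

fast = build_emulator(slow, 0.0, 3.14, 200)
print(abs(fast(1.0) - math.sin(1.0)))  # small interpolation error
```

Inside an MCMC loop the emulator is queried millions of times while the expensive code runs only during the training step, which is where the speed-up comes from.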
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information brings negative effects, especially when it is delayed. Travelers prefer the route with the best reported conditions, but delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap between the system and its equilibrium.
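The boundedly rational choice rule can be sketched directly for two routes (variable names and the two-route setting are illustrative):

```python
import random

def choose_route(travel_times, br, rng=None):
    # Boundedly rational route choice: if the reported difference in
    # travel time is below the threshold BR, pick either route with
    # equal probability; otherwise take the (possibly outdated)
    # faster route.
    rng = rng or random.Random(0)
    t1, t2 = travel_times
    if abs(t1 - t2) < br:
        return rng.randrange(2)        # indifferent below the threshold
    return 0 if t1 < t2 else 1

print(choose_route((20.0, 23.0), br=5.0))  # 0 or 1, chosen at random
print(choose_route((20.0, 30.0), br=5.0))  # always route 0
```

The threshold prevents the whole population from piling onto whichever route the (stale) feedback currently reports as best, which is the mechanism behind the reduced oscillations.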
NASA Astrophysics Data System (ADS)
Deguchi, Daiki; Sato, Kazunori; Kino, Hiori; Kotani, Takao
2016-05-01
We have recently implemented a new version of the quasiparticle self-consistent GW (QSGW) method in the ecalj package released at http://github.com/tkotani/ecalj. Since the new version of the ecalj package is numerically stable and more accurate than the previous versions, we can perform calculations easily without being bothered with tuning input parameters. Here we examine its ability to describe energy band properties, e.g., band-gap energy, eigenvalues at special points, and effective mass, for a variety of semiconductors and insulators. We treat C, Si, Ge, Sn, SiC (in 2H, 3C, and 4H structures), (Al, Ga, In) × (N, P, As, Sb), (Zn, Cd, Mg) × (O, S, Se, Te), SiO2, HfO2, ZrO2, SrTiO3, PbS, PbTe, MnO, NiO, and HgO. We propose that a hybrid QSGW method, where we mix 80% of QSGW and 20% of LDA, gives universally good agreement with experiments for these materials.
Input to state stability in reservoir models
NASA Astrophysics Data System (ADS)
Müller, Markus; Sierra, Carlos
2016-04-01
Models in ecology and biogeochemistry, in particular models of the global carbon cycle, can be generalized as systems of non-autonomous ordinary differential equations (ODEs). For many applications, it is important to determine the stability properties of this type of system, but most methods available for autonomous systems are not necessarily applicable to the non-autonomous case. We discuss here stability notions for non-autonomous nonlinear models represented by systems of ODEs explicitly dependent on time and a time-varying input signal. We propose Input to State Stability (ISS) as a candidate for the necessary generalization of the established analysis with respect to equilibria or invariant sets for autonomous systems, and show its usefulness by applying it to reservoir models typical for element cycling in ecosystems, e.g., in soil organic matter decomposition. We also show how ISS generalizes existing concepts, formerly only available for Linear Time Invariant (LTI) and Linear Time Variant (LTV) systems, to the nonlinear case.
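The ISS property invoked above has a standard formal statement. For a system $\dot{x} = f(t, x, u)$ it requires a class-$\mathcal{KL}$ function $\beta$ and a class-$\mathcal{K}$ function $\gamma$ such that

```latex
% Standard ISS definition (textbook notation, not reproduced from the abstract)
\[
  |x(t)| \;\le\; \beta\bigl(|x(t_0)|,\, t - t_0\bigr)
  \;+\; \gamma\Bigl(\sup_{t_0 \le s \le t} |u(s)|\Bigr)
  \qquad \text{for all } t \ge t_0 .
\]
```

The state is thus bounded by a decaying transient plus a gain on the input; for a carbon-cycle reservoir model, $u$ would be the time-varying input flux such as litter input to soil.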
2012-10-03
Contains a class for connecting to the Xbox 360 controller, displaying the user inputs (buttons, triggers, analog sticks), and controlling the rumble motors. Also contains classes for converting the raw Xbox 360 controller inputs into meaningful commands for the following objects: Robot arms - provides joint control and several tool control schemes; UGVs - provides translational and rotational commands for "skid-steer" vehicles; Pan-tilt units - provides several modes of control including velocity, position, and point-tracking; Head-mounted displays (HMD) - controls the viewpoint of an HMD; Umbra frames - controls the position and orientation of an Umbra posrot object; Umbra graphics window - provides several modes of control for the Umbra OSG window viewpoint, including free-fly, cursor-focused, and object following.
Multimodal interfaces with voice and gesture input
Milota, A.D.; Blattner, M.M.
1995-07-20
The modalities of speech and gesture have different strengths and weaknesses, but combined they create a synergy in which each modality corrects the weaknesses of the other. We believe that a multimodal system, such as one intertwining speech and gesture, must start from a different foundation than systems based solely on pen input. In order to provide a basis for the design of a speech and gesture system, we have examined the research in other disciplines such as anthropology and linguistics. The result of this investigation was a taxonomy that gave us material for the incorporation of gestures whose meanings are largely transparent to the users. This study describes the taxonomy and gives examples of applications to pen input systems.
Circadian light-input pathways in Drosophila.
Yoshii, Taishi; Hermann-Luibl, Christiane; Helfrich-Förster, Charlotte
2016-01-01
Light is the most important environmental cue to entrain the circadian clock in most animals. In the fruit fly Drosophila melanogaster, the light entrainment mechanisms of the clock have been well-studied. The Drosophila brain contains approximately 150 neurons that rhythmically express circadian clock genes. These neurons are called "clock neurons" and control behavioral activity rhythms. Many clock neurons express the Cryptochrome (CRY) protein, which is sensitive to UV and blue light, and thus enables clock neurons deep in the brain to directly perceive light. In addition to the CRY protein, external photoreceptors in the Drosophila eyes play an important role in circadian light-input pathways. Recent studies have provided new insights into the mechanisms that integrate these light inputs into the circadian network of the brain. In this review, we will summarize the current knowledge on the light entrainment pathways in the Drosophila circadian clock. PMID:27066180
Virtual input device with diffractive optical element
NASA Astrophysics Data System (ADS)
Wu, Ching Chin; Chu, Chang Sheng
2005-02-01
For a portable device such as a PDA or cell phone, a small built-in virtual input device is more convenient for complex input demands. A few years ago, a creative idea called the 'virtual keyboard' was announced, but up to now there is still no mass production method for it. In this paper we show the whole procedure of making a virtual keyboard. The first step is the HOE (Holographic Optical Element) design of the keyboard image, which yields a fan angle of about 30 degrees; the pattern is then copied with high precision by electroforming, and finally the element can be produced by injection molding. With an adaptive lens design we obtain a keyboard image that is well corrected for distortion and has a wider fan angle of about 70 degrees. With better alignment in the HOE pattern lithography, we are confident of achieving higher diffraction efficiency.
Signaling inputs to invadopodia and podosomes
Hoshino, Daisuke; Branch, Kevin M.; Weaver, Alissa M.
2013-01-01
Remodeling of extracellular matrix (ECM) is a fundamental cell property that allows cells to alter their microenvironment and move through tissues. Invadopodia and podosomes are subcellular actin-rich structures that are specialized for matrix degradation and are formed by cancer and normal cells, respectively. Although initial studies focused on defining the core machinery of these two structures, recent studies have identified inputs from both growth factor and adhesion signaling as crucial for invasive activity. This Commentary will outline the current knowledge on the upstream signaling inputs to invadopodia and podosomes and their role in governing distinct stages of these invasive structures. We discuss invadopodia and podosomes as adhesion structures and highlight new data showing that invadopodia-associated adhesion rings promote the maturation of already-formed invadopodia. We present a model in which growth factor stimulation leads to phosphoinositide 3-kinase (PI3K) activity and formation of invadopodia, whereas adhesion signaling promotes exocytosis of proteinases at invadopodia. PMID:23843616
Controlling Synfire Chain by Inhibitory Synaptic Input
NASA Astrophysics Data System (ADS)
Shinozaki, Takashi; Câteau, Hideyuki; Urakubo, Hidetoshi; Okada, Masato
2007-04-01
The propagation of highly synchronous firings across neuronal networks, called the synfire chain, has been actively studied both theoretically and experimentally. The temporal accuracy and remarkable stability of the propagation have been repeatedly examined in previous studies. However, for such a mode of signal transduction to play a major role in processing information in the brain, the propagation should also be controlled dynamically and flexibly. Here, we show that inhibitory but not excitatory input can bidirectionally modulate the propagation, i.e., enhance or suppress the synchronous firings depending on the timing of the input. Our simulations based on the Hodgkin-Huxley neuron model demonstrate this bidirectional modulation and suggest that it should be achieved with any biologically inspired modeling. Our finding may help describe a concrete scenario of how multiple synfire chains lying in a neuronal network are appropriately controlled to perform significant information processing.
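The timing-dependent modulation can be illustrated with a far simpler model than the Hodgkin-Huxley simulations used in the paper: a pool of leaky integrate-and-fire (LIF) neurons receiving a synchronous excitatory volley. All parameters below are invented, and the sketch reproduces only the suppressive, timing-dependent side of the effect; the enhancement relies on rebound dynamics that a plain LIF neuron lacks.

```python
def spikes_in_pool(inhibition_at=None, n=50, dt=0.1, t_end=30.0, tau=10.0):
    """Count LIF neurons (threshold 1.0) that fire in response to a 2 ms
    excitatory volley at t = 10 ms, with an optional 2 ms inhibitory input
    starting at inhibition_at (times in ms). Toy model with invented
    parameters, not the paper's Hodgkin-Huxley setup."""
    drives = [4.0 + 4.0 * i / (n - 1) for i in range(n)]  # heterogeneous drive
    fired_count = 0
    for i_exc in drives:
        v, t, fired = 0.0, 0.0, False
        while t < t_end and not fired:
            current = i_exc if 10.0 <= t < 12.0 else 0.0   # excitatory volley
            if inhibition_at is not None and inhibition_at <= t < inhibition_at + 2.0:
                current -= 2.0                              # inhibitory input
            v += dt * (current - v) / tau                   # leaky integration
            fired = v >= 1.0
            t += dt
        fired_count += fired
    return fired_count
```

Inhibition coincident with the volley suppresses many more neurons than inhibition arriving well before it, so the downstream synchronous packet can be gated by inhibitory timing alone.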
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
Can Selforganizing Maps Accurately Predict Photometric Redshifts?
NASA Technical Reports Server (NTRS)
Way, Michael J.; Klose, Christian
2012-01-01
We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
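The SOM photo-z idea, quantize the photometric space with a trained map, then label each node with the mean spectroscopic redshift of the objects it wins, can be sketched as follows. This is a rough one-dimensional toy (all sizes and schedules invented), not the authors' pipeline.

```python
import math, random

def train_som(colors, zspec, nodes=8, iters=3000, seed=42):
    """Train a tiny 1-D self-organizing map on photometric feature vectors,
    then label each node with the mean spectroscopic redshift of the
    training objects for which it is the best-matching unit (BMU)."""
    random.seed(seed)
    dim = len(colors[0])
    w = [[random.random() for _ in range(dim)] for _ in range(nodes)]
    for t in range(iters):
        x = colors[random.randrange(len(colors))]
        lr = 0.5 * (1 - t / iters)                   # decaying learning rate
        sigma = 1 + (nodes / 2) * (1 - t / iters)    # shrinking neighborhood
        bmu = min(range(nodes), key=lambda j: math.dist(w[j], x))
        for j in range(nodes):
            h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
            w[j] = [wj + lr * h * (xi - wj) for wj, xi in zip(w[j], x)]
    sums, counts = [0.0] * nodes, [0] * nodes
    for x, z in zip(colors, zspec):
        bmu = min(range(nodes), key=lambda j: math.dist(w[j], x))
        sums[bmu] += z
        counts[bmu] += 1
    zmap = [s / c if c else None for s, c in zip(sums, counts)]
    return w, zmap

def predict_z(w, zmap, x):
    """Photometric redshift estimate: redshift label of the nearest labelled node."""
    labelled = [j for j in range(len(zmap)) if zmap[j] is not None]
    bmu = min(labelled, key=lambda j: math.dist(w[j], x))
    return zmap[bmu]
```

The quantization is unsupervised; the spectroscopic labels enter only in the final node-labelling step, which is what makes the RMSE depend on the (nonunique) trained map.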
Aortic Input Impedance during Nitroprusside Infusion
Pepine, Carl J.; Nichols, W. W.; Curry, R. C.; Conti, C. Richard
1979-01-01
Beneficial effects of nitroprusside infusion in heart failure are purportedly a result of decreased afterload through “impedance” reduction. To study the effect of nitroprusside on vascular factors that determine the total load opposing left ventricular ejection, the total aortic input impedance spectrum was examined in 12 patients with heart failure (cardiac index <2.0 liters/min per m2 and left ventricular end diastolic pressure >20 mm Hg). This input impedance spectrum expresses both mean flow (resistance) and pulsatile flow (compliance and wave reflections) components of vascular load. Aortic root blood flow velocity and pressure were recorded continuously with a catheter-tip electromagnetic velocity probe in addition to left ventricular pressure. Small doses of nitroprusside (9-19 μg/min) altered the total aortic input impedance spectrum as significant (P < 0.05) reductions in both mean and pulsatile components were observed within 60-90 s. With these acute changes in vascular load, left ventricular end diastolic pressure declined (44%) and stroke volume increased (20%, both P < 0.05). Larger nitroprusside doses (20-38 μg/min) caused additional alteration in the aortic input impedance spectrum with further reduction in left ventricular end diastolic pressure and increase in stroke volume but no additional changes in the impedance spectrum or stroke volume occurred with 39-77 μg/min. Improved ventricular function persisted when aortic pressure was restored to control values with simultaneous phenylephrine infusion in three patients. These data indicate that nitroprusside acutely alters both the mean and pulsatile components of vascular load to effect improvement in ventricular function in patients with heart failure. The evidence presented suggests that it may be possible to reduce vascular load and improve ventricular function independent of aortic pressure reduction. PMID:457874
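The input impedance spectrum discussed above is the harmonic-by-harmonic ratio of the pressure and flow spectra; the zeroth harmonic is peripheral resistance and the higher harmonics carry the pulsatile (compliance and wave-reflection) load. A schematic computation over one sampled cardiac cycle, not the catheter-laboratory pipeline:

```python
import cmath, math

def dft_coeff(x, k):
    """k-th Fourier coefficient of one sampled cycle of x, normalised by N."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N

def input_impedance(pressure, flow, harmonics=2):
    """Impedance modulus |Z_k| = |P_k| / |Q_k| for harmonics k = 0..harmonics-1;
    k = 0 is the mean (resistance) term. Harmonics with negligible flow
    content should be excluded to avoid dividing by near-zero coefficients."""
    return [abs(dft_coeff(pressure, k)) / abs(dft_coeff(flow, k))
            for k in range(harmonics)]
```

Nitroprusside's reported effect corresponds to lowering both the k = 0 (resistance) term and the low-harmonic pulsatile moduli of this spectrum.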
Sensory synergy as environmental input integration
Alnajjar, Fady; Itkonen, Matti; Berenz, Vincent; Tournier, Maxime; Nagai, Chikara; Shimoda, Shingo
2015-01-01
The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensory synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with nine healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. Changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler. PMID:25628523
Generalized Input-Output Inequality Systems
Liu, Yingfan; Zhang, Qinghong
2006-09-15
In this paper two types of generalized Leontief input-output inequality systems are introduced. The minimax properties for a class of functions associated with the inequalities are studied. Sufficient and necessary conditions for the inequality systems to have solutions are obtained in terms of the minimax value. Stability analysis for the solution set is provided in terms of upper semi-continuity and hemi-continuity of set-valued maps.
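For orientation, the classical equality-form Leontief system that such inequality systems generalize relates gross output $x$, the consumption (technology) matrix $A$, and final demand $d$:

```latex
% Classical Leontief input-output relation (standard background, not the
% paper's generalized inequality systems)
\[
  x = A x + d
  \quad\Longrightarrow\quad
  x = (I - A)^{-1} d \;=\; \sum_{k=0}^{\infty} A^{k} d ,
\]
```

where the Neumann-series solution is valid, and nonnegative for nonnegative $d$, when the spectral radius of $A$ is below one. The paper's inequality systems replace the equality with generalized inequalities and study solvability via minimax values.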
Using Focused Regression for Accurate Time-Constrained Scaling of Scientific Applications
Barnes, B; Garren, J; Lowenthal, D; Reeves, J; de Supinski, B; Schulz, M; Rountree, B
2010-01-28
Many large-scale clusters now have hundreds of thousands of processors, and processor counts will be over one million within a few years. Computational scientists must scale their applications to exploit these new clusters. Time-constrained scaling, which is often used, tries to hold total execution time constant while increasing the problem size along with the processor count. However, complex interactions between parameters, the processor count, and execution time complicate determining the input parameters that achieve this goal. In this paper we develop a novel gray-box, focused regression-based approach that assists the computational scientist with maintaining constant run time on increasing processor counts. Combining application-level information from a small set of training runs, our approach allows prediction of the input parameters that result in similar per-processor execution time at larger scales. Our experimental validation across seven applications showed that median prediction errors are less than 13%.
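The flavor of the approach, fit a runtime model on a few small training runs, then invert it to pick the problem size that keeps time constant at a larger processor count, can be sketched with a deliberately simple functional form. The model below (runtime proportional to a power of problem size per processor) is a hypothetical stand-in, not the paper's focused-regression model:

```python
import math

def fit_runtime_model(runs):
    """Fit log t = log c + a * log(n / p) by ordinary least squares, where
    runs is a list of (problem_size n, processors p, runtime t) training
    triples. Toy functional form; the paper uses focused regression."""
    xs = [math.log(n / p) for n, p, _ in runs]
    ys = [math.log(t) for *_, t in runs]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, math.exp(my - a * mx)

def problem_size_for(t_target, p, a, c):
    """Invert t = c * (n / p) ** a to find the problem size n that hits the
    target runtime t_target on p processors."""
    return p * (t_target / c) ** (1 / a)
```

A handful of small-scale runs fixes a and c; the inversion then predicts the input parameter (here, problem size) for the full-scale job.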
Wang, Lei; Haccou, Patsy; Lu, Bao-Rong
2016-01-01
Environmental impacts caused by transgene flow from genetically engineered (GE) crops to their wild relatives, mediated by pollination, are longstanding biosafety concerns worldwide. Mathematical modeling provides a useful tool for estimating frequencies of pollen-mediated gene flow (PMGF) that are critical for assessing such environmental impacts. However, most PMGF models are impractical for this purpose because their parameterization requires actual data from field experiments. In addition, most of these models are too general and ignore important biological characteristics of the plant species concerned, and therefore cannot provide accurate predictions of PMGF frequencies. It is necessary to develop more accurate PMGF models based on biological and climatic parameters that can be easily measured in situ. Here, we present a quasi-mechanistic PMGF model that only requires the input of biological and wind speed parameters, without actual data from field experiments. Validation of the quasi-mechanistic model based on five sets of published data from field experiments showed significant correlations between the model-simulated and field-experiment-generated PMGF frequencies. These results suggest accurate prediction of PMGF frequencies using this model, provided that the necessary biological parameters and wind speed data are available. This model can largely facilitate the assessment and management of environmental impacts caused by transgene flow, such as determining transgene flow frequencies at a particular spatial distance, and establishing spatial isolation between a GE crop and its coexisting non-GE counterparts and wild relatives.
Minimizing structural vibrations with Input Shaping (TM)
NASA Technical Reports Server (NTRS)
Singhose, Bill; Singer, Neil
1995-01-01
A new method for commanding machines to move with increased dynamic performance was developed. This method is an enhanced version of input shaping, a patented vibration suppression algorithm. The technique intercepts a command input to a system and reshapes it into a command that moves the mechanical system with increased performance and reduced residual vibration. This document describes many advanced methods for generating highly optimized shaping sequences which are tuned to particular systems. The shaping sequence is important because it determines the trade-off between the move/settle time of the system and the insensitivity of the input shaping algorithm to variations or uncertainties in the machine being controlled. For example, a system with a 5 Hz resonance that takes 1 second to settle can instead settle in about 0.2 seconds using a 0.2 second shaping sequence (thus improving settle time by a factor of 5). This system could vary by plus or minus 15% in its natural frequency and still have no apparent vibration. However, the same system shaped with a 0.3 second shaping sequence could tolerate plus or minus 40% or more variation in natural frequency. This document describes how to generate sequences that maximize performance, sequences that maximize insensitivity, and sequences that trade off between the two. Several software tools are documented and included.
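The basic mechanism, convolving the command with a short impulse sequence tuned to the resonance, can be illustrated with the textbook two-impulse zero-vibration (ZV) shaper. These are the standard literature formulas, not the document's optimized or insensitivity-maximizing sequences:

```python
import math

def zv_shaper(freq_hz, zeta):
    """Two-impulse zero-vibration shaper for a mode with natural frequency
    freq_hz (Hz) and damping ratio zeta. Returns (time, amplitude) pairs;
    the second impulse lands half a damped period after the first."""
    wd = 2 * math.pi * freq_hz * math.sqrt(1 - zeta ** 2)  # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    return [(0.0, 1 / (1 + K)), (math.pi / wd, K / (1 + K))]

def shape(command, dt, impulses):
    """Convolve a sampled command with the shaper's impulse sequence."""
    out = [0.0] * len(command)
    for t, a in impulses:
        shift = round(t / dt)
        for i in range(shift, len(command)):
            out[i] += a * command[i - shift]
    return out
```

For the 5 Hz example above with zero damping, the second impulse falls 0.1 s after the first, so the shaped command reaches full amplitude in 0.1 s and the two excitations cancel at the resonance, which is the settle-time improvement the document describes.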
Molecular structure input on the web.
Ertl, Peter
2010-02-02
A molecule editor, that is, a program for the input and editing of molecules, is an indispensable part of every cheminformatics or molecular processing system. This review focuses on a special type of molecule editor, namely those used for molecular structure input on the web. Scientific computing is now moving more and more in the direction of web services and cloud computing, with servers scattered all around the Internet. Thus the web browser has become the universal scientific user interface, and a tool to edit molecules directly within the web browser is essential. The review covers the history of web-based structure input, starting with simple text entry boxes and early molecule editors based on clickable maps, before moving to the current situation dominated by Java applets. One typical example - the popular JME Molecule Editor - is described in more detail. Modern Ajax server-side molecule editors are also presented. Finally, the possible future direction of web-based molecule editing, based on technologies like JavaScript and Flash, is discussed.