Input Type and Parameter Resetting: Is Naturalistic Input Necessary?
ERIC Educational Resources Information Center
Rothman, Jason; Iverson, Michael
2007-01-01
It has been argued that extended exposure to naturalistic input provides L2 learners with more of an opportunity to converge of target morphosyntactic competence as compared to classroom-only environments, given that the former provide more positive evidence of less salient linguistic properties than the latter (e.g., Isabelli 2004). Implicitly,…
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756
Agricultural and Environmental Input Parameters for the Biosphere Model
Kaylie Rasmuson; Kurt Rautenstrauch
2003-06-20
This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter
2016-04-01
Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.
Accurate parameter estimation for unbalanced three-phase system.
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the α β-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
Sensitivity of acoustic predictions to variation of input parameters
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Burley, Casey L.; Marcolini, Michael A.
1994-01-01
Rotor noise prediction codes predict the thickness and loading noise produced by a helicopter rotor, given the blade motion, rotor operating conditions, and fluctuating force distribution over the blade surface. However, the criticality of these various inputs, and their respective effects on the predicted acoustic field, have never been fully addressed. This paper examines the importance of these inputs, and the sensitivity of the acoustic predicitions to a variation of each parameter. The effects of collective and cyclic pitch, as well as coning and cyclic flapping, are presented. Blade loading inputs are examined to determine the necessary spatial and temporal resolution, as well as the importance of the chordwise distribution. The acoustic predictions show regions in the acoustic field where significant errors occur when simplified blade motions or blade loadings are used. An assessment of the variation in the predicted acoustic field is balanced by a consideration of Central Processing Unit (CPU) time necessary for the various approximations.
Sensitivity of acoustic predictions to variation of input parameters
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Marcolini, Michael A.; Burley, Casey L.
1991-01-01
The noise prediction code WOPWOP predicts the thickness and loading noise produced by a helicopter rotor, given the blade motion, rotor operating conditions, and fluctuating force distribution over the blade surface. However, the criticality of these various inputs, and their respective effects on the predicted acoustic field, have never been fully addressed. This paper examines the importance of these inputs, and the sensitivity of the acoustic predictions to a variation of each parameter. The effects of collective and cyclic pitch, as well as coning and flapping, are presented. Blade loading inputs are examined to determine the necessary spatial and temporal resolution, as well as the importance of the cordwise distribution. The acoustic predictions show regions in the acoustic field where significant errors occur when simplified blade motions or blade loadings are used. An assessment of the variation in the predicted acoustic field is balanced by a consideration of CPU time necessary for the various approximations.
Environmental Transport Input Parameters for the Biosphere Model
M. Wasiolek
2004-09-10
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).
Agricultural and Environmental Input Parameters for the Biosphere Model
K. Rasmuson; K. Rautenstrauch
2004-09-14
This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.
Inhalation Exposure Input Parameters for the Biosphere Model
K. Rautenstrauch
2004-09-10
This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
Accurate and robust estimation of camera parameters using RANSAC
NASA Astrophysics Data System (ADS)
Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He
2013-03-01
Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on 2D planar target sometimes fails to give reliable and accurate results due to the inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on RANSAC algorithm is proposed to detect the unreliability and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulative and real experiments have been carried out to evaluate the performance of the proposed method and the results show that the proposed method is robust under large noise condition and quite efficient to improve the calibration accuracy compared with the original state.
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempiricalmore » OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.« less
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C_{7}H_{10}O_{2}, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
Macroscopic singlet oxygen model incorporating photobleaching as an input parameter
NASA Astrophysics Data System (ADS)
Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.
2015-03-01
A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.
Environmental Transport Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-06-27
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values
Inhalation Exposure Input Parameters for the Biosphere Model
M. Wasiolek
2006-06-05
This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the
Inhalation Exposure Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-09-24
This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the
Soil-related Input Parameters for the Biosphere Model
A. J. Smith
2003-07-02
This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash
Hendricks, Terry J.; Karri, Naveen K.
2007-06-30
Advanced, direct thermal energy conversion technologies are receiving increased research attention in order to recover waste thermal energy in advanced vehicles and industrial processes. Advanced thermoelectric (TE) systems necessarily require integrated system-level analyses to establish accurate optimum system designs. Past system-level design and analysis has relied on well-defined deterministic input parameters even though many critically important environmental and system design parameters in the above mentioned applications are often randomly variable, sometimes according to complex relationships, rather than discrete, well-known deterministic variables. This work describes new research and development creating techniques and capabilities for probabilistic design and analysis of advanced TE power generation systems to quantify the effects of randomly uncertain design inputs in determining more robust optimum TE system designs and expected outputs. Selected case studies involving stochastic TE .material properties and coupled multi-variable stochasticity in key environmental and design parameters are presented and discussed to demonstrate key impacts from considering stochastic design inputs on the TE design optimization process. Critical findings show that: 1) stochastic Gaussian input distributions may produce Gaussian or non-Gaussian outcome probability distributions for critical TE design parameters, and 2) probabilistic input considerations can create design effects that warrant significant modifications to deterministically-derived optimum TE system designs. Magnitudes and directions of these design modifications are quantified for selected TE system design analysis cases.
Direct computation of parameters for accurate polarizable force fields
Verstraelen, Toon Vandenbrande, Steven; Ayers, Paul W.
2014-11-21
We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
Accurate 3D quantification of the bronchial parameters in MDCT
NASA Astrophysics Data System (ADS)
Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.
2005-08-01
The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a pair bronchus-vessel which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.
Soil-Related Input Parameters for the Biosphere Model
A. J. Smith
2004-09-09
This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This
A Study on the Effect of Input Parameters on Springback Prediction Accuracy
NASA Astrophysics Data System (ADS)
Han, Y. S.; Yang, W. H.; Choi, K. Y.; Kim, B. H.
2011-08-01
In this study, it is considered the input parameters in springback simulation affect factors to use member part by Taguchi's method into six-sigma tool on the basis of experiment for acquiring much more accurate springback prediction in Pamstamp2G. The best combination of input parameters for higher springback prediction accuracy is determined to the fender part as the one is applied for member part. The cracks and wrinkles in drawing and flanging operation must be removed for predicting the higher springback in accuracy. The compensation of springback on the basis of simulation is carried out. It is concluded that 95% of accuracy for springback prediction in dimension is secured as comparing with tryout panel.
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology provided insights into arrhythmogenesis and paved the way toward tailored therapies in the last years. To fully leverage in silico models in future research, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects, however. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today’s high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds to meet all requirements. Sequential combination of the two algorithms did improve the performance to some extent but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal to noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly allowing to assess the often non
A generalized multiple-input, multiple-output modal parameter estimation algorithm
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Blair, M. A.
1984-01-01
A new method for experimental determination of the modal parameters of a structure is presented. The method allows for multiple input forces to be applied simultaneously, and for an arbitrary number of acceleration response measurements to be employed. These data are used to form the equations of motion for a damped linear elastic structure. The modal parameters are then obtained through an eigenvalue technique. In conjunction with the development of the equations, an extensive computer simulation study was performed. The results of the study show a marked improvement in the mode shape identification for closely-spaced modes as the number of applied forces is increased. Also demonstrated is the influence of noise on the method's ability to identify accurate modal parameters. Here again, an increase in the number of exciters leads to a significant improvement in the identified parameters.
Identification of accurate nonlinear rainfall-runoff models with unique parameters
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N.
2009-04-01
We propose a strategy to identify models with unique parameters that yield accurate streamflow predictions, given a time-series of rainfall inputs. The procedure consists of five general steps. First, an a priori range of model structures is specified based on prior general and site-specific hydrologic knowledge. To this end, we rely on a flexible model code that allows a specification of a wide range of model structures, from simple to complex. Second, using global optimization each model structure is calibrated to a record of rainfall-runoff data, yielding optimal parameter values for each model structure. Third, accuracy of each model structure is determined by estimating model prediction errors using independent validation and statistical theory. Fourth, parameter identifiability of each calibrated model structure is estimated by means of Monte Carlo Markov Chain simulation. Finally, an assessment is made about each model structure in terms of its accuracy of mimicking rainfall-runoff processes (step 3), and the uniqueness of its parameters (step 4). The procedure results in the identification of the most complex and accurate model supported by the data, without causing parameter equifinality. As such, it provides insight into the information content of the data for identifying nonlinear rainfall-runoff models. We illustrate the method using rainfall-runoff data records from several MOPEX basins in the US.
Uncertainty related to input parameters of (137)Cs soil redistribution model for undisturbed fields.
Iurian, Andra-Rada; Mabit, Lionel; Cosma, Constantin
2014-10-01
This study presents an alternative method to empirically establish the effective diffusion coefficient and the convective velocity of (137)Cs in undisturbed soils. This approach offers the possibility to improve the parameterisation and the accuracy of the (137)Cs Diffusion and Migration Model (DMM) used to assess soil erosion magnitudes. The impact of the different input parameters of this radiometric model on the derived-soil redistribution rates has been determined for a Romanian pastureland located in the northwest extremity of the Transylvanian Plain. By fitting the convection-diffusion equation to the available experimental data, the diffusion coefficient and convection velocity of (137)Cs in soil could be determined; 72% of the (137)Cs soil content could be attributed to the (137)Cs fallout originating from Chernobyl. The medium-term net erosion rate obtained with the calculated input parameters reached -6.6 t ha(-1) yr(-1). The model highlights great sensitivity to parameter estimations and the calculated erosion rates for undisturbed landscapes can be highly impacted if the input parameters are not accurately determined from the experimental data set. Upper and lower bounds should be established based on the determined uncertainty budget for the reliable estimates of the derived redistribution rates. PMID:24929506
Application of optimal input synthesis to aircraft parameter identification
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.; Mehra, R. K.
1976-01-01
The Frequency Domain Input Synthesis procedure is used in identifying the stability and control derivatives of an aircraft. By using a frequency-domain approach, one can handle criteria that are not easily handled by the time-domain approaches. Numerical results are presented for optimal elevator deflections to estimate the longitudinal stability and control derivatives subject to root-mean square constraints on the input. The applicability of the steady state optimal inputs to finite duration flight testing is investigated. The steady state approximation of frequency-domain synthesis is good for data lengths greater than two time cycles for the short period mode of the aircraft longitudinal motions. Phase relationships between different frequency components become important for shorter data lengths. The frequency domain inputs are shown to be much better than the conventional doublet inputs.
Flight investigation of various control inputs intended for parameter estimation
NASA Technical Reports Server (NTRS)
Shafer, M. F.
1984-01-01
NASA's F-8 digital fly-by-wire aircraft has been subjected to stability and control derivative assessments, leading to the proposal of improved control inputs for more efficient control derivative estimation. This will reduce program costs by reducing flight test and data analysis requirements. Inputs were divided into sinusoidal types and cornered types. Those with corners produced the best set of stability and control derivatives for the unaugmented flight control system mode. Small inputs are noted to have provided worse derivatives than larger ones.
Accurate lattice parameter measurements of stoichiometric uranium dioxide
NASA Astrophysics Data System (ADS)
Leinders, Gregory; Cardinaels, Thomas; Binnemans, Koen; Verwerft, Marc
2015-04-01
The paper presents and discusses lattice parameter analyses of pure, stoichiometric UO2. Attention was paid to prepare stoichiometric samples and to maintain stoichiometry throughout the analyses. The lattice parameter of UO2.000±0.001 was evaluated as being 547.127 ± 0.008 pm at 20 °C, which is substantially higher than many published values for the UO2 lattice constant and has an improved precision by about one order of magnitude. The higher value of the lattice constant is mainly attributed to the avoidance of hyperstoichiometry in the present study and to a minor extent to the use of the currently accepted Cu Kα1 X-ray wavelength value. Many of the early studies used Cu Kα1 wavelength values that differ from the currently accepted value, which also contributed to an underestimation of the true lattice parameter.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Clinically accurate fetal ECG parameters acquired from maternal abdominal sensors
CLIFFORD, Gari; SAMENI, Reza; WARD, Mr. Jay; ROBINSON, Julian; WOLFBERG, Adam J.
2011-01-01
OBJECTIVE To evaluate the accuracy of a novel system for measuring fetal heart rate and ST-segment changes using non-invasive electrodes on the maternal abdomen. STUDY DESIGN Fetal ECGs were recorded using abdominal sensors from 32 term laboring women who had a fetal scalp electrode (FSE) placed for a clinical indication. RESULTS Good quality data for FHR estimation was available in 91.2% of the FSE segments, and 89.9% of the abdominal electrode segments. The root mean square (RMS) error between the FHR data calculated by both methods over all processed segments was 0.36 beats per minute. ST deviation from the isoelectric point ranged from 0 to 14.2% of R-wave amplitude. The RMS error between the ST change calculated by both methods averaged over all processed segments was 3.2%. CONCLUSION FHR and ST change acquired from the maternal abdomen is highly accurate and on average is clinically indistinguishable from FHR and ST change calculated using FSE data. PMID:21514560
Predicting accurate line shape parameters for CO2 transitions
NASA Astrophysics Data System (ADS)
Gamache, Robert R.; Lamouroux, Julien
2013-11-01
The vibrational dependence of CO2 half-widths and line shifts are given by a modification of the model proposed by Gamache and Hartmann [Gamache R, Hartmann J-M. J Quant Spectrosc Radiat Transfer 2004;83:119]. This model allows the half-widths and line shifts for a ro-vibrational transition to be expressed in terms of the number of vibrational quanta exchanged in the transition raised to a power and a reference ro-vibrational transition. Calculations were made for 24 bands for lower rotational quantum numbers from 0 to 160 for N2-, O2-, air-, and self-collisions with CO2. These data were extrapolated to J″=200 to accommodate several databases. Comparison of the CRB calculations with measurement gives very high confidence in the data. In the model a Quantum Coordinate is defined by (c1 |Δν1|+c2 |Δν2|+c3|Δν3|)p. The power p is adjusted and a linear least-squares fit to the data by the model expression is made. The procedure is iterated on the correlation coefficient, R, until [|R|-1] is less than a threshold. The results demonstrate the appropriateness of the model. The model allows the determination of the slope and intercept as a function of rotational transition, broadening gas, and temperature. From the data of the fits, the half-width, line shift, and the temperature dependence of the half-width can be estimated for any ro-vibrational transition, allowing spectroscopic CO2 databases to have complete information for the line shape parameters.
Sprung, J.L.; Jow, H-N ); Rollstin, J.A. ); Helton, J.C. )
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Evaluation of severe accident risks: Quantification of major input parameters
Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D. ); Amos, C.N. ); Helton, J. ); Boyd, G. )
1992-06-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but to determine the distribution of risk, and to assess the uncertainties that account for the breadth of this distribution. Off-site risk initiation by events, both internal to the power station and external to the power station. Much of this important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Source Term Panel.
Evaluation of severe accident risks: Quantification of major input parameters
Breeding, R.J.; Harper, F.T.; Brown, T.D.; Gregory, J.J.; Payne, A.C.; Gorham, E.D. ); Murfin, W. ); Amos, C.N. )
1992-03-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom, and Grand Gulf. The emphasis in this risk analysis was not on determining a so-called'' point estimate of risk. Rather, it was to determine the distribution of risk, and to discover the uncertainties that account for the breadth of this distribution. Off-site risk initiation by events, both internal to the power station and external to the power station were assessed. Much of the important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Structural Response Panel.
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control move at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a present tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor as flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
Accurate Collisional Cross-Sections: Important Non-Lte Input Data
NASA Astrophysics Data System (ADS)
Mashonkina, L.
2010-11-01
Non-LTE modelling for a particular atom requires accurate collisional excitation and ionization cross-sections for the entire system of transitions in the atom. This review concerns with inelastic collisions with electrons and neutral hydrogen atoms. For the selected atoms, H i and Ca ii, comparisons are made between electron impact excitation rates from ab initio calculations and various theoretical approximations. The effect of the use of modern data on non-LTE modelling is shown. For most transitions and most atoms, hydrogen collisional rates are calculated using a semi-empirical modification of the classical Thomson formula for ionization by electrons. Approaches used to estimate empirically the efficiency of hydrogenic collisions in the statistical equilibrium of atoms are reviewed. This research was supported by the Deutsche Forschungsgemeinschaft with grant 436 RUS 17/13/07.
Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1990-01-01
A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.
Optimal input design for aircraft parameter estimation using dynamic programming principles
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Morelli, Eugene A.
1990-01-01
A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.
Reinbolt, Jeffrey A.; Haftka, Raphael T.; Chmielewski, Terese L.; Fregly, Benjamin J.
2013-01-01
Variations in joint parameter values (axis positions and orientations in body segments) and inertial parameter values (segment masses, mass centers, and moments of inertia) as well as kinematic noise alter the results of inverse dynamics analyses of gait. Three-dimensional linkage models with joint constraints have been proposed as one way to minimize the effects of noisy kinematic data. Such models can also be used to perform gait optimizations to predict post-treatment function given pre-treatment gait data. This study evaluates whether accurate patient-specific joint and inertial parameter values are needed in three-dimensional linkage models to produce accurate inverse dynamics results for gait. The study was performed in two stages. First, we used optimization analyses to evaluate whether patient-specific joint and inertial parameter values can be calibrated accurately from noisy kinematic data, and second, we used Monte Carlo analyses to evaluate how errors in joint and inertial parameter values affect inverse dynamics calculations. Both stages were performed using a dynamic, 27 degree-of-freedom, full-body linkage model and synthetic (i.e., computer generated) gait data corresponding to a nominal experimental gait motion. In general, joint but not inertial parameter values could be found accurately from noisy kinematic data. Root-mean-square (RMS) errors were 3° and 4 mm for joint parameter values and 1 kg, 22 mm, and 74,500 kg*mm2 for inertial parameter values. Furthermore, errors in joint but not inertial parameter values had a significant effect on calculated lower-extremity inverse dynamics joint torques. The worst RMS torque error averaged 4% bodyweight*height (BW*H) due to joint parameter variations but less than 0.25% BW*H due to inertial parameter variations. These results suggest that inverse dynamics analyses of gait utilizing linkage models with joint constraints should calibrate the model’s joint parameter values to obtain accurate joint
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns
NASA Astrophysics Data System (ADS)
Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2004-05-01
Due to the polarization effect of high NA lithography, the consideration of resist effect in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effect in lithography simulation due to time-consuming procedure to extract required resist parameters and the uncertainty of measurement of some parameters. Weiss suggested simplified development model, and this model does not require the complex kinetic parameters. For the device fabrication engineers, there is a simple and accurate parameter extraction and optimizing method using Weiss model. This method needs refractive index, Dill"s parameters and development rate monitoring (DRM) data in parameter extraction. The parameters extracted using referred sequence is not accurate, so that we have to optimize the parameters to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, the FiRM of Sigma-C is utilized as a resist parameter-optimizing program. According to our study, the illumination shape, the aberration and the pupil mesh point have a large effect on the accuracy of resist parameter in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to an optimization. The simulation results using the optimized parameters by this method shows good agreement with experiments for iso-dense bias, Focus-Exposure Matrix data and sub 80nm device pattern simulation.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
Measuring accurate body parameters of dressed humans with large-scale motion using a Kinect sensor.
Xu, Huanghao; Yu, Yao; Zhou, Yu; Li, Yang; Du, Sidan
2013-01-01
Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods. PMID:24064597
NASA Astrophysics Data System (ADS)
Faybishenko, B.; McCurley, R. D.; Wang, J. Y.
2004-12-01
To assess, via numerical simulation, the effect of 12 uncertain input parameters (characterizing soil and rock properties and boundary [meteorological] conditions), on net infiltration uncertainty, the Latin Hypercube Sampling (LHS) technique (a modified Monte Carlo approach using a form of stratified sampling) was used. Each uncertain input parameter is presented using a probability distribution function, characterizing the epistemic uncertainty (which arises from the lack of knowledge about parameters-an uncertainty that can be reduced as new information becomes available). One hundred LHS realizations (using the code LHS V2.50 developed at Sandia National Laboratories) of the uncertain input parameters were used to simulate the net infiltration over the Yucca Mountain repository footprint. Simulations were carried out using the code INFIL VA-2.a1 (a modified USGS code INFIL V2.0). The results of simulations were then used to determine the net infiltration probability distribution function. According to theoretical considerations, for 12 uncertain input parameters, from 15 to 36 realizations using the LHS technique should be sufficient to get meaningful results. In this presentation, we will show that the theoretical considerations may significantly underestimate the required number of realizations for the evaluation of the correlation between the net infiltration and uncertain input parameters. We will demonstrate that the calculated net infiltration rate (presented as a probability distribution function) oscillates as a function of simulation runs, and that the correlation between net infiltration rate and the uncertain input parameters depends on the number of simulation runs. For example, the correlation coefficient between the soil (or rock) permeability and net infiltration stabilizes only after 60-80 realizations. The results of the correlation analysis show that the correlation to net infiltration is highest for precipitation, bedrock permeability
Accurate and transferable extended Hückel-type tight-binding parameters
NASA Astrophysics Data System (ADS)
Cerdá, J.; Soria, F.
2000-03-01
We show how the simple extended Hückel theory can be easily parametrized in order to yield accurate band structures for bulk materials, while the resulting optimized atomic orbital basis sets present good transferability properties. The number of parameters involved is exceedingly small, typically ten or eleven per structural phase. We apply the method to almost fifty elemental and compound bulk phases.
NASA Astrophysics Data System (ADS)
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
Input parameters to codes which analyze LMFBR wire-wrapped bundles
Hawley, J.T.; Chan, Y.N.; Todreas, N.E.
1980-12-01
This report provides a current summary of recommended values of key input parameters required by ENERGY code analysis of LMFBR wire wrapped bundles. This data is based on the interpretation of experimental results from the MIT and other available laboratory programs.
NASA Astrophysics Data System (ADS)
Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.
2015-09-01
Selection or calibration of particle property input parameters is one of the key problematic aspects for the implementation of the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat bottom cylindrical container onto a plate. In this case study, particle properties, such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both final angle of repose and FR, followed by the role of inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and then potentially facilitates their selection or calibration. It is concluded that shortening the process for input parameters selection and calibration can help in the implementation of DEM.
Capote, R. , E-Mail: r.capotenoy@iaea.org; Herman, M.; Oblozinsky, P.; Young, P.G.; Goriely, S.; Belgya, T.; Ignatyuk, A.V.; Koning, A.J.; Hilaire, S.; Plujko, V.A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M.B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V.M.; Reffo, G.
2009-12-15
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through (http://www-nds.iaea.org/RIPL-3/). This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and {gamma}-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains
NASA Astrophysics Data System (ADS)
Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains
Capote, R.; Herman, M.; Capote,R.; Herman,M.; Oblozinsky,P.; Young,P.G.; Goriely,S.; Belgy,T.; Ignatyuk,A.V.; Koning,A.J.; Hilaire,S.; Pljko,V.A.; Avrigeanu,M.; Bersillon,O.; Chadwick,M.B.; Fukahori,T.; Ge, Zhigang; Han,Yinl,; Kailas,S.; Kopecky,J.; Maslov,V.M.; Reffo,G.; Sin,M.; Soukhovitskii,E.Sh.; Talou,P
2009-12-01
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and {gamma}-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains
NASA Astrophysics Data System (ADS)
Smith, Z. K.; Steenburgh, R.; Fry, C. D.; Dryer, M.
2009-12-01
Predictions of interplanetary shock arrivals at Earth are important to space weather because they are often followed by geomagnetic disturbances that disrupt human technologies. The success of numerical simulation predictions depends on the codes and on the inputs obtained from solar observations. The inputs are usually divided into the more slowly varying background solar wind, onto which short-duration solar transient events are superposed. This paper examines the dependence of the prediction success on the range of values of the solar transient inputs. These input parameters are common to many 3-D MHD codes. The predictions of the Hakamada-Akasofu-Fry version 2 (HAFv2) model were used because its predictions of shock arrivals were tested, informally in the operational environment, from 1997 to 2006. The events list and HAFv2's performance were published in a series of three papers. The third event set is used to investigate the success and accuracy of the predictions in terms of the input parameter ranges (considered individually). By defining three thresholds for the input speed, duration, and X-ray class, it is possible to categorize the prediction outcomes by these input ranges. The X-ray class gives the most successful classification. Above the highest threshold, 89% of the predictions were successful while below the lowest threshold, only 40% were successful. The accuracy, measured in terms of the time differences between the observed and predicted shock arrivals, also shows largest improvement for the X-ray class. Guidelines are presented for space weather forecasters using the HAFv2 or other interplanetary simulation models.
Bubbico, Roberto; Mazzarotta, Barbara
2008-03-01
In the present paper the accidental release of toxic chemicals has been taken into consideration, and a sensitivity analysis study of the corresponding consequences calculation has been carried out. Four different toxic chemicals have been chosen for the simulations, and the effect of the variability of the main input parameters on the extension of the impact areas has been assessed. The results show that the influence of these parameters depends on the physical properties of the released substance and that not always the widely known rules of thumb, such as the positive influence of the wind velocity on gas dispersion, apply. In particular, the boiling temperature of the chemical has revealed to be the main parameter affecting the type of dependence of the impact distances on the input variables. PMID:17630190
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model
Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y
2011-10-27
Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
NASA Astrophysics Data System (ADS)
Peng, Liang-You; Gong, Qihuang
2010-12-01
The accurate computations of hydrogenic continuum wave functions are very important in many branches of physics such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields, etc. Although there already exist various algorithms and codes, most of them are only reliable in a certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum number. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and corresponding Coulomb phase shifts at a wide range of parameters. Without any essential restrict to angular momentum number, the present code is able to give reliable results at the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code is very efficient, which should find numerous applications in many fields such as strong field physics. Program summaryProgram title: HContinuumGautchi Catalogue identifier: AEHD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 1233 No. of bytes in distributed program, including test data, etc.: 7405 Distribution format: tar.gz Programming language: Fortran90 in fixed format Computer: AMD Processors Operating system: Linux RAM: 20 MBytes Classification: 2.7, 4.5 Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although there have already existed various algorithms and codes, most of them can only be applicable and reliable in a certain range of parameters. We present here an accurate FORTRAN program for
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depend on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can be in principle extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depend on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can be in principle extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
Impacts of input parameter spatial aggregation on an agricultural nonpoint source pollution model
NASA Astrophysics Data System (ADS)
FitzHugh, T. W.; Mackay, D. S.
2000-09-01
The accuracy of agricultural nonpoint source pollution models depends in part on how well model input parameters describe the relevant characteristics of the watershed. The spatial extent of input parameter aggregation has previously been shown to have a substantial impact on model output. This study investigates this problem using the Soil and Water Assessment Tool (SWAT), a distributed-parameter agricultural nonpoint source pollution model. The primary question addressed here is: how does the size or number of subwatersheds used to partition the watershed affect model output, and what are the processes responsible for model behavior? SWAT was run on the Pheasant Branch watershed in Dane County, WI, using eight watershed delineations, each with a different number of subwatersheds. Model runs were conducted for the period 1990-1996. Streamflow and outlet sediment predictions were not seriously affected by changes in subwatershed size. The lack of change in outlet sediment is due to the transport-limited nature of the Pheasant Branch watershed and the stable transport capacity of the lower part of the channel network. This research identifies the importance of channel parameters in determining the behavior of SWAT's outlet sediment predictions. Sediment generation estimates do change substantially, dropping by 44% between the coarsest and the finest watershed delineations. This change is primarily due to the sensitivity of the runoff term in the Modified Universal Soil Loss Equation to the area of hydrologic response units (HRUs). This sensitivity likely occurs because SWAT was implemented in this study with a very detailed set of HRUs. In order to provide some insight on the scaling behavior of the model two indexes were derived using the mathematics of the model. The indexes predicted SWAT scaling behavior from the data inputs without a need for running the model. Such indexes could be useful for model users by providing a direct way to evaluate alternative models
Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters
NASA Astrophysics Data System (ADS)
Falkenberg, T. V.; Vršnak, B.; Taktakishvili, A.; Odstrcil, D.; MacNeice, P.; Hesse, M.
2010-06-01
Understanding space weather is not only important for satellite operations and human exploration of the solar system but also to phenomena here on Earth that may potentially disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections (CMEs), but in order to predict the caused effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output are the input parameters of upper limit for ambient solar wind velocity, CME density, and elongation factor, regardless of whether one's main interest is arrival time, signal shape, or signal amplitude of the ICME. We find that though ENLILv2.5b currently does not include the magnetic cloud of the ICME, it replicates the signal at L1 well in the studied event. The arrival time difference between satellite data and the ENLILv2.5b baseline run of this study is less than 30 min.
Najafizadeh, Laleh; Gandjbakhche, Amir H.; Pourrezaei, Kambiz; Daryoush, Afshin
2013-01-01
Abstract. Modeling behavior of broadband (30 to 1000 MHz) frequency modulated near-infrared (NIR) photons through a phantom is the basis for accurate extraction of optical absorption and scattering parameters of biological turbid media. Photon dynamics in a phantom are predicted using both analytical and numerical simulation and are related to the measured insertion loss (IL) and insertion phase (IP) for a given geometry based on phantom optical parameters. Accuracy of the extracted optical parameters using finite element method (FEM) simulation is compared to baseline analytical calculations from the diffusion equation (DE) for homogenous brain phantoms. NIR spectroscopy is performed using custom-designed, broadband, free-space optical transmitter (Tx) and receiver (Rx) modules that are developed for photon migration at wavelengths of 680, 780, and 820 nm. Differential detection between two optical Rx locations separated by 0.3 cm is employed to eliminate systemic artifacts associated with interfaces of the optical Tx and Rx with the phantoms. Optical parameter extraction is achieved for four solid phantom samples using the least-square-error method in MATLAB (for DE) and COMSOL (for FEM) simulation by fitting data to measured results over broadband and narrowband frequency modulation. Confidence in numerical modeling of the photonic behavior using FEM has been established here by comparing the transmission mode’s experimental results with the predictions made by DE and FEM for known commercial solid brain phantoms. PMID:23322361
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between remote sensing satellite sensor and objects is one of the most common reasons for remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, point spread function (PSF) should be estimated firstly for image restoration. Identifying motion blur direction and length accurately is very crucial for PSF and restoring image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain the parameters by using Radon transform. However, serious noise existing in actual remote sensing images often causes the stripes unobvious. The parameters would be difficult to calculate and the error of the result relatively big. In this paper, an improved motion blur parameter identification method to noisy remote sensing image is proposed to solve this problem. The spectrum characteristic of noisy remote sensing image is analyzed firstly. An interactive image segmentation method based on graph theory called GrabCut is adopted to effectively extract the edge of the light center in the spectrum. Motion blur direction is estimated by applying Radon transform on the segmentation result. In order to reduce random error, a method based on whole column statistics is used during calculating blur length. Finally, Lucy-Richardson algorithm is applied to restore the remote sensing images of the moon after estimating blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
Zhang, Xuesong; Liang, Faming; Yu, Beibei; Zong, Ziliang
2011-11-09
Estimating uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have been proved powerful tools for quantifying uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameter into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform the BNNs that only consider uncertainties associated with parameter and model structure. Critical evaluation of posterior distribution of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters show that the assumptions held in our BNNs are not well supported. Further understanding of characteristics of different uncertainty sources and including output error into the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.
Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Burger, A.; Gorgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.
2014-09-03
Here, the proton spectrum from the ^{57}Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacing and to discrete levels and using the spin cutoff parameter with much weaker excitation energy dependence than it is predicted by the Fermi-gas model.
Ming Parameter Input: Emma Model Redox Half Reaction Equation Deltag G Corrections for pH
D.M. Jolley
1998-07-23
The purpose of this calculation is to provide appropriate input parameters for use in MING V 1.0 (CSCI 300 18 V 1.0). This calculation corrects the Grogan and McKinley (1990) values for {Delta}G so that the data will function in the MING model. The Grogan and McKinley (1990) {Delta}G data are presented for a pH of 12 whereas the MING model requires that the {Delta}G be reported at standard conditions (i.e. pH of 0).
Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Burger, A.; Gorgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.
2014-09-03
Here, the proton spectrum from the 57Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacing and to discrete levels and using the spin cutoff parameter with much weaker excitation energy dependence than it is predicted by the Fermi-gas model.
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems such as traction control, electronic stability control (ESC), rollover prevention and lane departure avoidance systems are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significant enhanced if the real-time values of several vehicle parameters and state variables, namely tire-road friction coefficient, slip angle, roll angle, and rollover index, can be known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, due to unknown and changing plant parameters, and due to the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard Lipschitz assumption based nonlinear observer. The developed nonlinear observer is utilized for estimation of slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is
Level Density Inputs in Nuclear Reaction Codes and the Role of the Spin Cutoff Parameter
NASA Astrophysics Data System (ADS)
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Bürger, A.; Görgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.
2014-05-01
The proton spectrum from the 57Fe(α, p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels and using the spin cutoff parameter with much a weaker excitation energy dependence than predicted by the Fermi-gas model.
NASA Astrophysics Data System (ADS)
Unal, B.; Askan, A.
2014-12-01
Earthquakes are among the most destructive natural disasters in Turkey and it is important to assess seismicity in different regions with the use of seismic networks. Bursa is located in Marmara Region, Northwestern Turkey and to the south of the very active North Anatolian Fault Zone. With around three million inhabitants and key industrial facilities of the country, Bursa is the fourth largest city in Turkey. Since most of the focus is on North Anatolian Fault zone, despite its significant seismicity, Bursa area has not been investigated extensively until recently. For reliable seismic hazard estimations and seismic design of structures, assessment of potential ground motions in this region is essential using both recorded and simulated data. In this study, we employ stochastic finite-fault simulation with dynamic corner frequency approach to model previous events as well to assess potential earthquakes in Bursa. To ensure simulations with reliable synthetic ground motion outputs, the input parameters must be carefully derived from regional data. In this study, using strong motion data collected at 33 stations in the region, site-specific parameters such as near-surface high frequency attenuation parameter and amplifications are obtained. Similarly, source and path parameters are adopted from previous studies that as well employ regional data. Initially, major previous events in the region are verified by comparing the records with the corresponding synthetics. Then simulations of scenario events in the region are performed. We present the results in terms of spatial distribution of peak ground motion parameters and time histories at selected locations.
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
SummaryThe simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000) with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83) and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation in the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations was in the scenario of most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had only significant impact on irrigation requirements predictions for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (specially with abundant and concentrated rain events) and in crops which cycles coincide in part with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results
Comparisons of CAP88PC version 2.0 default parameters to site specific inputs
Lehto, M. A.; Courtney, J. C.; Charter, N.; Egan, T.
2000-03-02
The effects of varying the input for the CAP88PC Version 2.0 program on the total effective dose equivalents (TEDEs) were determined for hypothetical releases from the Hot Fuel Examination Facility (HFEF) located at the Argonne National Laboratory site on the Idaho National Engineering and Environmental Laboratory (INEEL). Values for site specific meteorological conditions and agricultural production parameters were determined for the 80 km radius surrounding the HFEF. Four nuclides, {sup 3}H, {sup 85}Kr, {sup 129}I, and {sup 137}Cs (with its short lived progeny, {sup 137m}Ba) were selected for this study; these are the radioactive materials most likely to be released from HFEF under normal or abnormal operating conditions. Use of site specific meteorological parameters of annual precipitation, average temperature, and the height of the inversion layer decreased the TEDE from {sup 137}Cs-{sup 137m}Ba up to 36%; reductions for other nuclides were less than 3%. Use of the site specific agricultural parameters reduced TEDE values between 7% and 49%, depending on the nuclide. Reductions are associated with decreased committed effective dose equivalents (CEDEs) from the ingestion pathway. This is not surprising since the HFEF is located well within the INEEL exclusion area, and the surrounding area closest to the release point is a high desert with limited agricultural diversity. Livestock and milk production are important in some counties at distances greater than 30 km from the HFEF.
Parameter estimates in dynamic models for PUB - influence of input data quality and scale
NASA Astrophysics Data System (ADS)
Arheimer, Berit; Dahné, Joel; Donnelly, Chantal; Strömqvist, Johan
2010-05-01
The Swedish Meteorological and Hydrological Institute (SMHI) produces hydrological predictions in ungauged basins of both water quantity and quality at different scales, using different input databases. This presentation will demonstrate two such model set-ups and the difference in estimated parameter values of the Hydrological Predictions for the Environment (HYPE) model. The model results are compared, and validation in independent sites is assumed to show the implications for PUB. The HYPE model is calibrated stepwise for a whole domain when applied, using a hydrological response units concept with interactive check between hydrology and hydrochemistry for soil/groundwater and rivers/lakes, respectively. Relatively few monitoring sites are used to receive reasonable results for the whole domain. The national S-HYPE model system (450 000 km2) produces predictions in 17 313 subbasins, where observations of water discharge are available in 300 and nutrient concentrations in 600 outlets. About 10% of these were used for model calibration and the rest for independent model validation, considered to represent the ungauged conditions. When applying the model for the whole Baltic Sea basin (1 700 000 km2), predictions are made in 5 100 subbasins. Observations are then available for water discharge in 160 unregulated river reaches and for nutrients in 761 subbasin outlets. About half of the water stations were used for calibration and 10% of the nutrient observations. Model performance is calculated using different evaluation criteria for independent sites. The differences in model performance between the national (S-HYPE) and the Baltic Sea basin (Balt-HYPE) scale applications can be attributed to either differences in model inputs or differences in calibration. In the Swedish application, more detailed input data on physiography, emissions and meteorology have been used for the higher resolution, while generally available databases and generic methods have been used
NASA Astrophysics Data System (ADS)
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be seen as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offer means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construct, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if uncalibrated ones happened to be so.While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or making some Gaussian assumptions. By repeatedly picking at random which interferograms, which calibrator stars, and which are the errors on their diameters, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p.With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α{sub 2} ≃ 2α{sub 1}.
Accurate parameters for HD 209458 and its planet from HST spectrophotometry
NASA Astrophysics Data System (ADS)
del Burgo, C.; Allende Prieto, C.
2016-08-01
We present updated parameters for the star HD 209458 and its transiting giant planet. The stellar angular diameter θ=0.2254±0.0017 mas is obtained from the average ratio between the absolute flux observed with the Hubble Space Telescope and that of the best-fitting Kurucz model atmosphere. This angular diameter represents an improvement in precision of more than four times compared to available interferometric determinations. The stellar radius R⋆=1.20±0.05 R⊙ is ascertained by combining the angular diameter with the Hipparcos trigonometric parallax, which is the main contributor to its uncertainty, and therefore the radius accuracy should be significantly improved with Gaia's measurements. The radius of the exoplanet Rp=1.41±0.06 RJ is derived from the corresponding transit depth in the light curve and our stellar radius. From the model fitting, we accurately determine the effective temperature, Teff=6071±20 K, which is in perfect agreement with the value of 6070±24 K calculated from the angular diameter and the integrated spectral energy distribution. We also find precise values from recent Padova Isochrones, such as R⋆=1.20±0.06 R⊙ and Teff=6099±41 K. We arrive at a consistent picture from these methods and compare the results with those from the literature.
Covey, Curt; Lucas, Donald D.; Tannahill, John; Garaizar, Xabier; Klein, Richard
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination.This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
Covey, Curt; Lucas, Donald D.; Tannahill, John; Garaizar, Xabier; Klein, Richard
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination.This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling,more » the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.« less
NASA Astrophysics Data System (ADS)
Kaiser, Andreas; Buchholz, Arno; Neugirg, Fabian; Schindewolf, Marcus
2016-04-01
Calanchi landscapes in central Italy have been subject to geoscientific research since many years, not exclusively but especially for questions regarding soil erosion and land degradation. Seasonal dynamics play an important role for morphological processes within the Calanchi. As in most Mediterranean landscapes also in the research site at Val d'Orcia long and dry summers are ended by heavy rainfall events in autumn. The latter contribute to most of the annual sediment output of the incised hollows and can cause damage to agricultural land and infrastructures. While research for understanding Calanco development is of high importance, the complex morphology and thus limited accessibility impedes in situ works. To still improve the understanding of morphodynamics without unnecessarily impinging natural conditions a remote sensing and erosion modelling approach was carried out in the presented work. UAV and LiDAR based very high resolution digital surface models were produced and served as an input parameter for the raster and physically based soil erosion model EROSION3D. Additionally, data on infiltration, runoff generation and sediment detachment were generated with artificial rainfall simulations - the most invasive but unavoidable method. To increase the 1 m plot length virtually to around 20 m the sediment loaded runoff water was again introduced to the plot by a reflux system. Rather elaborate logistics were required to set up the simulator on strongly inclined slopes, to establish sufficient water supply and to secure the simulator on the slope but experiments produced plausible results and valuable input data for modelling. The model results are then compared to the repeated UAV and LiDAR campaigns and the resulting digital elevation models of difference. By simulating different rainfall and moisture scenarios and implementing in situ measured weather data runoff induced processes can be distinguished from gravitational slides and rockfall.
Ralph, Duncan K.; Matsen, Frederick A.
2016-01-01
VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM “factorization” strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM. PMID:26751373
Ralph, Duncan K; Matsen, Frederick A
2016-01-01
VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM "factorization" strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM. PMID:26751373
Accurate analytical method for the extraction of solar cell model parameters
NASA Astrophysics Data System (ADS)
Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.
1984-05-01
Single diode solar cell model parameters are rapidly extracted from experimental data by means of the presently derived analytical expressions. The parameter values obtained have a less than 5 percent error for most solar cells, in light of the extraction of model parameters for two cells of differing quality which were compared with parameters extracted by means of the iterative method.
Improving Rotor-Stator Interaction Noise Code Through Analysis of Input Parameters
NASA Technical Reports Server (NTRS)
Unton, Timothy J.
2004-01-01
There are two major sources of aircraft noise. The first is from the airframe and the second is from the engines. The focus of the acoustics branch at NASA Glenn is on the engine noise sources. There are two major sources of engine noise; fan noise and jet noise. Fan noise, produced by rotating machinery of the engine, consists of both tonal noise, which occurs at discrete frequencies, and broadband noise, which occurs across a wide range of frequencies. The focus of my assignment is on the broadband noise generated by the interaction of fan flow turbulence and the stator blades. such as the sweep and stagger angles and blade count, as well as the flow parameters such as intensity of turbulence in the flow. The tool I employed in this work is a computer program that predicts broadband noise from fans. The program assumes that the complex shape of the curved blade can be represented as a single flat plate, allowing it to use fairly simple equations that can be solved in a reasonable amount of time. While the results from such representation provided reasonable estimates of the broadband noise levels, they did not usually represent the entire spectrum accurately. My investigation found that the discrepancy between data and theory can be improved if the leading edge and the trailing edge of the blade are treated separately. Using this approach, I reduced the maximum error in noise level from a high of 30% to less than 5% for the cases investigated. Detailed results of this investigation will be discussed at my presentation. The objective of this study is to investigate the influence of geometric parameters
NASA Astrophysics Data System (ADS)
Mellinger, Philippe; Döhler, Michael; Mevel, Laurent
2016-09-01
An important step in the operational modal analysis of a structure is to infer on its dynamic behavior through its modal parameters. They can be estimated by various modal identification algorithms that fit a theoretical model to measured data. When output-only data is available, i.e. measured responses of the structure, frequencies, damping ratios and mode shapes can be identified assuming that ambient sources like wind or traffic excite the system sufficiently. When also input data is available, i.e. signals used to excite the structure, input/output identification algorithms are used. The use of input information usually provides better modal estimates in a desired frequency range. While the identification of the modal mass is not considered in this paper, we focus on the estimation of the frequencies, damping ratios and mode shapes, relevant for example for modal analysis during in-flight monitoring of aircrafts. When identifying the modal parameters from noisy measurement data, the information on their uncertainty is most relevant. In this paper, new variance computation schemes for modal parameters are developed for four subspace algorithms, including output-only and input/output methods, as well as data-driven and covariance-driven methods. For the input/output methods, the known inputs are considered as realizations of a stochastic process. Based on Monte Carlo validations, the quality of identification, accuracy of variance estimations and sensor noise robustness are discussed. Finally these algorithms are applied on real measured data obtained during vibrations tests of an aircraft.
Butcher, B.M.
1997-08-01
A summary of the input parameter values used in final predictions of closure and waste densification in the Waste Isolation Pilot Plant disposal room is presented, along with supporting references. These predictions are referred to as the final porosity surface data and will be used for WIPP performance calculations supporting the Compliance Certification Application to be submitted to the U.S. Environmental Protection Agency. The report includes tables and list all of the input parameter values, references citing their source, and in some cases references to more complete descriptions of considerations leading to the selection of values.
NASA Astrophysics Data System (ADS)
Miyasato, Yoshihiko
The problem of constructing model reference adaptive H∞ control for distributed parameter systems of hyperbolic type preceded by unknown input nonlinearity such as dead zone or backlash, is considered in this paper. Distributed parameter systems are infinite dimensional processes, but the proposed control scheme is constructed from finite dimensional controllers. An adaptive inverse model is introduced to estimate and compensate the input nonlinearity. The stabilizing control signal is added to regulate the effect of spill-over terms, and it is derived as a solution of certain H∞ control problem where the residual part of the inverse model and the spill-over term are considered as external disturbances to the process.
Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene
2003-01-01
A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
NASA Astrophysics Data System (ADS)
Li, W. P.; Luo, B.; Huang, H.
2016-02-01
This paper presents a vibration control strategy for a two-link Flexible Joint Manipulator (FJM) with a Hexapod Active Manipulator (HAM). A dynamic model of the multi-body, rigid-flexible system composed of an FJM, a HAM and a spacecraft was built. A hybrid controller was proposed by combining the Input Shaping (IS) technique with an Adaptive-Parameter Auto Disturbance Rejection Controller (APADRC). The controller was used to suppress the vibration caused by external disturbances and input motions. Parameters of the APADRC were adaptively adjusted to ensure the characteristic of the closed loop system to be a given reference system, even if the configuration of the manipulator significantly changes during motion. Because precise parameters of the flexible manipulator are not required in the IS system, the operation of the controller was sufficiently robust to accommodate uncertainties in system parameters. Simulations results verified the effectiveness of the HAM scheme and controller in the vibration suppression of FJM during operation.
Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua
2012-01-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
NASA Astrophysics Data System (ADS)
Ghezzi, Luan; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji; Paegert, Martin
2014-12-01
Studies of Galactic chemical, and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature T eff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ~ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from the high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for T eff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An additional test was
Ghezzi, Luan; Da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; Ge, Jian; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Cargile, Phillip; Pepper, Joshua; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Wang, Ji; and others
2014-12-01
Studies of Galactic chemical, and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature T {sub eff}, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ∼ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from the high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for T {sub eff}, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An
FAST TRACK COMMUNICATION Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to discover the roles of various correlation effects explicitly to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. It is possible to ascertain suitable anchor and probe lines for the studies of possible variation in the fine-structure constant by using the above results in the considered systems.
Accurate nuclear masses from a three parameter Kohn-Sham DFT approach (BCPM)
Baldo, M.; Robledo, L. M.; Schuck, P.; Vinas, X.
2012-10-20
Given the promising features of the recently proposed Barcelona-Catania-Paris (BCP) functional [1], it is the purpose of this work to still improve on it. It is, for instance, shown that the number of open parameters can be reduced from 4-5 to 2-3, i.e. by practically a factor of two without deteriorating the results.
Accurate parameters of the oldest known rocky-exoplanet hosting system: Kepler-10 revisited
Fogtmann-Schulz, Alexandra; Hinrup, Brian; Van Eylen, Vincent; Christensen-Dalsgaard, Jørgen; Kjeldsen, Hans; Silva Aguirre, Víctor; Tingley, Brandon
2014-02-01
Since the discovery of Kepler-10, the system has received considerable interest because it contains a small, rocky planet which orbits the star in less than a day. The system's parameters, announced by the Kepler team and subsequently used in further research, were based on only five months of data. We have reanalyzed this system using the full span of 29 months of Kepler photometric data, and obtained improved information about its star and the planets. A detailed asteroseismic analysis of the extended time series provides a significant improvement on the stellar parameters: not only can we state that Kepler-10 is the oldest known rocky-planet-harboring system at 10.41 ± 1.36 Gyr, but these parameters combined with improved planetary parameters from new transit fits gives us the radius of Kepler-10b to within just 125 km. A new analysis of the full planetary phase curve leads to new estimates on the planetary temperature and albedo, which remain degenerate in the Kepler band. Our modeling suggests that the flux level during the occultation is slightly lower than at the transit wings, which would imply that the nightside of this planet has a non-negligible temperature.
NASA Astrophysics Data System (ADS)
Hochlaf, M.; Puzzarini, C.; Senent, M. L.
2015-07-01
We present multi-component computations for rotational constants, vibrational and torsional levels of medium-sized molecules. Through the treatment of two organic sulphur molecules, ethyl mercaptan and dimethyl sulphide, which are relevant for atmospheric and astrophysical media, we point out the outstanding capabilities of explicitly correlated coupled clusters (CCSD(T)-F12) method in conjunction with the cc-pVTZ-F12 basis set for the accurate predictions of such quantities. Indeed, we show that the CCSD(T)-F12/cc-pVTZ-F12 equilibrium rotational constants are in good agreement with those obtained by means of a composite scheme based on CCSD(T) calculations that accounts for the extrapolation to the complete basis set (CBS) limit and core-correlation effects [CCSD(T)/CBS+CV], thus leading to values of ground-state rotational constants rather close to the corresponding experimental data. For vibrational and torsional levels, our analysis reveals that the anharmonic frequencies derived from CCSD(T)-F12/cc-pVTZ-F12 harmonic frequencies and anharmonic corrections (Δν = ω - ν) at the CCSD/cc-pVTZ level closely agree with experimental results. The pattern of the torsional transitions and the shape of the potential energy surfaces along the torsional modes are also well reproduced using the CCSD(T)-F12/cc-pVTZ-F12 energies. Interestingly, this good accuracy is accompanied with a strong reduction of the computational costs. This makes the procedures proposed here as schemes of choice for effective and accurate prediction of spectroscopic properties of organic compounds. Finally, popular density functional approaches are compared with the coupled cluster (CC) methodologies in torsional studies. The long-range CAM-B3LYP functional of Handy and co-workers is recommended for large systems.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1991-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology,necessitating a hospital visit, have an important impact in the life of affected individuals and impose a major economic burden to societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region, could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collecterd visits for serious allergic reactions of unknown etiology from two major hospitals in the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital severe allergic reactions visits. Our approach may contribute towards the development of satellite-based modules, for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment related diseases and conditions. PMID:25794106
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-01-01
With the development of the vehicle industry, controlling stability has become more and more important. Techniques of evaluating vehicle stability are in high demand. As a common method, usually GPS sensors and INS sensors are applied to measure vehicle stability parameters by fusing data from the two system sensors. Although prior model parameters should be recognized in a Kalman filter, it is usually used to fuse data from multi-sensors. In this paper, a robust, intelligent and precise method to the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and real experiment is made to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-01-01
With the development of the vehicle industry, controlling stability has become more and more important. Techniques of evaluating vehicle stability are in high demand. As a common method, usually GPS sensors and INS sensors are applied to measure vehicle stability parameters by fusing data from the two system sensors. Although prior model parameters should be recognized in a Kalman filter, it is usually used to fuse data from multi-sensors. In this paper, a robust, intelligent and precise method to the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. The results show the advantages of the approach. Finally, a simulation and real experiment is made to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of squares(LS) procedure which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and 410 to 1316 in the transverse colon.
Accurate solutions, parameter studies and comparisons for the Euler and potential flow equations
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Batina, John T.
1988-01-01
Parameter studies are conducted using the Euler and potential flow equation models for steady and unsteady flows in both two and three dimensions. The Euler code is an implicit, upwind, finite volume code which uses the Van Leer method of flux vector splitting which has been recently extended for use on dynamic meshes and maintain all the properties of the original splitting. The potential flow code is an implicit, finite difference method for solving the transonic small disturbance equations and incorporates both entropy and vorticity corrections into the solution procedures thereby extending its applicability into regimes where shock strength normally precludes its use. Parameter studies resulting in benchmark type calculations include the effects of spatial and temporal refinement, spatial order of accuracy, far field boundary conditions for steady flow, frequency of oscillation, and the use of subiterations at each time step to reduce linearization and factorization errors. Comparisons between Euler and potential flow results are made, as well as with experimental data where available.
Accurate solutions, parameter studies and comparisons for the Euler and potential flow equations
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Batina, John T.
1988-01-01
Parameter studies are conducted using the Euler and potential flow equation models for unsteady and steady flows in both two and three dimensions. The Euler code is an implicit, upwind, finite volume code which uses the Van Leer method of flux-vector-splitting which has been recently extended for use on dynamic meshes and maintain all the properties of the original splitting. The potential flow code is an implicit, finite difference method for solving the transonic small disturbance equations and incorporates both entropy and vorticity corrections into the solution procedures thereby extending its applicability into regimes where shock strength normally precludes its use. Parameter studies resulting in benchmark type calculations include the effects of spatial and temporal refinement, spatial order of accuracy, far field boundary conditions for steady flow, frequency of oscillation, and the use of subiterations at each time step to reduce linearization and factorization errors. Comparisons between Euler and potential flows results are made as well as with experimental data where available.
Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C
2015-09-01
Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is not currently an objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return-to-play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data in animal and human testing indicates that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the
Methods to Register Models and Input/Output Parameters for Integrated Modeling
Droppo, James G.; Whelan, Gene; Tryby, Michael E.; Pelton, Mitchell A.; Taira, Randal Y.; Dorow, Kevin E.
2010-07-10
Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework’s functionality to bridge the gap between the user’s knowledge of the components being linked. The framework, therefore, needs the capability to assimilate a wide range of model-specific input/output requirements as well as their associated assumptions and constraints. The process of assimilating such disparate components into an integrated modeling framework varies in complexity and difficulty. Several factors influence the relative ease of assimilating components, including, but not limited to, familiarity with the components being assimilated, familiarity with the framework and its tools that support the assimilation process, level of documentation associated with the components and the framework, and design structure of the components and framework. This initial effort reviews different approaches for assimilating models and their model-specific input/output requirements: 1) modifying component models to directly communicate with the framework (i.e., through an Application Programming Interface), 2) developing model-specific external wrappers such that no component model modifications are required, 3) using parsing tools to visually map pre-existing input/output files, and 4) describing and linking models as dynamic link libraries. Most of these approaches are illustrated using the widely distributed modeling system called Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES). The review concludes that each has its strengths and weakness, the factors that determine which approaches work best in a given application.
2012-01-01
A natural bond orbital (NBO) analysis of unpaired electron spin density in metalloproteins is presented, which allows a fast and robust calculation of paramagnetic NMR parameters. Approximately 90% of the unpaired electron spin density occupies metal–ligand NBOs, allowing the majority of the density to be modeled by only a few NBOs that reflect the chemical bonding environment. We show that the paramagnetic relaxation rate of protons can be calculated accurately using only the metal–ligand NBOs and that these rates are in good agreement with corresponding rates measured experimentally. This holds, in particular, for protons of ligand residues where the point-dipole approximation breaks down. To describe the paramagnetic relaxation of heavy nuclei, also the electron spin density in the local orbitals must be taken into account. Geometric distance restraints for 15N can be derived from the paramagnetic relaxation enhancement and the Fermi contact shift when local NBOs are included in the analysis. Thus, the NBO approach allows us to include experimental paramagnetic NMR parameters of 15N nuclei as restraints in a structure optimization protocol. We performed a molecular dynamics simulation and structure determination of oxidized rubredoxin using the experimentally obtained paramagnetic NMR parameters of 15N. The corresponding structures obtained are in good agreement with the crystal structure of rubredoxin. Thus, the NBO approach allows an accurate description of the geometric structure and the dynamics of metalloproteins, when NMR parameters are available of nuclei in the immediate vicinity of the metal-site. PMID:22329704
NASA Astrophysics Data System (ADS)
Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.
2016-03-01
Acoustic-transfection technique has been developed for the first time. We have developed acoustic-transfection by integrating a high frequency ultrasonic transducer and a fluorescence microscope. High frequency ultrasound with the center frequency over 150 MHz can focus acoustic sound field into a confined area with the diameter of 10 μm or less. This focusing capability was used to perturb lipid bilayer of cell membrane to induce intracellular delivery of macromolecules. Single cell level imaging was performed to investigate the behavior of a targeted single-cell after acoustic-transfection. FRET-based Ca2+ biosensor was used to monitor intracellular concentration of Ca2+ after acoustic-transfection and the fluorescence intensity of propidium iodide (PI) was used to observe influx of PI molecules. We changed peak-to-peak voltages and pulse duration to optimize the input parameters of an acoustic pulse. Input parameters that can induce strong perturbations on cell membrane were found and size dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules by acoustic-transfection, we applied several acoustic pulses and the intensity of PI fluorescence increased step wise. Finally, optimized input parameters of acoustic-transfection system were used to deliver pMax-E2F1 plasmid and GFP expression 24 hours after the intracellular delivery was confirmed using HeLa cells.
NASA Astrophysics Data System (ADS)
Filioglou, M.; Balis, D.; Siomos, N.; Poupkou, A.; Dimopoulos, S.; Chaikovsky, A.
2016-06-01
A targeted sensitivity study of the LIRIC algorithm was considered necessary to estimate the uncertainty introduced to the volume concentration profiles, due to the arbitrary selection of user-defined input parameters. For this purpose three different tests were performed using Thessaloniki's Lidar data. Overall, tests in the selection of the regularization parameters, an upper and a lower limit test were performed. The different sensitivity tests were applied on two cases with different predominant aerosol types, a dust episode and a typical urban case.
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-01-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. PMID:25883146
NASA Astrophysics Data System (ADS)
Martínez, M. J.; Marco, F. J.; López, J. A.
2009-02-01
The Hipparcos catalog provides a reference frame at optical wavelengths for the new International Celestial Reference System (ICRS). This new reference system was adopted following the resolution agreed at the 23rd IAU General Assembly held in Kyoto in 1997. Differences in the Hipparcos system of proper motions and the previous materialization of the reference frame, the FK5, are expected to be caused only by the combined effects of the motion of the equinox of the FK5 and the precession of the equator and the ecliptic. Several authors have pointed out an inconsistency between the differences in proper motion of the Hipparcos-FK5 and the correction of the precessional values derived from VLBI and lunar laser ranging (LLR) observations. Most of them have claimed that these discrepancies are due to slightly biased proper motions in the FK5 catalog. The different mathematical models that have been employed to explain these errors have not fully accounted for the discrepancies in the correction of the precessional parameters. Our goal here is to offer an explanation for this fact. We propose the use of independent parametric and nonparametric models. The introduction of a nonparametric model, combined with the inner product in the square integrable functions over the unitary sphere, would give us values which do not depend on the possible interdependencies existing in the data set. The evidence shows that zonal studies are needed. This would lead us to introduce a local nonparametric model. All these models will provide independent corrections to the precessional values, which could then be compared in order to study the reliability in each case. Finally, we obtain values for the precession corrections that are very consistent with those that are currently adopted.
Breeding, R.J.; Harper, F.T.; Brown, T.D.; Gregory, J.J.; Payne, A.C.; Gorham, E.D.; Murfin, W.; Amos, C.N.
1992-03-01
In support of the Nuclear Regulatory Commission`s (NRC`s) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom, and Grand Gulf. The emphasis in this risk analysis was not on determining a ``so-called`` point estimate of risk. Rather, it was to determine the distribution of risk, and to discover the uncertainties that account for the breadth of this distribution. Off-site risk initiation by events, both internal to the power station and external to the power station were assessed. Much of the important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Structural Response Panel.
NASA Astrophysics Data System (ADS)
Ponizko, A. S.
1987-04-01
The specific metal input requirements for the construction of explosion-proof electric motors can be reduced by improving the forced-air cooling and relieving the explosion hazardous pressure through the use of gas permeable fire barriers. Quantitative estimates of the cooling efficiency for explosion-proof, asynchronous motors cooled by a twin blower mounted on the motor shaft are provided. The ventilation inside the explosion-proof containment is accomplished by the air from the inside blower of the fan assembly being sucked through the porous elements from the working end of the shaft, passed through the rotor channels, through the porous elements of the second bearing shield plate, and directed by the vanes of the fan into the air flow coming from the outer forced-air circulating blower. Calculations of the air flow, temperature and cooling efficiency are given for a four-pole 160 kW VAO315M-4 motor. The performance of the porous fire barriers in industry environments is also discussed.
Ajami, N K; Duan, Q; Sorooshian, S
2006-05-05
This paper presents a new technique--Integrated Bayesian Uncertainty Estimator (IBUNE) to account for the major uncertainties of hydrologic rainfall-runoff predictions explicitly. The uncertainties from the input (forcing) data--mainly the precipitation observations and from the model parameters are reduced through a Monte Carlo Markov Chain (MCMC) scheme named Shuffled Complex Evolution Metropolis (SCEM) algorithm which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model output. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi are used to examine the necessity and usefulness of this technique. The results suggests that ignoring either input forcings error or model structural uncertainty will lead to unrealistic model simulations and their associated uncertainty bounds which does not consistently capture and represent the real-world behavior of the watershed.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods of m-analysis and can lead to overly conservative controller design. With these methods, worst case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of responses cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass spring system, show the added information provided by this hybrid analysis.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to Weibull distribution, we discovered Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. PMID:26121186
Identification of the battery state-of-health parameter from input-output pairs of time series data
NASA Astrophysics Data System (ADS)
Li, Yue; Chattopadhyay, Pritthi; Ray, Asok; Rahn, Christopher D.
2015-07-01
As a paradigm of dynamic data-driven application systems (DDDAS), this paper addresses real-time identification of the State of Health (SOH) parameter over the life span of a battery that is subjected to approximately repeated cycles of discharging/recharging current. In the proposed method, finite-length data of interest are selected via wavelet-based segmentation from the time series of synchronized input-output (i.e., current-voltage) pairs in the respective two-dimensional space. Then, symbol strings are generated by partitioning the selected segments of the input-output time series to construct a special class of probabilistic finite state automata (PFSA), called D-Markov machines. Pertinent features of the statistics of battery dynamics are extracted as the state emission matrices of these PFSA. This real-time method of SOH parameter identification relies on the divergence between extracted features. The underlying concept has been validated on (approximately periodic) experimental data, generated from a commercial-scale lead-acid battery. It is demonstrated by real-time analysis of the acquired current-voltage data on in-situ computational platforms that the proposed method is capable of distinguishing battery current-voltage dynamics at different aging stages, as an alternative to computation-intensive and electrochemistry-dependent analysis via physics-based modeling.
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
Subramanian, Swetha; Mast, T Douglas
2015-10-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462
NASA Astrophysics Data System (ADS)
Subramanian, Swetha; Mast, T. Douglas
2015-09-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature.
NASA Astrophysics Data System (ADS)
Villalba, Jesus Daniel; Gomez, Ivan Dario; Laier, Jose Elias
2010-09-01
Structural damage detection is a very important research topic and, currently, there are not specific tools to solve it. A promising tool that can be used is the artificial neural network, ANN, which can deal with hard problems. This paper uses a back propagation ANN with Bayesian regularization training to locate and quantify damage in truss structures. The input parameters corresponded to natural frequencies combined with shape modes, modal flexibilities or modal strain energies. The ANN was trained by considering only simple damage scenarios, random multiple damage scenarios or a combination of them. The results are shown in terms of the percentage of cases in which the ANN trained achieves a determined performance in assessing both the damage extension and the presence of damaged elements. The best performance for the ANN is obtained by using modal strain energies and multiple damage scenarios.
NASA Astrophysics Data System (ADS)
Iorio, L.
2016-01-01
By using the most recently published Doppler tomography measurements and accurate theoretical modelling of the oblateness-driven orbital precessions, we tightly constrain some of the physical and orbital parameters of the planetary system hosted by the fast rotating star WASP-33. In particular, the measurements of the orbital inclination ip to the plane of the sky and of the sky-projected spin-orbit misalignment λ at two epochs about six years apart allowed for the determination of the longitude of the ascending node Ω and of the orbital inclination I to the apparent equatorial plane at the same epochs. As a consequence, average rates of change dot{Ω }_exp, dot{I}_exp of this two orbital elements, accurate to a ≈10-2 deg yr-1 level, were calculated as well. By comparing them to general theoretical expressions dot{Ω }_{J_2}, dot{I}_{J_2} for their precessions induced by an oblate star whose symmetry axis is arbitrarily oriented, we were able to determine the angle i⋆ between the line of sight the star's spin {S}^{star } and its first even zonal harmonic J_2^{star } obtaining i^{star } = {142}^{+10}_{-11} deg, J_2^{star } = 2.1^{+0.8}_{-0.5}times; 10^{-4}. As a by-product, the angle between {S}^{star } and the orbital angular momentum L is as large as about ψ ≈ 100 ° psi; ^{2008} = 99^{+5}_{-4} deg, ψ ^{{2014}} = 103^{+5}_{-4} deg and changes at a rate dot{ψ }= 0.{7}^{+1.5}_{-1.6} deg {yr}^{-1}. The predicted general relativistic Lense-Thirring precessions, of the order of ≈10-3deg yr-1, are, at present, about one order of magnitude below the measurability threshold.
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological models implementation process, significantly contributing to the hydrological predictions uncertainty, is the calibration phase in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares or SLS) introduces noise into the estimation of the parameters. The main sources of this noise are the input errors and the hydrological model structural deficiencies. Thus, the biased calibrated parameters cause the divergence model phenomenon, where the errors variance of the (spatially and temporally) forecasted flows far exceeds the errors variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, yielding a calibrated hydrological model which works well, but not for the right reasons. Besides, an unsuitable error model yields a non-reliable predictive uncertainty assessment. Hence, with the aim of prevent all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As hydrological model, it has been used a conceptual distributed model called TETIS, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called Dream-ZS. MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by getting the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly this is, if non-stationarity in errors variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the
NASA Astrophysics Data System (ADS)
Rezaei, Meisam; Seuntjens, Piet; Shahidi, Reihaneh; Joris, Ingeborg; Boënne, Wesley; Cornelis, Wim
2016-04-01
Soil hydraulic parameters, which can be derived from in situ and/or laboratory experiments, are key input parameters for modeling water flow in the vadose zone. In this study, we measured soil hydraulic properties with typical laboratory measurements and field tension infiltration experiments using Wooding's analytical solution and inverse optimization along the vertical direction within two typical podzol profiles with sand texture in a potato field. The objective was to identify proper sets of hydraulic parameters and to evaluate their relevance on hydrological model performance for irrigation management purposes. Tension disc infiltration experiments were carried out at five different depths for both profiles at consecutive negative pressure heads of 12, 6, 3 and 0.1 cm. At the same locations and depths undisturbed samples were taken to determine the water retention curve with hanging water column and pressure extractors and lab saturated hydraulic conductivity with the constant head method. Both approaches allowed to determine the Mualem-van Genuchten (MVG) hydraulic parameters (residual water content θr, saturated water content θs,, shape parameters α and n, and field or lab saturated hydraulic conductivity Kfs and Kls). Results demonstrated horizontal differences and vertical variability of hydraulic properties. Inverse optimization resulted in excellent matches between observed and fitted infiltration rates in combination with final water content at the end of the experiment, θf, using Hydrus 2D/3D. It also resulted in close correspondence of and Kfs with those from Logsdon and Jaynes' (1993) solution of Wooding's equation. The MVG parameters Kfs and α estimated from the inverse solution (θr set to zero), were relatively similar to values from Wooding's solution which were used as initial value and the estimated θs corresponded to (effective) field saturated water content θf. We found the Gardner parameter αG to be related to the optimized van
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points define each input are included, along with plots of the input time histories.
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal to metal and metal to rock friction coefficient for analysis of waste package and drip shield damage to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the
Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong; Sacks, William J.; Lei, Huimin; Leung, Lai-Yung R.
2013-09-16
Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.
Monette, F.; Biwer, B.; LePoire, D.; Chen, S.Y.
1994-02-01
The U.S. Department of Energy is considering a broad range of alternatives for the future configuration of radioactive waste management at its network of facilities. Because the transportation of radioactive waste is an integral component of the management alternatives being considered, the estimated human health risks associated with both routine and accident transportation conditions must be assessed to allow a complete appraisal of the alternatives. This paper provides an overview of the technical approach being used to assess the radiological risks from the transportation of radioactive wastes. The approach presented employs the RADTRAN 4 computer code to estimate the collective population risk during routine and accident transportation conditions. Supplemental analyses are conducted using the RISKIND computer code to address areas of specific concern to individuals or population subgroups. RISKIND is used for estimating routine doses to maximally exposed individuals and for assessing the consequences of the most severe credible transportation accidents. The transportation risk assessment is designed to ensure -- through uniform and judicious selection of models, data, and assumptions -- that relative comparisons of risk among the various alternatives are meaningful. This is accomplished by uniformly applying common input parameters and assumptions to each waste type for all alternatives. The approach presented can be applied to all radioactive waste types and provides a consistent and comprehensive evaluation of transportation-related risk.
NASA Astrophysics Data System (ADS)
Kazemi, Mohsen; Aghakhani, Masood; Haghshenas-Jazi, Ehsan; Behmaneshfar, Ali
2016-02-01
The aim of this paper is to optimize the depth of penetration with regard to the effect of MgO nanoparticles and welding input parameters. For this purpose, response surface methodology (RSM) with central composite rotatable design (CCRD) was used. The welding current, arc voltage, nozzle-to-plate distance, welding speed, and thickness of MgO nanoparticles were determined as the factors, and depth of penetration was considered as the response. Quadratic polynomial model was used for determining the relationship between the response and factors. A reduced model was obtained from the data which the values of R 2, R 2 (pred), and R 2 (adj) of this model were 92.05, 69.05, and 86.31 pct, respectively. Thus, this model was suitable, and it was used to determine the optimum levels of factors. The results show that the welding current, arc voltage, and nozzle-to-plate distance factors should be adjusted in high level, and welding speed and thickness of MgO nanoparticles factors should be adjusted in low level.
NASA Astrophysics Data System (ADS)
Kimura, H.; Asano, Y.; Matsumoto, T.
2012-12-01
The rapid determination of hypocentral parameters and their transmission to the public are valuable components of disaster mitigation. We have operated an automatic system for this purpose—termed the Accurate and QUick Analysis system for source parameters (AQUA)—since 2005 (Matsumura et al., 2006). In this system, the initial hypocenter, the moment tensor (MT), and the centroid moment tensor (CMT) solutions are automatically determined and posted on the NIED Hi-net Web site (www.hinet.bosai.go.jp). This paper describes improvements made to the AQUA to overcome limitations that became apparent after the 2011 Tohoku Earthquake (05:46:17, March 11, 2011 in UTC). The improvements included the processing of NIED F-net velocity-type strong motion records, because NIED F-net broadband seismographs are saturated for great earthquakes such as the 2011 Tohoku Earthquake. These velocity-type strong motion seismographs provide unsaturated records not only for the 2011 Tohoku Earthquake, but also for recording stations located close to the epicenters of M>7 earthquakes. We used 0.005-0.020 Hz records for M>7.5 earthquakes, in contrast to the 0.01-0.05 Hz records employed in the original system. The initial hypocenters determined based on arrival times picked by using seismograms recorded by NIED Hi-net stations can have large errors in terms of magnitude and hypocenter location, especially for great earthquakes or earthquakes located far from the onland Hi-net network. The size of the 2011 Tohoku Earthquake was initially underestimated in the AQUA to be around M5 at the initial stage of rupture. Numerous aftershocks occurred at the outer rise east of the Japan trench, where a great earthquake is anticipated to occur. Hence, we modified the system to repeat the MT analyses assuming a larger size, for all earthquakes for which the magnitude was initially underestimated. We also broadened the search range of centroid depth for earthquakes located far from the onland Hi
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
NASA Astrophysics Data System (ADS)
Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda
2009-02-01
The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups since the observed satellite orbit dynamics are sensitive to the above mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single satellite SLR solutions and (ii) multi-satellite SLR solutions. Therefore, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus in this investigation is put on the de-correlation of different geodetic parameter groups due to the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as, e.g., the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy how to obtain reliable estimates of station coordinates, EOP, orbit parameter and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications such as mitigating undesirable discharge events. Such simulations need to be accurate through early time arc initiation to late time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DMSC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus gradients in the electron energy distribution function can exist at scales smaller than the mean free path and these must be resolved by the mesh size for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields be small in order to capture gradients in the cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
NASA Astrophysics Data System (ADS)
Brawand, Nicholas; Vörös, Márton; Govoni, Marco; Galli, Giulia
The accurate prediction of optoelectronic properties of molecules and solids is a persisting challenge for current density functional theory (DFT) based methods. We propose a hybrid functional where the mixing fraction of exact and local exchange is determined by a non-empirical, system dependent function. This functional yields ionization potentials, fundamental and optical gaps of many, diverse systems in excellent agreement with experiments, including organic and inorganic molecules and nanocrystals. We further demonstrate that the newly defined hybrid functional gives the correct alignment between the energy level of the exemplary TTF-TCNQ donor-acceptor system. DOE-BES: DE-FG02-06ER46262.
Jiang, Bin; Guo, Hua
2016-08-01
In search for an accurate description of the dissociative chemisorption of water on the Ni(111) surface, we report a new nine-dimensional potential energy surface (PES) based on a large number of density functional theory points using the RPBE functional. Seven-dimensional quantum dynamical calculations have been carried out on the RPBE PES, followed by site averaging and lattice effect corrections, yielding sticking probabilities that are compared with both the previous theoretical results based on a PW91 PES and experiment. It is shown that the RPBE functional increases the reaction barrier, but has otherwise a minor impact on the PES topography. Better agreement with experimental results is obtained with the new PES, but the agreement is still not quantitative. Possible sources of the remaining discrepancies are discussed. PMID:27436348
NASA Astrophysics Data System (ADS)
Deb, S.; Maitra, K.; Roychoudhuri, A.
1985-06-01
In the wake of the energy crisis, attempts are being made to develop a variety of energy conversion devices, such as solar cells. The single most important operational characteristic for a conversion element generating electricity is the V against I curve. Three points on this characteristic curve are of paramount importance, including the short-circuit, the open-circuit, and the maximum power point. The present paper has the objective to propose a new simple and accurate method of determining the maximum power point (Vm, Im) of the V against I characteristics, based on a geometrical interpretation. The method is general enough to be applicable to any energy conversion device having a nonlinear V against I characteristic. The paper provides also a method for determining the fill factor (FF), the series resistance (Rs), and the diode ideality factor (A) from a single set of connected observations.
NASA Astrophysics Data System (ADS)
Marin, Andrew T.; Musselman, Kevin P.; MacManus-Driscoll, Judith L.
2013-04-01
This work shows that when a Schottky barrier is present in a photovoltaic device, such as in a device with an ITO/ZnO contact, equivalent circuit analysis must be performed with admittance spectroscopy to accurately determine the pn junction interface recombination parameters (i.e., capture cross section and density of trap states). Without equivalent circuit analysis, a Schottky barrier can produce an error of ˜4-orders of magnitude in the capture cross section and ˜50% error in the measured density of trap states. Using a solution processed ZnO/Cu2O photovoltaic test system, we apply our analysis to clearly separate the contributions of interface states at the pn junction from the Schottky barrier at the ITO/ZnO contact so that the interface state recombination parameters can be accurately characterized. This work is widely applicable to the multitude of photovoltaic devices, which use ZnO adjacent to ITO.
Rosen, I.G.; Luczak, Susan E.; Weiss, Jordan
2014-01-01
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick Prescott filter is discussed. Numerical results involving actual patient data are presented. PMID:24707065
Kostylev, Maxim; Wilson, David
2014-01-01
Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass degrading enzymes from a cellulolytic bacterium Thermobifida fusca we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates and where a steady state is not achievable. PMID:23837567
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation using the UCODE_2005 program for parameter estimation. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.
Coffield, T; Patricia Lee, P
2007-01-31
The purpose of this report is to update parameters utilized in Human Health Exposure calculations and Bioaccumulation Transfer Factors utilized at SRS for Performance Assessment modeling. The reason for the update is to utilize more recent information issued, validate information currently used and correct minor inconsistencies between modeling efforts performed in SRS contiguous areas of the heavy industrialized central site usage areas called the General Separations Area (GSA). SRS parameters utilized were compared to a number of other DOE facilities and generic national/global references to establish relevance of the parameters selected and/or verify the regional differences of the southeast USA. The parameters selected were specifically chosen to be expected values along with identifying a range for these values versus the overly conservative specification of parameters for estimating an annual dose to the maximum exposed individual (MEI). The end uses are to establish a standardized source for these parameters that is up to date with existing data and maintain it via review of any future issued national references to evaluate the need for changes as new information is released. These reviews are to be added to this document by revision.
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution to predict various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi analytical formulation the optimization of core parameter U which is usually uncertain, noisy or even discontinuous, is being calculated by Nelder-Mead method of nonlinear unconstrained minimizations as it is an efficient and compact direct search method and does not need any derivative information. Three optimizing parameters are included in the formulation of fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing variational technique, Petermann I and II spot sizes have been evaluated for triangular and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that, the results of the proposed solution identically match with the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifier.
NASA Astrophysics Data System (ADS)
Ryu, Jaiyoung; Hu, Xiao; Shadden, Shawn C.
2014-11-01
The cerebral circulation is unique in its ability to maintain blood flow to the brain under widely varying physiologic conditions. Incorporating this autoregulatory response is critical to cerebral blood flow modeling, as well as investigations into pathological conditions. We discuss a one-dimensional nonlinear model of blood flow in the cerebral arteries that includes coupling of autoregulatory lumped parameter networks. The model is tested to reproduce a common clinical test to assess autoregulatory function - the carotid artery compression test. The change in the flow velocity at the middle cerebral artery (MCA) during carotid compression and release demonstrated strong agreement with published measurements. The model is then used to investigate vasospasm of the MCA, a common clinical concern following subarachnoid hemorrhage. Vasospasm was modeled by prescribing vessel area reduction in the middle portion of the MCA. Our model showed similar increases in velocity for moderate vasospasms, however, for serious vasospasm (~ 90% area reduction), the blood flow velocity demonstrated decrease due to blood flow rerouting. This demonstrates a potentially important phenomenon, which otherwise would lead to false-negative decisions on clinical vasospasm if not properly anticipated.
NASA Astrophysics Data System (ADS)
Di Giovanni, P.; Ahearn, T. S.; Semple, S. I.; Azlan, C. A.; Lloyd, W. K. C.; Gilbert, F. J.; Redpath, T. W.
2011-03-01
The objective of this work was to propose and demonstrate a novel technique for the assessment of tumour pharmacokinetic parameters together with a regionally estimated vascular input function. A breast cancer patient T2*-weighted dynamic contrast enhanced MRI (DCE-MRI) dataset acquired at high temporal resolution during the first-pass bolus perfusion was used for testing the technique. Extraction of the lesion volume transfer constant Ktrans together with the intravascular plasma volume fraction vp was achieved by optimizing a capillary input function with a measure of cardiac output using the principle of intravascular indicator dilution theory. For a region of interest drawn within the breast lesion a vp of 0.16 and a Ktrans of 0.70 min-1 were estimated. Despite the value of vp being higher than expected, estimated Ktrans was in accordance with the literature values. In conclusion, the technique proposed here, has the main advantage of allowing the estimation of breast tumour pharmacokinetic parameters from first-pass perfusion T2*-weighted DCE-MRI data without the need of measuring an arterial input function. The technique may also have applicability to T1-weighted DCE-MRI data.
NASA Astrophysics Data System (ADS)
Fuchs, Sven; Bording, Thue S.; Balling, Niels
2015-04-01
Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties is prerequisites for the parameterisation of boundary conditions and layer properties. In contrast to hydrogeological ground-water models, where parameterization of the major rock property (i.e. hydraulic conductivity) is generally conducted considering lateral variations within geological layers, parameterization of thermal models (in particular regarding thermal conductivity but also radiogenic heat production and specific heat capacity) in most cases is conducted using constant parameters for each modelled layer. For such constant thermal parameter values, moreover, initial values are normally obtained from rare core measurements and/or literature values, which raise questions for their representativeness. Some few studies have considered lithological composition or well log information, but still keeping the layer values constant. In the present thermal-modelling scenario analysis, we demonstrate how the use of different parameter input type (from literature, well logs and lithology) and parameter input style (constant or laterally varying layer values) affects the temperature model prediction in sedimentary basins. For this purpose, rock thermal properties are deduced from standard petrophysical well logs and lithological descriptions for several wells in a project area. Statistical values of thermal properties (mean, standard deviation, moments, etc.) are calculated at each borehole location for each geological formation and, moreover, for the entire dataset. Our case study is located at the Danish-German border region (model dimension: 135 x115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (i
Faulkner, William B; Shaw, Bryan W; Grosch, Tom
2008-10-01
As of December 2006, the American Meteorological Society/U.S. Environmental Protection Agency (EPA) Regulatory Model with Plume Rise Model Enhancements (AERMOD-PRIME; hereafter AERMOD) replaced the Industrial Source Complex Short Term Version 3 (ISCST3) as the EPA-preferred regulatory model. The change from ISCST3 to AERMOD will affect Prevention of Significant Deterioration (PSD) increment consumption as well as permit compliance in states where regulatory agencies limit property line concentrations using modeling analysis. Because of differences in model formulation and the treatment of terrain features, one cannot predict a priori whether ISCST3 or AERMOD will predict higher or lower pollutant concentrations downwind of a source. The objectives of this paper were to determine the sensitivity of AERMOD to various inputs and compare the highest downwind concentrations from a ground-level area source (GLAS) predicted by AERMOD to those predicted by ISCST3. Concentrations predicted using ISCST3 were sensitive to changes in wind speed, temperature, solar radiation (as it affects stability class), and mixing heights below 160 m. Surface roughness also affected downwind concentrations predicted by ISCST3. AERMOD was sensitive to changes in albedo, surface roughness, wind speed, temperature, and cloud cover. Bowen ratio did not affect the results from AERMOD. These results demonstrate AERMOD's sensitivity to small changes in wind speed and surface roughness. When AERMOD is used to determine property line concentrations, small changes in these variables may affect the distance within which concentration limits are exceeded by several hundred meters. PMID:18939775
Baker, Christopher M.; Lopes, Pedro E. M.; Zhu, Xiao; Roux, Benoît; MacKerell, Alexander D.
2010-01-01
Lennard-Jones (LJ) parameters for a variety of model compounds have previously been optimized within the CHARMM Drude polarizable force field to reproduce accurately pure liquid phase thermodynamic properties as well as additional target data. While the polarizable force field resulting from this optimization procedure has been shown to satisfactorily reproduce a wide range of experimental reference data across numerous series of small molecules, a slight but systematic overestimate of the hydration free energies has also been noted. Here, the reproduction of experimental hydration free energies is greatly improved by the introduction of pair-specific LJ parameters between solute heavy atoms and water oxygen atoms that override the standard LJ parameters obtained from combining rules. The changes are small and a systematic protocol is developed for the optimization of pair-specific LJ parameters and applied to the development of pair-specific LJ parameters for alkanes, alcohols and ethers. The resulting parameters not only yield hydration free energies in good agreement with experimental values, but also provide a framework upon which other pair-specific LJ parameters can be added as new compounds are parametrized within the CHARMM Drude polarizable force field. Detailed analysis of the contributions to the hydration free energies reveals that the dispersion interaction is the main source of the systematic errors in the hydration free energies. This information suggests that the systematic error may result from problems with the LJ combining rules and is combined with analysis of the pair-specific LJ parameters obtained in this work to identify a preliminary improved combining rule. PMID:20401166
NASA Astrophysics Data System (ADS)
Cabassa-Miranda, E.; Garnett Marques Brum, C.
2013-12-01
We are presenting a statistical study of the behavior of the noontime F2 peak parameters (foF2 and hmF2) to the variation of solar energy input based on digisonde data and EUV-UV solar emissions registered by SOHO satellite for geomagnetic quiet-to-normal condition. For this, we selected digisonde data from fourteen different stations spread along the American sector (ten of them located above and four below the equator). These registers were collected from 2000 to 2012 and encompass the last unusual super minimum period.
NASA Technical Reports Server (NTRS)
Boothroyd, Arnold I.; Sackmann, I.-Juliana
2001-01-01
Helioseismic frequency observations provide an extremely accurate window into the solar interior; frequencies from the Michaelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) spacecraft, enable the adiabatic sound speed and adiabatic index to be inferred with an accuracy of a few parts in 10(exp 4) and the density with an accuracy of a few parts in 10(exp 3). This has become a Serious challenge to theoretical models of the Sun. Therefore, we have undertaken a self-consistent, systematic study of the sources of uncertainties in the standard solar models. We found that the largest effect on the interior structure arises from the observational uncertainties in the photospheric abundances of the elements, which affect the sound speed profile at the level of 3 parts in 10(exp 3). The estimated 4% uncertainty in the OPAL opacities could lead to effects of 1 part in 10(exp 3); the approximately 5%, uncertainty in the basic pp nuclear reaction rate would have a similar effect, as would uncertainties of approximately 15% in the diffusion constants for the gravitational settling of helium. The approximately 50% uncertainties in diffusion constants for the heavier elements would have nearly as large an effect. Different observational methods for determining the solar radius yield results differing by as much as 7 parts in 10(exp 4); we found that this leads to uncertainties of a few parts in 10(exp 3) in the sound speed int the solar convective envelope, but has negligible effect on the interior. Our reference standard solar model yielded a convective envelope position of 0.7135 solar radius, in excellent agreement with the observed value of 0.713 +/- 0.001 solar radius and was significantly affected only by Z/X, the pp rate, and the uncertainties in helium diffusion constants. Our reference model also yielded envelope helium abundance of 0.2424, in good agreement with the approximate range of 0.24 to 0.25 inferred from helioseismic observations; only
NASA Astrophysics Data System (ADS)
Katiyatiya, C. L. F.; Muchenje, V.; Mushunje, A.
2015-06-01
Seasonal variations in hair length, tick loads, cortisol levels, haematological parameters (HP) and temperature humidity index (THI) in Nguni cows of different colours raised in two low-input farms, and a commercial stud was determined. The sites were chosen based on their production systems, climatic characteristics and geographical locations. Zazulwana and Komga are low-input, humid-coastal areas, while Honeydale is a high-input, dry-inland Nguni stud farm. A total of 103 cows, grouped according to parity, location and coat colour, were used in the study. The effects of location, coat colour, hair length and season were used to determine tick loads on different body parts, cortisol levels and HP in blood from Nguni cows. Highest tick loads were recorded under the tail and the lowest on the head of each of the animals ( P < 0.05). Zazulwana cows recorded the highest tick loads under the tails of all the cows used in the study from the three farms ( P < 0.05). High tick loads were recorded for cows with long hairs. Hair lengths were longest during the winter season in the coastal areas of Zazulwana and Honeydale ( P < 0.05). White and brown-white patched cows had significantly longer ( P < 0.05) hair strands than those having a combination of red, black and white colour. Cortisol and THI were significantly lower ( P < 0.05) in summer season. Red blood cells, haematoglobin, haematocrit, mean cell volumes, white blood cells, neutrophils, lymphocytes, eosinophils and basophils were significantly different ( P < 0.05) as some associated with age across all seasons and correlated to THI. It was concluded that the location, coat colour and season had effects on hair length, cortisol levels, THI, HP and tick loads on different body parts and heat stress in Nguni cows.
NASA Astrophysics Data System (ADS)
Katiyatiya, C. L. F.; Muchenje, V.; Mushunje, A.
2014-08-01
Seasonal variations in hair length, tick loads, cortisol levels, haematological parameters (HP) and temperature humidity index (THI) in Nguni cows of different colours raised in two low-input farms, and a commercial stud was determined. The sites were chosen based on their production systems, climatic characteristics and geographical locations. Zazulwana and Komga are low-input, humid-coastal areas, while Honeydale is a high-input, dry-inland Nguni stud farm. A total of 103 cows, grouped according to parity, location and coat colour, were used in the study. The effects of location, coat colour, hair length and season were used to determine tick loads on different body parts, cortisol levels and HP in blood from Nguni cows. Highest tick loads were recorded under the tail and the lowest on the head of each of the animals (P < 0.05). Zazulwana cows recorded the highest tick loads under the tails of all the cows used in the study from the three farms (P < 0.05). High tick loads were recorded for cows with long hairs. Hair lengths were longest during the winter season in the coastal areas of Zazulwana and Honeydale (P < 0.05). White and brown-white patched cows had significantly longer (P < 0.05) hair strands than those having a combination of red, black and white colour. Cortisol and THI were significantly lower (P < 0.05) in summer season. Red blood cells, haematoglobin, haematocrit, mean cell volumes, white blood cells, neutrophils, lymphocytes, eosinophils and basophils were significantly different (P < 0.05) as some associated with age across all seasons and correlated to THI. It was concluded that the location, coat colour and season had effects on hair length, cortisol levels, THI, HP and tick loads on different body parts and heat stress in Nguni cows.
NASA Technical Reports Server (NTRS)
Reddy C. J.
1998-01-01
Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FENI/MoM technique is used to form an integro-partial- differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/ MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, probe-fed coaxial cavity and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and the solutions over individual frequencies is observed.
NASA Astrophysics Data System (ADS)
Vergara, H. J.; Kirstetter, P.; Hong, Y.; Gourley, J. J.; Wang, X.
2013-12-01
The Ensemble Kalman Filter (EnKF) is arguably the assimilation approach that has found the widest application in hydrologic modeling. Its relatively easy implementation and computational efficiency makes it an attractive method for research and operational purposes. However, the scientific literature featuring this approach lacks guidance on how the errors in the forecast need to be characterized so as to get the required corrections from the assimilation process. Moreover, several studies have indicated that the performance of the EnKF is 'sub-optimal' when assimilating certain hydrologic observations. Likewise, some authors have suggested that the underlying assumptions of the Kalman Filter and its dependence on linear dynamics make the EnKF unsuitable for hydrologic modeling. Such assertions are often based on ineffectiveness and poor robustness of EnKF implementations resulting from restrictive specification of error characteristics and the absence of a-priori information of error magnitudes. Therefore, understanding the capabilities and limitations of the EnKF to improve hydrologic forecasts require studying its sensitivity to the manner in which errors in the hydrologic modeling system are represented through ensembles. This study presents a methodology that explores various uncertainty representation configurations to characterize the errors in the hydrologic forecasts in a data assimilation context. The uncertainty in rainfall inputs is represented through a Generalized Additive Model for Location, Scale, and Shape (GAMLSS), which provides information about second-order statistics of quantitative precipitation estimates (QPE) error. The uncertainty in model parameters is described adding perturbations based on parameters covariance information. The method allows for the identification of rainfall and parameter perturbation combinations for which the performance of the EnKF is 'optimal' given a set of objective functions. In this process, information about
Bakker, Chris J G; de Leeuw, Hendrik; van de Maat, Gerrit H; van Gorp, Jetse S; Bouwman, Job G; Seevinck, Peter R
2013-01-01
Lack of spatial accuracy is a recognized problem in magnetic resonance imaging (MRI) which severely detracts from its value as a stand-alone modality for applications that put high demands on geometric fidelity, such as radiotherapy treatment planning and stereotactic neurosurgery. In this paper, we illustrate the potential and discuss the limitations of spectroscopic imaging as a tool for generating purely phase-encoded MR images and parameter maps that preserve the geometry of an object and allow localization of object features in world coordinates. Experiments were done on a clinical system with standard facilities for imaging and spectroscopy. Images were acquired with a regular spin echo sequence and a corresponding spectroscopic imaging sequence. In the latter, successive samples of the acquired echo were used for the reconstruction of a series of evenly spaced images in the time and frequency domain. Experiments were done with a spatial linearity phantom and a series of test objects representing a wide range of susceptibility- and chemical-shift-induced off-resonance conditions. In contrast to regular spin echo imaging, spectroscopic imaging was shown to be immune to off-resonance effects, such as those caused by field inhomogeneity, susceptibility, chemical shift, f(0) offset and field drift, and to yield geometrically accurate images and parameter maps that allowed object structures to be localized in world coordinates. From these illustrative examples and a discussion of the limitations of purely phase-encoded imaging techniques, it is concluded that spectroscopic imaging offers a fundamental solution to the geometric deficiencies of MRI which may evolve toward a practical solution when full advantage will be taken of current developments with regard to scan time reduction. This perspective is backed up by a demonstration of the significant scan time reduction that may be achieved by the use of compressed sensing for a simple phantom. PMID:22898694
Sandala, Gregory M.; Hopmann, Kathrin H.; Ghosh, Abhik
2011-01-01
structure. Significant improvements to the isomer shift calibrations are obtained for B3LYP and B3LYP* when geometries obtained with the OLYP functional are used. In addition, greatly improved performance of these functionals is found if the complete test set is grouped separately into Fe–NO and Fe–S complexes. Calibration fits including only Fe–NO complexes are found to be excellent, while those containing the non-nitrosyl Fe–S complexes alone are found to demonstrate less accurate correlations. Similar trends are also found with OLYP, OPBE, PW91, and BP86. Correlations between experimental and calculated QSs were also investigated. Generally, universal and separate Fe–NO and Fe–S fit parameters obtained to determine QSs are found to be of good to excellent quality for every density functional examined, especially if [Fe4(NO)4(μ3-S)4]− is removed from the test set. PMID:22039359
NASA Astrophysics Data System (ADS)
Hsieh, H. P.; Sung, K. B.; Hsu, F. W.
2014-05-01
Diffuse reflectance spectroscopy has been applied as a non-invasive method to measure tissue optical properties, which are associate with anatomical information. The algorithm widely used to extract, optical parameters from reflectance spectra is the regression method, which is time-consuming and frequently converge to local maxima. In this study, the effects of parameters changes on spectra are analyzed in different fiber geometries, source-detector separations and wavelengths. In the end of this paper, a new fitting algorithm is proposed base on parameters features found. The new algorithm is expected to enhance the accuracy of parameters extracted and save 75% of the process time.
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective - a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These technqiues are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
NASA Astrophysics Data System (ADS)
Bruntt, H.
2009-10-01
Context: The CoRoT satellite has provided high-quality light curves of several solar-like stars. Analysis of these light curves provides oscillation frequencies that make it possible to probe the interior of the stars. However, additional constraints on the fundamental parameters of the stars are important for the theoretical modelling to be successful. Aims: We estimate the fundamental parameters (mass, radius, and luminosity) of the first four solar-like targets to be observed in the asteroseismic field. In addition, we determine their effective temperature, metallicity, and detailed abundance patterns. Methods: To constrain the stellar mass, radius and age we used the shotgun software, which compares the location of the stars in the Hertzsprung-Russell diagram with theoretical evolution models. This method takes the uncertainties of the observed parameters into account, including the large separation determined from the solar-like oscillations. We determined the effective temperatures and abundance patterns in the stars from the analysis of high-resolution spectra obtained with the HARPS, NARVAL, ELODIE and FEROS spectrographs. Results: We determined the mass, radius, and luminosity of the four CoRoT targets to within 5{-}10%, 2{-}4% and 5{-}13%, respectively. The quality of the stellar spectra determines how well we can constrain the effective temperature. For the two best spectra we get 1-σ uncertainties below 60 K and 100{-}150 K for the other two. The uncertainty on the surface gravity is less than 0.08 dex for three stars, while it is 0.15 dex for HD 181906. The reason for the larger uncertainty is that the spectrum has two components with a luminosity ratio of L_p/Ls = 0.50±0.15. While Hipparcos astrometric data strongly suggest it is a binary star, we find evidence that the fainter star may be a background star, since it is less luminous but hotter.
Shi, Deheng; Liu, Qionglan; Sun, Jinfeng; Zhu, Zunlue
2014-03-25
The potential energy curves (PECs) of 28 Ω states generated from the 12 states (X(4)Σ(-), 1(2)Π, 1(2)Σ(-), 1(2)Δ, 1(2)Σ(+), 2(2)Π, A(4)Π, B(4)Σ(-), 3(2)Π, 1(6)Σ(-), 2(2)Σ(-) and 1(6)Π) of the BN(+) cation are studied for the first time for internuclear separations from about 0.1 to 1.0 nm using an ab initio quantum chemical method. All the Λ-S states correlate to the first four dissociation channels. The 1(6)Σ(-), 3(2)Π and A(4)Π states are found to be the inverted ones. The 1(2)Σ(+), 2(2)Π, 3(2)Π and 2(2)Σ(-) states are found to possess the double well. The PECs are calculated by the complete active space self-consistent field method, which is followed by the internally contracted multireference configuration interaction approach with the Davidson correction. Core-valence correlation correction is included by a cc-pCV5Z basis set. Scalar relativistic correction is calculated by the third-order Douglas-Kroll Hamiltonian approximation at the level of a cc-pV5Z basis set. The convergent behavior of present calculations is discussed with respect to the basis set and level of theory. The spin-orbit coupling is accounted for by the state interaction approach with the Breit-Pauli Hamiltonian using the all-electron cc-pCV5Z basis set. All the PECs are extrapolated to the complete basis set limit. The spectroscopic parameters are obtained, and the vibrational properties of 1(2)Σ(+), 2(2)Π, 3(2)Π and 2(2)Σ(-) states are evaluated. Analyses demonstrate that the spectroscopic parameters reported here can be expected to be reliably predicted ones. The conclusion is gained that the effect of spin-orbit coupling on the spectroscopic parameters are not obvious almost for all the Λ-S states involved in the present paper. PMID:24334021
Shi, Deheng; Li, Peiling; Sun, Jinfeng; Zhu, Zunlue
2014-01-01
The potential energy curves (PECs) of 28 Ω states generated from 9 Λ-S states (X(2)Π, 1(4)Π, 1(6)Π, 1(2)Σ(+), 1(4)Σ(+), 1(6)Σ(+), 1(4)Σ(-), 2(4)Π and 1(4)Δ) are studied for the first time using an ab initio quantum chemical method. All the 9 Λ-S states correlate to the first two dissociation limits, N((4)Su)+Se((3)Pg) and N((4)Su)+Se((3)Dg), of NSe radical. Of these Λ-S states, the 1(6)Σ(+), 1(4)Σ(+), 1(6)Π, 2(4)Π and 1(4)Δ are found to be rather weakly bound states. The 1(2)Σ(+) is found to be unstable and has double wells. And the 1(6)Σ(+), 1(4)Σ(+), 1(4)Π and 1(6)Π are found to be the inverted ones with the SO coupling included. The PEC calculations are made by the complete active space self-consistent field method, which is followed by the internally contracted multireference configuration interaction approach with the Davidson modification. The spin-orbit coupling is accounted for by the state interaction approach with the Breit-Pauli Hamiltonian. The convergence of the present calculations is discussed with respect to the basis set and the level of theory. Core-valence correlation corrections are included with a cc-pCVTZ basis set. Scalar relativistic corrections are calculated by the third-order Douglas-Kroll Hamiltonian approximation at the level of a cc-pV5Z basis set. All the PECs are extrapolated to the complete basis set limit. The variation with internuclear separation of spin-orbit coupling constants is discussed in brief for some Λ-S states with one shallow well on each PEC. The spectroscopic parameters of 9 Λ-S and 28 Ω states are determined by fitting the first ten vibrational levels whenever available, which are calculated by solving the rovibrational Schrödinger equation with Numerov's method. The splitting energy in the X(2)Π Λ-S state is determined to be about 864.92 cm(-1), which agrees favorably with the measurements of 891.80 cm(-1). Moreover, other spectroscopic parameters of Λ-S and Ω states involved here are
Badran, Yasser Ali; Abdelaziz, Alsayed Saad; Shehab, Mohamed Ahmed; Mohamed, Hazem Abdelsabour Dief; Emara, Absel-Aziz Ali; Elnabtity, Ali Mohamed Ali; Ghanem, Maged Mohammed; ELHelaly, Hesham Abdel Azim
2016-01-01
Objective: The objective was to determine the predicting success of shock wave lithotripsy (SWL) using a combination of computed tomography based metric parameters to improve the treatment plan. Patients and Methods: Consecutive 180 patients with symptomatic upper urinary tract calculi 20 mm or less were enrolled in our study underwent extracorporeal SWL were divided into two main groups, according to the stone size, Group A (92 patients with stone ≤10 mm) and Group B (88 patients with stone >10 mm). Both groups were evaluated, according to the skin to stone distance (SSD) and Hounsfield units (≤500, 500–1000 and >1000 HU). Results: Both groups were comparable in baseline data and stone characteristics. About 92.3% of Group A rendered stone-free, whereas 77.2% were stone-free in Group B (P = 0.001). Furthermore, in both group SWL success rates was a significantly higher for stones with lower attenuation <830 HU than with stones >830 HU (P < 0.034). SSD were statistically differences in SWL outcome (P < 0.02). Simultaneous consideration of three parameters stone size, stone attenuation value, and SSD; we found that stone-free rate (SFR) was 100% for stone attenuation value <830 HU for stone <10 mm or >10 mm but total number SWL sessions and shock waves required for the larger stone group were higher than in the smaller group (P < 0.01). Furthermore, SFR was 83.3% and 37.5% for stone <10 mm, mean HU >830, SSD 90 mm and SSD >120 mm, respectively. On the other hand, SFR was 52.6% and 28.57% for stone >10 mm, mean HU >830, SSD <90 mm and SSD >120 mm, respectively. Conclusion: Stone size, stone density (HU), and SSD is simple to calculate and can be reported by radiologists to applying combined score help to augment predictive power of SWL, reduce cost, and improving of treatment strategies. PMID:27141192
NASA Astrophysics Data System (ADS)
Suchomska, K.; Graczyk, D.; Smolec, R.; Pietrzyński, G.; Gieren, W.; Stȩpień, K.; Konorski, P.; Pilecki, B.; Villanova, S.; Thompson, I. B.; Górski, M.; Karczmarek, P.; Wielgórski, P.; Anderson, R. I.
2015-07-01
We have analyzed the double-lined eclipsing binary system ASAS J180057-2333.8 from the All Sky Automated Survey (ASAS) catalogue. We measure absolute physical and orbital parameters for this system based on archival V-band and I-band ASAS photometry, as well as on high-resolution spectroscopic data obtained with ESO 3.6 m/HARPS and CORALIE spectrographs. The physical and orbital parameters of the system were derived with an accuracy of about 0.5-3 per cent. The system is a very rare configuration of two bright well-detached giants of spectral types K1 and K4 and luminosity class II. The radii of the stars are R1 = 52.12 ± 1.38 and R2 = 67.63 ± 1.40 R⊙ and their masses are M1 = 4.914 ± 0.021 and M2 = 4.875 ± 0.021 M⊙. The exquisite accuracy of 0.5 per cent obtained for the masses of the components is one of the best mass determinations for giants. We derived a precise distance to the system of 2.14 ± 0.06 kpc (stat.) ± 0.05 (syst.) which places the star in the Sagittarius-Carina arm. The Galactic rotational velocity of the star is Θs = 258 ± 26 km s-1 assuming Θ0 = 238 km s-1. A comparison with PARSEC isochrones places the system at the early phase of core helium burning with an age of slightly larger than 100 million years. The effect of overshooting on stellar evolutionary tracks was explored using the MESA star code.
NASA Astrophysics Data System (ADS)
Montes, D.; Caballero, J. A.; Alonso-Floriano, F. J.; Cortes Contreras, M.; Gonzalez-Alvarez, E.; Hidalgo, D.; Holgado, G.; Llamas, M.; Martinez-Rodriguez, H.; Sanz-Forcada, J.
2015-01-01
We help compiling the most comprehensive database of M dwarfs ever built, CARMENCITA, the CARMENES Cool dwarf Information and daTa Archive, which will be the CARMENES `input catalogue'. In addition to the science preparation with low- and high-resolution spectrographs and lucky imagers (see the other contributions in this volume), we compile a huge pile of public data on over 2100 M dwarfs, and analyze them, mostly using virtual-observatory tools. Here we describe four specific actions carried out by master and grade students. They mine public archives for additional high-resolution spectroscopy (UVES, FEROS and HARPS), multi-band photometry (FUV-NUV-u-B-g-V-r-R-i-J-H-Ks-W1-W2-W3-W4), X-ray data (ROSAT, XMM-Newton and Chandra), periods, rotational velocities and Hα pseudo-equivalent widths. As described, there are many interdependences between all these data.
Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D.; Amos, C.N.; Helton, J.; Boyd, G.
1992-06-01
In support of the Nuclear Regulatory Commission`s (NRC`s) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but to determine the distribution of risk, and to assess the uncertainties that account for the breadth of this distribution. Off-site risk initiation by events, both internal to the power station and external to the power station. Much of this important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Source Term Panel.
NASA Astrophysics Data System (ADS)
Joosten, S.; Pammler, K.; Silny, J.
2009-02-01
The problem of electromagnetic interference of electronic implants such as cardiac pacemakers has been well known for many years. An increasing number of field sources in everyday life and occupational environment leads unavoidably to an increased risk for patients with electronic implants. However, no obligatory national or international safety regulations exist for the protection of this patient group. The aim of this study is to find out the anatomical and physiological worst-case conditions for patients with an implanted pacemaker adjusted to unipolar sensing in external time-varying electric fields. The results of this study with 15 volunteers show that, in electric fields, variation of the interference voltage at the input of a cardiac pacemaker adds up to 200% only because of individual factors. These factors should be considered in human studies and in the setting of safety regulations.
NASA Astrophysics Data System (ADS)
Orkin, V. L.; Khamaganov, V. G.; Martynova, L. E.; Kurylo, M. J.
2012-12-01
The emissions of halogenated (Cl, Br containing) organics of both natural and anthropogenic origin contribute to the balance of and changes in the stratospheric ozone concentration. The associated chemical cycles are initiated by the photochemical decomposition of the portion of source gases that reaches the stratosphere. Reactions with hydroxyl radicals and photolysis are the main processes dictating the compound lifetime in the troposphere and release of active halogen in the stratosphere for a majority of halogen source gases. Therefore, the accuracy of photochemical data is of primary importance for the purpose of comprehensive atmospheric modeling and for simplified kinetic estimations of global impacts on the atmosphere, such as in ozone depletion (i.e., the Ozone Depletion Potential, ODP) and climate change (i.e., the Global Warming Potential, GWP). The sources of critically evaluated photochemical data for atmospheric modeling, NASA/JPL Publications and IUPAC Publications, recommend uncertainties within 10%-60% for the majority of OH reaction rate constants with only a few cases where uncertainties lie at the low end of this range. These uncertainties can be somewhat conservative because evaluations are based on the data from various laboratories obtained during the last few decades. Nevertheless, even the authors of the original experimental works rarely estimate the total combined uncertainties of the published OH reaction rate constants to be less than ca. 10%. Thus, uncertainties in the photochemical properties of potential and current atmospheric trace gases obtained under controlled laboratory conditions still may constitute a major source of uncertainty in estimating the compound's environmental impact. One of the purposes of the presentation is to illustrate the potential for obtaining accurate laboratory measurements of the OH reaction rate constant over the temperature range of atmospheric interest. A detailed inventory of accountable sources of
Liu, Hui; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue; Shulin, Zhang
2014-04-24
The potential energy curves (PECs) of 54 spin-orbit states generated from the 22 electronic states of O2 molecule are investigated for the first time for internuclear separations from about 0.1 to 1.0nm. Of the 22 electronic states, the X(3)Σg(-), A(')(3)Δu, A(3)Σu(+), B(3)Σu(-), C(3)Πg, a(1)Δg, b(1)Σg(+), c(1)Σu(-), d(1)Πg, f(1)Σu(+), 1(5)Πg, 1(3)Πu, 2(3)Σg(-), 1(5)Σu(-), 2(1)Σu(-) and 2(1)Δg are found to be bound, whereas the 1(5)Σg(+), 2(5)Σg(+), 1(1)Πu, 1(5)Δg, 1(5)Πu and 2(1)Πu are found to be repulsive ones. The B(3)Σu(-) and d(1)Πg states possess the double well. And the 1(3)Πu, C(3)Πg, A'(3)Δu, 1(5)Δg and 2(5)Σg(+) states are the inverted ones when the spin-orbit coupling is included. The PEC calculations are done by the complete active space self-consistent field (CASSCF) method, which is followed by the internally contracted multireference configuration interaction (icMRCI) approach with the Davidson correction. Core-valence correlation and scalar relativistic corrections are taken into account. The convergence of present calculations is evaluated with respect to the basis set and level of theory. The vibrational properties are discussed for the 1(5)Πg, 1(3)Πu, d(1)Πg and 1(5)Σu(-) states and for the second well of the B(3)Σu(-) state. The spin-orbit coupling effect is accounted for by the state interaction method with the Breit-Pauli Hamiltonian. The PECs of all the electronic states and spin-orbit states are extrapolated to the complete basis set limit. The spectroscopic parameters are obtained, and compared with available experimental and other theoretical results. Analyses demonstrate that the spectroscopic parameters reported here can be expected to be reliably predicted ones. The conclusion is obtained that the effect of spin-orbit coupling on the spectroscopic parameters are small almost for all the electronic states involved in this paper except for the 1(5)Σu(-), 1(5)Πg and 1(3)Πu. PMID:24486866
NASA Astrophysics Data System (ADS)
Shi, De-Heng; Liu, Qionglan; Yu, Wei; Sun, Jinfeng; Zhu, Zunlue
2014-05-01
The potential energy curves (PECs) of 23 Ω states generated from the 12 electronic states (X1 Σ +, 21 Σ +, 11 Σ -, 11 Π, 21 Π, 11 Δ, 13 Σ +, 23 Σ +, 13 Σ -, a3 Π, 23 Π and 13 Δ) are studied for the first time. All the states correlate to the first dissociation channel of the SiBr+ cation. Of these electronic states, the 23 Σ + is the repulsive one without the spin-orbit coupling, whereas it becomes the bound one with the spin-orbit coupling added. On the one hand, without the spin-orbit coupling, the 11 Π, 21 Π and 23 Π are the rather weakly bound states, and only the 11 Π state possesses the double well; on the other hand, with the spin-orbit coupling included, the a3 Π and 11 Π states possess the double well, and the 13 Σ + and 13 Σ - are the inverted states. The PECs are calculated by the CASSCF method, which is followed by the internally contracted MRCI approach with the Davidson modification. Scalar relativistic correction is calculated by the third-order Douglas-Kroll Hamiltonian approximation with a cc-pVTZ-DK basis set. Core-valence correlation correction is included with a cc-pCVTZ basis set. The spin-orbit coupling is accounted for by the state interaction method with the Breit-Pauli Hamiltonian using the all-electron aug-cc-pCVTZ basis set. All the PECs are extrapolated to the complete basis set limit. The variation with internuclear separation of the spin-orbit coupling constant is discussed in brief. The spectroscopic parameters are evaluated for the 11 bound electronic states and the 23 bound Ω states, and are compared with available measurements. Excellent agreement has been found between the present results and the experimental data. It demonstrates that the spectroscopic parameters reported here can be expected to be reliably predicted ones. The Franck-Condon factors and radiative lifetimes of the transitions from the a3 Π 0 + and a3 Π 1 states to the X1 Σ + 0+ state are calculated for several low vibrational levels, and
Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values to parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
Jannik, T.; Karapatakis, D.; Lee, P.; Farfan, E.
2010-08-06
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) Regulatory Guides. Within the regulatory guides, default values are provided for many of the dose model parameters but the use of site-specific values by the applicant is encouraged. A detailed survey of land and water use parameters was conducted in 1991 and is being updated here. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors to be used in human health exposure calculations at SRS are documented. Based on comparisons to the 2009 SRS environmental compliance doses, the following effects are expected in future SRS compliance dose calculations: (1) Aquatic all-pathway maximally exposed individual doses may go up about 10 percent due to changes in the aquatic bioaccumulation factors; (2) Aquatic all-pathway collective doses may go up about 5 percent due to changes in the aquatic bioaccumulation factors that offset the reduction in average individual water consumption rates; (3) Irrigation pathway doses to the maximally exposed individual may go up about 40 percent due to increases in the element-specific transfer factors; (4) Irrigation pathway collective doses may go down about 50 percent due to changes in food productivity and production within the 50-mile radius of SRS; (5) Air pathway doses to the maximally exposed individual may go down about 10 percent due to the changes in food productivity in the SRS area and to the changes in element-specific transfer factors; and (6
Input to the PRAST computer code used in the SRS probabilistic risk assessment
Kearnaghan, D.P.
1992-10-15
The PRAST (Production Reactor Algorithm for Source Terms) computer code was developed by Westinghouse Savannah River Company and Science Application International Corporation for the quantification of source terms for the SRS Savannah River Site (SRS) Reactor Probabilistic Risk Assessment. PRAST requires as input a set of release fractions, decontamination factors, transfer fractions and source term characteristics that accurately reflect the conditions that are evaluated by PRAST. This document links the analyses which form the basis for the PRAST input parameters. In addition, it gives the distribution of the input parameters that are uncertain and considered to be important to the evaluation of the source terms to the environment.
Wang, Yong; Goh, Wang Ling; Chai, Kevin T-C; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu
2016-04-01
The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially for those fabricated on silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above mentioned parasitic effects which are commonly observed in Lamb-wave resonators. It is a combination of interdigital capacitor of both plate capacitance and fringe capacitance, interdigital resistance, Ohmic losses in substrate, and the acoustic motional behavior of typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, strengthening the capability on characterizing both magnitude and phase of either Y11 or Y21. The accurate modelling on two-port Y-parameters makes the PiBVD model beneficial in the characterization of Lamb-wave resonators, providing accurate simulation to Lamb-wave resonators and oscillators. PMID:27131699
NASA Astrophysics Data System (ADS)
Wang, Yong; Goh, Wang Ling; Chai, Kevin T.-C.; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu
2016-04-01
The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially for those fabricated on silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above mentioned parasitic effects which are commonly observed in Lamb-wave resonators. It is a combination of interdigital capacitor of both plate capacitance and fringe capacitance, interdigital resistance, Ohmic losses in substrate, and the acoustic motional behavior of typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, strengthening the capability on characterizing both magnitude and phase of either Y11 or Y21. The accurate modelling on two-port Y-parameters makes the PiBVD model beneficial in the characterization of Lamb-wave resonators, providing accurate simulation to Lamb-wave resonators and oscillators.
NASA Astrophysics Data System (ADS)
de la Paz, Mercedes; Gómez-Parra, Abelardo; Forja, Jesús
2008-06-01
The main objective of the present study is to assess the temporal variability of the carbonate system, and the mechanisms driving that variability, in the Rio San Pedro, a tidal creek located in the Bay of Cadiz (SW Iberian Peninsula). This shallow tidal creek is affected by effluents of organic matter and nutrients from surrounding marine fish farms. In 2004, 11 tidal samplings, seasonally distributed, were carried out for the measurement of total alkalinity (TA), pH, dissolved oxygen and Chlorophyll- a (Chl- a) using a fixed station. In addition, several longitudinal samplings were carried out both in the tidal creek and in the adjacent waters of the Bay of Cadiz, in order to obtain a spatial distribution of the carbonate parameters. Tidal mixing is the main factor controlling the dissolved inorganic carbon (DIC) variability, showing almost conservative behaviour on a tidal time scale. The amplitude of the daily oscillations of DIC, pH and chlorophyll show a high dependence on the spring-neap tide sequence, with the maximum amplitude associated with spring tides. Additionally, a marked seasonality has been found in the DIC, pH and oxygen concentrations. This seasonality seems to be related to the increase in metabolic rates with the temperature, the alternation of storm events and high evaporation rates, together with intense seasonal variability in the discharges from fish farms. In addition, the export of DIC from the Rio San Pedro to the adjacent coastal area has been evaluated using the tidal prism model, obtaining a net export of 1.05×10 10 g C yr -1.
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2008-11-01
An accurate estimation of the temperature field in weld pool and its surrounding area is important for a priori determination of the weld-pool dimensions and the weld thermal cycles. A finite element based three-dimensional (3-D) quasi-steady heat-transfer model is developed in the present work to compute temperature field in gas tungsten arc welding (GTAW) process. The numerical model considers temperature-dependent material properties and latent heat of melting and solidification. A novelty of the numerical model is that the welding heat source is considered in the form of an adaptive volumetric heat source that confirms to the size and the shape of the weld pool. The need to predefine the dimensions of the volumetric heat source is thus overcome. The numerical model is further integrated with a parent-centric recombination (PCX) operated generalized generation gap (G3) model based genetic algorithm to identify the magnitudes of process efficiency and arc radius that are usually unknown but required for the accurate estimation of the net heat input into the workpiece. The complete numerical model and the genetic algorithm based optimization code are developed indigenously using an Intel Fortran Compiler. The integrated model is validated further with a number of experimentally measured weld dimensions in GTA-welded samples in stainless steels.
Liebetrau, A.M.
1983-10-01
Work is underway at Pacific Northwest Laboratory (PNL) to improve the probabilistic analysis used to model pressurized thermal shock (PTS) incidents in reactor pressure vessels, and, further, to incorporate these improvements into the existing Vessel Integrity Simulation Analysis (VISA) code. Two topics related to work on input distributions in VISA are discussed in this paper. The first involves the treatment of flaw size distributions and the second concerns errors in the parameters in the (Guthrie) equation which is used to compute ..delta..RT/sub NDT/, the shift in reference temperature for nil ductility transition.
Toward an inventory of nitrogen input to the United States
Accurate accounting of nitrogen inputs is increasingly necessary for policy decisions related to aquatic nutrient pollution. Here we synthesize available data to provide the first integrated estimates of the amount and uncertainty of nitrogen inputs to the United States. Abou...
Input/output system identification - Learning from repeated experiments
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Horta, Lucas G.; Longman, Richard W.
1990-01-01
The paper describes three approaches and possible variations for the determination of the Markov parameters for forced response data using general inputs. It is shown that, when the parameters in the solution procedure are bootstrapped, the results can be obtained very efficiently, but the errors propagate throughout all parameters. By arranging the data in a different form and using singular value decomposition, the resulting identified parameters are more accurate, in the least number of successive experiments, at the expense of a large matrix singular value decomposition. When a recursive procedure is employed, the calculations can be performed very efficiently, but the number of repetitions of the experiments is much greater for a given accuracy than for any of the previous approaches. An alternative formulation is proposed to combine the advantages of each of the approaches.
Mackay, Donald; Hughes, Lauren; Powell, David E; Kim, Jaeshin
2014-09-01
The QWASI fugacity mass balance model has been widely used since 1983 for both scientific and regulatory purposes to estimate the concentrations of organic chemicals in water and sediment, given an assumed rate of chemical emission, advective inflow in water or deposition from the atmosphere. It has become apparent that an updated version is required, especially to incorporate improved methods of obtaining input parameters such as partition coefficients. Accordingly, the model has been revised and it is now available in spreadsheet format. Changes to the model are described and the new version is applied to two chemicals, D5 (decamethylcyclopentasiloxane) and PCB-180, in two lakes, Lake Pepin (MN, USA) and Lake Ontario, showing the model's capability of illustrating both the chemical to chemical differences and lake to lake differences. Since there are now increased regulatory demands for rigorous sensitivity and uncertainty analyses, these aspects are discussed and two approaches are illustrated. It is concluded that the new QWASI water quality model can be of value for both evaluative and simulation purposes, thus providing a tool for obtaining an improved understanding of chemical mass balances in lakes, as a contribution to the assessment of fate and exposure and as a step towards the assessment of risk. PMID:24997940
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Albert, C.; Reichert, P.; Rieckermann, J.
2015-12-01
Rainfall is the main driver of hydrological systems. Unfortunately, it is highly variable in space and time and therefore difficult to observe accurately. This poses a serious challenge to correctly estimate the catchment-averaged precipitation, a key factor for hydrological models. As biased precipitation leads to biased parameter estimation and thus to biased runoff predictions, it is very important to have a realistic description of precipitation uncertainty. Rainfall multipliers (RM), which correct each observed storm with a random factor, provide a first step into this direction. Nevertheless, they often fail when the estimated input has a different temporal pattern from the true one or when a storm is not detected by the raingauge. In this study we propose a more realistic input error model, which is able to overcome these challenges and increase our certainty by better estimating model input and parameters. We formulate the average precipitation over the watershed as a stochastic input process (SIP). We suggest a transformed Gauss-Markov process, which is estimated in a Bayesian framework by using input (rainfall) and output (runoff) data. We tested the methodology in a 28.6 ha urban catchment represented by an accurate conceptual model. Specifically, we perform calibration and predictions with SIP and RM using accurate data from nearby raingauges (R1) and inaccurate data from a distant gauge (R2). Results show that using SIP, the estimated model parameters are "protected" from the corrupting impact of inaccurate rainfall. Additionally, SIP can correct input biases during calibration (Figure) and reliably quantify rainfall and runoff uncertainties during both calibration (Figure) and validation. In our real-word application with non-trivial rainfall errors, this was not the case with RM. We therefore recommend SIP in all cases where the input is the predominant source of uncertainty. Furthermore, the high-resolution rainfall intensities obtained with this
A new generalized correlation for accurate vapor pressure prediction
NASA Astrophysics Data System (ADS)
An, Hui; Yang, Wenming
2012-08-01
An accurate knowledge of the vapor pressure of organic liquids is very important for the oil and gas processing operations. In combustion modeling, the accuracy of numerical predictions is also highly dependent on the fuel properties such as vapor pressure. In this Letter, a new generalized correlation is proposed based on the Lee-Kesler's method where a fuel dependent parameter 'A' is introduced. The proposed method only requires the input parameters of critical temperature, normal boiling temperature and the acentric factor of the fluid. With this method, vapor pressures have been calculated and compared with the data reported in data compilation for 42 organic liquids over 1366 data points, and the overall average absolute percentage deviation is only 1.95%.
Deridder, Sander; Desmet, Gert
2012-02-01
Using computational fluid dynamics (CFD), the effective B-term diffusion constant γ(eff) has been calculated for four different random sphere packings with different particle size distributions and packing geometries. Both fully porous and porous-shell sphere packings are considered. The obtained γ(eff)-values have subsequently been used to determine the value of the three-point geometrical constant (ζ₂) appearing in the 2nd-order accurate effective medium theory expression for γ(eff). It was found that, whereas the 1st-order accurate effective medium theory expression is accurate to within 5% over most part of the retention factor range, the 2nd-order accurate expression is accurate to within 1% when calculated with the best-fit ζ₂-value. Depending on the exact microscopic geometry, the best-fit ζ₂-values typically lie in the range of 0.20-0.30, holding over the entire range of intra-particle diffusion coefficients typically encountered for small molecules (0.1 ≤ D(pz)/D(m) ≤ 0.5). These values are in agreement with the ζ₂-value proposed by Thovert et al. for the random packing they considered. PMID:22236565
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs base on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
Analysis of Stochastic Response of Neural Networks with Stochastic Input
Energy Science and Technology Software Center (ESTSC)
1996-10-10
Software permits the user to extend capability of his/her neural network to include probablistic characteristics of input parameter. User inputs topology and weights associated with neural network along with distributional characteristics of input parameters. Network response is provided via a cumulative density function of network response variable.
ERIC Educational Resources Information Center
Berliss-Vincent, Jane; Whitford, Gigi
2002-01-01
This article presents both the factors involved in successful speech input use and the potential barriers that may suggest that other access technologies could be more appropriate for a given individual. Speech input options that are available are reviewed and strategies for optimizing use of speech recognition technology are discussed. (Contains…
NASA Technical Reports Server (NTRS)
Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette
2006-01-01
This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.
High input impedance amplifier
NASA Technical Reports Server (NTRS)
Kleinberg, Leonard L.
1995-01-01
High input impedance amplifiers are provided which reduce the input impedance solely to a capacitive reactance, or, in a somewhat more complex design, provide an extremely high essentially infinite, capacitive reactance. In one embodiment, where the input impedance is reduced in essence, to solely a capacitive reactance, an operational amplifier in a follower configuration is driven at its non-inverting input and a resistor with a predetermined magnitude is connected between the inverting and non-inverting inputs. A second embodiment eliminates the capacitance from the input by adding a second stage to the first embodiment. The second stage is a second operational amplifier in a non-inverting gain-stage configuration where the output of the first follower stage drives the non-inverting input of the second stage and the output of the second stage is fed back to the non-inverting input of the first stage through a capacitor of a predetermined magnitude. These amplifiers, while generally useful, are very useful as sensor buffer amplifiers that may eliminate significant sources of error.
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
ERIC Educational Resources Information Center
Rom, Mark Carl
2011-01-01
Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…
NASA Astrophysics Data System (ADS)
Foster, K.
1994-09-01
This document is a description of a computer program called Format( )MEDIC( )Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.
Inferring Indel Parameters using a Simulation-based Approach.
Levy Karin, Eli; Rabin, Avigayel; Ashkenazy, Haim; Shkedy, Dafna; Avram, Oren; Cartwright, Reed A; Pupko, Tal
2015-12-01
In this study, we present a novel methodology to infer indel parameters from multiple sequence alignments (MSAs) based on simulations. Our algorithm searches for the set of evolutionary parameters describing indel dynamics which best fits a given input MSA. In each step of the search, we use parametric bootstraps and the Mahalanobis distance to estimate how well a proposed set of parameters fits input data. Using simulations, we demonstrate that our methodology can accurately infer the indel parameters for a large variety of plausible settings. Moreover, using our methodology, we show that indel parameters substantially vary between three genomic data sets: Mammals, bacteria, and retroviruses. Finally, we demonstrate how our methodology can be used to simulate MSAs based on indel parameters inferred from real data sets. PMID:26537226
Inferring Indel Parameters using a Simulation-based Approach
Levy Karin, Eli; Rabin, Avigayel; Ashkenazy, Haim; Shkedy, Dafna; Avram, Oren; Cartwright, Reed A.; Pupko, Tal
2015-01-01
In this study, we present a novel methodology to infer indel parameters from multiple sequence alignments (MSAs) based on simulations. Our algorithm searches for the set of evolutionary parameters describing indel dynamics which best fits a given input MSA. In each step of the search, we use parametric bootstraps and the Mahalanobis distance to estimate how well a proposed set of parameters fits input data. Using simulations, we demonstrate that our methodology can accurately infer the indel parameters for a large variety of plausible settings. Moreover, using our methodology, we show that indel parameters substantially vary between three genomic data sets: Mammals, bacteria, and retroviruses. Finally, we demonstrate how our methodology can be used to simulate MSAs based on indel parameters inferred from real data sets. PMID:26537226
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
NASA Astrophysics Data System (ADS)
Moussa, D.; Damache, S.; Ouichaoui, S.
2015-01-01
The stopping powers of thin Al foils for H+ and 4He+ ions have been measured over the energy range E = (206.03- 2680.05) keV/amu with an overall relative uncertainty better than 1% using the transmission method. The derived S (E) experimental data are compared to previous ones from the literature, to values derived by the SRIM-2008 code or compiled in the ICRU-49 report, and to the predictions of Sigmund-Schinner binary collision stopping theory. Besides, the S (E) data for H+ ions together with those for He2+ ions reported by Andersen et al. (1977) have been analyzed over the energy interval E > 1.0 MeV using the modified Bethe-Bloch stopping theory. The following sets of values have been inferred for the mean excitation potential, I, and the Barkas-Andersen parameter, b, for H+ and He+ projectiles, respectively: { (I = 164 ± 3) eV, b = 1.40 } and { (I = 163 ± 2.5) eV, b = 1.38 } . As expected, the I parameter is found to be independent of the projectile electronic structure presumably indicating that the contribution of charge exchange effects becomes negligible as the projectile velocity increases. Therefore, the I parameter must be determined from precise stopping power measurements performed at high projectile energies where the Bethe stopping theory is fully valid.
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2001-01-01
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Probenl/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
Multiple-input experimental modal analysis
NASA Technical Reports Server (NTRS)
Allemang, R. J.; Brown, D. L.
1985-01-01
The development of experimental modal analysis techniques is reviewed. System and excitation assumptions are discussed. The methods examined include the forced normal mode excitation method, the frequency response function method, the damped complex exponential response method, the Ibrahim time domain approach, the polyreference approach, and mathematical input-output model methods. The current trend toward multiple input utilization in the estimation of system parameters is noted.
Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data
NASA Astrophysics Data System (ADS)
Lavrentiev, M.; Shchemel, A.; Simonov, K.
problem can be formally propounded this way: A distribution of various combinations of observed values should be estimated. Totality of the combinations is represented by the set of variables. The results of observations determine excerption of outputs. In the scope of the propounded problem continuous (along with its derivations) homomorphic reflec- tion of the space of hidden parameters to the space of observed parameters should be found. It allows to reconstruct lack information of the inputs when the number of the 1 inputs is not less than the number of hidden parameters and to estimate the distribution if information for synonymous prediction of unknown inputs is not sufficient. The following approach to build approximation based on the excerption is suggested: the excerption is supplemented with the hidden parameters, which are distributed uni- formly in a multidimensional limited space. Then one should find correspondence of model and observed outputs. Therefore the correspondence will provide that the best approximation is the most accurate. In the odd iterations dependence between hid- den inputs and outputs is being optimized (like the conventional problem is solved). Correspondence between tasks is changing in the case when the error is reducing and distribution of inputs remains intact. Therefore, a special transform is applied to reduce error at every iteration. If the mea- sure of distribution is constant, then the condition of transformations is simplified. Such transforms are named "canonical" or "volume invariant transforms" and, there- fore, are well known. This approach is suggested for solving main inverse task of the tsunami problem. Basing on registered tsunami in seaside and shelf to estimate parameters of tsunami's hearth. 2
Optical input impedance of nanostrip antennas
NASA Astrophysics Data System (ADS)
Wang, Ivan; Du, Ya-ping
2012-05-01
We conduct an investigation into optical nanoantennas in the form of a strip dipole made from aluminum. With the finite-difference time domain simulation both optical input impedance and radiation efficiency of nanostrip antennas are addressed. An equivalent circuit is presented as well for the nanostrip antennas at optical resonances. The optical input resistance can be adjusted by varying the geometric parameters of antenna strips. By changing both strip area and strip length simultaneously, optical input resistance can be adjusted for matching impedance with an external feeding or loading circuit. It is found that the optical radiation efficiency does not change significantly when the size of a nanostrip antenna varies moderately.
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
Laumer, Bernhard; Schuster, Fabian; Stutzmann, Martin; Bergmaier, Andreas; Dollinger, Guenther; Eickhoff, Martin
2013-06-21
Zn{sub 1-x}Mg{sub x}O epitaxial films with Mg concentrations 0{<=}x{<=}0.3 were grown by plasma-assisted molecular beam epitaxy on a-plane sapphire substrates. Precise determination of the Mg concentration x was performed by elastic recoil detection analysis. The bandgap energy was extracted from absorption measurements with high accuracy taking electron-hole interaction and exciton-phonon complexes into account. From these results a linear relationship between bandgap energy and Mg concentration is established for x{<=}0.3. Due to alloy disorder, the increase of the photoluminescence emission energy with Mg concentration is less pronounced. An analysis of the lattice parameters reveals that the epitaxial films grow biaxially strained on a-plane sapphire.
VizieR Online Data Catalog: CARMENES input catalogue of M dwarfs. I (Alonso-Floriano+, 2015)
NASA Astrophysics Data System (ADS)
Alonso-Floriano, F. J.; Morales, J. C.; Caballero, J. A.; Montes, D.; Klutsch, A.; Mundt, R.; Cortes-Contreras, M.; Ribas, I.; Reiners, A.; Amado, P. J.; Quirrenbach, A.; Jeffers, S. V.
2015-03-01
List of 753 late-type stars, mostly M dwarfs, observed with the low-resolution optical spectrograph CAFOS at the 2.2m Calar Alto telescope for the preparation of the CARMENES input catalogue (http://carmenes.caha.es/). We provide basic data, observation parameters, spectral-typing indices, zeta metallicity index, Hα pseudo-equivalent width, spectral type from the literature, and our accurate adopted spectral type. (4 data files).
NASA Astrophysics Data System (ADS)
Itano, Wayne M.; Ramsey, Norman F.
1993-07-01
The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountain and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on earth using the GPS.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Measuring Input Thresholds on an Existing Board
NASA Technical Reports Server (NTRS)
Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.
2011-01-01
A critical PECL (positive emitter-coupled logic) interface to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in worst-case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way to measure what the exact input threshold was for this device for 64 inputs on a flight board was needed. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog to digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input s weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal ready by the Xilinx. This change in pulse width is a function of capacitance, pulldown current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs including the two that are probed directly. The data was combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
Crespo, Cristina; Fernández, José R; Aboy, Mateo; Mojón, Artemio
2013-03-01
This paper reports the results of a study designed to determine whether there are statistically significant differences between the values of ambulatory blood pressure monitoring (ABPM) parameters obtained using different methods-fixed schedule, diary, and automatic algorithm based on actigraphy-of defining the main activity and rest periods, and to determine the clinical relevance of such differences. We studied 233 patients (98 men/135 women), 61.29 ± .83 yrs of age (mean ± SD). Statistical methods were used to measure agreement in the diagnosis and classification of subjects within the context of ABPM and cardiovascular disease risk assessment. The results show that there are statistically significant differences both at the group and individual levels. Those at the individual level have clinically significant implications, as they can result in a different classification, and, therefore, different diagnosis and treatment for individual subjects. The use of an automatic algorithm based on actigraphy can lead to better individual treatment by correcting the accuracy problems associated with the fixed schedule on patients whose actual activity/rest routine differs from the fixed schedule assumed, and it also overcomes the limitations and reliability issues associated with the use of diaries. PMID:23130607
Blind estimation of compartmental model parameters.
Di Bella, E V; Clackdoyle, R; Gullberg, G T
1999-03-01
Computation of physiologically relevant kinetic parameters from dynamic PET or SPECT imaging requires knowledge of the blood input function. This work is concerned with developing methods to accurately estimate these kinetic parameters blindly; that is, without use of a directly measured blood input function. Instead, only measurements of the output functions--the tissue time-activity curves--are used. The blind estimation method employed here minimizes a set of cross-relation equations, from which the blood term has been factored out, to determine compartmental model parameters. The method was tested with simulated data appropriate for dynamic SPECT cardiac perfusion imaging with 99mTc-teboroxime and for dynamic PET cerebral blood flow imaging with 15O water. The simulations did not model the tomographic process. Noise levels typical of the respective modalities were employed. From three to eight different regions were simulated, each with different time-activity curves. The time-activity curve (24 or 70 time points) for each region was simulated with a compartment model. The simulation used a biexponential blood input function and washin rates between 0.2 and 1.3 min(-1) and washout rates between 0.2 and 1.0 min(-1). The system of equations was solved numerically and included constraints to bound the range of possible solutions. From the cardiac simulations, washin was determined to within a scale factor of the true washin parameters with less than 6% bias and 12% variability. 99mTc-teboroxime washout results had less than 5% bias, but variability ranged from 14% to 43%. The cerebral blood flow washin parameters were determined with less than 5% bias and 4% variability. The washout parameters were determined with less than 4% bias, but had 15-30% variability. Since washin is often the parameter of most use in clinical studies, the blind estimation approach may eliminate the current necessity of measuring the input function when performing certain dynamic studies
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-01-01
(P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual input extended Tofts model, ve was significantly less than that in the dual input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). CONCLUSION: A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability. PMID:27053857
Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan; Deng, Weiling
2010-01-01
To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…
Hypermnesia using auditory input.
Allen, J
1992-07-01
The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564
Instrumentation for measuring energy inputs to implements
Tompkins, F.D.; Wilhelm, L.R.
1981-01-01
A microcomputer-based instrumentation system for monitoring tractor operating parameters and energy inputs to implements was developed and mounted on a 75-power-takeoff-KW tractor. The instrumentation system, including sensors and data handling equipment, is discussed. 10 refs.
Selecting training inputs via greedy rank covering
Buchsbaum, A.L.; Santen, J.P.H. van
1996-12-31
We present a general method for selecting a small set of training inputs, the observations of which will suffice to estimate the parameters of a given linear model. We exemplify the algorithm in terms of predicting segmental duration of phonetic-segment feature vectors in a text-to-speech synthesizer, but the algorithm will work for any linear model and its associated domain.
DO MODEL UNCERTAINTY WITH CORRELATED INPUTS
The effect of correlation among the input parameters and variables on the output uncertainty of the Streeter-Phelps water quality model is examined. hree uncertainty analysis techniques are used: sensitivity analysis, first-order error analysis, and Monte Carlo simulation. odifie...
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2006-12-01
defining the HYDRO1k metrics of aspect, flow direction, slope etc., we refine the grid scale from the current HYDRO1k GTOPO30 DEM dimension of 1 km to a local DEM for our study area having a grid scale of 0.25 km. We employ higher-order 9 point finite differences to compute local topographic gradients, then aggragate (or integrate) the "HYDRO1k-type" parameters to the 1 km pixel dimensions of the NDVI data. We then perform a multivariate comparison of the derived-hydrologic parameters with characteristic phenological behaviors from the interannual NDVI modeled time series. For example, as one would expect, in spite of similarities of peak NDVI values in a particularly "wet" year, irrigated agricultural sites are well- discriminated from natural semi-arid grassland due to the multivariate controls from observed precipitation, surface water runoff, topographic slope, and the intrinsic fine structure in the behavior of the interannual NDVI time series. NDVI time series from montane areas provide interesting insight into the time of disappearance of snow cover, as well as the relation of summertime phenology to elevation and slope. A striking pattern emerges regarding the similitude between seasonal surface water runoff and interannual trends in phenology that corroborates the potential of NDVI data to monitor and characterize long term trends in the response of phenology to hydrological processes.
NASA Astrophysics Data System (ADS)
The Arctic Research and Policy Act (Eos, June 26, 1984, p. 412) was signed into law by President Ronald Reagan this past July. One of its objectives is to develop a 5-year research plan for the Arctic. A request for input to this plan is being issued this week to nearly 500 people in science, engineering, and industry.To promote Arctic research and to recommend research policy in the Arctic, the new law establishes a five-member Arctic Research Commission, to be appointed by the President, and establishes an Interagency Arctic Research Policy Committee, to be composed of representatives from nearly a dozen agencies having interests in the region. The commission will make policy recommendations, and the interagency committee will implement those recommendations. The National Science Foundation (NSF) has been designated as the lead agency of the interagency committee.
Developing Accurate Spatial Maps of Cotton Fiber Quality Parameters
Technology Transfer Automated Retrieval System (TEKTRAN)
Awareness of the importance of cotton fiber quality (Gossypium, L. sps.) has increased as advances in spinning technology require better quality cotton fiber. Recent advances in geospatial information sciences allow an improved ability to study the extent and causes of spatial variability in fiber p...
Input Multiplicities in Process Control.
ERIC Educational Resources Information Center
Koppel, Lowell B.
1983-01-01
Describes research investigating potential effect of input multiplicity on multivariable chemical process control systems. Several simple processes are shown to exhibit the possibility of theoretical developments on input multiplicity and closely related phenomena are discussed. (JN)
Modeling and generating input processes
Johnson, M.E.
1987-01-01
This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate but much of the philosophy at least is relevant to univariate inputs as well. 14 refs.
Chaudhary, Naveed Ishtiaq; Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Aslam, Muhammad Saeed
2013-01-01
A novel algorithm is developed based on fractional signal processing approach for parameter estimation of input nonlinear control autoregressive (INCAR) models. The design scheme consists of parameterization of INCAR systems to obtain linear-in-parameter models and to use fractional least mean square algorithm (FLMS) for adaptation of unknown parameter vectors. The performance analyses of the proposed scheme are carried out with third-order Volterra least mean square (VLMS) and kernel least mean square (KLMS) algorithms based on convergence to the true values of INCAR systems. It is found that the proposed FLMS algorithm provides most accurate and convergent results than those of VLMS and KLMS under different scenarios and by taking the low-to-high signal-to-noise ratio. PMID:23853538
Fast and accurate propagation of coherent light
Lewis, R. D.; Beylkin, G.; Monzón, L.
2013-01-01
We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
NASA Astrophysics Data System (ADS)
Andréassian, Vazken; Perrin, Charles; Michel, Claude
2004-01-01
This paper attempts to assess the impact of improved estimates of areal potential evapotranspiration (PE) on the results of two rainfall-runoff models. A network of 42 PE stations was used for a sample of 62 watersheds and two watershed models of different complexity (the four-parameter GR4J model and an eight-parameter modified version of TOPMODEL), to test how sensitive rainfall-runoff models were to watershed PE estimated with the Penman equation. First, Penman PE estimates were regionalized in the Massif Central highlands of France, a mountainous area where PE is known to vary greatly with elevation, latitude, and longitude. The two watershed models were then used to assess changes in model efficiency with the improved PE input. Finally, the behavior of one of the model's parameters was analyzed, to understand how watershed models cope with systematic errors in the estimated PE input. In terms of model efficiency, in both models it was found that very simple assumptions on watershed PE input (the same average input for all watersheds) yield the same results as more accurate input obtained from regionalization. The detailed evaluation of the GR4J model calibrated with different PE input scenarios showed that the model is clearly sensitive to PE input, but that it uses its two production parameters to adapt to the various PE scenarios.
Estimating nonstationary input signals from a single neuronal spike train
NASA Astrophysics Data System (ADS)
Kim, Hideaki; Shinomoto, Shigeru
2012-11-01
Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic assumption that presynaptic activity is constant over time. Here, we propose tracking temporal variations in input parameters with a two-step analysis method. First, nonstationary firing characteristics comprising the firing rate and non-Poisson irregularity are estimated from a spike train using a computationally feasible state-space algorithm. Then, information about the firing characteristics is converted into likely input parameters over time using a transformation formula, which was constructed by inverting the neuronal forward transformation of the input current to output spikes. By analyzing spike trains recorded in vivo, we found that neuronal input parameters are similar in the primary visual cortex V1 and middle temporal area, whereas parameters in the lateral geniculate nucleus of the thalamus were markedly different.
Olivares, Alberto; Ruiz-Garcia, Gonzalo; Olivares, Gonzalo; Górriz, Juan Manuel; Ramirez, Javier
2013-01-01
Ellipsoid fitting algorithms are widely used to calibrate Magnetic Angular Rate and Gravity (MARG) sensors. These algorithms are based on the minimization of an error function that optimizes the parameters of a mathematical sensor model that is subsequently applied to calibrate the raw data. The convergence of this kind of algorithms to a correct solution is very sensitive to input data. Input calibration datasets must be properly distributed in space so data can be accurately fitted to the theoretical ellipsoid model. Gathering a well distributed set is not an easy task as it is difficult for the operator carrying out the maneuvers to keep a visual record of all the positions that have already been covered, as well as the remaining ones. It would be then desirable to have a system that gives feedback to the operator when the dataset is ready, or to enable the calibration process in auto-calibrated systems. In this work, we propose two different algorithms that analyze the goodness of the distributions by computing four different indicators. The first approach is based on a thresholding algorithm that uses only one indicator as its input and the second one is based on a Fuzzy Logic System (FLS) that estimates the calibration error for a given calibration set using a weighted combination of two indicators. Very accurate classification between valid and invalid datasets is achieved with average Area Under Curve (AUC) of up to 0.98. PMID:24013490
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.l%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
Waite, Anthony; /SLAC
2011-09-07
Serial Input/Output (SIO) is designed to be a long term storage format of a sophistication somewhere between simple ASCII files and the techniques provided by inter alia Objectivity and Root. The former tend to be low density, information lossy (floating point numbers lose precision) and inflexible. The latter require abstract descriptions of the data with all that that implies in terms of extra complexity. The basic building blocks of SIO are streams, records and blocks. Streams provide the connections between the program and files. The user can define an arbitrary list of streams as required. A given stream must be opened for either reading or writing. SIO does not support read/write streams. If a stream is closed during the execution of a program, it can be reopened in either read or write mode to the same or a different file. Records represent a coherent grouping of data. Records consist of a collection of blocks (see next paragraph). The user can define a variety of records (headers, events, error logs, etc.) and request that any of them be written to any stream. When SIO reads a file, it first decodes the record name and if that record has been defined and unpacking has been requested for it, SIO proceeds to unpack the blocks. Blocks are user provided objects which do the real work of reading/writing the data. The user is responsible for writing the code for these blocks and for identifying these blocks to SIO at run time. To write a collection of blocks, the user must first connect them to a record. The record can then be written to a stream as described above. Note that the same block can be connected to many different records. When SIO reads a record, it scans through the blocks written and calls the corresponding block object (if it has been defined) to decode it. Undefined blocks are skipped. Each of these categories (streams, records and blocks) have some characteristics in common. Every stream, record and block has a name with the condition that each
Solar astrophysical fundamental parameters
NASA Astrophysics Data System (ADS)
Meftah, M.; Irbah, A.; Hauchecorne, A.
2014-08-01
The accurate determination of the solar photospheric radius has been an important problem in astronomy for many centuries. From the measurements made by the PICARD spacecraft during the transit of Venus in 2012, we obtained a solar radius of 696,156±145 kilometres. This value is consistent with recent measurements carried out atmosphere. This observation leads us to propose a change of the canonical value obtained by Arthur Auwers in 1891. An accurate value for total solar irradiance (TSI) is crucial for the Sun-Earth connection, and represents another solar astrophysical fundamental parameter. Based on measurements collected from different space instruments over the past 35 years, the absolute value of the TSI, representative of a quiet Sun, has gradually decreased from 1,371W.m-2 in 1978 to around 1,362W.m-2 in 2013, mainly due to the radiometers calibration differences. Based on the PICARD data and in agreement with Total Irradiance Monitor measurements, we predicted the TSI input at the top of the Earth's atmosphere at a distance of one astronomical unit (149,597,870 kilometres) from the Sun to be 1,362±2.4W.m-2, which may be proposed as a reference value. To conclude, from the measurements made by the PICARD spacecraft, we obtained a solar photospheric equator-to-pole radius difference value of 5.9±0.5 kilometres. This value is consistent with measurements made by different space instruments, and can be given as a reference value.
Evaluation of severe accident risks: Quantification of major input parameters
Harper, F.T.; Payne, A.C.; Breeding, R.J.; Gorham, E.D.; Brown, T.D.; Rightley, G.S.; Gregory, J.J. ); Murfin, W. ); Amos, C.N. )
1991-04-01
This report records part of the vast amount of information received during the expert judgment elicitation process that took place in support of the NUREG-1150 effort sponsored by the U.S. Nuclear Regulatory Commission. The results of the Containment Loads and Molten Core/Containment Interaction Expert Panel Elicitation are presented in this part of Volume 2 of NUREG/CR-4551. The Containment Loads Expert Panel considered seven issues: (1) hydrogen phenomena at Grand Gulf; (2) hydrogen burn at vessel breach at Sequoyah; (3) BWR reactor building failure due to hydrogen; (4) Grand Gulf containment loads at vessel breach; (5) pressure increment in the Sequoyah containment at vessel breach; (6) loads at vessel breach: Surry; and (7) pressure increment in the Zion containment at vessel breach. The report begins with a brief discussion of the methods used to elicit the information from the experts. The information for each issue is then presented in five sections: (1) a brief definition of the issue, (2) a brief summary of the technical rationale supporting the distributions developed by each of the experts, (3) a brief description of the operations that the project staff performed on the raw elicitation results in order to aggregate the distributions, (4) the aggregated distributions, and (5) the individual expert elicitation summaries. The Molten Core/Containment Interaction Panel considered three issues. The results of the following two of these issues are presented in this document: (1) Peach Bottom drywell shell meltthrough; and (2) Grand Gulf pedestal erosion. 89 figs., 154 tabs.
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occuring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
SDR input power estimation algorithms
NASA Astrophysics Data System (ADS)
Briones, J. C.; Nappier, J. M.
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receive a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Third order TRANSPORT with MAD (Methodical Accelerator Design) input
Carey, D.C.
1988-09-20
This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix. (LSP)
An Integrative Method for Accurate Comparative Genome Mapping
Swidan, Firas; Rocha, Eduardo P. C; Shmoish, Michael; Pinter, Ron Y
2006-01-01
We present MAGIC, an integrative and accurate method for comparative genome mapping. Our method consists of two phases: preprocessing for identifying “maximal similar segments,” and mapping for clustering and classifying these segments. MAGIC's main novelty lies in its biologically intuitive clustering approach, which aims towards both calculating reorder-free segments and identifying orthologous segments. In the process, MAGIC efficiently handles ambiguities resulting from duplications that occurred before the speciation of the considered organisms from their most recent common ancestor. We demonstrate both MAGIC's robustness and scalability: the former is asserted with respect to its initial input and with respect to its parameters' values. The latter is asserted by applying MAGIC to distantly related organisms and to large genomes. We compare MAGIC to other comparative mapping methods and provide detailed analysis of the differences between them. Our improvements allow a comprehensive study of the diversity of genetic repertoires resulting from large-scale mutations, such as indels and duplications, including explicitly transposable and phagic elements. The strength of our method is demonstrated by detailed statistics computed for each type of these large-scale mutations. MAGIC enabled us to conduct a comprehensive analysis of the different forces shaping prokaryotic genomes from different clades, and to quantify the importance of novel gene content introduced by horizontal gene transfer relative to gene duplication in bacterial genome evolution. We use these results to investigate the breakpoint distribution in several prokaryotic genomes. PMID:16933978
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1999-01-01
A method for real-time estimation of parameters in a linear dynamic state space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight for indirect adaptive or reconfigurable control. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle HARV) were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than 1 cycle of the dominant dynamic mode natural frequencies, using control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements, and could be implemented aboard an aircraft in real time.
How to accurately bypass damage
Broyde, Suse; Patel, Dinshaw J.
2016-01-01
Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving a Schr\\"{o}dinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
REL - English Bulk Data Input.
ERIC Educational Resources Information Center
Bigelow, Richard Henry
A bulk data input processor which is available for the Rapidly Extensible Language (REL) English versions is described. In REL English versions, statements that declare names of data items and their interrelationships normally are lines from a terminal or cards in a batch input stream. These statements provide a convenient means of declaring some…
Accurate wavelength calibration method for flat-field grating spectrometers.
Du, Xuewei; Li, Chaoyang; Xu, Zhe; Wang, Qiuping
2011-09-01
A portable spectrometer prototype is built to study wavelength calibration for flat-field grating spectrometers. An accurate calibration method called parameter fitting is presented. Both optical and structural parameters of the spectrometer are included in the wavelength calibration model, which accurately describes the relationship between wavelength and pixel position. Along with higher calibration accuracy, the proposed calibration method can provide information about errors in the installation of the optical components, which will be helpful for spectrometer alignment. PMID:21929865
Accurate Molecular Polarizabilities Based on Continuum Electrostatics
Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.
2013-01-01
A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
PREVIMER : Meteorological inputs and outputs
NASA Astrophysics Data System (ADS)
Ravenel, H.; Lecornu, F.; Kerléguer, L.
2009-09-01
PREVIMER is a pre-operational system aiming to provide a wide range of users, from private individuals to professionals, with short-term forecasts about the coastal environment along the French coastlines bordering the English Channel, the Atlantic Ocean, and the Mediterranean Sea. Observation data and digital modelling tools first provide 48-hour (probably 96-hour by summer 2009) forecasts of sea states, currents, sea water levels and temperatures. The follow-up of an increasing number of biological parameters will, in time, complete this overview of coastal environment. Working in partnership with the French Naval Hydrographic and Oceanographic Service (Service Hydrographique et Océanographique de la Marine, SHOM), the French National Weather Service (Météo-France), the French public science and technology research institute (Institut de Recherche pour le Développement, IRD), the European Institute of Marine Studies (Institut Universitaire Européen de la Mer, IUEM) and many others, IFREMER (the French public institute fo marine research) is supplying the technologies needed to ensure this pertinent information, available daily on Internet at http://www.previmer.org, and stored at the Operational Coastal Oceanographic Data Centre. Since 2006, PREVIMER publishes the results of demonstrators assigned to limited geographic areas and to specific applications. This system remains experimental. The following topics are covered : Hydrodynamic circulation, sea states, follow-up of passive tracers, conservative or non-conservative (specifically of microbiological origin), biogeochemical state, primary production. Lastly, PREVIMER provides researchers and R&D departments with modelling tools and access to the database, in which the observation data and the modelling results are stored, to undertake environmental studies on new sites. The communication will focus on meteorological inputs to and outputs from PREVIMER. It will draw the lessons from almost 3 years during
Anomalous neuronal responses to fluctuated inputs
NASA Astrophysics Data System (ADS)
Hosaka, Ryosuke; Sakai, Yutaka
2015-10-01
The irregular firing of a cortical neuron is thought to result from a highly fluctuating drive that is generated by the balance of excitatory and inhibitory synaptic inputs. A previous study reported anomalous responses of the Hodgkin-Huxley neuron to the fluctuated inputs where an irregularity of spike trains is inversely proportional to an input irregularity. In the current study, we investigated the origin of these anomalous responses with the Hindmarsh-Rose neuron model, map-based models, and a simple mixture of interspike interval distributions. First, we specified the parameter regions for the bifurcations in the Hindmarsh-Rose model, and we confirmed that the model reproduced the anomalous responses in the dynamics of the saddle-node and subcritical Hopf bifurcations. For both bifurcations, the Hindmarsh-Rose model shows bistability in the resting state and the repetitive firing state, which indicated that the bistability was the origin of the anomalous input-output relationship. Similarly, the map-based model that contained bistability reproduced the anomalous responses, while the model without bistability did not. These results were supported by additional findings that the anomalous responses were reproduced by mimicking the bistable firing with a mixture of two different interspike interval distributions. Decorrelation of spike trains is important for neural information processing. For such spike train decorrelation, irregular firing is key. Our results indicated that irregular firing can emerge from fluctuating drives, even weak ones, under conditions involving bistability. The anomalous responses, therefore, contribute to efficient processing in the brain.
Anomalous neuronal responses to fluctuated inputs.
Hosaka, Ryosuke; Sakai, Yutaka
2015-10-01
The irregular firing of a cortical neuron is thought to result from a highly fluctuating drive that is generated by the balance of excitatory and inhibitory synaptic inputs. A previous study reported anomalous responses of the Hodgkin-Huxley neuron to the fluctuated inputs where an irregularity of spike trains is inversely proportional to an input irregularity. In the current study, we investigated the origin of these anomalous responses with the Hindmarsh-Rose neuron model, map-based models, and a simple mixture of interspike interval distributions. First, we specified the parameter regions for the bifurcations in the Hindmarsh-Rose model, and we confirmed that the model reproduced the anomalous responses in the dynamics of the saddle-node and subcritical Hopf bifurcations. For both bifurcations, the Hindmarsh-Rose model shows bistability in the resting state and the repetitive firing state, which indicated that the bistability was the origin of the anomalous input-output relationship. Similarly, the map-based model that contained bistability reproduced the anomalous responses, while the model without bistability did not. These results were supported by additional findings that the anomalous responses were reproduced by mimicking the bistable firing with a mixture of two different interspike interval distributions. Decorrelation of spike trains is important for neural information processing. For such spike train decorrelation, irregular firing is key. Our results indicated that irregular firing can emerge from fluctuating drives, even weak ones, under conditions involving bistability. The anomalous responses, therefore, contribute to efficient processing in the brain. PMID:26565270
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Among profiles, helix and tooth thickness pitch is one of the most important parameters of an involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines as a variant of a CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute the Physikalisch-Technische Bundesanstalt (PTB) have each developed independently highly accurate pitch calibration methods applicable to CMM or gear measuring machines. Both calibration methods are based on the so-called closure technique which allows the separation of the systematic errors of the measurement device and the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.