Input Type and Parameter Resetting: Is Naturalistic Input Necessary?
ERIC Educational Resources Information Center
Rothman, Jason; Iverson, Michael
2007-01-01
It has been argued that extended exposure to naturalistic input provides L2 learners with more of an opportunity to converge on target morphosyntactic competence as compared to classroom-only environments, given that the former provide more positive evidence of less salient linguistic properties than the latter (e.g., Isabelli 2004). Implicitly,…
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756
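The model-versus-measurement comparison above is summarized with Spearman rank correlations (>0.6). As a minimal illustration of that statistic — computed as the Pearson correlation of ranks, with synthetic field-strength values standing in for the study's data — a sketch:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)  # double argsort yields 0-based ranks
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
measured = rng.lognormal(mean=0.0, sigma=1.0, size=200)  # hypothetical measured RF-EMF levels
modelled = measured ** 0.8                               # a monotone transform of the truth
print(round(spearman(modelled, measured), 3))            # monotone relation => rho = 1.0
```

Because Spearman correlation only depends on ranks, a model that gets the ordering of exposure levels right scores highly even if absolute levels are off — which is why it suits the paper's goal of *ranking* exposure in epidemiological studies.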
Agricultural and Environmental Input Parameters for the Biosphere Model
Kaylie Rasmuson; Kurt Rautenstrauch
2003-06-20
This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Albert, Carlo; Rieckermann, Jörg; Reichert, Peter
2016-04-01
Rainfall input uncertainty is one of the major concerns in hydrological modeling. Unfortunately, during inference, input errors are usually neglected, which can lead to biased parameters and implausible predictions. Rainfall multipliers can reduce this problem but still fail when the observed input (precipitation) has a different temporal pattern from the true one or if the true nonzero input is not detected. In this study, we propose an improved input error model which is able to overcome these challenges and to assess and reduce input uncertainty. We formulate the average precipitation over the watershed as a stochastic input process (SIP) and, together with a model of the hydrosystem, include it in the likelihood function. During statistical inference, we use "noisy" input (rainfall) and output (runoff) data to learn about the "true" rainfall, model parameters, and runoff. We test the methodology with the rainfall-discharge dynamics of a small urban catchment. To assess its advantages, we compare SIP with simpler methods of describing uncertainty within statistical inference: (i) standard least squares (LS), (ii) bias description (BD), and (iii) rainfall multipliers (RM). We also compare two scenarios: accurate versus inaccurate forcing data. Results show that when inferring the input with SIP and using inaccurate forcing data, the whole-catchment precipitation can still be realistically estimated and thus physical parameters can be "protected" from the corrupting impact of input errors. While correcting the output rather than the input, BD inferred similarly unbiased parameters. This is not the case with LS and RM. During validation, SIP also delivers realistic uncertainty intervals for both rainfall and runoff. Thus, the technique presented is a significant step toward better quantifying input uncertainty in hydrological inference. As a next step, SIP will have to be combined with a technique addressing model structure uncertainty.
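The corrupting effect of input error on parameter inference described above can be illustrated with a toy example. The following sketch uses a hypothetical one-parameter linear reservoir (not the authors' hydrological model or their SIP inference): fitting against runoff generated from the true rainfall recovers the true parameter, while fitting with multiplicatively corrupted rainfall generally does not.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(P, k):
    """Toy linear reservoir: storage gains rainfall P[t], releases Q[t] = k * storage."""
    S, Q = 0.0, np.empty_like(P)
    for t, p in enumerate(P):
        S = S + p
        Q[t] = k * S
        S -= Q[t]
    return Q

def fit_k(P, Q_obs, grid=np.linspace(0.05, 0.95, 91)):
    """Least-squares fit of the release parameter k by grid search."""
    sse = [np.sum((simulate(P, k) - Q_obs) ** 2) for k in grid]
    return float(grid[int(np.argmin(sse))])

k_true = 0.30
P_true = rng.gamma(shape=0.4, scale=5.0, size=300)    # synthetic "true" rainfall
Q_obs = simulate(P_true, k_true)                      # noise-free runoff observations
P_noisy = P_true * rng.lognormal(0.0, 0.5, size=300)  # corrupted rainfall observations

print(fit_k(P_true, Q_obs))    # recovers ~0.30
print(fit_k(P_noisy, Q_obs))   # biased by input error
```

Approaches such as SIP aim to infer the true input jointly with the parameters precisely so that the second fit does not contaminate the physical parameter estimate.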
Accurate parameter estimation for unbalanced three-phase system.
Chen, Yuan; So, Hing Cheung
2014-01-01
Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS. PMID:25162056
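The αβ-transformation step above can be sketched directly. The following example applies the amplitude-invariant Clarke transform to a synthetic balanced three-phase signal and then reads the frequency off the phase slope of the resulting complex signal — a simple linear-fit alternative, not the paper's Newton-Raphson NLS estimator; the 50 Hz / 1 kHz numbers are illustrative assumptions.

```python
import numpy as np

def clarke(va, vb, vc):
    """Amplitude-invariant Clarke (alpha-beta) transform of three-phase samples."""
    alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    beta = (1.0 / np.sqrt(3.0)) * (vb - vc)
    return alpha, beta

f, fs, n = 50.0, 1000.0, 400            # hypothetical 50 Hz system sampled at 1 kHz
t = np.arange(n) / fs
theta = 2.0 * np.pi * f * t
va = np.cos(theta)
vb = np.cos(theta - 2.0 * np.pi / 3.0)
vc = np.cos(theta + 2.0 * np.pi / 3.0)

alpha, beta = clarke(va, vb, vc)
# For a balanced system, alpha + j*beta = exp(j*theta): phase grows linearly at 2*pi*f
phase = np.unwrap(np.angle(alpha + 1j * beta))
f_est = np.polyfit(t, phase, 1)[0] / (2.0 * np.pi)
print(round(f_est, 3))   # ~50.0
```

Under unbalance or noise, the phase is no longer perfectly linear, which is what motivates the more careful NLS formulation in the paper.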
Sensitivity of acoustic predictions to variation of input parameters
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Burley, Casey L.; Marcolini, Michael A.
1994-01-01
Rotor noise prediction codes predict the thickness and loading noise produced by a helicopter rotor, given the blade motion, rotor operating conditions, and fluctuating force distribution over the blade surface. However, the criticality of these various inputs, and their respective effects on the predicted acoustic field, have never been fully addressed. This paper examines the importance of these inputs, and the sensitivity of the acoustic predictions to a variation of each parameter. The effects of collective and cyclic pitch, as well as coning and cyclic flapping, are presented. Blade loading inputs are examined to determine the necessary spatial and temporal resolution, as well as the importance of the chordwise distribution. The acoustic predictions show regions in the acoustic field where significant errors occur when simplified blade motions or blade loadings are used. An assessment of the variation in the predicted acoustic field is balanced by a consideration of Central Processing Unit (CPU) time necessary for the various approximations.
Sensitivity of acoustic predictions to variation of input parameters
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Marcolini, Michael A.; Burley, Casey L.
1991-01-01
The noise prediction code WOPWOP predicts the thickness and loading noise produced by a helicopter rotor, given the blade motion, rotor operating conditions, and fluctuating force distribution over the blade surface. However, the criticality of these various inputs, and their respective effects on the predicted acoustic field, have never been fully addressed. This paper examines the importance of these inputs, and the sensitivity of the acoustic predictions to a variation of each parameter. The effects of collective and cyclic pitch, as well as coning and flapping, are presented. Blade loading inputs are examined to determine the necessary spatial and temporal resolution, as well as the importance of the chordwise distribution. The acoustic predictions show regions in the acoustic field where significant errors occur when simplified blade motions or blade loadings are used. An assessment of the variation in the predicted acoustic field is balanced by a consideration of CPU time necessary for the various approximations.
Environmental Transport Input Parameters for the Biosphere Model
M. Wasiolek
2004-09-10
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]).
Agricultural and Environmental Input Parameters for the Biosphere Model
K. Rasmuson; K. Rautenstrauch
2004-09-14
This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.
Inhalation Exposure Input Parameters for the Biosphere Model
K. Rautenstrauch
2004-09-10
This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.
Accurate and robust estimation of camera parameters using RANSAC
NASA Astrophysics Data System (ADS)
Zhou, Fuqiang; Cui, Yi; Wang, Yexin; Liu, Liu; Gao, He
2013-03-01
Camera calibration plays an important role in the field of machine vision applications. The popularly used calibration approach based on a 2D planar target sometimes fails to give reliable and accurate results due to inaccurate or incorrect localization of feature points. To solve this problem, an accurate and robust estimation method for camera parameters based on the RANSAC algorithm is proposed to detect unreliable feature points and provide the corresponding solutions. Through this method, most of the outliers are removed and the calibration errors that are the main factors influencing measurement accuracy are reduced. Both simulated and real experiments have been carried out to evaluate the performance of the proposed method, and the results show that it is robust under large-noise conditions and efficient at improving calibration accuracy compared with the original, uncorrected calibration.
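The RANSAC core loop used above — repeatedly fit a minimal sample, score it by its consensus set, keep the best — can be shown on a toy line-fitting problem. This is a generic sketch of the algorithm, not the paper's camera-parameter model; all data and thresholds below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def ransac_line(x, y, n_iter=100, thresh=0.1):
    """Fit y = m*x + b robustly: sample minimal 2-point sets, keep the largest consensus set."""
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                       # degenerate sample, skip
        m = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - m * x[i]
        inliers = np.abs(y - (m * x + b)) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final refit by least squares on the winning consensus set
    m, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return m, b, best_inliers

x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0                          # exact inliers on y = 2x + 1
y[:30] += rng.uniform(5, 20, 30)           # 30% gross outliers
m, b, inl = ransac_line(x, y)
print(round(m, 3), round(b, 3))            # ~2.0, ~1.0 despite the outliers
```

A plain least-squares fit on the same data would be pulled toward the outliers; RANSAC's refit on the consensus set is what "removes most of the outliers" in the paper's terms.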
Machine learning of parameters for accurate semiempirical quantum chemical calculations
Dral, Pavlo O.; von Lilienfeld, O. Anatole; Thiel, Walter
2015-04-14
We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules.
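The idea of learning corrections to a cheap method against accurate reference data can be illustrated in miniature. The sketch below is a toy analogue only — synthetic descriptors and energies, ridge regression in place of the paper's ML machinery, and nothing specific to OM2 — showing the test-set mean absolute error dropping once a learned correction is applied.

```python
import numpy as np

rng = np.random.default_rng(7)

n, d = 500, 5
X = rng.normal(size=(n, d))                                       # hypothetical molecular descriptors
E_ref = X @ rng.normal(size=d) + rng.normal(scale=0.1, size=n)    # stand-in "ab initio" reference
E_cheap = E_ref + X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2])        # cheap method with systematic error

# Learn the correction (E_ref - E_cheap) from descriptors by ridge regression
train, test = slice(0, 400), slice(400, None)
lam = 1e-3
A = X[train].T @ X[train] + lam * np.eye(d)
w = np.linalg.solve(A, X[train].T @ (E_ref[train] - E_cheap[train]))

mae = lambda a, b: float(np.mean(np.abs(a - b)))
mae_raw = mae(E_cheap[test], E_ref[test])                 # error of the uncorrected cheap method
mae_corrected = mae(E_cheap[test] + X[test] @ w, E_ref[test])
print(mae_raw, mae_corrected)                             # corrected MAE is far smaller
```

Because the synthetic error here is exactly linear in the descriptors, the correction is almost perfect; in the real ML-SQC setting the gain is the more modest but still substantial 6.3 → 1.7 kcal/mol reported above.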
Macroscopic singlet oxygen model incorporating photobleaching as an input parameter
NASA Astrophysics Data System (ADS)
Kim, Michele M.; Finlay, Jarod C.; Zhu, Timothy C.
2015-03-01
A macroscopic singlet oxygen model for photodynamic therapy (PDT) has been used extensively to calculate the reacted singlet oxygen concentration for various photosensitizers. The four photophysical parameters (ξ, σ, β, δ) and threshold singlet oxygen dose ([1O2]r,sh) can be found for various drugs and drug-light intervals using a fitting algorithm. The input parameters for this model include the fluence, photosensitizer concentration, optical properties, and necrosis radius. An additional input variable of photobleaching was implemented in this study to optimize the results. Photobleaching was measured by using the pre-PDT and post-PDT sensitizer concentrations. Using the RIF model of murine fibrosarcoma, mice were treated with a linear source with fluence rates from 12 - 150 mW/cm and total fluences from 24 - 135 J/cm. The two main drugs investigated were benzoporphyrin derivative monoacid ring A (BPD) and 2-[1-hexyloxyethyl]-2-devinyl pyropheophorbide-a (HPPH). Previously published photophysical parameters were fine-tuned and verified using photobleaching as the additional fitting parameter. Furthermore, photobleaching can be used as an indicator of the robustness of the model for the particular mouse experiment by comparing the experimental and model-calculated photobleaching ratio.
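The photobleaching input used above is derived from pre- and post-PDT sensitizer concentrations. Under a deliberately simplified first-order bleaching assumption (not the full macroscopic singlet oxygen model, which couples bleaching to the photophysical parameters), a bleaching rate per unit fluence can be recovered from those two measurements; all numbers below are hypothetical.

```python
import math

def bleach_rate(c_pre, c_post, fluence):
    """First-order model C_post = C_pre * exp(-k * fluence); solve for k."""
    return math.log(c_pre / c_post) / fluence

# Hypothetical measurement: sensitizer concentration halves over 50 J/cm delivered
k = bleach_rate(c_pre=1.0, c_post=0.5, fluence=50.0)
print(round(k, 5))                         # ln(2)/50 ≈ 0.01386 per J/cm

# Sanity check: the fitted rate reproduces the observed post-PDT concentration
c_check = 1.0 * math.exp(-k * 50.0)
print(round(c_check, 3))                   # 0.5
```

The experimental photobleaching ratio (c_post / c_pre) is what the study compares against the model-calculated ratio as a robustness check.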
Environmental Transport Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-06-27
This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values
Inhalation Exposure Input Parameters for the Biosphere Model
M. Wasiolek
2006-06-05
This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This report is concerned primarily with the
Inhalation Exposure Input Parameters for the Biosphere Model
M. A. Wasiolek
2003-09-24
This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the
Soil-related Input Parameters for the Biosphere Model
A. J. Smith
2003-07-02
This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash
Hendricks, Terry J.; Karri, Naveen K.
2007-06-30
Advanced, direct thermal energy conversion technologies are receiving increased research attention in order to recover waste thermal energy in advanced vehicles and industrial processes. Advanced thermoelectric (TE) systems necessarily require integrated system-level analyses to establish accurate optimum system designs. Past system-level design and analysis has relied on well-defined deterministic input parameters even though many critically important environmental and system design parameters in the above mentioned applications are often randomly variable, sometimes according to complex relationships, rather than discrete, well-known deterministic variables. This work describes new research and development creating techniques and capabilities for probabilistic design and analysis of advanced TE power generation systems to quantify the effects of randomly uncertain design inputs in determining more robust optimum TE system designs and expected outputs. Selected case studies involving stochastic TE material properties and coupled multi-variable stochasticity in key environmental and design parameters are presented and discussed to demonstrate key impacts from considering stochastic design inputs on the TE design optimization process. Critical findings show that: 1) stochastic Gaussian input distributions may produce Gaussian or non-Gaussian outcome probability distributions for critical TE design parameters, and 2) probabilistic input considerations can create design effects that warrant significant modifications to deterministically-derived optimum TE system designs. Magnitudes and directions of these design modifications are quantified for selected TE system design analysis cases.
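Finding (1) above — Gaussian inputs producing non-Gaussian outputs — falls out of straightforward Monte Carlo propagation through any nonlinear response. The sketch below uses a toy quadratic map standing in for a TE system response (nothing here reflects the authors' actual system model): the input is Gaussian by construction, yet the output distribution is visibly skewed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian input: e.g. a voltage-like design variable with 30% relative spread (hypothetical)
V = rng.normal(loc=1.0, scale=0.3, size=100_000)
# Toy nonlinear system response: output scales with the square of the input
P = V ** 2

def skewness(x):
    """Sample skewness: third standardized central moment."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

print(round(skewness(V), 2))   # ~0: the input is Gaussian
print(round(skewness(P), 2))   # clearly positive: the output is non-Gaussian
```

A deterministic analysis evaluated at the mean input (P = 1.0 here) would also miss that the mean output of the nonlinear map differs from the map of the mean input — one reason probabilistic inputs can shift the optimum design.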
Direct computation of parameters for accurate polarizable force fields
Verstraelen, Toon; Vandenbrande, Steven; Ayers, Paul W.
2014-11-21
We present an improved electronic linear response model to incorporate polarization and charge-transfer effects in polarizable force fields. This model is a generalization of the Atom-Condensed Kohn-Sham Density Functional Theory (DFT), approximated to second order (ACKS2): it can now be defined with any underlying variational theory (next to KS-DFT) and it can include atomic multipoles and off-center basis functions. Parameters in this model are computed efficiently as expectation values of an electronic wavefunction, obviating the need for their calibration, regularization, and manual tuning. In the limit of a complete density and potential basis set in the ACKS2 model, the linear response properties of the underlying theory for a given molecular geometry are reproduced exactly. A numerical validation with a test set of 110 molecules shows that very accurate models can already be obtained with fluctuating charges and dipoles. These features greatly facilitate the development of polarizable force fields.
Accurate 3D quantification of the bronchial parameters in MDCT
NASA Astrophysics Data System (ADS)
Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.
2005-08-01
The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchi contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error less than 5.1%.
Soil-Related Input Parameters for the Biosphere Model
A. J. Smith
2004-09-09
This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This
A Study on the Effect of Input Parameters on Springback Prediction Accuracy
NASA Astrophysics Data System (ADS)
Han, Y. S.; Yang, W. H.; Choi, K. Y.; Kim, B. H.
2011-08-01
In this study, the input parameters that affect springback simulation accuracy for a member part are investigated using Taguchi's method as a six-sigma tool, on the basis of experiments, in order to obtain a more accurate springback prediction in Pamstamp 2G. The best combination of input parameters for higher springback prediction accuracy, determined for the member part, is then applied to a fender part. Cracks and wrinkles in the drawing and flanging operations must be removed before the springback can be predicted with high accuracy. Springback compensation on the basis of the simulation was also carried out. It is concluded that 95% dimensional accuracy of the springback prediction is achieved in comparison with the tryout panel.
Loewe, Axel; Wilhelms, Mathias; Schmid, Jochen; Krause, Mathias J.; Fischer, Fathima; Thomas, Dierk; Scholz, Eberhard P.; Dössel, Olaf; Seemann, Gunnar
2016-01-01
Computational models of cardiac electrophysiology have provided insights into arrhythmogenesis and paved the way toward tailored therapies in recent years. To fully leverage in silico models in future research, however, these models need to be adapted to reflect pathologies, genetic alterations, or pharmacological effects. A common approach is to leave the structure of established models unaltered and estimate the values of a set of parameters. Today's high-throughput patch clamp data acquisition methods require robust, unsupervised algorithms that estimate parameters both accurately and reliably. In this work, two classes of optimization approaches are evaluated: gradient-based trust-region-reflective and derivative-free particle swarm algorithms. Using synthetic input data and different ion current formulations from the Courtemanche et al. electrophysiological model of human atrial myocytes, we show that neither of the two schemes alone succeeds in meeting all requirements. Sequential combination of the two algorithms improved the performance to some extent, but not satisfactorily. Thus, we propose a novel hybrid approach coupling the two algorithms in each iteration. This hybrid approach yielded very accurate estimates with minimal dependency on the initial guess using synthetic input data for which a ground truth parameter set exists. When applied to measured data, the hybrid approach yielded the best fit, again with minimal variation. Using the proposed algorithm, a single run is sufficient to estimate the parameters. The degree of superiority over the other investigated algorithms in terms of accuracy and robustness depended on the type of current. In contrast to the non-hybrid approaches, the proposed method proved to be optimal for data of arbitrary signal-to-noise ratio. The hybrid algorithm proposed in this work provides an important tool to integrate experimental data into computational models both accurately and robustly, allowing assessment of the often non
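The coupled scheme lends itself to a compact sketch. In the hypothetical example below, a single-exponential "current" stands in for the Courtemanche et al. formulations, SciPy's trust-region-reflective least-squares solver plays the gradient-based role, and a minimal particle swarm supplies the derivative-free exploration; all hyperparameters are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Synthetic "ion current": I(t) = g * exp(-t / tau); g and tau are the
# parameters to recover (a hypothetical stand-in model).
t = np.linspace(0, 50, 200)
true_p = np.array([2.0, 8.0])          # [g, tau]
data = true_p[0] * np.exp(-t / true_p[1])

def residuals(p):
    return p[0] * np.exp(-t / p[1]) - data

def cost(p):
    return np.sum(residuals(p) ** 2)

lo, hi = np.array([0.1, 1.0]), np.array([10.0, 50.0])
n_part, n_iter = 20, 15
pos = rng.uniform(lo, hi, size=(n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    # Particle-swarm step (global exploration).
    r1, r2 = rng.random((2, n_part, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
    # Gradient-based refinement of the swarm's best (local exploitation),
    # coupled inside the same iteration as in the hybrid scheme.
    res = least_squares(residuals, gbest, bounds=(lo, hi),
                        method='trf', max_nfev=20)
    if cost(res.x) < cost(gbest):
        gbest = res.x
        i = np.argmin(pbest_f)
        pbest[i], pbest_f[i] = gbest, cost(gbest)

print(gbest)  # converges to approximately [2.0, 8.0]
```

The swarm keeps the search global while the trust-region step sharpens the current best each iteration, which is the essence of the coupling described in the abstract.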
A generalized multiple-input, multiple-output modal parameter estimation algorithm
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.; Blair, M. A.
1984-01-01
A new method for experimental determination of the modal parameters of a structure is presented. The method allows for multiple input forces to be applied simultaneously, and for an arbitrary number of acceleration response measurements to be employed. These data are used to form the equations of motion for a damped linear elastic structure. The modal parameters are then obtained through an eigenvalue technique. In conjunction with the development of the equations, an extensive computer simulation study was performed. The results of the study show a marked improvement in the mode shape identification for closely-spaced modes as the number of applied forces is increased. Also demonstrated is the influence of noise on the method's ability to identify accurate modal parameters. Here again, an increase in the number of exciters leads to a significant improvement in the identified parameters.
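The eigenvalue step at the heart of such methods can be illustrated on a toy structure: recast a damped linear system in first-order (state-space) form and read the modal frequencies and damping ratios off the eigen-decomposition. The matrices below are invented for illustration, not taken from the paper.

```python
import numpy as np

# 2-DOF mass-spring-damper with light proportional damping (illustrative).
M = np.diag([1.0, 1.0])
K = np.array([[20.0, -10.0], [-10.0, 10.0]])
C = 0.02 * K

# First-order form A = [[0, I], [-M^-1 K, -M^-1 C]]; eigenvalues give the
# modal frequencies and damping ratios, eigenvectors the mode shapes.
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])
lam, vec = np.linalg.eig(A)
lam = lam[np.imag(lam) > 0]            # one eigenvalue per underdamped mode
wn = np.abs(lam)                       # natural frequencies (rad/s)
zeta = -np.real(lam) / wn              # damping ratios
print(np.sort(wn), np.sort(zeta))
```

For proportional damping the eigenvalue magnitudes equal the undamped natural frequencies exactly, which makes the decomposition easy to verify.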
Identification of accurate nonlinear rainfall-runoff models with unique parameters
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N.
2009-04-01
We propose a strategy to identify models with unique parameters that yield accurate streamflow predictions, given a time series of rainfall inputs. The procedure consists of five general steps. First, an a priori range of model structures is specified based on prior general and site-specific hydrologic knowledge. To this end, we rely on a flexible model code that allows a specification of a wide range of model structures, from simple to complex. Second, using global optimization, each model structure is calibrated to a record of rainfall-runoff data, yielding optimal parameter values for each model structure. Third, the accuracy of each model structure is determined by estimating model prediction errors using independent validation and statistical theory. Fourth, parameter identifiability of each calibrated model structure is estimated by means of Markov chain Monte Carlo (MCMC) simulation. Finally, an assessment is made of each model structure in terms of its accuracy in mimicking rainfall-runoff processes (step 3) and the uniqueness of its parameters (step 4). The procedure results in the identification of the most complex and accurate model supported by the data, without causing parameter equifinality. As such, it provides insight into the information content of the data for identifying nonlinear rainfall-runoff models. We illustrate the method using rainfall-runoff data records from several MOPEX basins in the US.
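Steps 2 and 4 can be sketched for a deliberately minimal structure: a single linear reservoir calibrated by global optimization, followed by a random-walk Metropolis chain to gauge how tightly the data constrain the parameter. The model, noise level, and bounds are assumptions for illustration, not the flexible model code used in the study.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

# Synthetic rainfall driving a single-bucket linear reservoir:
# storage gains P each step, then releases Q = S / k.
P = rng.exponential(2.0, 100)

def simulate(k, P):
    S, Q = 0.0, np.empty_like(P)
    for i, p in enumerate(P):
        S += p
        Q[i] = S / k
        S -= Q[i]
    return Q

k_true = 5.0
Q_obs = simulate(k_true, P) + rng.normal(0, 0.01, P.size)

# Step 2: global optimization of the structure's parameter.
sse = lambda k: np.sum((simulate(k[0], P) - Q_obs) ** 2)
k_opt = differential_evolution(sse, [(0.5, 50.0)], seed=2).x[0]

# Step 4: random-walk Metropolis to gauge parameter identifiability.
sigma2 = 0.01 ** 2
logp = lambda k: -sse([k]) / (2 * sigma2)
chain, k = [], k_opt
lp = logp(k)
for _ in range(2000):
    kp = k + rng.normal(0, 0.05)
    if kp > 0:
        lpp = logp(kp)
        if np.log(rng.random()) < lpp - lp:
            k, lp = kp, lpp
    chain.append(k)
post_sd = np.std(chain[500:])
print(k_opt, post_sd)   # k_opt near 5.0; small posterior spread => identifiable
```

A broad posterior spread here would signal equifinality, i.e. the data do not uniquely pin down the parameter, which is exactly what step 4 screens for.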
Uncertainty related to input parameters of ¹³⁷Cs soil redistribution model for undisturbed fields.
Iurian, Andra-Rada; Mabit, Lionel; Cosma, Constantin
2014-10-01
This study presents an alternative method to empirically establish the effective diffusion coefficient and the convective velocity of ¹³⁷Cs in undisturbed soils. This approach offers the possibility to improve the parameterisation and the accuracy of the ¹³⁷Cs Diffusion and Migration Model (DMM) used to assess soil erosion magnitudes. The impact of the different input parameters of this radiometric model on the derived soil redistribution rates has been determined for a Romanian pastureland located in the northwest extremity of the Transylvanian Plain. By fitting the convection-diffusion equation to the available experimental data, the diffusion coefficient and convection velocity of ¹³⁷Cs in soil could be determined; 72% of the ¹³⁷Cs soil content could be attributed to the ¹³⁷Cs fallout originating from Chernobyl. The medium-term net erosion rate obtained with the calculated input parameters reached -6.6 t ha⁻¹ yr⁻¹. The model shows great sensitivity to parameter estimation, and the calculated erosion rates for undisturbed landscapes can be strongly affected if the input parameters are not accurately determined from the experimental data set. Upper and lower bounds should be established based on the determined uncertainty budget to obtain reliable estimates of the derived redistribution rates. PMID:24929506
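The fitting step can be illustrated with a simplified infinite-medium solution of the 1-D convection-diffusion equation for an instantaneous surface deposition; the DMM's actual boundary conditions differ, and the depth profile, deposition year, and parameter values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified Green's-function solution for a pulse input at t = 0:
#   C(z, t) = A / sqrt(4*pi*D*t) * exp(-(z - v*t)^2 / (4*D*t))
t = 23.0                                   # years since deposition (hypothetical)

def profile(z, A, D, v):
    return A / np.sqrt(4 * np.pi * D * t) * np.exp(-(z - v * t) ** 2 / (4 * D * t))

# Synthetic "measured" 137Cs depth profile (activity vs depth z in cm).
z = np.linspace(0, 30, 16)
A_true, D_true, v_true = 500.0, 0.4, 0.25  # illustrative: cm^2/yr, cm/yr
rng = np.random.default_rng(3)
c_obs = profile(z, A_true, D_true, v_true) * (1 + rng.normal(0, 0.03, z.size))

# Fit the effective diffusion coefficient D and convective velocity v.
popt, pcov = curve_fit(profile, z, c_obs, p0=[300.0, 1.0, 0.1])
A_fit, D_fit, v_fit = popt
perr = np.sqrt(np.diag(pcov))              # 1-sigma parameter uncertainties
print(D_fit, v_fit)
```

The diagonal of the covariance matrix gives exactly the kind of uncertainty budget the abstract recommends propagating into upper and lower bounds on the derived erosion rates.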
Application of optimal input synthesis to aircraft parameter identification
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.; Mehra, R. K.
1976-01-01
The Frequency Domain Input Synthesis procedure is used in identifying the stability and control derivatives of an aircraft. By using a frequency-domain approach, one can handle criteria that are not easily handled by the time-domain approaches. Numerical results are presented for optimal elevator deflections to estimate the longitudinal stability and control derivatives subject to root-mean square constraints on the input. The applicability of the steady state optimal inputs to finite duration flight testing is investigated. The steady state approximation of frequency-domain synthesis is good for data lengths greater than two time cycles for the short period mode of the aircraft longitudinal motions. Phase relationships between different frequency components become important for shorter data lengths. The frequency domain inputs are shown to be much better than the conventional doublet inputs.
Flight investigation of various control inputs intended for parameter estimation
NASA Technical Reports Server (NTRS)
Shafer, M. F.
1984-01-01
NASA's F-8 digital fly-by-wire aircraft has been subjected to stability and control derivative assessments, leading to the proposal of improved control inputs for more efficient control derivative estimation. This will reduce program costs by reducing flight test and data analysis requirements. Inputs were divided into sinusoidal types and cornered types. Those with corners produced the best set of stability and control derivatives for the unaugmented flight control system mode. Small inputs are noted to have provided worse derivatives than larger ones.
Accurate lattice parameter measurements of stoichiometric uranium dioxide
NASA Astrophysics Data System (ADS)
Leinders, Gregory; Cardinaels, Thomas; Binnemans, Koen; Verwerft, Marc
2015-04-01
The paper presents and discusses lattice parameter analyses of pure, stoichiometric UO2. Attention was paid to prepare stoichiometric samples and to maintain stoichiometry throughout the analyses. The lattice parameter of UO2.000±0.001 was evaluated as being 547.127 ± 0.008 pm at 20 °C, which is substantially higher than many published values for the UO2 lattice constant, with a precision improved by about one order of magnitude. The higher value of the lattice constant is mainly attributed to the avoidance of hyperstoichiometry in the present study and to a minor extent to the use of the currently accepted Cu Kα1 X-ray wavelength value. Many of the early studies used Cu Kα1 wavelength values that differ from the currently accepted value, which also contributed to an underestimation of the true lattice parameter.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive but the sequence of rudder input followed by aileron input or aileron followed by rudder gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.
Clinically accurate fetal ECG parameters acquired from maternal abdominal sensors
CLIFFORD, Gari; SAMENI, Reza; WARD, Jay; ROBINSON, Julian; WOLFBERG, Adam J.
2011-01-01
OBJECTIVE To evaluate the accuracy of a novel system for measuring fetal heart rate and ST-segment changes using non-invasive electrodes on the maternal abdomen. STUDY DESIGN Fetal ECGs were recorded using abdominal sensors from 32 term laboring women who had a fetal scalp electrode (FSE) placed for a clinical indication. RESULTS Good quality data for FHR estimation was available in 91.2% of the FSE segments, and 89.9% of the abdominal electrode segments. The root mean square (RMS) error between the FHR data calculated by both methods over all processed segments was 0.36 beats per minute. ST deviation from the isoelectric point ranged from 0 to 14.2% of R-wave amplitude. The RMS error between the ST change calculated by both methods averaged over all processed segments was 3.2%. CONCLUSION FHR and ST change acquired from the maternal abdomen is highly accurate and on average is clinically indistinguishable from FHR and ST change calculated using FSE data. PMID:21514560
Predicting accurate line shape parameters for CO2 transitions
NASA Astrophysics Data System (ADS)
Gamache, Robert R.; Lamouroux, Julien
2013-11-01
The vibrational dependence of CO2 half-widths and line shifts is given by a modification of the model proposed by Gamache and Hartmann [Gamache R, Hartmann J-M. J Quant Spectrosc Radiat Transfer 2004;83:119]. This model allows the half-widths and line shifts for a ro-vibrational transition to be expressed in terms of the number of vibrational quanta exchanged in the transition raised to a power and a reference ro-vibrational transition. Calculations were made for 24 bands for lower rotational quantum numbers from 0 to 160 for N2-, O2-, air-, and self-collisions with CO2. These data were extrapolated to J″=200 to accommodate several databases. Comparison of the CRB calculations with measurement gives very high confidence in the data. In the model, a Quantum Coordinate is defined by (c1|Δν1| + c2|Δν2| + c3|Δν3|)^p. The power p is adjusted and a linear least-squares fit of the data to the model expression is made. The procedure is iterated on the correlation coefficient, R, until [|R|-1] is less than a threshold. The results demonstrate the appropriateness of the model. The model allows the determination of the slope and intercept as a function of rotational transition, broadening gas, and temperature. From the data of the fits, the half-width, line shift, and the temperature dependence of the half-width can be estimated for any ro-vibrational transition, allowing spectroscopic CO2 databases to have complete information for the line shape parameters.
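The fitting procedure (choose p, fit linearly, iterate on the correlation coefficient) can be cast as a one-dimensional optimization over p. The band list, the weights c1..c3, and the half-width values below are synthetic placeholders, not the published CO2 data.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import pearsonr

# Hypothetical vibrational quanta exchanged for a set of bands, and
# illustrative weights c1..c3.
rng = np.random.default_rng(4)
dnu = rng.integers(0, 4, size=(30, 3)).astype(float)
c = np.array([1.0, 0.5, 1.5])
base = dnu @ c + 1.0                 # c1|dv1| + c2|dv2| + c3|dv3|

# Synthetic half-widths that vary linearly with base**p_true.
p_true = 0.8
gamma = 0.07 - 0.004 * base ** p_true + rng.normal(0, 1e-5, base.size)

def misfit(p):
    # Linear least-squares fit of gamma against the Quantum Coordinate
    # Q = (c1|dv1| + c2|dv2| + c3|dv3|)**p; return the distance of |R| from 1.
    q = base ** p
    r, _ = pearsonr(q, gamma)
    return 1.0 - abs(r)

res = minimize_scalar(misfit, bounds=(0.1, 3.0), method='bounded')
p_best = res.x
slope, intercept = np.polyfit(base ** p_best, gamma, 1)
print(p_best, slope, intercept)
```

Once p is fixed, the slope and intercept of the linear fit are exactly the quantities the abstract says can be tabulated per rotational transition, broadening gas, and temperature.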
Sprung, J.L.; Jow, H-N.; Rollstin, J.A.; Helton, J.C.
1990-12-01
Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.
Evaluation of severe accident risks: Quantification of major input parameters
Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D.; Amos, C.N.; Helton, J.; Boyd, G.
1992-06-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but on determining the distribution of risk and assessing the uncertainties that account for the breadth of this distribution. Off-site risk initiated by events both internal and external to the power station was assessed. Much of this important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Source Term Panel.
Evaluation of severe accident risks: Quantification of major input parameters
Breeding, R.J.; Harper, F.T.; Brown, T.D.; Gregory, J.J.; Payne, A.C.; Gorham, E.D.; Murfin, W.; Amos, C.N.
1992-03-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom, and Grand Gulf. The emphasis in this risk analysis was not on determining a so-called point estimate of risk. Rather, it was to determine the distribution of risk, and to discover the uncertainties that account for the breadth of this distribution. Off-site risk initiated by events both internal and external to the power station was assessed. Much of the important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Structural Response Panel.
Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1993-01-01
The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple-input design capability, with optional inclusion of a constraint allowing only one control to move at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
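The constrained design criterion can be made concrete on a toy problem. The sketch below picks one square-wave amplitude per stage for a scalar discrete-time model, maximizing the Fisher information for the parameter a under amplitude and output constraints; it enumerates the small stagewise search space rather than applying dynamic programming proper, and the model and constraint values are invented.

```python
import numpy as np
from itertools import product

# Scalar model x[k+1] = a*x[k] + b*u[k], y[k] = x[k] + unit-variance noise.
a, b = 0.8, 1.0
levels = (-1.0, 0.0, 1.0)   # admissible square-wave amplitudes per stage
n_stages = 6
x_max = 3.0                 # output amplitude constraint

def fisher_info(u):
    x, dxda, info = 0.0, 0.0, 0.0
    for uk in u:
        dxda = a * dxda + x     # sensitivity recursion: d x[k+1] / d a
        x = a * x + b * uk
        if abs(x) > x_max:      # infeasible: output constraint violated
            return -np.inf
        info += dxda ** 2       # Fisher information accumulates sensitivity^2
    return info

best_u = max(product(levels, repeat=n_stages), key=fisher_info)
print(best_u, fisher_info(best_u))
```

Dynamic programming replaces the exhaustive enumeration with a stagewise recursion over a discretized state, which is what makes the full-scale problem (longer horizons, multiple controls) tractable.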
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a preset tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
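The two-pass logic reads almost like pseudocode and can be transcribed directly. The sketch below is a simplified rendering (a single scalar tolerance, no per-sensor flags persisted between scans) rather than the patented implementation.

```python
import numpy as np

def validate_scan(inputs, tol, last_valid):
    """Two-pass sensor validation: deviation-check each input against the
    average of all inputs, drop suspects, then re-average the good inputs."""
    inputs = np.asarray(inputs, dtype=float)
    # First pass: compare each input (within a preset tolerance) against
    # the initial average; flag outliers as suspect.
    avg1 = inputs.mean()
    good = np.abs(inputs - avg1) <= tol
    if good.sum() >= 2:
        # Second pass: re-average using only the good inputs and re-check.
        avg2 = inputs[good].mean()
        if np.all(np.abs(inputs[good] - avg2) <= tol):
            return avg2          # validated measurement; suspects marked bad
    # Validation fault: fall back to the input closest to the last
    # validated measurement.
    return inputs[np.argmin(np.abs(inputs - last_valid))]

# Three agreeing sensors and one outlier: the outlier is excluded and the
# validated value is the average of the agreeing three.
print(validate_scan([10.0, 10.2, 9.9, 14.0], tol=1.5, last_valid=10.0))
```

Tightening the tolerance until fewer than two inputs survive the first pass exercises the fault branch, which then returns the input nearest the last validated value.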
Accurate Collisional Cross-Sections: Important Non-Lte Input Data
NASA Astrophysics Data System (ADS)
Mashonkina, L.
2010-11-01
Non-LTE modelling for a particular atom requires accurate collisional excitation and ionization cross-sections for the entire system of transitions in the atom. This review is concerned with inelastic collisions with electrons and with neutral hydrogen atoms. For the selected atoms, H i and Ca ii, comparisons are made between electron impact excitation rates from ab initio calculations and various theoretical approximations. The effect of the use of modern data on non-LTE modelling is shown. For most transitions and most atoms, hydrogen collisional rates are calculated using a semi-empirical modification of the classical Thomson formula for ionization by electrons. Approaches used to estimate empirically the efficiency of hydrogenic collisions in the statistical equilibrium of atoms are reviewed. This research was supported by the Deutsche Forschungsgemeinschaft with grant 436 RUS 17/13/07.
Optimal Input Design for Aircraft Parameter Estimation using Dynamic Programming Principles
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1990-01-01
A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.
Optimal input design for aircraft parameter estimation using dynamic programming principles
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Morelli, Eugene A.
1990-01-01
A new technique was developed for designing optimal flight test inputs for aircraft parameter estimation experiments. The principles of dynamic programming were used for the design in the time domain. This approach made it possible to include realistic practical constraints on the input and output variables. A description of the new approach is presented, followed by an example for a multiple input linear model describing the lateral dynamics of a fighter aircraft. The optimal input designs produced by the new technique demonstrated improved quality and expanded capability relative to the conventional multiple input design method.
Reinbolt, Jeffrey A.; Haftka, Raphael T.; Chmielewski, Terese L.; Fregly, Benjamin J.
2013-01-01
Variations in joint parameter values (axis positions and orientations in body segments) and inertial parameter values (segment masses, mass centers, and moments of inertia) as well as kinematic noise alter the results of inverse dynamics analyses of gait. Three-dimensional linkage models with joint constraints have been proposed as one way to minimize the effects of noisy kinematic data. Such models can also be used to perform gait optimizations to predict post-treatment function given pre-treatment gait data. This study evaluates whether accurate patient-specific joint and inertial parameter values are needed in three-dimensional linkage models to produce accurate inverse dynamics results for gait. The study was performed in two stages. First, we used optimization analyses to evaluate whether patient-specific joint and inertial parameter values can be calibrated accurately from noisy kinematic data, and second, we used Monte Carlo analyses to evaluate how errors in joint and inertial parameter values affect inverse dynamics calculations. Both stages were performed using a dynamic, 27 degree-of-freedom, full-body linkage model and synthetic (i.e., computer generated) gait data corresponding to a nominal experimental gait motion. In general, joint but not inertial parameter values could be found accurately from noisy kinematic data. Root-mean-square (RMS) errors were 3° and 4 mm for joint parameter values and 1 kg, 22 mm, and 74,500 kg*mm^2 for inertial parameter values. Furthermore, errors in joint but not inertial parameter values had a significant effect on calculated lower-extremity inverse dynamics joint torques. The worst RMS torque error averaged 4% bodyweight*height (BW*H) due to joint parameter variations but less than 0.25% BW*H due to inertial parameter variations. These results suggest that inverse dynamics analyses of gait utilizing linkage models with joint constraints should calibrate the model’s joint parameter values to obtain accurate joint
Suggestions for CAP-TSD mesh and time-step input parameters
NASA Technical Reports Server (NTRS)
Bland, Samuel R.
1991-01-01
Suggestions for some of the input parameters used in the CAP-TSD (Computational Aeroelasticity Program-Transonic Small Disturbance) computer code are presented. These parameters include those associated with the mesh design and time step. The guidelines are based principally on experience with a one-dimensional model problem used to study wave propagation in the vertical direction.
A simple and accurate resist parameter extraction method for sub-80-nm DRAM patterns
NASA Astrophysics Data System (ADS)
Lee, Sook; Hwang, Chan; Park, Dong-Woon; Kim, In-Sung; Kim, Ho-Chul; Woo, Sang-Gyun; Cho, Han-Ku; Moon, Joo-Tae
2004-05-01
Due to the polarization effect of high-NA lithography, the consideration of resist effects in lithography simulation becomes increasingly important. In spite of the importance of resist simulation, many process engineers are reluctant to consider resist effects in lithography simulation, due to the time-consuming procedure required to extract the resist parameters and the uncertainty in the measurement of some parameters. Weiss suggested a simplified development model which does not require the complex kinetic parameters. For device fabrication engineers, there is a simple and accurate parameter extraction and optimization method using the Weiss model. This method needs the refractive index, Dill's parameters, and development rate monitoring (DRM) data for parameter extraction. The parameters extracted using the referred sequence are not accurate, so we have to optimize the parameters to fit the critical dimension scanning electron microscopy (CD SEM) data of line and space patterns. Hence, FiRM of Sigma-C is utilized as a resist parameter-optimizing program. According to our study, the illumination shape, the aberration, and the pupil mesh points have a large effect on the accuracy of the resist parameters in optimization. To obtain the optimum parameters, we need to find the saturated mesh points in terms of normalized intensity log slope (NILS) prior to optimization. The simulation results using the parameters optimized by this method show good agreement with experiments for iso-dense bias, focus-exposure matrix data, and sub-80-nm device pattern simulation.
NASA Technical Reports Server (NTRS)
Hughes, D. L.; Ray, R. J.; Walton, J. T.
1985-01-01
The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
Measuring accurate body parameters of dressed humans with large-scale motion using a Kinect sensor.
Xu, Huanghao; Yu, Yao; Zhou, Yu; Li, Yang; Du, Sidan
2013-01-01
Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods. PMID:24064597
NASA Astrophysics Data System (ADS)
Faybishenko, B.; McCurley, R. D.; Wang, J. Y.
2004-12-01
To assess, via numerical simulation, the effect of 12 uncertain input parameters (characterizing soil and rock properties and boundary [meteorological] conditions) on net infiltration uncertainty, the Latin Hypercube Sampling (LHS) technique (a modified Monte Carlo approach using a form of stratified sampling) was used. Each uncertain input parameter is presented using a probability distribution function, characterizing the epistemic uncertainty (which arises from the lack of knowledge about parameters, an uncertainty that can be reduced as new information becomes available). One hundred LHS realizations (using the code LHS V2.50 developed at Sandia National Laboratories) of the uncertain input parameters were used to simulate the net infiltration over the Yucca Mountain repository footprint. Simulations were carried out using the code INFIL VA-2.a1 (a modified USGS code INFIL V2.0). The results of simulations were then used to determine the net infiltration probability distribution function. According to theoretical considerations, for 12 uncertain input parameters, from 15 to 36 realizations using the LHS technique should be sufficient to get meaningful results. In this presentation, we will show that the theoretical considerations may significantly underestimate the required number of realizations for the evaluation of the correlation between the net infiltration and uncertain input parameters. We will demonstrate that the calculated net infiltration rate (presented as a probability distribution function) oscillates as a function of simulation runs, and that the correlation between net infiltration rate and the uncertain input parameters depends on the number of simulation runs. For example, the correlation coefficient between the soil (or rock) permeability and net infiltration stabilizes only after 60-80 realizations. The results of the correlation analysis show that the correlation to net infiltration is highest for precipitation, bedrock permeability
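The sampling-and-correlation workflow can be sketched with SciPy's quasi-Monte Carlo module standing in for the Sandia LHS V2.50 code; the parameter bounds and the stand-in "net infiltration" function below are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube Sampling of 12 uncertain input parameters, each mapped to
# its own (here: uniform, illustrative) epistemic uncertainty range.
n_params, n_real = 12, 100
sampler = qmc.LatinHypercube(d=n_params, seed=0)
unit = sampler.random(n=n_real)            # stratified samples in [0, 1)^12

lo = np.full(n_params, 0.1)                # hypothetical lower bounds
hi = np.full(n_params, 2.0)                # hypothetical upper bounds
samples = qmc.scale(unit, lo, hi)          # 100 realizations x 12 parameters

# Stand-in "net infiltration" model: any deterministic function of the
# inputs; here a toy combination so correlations can be demonstrated.
net_infiltration = 2.0 * samples[:, 0] + 0.5 * samples[:, 1] ** 2

# Rank (Spearman-style) correlation between each input and the output,
# as used to identify the dominant uncertain parameters.
ranks_out = np.argsort(np.argsort(net_infiltration))
corr = [np.corrcoef(np.argsort(np.argsort(samples[:, j])), ranks_out)[0, 1]
        for j in range(n_params)]
print(np.argmax(np.abs(corr)))             # parameter 0 dominates in this toy
```

Tracking how `corr` changes as realizations are added, as the abstract describes, is what reveals whether 100 runs are enough for the correlations to stabilize.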
Accurate and transferable extended Hückel-type tight-binding parameters
NASA Astrophysics Data System (ADS)
Cerdá, J.; Soria, F.
2000-03-01
We show how the simple extended Hückel theory can be easily parametrized in order to yield accurate band structures for bulk materials, while the resulting optimized atomic orbital basis sets present good transferability properties. The number of parameters involved is exceedingly small, typically ten or eleven per structural phase. We apply the method to almost fifty elemental and compound bulk phases.
NASA Astrophysics Data System (ADS)
Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.
2016-08-01
Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data, which extracts quantitative contrast reagent/tissue-specific model parameters, is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIFs measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol, and the perfect scaling factor for reconstructing the ground-truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water-exchange-sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both insensitive to AIF scaling. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging
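The AIF-scaling insensitivity of kep can be illustrated directly with the FXL Tofts model (a minimal numerical sketch; the gamma-variate AIF shape and all parameter values are arbitrary stand-ins, not from the study):

```python
import math

def tofts_ct(ktrans, kep, aif, dt):
    """FXL Tofts model: Ct(t) = Ktrans * integral of AIF(tau) * exp(-kep*(t - tau)) dtau,
    evaluated by a simple left-Riemann convolution sum."""
    ct = []
    for n in range(len(aif)):
        s = 0.0
        for m in range(n + 1):
            s += aif[m] * math.exp(-kep * (n - m) * dt) * dt
        ct.append(ktrans * s)
    return ct

dt = 0.1
t = [dt * n for n in range(100)]
aif = [5.0 * x * math.exp(-x) for x in t]   # hypothetical gamma-variate AIF

# rescaling the AIF by s while rescaling Ktrans by 1/s (kep fixed) reproduces
# the identical tissue curve: a fit absorbs AIF scaling into Ktrans (and
# ve = Ktrans/kep) while leaving kep untouched
scale = 1.3
ct_ref = tofts_ct(0.25, 0.5, aif, dt)
ct_scaled = tofts_ct(0.25 / scale, 0.5, [scale * a for a in aif], dt)
assert all(abs(a - b) < 1e-9 for a, b in zip(ct_ref, ct_scaled))
```

The invariance follows from the linearity of the convolution in both Ktrans and the AIF amplitude, which is why only the ratio-type parameters (kep, kio) survive AIF scaling unchanged.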
Input parameters to codes which analyze LMFBR wire-wrapped bundles
Hawley, J.T.; Chan, Y.N.; Todreas, N.E.
1980-12-01
This report provides a current summary of recommended values of key input parameters required by ENERGY code analysis of LMFBR wire-wrapped bundles. These data are based on the interpretation of experimental results from the MIT and other available laboratory programs.
NASA Astrophysics Data System (ADS)
Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.
2015-09-01
Selection or calibration of particle property input parameters is one of the key problematic aspects of implementing the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat-bottom cylindrical container onto a plate. In this case study, particle properties such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays the primary role in determining both the final angle of repose and the FR, followed by the inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross-correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and can then potentially facilitate their selection or calibration. It is concluded that shortening the process of input parameter selection and calibration can help in the implementation of DEM.
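The sweep-and-rank machinery of such a parametric sensitivity study can be sketched as follows (the property levels and the surrogate bulk-response function are invented placeholders; a real study would run a DEM discharge simulation at each design point):

```python
import itertools

# hypothetical DEM input properties and candidate levels (stand-ins)
levels = {
    "static_friction":  [0.1, 0.3, 0.5],
    "rolling_friction": [0.01, 0.05, 0.1],
    "restitution":      [0.3, 0.6, 0.9],
    "youngs_modulus":   [1e6, 1e7, 1e8],
}

def bulk_response(params):
    # placeholder surrogate for "angle of repose"; restitution and Young's
    # modulus deliberately have no effect here, mirroring the paper's finding
    return 20 + 40 * params["static_friction"] + 30 * params["rolling_friction"]

# full-factorial sweep, then rank each input by the spread of level means
runs = [dict(zip(levels, combo)) for combo in itertools.product(*levels.values())]
results = [(p, bulk_response(p)) for p in runs]

def spread(name):
    by_level = {}
    for p, y in results:
        by_level.setdefault(p[name], []).append(y)
    means = [sum(v) / len(v) for v in by_level.values()]
    return max(means) - min(means)

ranking = sorted(levels, key=spread, reverse=True)
assert ranking[0] == "static_friction"   # dominant by construction here
```

Ranking by the spread of per-level means is a one-at-a-time view of a full-factorial design; the multi-level method in the paper refines this, but the bookkeeping is the same.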
Capote, R. , E-Mail: r.capotenoy@iaea.org; Herman, M.; Oblozinsky, P.; Young, P.G.; Goriely, S.; Belgya, T.; Ignatyuk, A.V.; Koning, A.J.; Hilaire, S.; Plujko, V.A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M.B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V.M.; Reffo, G.
2009-12-15
We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input; therefore, the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009 and is available on the Web at http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS), both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes and the electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined over a wide energy range. When experimental data are insufficient, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains
NASA Astrophysics Data System (ADS)
Smith, Z. K.; Steenburgh, R.; Fry, C. D.; Dryer, M.
2009-12-01
Predictions of interplanetary shock arrivals at Earth are important to space weather because they are often followed by geomagnetic disturbances that disrupt human technologies. The success of numerical simulation predictions depends on the codes and on the inputs obtained from solar observations. The inputs are usually divided into the more slowly varying background solar wind, onto which short-duration solar transient events are superposed. This paper examines the dependence of the prediction success on the range of values of the solar transient inputs. These input parameters are common to many 3-D MHD codes. The predictions of the Hakamada-Akasofu-Fry version 2 (HAFv2) model were used because its predictions of shock arrivals were tested, informally in the operational environment, from 1997 to 2006. The events list and HAFv2's performance were published in a series of three papers. The third event set is used to investigate the success and accuracy of the predictions in terms of the input parameter ranges (considered individually). By defining three thresholds for the input speed, duration, and X-ray class, it is possible to categorize the prediction outcomes by these input ranges. The X-ray class gives the most successful classification. Above the highest threshold, 89% of the predictions were successful while below the lowest threshold, only 40% were successful. The accuracy, measured in terms of the time differences between the observed and predicted shock arrivals, also shows largest improvement for the X-ray class. Guidelines are presented for space weather forecasters using the HAFv2 or other interplanetary simulation models.
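The threshold-based categorization of prediction outcomes can be sketched like this (the event records are invented; the real analysis used the published HAFv2 shock-arrival lists, and flare subclass numbers are ignored here for simplicity):

```python
# hypothetical (event, GOES X-ray class, prediction successful) records
events = [("e1", "X2.0", True), ("e2", "M5.1", True), ("e3", "C3.2", False),
          ("e4", "X1.1", True), ("e5", "B9.0", False), ("e6", "M1.0", True)]

ORDER = "ABCMX"   # GOES class letters in increasing peak-flux order

def at_or_above(cls, threshold):
    """Compare flare classes by letter only (subclass digits ignored)."""
    return ORDER.index(cls[0]) >= ORDER.index(threshold)

def success_rate(threshold):
    hits = [ok for _, cls, ok in events if at_or_above(cls, threshold)]
    return sum(hits) / len(hits)

# success rate improves as the flare-class threshold rises, mirroring the
# reported split of 89% above the highest threshold vs 40% below the lowest
assert success_rate("X") >= success_rate("M") >= success_rate("C")
```

Binning by each input parameter's thresholds in turn, and comparing success rates per bin, is all that is needed to reproduce the style of analysis described.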
Bubbico, Roberto; Mazzarotta, Barbara
2008-03-01
In the present paper, the accidental release of toxic chemicals is taken into consideration, and a sensitivity analysis of the corresponding consequence calculations has been carried out. Four different toxic chemicals were chosen for the simulations, and the effect of the variability of the main input parameters on the extent of the impact areas was assessed. The results show that the influence of these parameters depends on the physical properties of the released substance and that the widely known rules of thumb, such as the positive influence of wind velocity on gas dispersion, do not always apply. In particular, the boiling temperature of the chemical turned out to be the main parameter determining how the impact distances depend on the input variables. PMID:17630190
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
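For the special case of a vertical (90-degree dip) fault, one of the geometric distance relations reduces to a one-liner (a simplified sketch; the general dipping-fault relations derived in the paper involve several more cases):

```python
import math

def rrup_vertical_fault(rjb, ztor):
    """Rupture distance (Rrup) from Joyner-Boore distance (Rjb) for a
    vertical fault whose top is buried at depth Ztor:
        Rrup = sqrt(Rjb**2 + Ztor**2)
    Non-vertical dips require the fuller geometric relations in the paper."""
    return math.sqrt(rjb ** 2 + ztor ** 2)

# site 10 km (Rjb) from the surface projection of a rupture whose top
# is buried 3 km deep
rrup = rrup_vertical_fault(10.0, 3.0)
assert abs(rrup - math.sqrt(109.0)) < 1e-12
```

Relations like this let a user of the NGA models fill in a missing distance measure (here Rrup) from the one that is actually known, which is the paper's central purpose.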
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
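The integrate-the-equation-error idea can be illustrated on a first-order model (a deliberately simple stand-in for the paper's model class; integrating over windows removes the need to differentiate the data, though this toy version still uses endpoint values rather than the paper's technique for eliminating boundary conditions):

```python
import math

# simulated data from x' = a*x + b*u with a = -2, b = 1, u(t) = 1, x(0) = 0
a_true, b_true = -2.0, 1.0
dt, n = 0.01, 500
t = [dt * k for k in range(n + 1)]
x = [(b_true / -a_true) * (1 - math.exp(a_true * tk)) for tk in t]  # exact solution
u = [1.0] * (n + 1)

def trapz(y, i, j):
    """Trapezoidal integral of sampled y over index window [i, j]."""
    return sum((y[k] + y[k + 1]) * dt / 2 for k in range(i, j))

# integrating the model over two windows turns x' = a*x + b*u into algebraic
# equations in (a, b):  x(tj) - x(ti) = a * int(x) + b * int(u)
# -- no numerical differentiation of the data is needed
rows, rhs = [], []
for i, j in [(0, 250), (250, 500)]:
    rows.append((trapz(x, i, j), trapz(u, i, j)))
    rhs.append(x[j] - x[i])

# solve the resulting 2x2 linear system explicitly
(A1, B1), (A2, B2) = rows
det = A1 * B2 - A2 * B1
a_est = (rhs[0] * B2 - rhs[1] * B1) / det
b_est = (A1 * rhs[1] - A2 * rhs[0]) / det
assert abs(a_est - a_true) < 0.05 and abs(b_est - b_true) < 0.05
```

With more windows than parameters, the same construction yields an overdetermined system solved by explicit least squares, which is the estimation setting the paper works in.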
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model
Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y
2011-10-27
Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.
NASA Astrophysics Data System (ADS)
Peng, Liang-You; Gong, Qihuang
2010-12-01
The accurate computation of hydrogenic continuum wave functions is very important in many branches of physics, such as electron-atom collisions, cold atom physics, and atomic ionization in strong laser fields. Although various algorithms and codes already exist, most of them are reliable only within certain ranges of parameters. In some practical applications, accurate continuum wave functions need to be calculated at extremely low energies, large radial distances and/or large angular momentum numbers. Here we provide such a code, which can generate accurate hydrogenic continuum wave functions and the corresponding Coulomb phase shifts over a wide range of parameters. With no essential restriction on the angular momentum number, the present code is able to give reliable results over the electron energy range [10,10] eV for radial distances of [10,10] a.u. We also find the present code to be very efficient, and it should find numerous applications in fields such as strong-field physics.
Program summary
Program title: HContinuumGautchi
Catalogue identifier: AEHD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1233
No. of bytes in distributed program, including test data, etc.: 7405
Distribution format: tar.gz
Programming language: Fortran90 in fixed format
Computer: AMD Processors
Operating system: Linux
RAM: 20 MBytes
Classification: 2.7, 4.5
Nature of problem: The accurate computation of atomic continuum wave functions is very important in many research fields such as strong field physics and cold atom physics. Although various algorithms and codes already exist, most of them are applicable and reliable only in certain ranges of parameters. We present here an accurate FORTRAN program for
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2016-01-01
Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but it can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than the CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
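The CJV metric itself is a one-line computation (the toy intensity samples below are invented, shown only to illustrate the definition):

```python
import statistics

def cjv(wm, gm):
    """Coefficient of joint variation between white-matter and gray-matter
    intensities: (sigma_WM + sigma_GM) / |mu_WM - mu_GM|.  Lower is better:
    residual non-uniformity inflates the within-tissue standard deviations
    relative to the WM/GM contrast."""
    return ((statistics.stdev(wm) + statistics.stdev(gm))
            / abs(statistics.mean(wm) - statistics.mean(gm)))

# a good INU correction tightens each tissue's distribution, lowering the CJV
wm_good, gm_good = [98, 100, 102, 101, 99], [58, 60, 62, 61, 59]
wm_bad,  gm_bad  = [80, 100, 120, 110, 90], [40, 60, 80, 70, 50]
assert cjv(wm_good, gm_good) < cjv(wm_bad, gm_bad)
```

In the paper's procedure the WM and GM samples come from automatically computed tissue masks, and candidate INU correction parameter settings are ranked by the CJV they produce.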
Impacts of input parameter spatial aggregation on an agricultural nonpoint source pollution model
NASA Astrophysics Data System (ADS)
FitzHugh, T. W.; Mackay, D. S.
2000-09-01
The accuracy of agricultural nonpoint source pollution models depends in part on how well model input parameters describe the relevant characteristics of the watershed. The spatial extent of input parameter aggregation has previously been shown to have a substantial impact on model output. This study investigates this problem using the Soil and Water Assessment Tool (SWAT), a distributed-parameter agricultural nonpoint source pollution model. The primary question addressed here is: how does the size or number of subwatersheds used to partition the watershed affect model output, and what are the processes responsible for model behavior? SWAT was run on the Pheasant Branch watershed in Dane County, WI, using eight watershed delineations, each with a different number of subwatersheds. Model runs were conducted for the period 1990-1996. Streamflow and outlet sediment predictions were not seriously affected by changes in subwatershed size. The lack of change in outlet sediment is due to the transport-limited nature of the Pheasant Branch watershed and the stable transport capacity of the lower part of the channel network. This research identifies the importance of channel parameters in determining the behavior of SWAT's outlet sediment predictions. Sediment generation estimates do change substantially, dropping by 44% between the coarsest and the finest watershed delineations. This change is primarily due to the sensitivity of the runoff term in the Modified Universal Soil Loss Equation to the area of hydrologic response units (HRUs). This sensitivity likely occurs because SWAT was implemented in this study with a very detailed set of HRUs. In order to provide some insight on the scaling behavior of the model two indexes were derived using the mathematics of the model. The indexes predicted SWAT scaling behavior from the data inputs without a need for running the model. Such indexes could be useful for model users by providing a direct way to evaluate alternative models
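The sensitivity to HRU area comes from the 0.56 exponent on the runoff-area term in MUSLE; a minimal sketch follows (the erosion-factor values are arbitrary, and Q_surf and q_peak are held fixed here even though in SWAT they also vary with the delineation):

```python
def musle_sediment(q_surf, q_peak, area_ha, k=0.2, c=0.1, p=1.0, ls=1.2):
    """MUSLE-style sediment yield (sketch):
        sed = 11.8 * (Q_surf * q_peak * A)**0.56 * K * C * P * LS
    The erosion factors K, C, P, LS here are arbitrary illustrative values."""
    return 11.8 * (q_surf * q_peak * area_ha) ** 0.56 * k * c * p * ls

# same field, same per-area runoff, modeled as one HRU vs. two half-size HRUs
whole = musle_sediment(10.0, 2.0, 100.0)
split = 2 * musle_sediment(10.0, 2.0, 50.0)

# the 0.56 exponent makes the runoff-area term non-additive, so the HRU
# delineation level alone changes the predicted sediment total
assert abs(split / whole - 2 ** (1 - 0.56)) < 1e-6
```

Because the exponent is below one, sediment yield is not additive over HRUs, which is exactly why the study's sediment-generation estimates shift with the delineation while quantities governed by (near-)linear terms do not.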
Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters
NASA Astrophysics Data System (ADS)
Falkenberg, T. V.; Vršnak, B.; Taktakishvili, A.; Odstrcil, D.; MacNeice, P.; Hesse, M.
2010-06-01
Understanding space weather is important not only for satellite operations and human exploration of the solar system but also here on Earth, where space weather phenomena may disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections (CMEs), but in order to predict their effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output come from the input parameters of the upper limit for ambient solar wind velocity, CME density, and elongation factor, regardless of whether one's main interest is arrival time, signal shape, or signal amplitude of the ICME. We find that though ENLILv2.5b currently does not include the magnetic cloud of the ICME, it replicates the signal at L1 well in the studied event. The arrival time difference between satellite data and the ENLILv2.5b baseline run of this study is less than 30 min.
Najafizadeh, Laleh; Gandjbakhche, Amir H.; Pourrezaei, Kambiz; Daryoush, Afshin
2013-01-01
Modeling behavior of broadband (30 to 1000 MHz) frequency modulated near-infrared (NIR) photons through a phantom is the basis for accurate extraction of optical absorption and scattering parameters of biological turbid media. Photon dynamics in a phantom are predicted using both analytical and numerical simulation and are related to the measured insertion loss (IL) and insertion phase (IP) for a given geometry based on phantom optical parameters. Accuracy of the extracted optical parameters using finite element method (FEM) simulation is compared to baseline analytical calculations from the diffusion equation (DE) for homogenous brain phantoms. NIR spectroscopy is performed using custom-designed, broadband, free-space optical transmitter (Tx) and receiver (Rx) modules that are developed for photon migration at wavelengths of 680, 780, and 820 nm. Differential detection between two optical Rx locations separated by 0.3 cm is employed to eliminate systemic artifacts associated with interfaces of the optical Tx and Rx with the phantoms. Optical parameter extraction is achieved for four solid phantom samples using the least-square-error method in MATLAB (for DE) and COMSOL (for FEM) simulation by fitting data to measured results over broadband and narrowband frequency modulation. Confidence in numerical modeling of the photonic behavior using FEM has been established here by comparing the transmission mode’s experimental results with the predictions made by DE and FEM for known commercial solid brain phantoms. PMID:23322361
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite's sensor and ground objects is one of the most common causes of remote sensing image degradation, and it seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated first for image restoration, and accurately identifying the motion blur direction and length is crucial for constructing the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters by using the Radon transform. However, the serious noise present in actual remote sensing images often renders the stripes indistinct, making the parameters difficult to calculate and the results relatively inaccurate. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectrum characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the Moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
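The length-estimation step rests on the fact that a linear blur of length L produces spectral zeros (dark stripes) at multiples of N/L; a minimal noise-free 1-D sketch (the real method must first recover the stripes from a noisy 2-D spectrum):

```python
import cmath

N, L = 64, 8                            # spectrum width and true blur length (pixels)
psf = [1.0 / L] * L + [0.0] * (N - L)   # 1-D horizontal boxcar motion-blur kernel

def dft_mag(x, k):
    """Magnitude of the k-th DFT coefficient (naive O(N) evaluation)."""
    return abs(sum(v * cmath.exp(-2j * cmath.pi * k * n / len(x))
                   for n, v in enumerate(x)))

# the boxcar blur's transfer function has exact zeros at multiples of N/L;
# locating the first dark stripe therefore yields the blur length
mags = [dft_mag(psf, k) for k in range(N // 2)]
first_zero = next(k for k in range(1, N // 2) if mags[k] < 1e-9)
est_length = N // first_zero
assert est_length == L
```

In the noisy 2-D case the same zero-spacing relation is read off along the stripe-normal direction found by the Radon transform, and averaging over whole columns (as the paper does) suppresses the random error in locating each zero.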
Zhang, Xuesong; Liang, Faming; Yu, Beibei; Zong, Ziliang
2011-11-09
Estimating the uncertainty of hydrologic forecasting is valuable to water resources management and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, the number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and the inclusion of output error in the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.
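The idea of sampling a rainfall multiplier by MCMC can be sketched with a toy Metropolis sampler (all model details below are illustrative assumptions, not the authors' BNN setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'true' system: observed runoff is a multiple of recorded rainfall.
true_multiplier = 1.2
rain = rng.uniform(0.0, 10.0, size=50)                    # recorded rainfall
runoff = true_multiplier * rain + rng.normal(0.0, 0.5, size=50)

def log_likelihood(m, sigma=0.5):
    resid = runoff - m * rain
    return -0.5 * np.sum(resid**2) / sigma**2

# Metropolis sampler over the rainfall multiplier m (flat prior).
m_current, ll_current = 1.0, log_likelihood(1.0)
samples = []
for _ in range(5000):
    m_prop = m_current + rng.normal(0.0, 0.02)            # random-walk proposal
    ll_prop = log_likelihood(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll_current:      # accept/reject
        m_current, ll_current = m_prop, ll_prop
    samples.append(m_current)

posterior = np.array(samples[1000:])                      # discard burn-in
print(posterior.mean())                                   # close to 1.2
```

The paper's framework extends this idea to jointly sample network connections, weights, and per-event multipliers.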
Level density inputs in nuclear reaction codes and the role of the spin cutoff parameter
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Burger, A.; Gorgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.
2014-09-03
Here, the proton spectrum from the ⁵⁷Fe(α,p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, and using the spin cutoff parameter with a much weaker excitation energy dependence than predicted by the Fermi-gas model.
MING Parameter Input: EMMA Model Redox Half-Reaction Equation ΔG Corrections for pH
D.M. Jolley
1998-07-23
The purpose of this calculation is to provide appropriate input parameters for use in MING V 1.0 (CSCI 300 18 V 1.0). This calculation corrects the Grogan and McKinley (1990) values for ΔG so that the data will function in the MING model. The Grogan and McKinley (1990) ΔG data are presented for a pH of 12, whereas the MING model requires that the ΔG be reported at standard conditions (i.e., a pH of 0).
State, Parameter, and Unknown Input Estimation Problems in Active Automotive Safety Applications
NASA Astrophysics Data System (ADS)
Phanomchoeng, Gridsada
A variety of driver assistance systems, such as traction control, electronic stability control (ESC), rollover prevention, and lane departure avoidance systems, are being developed by automotive manufacturers to reduce driver burden, partially automate normal driving operations, and reduce accidents. The effectiveness of these driver assistance systems can be significantly enhanced if the real-time values of several vehicle parameters and state variables, namely the tire-road friction coefficient, slip angle, roll angle, and rollover index, are known. Since there are no inexpensive sensors available to measure these variables, it is necessary to estimate them. However, due to the significant nonlinear dynamics in a vehicle, unknown and changing plant parameters, and the presence of unknown input disturbances, the design of estimation algorithms for this application is challenging. This dissertation develops a new approach to observer design for nonlinear systems in which the nonlinearity has a globally (or locally) bounded Jacobian. The developed approach utilizes a modified version of the mean value theorem to express the nonlinearity in the estimation error dynamics as a convex combination of known matrices with time-varying coefficients. The observer gains are then obtained by solving linear matrix inequalities (LMIs). A number of illustrative examples are presented to show that the developed approach is less conservative and more useful than the standard nonlinear observer based on the Lipschitz assumption. The developed nonlinear observer is utilized for estimation of the slip angle, longitudinal vehicle velocity, and vehicle roll angle. In order to predict and prevent vehicle rollovers in tripped situations, it is necessary to estimate the vertical tire forces in the presence of unknown road disturbance inputs. An approach to estimate unknown disturbance inputs in nonlinear systems using dynamic model inversion and a modified version of the mean value theorem is also developed.
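The flavor of such observers can be conveyed by a minimal simulation: a simple Luenberger-style observer for a scalar system whose nonlinearity has a globally bounded Jacobian. This is only an illustration of the setting, with an ad hoc gain rather than the dissertation's LMI-based design.

```python
import numpy as np

# Plant: x' = -x + 0.5*sin(x) + u, measurement y = x.
# The nonlinearity sin(x) has a globally bounded Jacobian (|cos(x)| <= 1),
# which is the structural property such observer designs exploit.
def f(x, u):
    return -x + 0.5 * np.sin(x) + u

dt, steps = 1e-3, 5000
L_gain = 5.0                       # observer gain, chosen ad hoc here

x, x_hat = 2.0, 0.0                # true state and (wrong) initial estimate
for k in range(steps):
    u = np.sin(2 * np.pi * k * dt) # known input
    y = x                          # measurement
    # Euler integration of plant and observer; the output-error injection
    # L_gain * (y - x_hat) drives the estimation error to zero.
    x     += dt * f(x, u)
    x_hat += dt * (f(x_hat, u) + L_gain * (y - x_hat))

print(abs(x - x_hat))              # estimation error after 5 s: tiny
```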
Level Density Inputs in Nuclear Reaction Codes and the Role of the Spin Cutoff Parameter
NASA Astrophysics Data System (ADS)
Voinov, A. V.; Grimes, S. M.; Brune, C. R.; Bürger, A.; Görgen, A.; Guttormsen, M.; Larsen, A. C.; Massey, T. N.; Siem, S.
2014-05-01
The proton spectrum from the 57Fe(α, p) reaction has been measured and analyzed with the Hauser-Feshbach model of nuclear reactions. Different input level density models have been tested. It was found that the best description is achieved with either Fermi-gas or constant temperature model functions obtained by fitting them to neutron resonance spacings and to discrete levels, and using the spin cutoff parameter with a much weaker excitation energy dependence than predicted by the Fermi-gas model.
NASA Astrophysics Data System (ADS)
Unal, B.; Askan, A.
2014-12-01
Earthquakes are among the most destructive natural disasters in Turkey, and it is important to assess seismicity in different regions with the use of seismic networks. Bursa is located in the Marmara Region of Northwestern Turkey, to the south of the very active North Anatolian Fault Zone. With around three million inhabitants and key industrial facilities of the country, Bursa is the fourth largest city in Turkey. Since most of the focus is on the North Anatolian Fault Zone, despite its significant seismicity the Bursa area had not been investigated extensively until recently. For reliable seismic hazard estimations and seismic design of structures, assessment of potential ground motions in this region is essential, using both recorded and simulated data. In this study, we employ stochastic finite-fault simulation with the dynamic corner frequency approach to model previous events as well as to assess potential earthquakes in Bursa. To ensure simulations with reliable synthetic ground motion outputs, the input parameters must be carefully derived from regional data. In this study, using strong motion data collected at 33 stations in the region, site-specific parameters such as the near-surface high-frequency attenuation parameter and site amplifications are obtained. Similarly, source and path parameters are adopted from previous studies that also employ regional data. Initially, major previous events in the region are verified by comparing the records with the corresponding synthetics. Then simulations of scenario events in the region are performed. We present the results in terms of the spatial distribution of peak ground motion parameters and time histories at selected locations.
NASA Astrophysics Data System (ADS)
Lorite, I. J.; Mateos, L.; Fereres, E.
2005-01-01
Summary: The simulations of dynamic, spatially distributed non-linear models are impacted by the degree of spatial and temporal aggregation of their input parameters and variables. This paper deals with the impact of these aggregations on the assessment of irrigation scheme performance by simulating water use and crop yield. The analysis was carried out on a 7000 ha irrigation scheme located in Southern Spain. Four irrigation seasons differing in rainfall patterns were simulated (from 1996/1997 to 1999/2000), with the actual soil parameters and with hypothetical soil parameters representing wider ranges of soil variability. Three spatial aggregation levels were considered: (I) individual parcels (about 800), (II) command areas (83), and (III) the whole irrigation scheme. Equally, five temporal aggregation levels were defined: daily, weekly, monthly, quarterly and annually. The results showed little impact of spatial aggregation on the predictions of irrigation requirements and of crop yield for the scheme. The impact of aggregation was greater in rainy years, for deep-rooted crops (sunflower) and in scenarios with heterogeneous soils. The highest impact on irrigation requirement estimations was in the scenario with the most heterogeneous soil and in 1999/2000, a year with frequent rainfall during the irrigation season: a difference of 7% between aggregation levels I and III was found. Equally, it was found that temporal aggregation had a significant impact on irrigation requirement predictions only for time steps longer than 4 months. In general, simulated annual irrigation requirements decreased as the time step increased. The impact was greater in rainy years (especially with abundant and concentrated rain events) and for crops whose cycles partly coincide with the rainy season (garlic, winter cereals and olive). It is concluded that in this case, average, representative values for the main inputs of the model (crop, soil properties and sowing dates) can generate results
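The direction of the temporal-aggregation effect — coarser time steps lowering estimated irrigation requirements — can be seen in a minimal water-balance sketch (hypothetical daily evapotranspiration and rainfall values, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
days = 120
et   = np.full(days, 5.0)                                # crop water demand, mm/day
rain = rng.choice([0.0, 20.0], size=days, p=[0.8, 0.2])  # sporadic storms, mm/day

# Daily accounting: rain exceeding a day's demand is (in this sketch) lost,
# so irrigation must cover each day's unmet demand separately.
daily_requirement = np.sum(np.maximum(et - rain, 0.0))

# Seasonal (fully aggregated) accounting: all rain is credited against
# total demand regardless of when it fell.
seasonal_requirement = max(et.sum() - rain.sum(), 0.0)

print(daily_requirement, seasonal_requirement)
# Since sum(max(a, 0)) >= max(sum(a), 0), the aggregated estimate
# is never larger than the daily one.
```

Real models add soil moisture storage, which buffers the effect — consistent with the paper finding a significant impact only for time steps longer than about 4 months.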
Comparisons of CAP88PC version 2.0 default parameters to site specific inputs
Lehto, M. A.; Courtney, J. C.; Charter, N.; Egan, T.
2000-03-02
The effects of varying the input for the CAP88PC Version 2.0 program on the total effective dose equivalents (TEDEs) were determined for hypothetical releases from the Hot Fuel Examination Facility (HFEF) located at the Argonne National Laboratory site on the Idaho National Engineering and Environmental Laboratory (INEEL). Values for site-specific meteorological conditions and agricultural production parameters were determined for the 80 km radius surrounding the HFEF. Four nuclides, ³H, ⁸⁵Kr, ¹²⁹I, and ¹³⁷Cs (with its short-lived progeny, ¹³⁷ᵐBa), were selected for this study; these are the radioactive materials most likely to be released from HFEF under normal or abnormal operating conditions. Use of site-specific meteorological parameters of annual precipitation, average temperature, and the height of the inversion layer decreased the TEDE from ¹³⁷Cs-¹³⁷ᵐBa by up to 36%; reductions for other nuclides were less than 3%. Use of the site-specific agricultural parameters reduced TEDE values between 7% and 49%, depending on the nuclide. Reductions are associated with decreased committed effective dose equivalents (CEDEs) from the ingestion pathway. This is not surprising since the HFEF is located well within the INEEL exclusion area, and the surrounding area closest to the release point is a high desert with limited agricultural diversity. Livestock and milk production are important in some counties at distances greater than 30 km from the HFEF.
Parameter estimates in dynamic models for PUB - influence of input data quality and scale
NASA Astrophysics Data System (ADS)
Arheimer, Berit; Dahné, Joel; Donnelly, Chantal; Strömqvist, Johan
2010-05-01
The Swedish Meteorological and Hydrological Institute (SMHI) produces hydrological predictions in ungauged basins, of both water quantity and quality, at different scales and using different input databases. This presentation will demonstrate two such model set-ups and the difference in the estimated parameter values of the Hydrological Predictions for the Environment (HYPE) model. The model results are compared, and validation at independent sites is used to show the implications for PUB. The HYPE model is calibrated stepwise for a whole domain when applied, using a hydrological response units concept with interactive checks between hydrology and hydrochemistry for soil/groundwater and rivers/lakes, respectively. Relatively few monitoring sites are needed to achieve reasonable results for the whole domain. The national S-HYPE model system (450 000 km2) produces predictions in 17 313 subbasins, where observations of water discharge are available at 300 outlets and nutrient concentrations at 600 outlets. About 10% of these were used for model calibration and the rest for independent model validation, considered to represent ungauged conditions. When applying the model for the whole Baltic Sea basin (1 700 000 km2), predictions are made in 5 100 subbasins. Observations are then available for water discharge in 160 unregulated river reaches and for nutrients in 761 subbasin outlets. About half of the water stations were used for calibration and 10% of the nutrient observations. Model performance is calculated using different evaluation criteria at independent sites. The differences in model performance between the national (S-HYPE) and the Baltic Sea basin (Balt-HYPE) scale applications can be attributed to either differences in model inputs or differences in calibration. In the Swedish application, more detailed input data on physiography, emissions and meteorology have been used for the higher resolution, while generally available databases and generic methods have been used for the Baltic Sea basin application.
NASA Astrophysics Data System (ADS)
Lachaume, Regis; Rabus, Markus; Jordan, Andres
2015-08-01
In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers a means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are correlated by construction, though. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and there is no generic implementation available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars to use, and which errors to draw on their diameters, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters f(P) = p(m(P)) by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
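The core resampling idea can be sketched in a few lines — a generic bootstrap over a hypothetical set of calibrated observables, not the authors' PIONIER pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical squared visibilities from n_frames interferograms,
# all sharing one calibrator transfer-function realization (hence correlated).
n_frames = 200
transfer = 1.0 + 0.05 * rng.normal()              # calibrator systematic
v2 = 0.7 * transfer + 0.02 * rng.normal(size=n_frames)

n_boot = 2000
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Resample interferograms with replacement...
    idx = rng.integers(0, n_frames, size=n_frames)
    # ...and redraw the calibrator (diameter) error each time,
    # then redo the 'data processing' (here: calibrate and average).
    t_b = 1.0 + 0.05 * rng.normal()
    boot_means[b] = np.mean(v2[idx]) / t_b

# boot_means samples p(O) without assuming Gaussian, independent errors.
print(boot_means.mean(), boot_means.std())
```

The spread of `boot_means` reflects both the frame-to-frame scatter and the shared calibration error, which a naive independent-Gaussian treatment would miss.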
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of a linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.
Accurate parameters for HD 209458 and its planet from HST spectrophotometry
NASA Astrophysics Data System (ADS)
del Burgo, C.; Allende Prieto, C.
2016-08-01
We present updated parameters for the star HD 209458 and its transiting giant planet. The stellar angular diameter θ=0.2254±0.0017 mas is obtained from the average ratio between the absolute flux observed with the Hubble Space Telescope and that of the best-fitting Kurucz model atmosphere. This angular diameter represents a more than fourfold improvement in precision over available interferometric determinations. The stellar radius R⋆=1.20±0.05 R⊙ is ascertained by combining the angular diameter with the Hipparcos trigonometric parallax, which is the main contributor to its uncertainty; the radius accuracy should therefore be significantly improved with Gaia's measurements. The radius of the exoplanet Rp=1.41±0.06 RJ is derived from the corresponding transit depth in the light curve and our stellar radius. From the model fitting, we accurately determine the effective temperature, Teff=6071±20 K, which is in perfect agreement with the value of 6070±24 K calculated from the angular diameter and the integrated spectral energy distribution. We also find precise values from recent Padova isochrones, such as R⋆=1.20±0.06 R⊙ and Teff=6099±41 K. We arrive at a consistent picture from these methods and compare the results with those from the literature.
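The radius determination combines the angular diameter with the parallax in a one-line computation, sketched here (the parallax value is approximate and used only for illustration):

```python
import math

# Inputs: angular diameter (from the HST flux ratio) and a trigonometric
# parallax; the parallax value below is an approximate, illustrative number.
theta_mas    = 0.2254      # stellar angular diameter, milliarcseconds
parallax_mas = 20.2        # approximate parallax, milliarcseconds

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # mas -> radians
PC_IN_M    = 3.0857e16                       # one parsec in metres
R_SUN_M    = 6.957e8                         # solar radius in metres

distance_m = (1000.0 / parallax_mas) * PC_IN_M   # d [pc] = 1 / parallax ["]
radius_m   = 0.5 * theta_mas * MAS_TO_RAD * distance_m

print(radius_m / R_SUN_M)  # about 1.2 solar radii
```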
Covey, Curt; Lucas, Donald D.; Tannahill, John; Garaizar, Xabier; Klein, Richard
2013-07-01
Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
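A minimal from-scratch version of one-at-a-time elementary effects — the essence of Morris screening — looks like the following sketch (the full Morris design chains the perturbations into trajectories to save model evaluations; packages such as SALib provide complete implementations):

```python
import numpy as np

def morris_mu_star(f, k, r=20, delta=0.25, seed=0):
    """Morris-style screening: mean absolute elementary effect per input.

    f     : model, maps a point in [0, 1]^k to a scalar output
    k     : number of input parameters
    r     : number of random base points (the cost grows linearly with k)
    delta : one-at-a-time perturbation step
    """
    rng = np.random.default_rng(seed)
    effects = np.zeros((r, k))
    for j in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # random point in the volume
        fx = f(x)
        for i in range(k):                         # perturb one input at a time
            x_pert = x.copy()
            x_pert[i] += delta
            effects[j, i] = (f(x_pert) - fx) / delta
    return np.abs(effects).mean(axis=0)            # the mu* sensitivity measure

# Toy model: output depends strongly on x0, weakly on x1, not at all on x2.
model = lambda x: 2.0 * x[0] + 0.1 * x[1]
print(morris_mu_star(model, k=3))   # roughly [2.0, 0.1, 0.0]
```

Unlike fixed-base EOAT, the random base points sample much of the volume, so nonlinear interactions show up as spread in the elementary effects.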
NASA Astrophysics Data System (ADS)
Kaiser, Andreas; Buchholz, Arno; Neugirg, Fabian; Schindewolf, Marcus
2016-04-01
Calanchi landscapes in central Italy have been subject to geoscientific research for many years, especially for questions regarding soil erosion and land degradation. Seasonal dynamics play an important role in morphological processes within the Calanchi. As in most Mediterranean landscapes, at the research site in Val d'Orcia long, dry summers end with heavy rainfall events in autumn. The latter contribute most of the annual sediment output of the incised hollows and can cause damage to agricultural land and infrastructure. While research for understanding Calanco development is of high importance, the complex morphology and thus limited accessibility impede in situ work. To improve the understanding of morphodynamics without unnecessarily disturbing natural conditions, a remote sensing and erosion modelling approach was carried out in the presented work. UAV- and LiDAR-based very high resolution digital surface models were produced and served as input for the raster-based, physically based soil erosion model EROSION3D. Additionally, data on infiltration, runoff generation and sediment detachment were generated with artificial rainfall simulations, the most invasive but unavoidable method. To virtually increase the 1 m plot length to around 20 m, the sediment-loaded runoff water was reintroduced to the plot by a reflux system. Rather elaborate logistics were required to set up the simulator on strongly inclined slopes, to establish sufficient water supply and to secure the simulator on the slope, but the experiments produced plausible results and valuable input data for modelling. The model results are then compared to the repeated UAV and LiDAR campaigns and the resulting digital elevation models of difference. By simulating different rainfall and moisture scenarios and incorporating in situ measured weather data, runoff-induced processes can be distinguished from gravitational slides and rockfall.
Ralph, Duncan K.; Matsen, Frederick A.
2016-01-01
VDJ rearrangement and somatic hypermutation work together to produce antibody-coding B cell receptor (BCR) sequences for a remarkable diversity of antigens. It is now possible to sequence these BCRs in high throughput; analysis of these sequences is bringing new insight into how antibodies develop, in particular for broadly-neutralizing antibodies against HIV and influenza. A fundamental step in such sequence analysis is to annotate each base as coming from a specific one of the V, D, or J genes, or from an N-addition (a.k.a. non-templated insertion). Previous work has used simple parametric distributions to model transitions from state to state in a hidden Markov model (HMM) of VDJ recombination, and assumed that mutations occur via the same process across sites. However, codon frame and other effects have been observed to violate these parametric assumptions for such coding sequences, suggesting that a non-parametric approach to modeling the recombination process could be useful. In our paper, we find that indeed large modern data sets suggest a model using parameter-rich per-allele categorical distributions for HMM transition probabilities and per-allele-per-position mutation probabilities, and that using such a model for inference leads to significantly improved results. We present an accurate and efficient BCR sequence annotation software package using a novel HMM “factorization” strategy. This package, called partis (https://github.com/psathyrella/partis/), is built on a new general-purpose HMM compiler that can perform efficient inference given a simple text description of an HMM. PMID:26751373
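The annotation step can be pictured with a tiny Viterbi decoder over a three-state (V, N, J) toy HMM. The probabilities below are purely illustrative — partis uses parameter-rich per-allele categorical distributions rather than anything this simple:

```python
import numpy as np

states = ["V", "N", "J"]
# Toy per-state emission probabilities over the four bases.
emit = {
    "V": {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},
    "N": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},  # untemplated insertion
    "J": {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},
}
# Left-to-right transitions: V -> N -> J, no going back.
trans = {
    "V": {"V": 0.9, "N": 0.1, "J": 0.0},
    "N": {"V": 0.0, "N": 0.5, "J": 0.5},
    "J": {"V": 0.0, "N": 0.0, "J": 1.0},
}
start = {"V": 1.0, "N": 0.0, "J": 0.0}

def viterbi(seq):
    """Most likely state path (computed in log space) for a base sequence."""
    logp = {s: np.log(start[s] + 1e-300) + np.log(emit[s][seq[0]]) for s in states}
    back = []
    for base in seq[1:]:
        new_logp, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: logp[p] + np.log(trans[p][s] + 1e-300))
            new_logp[s] = (logp[best_prev] + np.log(trans[best_prev][s] + 1e-300)
                           + np.log(emit[s][base]))
            ptr[s] = best_prev
        logp, back = new_logp, back + [ptr]
    # Trace back from the best final state.
    path = [max(states, key=lambda s: logp[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return "".join(reversed(path))

print(viterbi("AAAACGTTTT"))  # -> VVVVNNJJJJ
```

Each base gets labeled with the gene segment (or N-insertion) it most plausibly came from, which is exactly the per-base annotation task the paper addresses at scale.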
Accurate analytical method for the extraction of solar cell model parameters
NASA Astrophysics Data System (ADS)
Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.
1984-05-01
Single diode solar cell model parameters are rapidly extracted from experimental data by means of the analytical expressions derived here. The parameter values obtained have less than 5 percent error for most solar cells, as demonstrated by extracting the model parameters of two cells of differing quality and comparing them with parameters extracted by means of the iterative method.
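For context, the single-diode model being fitted can be sketched as follows (generic, illustrative parameter values rather than a specific cell; the paper's analytical extraction expressions themselves are not reproduced here):

```python
import numpy as np
from scipy.optimize import brentq

# Single-diode solar cell model:
#   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# Iph: photocurrent, I0: saturation current, n: ideality factor,
# Vt: thermal voltage, Rs/Rsh: series/shunt resistance (illustrative values).
Iph, I0, n, Vt, Rs, Rsh = 3.0, 1e-9, 1.5, 0.0259, 0.01, 100.0

def current(V):
    """Solve the implicit single-diode equation for the current at voltage V."""
    f = lambda I: (Iph - I0 * np.expm1((V + I * Rs) / (n * Vt))
                   - (V + I * Rs) / Rsh - I)
    return brentq(f, -50.0, Iph + 1.0)   # f is strictly decreasing in I

i_sc = current(0.0)                                  # short-circuit current, ~Iph
v_oc = brentq(lambda V: current(V), 0.0, 1.0)        # open-circuit voltage
print(i_sc, v_oc)
```

Fitting such parameters iteratively requires repeated implicit solves like this; the paper's point is that closed-form expressions recover them directly from a few I-V curve features.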
Improving Rotor-Stator Interaction Noise Code Through Analysis of Input Parameters
NASA Technical Reports Server (NTRS)
Unton, Timothy J.
2004-01-01
There are two major sources of aircraft noise: the airframe and the engines. The acoustics branch at NASA Glenn focuses on the engine noise sources, of which there are two major kinds: fan noise and jet noise. Fan noise, produced by the rotating machinery of the engine, consists of both tonal noise, which occurs at discrete frequencies, and broadband noise, which occurs across a wide range of frequencies. The focus of my assignment is the broadband noise generated by the interaction of fan flow turbulence and the stator blades, which depends on geometric parameters such as the sweep and stagger angles and blade count, as well as flow parameters such as the turbulence intensity of the flow. The tool I employed in this work is a computer program that predicts broadband noise from fans. The program assumes that the complex shape of the curved blade can be represented as a single flat plate, allowing it to use fairly simple equations that can be solved in a reasonable amount of time. While the results from this representation provided reasonable estimates of the broadband noise levels, they did not usually represent the entire spectrum accurately. My investigation found that the discrepancy between data and theory can be reduced if the leading edge and the trailing edge of the blade are treated separately. Using this approach, I reduced the maximum error in noise level from a high of 30% to less than 5% for the cases investigated. Detailed results of this investigation will be discussed in my presentation.
NASA Astrophysics Data System (ADS)
Mellinger, Philippe; Döhler, Michael; Mevel, Laurent
2016-09-01
An important step in the operational modal analysis of a structure is to infer its dynamic behavior through its modal parameters. They can be estimated by various modal identification algorithms that fit a theoretical model to measured data. When output-only data is available, i.e. measured responses of the structure, frequencies, damping ratios and mode shapes can be identified, assuming that ambient sources like wind or traffic excite the system sufficiently. When input data is also available, i.e. signals used to excite the structure, input/output identification algorithms are used. The use of input information usually provides better modal estimates in a desired frequency range. While the identification of the modal mass is not considered in this paper, we focus on the estimation of the frequencies, damping ratios and mode shapes, relevant for example for modal analysis during in-flight monitoring of aircraft. When identifying the modal parameters from noisy measurement data, the information on their uncertainty is most relevant. In this paper, new variance computation schemes for modal parameters are developed for four subspace algorithms, including output-only and input/output methods, as well as data-driven and covariance-driven methods. For the input/output methods, the known inputs are considered as realizations of a stochastic process. Based on Monte Carlo validations, the quality of identification, the accuracy of the variance estimates and robustness to sensor noise are discussed. Finally, these algorithms are applied to real data measured during vibration tests of an aircraft.
Butcher, B.M.
1997-08-01
A summary of the input parameter values used in final predictions of closure and waste densification in the Waste Isolation Pilot Plant disposal room is presented, along with supporting references. These predictions are referred to as the final porosity surface data and will be used for WIPP performance calculations supporting the Compliance Certification Application to be submitted to the U.S. Environmental Protection Agency. The report includes tables listing all of the input parameter values, references citing their sources, and, in some cases, references to more complete descriptions of the considerations leading to the selection of values.
NASA Astrophysics Data System (ADS)
Miyasato, Yoshihiko
The problem of constructing model reference adaptive H∞ control for distributed parameter systems of hyperbolic type preceded by an unknown input nonlinearity, such as a dead zone or backlash, is considered in this paper. Distributed parameter systems are infinite-dimensional processes, but the proposed control scheme is constructed from finite-dimensional controllers. An adaptive inverse model is introduced to estimate and compensate for the input nonlinearity. A stabilizing control signal is added to regulate the effect of spill-over terms; it is derived as the solution of a certain H∞ control problem in which the residual part of the inverse model and the spill-over term are treated as external disturbances to the process.
Multiple Input Design for Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene
2003-01-01
A method for designing multiple inputs for real-time dynamic system identification in the frequency domain was developed and demonstrated. The designed inputs are mutually orthogonal in both the time and frequency domains, with reduced peak factors to provide good information content for relatively small amplitude excursions. The inputs are designed for selected frequency ranges, and therefore do not require a priori models. The experiment design approach was applied to identify linear dynamic models for the F-15 ACTIVE aircraft, which has multiple control effectors.
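The core idea of such designs can be sketched as follows: assign each input a disjoint set of harmonics of the fundamental frequency 1/T, so the inputs are mutually orthogonal in both time and frequency, and use Schroeder-type phases to reduce the peak factor. This is an illustrative sketch, not the actual experiment-design software; all names and defaults are assumptions.

```python
import numpy as np

def orthogonal_multisines(num_inputs, T, dt, f_min, f_max):
    """Mutually orthogonal multisine inputs on disjoint harmonic sets
    (illustrative sketch of frequency-domain input design).

    Each input sums cosines at an interleaved subset of the harmonics of
    1/T between f_min and f_max, with Schroeder-type phases to reduce the
    peak factor; inputs are normalized to unit amplitude.
    """
    t = np.arange(0.0, T, dt)
    k_min = int(np.ceil(f_min * T))
    k_max = int(np.floor(f_max * T))
    harmonics = np.arange(k_min, k_max + 1)
    inputs = []
    for j in range(num_inputs):
        ks = harmonics[j::num_inputs]        # disjoint harmonic set for input j
        u = np.zeros_like(t)
        for m, k in enumerate(ks):
            phase = -np.pi * m * (m + 1) / len(ks)   # Schroeder-type phases
            u += np.cos(2.0 * np.pi * k * t / T + phase)
        inputs.append(u / np.max(np.abs(u)))
    return t, inputs
```

Because each input occupies its own harmonics of 1/T, the inputs remain exactly orthogonal over the record length regardless of phase, which is what permits simultaneous excitation of multiple control effectors.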
NASA Astrophysics Data System (ADS)
Li, W. P.; Luo, B.; Huang, H.
2016-02-01
This paper presents a vibration control strategy for a two-link Flexible Joint Manipulator (FJM) with a Hexapod Active Manipulator (HAM). A dynamic model of the multi-body, rigid-flexible system composed of an FJM, a HAM and a spacecraft was built. A hybrid controller was proposed by combining the Input Shaping (IS) technique with an Adaptive-Parameter Auto Disturbance Rejection Controller (APADRC). The controller was used to suppress the vibration caused by external disturbances and input motions. Parameters of the APADRC were adaptively adjusted so that the closed-loop system behaves as a given reference system, even if the configuration of the manipulator changes significantly during motion. Because the IS system does not require precise parameters of the flexible manipulator, the controller is sufficiently robust to accommodate uncertainties in system parameters. Simulation results verified the effectiveness of the HAM scheme and controller in suppressing the vibration of the FJM during operation.
Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua
2012-01-01
Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
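The fitting step can be sketched as follows: fit each coordinate of a tract to a 2nd-order polynomial in a normalized along-tract parameter and evaluate the curvature κ = |r′ × r″| / |r′|³ from the analytic derivatives of the fit. This is a hedged sketch of the general approach; the parameterization and function names are assumptions, not the authors' code.

```python
import numpy as np

def tract_curvature(points):
    """Mean curvature of a fiber tract from 2nd-order polynomial fits
    (illustrative sketch of the smoothing idea).

    points: (N, 3) tract coordinates ordered along the fiber. Each
    coordinate is fit to a quadratic in a normalized along-tract
    parameter s, and kappa = |r' x r''| / |r'|**3 is evaluated from the
    analytic derivatives of the fits.
    """
    s = np.linspace(0.0, 1.0, len(points))
    coeffs = [np.polyfit(s, points[:, d], 2) for d in range(3)]
    d1 = np.array([np.polyval(np.polyder(c, 1), s) for c in coeffs]).T  # r'(s)
    d2 = np.array([np.polyval(np.polyder(c, 2), s) for c in coeffs]).T  # r''(s)
    kappa = np.linalg.norm(np.cross(d1, d2), axis=1) / np.linalg.norm(d1, axis=1) ** 3
    return float(np.mean(kappa))
```

Because the derivatives come from the smooth fitted polynomials rather than from noisy point-to-point differences, voxel-level noise does not inflate the curvature estimate, which is the failure mode the study addresses.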
NASA Astrophysics Data System (ADS)
Ghezzi, Luan; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji; Paegert, Martin
2014-12-01
Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature T eff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ~ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from the high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for T eff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An additional test was
Ghezzi, Luan; Da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; Ge, Jian; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Cargile, Phillip; Pepper, Joshua; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Wang, Ji; and others
2014-12-01
Studies of Galactic chemical and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature T {sub eff}, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ∼ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from the high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for T {sub eff}, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An
FAST TRACK COMMUNICATION Accurate estimate of α variation and isotope shift parameters in Na and Mg+
NASA Astrophysics Data System (ADS)
Sahoo, B. K.
2010-12-01
We present accurate calculations of fine-structure constant variation coefficients and isotope shifts in Na and Mg+ using the relativistic coupled-cluster method. In our approach, we are able to determine explicitly the roles of various correlation effects to all orders in these calculations. Most of the results, especially for the excited states, are reported for the first time. Suitable anchor and probe lines for studies of a possible variation of the fine-structure constant can be ascertained by applying the above results to the considered systems.
Accurate nuclear masses from a three parameter Kohn-Sham DFT approach (BCPM)
Baldo, M.; Robledo, L. M.; Schuck, P.; Vinas, X.
2012-10-20
Given the promising features of the recently proposed Barcelona-Catania-Paris (BCP) functional [1], it is the purpose of this work to improve on it further. It is shown, for instance, that the number of open parameters can be reduced from 4-5 to 2-3, i.e. by practically a factor of two, without deteriorating the results.
Accurate parameters of the oldest known rocky-exoplanet hosting system: Kepler-10 revisited
Fogtmann-Schulz, Alexandra; Hinrup, Brian; Van Eylen, Vincent; Christensen-Dalsgaard, Jørgen; Kjeldsen, Hans; Silva Aguirre, Víctor; Tingley, Brandon
2014-02-01
Since the discovery of Kepler-10, the system has received considerable interest because it contains a small, rocky planet which orbits the star in less than a day. The system's parameters, announced by the Kepler team and subsequently used in further research, were based on only five months of data. We have reanalyzed this system using the full span of 29 months of Kepler photometric data, and obtained improved information about its star and the planets. A detailed asteroseismic analysis of the extended time series provides a significant improvement on the stellar parameters: not only can we state that Kepler-10 is the oldest known rocky-planet-harboring system at 10.41 ± 1.36 Gyr, but these parameters combined with improved planetary parameters from new transit fits give us the radius of Kepler-10b to within just 125 km. A new analysis of the full planetary phase curve leads to new estimates of the planetary temperature and albedo, which remain degenerate in the Kepler band. Our modeling suggests that the flux level during the occultation is slightly lower than at the transit wings, which would imply that the nightside of this planet has a non-negligible temperature.
NASA Astrophysics Data System (ADS)
Hochlaf, M.; Puzzarini, C.; Senent, M. L.
2015-07-01
We present multi-component computations for rotational constants, vibrational and torsional levels of medium-sized molecules. Through the treatment of two organic sulphur molecules, ethyl mercaptan and dimethyl sulphide, which are relevant for atmospheric and astrophysical media, we point out the outstanding capabilities of the explicitly correlated coupled-cluster (CCSD(T)-F12) method in conjunction with the cc-pVTZ-F12 basis set for the accurate prediction of such quantities. Indeed, we show that the CCSD(T)-F12/cc-pVTZ-F12 equilibrium rotational constants are in good agreement with those obtained by means of a composite scheme based on CCSD(T) calculations that accounts for the extrapolation to the complete basis set (CBS) limit and core-correlation effects [CCSD(T)/CBS+CV], thus leading to values of ground-state rotational constants rather close to the corresponding experimental data. For vibrational and torsional levels, our analysis reveals that the anharmonic frequencies derived from CCSD(T)-F12/cc-pVTZ-F12 harmonic frequencies and anharmonic corrections (Δν = ω - ν) at the CCSD/cc-pVTZ level closely agree with experimental results. The pattern of the torsional transitions and the shape of the potential energy surfaces along the torsional modes are also well reproduced using the CCSD(T)-F12/cc-pVTZ-F12 energies. Interestingly, this good accuracy is accompanied by a strong reduction of the computational costs. This makes the procedures proposed here schemes of choice for the effective and accurate prediction of spectroscopic properties of organic compounds. Finally, popular density functional approaches are compared with the coupled cluster (CC) methodologies in torsional studies. The long-range CAM-B3LYP functional of Handy and co-workers is recommended for large systems.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1991-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
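The practical consequence, obtaining a lower bound without estimating Beta, can be sketched for the zero-failure case: for each Beta in a plausible range, compute a Weibayes-type (1 − α) reliability lower bound and take the global minimum over the range. This sketch uses the standard zero-failure bound R_L = exp(ln α · (t/T)^β / n), not the paper's derivation, and the grid limits are illustrative assumptions.

```python
import numpy as np

def reliability_lower_bound(t_mission, t_test, n_units, alpha=0.05,
                            beta_lo=0.5, beta_hi=5.0, num_beta=451):
    """Reliability lower bound without estimating the Weibull shape
    parameter (sketch: minimize the bound over a beta range).

    Uses the standard zero-failure (Weibayes-type) bound for n_units
    tested failure-free to t_test:
        R_L(t; beta) = exp(ln(alpha) / n * (t / t_test)**beta)
    and returns the smallest bound over the grid together with the beta
    at which it occurs.
    """
    beta_grid = np.linspace(beta_lo, beta_hi, num_beta)
    bounds = np.exp(np.log(alpha) / n_units * (t_mission / t_test) ** beta_grid)
    i = int(np.argmin(bounds))
    return float(bounds[i]), float(beta_grid[i])
```

The returned bound holds for every shape parameter in the assumed range, which is the sense in which no estimate of Beta is required.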
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the life of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and affected individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital visits for severe allergic reactions. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions. PMID:25794106
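A model of the kind described, visit frequency regressed on NDVI and temperature, can be sketched with ordinary least squares. The linear form and all names are assumptions for illustration, since the abstract does not specify the model.

```python
import numpy as np

def fit_visit_model(ndvi, temperature, visits):
    """Fit a linear model of severe-allergy hospital-visit frequency from
    NDVI and temperature (illustrative sketch; the paper's actual model
    form and coefficients are not given in the abstract).
    """
    # Design matrix with an intercept column
    X = np.column_stack([np.ones_like(ndvi), ndvi, temperature])
    coef, *_ = np.linalg.lstsq(X, visits, rcond=None)

    def predict(ndvi_value, temperature_value):
        return coef[0] + coef[1] * ndvi_value + coef[2] * temperature_value

    return coef, predict
```

Once fitted on a validation period, the predictor can be evaluated on a held-out test period, mirroring the validation/test split mentioned above.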
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-01-01
With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common method, GPS and INS sensors are applied to measure vehicle stability parameters by fusing the data from the two sensor systems. A Kalman filter is usually used to fuse data from multiple sensors, although it requires the model parameters to be known beforehand. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment were conducted to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
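To make the fusion idea concrete, here is a minimal one-dimensional Kalman filter that fuses GPS position measurements with INS acceleration used as the prediction input. This is an illustrative single-stage sketch, not the paper's two-stage filter, fuzzy interpolation or four-wheel vehicle model, and the noise parameters are assumptions.

```python
import numpy as np

def fuse_gps_ins(gps_pos, ins_acc, dt, r_gps=4.0, q_acc=0.5):
    """Minimal 1-D GPS/INS fusion with a linear Kalman filter (sketch).

    State x = [position, velocity]; INS acceleration drives the
    prediction step, GPS position is the measurement. Returns the
    (n, 2) sequence of state estimates.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])          # acceleration input mapping
    H = np.array([[1.0, 0.0]])               # GPS observes position only
    Q = q_acc * np.outer(B, B)               # process noise from accel noise
    R = np.array([[r_gps]])                  # GPS measurement noise variance
    x, P = np.zeros(2), np.eye(2) * 10.0
    est = []
    for z, a in zip(gps_pos, ins_acc):
        # Predict with the INS acceleration as control input
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Update with the GPS position measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        est.append(x.copy())
    return np.array(est)
```

The same predict/update structure generalizes to the vehicle case, where the state includes yaw rate and sideslip angle and the dynamic model replaces the constant-velocity transition.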
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of Squares (LS) procedure, which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and from 410 to 1316 in the transverse colon.
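The LMS estimator itself can be sketched for a simple line-fitting problem: repeatedly fit a minimal random sample and keep the candidate whose median squared residual is smallest, which tolerates up to roughly half of the points being gross outliers. This is a generic sketch of the estimator, not the colonoscopy egomotion code.

```python
import numpy as np

def lms_line(x, y, num_trials=500, rng=None):
    """Least Median of Squares line fit via random minimal sampling
    (generic sketch of the robust estimator).

    Fits a line to random 2-point samples and keeps the candidate whose
    median squared residual over all points is smallest, so gross
    outliers cannot pull the fit the way they do in least squares.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_med, best_fit = np.inf, (0.0, 0.0)
    for _ in range(num_trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                          # degenerate sample, skip
        slope = (y[j] - y[i]) / (x[j] - x[i])
        intercept = y[i] - slope * x[i]
        med = np.median((y - (slope * x + intercept)) ** 2)
        if med < best_med:
            best_med, best_fit = med, (slope, intercept)
    return best_fit
```

Replacing the sum of squared residuals with their median is what lets the tracker disregard erroneous flow vectors instead of being dragged off by them.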
Accurate solutions, parameter studies and comparisons for the Euler and potential flow equations
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Batina, John T.
1988-01-01
Parameter studies are conducted using the Euler and potential flow equation models for steady and unsteady flows in both two and three dimensions. The Euler code is an implicit, upwind, finite-volume code which uses the Van Leer method of flux-vector splitting, recently extended for use on dynamic meshes while maintaining all the properties of the original splitting. The potential flow code is an implicit, finite-difference method for solving the transonic small disturbance equations and incorporates both entropy and vorticity corrections into the solution procedure, thereby extending its applicability into regimes where shock strength normally precludes its use. Parameter studies resulting in benchmark-type calculations include the effects of spatial and temporal refinement, spatial order of accuracy, far-field boundary conditions for steady flow, frequency of oscillation, and the use of subiterations at each time step to reduce linearization and factorization errors. Comparisons between Euler and potential flow results are made, as well as with experimental data where available.
Accurate solutions, parameter studies and comparisons for the Euler and potential flow equations
NASA Technical Reports Server (NTRS)
Anderson, W. Kyle; Batina, John T.
1988-01-01
Parameter studies are conducted using the Euler and potential flow equation models for unsteady and steady flows in both two and three dimensions. The Euler code is an implicit, upwind, finite-volume code which uses the Van Leer method of flux-vector splitting, recently extended for use on dynamic meshes while maintaining all the properties of the original splitting. The potential flow code is an implicit, finite-difference method for solving the transonic small disturbance equations and incorporates both entropy and vorticity corrections into the solution procedure, thereby extending its applicability into regimes where shock strength normally precludes its use. Parameter studies resulting in benchmark-type calculations include the effects of spatial and temporal refinement, spatial order of accuracy, far-field boundary conditions for steady flow, frequency of oscillation, and the use of subiterations at each time step to reduce linearization and factorization errors. Comparisons between Euler and potential flow results are made, as well as with experimental data where available.
Cartwright, Michael S; Dupuis, Janae E; Bargoil, Jessica M; Foster, Dana C
2015-09-01
Mild traumatic brain injury, often referred to as concussion, is a common, potentially debilitating, and costly condition. One of the main challenges in diagnosing and managing concussion is that there is currently no objective test to determine the presence of a concussion and to guide return-to-play decisions for athletes. Traditional neuroimaging tests, such as brain magnetic resonance imaging, are normal in concussion, and therefore diagnosis and management are guided by reported symptoms. Some athletes will under-report symptoms to accelerate their return to play and others will over-report symptoms out of fear of further injury or misinterpretation of underlying conditions, such as migraine headache. Therefore, an objective measure is needed to assist in several facets of concussion management. Limited data from animal and human testing indicate that intracranial pressure increases slightly and cerebrovascular reactivity (the ability of the cerebral arteries to auto-regulate in response to changes in carbon dioxide) decreases slightly following mild traumatic brain injury. We hypothesize that a combination of ultrasonographic measurements (optic nerve sheath diameter and transcranial Doppler assessment of cerebrovascular reactivity) into a single index will allow for an accurate and non-invasive measurement of intracranial pressure and cerebrovascular reactivity, and that this index will be clinically relevant and useful for guiding concussion diagnosis and management. Ultrasound is an ideal modality for the evaluation of concussion because it is portable (allowing for evaluation in many settings, such as on the playing field or in a combat zone), radiation-free (making repeat scans safe), and relatively inexpensive (resulting in nearly universal availability). This paper reviews the literature supporting our hypothesis that an ultrasonographic index can assist in the diagnosis and management of concussion, and it also presents limited data regarding the
Methods to Register Models and Input/Output Parameters for Integrated Modeling
Droppo, James G.; Whelan, Gene; Tryby, Michael E.; Pelton, Mitchell A.; Taira, Randal Y.; Dorow, Kevin E.
2010-07-10
Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework’s functionality to bridge gaps in the user’s knowledge of the components being linked. The framework, therefore, needs the capability to assimilate a wide range of model-specific input/output requirements as well as their associated assumptions and constraints. The process of assimilating such disparate components into an integrated modeling framework varies in complexity and difficulty. Several factors influence the relative ease of assimilating components, including, but not limited to, familiarity with the components being assimilated, familiarity with the framework and its tools that support the assimilation process, the level of documentation associated with the components and the framework, and the design structure of the components and framework. This initial effort reviews different approaches for assimilating models and their model-specific input/output requirements: 1) modifying component models to directly communicate with the framework (i.e., through an Application Programming Interface), 2) developing model-specific external wrappers such that no component model modifications are required, 3) using parsing tools to visually map pre-existing input/output files, and 4) describing and linking models as dynamic link libraries. Most of these approaches are illustrated using the widely distributed modeling system called Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES). The review concludes that each approach has its strengths and weaknesses, and that several factors determine which approach works best in a given application.
2012-01-01
A natural bond orbital (NBO) analysis of unpaired electron spin density in metalloproteins is presented, which allows a fast and robust calculation of paramagnetic NMR parameters. Approximately 90% of the unpaired electron spin density occupies metal–ligand NBOs, allowing the majority of the density to be modeled by only a few NBOs that reflect the chemical bonding environment. We show that the paramagnetic relaxation rate of protons can be calculated accurately using only the metal–ligand NBOs and that these rates are in good agreement with corresponding rates measured experimentally. This holds, in particular, for protons of ligand residues where the point-dipole approximation breaks down. To describe the paramagnetic relaxation of heavy nuclei, the electron spin density in the local orbitals must also be taken into account. Geometric distance restraints for 15N can be derived from the paramagnetic relaxation enhancement and the Fermi contact shift when local NBOs are included in the analysis. Thus, the NBO approach allows us to include experimental paramagnetic NMR parameters of 15N nuclei as restraints in a structure optimization protocol. We performed a molecular dynamics simulation and structure determination of oxidized rubredoxin using the experimentally obtained paramagnetic NMR parameters of 15N. The corresponding structures obtained are in good agreement with the crystal structure of rubredoxin. Thus, the NBO approach allows an accurate description of the geometric structure and the dynamics of metalloproteins when NMR parameters are available for nuclei in the immediate vicinity of the metal site. PMID:22329704
NASA Astrophysics Data System (ADS)
Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.
2016-03-01
An acoustic-transfection technique has been developed for the first time by integrating a high-frequency ultrasonic transducer and a fluorescence microscope. High-frequency ultrasound with a center frequency over 150 MHz can focus an acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single-cell-level imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We changed peak-to-peak voltages and pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules by acoustic-transfection, we applied several acoustic pulses, and the intensity of PI fluorescence increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver pMax-E2F1 plasmid, and GFP expression was confirmed in HeLa cells 24 hours after the intracellular delivery.
NASA Astrophysics Data System (ADS)
Filioglou, M.; Balis, D.; Siomos, N.; Poupkou, A.; Dimopoulos, S.; Chaikovsky, A.
2016-06-01
A targeted sensitivity study of the LIRIC algorithm was considered necessary to estimate the uncertainty introduced into the volume concentration profiles by the arbitrary selection of user-defined input parameters. For this purpose, three different tests were performed using lidar data from Thessaloniki: tests on the selection of the regularization parameters, an upper-limit test and a lower-limit test. The different sensitivity tests were applied to two cases with different predominant aerosol types, a dust episode and a typical urban case.
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-01-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies for detecting unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web server, available at: http://guidance.tau.ac.il. PMID:25883146
NASA Astrophysics Data System (ADS)
Martínez, M. J.; Marco, F. J.; López, J. A.
2009-02-01
The Hipparcos catalog provides a reference frame at optical wavelengths for the new International Celestial Reference System (ICRS). This new reference system was adopted following the resolution agreed at the 23rd IAU General Assembly held in Kyoto in 1997. Differences in the Hipparcos system of proper motions and the previous materialization of the reference frame, the FK5, are expected to be caused only by the combined effects of the motion of the equinox of the FK5 and the precession of the equator and the ecliptic. Several authors have pointed out an inconsistency between the differences in proper motion of the Hipparcos-FK5 and the correction of the precessional values derived from VLBI and lunar laser ranging (LLR) observations. Most of them have claimed that these discrepancies are due to slightly biased proper motions in the FK5 catalog. The different mathematical models that have been employed to explain these errors have not fully accounted for the discrepancies in the correction of the precessional parameters. Our goal here is to offer an explanation for this fact. We propose the use of independent parametric and nonparametric models. The introduction of a nonparametric model, combined with the inner product in the square integrable functions over the unitary sphere, would give us values which do not depend on the possible interdependencies existing in the data set. The evidence shows that zonal studies are needed. This would lead us to introduce a local nonparametric model. All these models will provide independent corrections to the precessional values, which could then be compared in order to study the reliability in each case. Finally, we obtain values for the precession corrections that are very consistent with those that are currently adopted.
Breeding, R.J.; Harper, F.T.; Brown, T.D.; Gregory, J.J.; Payne, A.C.; Gorham, E.D.; Murfin, W.; Amos, C.N.
1992-03-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US, reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom, and Grand Gulf. The emphasis in this risk analysis was not on determining a so-called point estimate of risk, but rather on determining the distribution of risk and discovering the uncertainties that account for the breadth of this distribution. Off-site risk initiated by events both internal and external to the power station was assessed. Much of the important input to the logic models was generated by expert panels. This document presents the distributions, and the rationale supporting them, for the questions posed to the Structural Response Panel.
NASA Astrophysics Data System (ADS)
Ponizko, A. S.
1987-04-01
The specific metal input requirements for the construction of explosion-proof electric motors can be reduced by improving the forced-air cooling and by venting hazardous explosion pressure through gas-permeable fire barriers. Quantitative estimates of the cooling efficiency are provided for explosion-proof asynchronous motors cooled by a twin blower mounted on the motor shaft. Ventilation inside the explosion-proof containment is accomplished as follows: air from the inside blower of the fan assembly is drawn through the porous elements at the working end of the shaft, passed through the rotor channels and through the porous elements of the second bearing shield plate, and then directed by the vanes of the fan into the air flow from the outer forced-air circulating blower. Calculations of the air flow, temperature and cooling efficiency are given for a four-pole 160 kW VAO315M-4 motor. The performance of the porous fire barriers in industrial environments is also discussed.
Ajami, N K; Duan, Q; Sorooshian, S
2006-05-05
This paper presents a new technique, the Integrated Bayesian Uncertainty Estimator (IBUNE), to account explicitly for the major uncertainties of hydrologic rainfall-runoff predictions. The uncertainties from the input (forcing) data, mainly the precipitation observations, and from the model parameters are reduced through a Markov chain Monte Carlo (MCMC) scheme, the Shuffled Complex Evolution Metropolis (SCEM) algorithm, which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model outputs. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi, is used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing error or model structural uncertainty leads to unrealistic model simulations and associated uncertainty bounds that do not consistently capture and represent the real-world behavior of the watershed.
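A minimal sketch of the Bayesian Model Averaging step described above: forecasts from several models are combined with nonnegative weights that sum to one. The weights and streamflow values here are purely illustrative, not IBUNE output.

```python
import numpy as np

def bma_combine(predictions, weights):
    """Combine ensemble member predictions with BMA weights.

    predictions: (n_models, n_times) array of model streamflow forecasts
    weights: (n_models,) nonnegative weights summing to 1
    Returns the weighted-mean forecast at each time step.
    """
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0)
    return weights @ np.asarray(predictions, dtype=float)

# Illustrative: three rainfall-runoff models, four time steps
preds = np.array([[10., 12., 11., 9.],
                  [11., 13., 12., 10.],
                  [ 9., 11., 10., 8.]])
w = np.array([0.5, 0.3, 0.2])
mean_forecast = bma_combine(preds, w)
```

In the full BMA scheme the weights themselves are estimated from past performance (e.g. by expectation-maximization) and each member also carries a predictive variance; only the deterministic combination step is shown here.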
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs, because worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and of a non-collocated mass-spring system, show the added information provided by this hybrid analysis.
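The contrast drawn above between interval bounds and probabilistic analysis can be illustrated with a plain Monte Carlo sketch (not the paper's hybrid reliability method): an uncertain damping ratio is sampled from an assumed distribution and propagated through the standard second-order percent-overshoot formula, yielding likelihood information that interval bounds alone do not provide. All numbers are assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def overshoot(zeta):
    # Percent overshoot of a standard underdamped second-order step response
    return 100.0 * np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta**2))

# Hypothetical uncertain damping ratio: normal, mean 0.5, sd 0.05,
# clipped to the underdamped range (0, 1)
zeta = np.clip(rng.normal(0.5, 0.05, 100_000), 0.01, 0.99)
mp = overshoot(zeta)

mean_mp, std_mp = mp.mean(), mp.std()
# An interval-bound analysis would report only [mp.min(), mp.max()];
# the sampled distribution adds mean, variance and tail probabilities.
```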
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of the glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186
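A sketch of the Weibull-type saccharification curve underlying the model above. Whatever the shape parameter n, the characteristic time λ is the time at which conversion reaches 1 − 1/e ≈ 63.2% of its maximum, which is why λ summarizes overall system performance. The parameter values below are illustrative, not fitted values from the paper.

```python
import numpy as np

def weibull_yield(t, y_max, lam, n):
    """Weibull-type saccharification curve: converted fraction vs time.
    lam (λ) is the characteristic time: at t = λ the yield is
    (1 - 1/e) * y_max, independent of the shape parameter n.
    """
    t = np.asarray(t, dtype=float)
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# Illustrative parameters (assumed): a 72 h hydrolysis
t = np.linspace(0.0, 72.0, 9)
y = weibull_yield(t, y_max=0.9, lam=24.0, n=0.8)

# Fraction of maximum yield reached at t = λ, for any n
frac_at_lambda = float(weibull_yield(24.0, 1.0, 24.0, 0.8))
```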
Identification of the battery state-of-health parameter from input-output pairs of time series data
NASA Astrophysics Data System (ADS)
Li, Yue; Chattopadhyay, Pritthi; Ray, Asok; Rahn, Christopher D.
2015-07-01
As a paradigm of dynamic data-driven application systems (DDDAS), this paper addresses real-time identification of the State of Health (SOH) parameter over the life span of a battery that is subjected to approximately repeated cycles of discharging/recharging current. In the proposed method, finite-length data of interest are selected via wavelet-based segmentation from the time series of synchronized input-output (i.e., current-voltage) pairs in the respective two-dimensional space. Then, symbol strings are generated by partitioning the selected segments of the input-output time series to construct a special class of probabilistic finite state automata (PFSA), called D-Markov machines. Pertinent features of the statistics of battery dynamics are extracted as the state emission matrices of these PFSA. This real-time method of SOH parameter identification relies on the divergence between extracted features. The underlying concept has been validated on (approximately periodic) experimental data, generated from a commercial-scale lead-acid battery. It is demonstrated by real-time analysis of the acquired current-voltage data on in-situ computational platforms that the proposed method is capable of distinguishing battery current-voltage dynamics at different aging stages, as an alternative to computation-intensive and electrochemistry-dependent analysis via physics-based modeling.
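The symbolization and D-Markov construction described above can be sketched as follows: a scalar time series is partitioned into a small alphabet by quantile bins, and symbol-to-symbol transition counts form the row-stochastic matrix of a depth-1 D-Markov machine. The signal, alphabet size, and depth here are illustrative assumptions; the paper works with two-dimensional current-voltage segments and extracts state emission matrices.

```python
import numpy as np

def symbolize(x, n_symbols=4):
    """Partition a scalar time series into symbols by quantile bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)  # symbols 0 .. n_symbols-1

def d_markov_matrix(symbols, n_symbols=4):
    """Row-stochastic transition matrix of a depth-1 D-Markov machine."""
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    counts += 1e-12  # avoid division by zero for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)

# Illustrative: a noisy, approximately periodic "current" signal
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)
P = d_markov_matrix(symbolize(x))
```

A divergence measure between matrices P extracted at different aging stages (as the paper does between feature matrices) then serves as the SOH indicator.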
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2010-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will make use of distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. Research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique and validating this technique through simulation and flight test of the X-48B aircraft. The X-48B aircraft is an 8.5 percent-scale hybrid wing body aircraft demonstrator designed by The Boeing Company (Chicago, Illinois, USA), built by Cranfield Aerospace Limited (Cranfield, Bedford, United Kingdom) and flight tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California, USA). Based on data from flight test maneuvers performed at Dryden Flight Research Center, aerodynamic parameter estimation was performed using linear regression and output error techniques. An input design technique that uses temporal separation for de-correlation of control surfaces is proposed, and simulation and flight test results are compared with the aerodynamic database. This paper will present a method to determine individual control surface aerodynamic derivatives.
Subramanian, Swetha; Mast, T Douglas
2015-10-01
Computational finite element models are commonly used for the simulation of radiofrequency ablation (RFA) treatments. However, the accuracy of these simulations is limited by the lack of precise knowledge of tissue parameters. In this technical note, an inverse solver based on the unscented Kalman filter (UKF) is proposed to optimize values for specific heat, thermal conductivity, and electrical conductivity resulting in accurately simulated temperature elevations. A total of 15 RFA treatments were performed on ex vivo bovine liver tissue. For each RFA treatment, 15 finite-element simulations were performed using a set of deterministically chosen tissue parameters to estimate the mean and variance of the resulting tissue ablation. The UKF was implemented as an inverse solver to recover the specific heat, thermal conductivity, and electrical conductivity corresponding to the measured area of the ablated tissue region, as determined from gross tissue histology. These tissue parameters were then employed in the finite element model to simulate the position- and time-dependent tissue temperature. Results show good agreement between simulated and measured temperature. PMID:26352462
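At the heart of the UKF used above is the unscented transform: a Gaussian over the uncertain tissue parameters is represented by deterministically chosen sigma points, which are propagated through the nonlinear forward model to estimate the output mean and covariance. The toy forward model below (ablated "area" as a function of two conductivity-like parameters) is purely an assumption for illustration, not the paper's finite element model.

```python
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through nonlinear f via sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])        # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Hypothetical forward model: "ablated area" vs (thermal conductivity,
# electrical conductivity) -- illustrative only
f = lambda p: np.array([p[0] * p[1] + 0.5 * p[0] ** 2])
m, C = unscented_transform(f, np.array([0.5, 0.4]), np.diag([0.01, 0.01]))
```

In the inverse-solver setting, this forward propagation is combined with a measurement update that pulls the parameter estimate toward the observed ablation area.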
NASA Astrophysics Data System (ADS)
Villalba, Jesus Daniel; Gomez, Ivan Dario; Laier, Jose Elias
2010-09-01
Structural damage detection is a very important research topic, and specific tools for solving it are still lacking. A promising tool is the artificial neural network (ANN), which can deal with hard problems of this kind. This paper uses a back-propagation ANN with Bayesian regularization training to locate and quantify damage in truss structures. The input parameters were natural frequencies combined with mode shapes, modal flexibilities or modal strain energies. The ANN was trained by considering only simple damage scenarios, random multiple damage scenarios, or a combination of them. The results are shown in terms of the percentage of cases in which the trained ANN achieves a given performance in assessing both the damage extent and the presence of damaged elements. The best ANN performance is obtained by using modal strain energies and multiple damage scenarios.
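The physical basis of the ANN input features above is that stiffness loss shifts the natural frequencies and mode-derived quantities. A minimal sketch on an assumed 2-DOF spring-mass chain (not a truss from the paper): a 20% stiffness loss lowers the natural frequencies, and the frequency shifts form an ANN-style feature vector.

```python
import numpy as np

def natural_freqs(k1, k2, m=1.0):
    """Natural frequencies (Hz) of a 2-DOF spring-mass chain."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = m * np.eye(2)
    w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(w2) / (2.0 * np.pi)

f_healthy = natural_freqs(1000.0, 1000.0)
f_damaged = natural_freqs(1000.0, 800.0)   # 20% stiffness loss in element 2

# Frequencies plus their shifts: the kind of input vector fed to the ANN
features = np.concatenate([f_healthy, f_damaged - f_healthy])
```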
NASA Astrophysics Data System (ADS)
Iorio, L.
2016-01-01
By using the most recently published Doppler tomography measurements and accurate theoretical modelling of the oblateness-driven orbital precessions, we tightly constrain some of the physical and orbital parameters of the planetary system hosted by the fast rotating star WASP-33. In particular, the measurements of the orbital inclination ip to the plane of the sky and of the sky-projected spin-orbit misalignment λ at two epochs about six years apart allowed for the determination of the longitude of the ascending node Ω and of the orbital inclination I to the apparent equatorial plane at the same epochs. As a consequence, average rates of change dΩ/dt and dI/dt of these two orbital elements, accurate to the ≈10⁻² deg yr⁻¹ level, were calculated as well. By comparing them with general theoretical expressions for the precessions of Ω and I induced by an oblate star whose symmetry axis is arbitrarily oriented, we were able to determine the angle i⋆ between the line of sight and the star's spin S⋆, together with its first even zonal harmonic J2⋆, obtaining i⋆ = 142° (+10°/−11°) and J2⋆ = 2.1 (+0.8/−0.5) × 10⁻⁴. As a by-product, the angle ψ between S⋆ and the orbital angular momentum L is as large as ψ ≈ 100°: ψ(2008) = 99° (+5°/−4°) and ψ(2014) = 103° (+5°/−4°), changing at a rate dψ/dt = 0.7 (+1.5/−1.6) deg yr⁻¹. The predicted general relativistic Lense-Thirring precessions, of the order of ≈10⁻³ deg yr⁻¹, are, at present, about one order of magnitude below the measurability threshold.
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares, or SLS) introduces noise into the estimation of the parameters; the main sources of this noise are input errors and structural deficiencies of the hydrological model. The resulting biased calibrated parameters cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and they provoke the loss of part or all of the physical meaning of the modeled processes. In other words, they yield a calibrated hydrological model which works well, but not for the right reasons. Moreover, an unsuitable error model yields a non-reliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As the hydrological model, a conceptual distributed model called TETIS was used, with a particular split structure of the effective model parameters. Bayesian inference was performed with the aid of a Markov chain Monte Carlo (MCMC) algorithm called DREAM-ZS, which quantifies the uncertainty of the hydrological and error model parameters by obtaining their joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the
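A toy illustration of the joint-inference idea (not TETIS or DREAM-ZS): a random-walk Metropolis sampler jointly infers one "hydrological" parameter a and one error-model parameter σ from synthetic flows. The data-generating model, step sizes, and chain length are all assumptions for the sketch; the point is that the error-model parameter is sampled together with the model parameter rather than fixed beforehand.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observed flows": y = a * x + noise, a toy stand-in for a
# hydrological model plus an additive error model with unknown sigma
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.5, x.size)

def log_post(theta):
    """Log-posterior (flat priors) for theta = (a, log sigma)."""
    a, log_sigma = theta
    sigma = np.exp(log_sigma)
    resid = y - a * x
    return -0.5 * np.sum(resid**2) / sigma**2 - y.size * log_sigma

theta = np.array([1.0, 0.0])       # deliberately poor starting point
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.02, 0.05])   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

posterior = np.array(chain)[1000:]                 # discard burn-in
a_hat = posterior[:, 0].mean()
```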
NASA Astrophysics Data System (ADS)
Rezaei, Meisam; Seuntjens, Piet; Shahidi, Reihaneh; Joris, Ingeborg; Boënne, Wesley; Cornelis, Wim
2016-04-01
Soil hydraulic parameters, which can be derived from in situ and/or laboratory experiments, are key input parameters for modeling water flow in the vadose zone. In this study, we measured soil hydraulic properties with typical laboratory measurements and with field tension infiltration experiments, using Wooding's analytical solution and inverse optimization, along the vertical direction within two typical podzol profiles with sand texture in a potato field. The objective was to identify proper sets of hydraulic parameters and to evaluate their relevance for hydrological model performance for irrigation management purposes. Tension disc infiltration experiments were carried out at five different depths in both profiles at consecutive negative pressure heads of 12, 6, 3 and 0.1 cm. At the same locations and depths, undisturbed samples were taken to determine the water retention curve, with hanging water column and pressure extractors, and lab-saturated hydraulic conductivity, with the constant head method. Both approaches allowed us to determine the Mualem-van Genuchten (MVG) hydraulic parameters (residual water content θr, saturated water content θs, shape parameters α and n, and field or lab saturated hydraulic conductivity Kfs and Kls). Results demonstrated horizontal differences and vertical variability of hydraulic properties. Inverse optimization with Hydrus 2D/3D resulted in excellent matches between observed and fitted infiltration rates in combination with the final water content at the end of the experiment, θf. It also resulted in close correspondence of α and Kfs with those from Logsdon and Jaynes' (1993) solution of Wooding's equation. The MVG parameters Kfs and α estimated from the inverse solution (with θr set to zero) were relatively similar to the values from Wooding's solution, which were used as initial values, and the estimated θs corresponded to the (effective) field saturated water content θf. We found the Gardner parameter αG to be related to the optimized van
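The MVG retention parameters named above (θr, θs, α, n) enter the van Genuchten retention function, sketched below. The parameter values are generic sand-like assumptions, not the field-fitted values from this study.

```python
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """Mualem-van Genuchten water retention curve: water content theta
    as a function of pressure head h (cm, negative under suction)."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative sand-like parameters (assumed, not the paper's estimates)
h = np.array([-0.1, -3.0, -6.0, -12.0, -100.0])
theta = vg_theta(h, theta_r=0.05, theta_s=0.40, alpha=0.075, n=1.9)
```

The tension heads used in the experiments (0.1 to 12 cm suction) probe only the wet end of this curve, which is why the infiltration data are combined with laboratory retention measurements to constrain the full parameter set.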
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1995-01-01
Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open-loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points that define each input are included, along with plots of the input time histories.
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of "Drift Degradation Analysis", BSC 2004 [DIRS 166107]); (2) sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of "Drift Degradation Analysis", BSC 2004 [DIRS 166107]); (3) sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficients for analysis of waste package and drip shield damage due to vibratory motion in "Structural Calculations of Waste Package Exposed to Vibratory Ground Motion" (BSC 2004 [DIRS 167083]) and in "Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion" (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the
Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong; Sacks, William J.; Lei, Huimin; Leung, Lai-Yung R.
2013-09-16
Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.
Monette, F.; Biwer, B.; LePoire, D.; Chen, S.Y.
1994-02-01
The U.S. Department of Energy is considering a broad range of alternatives for the future configuration of radioactive waste management at its network of facilities. Because the transportation of radioactive waste is an integral component of the management alternatives being considered, the estimated human health risks associated with both routine and accident transportation conditions must be assessed to allow a complete appraisal of the alternatives. This paper provides an overview of the technical approach being used to assess the radiological risks from the transportation of radioactive wastes. The approach presented employs the RADTRAN 4 computer code to estimate the collective population risk during routine and accident transportation conditions. Supplemental analyses are conducted using the RISKIND computer code to address areas of specific concern to individuals or population subgroups. RISKIND is used for estimating routine doses to maximally exposed individuals and for assessing the consequences of the most severe credible transportation accidents. The transportation risk assessment is designed to ensure, through uniform and judicious selection of models, data, and assumptions, that relative comparisons of risk among the various alternatives are meaningful. This is accomplished by uniformly applying common input parameters and assumptions to each waste type for all alternatives. The approach presented can be applied to all radioactive waste types and provides a consistent and comprehensive evaluation of transportation-related risk.
NASA Astrophysics Data System (ADS)
Kazemi, Mohsen; Aghakhani, Masood; Haghshenas-Jazi, Ehsan; Behmaneshfar, Ali
2016-02-01
The aim of this paper is to optimize the depth of penetration with regard to the effect of MgO nanoparticles and welding input parameters. For this purpose, response surface methodology (RSM) with a central composite rotatable design (CCRD) was used. The welding current, arc voltage, nozzle-to-plate distance, welding speed, and thickness of MgO nanoparticles were taken as the factors, and depth of penetration was considered as the response. A quadratic polynomial model was used to determine the relationship between the response and the factors. A reduced model was obtained from the data; the values of R², R²(pred), and R²(adj) for this model were 92.05, 69.05, and 86.31 pct, respectively. This model was therefore suitable, and it was used to determine the optimum levels of the factors. The results show that the welding current, arc voltage, and nozzle-to-plate distance should be set at their high levels, while the welding speed and thickness of MgO nanoparticles should be set at their low levels.
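The quadratic polynomial model used in RSM can be sketched as an ordinary least-squares fit over a second-order design matrix (intercept, linear, squared, and two-factor interaction terms). The two-factor data below are synthetic assumptions, not the welding measurements; with five factors, as in the study, the same construction simply produces more columns.

```python
import numpy as np

def quad_features(X):
    """Full second-order (quadratic) RSM design matrix:
    intercept, linear, squared, and two-factor interaction terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Synthetic two-factor example (think coded current, voltage) with a
# known response surface, so the recovered coefficients can be checked
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (30, 2))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
# beta order: [const, x0, x1, x0^2, x1^2, x0*x1]
```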
NASA Astrophysics Data System (ADS)
Kimura, H.; Asano, Y.; Matsumoto, T.
2012-12-01
The rapid determination of hypocentral parameters and their transmission to the public are valuable components of disaster mitigation. We have operated an automatic system for this purpose, termed the Accurate and QUick Analysis system for source parameters (AQUA), since 2005 (Matsumura et al., 2006). In this system, the initial hypocenter, the moment tensor (MT), and the centroid moment tensor (CMT) solutions are automatically determined and posted on the NIED Hi-net Web site (www.hinet.bosai.go.jp). This paper describes improvements made to the AQUA to overcome limitations that became apparent after the 2011 Tohoku Earthquake (05:46:17 UTC, March 11, 2011). The improvements included the processing of NIED F-net velocity-type strong-motion records, because NIED F-net broadband seismographs are saturated for great earthquakes such as the 2011 Tohoku Earthquake. These velocity-type strong-motion seismographs provide unsaturated records not only for the 2011 Tohoku Earthquake, but also at recording stations located close to the epicenters of M>7 earthquakes. We used 0.005-0.020 Hz records for M>7.5 earthquakes, in contrast to the 0.01-0.05 Hz records employed in the original system. The initial hypocenters determined from arrival times picked on seismograms recorded by NIED Hi-net stations can have large errors in magnitude and hypocenter location, especially for great earthquakes or earthquakes located far from the onland Hi-net network. The size of the 2011 Tohoku Earthquake was initially underestimated by the AQUA to be around M5 at the initial stage of rupture. Numerous aftershocks occurred at the outer rise east of the Japan trench, where a great earthquake is anticipated to occur. Hence, we modified the system to repeat the MT analyses assuming a larger size for all earthquakes whose magnitude was initially underestimated. We also broadened the search range of centroid depth for earthquakes located far from the onland Hi-net network.
NASA Technical Reports Server (NTRS)
Kanning, G.
1975-01-01
A digital computer program written in FORTRAN is presented that implements the system identification theory for deterministic systems using input-output measurements. The user supplies programs simulating the mathematical model of the physical plant whose parameters are to be identified. The user may choose any one of three options. The first option allows for a complete model simulation for fixed input forcing functions. The second option identifies up to 36 parameters of the model from wind tunnel or flight measurements. The third option performs a sensitivity analysis for up to 36 parameters. The use of each option is illustrated with an example using input-output measurements for a helicopter rotor tested in a wind tunnel.
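The sensitivity-analysis option described above can be sketched with central finite differences: each parameter is perturbed in turn and the change in the simulated output time history is recorded. The toy first-order "plant" below is an assumption for illustration, not the helicopter rotor model of the report.

```python
import numpy as np

def sensitivities(model, p, h=1e-6):
    """Central-difference sensitivities d(output)/d(param_i) for a model
    mapping a parameter vector to an output time history."""
    p = np.asarray(p, dtype=float)
    y0 = np.asarray(model(p))
    S = np.empty((y0.size, p.size))
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = h * max(1.0, abs(p[i]))    # scale step to parameter size
        S[:, i] = (np.asarray(model(p + dp))
                   - np.asarray(model(p - dp))) / (2.0 * dp[i])
    return S

# Toy plant: first-order decay response y(t) = p0 * exp(-p1 * t)
t = np.linspace(0.0, 1.0, 5)
model = lambda p: p[0] * np.exp(-p[1] * t)
S = sensitivities(model, [2.0, 3.0])
```

For this plant the sensitivities are known analytically (dy/dp0 = exp(−p1 t), dy/dp1 = −p0 t exp(−p1 t)), which makes the finite-difference sketch easy to verify.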
NASA Astrophysics Data System (ADS)
Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda
2009-02-01
The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomena, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented in which a phosphor formulation and excitation source are first optimized to produce white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, the performance of the complete system is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Panzetta, Francesca; Müller, Horst; Gerstl, Michael
2016-04-01
The GGOS vision is to integrate geometric and gravimetric observation techniques to estimate consistent geodetic-geophysical parameters. In order to reach this goal, the common estimation of station coordinates, Stokes coefficients and Earth Orientation Parameters (EOP) is necessary. Satellite Laser Ranging (SLR) provides the ability to study correlations between the different parameter groups, since the observed satellite orbit dynamics are sensitive to the above-mentioned geodetic parameters. To decrease the correlations, SLR observations to multiple satellites have to be combined. In this paper, we compare the estimated EOP of (i) single-satellite SLR solutions and (ii) multi-satellite SLR solutions. To this end, we jointly estimate station coordinates, EOP, Stokes coefficients and orbit parameters using different satellite constellations. A special focus of this investigation is the de-correlation of different geodetic parameter groups through the combination of SLR observations. Besides SLR observations to spherical satellites (commonly used), we discuss the impact of SLR observations to non-spherical satellites such as the JASON-2 satellite. The goal of this study is to discuss the existing parameter interactions and to present a strategy for obtaining reliable estimates of station coordinates, EOP, orbit parameters and Stokes coefficients in one common adjustment. Thereby, the benefits of a multi-satellite SLR solution are evaluated.
NASA Astrophysics Data System (ADS)
Moore, Christopher; Hopkins, Matthew; Moore, Stan; Boerner, Jeremiah; Cartwright, Keith
2015-09-01
Simulation of breakdown is important for understanding and designing a variety of applications, such as mitigating undesirable discharge events. Such simulations need to be accurate from early-time arc initiation through late-time stable arc behavior. Here we examine constraints on the timestep and mesh size required for arc simulations using the particle-in-cell (PIC) method with direct simulation Monte Carlo (DSMC) collisions. Accurate simulation of electron avalanche across a fixed voltage drop and constant neutral density (reduced field of 1000 Td) was found to require a timestep ~ 1/100 of the mean time between collisions and a mesh size ~ 1/25 of the mean free path. These constraints are much smaller than the typical PIC-DSMC requirements for timestep and mesh size. Both constraints are related to the fact that charged particles are accelerated by the external field. Thus, gradients in the electron energy distribution function can exist at scales smaller than the mean free path, and these must be resolved by the mesh for accurate collision rates. Additionally, the timestep must be small enough that the particle energy change due to the fields remains small, in order to capture gradients in the collision cross sections versus energy. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
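The resolution constraints above can be turned into concrete numbers once the gas conditions are fixed. The sketch below applies the stated ratios (timestep ~ 1/100 of the mean collision time, mesh ~ 1/25 of the mean free path); the cross section, neutral density and electron energy are illustrative placeholders, not values from the paper.

```python
import math

# Back-of-envelope PIC-DSMC resolution constraints:
#   mean free path  lambda = 1/(n*sigma)
#   collision time  tau    = lambda / v
# with dt <= tau/100 and dx <= lambda/25 per the abstract's findings.

SIGMA = 1e-19        # electron-neutral cross section [m^2] (assumed)
N_NEUTRAL = 2.4e22   # neutral number density [m^-3] (assumed)
E_EV = 10.0          # characteristic electron energy [eV] (assumed)

M_E = 9.109e-31      # electron mass [kg]
Q_E = 1.602e-19      # elementary charge [C]

mean_free_path = 1.0 / (N_NEUTRAL * SIGMA)
v_electron = math.sqrt(2.0 * E_EV * Q_E / M_E)   # speed from kinetic energy
collision_time = mean_free_path / v_electron

dt_max = collision_time / 100.0   # timestep constraint
dx_max = mean_free_path / 25.0    # mesh-size constraint

print(f"lambda = {mean_free_path:.3e} m, tau = {collision_time:.3e} s")
print(f"dt <= {dt_max:.3e} s, dx <= {dx_max:.3e} m")
```

Note how much tighter these are than the usual DSMC guidance of resolving the mean free path and collision time themselves.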
NASA Astrophysics Data System (ADS)
Brawand, Nicholas; Vörös, Márton; Govoni, Marco; Galli, Giulia
The accurate prediction of optoelectronic properties of molecules and solids is a persisting challenge for current density functional theory (DFT) based methods. We propose a hybrid functional where the mixing fraction of exact and local exchange is determined by a non-empirical, system-dependent function. This functional yields ionization potentials, fundamental gaps and optical gaps of many diverse systems in excellent agreement with experiments, including organic and inorganic molecules and nanocrystals. We further demonstrate that the newly defined hybrid functional gives the correct energy-level alignment for the exemplary TTF-TCNQ donor-acceptor system. DOE-BES: DE-FG02-06ER46262.
Jiang, Bin; Guo, Hua
2016-08-01
In search for an accurate description of the dissociative chemisorption of water on the Ni(111) surface, we report a new nine-dimensional potential energy surface (PES) based on a large number of density functional theory points using the RPBE functional. Seven-dimensional quantum dynamical calculations have been carried out on the RPBE PES, followed by site averaging and lattice effect corrections, yielding sticking probabilities that are compared with both the previous theoretical results based on a PW91 PES and experiment. It is shown that the RPBE functional increases the reaction barrier, but has otherwise a minor impact on the PES topography. Better agreement with experimental results is obtained with the new PES, but the agreement is still not quantitative. Possible sources of the remaining discrepancies are discussed. PMID:27436348
NASA Astrophysics Data System (ADS)
Deb, S.; Maitra, K.; Roychoudhuri, A.
1985-06-01
In the wake of the energy crisis, attempts are being made to develop a variety of energy conversion devices, such as solar cells. The single most important operational characteristic of a conversion element generating electricity is the V-against-I curve. Three points on this characteristic curve are of paramount importance: the short-circuit point, the open-circuit point, and the maximum power point. The present paper proposes a new, simple and accurate method of determining the maximum power point (Vm, Im) of the V-against-I characteristic, based on a geometrical interpretation. The method is general enough to be applicable to any energy conversion device having a nonlinear V-against-I characteristic. The paper also provides a method for determining the fill factor (FF), the series resistance (Rs), and the diode ideality factor (A) from a single set of connected observations.
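To make the quantities concrete, the sketch below locates (Vm, Im) and computes FF for a nonlinear V-against-I characteristic by a brute-force scan of a single-diode model. This is not the paper's geometrical construction, and the diode parameters are illustrative.

```python
import math

# Locate the maximum power point (Vm, Im) on a nonlinear V-I curve and
# compute the fill factor FF = Pmax/(Voc*Isc), using an assumed
# single-diode characteristic I(V) = Isc - I0*(exp(V/(A*Vt)) - 1).

I_SC = 3.0      # short-circuit current [A] (assumed)
I_0 = 1e-9      # diode saturation current [A] (assumed)
A = 1.3         # diode ideality factor (assumed)
V_T = 0.02585   # thermal voltage at ~300 K [V]

def current(v):
    return I_SC - I_0 * (math.exp(v / (A * V_T)) - 1.0)

# Open-circuit voltage from I(Voc) = 0.
v_oc = A * V_T * math.log(I_SC / I_0 + 1.0)

# Scan the curve for the point of maximum power P = V * I(V).
v_m = i_m = p_max = 0.0
steps = 20000
for k in range(steps + 1):
    v = v_oc * k / steps
    p = v * current(v)
    if p > p_max:
        v_m, i_m, p_max = v, current(v), p

fill_factor = p_max / (v_oc * I_SC)
print(f"Voc = {v_oc:.3f} V, Vm = {v_m:.3f} V, Im = {i_m:.3f} A, FF = {fill_factor:.3f}")
```

A geometric method such as the one proposed in the paper replaces this exhaustive scan with a construction on the measured curve itself.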
NASA Astrophysics Data System (ADS)
Marin, Andrew T.; Musselman, Kevin P.; MacManus-Driscoll, Judith L.
2013-04-01
This work shows that when a Schottky barrier is present in a photovoltaic device, such as in a device with an ITO/ZnO contact, equivalent circuit analysis must be performed with admittance spectroscopy to accurately determine the pn junction interface recombination parameters (i.e., capture cross section and density of trap states). Without equivalent circuit analysis, a Schottky barrier can produce an error of ~4 orders of magnitude in the capture cross section and ~50% error in the measured density of trap states. Using a solution-processed ZnO/Cu2O photovoltaic test system, we apply our analysis to clearly separate the contributions of interface states at the pn junction from the Schottky barrier at the ITO/ZnO contact, so that the interface state recombination parameters can be accurately characterized. This work is widely applicable to the multitude of photovoltaic devices that use ZnO adjacent to ITO.
Rosen, I.G.; Luczak, Susan E.; Weiss, Jordan
2014-01-01
We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented. PMID:24707065
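The Hodrick-Prescott filter mentioned for episode identification splits a series into a smooth trend and a residual by a penalized least-squares solve. A minimal version on synthetic data might look as follows; the smoothing parameter and the two-bump "episode" signal are illustrative, not the paper's patient data.

```python
import numpy as np

# Minimal Hodrick-Prescott filter: solve (I + lam * D'D) tau = y, where D
# is the second-difference operator, so tau is a smooth trend and y - tau
# the residual. Dense solve, fine for short series.

def hp_filter(y, lam=1600.0):
    n = len(y)
    # Second-difference matrix D, shape (n-2, n):
    # (D @ tau)[i] = tau[i] - 2*tau[i+1] + tau[i+2]
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend

# Synthetic TAC-like signal: two smooth "episodes" plus measurement noise.
t = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
signal = np.exp(-(t - 3.0) ** 2) + 0.8 * np.exp(-(t - 7.0) ** 2 / 0.5)
y = signal + 0.05 * rng.standard_normal(t.size)

trend, cycle = hp_filter(y, lam=100.0)
print(f"residual std: {cycle.std():.3f}")  # trend absorbs the smooth bumps
```

Episode boundaries can then be read off the trend (e.g. where it rises above a baseline threshold), which is far more robust than thresholding the noisy raw series.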
Kostylev, Maxim; Wilson, David
2014-01-01
Lignocellulosic biomass is a potential source of renewable, low-carbon-footprint liquid fuels. Biomass recalcitrance and enzyme cost are key challenges associated with the large-scale production of cellulosic fuel. Kinetic modeling of enzymatic cellulose digestion has been complicated by the heterogeneous nature of the substrate and by the fact that a true steady state cannot be attained. We present a two-parameter kinetic model based on the Michaelis-Menten scheme (Michaelis L and Menten ML. (1913) Biochem Z 49:333–369), but with a time-dependent activity coefficient analogous to the fractal-like kinetics formulated by Kopelman (Kopelman R. (1988) Science 241:1620–1626). We provide a mathematical derivation and experimental support to show that one of the parameters is a total activity coefficient and the other is an intrinsic constant that reflects the ability of the cellulases to overcome substrate recalcitrance. The model is applicable to individual cellulases and their mixtures at low-to-medium enzyme loads. Using biomass-degrading enzymes from the cellulolytic bacterium Thermobifida fusca, we show that the model can be used for mechanistic studies of enzymatic cellulose digestion. We also demonstrate that it applies to the crude supernatant of the widely studied cellulolytic fungus Trichoderma reesei and can thus be used to compare cellulases from different organisms. The two parameters may serve a similar role to Vmax, KM, and kcat in classical kinetics. A similar approach may be applicable to other enzymes with heterogeneous substrates where a steady state is not achievable. PMID:23837567
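The flavor of fractal-like kinetics can be sketched with a rate "constant" that decays in time, k(t) = k1 * t**(-h) with 0 <= h < 1, in the spirit of Kopelman (1988). The code below plugs this into a simplified pseudo-first-order digestion law; it is a stand-in for, not a reproduction of, the paper's two-parameter model, and k1, h and S0 are illustrative numbers.

```python
# Fractal-like kinetics sketch: dP/dt = k(t) * (S0 - P) with
# k(t) = k1 * t**(-h). h = 0 recovers ordinary first-order kinetics;
# h > 0 makes the apparent rate fall as digestion proceeds, mimicking
# increasing substrate recalcitrance. Forward-Euler integration.

def digest(s0, k1, h, t_end, dt=1e-3):
    p, t = 0.0, dt  # start just after t = 0 to avoid t**(-h) diverging
    while t < t_end:
        k = k1 * t ** (-h)
        p += dt * k * (s0 - p)   # Euler step of the digestion law
        t += dt
    return p

S0 = 1.0  # initial substrate (normalized)
classical = digest(S0, k1=0.5, h=0.0, t_end=5.0)   # ordinary kinetics
fractal = digest(S0, k1=0.5, h=0.5, t_end=5.0)     # fractal-like slowdown

print(f"conversion, classical: {classical:.3f}; fractal-like: {fractal:.3f}")
```

Fitting (k1, h) to conversion-versus-time data is then a two-parameter regression, which is the practical appeal of models of this form.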
Harbaugh, Arien W.
2011-01-01
The MFI2005 data-input (entry) program was developed for use with the U.S. Geological Survey modular three-dimensional finite-difference groundwater model, MODFLOW-2005. MFI2005 runs on personal computers and is designed to be easy to use; data are entered interactively through a series of display screens. MFI2005 supports parameter estimation using the UCODE_2005 program. Data for MODPATH, a particle-tracking program for use with MODFLOW-2005, also can be entered using MFI2005. MFI2005 can be used in conjunction with other data-input programs so that the different parts of a model dataset can be entered by using the most suitable program.
Coffield, T; Patricia Lee, P
2007-01-31
The purpose of this report is to update the parameters utilized in Human Health Exposure calculations and the Bioaccumulation Transfer Factors utilized at SRS for Performance Assessment modeling. The update incorporates more recently issued information, validates information currently in use, and corrects minor inconsistencies between modeling efforts performed in contiguous areas of the heavily industrialized central site, known as the General Separations Area (GSA). The SRS parameters were compared to those of a number of other DOE facilities and to generic national/global references to establish the relevance of the parameters selected and/or verify the regional differences of the southeast USA. The parameters selected were specifically chosen to be expected values, with an identified range for each, rather than the overly conservative parameter specifications used for estimating an annual dose to the maximally exposed individual (MEI). The end use is to establish a standardized, up-to-date source for these parameters and to maintain it by reviewing future national references to evaluate the need for changes as new information is released. These reviews are to be added to this document by revision.
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate prediction of various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
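The Nelder-Mead simplex search used above is easy to sketch from scratch: reflect the worst vertex through the centroid, then expand, contract, or shrink. The toy two-parameter objective below stands in for the fiber's modal-field functional; it is not the paper's actual optimization problem.

```python
# Compact Nelder-Mead direct search (reflection / expansion / inside
# contraction / shrink) for a function of n parameters, derivative-free.

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=500):
    n = len(x0)
    # Initial simplex: x0 plus n points offset along each coordinate axis.
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # Centroid of all vertices except the worst.
        cen = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [cen[i] + (cen[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):
            expa = [cen[i] + 2.0 * (cen[i] - worst[i]) for i in range(n)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            cont = [cen[i] + 0.5 * (worst[i] - cen[i]) for i in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:  # shrink every vertex toward the best one
                simplex = [best] + [
                    [(best[i] + p[i]) / 2.0 for i in range(n)] for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Toy objective with minimum at (1, 2), standing in for the spot-size error.
def objective(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2

x_min = nelder_mead(objective, [0.0, 0.0])
print([round(v, 4) for v in x_min])  # close to [1.0, 2.0]
```

Because only function values are compared, the method tolerates the noisy or discontinuous objectives the abstract mentions for the core parameter U.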
NASA Astrophysics Data System (ADS)
Ryu, Jaiyoung; Hu, Xiao; Shadden, Shawn C.
2014-11-01
The cerebral circulation is unique in its ability to maintain blood flow to the brain under widely varying physiologic conditions. Incorporating this autoregulatory response is critical to cerebral blood flow modeling, as well as to investigations of pathological conditions. We discuss a one-dimensional nonlinear model of blood flow in the cerebral arteries that includes coupling of autoregulatory lumped parameter networks. The model is tested on its ability to reproduce a common clinical test of autoregulatory function, the carotid artery compression test. The change in flow velocity at the middle cerebral artery (MCA) during carotid compression and release demonstrated strong agreement with published measurements. The model is then used to investigate vasospasm of the MCA, a common clinical concern following subarachnoid hemorrhage. Vasospasm was modeled by prescribing vessel area reduction in the middle portion of the MCA. Our model showed similar increases in velocity for moderate vasospasm; however, for severe vasospasm (~90% area reduction), the blood flow velocity decreased due to blood flow rerouting. This demonstrates a potentially important phenomenon which, if not properly anticipated, could lead to false-negative decisions on clinical vasospasm.
NASA Astrophysics Data System (ADS)
Di Giovanni, P.; Ahearn, T. S.; Semple, S. I.; Azlan, C. A.; Lloyd, W. K. C.; Gilbert, F. J.; Redpath, T. W.
2011-03-01
The objective of this work was to propose and demonstrate a novel technique for the assessment of tumour pharmacokinetic parameters together with a regionally estimated vascular input function. A breast cancer patient T2*-weighted dynamic contrast-enhanced MRI (DCE-MRI) dataset acquired at high temporal resolution during first-pass bolus perfusion was used to test the technique. Extraction of the lesion volume transfer constant Ktrans together with the intravascular plasma volume fraction vp was achieved by optimizing a capillary input function with a measure of cardiac output using the principle of intravascular indicator dilution theory. For a region of interest drawn within the breast lesion, a vp of 0.16 and a Ktrans of 0.70 min-1 were estimated. Although the value of vp was higher than expected, the estimated Ktrans was in accordance with literature values. In conclusion, the technique proposed here has the main advantage of allowing the estimation of breast tumour pharmacokinetic parameters from first-pass perfusion T2*-weighted DCE-MRI data without the need to measure an arterial input function. The technique may also be applicable to T1-weighted DCE-MRI data.
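The quantities Ktrans and vp enter the standard extended-Tofts forward model, in which tissue concentration is a plasma term plus a convolution of the input function with an exponential. A generic forward simulation is sketched below; the gamma-variate input function and the value of ve are assumptions, and this is not the abstract's regional input-function technique.

```python
import math

# Extended-Tofts forward model, evaluated by discrete convolution:
#   Ct(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(s) * exp(-kep*(t-s)) ds
# with kep = Ktrans/ve.

KTRANS = 0.70 / 60.0  # volume transfer constant [1/s] (0.70 min^-1, per the abstract)
VP = 0.16             # plasma volume fraction (per the abstract)
VE = 0.30             # extravascular extracellular fraction (assumed)
KEP = KTRANS / VE

def cp(t):
    # Gamma-variate first-pass bolus (illustrative input function, arb. units).
    return (t / 6.0) ** 3 * math.exp(-(t - 6.0) / 6.0) if t > 0 else 0.0

DT = 0.1
times = [k * DT for k in range(600)]  # 60 s sampled at 10 Hz

ct = []
for t in times:
    conv = sum(cp(s) * math.exp(-KEP * (t - s)) * DT for s in times if s <= t)
    ct.append(VP * cp(t) + KTRANS * conv)

print(f"peak tissue concentration (arb. units): {max(ct):.3f}")
```

Parameter estimation inverts this relation: given measured Ct(t) and a (measured or, as in this work, regionally estimated) input function, Ktrans and vp are fitted by nonlinear least squares.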
NASA Astrophysics Data System (ADS)
Fuchs, Sven; Bording, Thue S.; Balling, Niels
2015-04-01
Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties is a prerequisite for the parameterisation of boundary conditions and layer properties. In contrast to hydrogeological groundwater models, where parameterisation of the major rock property (i.e. hydraulic conductivity) generally considers lateral variations within geological layers, parameterisation of thermal models (in particular regarding thermal conductivity, but also radiogenic heat production and specific heat capacity) is in most cases conducted using constant parameters for each modelled layer. Moreover, initial values for such constant thermal parameters are normally obtained from rare core measurements and/or literature values, which raises questions about their representativeness. A few studies have considered lithological composition or well-log information, but still keeping the layer values constant. In the present thermal-modelling scenario analysis, we demonstrate how the use of different parameter input types (from literature, well logs and lithology) and parameter input styles (constant or laterally varying layer values) affects the temperature prediction in sedimentary basins. For this purpose, rock thermal properties are deduced from standard petrophysical well logs and lithological descriptions for several wells in a project area. Statistical values of thermal properties (mean, standard deviation, moments, etc.) are calculated at each borehole location for each geological formation and, moreover, for the entire dataset. Our case study is located at the Danish-German border region (model dimension: 135 x 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (i
Faulkner, William B; Shaw, Bryan W; Grosch, Tom
2008-10-01
As of December 2006, the American Meteorological Society/U.S. Environmental Protection Agency (EPA) Regulatory Model with Plume Rise Model Enhancements (AERMOD-PRIME; hereafter AERMOD) replaced the Industrial Source Complex Short Term Version 3 (ISCST3) as the EPA-preferred regulatory model. The change from ISCST3 to AERMOD will affect Prevention of Significant Deterioration (PSD) increment consumption as well as permit compliance in states where regulatory agencies limit property line concentrations using modeling analysis. Because of differences in model formulation and the treatment of terrain features, one cannot predict a priori whether ISCST3 or AERMOD will predict higher or lower pollutant concentrations downwind of a source. The objectives of this paper were to determine the sensitivity of AERMOD to various inputs and compare the highest downwind concentrations from a ground-level area source (GLAS) predicted by AERMOD to those predicted by ISCST3. Concentrations predicted using ISCST3 were sensitive to changes in wind speed, temperature, solar radiation (as it affects stability class), and mixing heights below 160 m. Surface roughness also affected downwind concentrations predicted by ISCST3. AERMOD was sensitive to changes in albedo, surface roughness, wind speed, temperature, and cloud cover. Bowen ratio did not affect the results from AERMOD. These results demonstrate AERMOD's sensitivity to small changes in wind speed and surface roughness. When AERMOD is used to determine property line concentrations, small changes in these variables may affect the distance within which concentration limits are exceeded by several hundred meters. PMID:18939775
Baker, Christopher M.; Lopes, Pedro E. M.; Zhu, Xiao; Roux, Benoît; MacKerell, Alexander D.
2010-01-01
Lennard-Jones (LJ) parameters for a variety of model compounds have previously been optimized within the CHARMM Drude polarizable force field to accurately reproduce pure liquid phase thermodynamic properties as well as additional target data. While the polarizable force field resulting from this optimization procedure has been shown to satisfactorily reproduce a wide range of experimental reference data across numerous series of small molecules, a slight but systematic overestimate of the hydration free energies has also been noted. Here, the reproduction of experimental hydration free energies is greatly improved by the introduction of pair-specific LJ parameters between solute heavy atoms and water oxygen atoms that override the standard LJ parameters obtained from combining rules. The changes are small, and a systematic protocol is developed for the optimization of pair-specific LJ parameters and applied to alkanes, alcohols and ethers. The resulting parameters not only yield hydration free energies in good agreement with experimental values, but also provide a framework upon which other pair-specific LJ parameters can be added as new compounds are parametrized within the CHARMM Drude polarizable force field. Detailed analysis of the contributions to the hydration free energies reveals that the dispersion interaction is the main source of the systematic errors in the hydration free energies. This suggests that the systematic error may result from problems with the LJ combining rules; this insight is combined with analysis of the pair-specific LJ parameters obtained in this work to identify a preliminary improved combining rule. PMID:20401166
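The mechanics of "pair-specific parameters override combining rules" can be sketched in a few lines. The Lorentz-Berthelot rules below (geometric mean for epsilon, arithmetic mean for Rmin) are the standard textbook choice; the atom types and all numerical values are made up for illustration and are not CHARMM Drude parameters.

```python
import math

# Lorentz-Berthelot combining rules with an optional pair-specific
# override, in the spirit of an NBFIX-style correction: if a pair has
# explicit parameters, those win; otherwise combine the per-type values.

def combine(eps_i, rmin_i, eps_j, rmin_j):
    # Geometric mean for well depth, arithmetic mean for Rmin.
    return math.sqrt(eps_i * eps_j), 0.5 * (rmin_i + rmin_j)

# Per-atom-type (epsilon [kcal/mol], Rmin/2 [A]) -- illustrative numbers.
PARAMS = {"CT": (0.078, 2.04), "OW": (0.152, 1.77)}
# Pair-specific LJ parameters (epsilon, Rmin) overriding the rules.
PAIR_OVERRIDES = {("CT", "OW"): (0.090, 3.85)}

def pair_lj(type_i, type_j):
    key = tuple(sorted((type_i, type_j)))
    if key in PAIR_OVERRIDES:
        return PAIR_OVERRIDES[key]          # override beats combining rules
    eps_i, half_r_i = PARAMS[type_i]
    eps_j, half_r_j = PARAMS[type_j]
    return combine(eps_i, 2 * half_r_i, eps_j, 2 * half_r_j)

print(pair_lj("CT", "OW"))  # pair-specific override applies
print(pair_lj("CT", "CT"))  # combining rules apply
```

Keeping the override table separate from the per-type table is what lets new pairs be corrected one at a time without re-optimizing the whole force field.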
NASA Astrophysics Data System (ADS)
Cabassa-Miranda, E.; Garnett Marques Brum, C.
2013-12-01
We present a statistical study of the response of the noontime F2-peak parameters (foF2 and hmF2) to variations in solar energy input, based on digisonde data and EUV-UV solar emissions registered by the SOHO satellite, for geomagnetically quiet-to-normal conditions. For this, we selected digisonde data from fourteen different stations spread along the American sector (ten of them located above and four below the equator). These records were collected from 2000 to 2012 and encompass the last unusual super minimum period.
NASA Technical Reports Server (NTRS)
Boothroyd, Arnold I.; Sackmann, I.-Juliana
2001-01-01
Helioseismic frequency observations provide an extremely accurate window into the solar interior; frequencies from the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) spacecraft enable the adiabatic sound speed and adiabatic index to be inferred with an accuracy of a few parts in 10(exp 4) and the density with an accuracy of a few parts in 10(exp 3). This has become a serious challenge to theoretical models of the Sun. Therefore, we have undertaken a self-consistent, systematic study of the sources of uncertainties in the standard solar models. We found that the largest effect on the interior structure arises from the observational uncertainties in the photospheric abundances of the elements, which affect the sound speed profile at the level of 3 parts in 10(exp 3). The estimated 4% uncertainty in the OPAL opacities could lead to effects of 1 part in 10(exp 3); the approximately 5% uncertainty in the basic pp nuclear reaction rate would have a similar effect, as would uncertainties of approximately 15% in the diffusion constants for the gravitational settling of helium. The approximately 50% uncertainties in diffusion constants for the heavier elements would have nearly as large an effect. Different observational methods for determining the solar radius yield results differing by as much as 7 parts in 10(exp 4); we found that this leads to uncertainties of a few parts in 10(exp 3) in the sound speed in the solar convective envelope, but has negligible effect on the interior. Our reference standard solar model yielded a convective envelope position of 0.7135 solar radius, in excellent agreement with the observed value of 0.713 +/- 0.001 solar radius, and was significantly affected only by Z/X, the pp rate, and the uncertainties in helium diffusion constants. Our reference model also yielded an envelope helium abundance of 0.2424, in good agreement with the approximate range of 0.24 to 0.25 inferred from helioseismic observations; only
NASA Astrophysics Data System (ADS)
Katiyatiya, C. L. F.; Muchenje, V.; Mushunje, A.
2015-06-01
Seasonal variations in hair length, tick loads, cortisol levels, haematological parameters (HP) and temperature humidity index (THI) in Nguni cows of different colours raised in two low-input farms and a commercial stud were determined. The sites were chosen based on their production systems, climatic characteristics and geographical locations. Zazulwana and Komga are low-input, humid-coastal areas, while Honeydale is a high-input, dry-inland Nguni stud farm. A total of 103 cows, grouped according to parity, location and coat colour, were used in the study. The effects of location, coat colour, hair length and season were used to determine tick loads on different body parts, cortisol levels and HP in blood from Nguni cows. The highest tick loads were recorded under the tail and the lowest on the head of each of the animals (P < 0.05). Zazulwana cows recorded the highest tick loads under the tails of all the cows used in the study from the three farms (P < 0.05). High tick loads were recorded for cows with long hair. Hair lengths were longest during the winter season in the coastal areas of Zazulwana and Honeydale (P < 0.05). White and brown-white patched cows had significantly longer (P < 0.05) hair strands than those having a combination of red, black and white colour. Cortisol and THI were significantly lower (P < 0.05) in the summer season. Red blood cells, haemoglobin, haematocrit, mean cell volumes, white blood cells, neutrophils, lymphocytes, eosinophils and basophils differed significantly (P < 0.05), some in association with age across all seasons, and correlated with THI. It was concluded that location, coat colour and season had effects on hair length, cortisol levels, THI, HP, tick loads on different body parts and heat stress in Nguni cows.
NASA Technical Reports Server (NTRS)
Reddy C. J.
1998-01-01
Model Based Parameter Estimation (MBPE) is presented in conjunction with the hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique for fast computation of the input characteristics of cavity-backed aperture antennas over a frequency range. The hybrid FEM/MoM technique is used to form an integro-partial-differential equation to compute the electric field distribution of a cavity-backed aperture antenna. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency derivatives of the integro-partial-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is obtained over a frequency range. Using the electric field at different frequencies, the input characteristics of the antenna are obtained over a wide frequency range. Numerical results for an open coaxial line, a probe-fed coaxial cavity and cavity-backed microstrip patch antennas are presented. Good agreement between MBPE and solutions computed at individual frequencies is observed.
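The core MBPE idea, building a rational function from derivative information at one point and then evaluating it across a band, is exactly a Pade approximant. The sketch below constructs a [2/2] Pade approximant of exp(x) from its Taylor coefficients and checks the wideband error; exp(x) is a stand-in for the frequency-swept field solution, not the antenna problem itself.

```python
import math
import numpy as np

# Pade [L/M] approximant N(x)/D(x) from Taylor coefficients c[0..L+M]:
# with D's constant term fixed to 1, the denominator coefficients solve
#   sum_{j=0..M} b_j * c_{k-j} = 0   for k = L+1 .. L+M,
# and the numerator coefficients follow by convolution.

def pade(c, L, M):
    A = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -np.array([c[L + i] for i in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))   # denominator
    a = [sum(b[j] * c[k - j] for j in range(min(k, M) + 1))  # numerator
         for k in range(L + 1)]
    return np.array(a), b

taylor = [1.0 / math.factorial(k) for k in range(5)]  # exp(x) around x = 0
a, b = pade(taylor, 2, 2)

def rational(x):
    # polyval wants highest-degree-first coefficients.
    return np.polyval(a[::-1], x) / np.polyval(b[::-1], x)

# One-point derivative data yields a good wideband approximation.
worst = max(abs(rational(x) - math.exp(x)) for x in np.linspace(-1.0, 1.0, 41))
print(f"max |error| on [-1, 1]: {worst:.2e}")
```

In MBPE the Taylor coefficients come from frequency derivatives of the FEM/MoM system at a single frequency, so one expensive solve replaces a dense frequency sweep.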
NASA Astrophysics Data System (ADS)
Vergara, H. J.; Kirstetter, P.; Hong, Y.; Gourley, J. J.; Wang, X.
2013-12-01
The Ensemble Kalman Filter (EnKF) is arguably the assimilation approach that has found the widest application in hydrologic modeling. Its relatively easy implementation and computational efficiency make it an attractive method for research and operational purposes. However, the scientific literature featuring this approach lacks guidance on how the errors in the forecast need to be characterized so as to obtain the required corrections from the assimilation process. Moreover, several studies have indicated that the performance of the EnKF is 'sub-optimal' when assimilating certain hydrologic observations. Likewise, some authors have suggested that the underlying assumptions of the Kalman Filter and its dependence on linear dynamics make the EnKF unsuitable for hydrologic modeling. Such assertions are often based on the ineffectiveness and poor robustness of EnKF implementations resulting from restrictive specification of error characteristics and the absence of a-priori information on error magnitudes. Therefore, understanding the capabilities and limitations of the EnKF for improving hydrologic forecasts requires studying its sensitivity to the manner in which errors in the hydrologic modeling system are represented through ensembles. This study presents a methodology that explores various uncertainty representation configurations to characterize the errors in the hydrologic forecasts in a data assimilation context. The uncertainty in rainfall inputs is represented through a Generalized Additive Model for Location, Scale, and Shape (GAMLSS), which provides information about second-order statistics of quantitative precipitation estimate (QPE) errors. The uncertainty in model parameters is described by adding perturbations based on parameter covariance information. The method allows for the identification of rainfall and parameter perturbation combinations for which the performance of the EnKF is 'optimal' given a set of objective functions. In this process, information about
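For reference, the analysis step the abstract evaluates is the textbook stochastic (perturbed-observation) EnKF update. The sketch below applies it to a toy three-component "storage" state with one observed component; all numbers are illustrative, and nothing here reproduces the study's hydrologic model or GAMLSS error model.

```python
import numpy as np

# Stochastic EnKF analysis step: estimate the forecast covariance from the
# ensemble, form the Kalman gain, and update each member against a
# perturbed copy of the observation.

rng = np.random.default_rng(42)

N_ENS = 200
x_true = np.array([5.0, 2.0, 1.0])
H = np.array([[1.0, 0.0, 0.0]])   # observe only the first state component
R = np.array([[0.25]])            # observation-error variance

# Forecast ensemble: spread around a deliberately biased prior mean.
ensemble = np.array([4.0, 1.5, 0.8]) + rng.standard_normal((N_ENS, 3))

obs = H @ x_true + rng.standard_normal(1) * np.sqrt(R[0, 0])

# Kalman gain from the sample covariance: K = P H' (H P H' + R)^-1
P = np.cov(ensemble.T)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Perturbed-observation update of every member (keeps analysis spread).
for i in range(N_ENS):
    y_i = obs + rng.standard_normal(1) * np.sqrt(R[0, 0])
    ensemble[i] += (K @ (y_i - H @ ensemble[i])).ravel()

print(f"analysis mean: {ensemble.mean(axis=0).round(2)}")
```

The sensitivity questions the study raises all enter through the same two objects: the forecast spread (here, the prior perturbations) and R; how those are specified determines how much correction the gain K applies.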
Bakker, Chris J G; de Leeuw, Hendrik; van de Maat, Gerrit H; van Gorp, Jetse S; Bouwman, Job G; Seevinck, Peter R
2013-01-01
Lack of spatial accuracy is a recognized problem in magnetic resonance imaging (MRI) which severely detracts from its value as a stand-alone modality for applications that put high demands on geometric fidelity, such as radiotherapy treatment planning and stereotactic neurosurgery. In this paper, we illustrate the potential and discuss the limitations of spectroscopic imaging as a tool for generating purely phase-encoded MR images and parameter maps that preserve the geometry of an object and allow localization of object features in world coordinates. Experiments were done on a clinical system with standard facilities for imaging and spectroscopy. Images were acquired with a regular spin echo sequence and a corresponding spectroscopic imaging sequence. In the latter, successive samples of the acquired echo were used for the reconstruction of a series of evenly spaced images in the time and frequency domain. Experiments were done with a spatial linearity phantom and a series of test objects representing a wide range of susceptibility- and chemical-shift-induced off-resonance conditions. In contrast to regular spin echo imaging, spectroscopic imaging was shown to be immune to off-resonance effects, such as those caused by field inhomogeneity, susceptibility, chemical shift, f0 offset and field drift, and to yield geometrically accurate images and parameter maps that allowed object structures to be localized in world coordinates. From these illustrative examples and a discussion of the limitations of purely phase-encoded imaging techniques, it is concluded that spectroscopic imaging offers a fundamental solution to the geometric deficiencies of MRI, which may evolve toward a practical solution when full advantage is taken of current developments with regard to scan time reduction. This perspective is backed up by a demonstration of the significant scan time reduction that may be achieved by the use of compressed sensing for a simple phantom. PMID:22898694
Sandala, Gregory M.; Hopmann, Kathrin H.; Ghosh, Abhik
2011-01-01
structure. Significant improvements to the isomer shift calibrations are obtained for B3LYP and B3LYP* when geometries obtained with the OLYP functional are used. In addition, greatly improved performance of these functionals is found if the complete test set is grouped separately into Fe–NO and Fe–S complexes. Calibration fits including only Fe–NO complexes are found to be excellent, while those containing the non-nitrosyl Fe–S complexes alone are found to demonstrate less accurate correlations. Similar trends are also found with OLYP, OPBE, PW91, and BP86. Correlations between experimental and calculated QSs were also investigated. Generally, universal and separate Fe–NO and Fe–S fit parameters obtained to determine QSs are found to be of good to excellent quality for every density functional examined, especially if [Fe4(NO)4(μ3-S)4]− is removed from the test set. PMID:22039359
NASA Astrophysics Data System (ADS)
Hsieh, H. P.; Sung, K. B.; Hsu, F. W.
2014-05-01
Diffuse reflectance spectroscopy has been applied as a non-invasive method to measure tissue optical properties, which are associated with anatomical information. The algorithm widely used to extract optical parameters from reflectance spectra is the regression method, which is time-consuming and frequently converges to local optima. In this study, the effects of parameter changes on spectra are analyzed for different fiber geometries, source-detector separations and wavelengths. At the end of this paper, a new fitting algorithm is proposed based on the parameter features found. The new algorithm is expected to enhance the accuracy of the extracted parameters and save 75% of the processing time.
Input design for identification of aircraft stability and control derivatives
NASA Technical Reports Server (NTRS)
Gupta, N. K.; Hall, W. E., Jr.
1975-01-01
An approach for designing inputs to identify stability and control derivatives from flight test data is presented. This approach is based on finding inputs which provide the maximum possible accuracy of derivative estimates. Two techniques of input specification are implemented for this objective - a time domain technique and a frequency domain technique. The time domain technique gives the control input time history and can be used for any allowable duration of test maneuver, including those where data lengths can only be of short duration. The frequency domain technique specifies the input frequency spectrum, and is best applied for tests where extended data lengths, much longer than the time constants of the modes of interest, are possible. These techniques are used to design inputs to identify parameters in longitudinal and lateral linear models of conventional aircraft. The constraints of aircraft response limits, such as on structural loads, are realized indirectly through a total energy constraint on the input. Tests with simulated data and theoretical predictions show that the new approaches give input signals which can provide more accurate parameter estimates than can conventional inputs of the same total energy. Results obtained indicate that the approach has been brought to the point where it should be used on flight tests for further evaluation.
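The notion of "maximum possible accuracy" can be made concrete through the Fisher information matrix, whose inverse (the Cramér-Rao bound) lower-bounds the parameter error covariance. The sketch below compares two equal-energy inputs on a hypothetical scalar discrete-time model; the model, noise level and input shapes are placeholders, not the aircraft models of the paper:

```python
import numpy as np

def fisher_information(u, a=0.9, b=1.0, noise_var=0.01):
    """Fisher information matrix for (a, b) in the scalar model
    x[k+1] = a*x[k] + b*u[k], observed as y[k] = x[k] + white noise.
    Sensitivities propagate as s_a[k+1] = x[k] + a*s_a[k] and
    s_b[k+1] = u[k] + a*s_b[k]."""
    x, sa, sb = 0.0, 0.0, 0.0
    info = np.zeros((2, 2))
    for uk in u:
        x, sa, sb = a * x + b * uk, x + a * sa, uk + a * sb
        grad = np.array([sa, sb])
        info += np.outer(grad, grad) / noise_var
    return info

n = 100
t = np.arange(n)
doublet = np.where(t < 10, 1.0, np.where(t < 20, -1.0, 0.0))
sweep = np.sin(0.002 * t**2)
sweep *= np.linalg.norm(doublet) / np.linalg.norm(sweep)  # equal input energy
for name, u in (("doublet", doublet), ("sweep", sweep)):
    crlb = np.linalg.inv(fisher_information(u))            # Cramer-Rao bound
    print(name, "std lower bounds:", np.sqrt(np.diag(crlb)))
```

An input-design procedure of the kind described would search over candidate inputs of fixed energy to minimize a scalar function of `crlb` (e.g. its trace or determinant).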
NASA Astrophysics Data System (ADS)
Bruntt, H.
2009-10-01
Context: The CoRoT satellite has provided high-quality light curves of several solar-like stars. Analysis of these light curves provides oscillation frequencies that make it possible to probe the interior of the stars. However, additional constraints on the fundamental parameters of the stars are important for the theoretical modelling to be successful. Aims: We estimate the fundamental parameters (mass, radius, and luminosity) of the first four solar-like targets to be observed in the asteroseismic field. In addition, we determine their effective temperature, metallicity, and detailed abundance patterns. Methods: To constrain the stellar mass, radius and age we used the shotgun software, which compares the location of the stars in the Hertzsprung-Russell diagram with theoretical evolution models. This method takes the uncertainties of the observed parameters into account, including the large separation determined from the solar-like oscillations. We determined the effective temperatures and abundance patterns in the stars from the analysis of high-resolution spectra obtained with the HARPS, NARVAL, ELODIE and FEROS spectrographs. Results: We determined the mass, radius, and luminosity of the four CoRoT targets to within 5-10%, 2-4% and 5-13%, respectively. The quality of the stellar spectra determines how well we can constrain the effective temperature. For the two best spectra we get 1-σ uncertainties below 60 K, and 100-150 K for the other two. The uncertainty on the surface gravity is less than 0.08 dex for three stars, while it is 0.15 dex for HD 181906. The reason for the larger uncertainty is that the spectrum has two components with a luminosity ratio of L_p/L_s = 0.50 ± 0.15. While Hipparcos astrometric data strongly suggest it is a binary star, we find evidence that the fainter star may be a background star, since it is less luminous but hotter.
Shi, Deheng; Liu, Qionglan; Sun, Jinfeng; Zhu, Zunlue
2014-03-25
The potential energy curves (PECs) of 28 Ω states generated from the 12 Λ-S states (X(4)Σ(-), 1(2)Π, 1(2)Σ(-), 1(2)Δ, 1(2)Σ(+), 2(2)Π, A(4)Π, B(4)Σ(-), 3(2)Π, 1(6)Σ(-), 2(2)Σ(-) and 1(6)Π) of the BN(+) cation are studied for the first time for internuclear separations from about 0.1 to 1.0 nm using an ab initio quantum chemical method. All the Λ-S states correlate to the first four dissociation channels. The 1(6)Σ(-), 3(2)Π and A(4)Π states are found to be inverted. The 1(2)Σ(+), 2(2)Π, 3(2)Π and 2(2)Σ(-) states are found to possess double wells. The PECs are calculated by the complete active space self-consistent field method, followed by the internally contracted multireference configuration interaction approach with the Davidson correction. Core-valence correlation correction is included with a cc-pCV5Z basis set. Scalar relativistic correction is calculated by the third-order Douglas-Kroll Hamiltonian approximation at the level of a cc-pV5Z basis set. The convergent behavior of the present calculations is discussed with respect to the basis set and level of theory. The spin-orbit coupling is accounted for by the state interaction approach with the Breit-Pauli Hamiltonian using the all-electron cc-pCV5Z basis set. All the PECs are extrapolated to the complete basis set limit. The spectroscopic parameters are obtained, and the vibrational properties of the 1(2)Σ(+), 2(2)Π, 3(2)Π and 2(2)Σ(-) states are evaluated. Analyses demonstrate that the spectroscopic parameters reported here can be expected to be reliable predictions. The conclusion is reached that the effect of spin-orbit coupling on the spectroscopic parameters is insignificant for almost all the Λ-S states involved in the present paper. PMID:24334021
Shi, Deheng; Li, Peiling; Sun, Jinfeng; Zhu, Zunlue
2014-01-01
The potential energy curves (PECs) of 28 Ω states generated from 9 Λ-S states (X(2)Π, 1(4)Π, 1(6)Π, 1(2)Σ(+), 1(4)Σ(+), 1(6)Σ(+), 1(4)Σ(-), 2(4)Π and 1(4)Δ) are studied for the first time using an ab initio quantum chemical method. All 9 Λ-S states correlate to the first two dissociation limits, N((4)Su)+Se((3)Pg) and N((4)Su)+Se((3)Dg), of the NSe radical. Of these Λ-S states, the 1(6)Σ(+), 1(4)Σ(+), 1(6)Π, 2(4)Π and 1(4)Δ are found to be rather weakly bound states. The 1(2)Σ(+) state is found to be unstable and to possess double wells, and the 1(6)Σ(+), 1(4)Σ(+), 1(4)Π and 1(6)Π are found to be inverted when the SO coupling is included. The PEC calculations are made by the complete active space self-consistent field method, followed by the internally contracted multireference configuration interaction approach with the Davidson modification. The spin-orbit coupling is accounted for by the state interaction approach with the Breit-Pauli Hamiltonian. The convergence of the present calculations is discussed with respect to the basis set and the level of theory. Core-valence correlation corrections are included with a cc-pCVTZ basis set. Scalar relativistic corrections are calculated by the third-order Douglas-Kroll Hamiltonian approximation at the level of a cc-pV5Z basis set. All the PECs are extrapolated to the complete basis set limit. The variation with internuclear separation of the spin-orbit coupling constants is briefly discussed for some Λ-S states with one shallow well on each PEC. The spectroscopic parameters of the 9 Λ-S and 28 Ω states are determined by fitting the first ten vibrational levels whenever available, which are calculated by solving the rovibrational Schrödinger equation with Numerov's method. The splitting energy in the X(2)Π Λ-S state is determined to be about 864.92 cm(-1), which agrees well with the measured value of 891.80 cm(-1). Moreover, other spectroscopic parameters of the Λ-S and Ω states involved here are
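The vibrational levels referred to above are obtained with Numerov's method. A compact matrix-Numerov variant, checked here on the analytically solvable harmonic oscillator in atomic units, illustrates the idea; this is a generic sketch, not the authors' code or the NSe potential:

```python
import numpy as np

def numerov_levels(V, x, mass=1.0, n_levels=3):
    """Lowest levels of -psi''/(2m) + V(x) psi = E psi on a uniform grid,
    via the matrix Numerov method (atomic units, Dirichlet boundaries)."""
    d = x[1] - x[0]
    n = len(x)
    I0, Im, Ip = np.eye(n), np.eye(n, k=-1), np.eye(n, k=1)
    A = (Im - 2 * I0 + Ip) / d**2          # Numerov second-difference matrix
    B = (Im + 10 * I0 + Ip) / 12           # Numerov weighting matrix
    H = -np.linalg.solve(B, A) / (2 * mass) + np.diag(V)
    return np.sort(np.linalg.eigvals(H).real)[:n_levels]

x = np.linspace(-10.0, 10.0, 500)
levels = numerov_levels(0.5 * x**2, x)     # harmonic test: E_n = n + 1/2
print(levels)  # ≈ [0.5, 1.5, 2.5]
```

For a computed PEC, `V` would be the interpolated potential (shifted by the reduced mass of the diatomic), and the fitted level spacings yield the spectroscopic constants.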
Badran, Yasser Ali; Abdelaziz, Alsayed Saad; Shehab, Mohamed Ahmed; Mohamed, Hazem Abdelsabour Dief; Emara, Absel-Aziz Ali; Elnabtity, Ali Mohamed Ali; Ghanem, Maged Mohammed; ELHelaly, Hesham Abdel Azim
2016-01-01
Objective: The objective was to determine the predictors of success of shock wave lithotripsy (SWL), using a combination of computed-tomography-based metric parameters to improve the treatment plan. Patients and Methods: 180 consecutive patients with symptomatic upper urinary tract calculi of 20 mm or less who underwent extracorporeal SWL were enrolled in our study and divided into two main groups according to stone size: Group A (92 patients with stones ≤10 mm) and Group B (88 patients with stones >10 mm). Both groups were evaluated according to the skin-to-stone distance (SSD) and Hounsfield units (≤500, 500-1000 and >1000 HU). Results: Both groups were comparable in baseline data and stone characteristics. About 92.3% of Group A were rendered stone-free, whereas 77.2% were stone-free in Group B (P = 0.001). Furthermore, in both groups the SWL success rate was significantly higher for stones with lower attenuation (<830 HU) than for stones >830 HU (P < 0.034). SSD also showed statistically significant differences in SWL outcome (P < 0.02). Considering the three parameters (stone size, stone attenuation value, and SSD) simultaneously, we found that the stone-free rate (SFR) was 100% for stones with an attenuation value <830 HU, whether <10 mm or >10 mm, but the total number of SWL sessions and shock waves required for the larger stone group was higher than in the smaller group (P < 0.01). Furthermore, SFR was 83.3% and 37.5% for stones <10 mm with mean HU >830 and SSD 90 mm and SSD >120 mm, respectively. On the other hand, SFR was 52.6% and 28.57% for stones >10 mm with mean HU >830 and SSD <90 mm and SSD >120 mm, respectively. Conclusion: Stone size, stone density (HU), and SSD are simple to calculate and can be reported by radiologists; applying a combined score helps to augment the predictive power of SWL, reduce cost, and improve treatment strategies. PMID:27141192
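The "combined score" suggested in the conclusion can be illustrated with a toy triage function built from the cut-offs reported above (10 mm, 830 HU, 90 mm SSD); the band labels and equal weighting are hypothetical, not a validated nomogram:

```python
def swl_favorability(stone_size_mm, density_hu, ssd_mm):
    """Count how many of the three reported favorable criteria are met.
    Cut-offs come from the abstract; the equal weighting and the band
    labels are illustrative only."""
    criteria = [stone_size_mm <= 10,   # smaller stones fragment better
                density_hu < 830,      # low-attenuation stones respond better
                ssd_mm < 90]           # short skin-to-stone distance
    score = sum(criteria)
    bands = {3: "favorable", 2: "intermediate",
             1: "intermediate", 0: "unfavorable"}
    return score, bands[score]

print(swl_favorability(8, 500, 80))     # → (3, 'favorable')
print(swl_favorability(15, 1200, 130))  # → (0, 'unfavorable')
```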
NASA Astrophysics Data System (ADS)
Suchomska, K.; Graczyk, D.; Smolec, R.; Pietrzyński, G.; Gieren, W.; Stȩpień, K.; Konorski, P.; Pilecki, B.; Villanova, S.; Thompson, I. B.; Górski, M.; Karczmarek, P.; Wielgórski, P.; Anderson, R. I.
2015-07-01
We have analyzed the double-lined eclipsing binary system ASAS J180057-2333.8 from the All Sky Automated Survey (ASAS) catalogue. We measure absolute physical and orbital parameters for this system based on archival V-band and I-band ASAS photometry, as well as on high-resolution spectroscopic data obtained with the ESO 3.6 m/HARPS and CORALIE spectrographs. The physical and orbital parameters of the system were derived with an accuracy of about 0.5-3 per cent. The system is a very rare configuration of two bright, well-detached giants of spectral types K1 and K4 and luminosity class II. The radii of the stars are R1 = 52.12 ± 1.38 and R2 = 67.63 ± 1.40 R⊙ and their masses are M1 = 4.914 ± 0.021 and M2 = 4.875 ± 0.021 M⊙. The exquisite accuracy of 0.5 per cent obtained for the masses of the components is one of the best mass determinations for giants. We derived a precise distance to the system of 2.14 ± 0.06 (stat.) ± 0.05 (syst.) kpc, which places the star in the Sagittarius-Carina arm. The Galactic rotational velocity of the star is Θs = 258 ± 26 km s-1, assuming Θ0 = 238 km s-1. A comparison with PARSEC isochrones places the system at an early phase of core helium burning, with an age slightly greater than 100 million years. The effect of overshooting on stellar evolutionary tracks was explored using the MESA star code.
NASA Astrophysics Data System (ADS)
Montes, D.; Caballero, J. A.; Alonso-Floriano, F. J.; Cortes Contreras, M.; Gonzalez-Alvarez, E.; Hidalgo, D.; Holgado, G.; Llamas, M.; Martinez-Rodriguez, H.; Sanz-Forcada, J.
2015-01-01
We are helping to compile the most comprehensive database of M dwarfs ever built, CARMENCITA, the CARMENES Cool dwarf Information and daTa Archive, which will be the CARMENES `input catalogue'. In addition to the science preparation with low- and high-resolution spectrographs and lucky imagers (see the other contributions in this volume), we compile a huge pile of public data on over 2100 M dwarfs and analyze them, mostly using virtual-observatory tools. Here we describe four specific actions carried out by master's and undergraduate students. They mine public archives for additional high-resolution spectroscopy (UVES, FEROS and HARPS), multi-band photometry (FUV-NUV-u-B-g-V-r-R-i-J-H-Ks-W1-W2-W3-W4), X-ray data (ROSAT, XMM-Newton and Chandra), periods, rotational velocities and Hα pseudo-equivalent widths. As described, there are many interdependencies among all these data.
Harper, F.T.; Breeding, R.J.; Brown, T.D.; Gregory, J.J.; Jow, H.N.; Payne, A.C.; Gorham, E.D.; Amos, C.N.; Helton, J.; Boyd, G.
1992-06-01
In support of the Nuclear Regulatory Commission's (NRC's) assessment of the risk from severe accidents at commercial nuclear power plants in the US reported in NUREG-1150, the Severe Accident Risk Reduction Program (SAARP) has completed a revised calculation of the risk to the general public from severe accidents at five nuclear power plants: Surry, Sequoyah, Zion, Peach Bottom and Grand Gulf. The emphasis in this risk analysis was not on determining a point estimate of risk, but on determining the distribution of risk and assessing the uncertainties that account for the breadth of this distribution. Off-site risk is initiated by events both internal and external to the power station. Much of this important input to the logic models was generated by expert panels. This document presents the distributions and the rationale supporting the distributions for the questions posed to the Source Term Panel.
NASA Astrophysics Data System (ADS)
Joosten, S.; Pammler, K.; Silny, J.
2009-02-01
The problem of electromagnetic interference with electronic implants such as cardiac pacemakers has been well known for many years. An increasing number of field sources in everyday life and the occupational environment leads unavoidably to an increased risk for patients with electronic implants. However, no obligatory national or international safety regulations exist for the protection of this patient group. The aim of this study is to determine the anatomical and physiological worst-case conditions for patients with an implanted pacemaker adjusted to unipolar sensing in external time-varying electric fields. The results of this study with 15 volunteers show that, in electric fields, the interference voltage at the input of a cardiac pacemaker varies by up to 200% due to individual factors alone. These factors should be considered in human studies and in the setting of safety regulations.
NASA Astrophysics Data System (ADS)
Orkin, V. L.; Khamaganov, V. G.; Martynova, L. E.; Kurylo, M. J.
2012-12-01
The emissions of halogenated (Cl, Br containing) organics of both natural and anthropogenic origin contribute to the balance of and changes in the stratospheric ozone concentration. The associated chemical cycles are initiated by the photochemical decomposition of the portion of source gases that reaches the stratosphere. Reactions with hydroxyl radicals and photolysis are the main processes dictating the compound lifetime in the troposphere and release of active halogen in the stratosphere for a majority of halogen source gases. Therefore, the accuracy of photochemical data is of primary importance for the purpose of comprehensive atmospheric modeling and for simplified kinetic estimations of global impacts on the atmosphere, such as in ozone depletion (i.e., the Ozone Depletion Potential, ODP) and climate change (i.e., the Global Warming Potential, GWP). The sources of critically evaluated photochemical data for atmospheric modeling, NASA/JPL Publications and IUPAC Publications, recommend uncertainties within 10%-60% for the majority of OH reaction rate constants with only a few cases where uncertainties lie at the low end of this range. These uncertainties can be somewhat conservative because evaluations are based on the data from various laboratories obtained during the last few decades. Nevertheless, even the authors of the original experimental works rarely estimate the total combined uncertainties of the published OH reaction rate constants to be less than ca. 10%. Thus, uncertainties in the photochemical properties of potential and current atmospheric trace gases obtained under controlled laboratory conditions still may constitute a major source of uncertainty in estimating the compound's environmental impact. One of the purposes of the presentation is to illustrate the potential for obtaining accurate laboratory measurements of the OH reaction rate constant over the temperature range of atmospheric interest. A detailed inventory of accountable sources of
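The role of the OH reaction rate constant in compound-lifetime estimates can be sketched with the standard Arrhenius form. The A-factor, E/R value and global-mean OH concentration below are placeholders for illustration, not evaluated recommendations for any specific compound:

```python
import numpy as np

def arrhenius(a_factor, e_over_r, temp_k):
    """k(T) = A * exp(-(E/R)/T), in cm^3 molecule^-1 s^-1."""
    return a_factor * np.exp(-e_over_r / temp_k)

# Placeholder kinetic parameters and an assumed global-mean [OH]
k_272 = arrhenius(2.0e-12, 1500.0, 272.0)      # illustrative values
oh_mean = 1.0e6                                 # molecule cm^-3, assumed
lifetime_years = 1.0 / (k_272 * oh_mean) / (365.25 * 24 * 3600)
print(f"OH-driven lifetime ≈ {lifetime_years:.1f} years")
```

A fractional uncertainty in k propagates directly into the same fractional uncertainty in the lifetime, which is why the 10%-60% rate-constant uncertainties quoted above matter for ODP and GWP estimates.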
Liu, Hui; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue; Shulin, Zhang
2014-04-24
The potential energy curves (PECs) of 54 spin-orbit states generated from the 22 electronic states of the O2 molecule are investigated for the first time for internuclear separations from about 0.1 to 1.0 nm. Of the 22 electronic states, the X(3)Σg(-), A(')(3)Δu, A(3)Σu(+), B(3)Σu(-), C(3)Πg, a(1)Δg, b(1)Σg(+), c(1)Σu(-), d(1)Πg, f(1)Σu(+), 1(5)Πg, 1(3)Πu, 2(3)Σg(-), 1(5)Σu(-), 2(1)Σu(-) and 2(1)Δg are found to be bound, whereas the 1(5)Σg(+), 2(5)Σg(+), 1(1)Πu, 1(5)Δg, 1(5)Πu and 2(1)Πu are found to be repulsive. The B(3)Σu(-) and d(1)Πg states possess double wells, and the 1(3)Πu, C(3)Πg, A'(3)Δu, 1(5)Δg and 2(5)Σg(+) states are inverted when the spin-orbit coupling is included. The PEC calculations are done by the complete active space self-consistent field (CASSCF) method, followed by the internally contracted multireference configuration interaction (icMRCI) approach with the Davidson correction. Core-valence correlation and scalar relativistic corrections are taken into account. The convergence of the present calculations is evaluated with respect to the basis set and level of theory. The vibrational properties are discussed for the 1(5)Πg, 1(3)Πu, d(1)Πg and 1(5)Σu(-) states and for the second well of the B(3)Σu(-) state. The spin-orbit coupling effect is accounted for by the state interaction method with the Breit-Pauli Hamiltonian. The PECs of all the electronic states and spin-orbit states are extrapolated to the complete basis set limit. The spectroscopic parameters are obtained and compared with available experimental and other theoretical results. Analyses demonstrate that the spectroscopic parameters reported here can be expected to be reliable predictions. The conclusion is reached that the effect of spin-orbit coupling on the spectroscopic parameters is small for almost all the electronic states involved in this paper, except for the 1(5)Σu(-), 1(5)Πg and 1(3)Πu. PMID:24486866
NASA Astrophysics Data System (ADS)
Shi, De-Heng; Liu, Qionglan; Yu, Wei; Sun, Jinfeng; Zhu, Zunlue
2014-05-01
The potential energy curves (PECs) of 23 Ω states generated from the 12 electronic states (X1 Σ +, 21 Σ +, 11 Σ -, 11 Π, 21 Π, 11 Δ, 13 Σ +, 23 Σ +, 13 Σ -, a3 Π, 23 Π and 13 Δ) are studied for the first time. All the states correlate to the first dissociation channel of the SiBr+ cation. Of these electronic states, the 23 Σ + is repulsive without the spin-orbit coupling, but becomes bound when the spin-orbit coupling is added. Without the spin-orbit coupling, the 11 Π, 21 Π and 23 Π are rather weakly bound states, and only the 11 Π state possesses a double well; with the spin-orbit coupling included, the a3 Π and 11 Π states possess double wells, and the 13 Σ + and 13 Σ - are inverted states. The PECs are calculated by the CASSCF method, followed by the internally contracted MRCI approach with the Davidson modification. Scalar relativistic correction is calculated by the third-order Douglas-Kroll Hamiltonian approximation with a cc-pVTZ-DK basis set. Core-valence correlation correction is included with a cc-pCVTZ basis set. The spin-orbit coupling is accounted for by the state interaction method with the Breit-Pauli Hamiltonian using the all-electron aug-cc-pCVTZ basis set. All the PECs are extrapolated to the complete basis set limit. The variation with internuclear separation of the spin-orbit coupling constant is briefly discussed. The spectroscopic parameters are evaluated for the 11 bound electronic states and the 23 bound Ω states, and are compared with available measurements. Excellent agreement is found between the present results and the experimental data, demonstrating that the spectroscopic parameters reported here can be expected to be reliable predictions. The Franck-Condon factors and radiative lifetimes of the transitions from the a3 Π 0 + and a3 Π 1 states to the X1 Σ + 0+ state are calculated for several low vibrational levels, and
Twelve example local data support files are automatically downloaded when the SDMProjectBuilder is installed on a computer. They allow the user to modify values to parameters that impact the release, migration, fate, and transport of microbes within a watershed, and control delin...
Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.
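For reference, the conventional inputs used as the comparison baseline are square-wave multisteps. The sketch below generates 3-2-1-1 and doublet time histories; the pulse width, amplitude and sample rate are arbitrary illustration values:

```python
import numpy as np

def multistep_input(pattern, pulse_dt, amplitude, dt):
    """Square-wave multistep input. `pattern` gives relative pulse widths
    and the sign alternates each pulse, so [3, 2, 1, 1] produces the
    classic 3-2-1-1 maneuver: +3T, -2T, +1T, -1T."""
    samples = []
    sign = 1.0
    for width in pattern:
        n = int(round(width * pulse_dt / dt))
        samples.extend([sign * amplitude] * n)
        sign = -sign
    return np.array(samples)

dt = 0.02                                   # sample interval, s (illustrative)
u3211 = multistep_input([3, 2, 1, 1], pulse_dt=0.5, amplitude=1.0, dt=dt)
doublet = multistep_input([1, 1], pulse_dt=0.5, amplitude=1.0, dt=dt)
```

An optimal design of the kind flight-tested here would replace these fixed shapes with a signal chosen to maximize parameter information at the same total energy.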
Jannik, T.; Karapatakis, D.; Lee, P.; Farfan, E.
2010-08-06
Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) Regulatory Guides. Within the regulatory guides, default values are provided for many of the dose model parameters but the use of site-specific values by the applicant is encouraged. A detailed survey of land and water use parameters was conducted in 1991 and is being updated here. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors to be used in human health exposure calculations at SRS are documented. Based on comparisons to the 2009 SRS environmental compliance doses, the following effects are expected in future SRS compliance dose calculations: (1) Aquatic all-pathway maximally exposed individual doses may go up about 10 percent due to changes in the aquatic bioaccumulation factors; (2) Aquatic all-pathway collective doses may go up about 5 percent due to changes in the aquatic bioaccumulation factors that offset the reduction in average individual water consumption rates; (3) Irrigation pathway doses to the maximally exposed individual may go up about 40 percent due to increases in the element-specific transfer factors; (4) Irrigation pathway collective doses may go down about 50 percent due to changes in food productivity and production within the 50-mile radius of SRS; (5) Air pathway doses to the maximally exposed individual may go down about 10 percent due to the changes in food productivity in the SRS area and to the changes in element-specific transfer factors; and (6
Input to the PRAST computer code used in the SRS probabilistic risk assessment
Kearnaghan, D.P.
1992-10-15
The PRAST (Production Reactor Algorithm for Source Terms) computer code was developed by Westinghouse Savannah River Company and Science Application International Corporation for the quantification of source terms for the Savannah River Site (SRS) Reactor Probabilistic Risk Assessment. PRAST requires as input a set of release fractions, decontamination factors, transfer fractions and source term characteristics that accurately reflect the conditions that are evaluated by PRAST. This document links the analyses which form the basis for the PRAST input parameters. In addition, it gives the distributions of the input parameters that are uncertain and considered important to the evaluation of the source terms released to the environment.
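The way the listed inputs combine can be sketched as a simple multiplicative chain. The numbers and the single-nuclide treatment below are illustrative only, not PRAST's actual algorithm:

```python
def source_term_to_environment(core_inventory, release_fraction,
                               transfer_fraction, decontamination_factor):
    """Activity reaching the environment for one nuclide group:
    inventory * (fraction released from fuel) * (fraction transported
    through the facility) / (decontamination factor of mitigation)."""
    return (core_inventory * release_fraction * transfer_fraction
            / decontamination_factor)

# Illustrative numbers only
release = source_term_to_environment(core_inventory=1.0e6,   # Ci in core
                                     release_fraction=0.1,
                                     transfer_fraction=0.5,
                                     decontamination_factor=100.0)
print(release)  # → 500.0
```

In a probabilistic assessment, each factor would be sampled from its uncertainty distribution rather than fixed, which is why the document emphasizes the distributions of the uncertain inputs.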
Wang, Yong; Goh, Wang Ling; Chai, Kevin T-C; Mu, Xiaojing; Hong, Yan; Kropelnicki, Piotr; Je, Minkyu
2016-04-01
The parasitic effects from electromechanical resonance, coupling, and substrate losses were collected to derive a new two-port equivalent-circuit model for Lamb wave resonators, especially those fabricated in silicon technology. The proposed model is a hybrid π-type Butterworth-Van Dyke (PiBVD) model that accounts for the above-mentioned parasitic effects, which are commonly observed in Lamb-wave resonators. It combines the interdigital capacitance (both plate and fringe capacitance), the interdigital resistance, Ohmic losses in the substrate, and the acoustic motional behavior of the typical Modified Butterworth-Van Dyke (MBVD) model. In the case studies presented in this paper using two-port Y-parameters, the PiBVD model fitted significantly better than the typical MBVD model, strengthening its capability of characterizing both the magnitude and phase of either Y11 or Y21. The accurate modelling of two-port Y-parameters makes the PiBVD model beneficial for the characterization of Lamb-wave resonators, providing accurate simulation of Lamb-wave resonators and oscillators. PMID:27131699
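As a baseline for the comparison above, the one-port admittance of the MBVD core (static capacitance C0 in parallel with a motional R-L-C branch) can be evaluated directly. The element values below are illustrative placeholders, not fitted to any measured resonator:

```python
import numpy as np

def mbvd_admittance(f, c0, rm, lm, cm):
    """One-port admittance of the Modified Butterworth-Van Dyke core:
    static capacitance C0 in parallel with a series Rm-Lm-Cm motional
    branch."""
    w = 2 * np.pi * f
    z_motional = rm + 1j * w * lm + 1 / (1j * w * cm)
    return 1j * w * c0 + 1 / z_motional

# Illustrative (not measured) element values for a GHz-range mode
c0, rm, lm, cm = 1e-12, 50.0, 1e-4, 2.5e-17
fs = 1 / (2 * np.pi * np.sqrt(lm * cm))        # series resonance frequency
f = np.linspace(0.95 * fs, 1.05 * fs, 2001)
y = mbvd_admittance(f, c0, rm, lm, cm)
print(f"fs ≈ {fs / 1e9:.3f} GHz, peak |Y| ≈ {np.abs(y).max():.4f} S")
```

The PiBVD extension discussed in the abstract wraps this motional core in additional series and shunt parasitic elements to form a two-port π network; those elements are omitted here.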
NASA Astrophysics Data System (ADS)
de la Paz, Mercedes; Gómez-Parra, Abelardo; Forja, Jesús
2008-06-01
The main objective of the present study is to assess the temporal variability of the carbonate system, and the mechanisms driving that variability, in the Rio San Pedro, a tidal creek located in the Bay of Cadiz (SW Iberian Peninsula). This shallow tidal creek is affected by effluents of organic matter and nutrients from surrounding marine fish farms. In 2004, 11 tidal samplings, seasonally distributed, were carried out for the measurement of total alkalinity (TA), pH, dissolved oxygen and Chlorophyll-a (Chl-a) using a fixed station. In addition, several longitudinal samplings were carried out both in the tidal creek and in the adjacent waters of the Bay of Cadiz, in order to obtain a spatial distribution of the carbonate parameters. Tidal mixing is the main factor controlling the dissolved inorganic carbon (DIC) variability, showing almost conservative behaviour on a tidal time scale. The amplitude of the daily oscillations of DIC, pH and chlorophyll shows a high dependence on the spring-neap tide sequence, with the maximum amplitude associated with spring tides. Additionally, a marked seasonality has been found in the DIC, pH and oxygen concentrations. This seasonality seems to be related to the increase in metabolic rates with temperature, the alternation of storm events and high evaporation rates, together with intense seasonal variability in the discharges from fish farms. In addition, the export of DIC from the Rio San Pedro to the adjacent coastal area has been evaluated using the tidal prism model, obtaining a net export of 1.05×10^10 g C yr^-1.
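The export estimate quoted at the end follows the tidal prism approach. A minimal version is sketched below; the prism volume, concentrations, return-flow factor and tide count are illustrative placeholders, not the study's values:

```python
def tidal_prism_export_gC_yr(v_prism_m3, dic_creek_gC_m3, dic_bay_gC_m3,
                             return_flow=0.5, tides_per_year=705):
    """Net DIC export from creek to bay under the tidal prism model:
    each semidiurnal tide flushes a prism volume V_p, of which a fraction
    (1 - return_flow) is genuinely exchanged with bay water."""
    exchanged_per_tide = v_prism_m3 * (1.0 - return_flow)
    return (exchanged_per_tide * (dic_creek_gC_m3 - dic_bay_gC_m3)
            * tides_per_year)

# Illustrative placeholder values
export = tidal_prism_export_gC_yr(1.0e6, 30.0, 28.0)
print(f"{export:.3g} g C per year")
```

The sign convention makes a positive result a net export to the bay, consistent with the creek being DIC-enriched by the fish-farm effluents.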
NASA Astrophysics Data System (ADS)
Bag, S.; de, A.
2008-11-01
An accurate estimation of the temperature field in the weld pool and its surrounding area is important for a priori determination of the weld-pool dimensions and the weld thermal cycles. A finite element based three-dimensional (3-D) quasi-steady heat-transfer model is developed in the present work to compute the temperature field in the gas tungsten arc welding (GTAW) process. The numerical model considers temperature-dependent material properties and latent heat of melting and solidification. A novelty of the numerical model is that the welding heat source is considered in the form of an adaptive volumetric heat source that conforms to the size and the shape of the weld pool. The need to predefine the dimensions of the volumetric heat source is thus overcome. The numerical model is further integrated with a parent-centric recombination (PCX) operated generalized generation gap (G3) model based genetic algorithm to identify the magnitudes of process efficiency and arc radius that are usually unknown but required for the accurate estimation of the net heat input into the workpiece. The complete numerical model and the genetic algorithm based optimization code are developed indigenously using an Intel Fortran Compiler. The integrated model is validated further with a number of experimentally measured weld dimensions in GTA-welded stainless steel samples.
Liebetrau, A.M.
1983-10-01
Work is underway at Pacific Northwest Laboratory (PNL) to improve the probabilistic analysis used to model pressurized thermal shock (PTS) incidents in reactor pressure vessels, and, further, to incorporate these improvements into the existing Vessel Integrity Simulation Analysis (VISA) code. Two topics related to work on input distributions in VISA are discussed in this paper. The first involves the treatment of flaw size distributions and the second concerns errors in the parameters in the (Guthrie) equation which is used to compute ΔRT_NDT, the shift in reference temperature for nil-ductility transition.
Toward an inventory of nitrogen input to the United States
Accurate accounting of nitrogen inputs is increasingly necessary for policy decisions related to aquatic nutrient pollution. Here we synthesize available data to provide the first integrated estimates of the amount and uncertainty of nitrogen inputs to the United States. Abou...
Input/output system identification - Learning from repeated experiments
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Horta, Lucas G.; Longman, Richard W.
1990-01-01
The paper describes three approaches, and possible variations, for the determination of the Markov parameters from forced-response data using general inputs. It is shown that, when the parameters in the solution procedure are bootstrapped, the results can be obtained very efficiently, but the errors propagate throughout all parameters. By arranging the data in a different form and using singular value decomposition, the resulting identified parameters are more accurate, with the fewest successive experiments, at the expense of a large matrix singular value decomposition. When a recursive procedure is employed, the calculations can be performed very efficiently, but the number of repetitions of the experiments is much greater for a given accuracy than for any of the previous approaches. An alternative formulation is proposed to combine the advantages of each of the approaches.
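The SVD-based arrangement described above can be sketched for a hypothetical SISO system (numpy's `pinv` computes the pseudoinverse via singular value decomposition; the system and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FIR system: y[k] = sum_j h[j] * u[k-j].
# h_true plays the role of the Markov parameters to be identified.
h_true = np.array([0.0, 1.0, 0.9, 0.5, 0.2, 0.05])
m = len(h_true)

# One "experiment": a general (random) input, not an impulse.
N = 200
u = rng.standard_normal(N)
y = np.convolve(u, h_true)[:N]

# Arrange the data as a Toeplitz regression y = h U and solve with the
# SVD-based pseudoinverse, as in the second approach described above.
U = np.zeros((m, N))
for j in range(m):
    U[j, j:] = u[:N - j]
h_est = y @ np.linalg.pinv(U)   # pinv uses the SVD internally

print(np.max(np.abs(h_est - h_true)))   # essentially zero for clean data
```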
Mackay, Donald; Hughes, Lauren; Powell, David E; Kim, Jaeshin
2014-09-01
The QWASI fugacity mass balance model has been widely used since 1983 for both scientific and regulatory purposes to estimate the concentrations of organic chemicals in water and sediment, given an assumed rate of chemical emission, advective inflow in water or deposition from the atmosphere. It has become apparent that an updated version is required, especially to incorporate improved methods of obtaining input parameters such as partition coefficients. Accordingly, the model has been revised and it is now available in spreadsheet format. Changes to the model are described and the new version is applied to two chemicals, D5 (decamethylcyclopentasiloxane) and PCB-180, in two lakes, Lake Pepin (MN, USA) and Lake Ontario, showing the model's capability of illustrating both the chemical to chemical differences and lake to lake differences. Since there are now increased regulatory demands for rigorous sensitivity and uncertainty analyses, these aspects are discussed and two approaches are illustrated. It is concluded that the new QWASI water quality model can be of value for both evaluative and simulation purposes, thus providing a tool for obtaining an improved understanding of chemical mass balances in lakes, as a contribution to the assessment of fate and exposure and as a step towards the assessment of risk. PMID:24997940
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Albert, C.; Reichert, P.; Rieckermann, J.
2015-12-01
Rainfall is the main driver of hydrological systems. Unfortunately, it is highly variable in space and time and therefore difficult to observe accurately. This poses a serious challenge for correctly estimating the catchment-averaged precipitation, a key input for hydrological models. As biased precipitation leads to biased parameter estimates and thus to biased runoff predictions, it is very important to have a realistic description of precipitation uncertainty. Rainfall multipliers (RM), which correct each observed storm with a random factor, provide a first step in this direction. Nevertheless, they often fail when the estimated input has a different temporal pattern from the true one or when a storm is not detected by the rain gauge. In this study we propose a more realistic input error model, which is able to overcome these challenges and increase our certainty by better estimating model input and parameters. We formulate the average precipitation over the watershed as a stochastic input process (SIP). We suggest a transformed Gauss-Markov process, which is estimated in a Bayesian framework by using input (rainfall) and output (runoff) data. We tested the methodology in a 28.6 ha urban catchment represented by an accurate conceptual model. Specifically, we perform calibration and predictions with SIP and RM using accurate data from nearby rain gauges (R1) and inaccurate data from a distant gauge (R2). Results show that with SIP, the estimated model parameters are "protected" from the corrupting impact of inaccurate rainfall. Additionally, SIP can correct input biases during calibration (Figure) and reliably quantify rainfall and runoff uncertainties during both calibration (Figure) and validation. In our real-world application with non-trivial rainfall errors, this was not the case with RM. We therefore recommend SIP in all cases where the input is the predominant source of uncertainty. Furthermore, the high-resolution rainfall intensities obtained with this
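A transformed Gauss-Markov process of the kind proposed can be sketched as follows (an Ornstein-Uhlenbeck latent process with an exponential transform; the parameter values and the transform choice are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Latent Gauss-Markov (Ornstein-Uhlenbeck) process, exactly discretized:
# xi[k+1] = a*xi[k] + sqrt(1-a^2)*sigma*eps. Values are illustrative.
dt = 1.0 / 60.0     # time step, hours (1-minute resolution)
tau = 0.5           # correlation time of the latent process, hours
sigma = 1.0         # stationary standard deviation of the latent process
n = 24 * 60         # one day of 1-minute steps

a = np.exp(-dt / tau)
xi = np.zeros(n)
for k in range(1, n):
    xi[k] = a * xi[k - 1] + sigma * np.sqrt(1 - a**2) * rng.standard_normal()

# Transform to nonnegative rainfall intensities (mm/h); the exponential
# transform is one plausible choice, not necessarily the paper's.
rain = np.exp(xi - 2.0)
```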
A new generalized correlation for accurate vapor pressure prediction
NASA Astrophysics Data System (ADS)
An, Hui; Yang, Wenming
2012-08-01
Accurate knowledge of the vapor pressure of organic liquids is very important for oil and gas processing operations. In combustion modeling, the accuracy of numerical predictions is also highly dependent on fuel properties such as vapor pressure. In this Letter, a new generalized correlation is proposed based on the Lee-Kesler method, in which a fuel-dependent parameter 'A' is introduced. The proposed method only requires as input the critical temperature, the normal boiling temperature and the acentric factor of the fluid. With this method, vapor pressures have been calculated and compared with compiled data for 42 organic liquids over 1366 data points, and the overall average absolute percentage deviation is only 1.95%.
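The baseline Lee-Kesler correlation that the proposed method modifies can be sketched as follows (this is the standard form without the fuel-dependent parameter 'A'; note it uses the critical pressure as well, and the benzene property values are literature values quoted from memory):

```python
import math

def lee_kesler_psat(T, Tc, Pc, omega):
    """Standard Lee-Kesler vapor pressure correlation (without the
    Letter's fuel-dependent parameter 'A').
    T, Tc in K; Pc in bar; returns saturation pressure in bar."""
    Tr = T / Tc
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * math.log(Tr) + 0.169347 * Tr**6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * math.log(Tr) + 0.43577 * Tr**6
    return Pc * math.exp(f0 + omega * f1)

# Benzene: Tc = 562.05 K, Pc = 48.95 bar, omega = 0.210. At the normal
# boiling point (353.25 K) the prediction should be close to 1 atm.
p = lee_kesler_psat(353.25, 562.05, 48.95, 0.210)
print(f"{p:.3f} bar")
```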
Deridder, Sander; Desmet, Gert
2012-02-01
Using computational fluid dynamics (CFD), the effective B-term diffusion constant γ(eff) has been calculated for four different random sphere packings with different particle size distributions and packing geometries. Both fully porous and porous-shell sphere packings are considered. The obtained γ(eff)-values have subsequently been used to determine the value of the three-point geometrical constant (ζ₂) appearing in the 2nd-order accurate effective medium theory expression for γ(eff). It was found that, whereas the 1st-order accurate effective medium theory expression is accurate to within 5% over most part of the retention factor range, the 2nd-order accurate expression is accurate to within 1% when calculated with the best-fit ζ₂-value. Depending on the exact microscopic geometry, the best-fit ζ₂-values typically lie in the range of 0.20-0.30, holding over the entire range of intra-particle diffusion coefficients typically encountered for small molecules (0.1 ≤ D(pz)/D(m) ≤ 0.5). These values are in agreement with the ζ₂-value proposed by Thovert et al. for the random packing they considered. PMID:22236565
INDES User's guide multistep input design with nonlinear rotorcraft modeling
NASA Technical Reports Server (NTRS)
1979-01-01
The INDES computer program, a multistep input design program used as part of a data processing technique for rotorcraft systems identification, is described. Flight test inputs based on INDES improve the accuracy of parameter estimates. The input design algorithm, program input, and program output are presented.
Analysis of Stochastic Response of Neural Networks with Stochastic Input
1996-10-10
Software permits the user to extend the capability of his/her neural network to include probabilistic characteristics of input parameters. The user inputs the topology and weights associated with the neural network, along with the distributional characteristics of the input parameters. The network response is provided via a cumulative density function of the network response variable.
ERIC Educational Resources Information Center
Berliss-Vincent, Jane; Whitford, Gigi
2002-01-01
This article presents both the factors involved in successful speech input use and the potential barriers that may suggest that other access technologies could be more appropriate for a given individual. Speech input options that are available are reviewed and strategies for optimizing use of speech recognition technology are discussed. (Contains…
NASA Technical Reports Server (NTRS)
Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette
2006-01-01
This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.
High input impedance amplifier
NASA Technical Reports Server (NTRS)
Kleinberg, Leonard L.
1995-01-01
High input impedance amplifiers are provided which reduce the input impedance, in essence, to solely a capacitive reactance, or, in a somewhat more complex design, provide an extremely high, essentially infinite, capacitive reactance. In the first embodiment, where the input impedance is reduced to solely a capacitive reactance, an operational amplifier in a follower configuration is driven at its non-inverting input and a resistor of a predetermined magnitude is connected between the inverting and non-inverting inputs. A second embodiment eliminates the capacitance from the input by adding a second stage to the first embodiment. The second stage is a second operational amplifier in a non-inverting gain-stage configuration, where the output of the first follower stage drives the non-inverting input of the second stage and the output of the second stage is fed back to the non-inverting input of the first stage through a capacitor of a predetermined magnitude. These amplifiers, while generally useful, are particularly valuable as sensor buffer amplifiers that may eliminate significant sources of error.
Signal Prediction With Input Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin
1999-01-01
A novel coding technique is presented for signal prediction with applications including speech coding, system identification, and estimation of input excitation. The approach is based on the blind equalization method for speech signal processing in conjunction with the geometric subspace projection theory to formulate the basic prediction equation. The speech-coding problem is often divided into two parts, a linear prediction model and excitation input. The parameter coefficients of the linear predictor and the input excitation are solved simultaneously and recursively by a conventional recursive least-squares algorithm. The excitation input is computed by coding all possible outcomes into a binary codebook. The coefficients of the linear predictor and excitation, and the index of the codebook can then be used to represent the signal. In addition, a variable-frame concept is proposed to block the same excitation signal in sequence in order to reduce the storage size and increase the transmission rate. The results of this work can be easily extended to the problem of disturbance identification. The basic principles are outlined in this report and differences from other existing methods are discussed. Simulations are included to demonstrate the proposed method.
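The recursive least-squares solution of the predictor coefficients can be sketched on an illustrative AR(2) signal (a generic RLS sketch, not the paper's joint predictor/excitation solver):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative AR(2) signal: x[k] = a1*x[k-1] + a2*x[k-2] + e[k].
a_true = np.array([1.6, -0.8])
N = 2000
x = np.zeros(N)
for k in range(2, N):
    x[k] = a_true @ np.array([x[k-1], x[k-2]]) + 0.01 * rng.standard_normal()

# Standard recursive least-squares (RLS) with forgetting factor lam.
lam = 0.999
theta = np.zeros(2)       # predictor coefficient estimates
P = 1e3 * np.eye(2)       # inverse correlation matrix estimate
for k in range(2, N):
    phi = np.array([x[k-1], x[k-2]])        # regressor
    K = P @ phi / (lam + phi @ P @ phi)     # gain vector
    theta = theta + K * (x[k] - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam

print(theta)   # converges toward a_true
```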
ERIC Educational Resources Information Center
Rom, Mark Carl
2011-01-01
Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…
NASA Astrophysics Data System (ADS)
Foster, K.
1994-09-01
This document is a description of a computer program called Format( )MEDIC( )Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.
Inferring Indel Parameters using a Simulation-based Approach.
Levy Karin, Eli; Rabin, Avigayel; Ashkenazy, Haim; Shkedy, Dafna; Avram, Oren; Cartwright, Reed A; Pupko, Tal
2015-12-01
In this study, we present a novel methodology to infer indel parameters from multiple sequence alignments (MSAs) based on simulations. Our algorithm searches for the set of evolutionary parameters describing indel dynamics which best fits a given input MSA. In each step of the search, we use parametric bootstraps and the Mahalanobis distance to estimate how well a proposed set of parameters fits input data. Using simulations, we demonstrate that our methodology can accurately infer the indel parameters for a large variety of plausible settings. Moreover, using our methodology, we show that indel parameters substantially vary between three genomic data sets: Mammals, bacteria, and retroviruses. Finally, we demonstrate how our methodology can be used to simulate MSAs based on indel parameters inferred from real data sets. PMID:26537226
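The Mahalanobis scoring step can be sketched as follows (the summary statistics here are random stand-ins; the paper's actual MSA statistics and parametric-bootstrap machinery are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Score a proposed parameter set by the Mahalanobis distance between the
# input data's summary statistics and those of simulations under the
# proposal. 200 simulated 3-dimensional statistic vectors (stand-ins):
sims = rng.standard_normal((200, 3)) + np.array([1.0, 2.0, 3.0])
mu = sims.mean(axis=0)                   # mean of simulated statistics
cov = np.cov(sims, rowvar=False)         # their covariance

obs = np.array([1.2, 1.9, 3.1])          # "observed" statistics
diff = obs - mu
d2 = diff @ np.linalg.solve(cov, diff)   # squared Mahalanobis distance
d = np.sqrt(d2)
```

Smaller distances indicate parameter proposals whose simulations better match the input data.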
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
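The monotone construction the article improves upon can be sketched with a standard Fritsch-Carlson-type slope limiter (a common second-order-accurate baseline, not the article's uniformly third/fourth-order scheme; data and helper names are illustrative):

```python
import numpy as np

def monotone_slopes(x, y):
    """Fritsch-Carlson-type monotone slopes (weighted harmonic mean at
    interior points, zero where secants change sign)."""
    h = np.diff(x)
    d = np.diff(y) / h                     # secant slopes
    m = np.zeros_like(y)
    for i in range(1, len(x) - 1):
        if d[i-1] * d[i] > 0:              # secants agree in sign
            w1 = 2*h[i] + h[i-1]
            w2 = h[i] + 2*h[i-1]
            m[i] = (w1 + w2) / (w1/d[i-1] + w2/d[i])
    m[0], m[-1] = d[0], d[-1]              # simple one-sided endpoints
    return m

def eval_hermite(x, y, m, xq):
    """Evaluate the piecewise cubic Hermite interpolant at points xq."""
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i+1] - x[i]
    t = (xq - x[i]) / h
    h00 = (1 + 2*t) * (1 - t)**2
    h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2*t)
    h11 = t**2 * (t - 1)
    return h00*y[i] + h10*h*m[i] + h01*y[i+1] + h11*h*m[i+1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.5, 2.0, 2.1])   # monotone data with a sharp rise
m = monotone_slopes(x, y)
xq = np.linspace(0.0, 4.0, 401)
yq = eval_hermite(x, y, m, xq)            # stays monotone, no overshoot
```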
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
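A minimal single-step explicit high-order stencil (a generic fourth-order example for illustration, not one of the article's specific algorithms) shows the accuracy scaling:

```python
import math

def d1_fourth_order(f, x, h):
    """Fourth-order central difference approximation of f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Verify fourth-order convergence on f = sin: halving h should cut the
# error by a factor of about 2^4 = 16.
x = 1.0
err_h  = abs(d1_fourth_order(math.sin, x, 0.10) - math.cos(x))
err_h2 = abs(d1_fourth_order(math.sin, x, 0.05) - math.cos(x))
ratio = err_h / err_h2
print(ratio)   # close to 16
```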
NASA Astrophysics Data System (ADS)
Moussa, D.; Damache, S.; Ouichaoui, S.
2015-01-01
The stopping powers of thin Al foils for H+ and 4He+ ions have been measured over the energy range E = 206.03-2680.05 keV/amu with an overall relative uncertainty better than 1% using the transmission method. The derived S(E) experimental data are compared to previous ones from the literature, to values derived by the SRIM-2008 code or compiled in the ICRU-49 report, and to the predictions of the Sigmund-Schinner binary collision stopping theory. In addition, the S(E) data for H+ ions, together with those for He2+ ions reported by Andersen et al. (1977), have been analyzed over the energy interval E > 1.0 MeV using the modified Bethe-Bloch stopping theory. The following sets of values have been inferred for the mean excitation potential, I, and the Barkas-Andersen parameter, b, for H+ and He+ projectiles, respectively: {I = 164 ± 3 eV, b = 1.40} and {I = 163 ± 2.5 eV, b = 1.38}. As expected, the I parameter is found to be independent of the projectile electronic structure, presumably indicating that the contribution of charge exchange effects becomes negligible as the projectile velocity increases. Therefore, the I parameter must be determined from precise stopping power measurements performed at high projectile energies where the Bethe stopping theory is fully valid.
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2001-01-01
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Probenl/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
Multiple-input experimental modal analysis
NASA Technical Reports Server (NTRS)
Allemang, R. J.; Brown, D. L.
1985-01-01
The development of experimental modal analysis techniques is reviewed. System and excitation assumptions are discussed. The methods examined include the forced normal mode excitation method, the frequency response function method, the damped complex exponential response method, the Ibrahim time domain approach, the polyreference approach, and mathematical input-output model methods. The current trend toward multiple input utilization in the estimation of system parameters is noted.
Inverse Tasks In The Tsunami Problem: Nonlinear Regression With Inaccurate Input Data
NASA Astrophysics Data System (ADS)
Lavrentiev, M.; Shchemel, A.; Simonov, K.
The problem can be formally stated as follows: a distribution of various combinations of observed values should be estimated, with the totality of the combinations represented by a set of variables and the results of observations determining a sample of outputs. Within the scope of the stated problem, a continuous (along with its derivatives) homomorphic mapping of the space of hidden parameters to the space of observed parameters should be found. This allows missing input information to be reconstructed when the number of inputs is not less than the number of hidden parameters, and the distribution to be estimated when the information is not sufficient for unambiguous prediction of the unknown inputs. The following approach to building an approximation from the sample is suggested: the sample is supplemented with hidden parameters distributed uniformly in a bounded multidimensional space. One should then find a correspondence between model and observed outputs, which ensures that the best approximation is the most accurate. In odd iterations, the dependence between hidden inputs and outputs is optimized (as in the conventional problem). The correspondence between tasks is changed when the error decreases while the distribution of inputs remains intact. A special transform is therefore applied to reduce the error at every iteration. If the measure of the distribution is constant, the condition on the transformations is simplified; such transforms are called "canonical" or "volume-invariant" transforms and are therefore well known. This approach is suggested for solving the main inverse task of the tsunami problem: estimating the parameters of the tsunami source from tsunami records at the coast and on the shelf.
Optical input impedance of nanostrip antennas
NASA Astrophysics Data System (ADS)
Wang, Ivan; Du, Ya-ping
2012-05-01
We conduct an investigation into optical nanoantennas in the form of a strip dipole made from aluminum. Using finite-difference time-domain simulation, both the optical input impedance and the radiation efficiency of nanostrip antennas are addressed. An equivalent circuit is presented as well for the nanostrip antennas at optical resonances. The optical input resistance can be adjusted by varying the geometric parameters of the antenna strips. By changing both the strip area and the strip length simultaneously, the optical input resistance can be adjusted to match the impedance of an external feeding or loading circuit. It is found that the optical radiation efficiency does not change significantly when the size of a nanostrip antenna varies moderately.
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
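Frequency response estimation from input/output data can be sketched with a periodic wide-band input on a known first-order system (an offline FFT-ratio sketch, not the paper's real-time algorithm; the system and excitation are illustrative):

```python
import numpy as np

# Excite a known first-order system with a periodic wide-band (multisine)
# input, then estimate the frequency response as the FFT ratio Y/U over
# the last period, after transients have decayed.
N = 256                           # samples per period
k_exc = np.array([1, 3, 7, 15])   # excited harmonics
rng = np.random.default_rng(3)
phases = rng.uniform(0, 2 * np.pi, k_exc.size)
n = np.arange(N)
u_period = np.sum(np.sin(2 * np.pi * np.outer(k_exc, n) / N
                         + phases[:, None]), axis=0)
u = np.tile(u_period, 10)         # 10 periods; early ones absorb transients

# Simulate y[k] = 0.9*y[k-1] + 0.1*u[k]
y = np.zeros_like(u)
for k in range(1, u.size):
    y[k] = 0.9 * y[k-1] + 0.1 * u[k]

U = np.fft.fft(u[-N:])
Y = np.fft.fft(y[-N:])
H_hat = Y[k_exc] / U[k_exc]                    # estimate at excited bins
w = 2 * np.pi * k_exc / N
H_true = 0.1 / (1 - 0.9 * np.exp(-1j * w))     # analytic response
print(np.max(np.abs(H_hat - H_true)))          # ~0 after transients decay
```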
Laumer, Bernhard; Schuster, Fabian; Stutzmann, Martin; Bergmaier, Andreas; Dollinger, Guenther; Eickhoff, Martin
2013-06-21
Zn₁₋ₓMgₓO epitaxial films with Mg concentrations 0 ≤ x ≤ 0.3 were grown by plasma-assisted molecular beam epitaxy on a-plane sapphire substrates. Precise determination of the Mg concentration x was performed by elastic recoil detection analysis. The bandgap energy was extracted from absorption measurements with high accuracy, taking electron-hole interaction and exciton-phonon complexes into account. From these results a linear relationship between bandgap energy and Mg concentration is established for x ≤ 0.3. Due to alloy disorder, the increase of the photoluminescence emission energy with Mg concentration is less pronounced. An analysis of the lattice parameters reveals that the epitaxial films grow biaxially strained on a-plane sapphire.
VizieR Online Data Catalog: CARMENES input catalogue of M dwarfs. I (Alonso-Floriano+, 2015)
NASA Astrophysics Data System (ADS)
Alonso-Floriano, F. J.; Morales, J. C.; Caballero, J. A.; Montes, D.; Klutsch, A.; Mundt, R.; Cortes-Contreras, M.; Ribas, I.; Reiners, A.; Amado, P. J.; Quirrenbach, A.; Jeffers, S. V.
2015-03-01
List of 753 late-type stars, mostly M dwarfs, observed with the low-resolution optical spectrograph CAFOS at the 2.2m Calar Alto telescope for the preparation of the CARMENES input catalogue (http://carmenes.caha.es/). We provide basic data, observation parameters, spectral-typing indices, zeta metallicity index, Hα pseudo-equivalent width, spectral type from the literature, and our accurate adopted spectral type. (4 data files).
NASA Astrophysics Data System (ADS)
Itano, Wayne M.; Ramsey, Norman F.
1993-07-01
The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a time device more stable than conventional atomic clocks. The areas of application of the ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Measuring Input Thresholds on an Existing Board
NASA Technical Reports Server (NTRS)
Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.
2011-01-01
A critical PECL (positive emitter-coupled logic) interface to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in the worst case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way to measure the exact input threshold of this device for 64 inputs on a flight board was needed. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog-to-digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pull-down current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs, including the two that are probed directly. The data was combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
Crespo, Cristina; Fernández, José R; Aboy, Mateo; Mojón, Artemio
2013-03-01
This paper reports the results of a study designed to determine whether there are statistically significant differences between the values of ambulatory blood pressure monitoring (ABPM) parameters obtained using different methods (fixed schedule, diary, and an automatic algorithm based on actigraphy) of defining the main activity and rest periods, and to determine the clinical relevance of such differences. We studied 233 patients (98 men/135 women), 61.29 ± .83 yrs of age (mean ± SD). Statistical methods were used to measure agreement in the diagnosis and classification of subjects within the context of ABPM and cardiovascular disease risk assessment. The results show that there are statistically significant differences both at the group and individual levels. Those at the individual level have clinically significant implications, as they can result in a different classification, and, therefore, different diagnosis and treatment for individual subjects. The use of an automatic algorithm based on actigraphy can lead to better individual treatment by correcting the accuracy problems associated with the fixed schedule on patients whose actual activity/rest routine differs from the fixed schedule assumed, and it also overcomes the limitations and reliability issues associated with the use of diaries. PMID:23130607
Blind estimation of compartmental model parameters.
Di Bella, E V; Clackdoyle, R; Gullberg, G T
1999-03-01
Computation of physiologically relevant kinetic parameters from dynamic PET or SPECT imaging requires knowledge of the blood input function. This work is concerned with developing methods to accurately estimate these kinetic parameters blindly; that is, without use of a directly measured blood input function. Instead, only measurements of the output functions--the tissue time-activity curves--are used. The blind estimation method employed here minimizes a set of cross-relation equations, from which the blood term has been factored out, to determine compartmental model parameters. The method was tested with simulated data appropriate for dynamic SPECT cardiac perfusion imaging with 99mTc-teboroxime and for dynamic PET cerebral blood flow imaging with 15O water. The simulations did not model the tomographic process. Noise levels typical of the respective modalities were employed. From three to eight different regions were simulated, each with different time-activity curves. The time-activity curve (24 or 70 time points) for each region was simulated with a compartment model. The simulation used a biexponential blood input function and washin rates between 0.2 and 1.3 min(-1) and washout rates between 0.2 and 1.0 min(-1). The system of equations was solved numerically and included constraints to bound the range of possible solutions. From the cardiac simulations, washin was determined to within a scale factor of the true washin parameters with less than 6% bias and 12% variability. 99mTc-teboroxime washout results had less than 5% bias, but variability ranged from 14% to 43%. The cerebral blood flow washin parameters were determined with less than 5% bias and 4% variability. The washout parameters were determined with less than 4% bias, but had 15-30% variability. Since washin is often the parameter of most use in clinical studies, the blind estimation approach may eliminate the current necessity of measuring the input function when performing certain dynamic studies
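The cross-relation trick can be illustrated in a few lines: if each tissue curve is the blood input convolved with a one-compartment kernel h_i(t) = K1_i * exp(-k2_i * t), then y_i * h_j = y_j * h_i (convolution) for every region pair, and the unknown input cancels. The sketch below is a toy illustration; the one-compartment kernels, the scale-fixing convention, and the use of scipy are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def cross_relation_residuals(params, curves, t):
    """Residuals of y_i * h_j - y_j * h_i (discrete convolution),
    from which the blood input function has been factored out."""
    n = len(curves)
    k1 = np.concatenate(([1.0], params[:n - 1]))  # fix the overall scale
    k2 = params[n - 1:]
    dt = t[1] - t[0]
    h = [k1[i] * np.exp(-k2[i] * t) * dt for i in range(n)]
    m = len(t)
    res = []
    for i in range(n):
        for j in range(i + 1, n):
            res.append(np.convolve(curves[i], h[j])[:m]
                       - np.convolve(curves[j], h[i])[:m])
    return np.concatenate(res)

def blind_fit(curves, t, x0):
    """Estimate washin (up to a scale factor) and washout without b(t)."""
    return least_squares(cross_relation_residuals, x0, args=(curves, t)).x
```

As in the abstract, washin parameters are recovered only up to a common scale factor, which is why one K1 is pinned to 1.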
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-01-01
(P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). CONCLUSION: A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability. PMID:27053857
Factors Affecting the Item Parameter Estimation and Classification Accuracy of the DINA Model
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan; Deng, Weiling
2010-01-01
To better understand the statistical properties of the deterministic inputs, noisy "and" gate cognitive diagnosis (DINA) model, the impact of several factors on the quality of the item parameter estimates and classification accuracy was investigated. Results of the simulation study indicate that the fully Bayes approach is most accurate when the…
Hypermnesia using auditory input.
Allen, J
1992-07-01
The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564
Instrumentation for measuring energy inputs to implements
Tompkins, F.D.; Wilhelm, L.R.
1981-01-01
A microcomputer-based instrumentation system for monitoring tractor operating parameters and energy inputs to implements was developed and mounted on a tractor with a 75-kW power takeoff. The instrumentation system, including sensors and data-handling equipment, is discussed. 10 refs.
Selecting training inputs via greedy rank covering
Buchsbaum, A.L.; Santen, J.P.H. van
1996-12-31
We present a general method for selecting a small set of training inputs, the observations of which will suffice to estimate the parameters of a given linear model. We exemplify the algorithm in terms of predicting segmental duration of phonetic-segment feature vectors in a text-to-speech synthesizer, but the algorithm will work for any linear model and its associated domain.
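A Gram-Schmidt-style greedy sweep is one natural instantiation of such rank covering. The sketch below (an illustrative reconstruction, not necessarily the authors' exact algorithm) picks candidate inputs until they span the row space of the design matrix, so the linear model's parameters become identifiable from just those observations.

```python
import numpy as np

def greedy_rank_cover(X, tol=1e-10):
    """Greedily select rows of X until they span its row space.

    Each step takes the candidate with the largest component outside
    the span of the rows already chosen (largest residual norm), then
    projects that direction out of the remaining candidates.
    """
    X = np.asarray(X, float)
    chosen = []
    resid = X.copy()
    while True:
        norms = np.linalg.norm(resid, axis=1)
        i = int(np.argmax(norms))
        if norms[i] <= tol:          # nothing new left to cover
            return chosen
        chosen.append(i)
        q = resid[i] / norms[i]      # new orthonormal direction
        resid = resid - np.outer(resid @ q, q)
```

For a duration model over feature vectors, this returns a small training set whose observations suffice to estimate all coefficients.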
DO MODEL UNCERTAINTY WITH CORRELATED INPUTS
The effect of correlation among the input parameters and variables on the output uncertainty of the Streeter-Phelps water quality model is examined. Three uncertainty analysis techniques are used: sensitivity analysis, first-order error analysis, and Monte Carlo simulation. Modifie...
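The Monte Carlo technique with correlated inputs can be sketched against the classical Streeter-Phelps deficit solution; the distributions and parameter values below are invented for illustration, not taken from the study.

```python
import numpy as np

def streeter_phelps(t, kd, ka, L0, D0):
    """Classical Streeter-Phelps dissolved-oxygen deficit D(t)."""
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
        + D0 * np.exp(-ka * t)

def mc_deficit(t, mean, cov, L0=10.0, D0=1.0, n=5000, seed=0):
    """Monte Carlo output uncertainty with *correlated* inputs: the
    deoxygenation and reaeration rates (kd, ka) are drawn jointly from
    a multivariate normal, so their correlation propagates to D(t)."""
    rng = np.random.default_rng(seed)
    kd, ka = rng.multivariate_normal(mean, cov, size=n).T
    d = streeter_phelps(t, kd, ka, L0, D0)
    return d.mean(), d.std()
```

Comparing the output standard deviation with and without off-diagonal covariance terms shows directly how input correlation changes model uncertainty.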
NASA Astrophysics Data System (ADS)
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2006-12-01
defining the HYDRO1k metrics of aspect, flow direction, slope etc., we refine the grid scale from the current HYDRO1k GTOPO30 DEM dimension of 1 km to a local DEM for our study area having a grid scale of 0.25 km. We employ higher-order 9-point finite differences to compute local topographic gradients, then aggregate (or integrate) the "HYDRO1k-type" parameters to the 1 km pixel dimensions of the NDVI data. We then perform a multivariate comparison of the derived hydrologic parameters with characteristic phenological behaviors from the interannual NDVI modeled time series. For example, as one would expect, in spite of similarities of peak NDVI values in a particularly "wet" year, irrigated agricultural sites are well-discriminated from natural semi-arid grassland due to the multivariate controls from observed precipitation, surface water runoff, topographic slope, and the intrinsic fine structure in the behavior of the interannual NDVI time series. NDVI time series from montane areas provide interesting insight into the time of disappearance of snow cover, as well as the relation of summertime phenology to elevation and slope. A striking pattern emerges regarding the similitude between seasonal surface water runoff and interannual trends in phenology that corroborates the potential of NDVI data to monitor and characterize long term trends in the response of phenology to hydrological processes.
NASA Astrophysics Data System (ADS)
The Arctic Research and Policy Act (Eos, June 26, 1984, p. 412) was signed into law by President Ronald Reagan this past July. One of its objectives is to develop a 5-year research plan for the Arctic. A request for input to this plan is being issued this week to nearly 500 people in science, engineering, and industry.To promote Arctic research and to recommend research policy in the Arctic, the new law establishes a five-member Arctic Research Commission, to be appointed by the President, and establishes an Interagency Arctic Research Policy Committee, to be composed of representatives from nearly a dozen agencies having interests in the region. The commission will make policy recommendations, and the interagency committee will implement those recommendations. The National Science Foundation (NSF) has been designated as the lead agency of the interagency committee.
Developing Accurate Spatial Maps of Cotton Fiber Quality Parameters
Technology Transfer Automated Retrieval System (TEKTRAN)
Awareness of the importance of cotton fiber quality (Gossypium, L. sps.) has increased as advances in spinning technology require better quality cotton fiber. Recent advances in geospatial information sciences allow an improved ability to study the extent and causes of spatial variability in fiber p...
Input Multiplicities in Process Control.
ERIC Educational Resources Information Center
Koppel, Lowell B.
1983-01-01
Describes research investigating potential effect of input multiplicity on multivariable chemical process control systems. Several simple processes are shown to exhibit the possibility of theoretical developments on input multiplicity and closely related phenomena are discussed. (JN)
Modeling and generating input processes
Johnson, M.E.
1987-01-01
This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate, but much of the philosophy is relevant to univariate inputs as well. 14 refs.
Chaudhary, Naveed Ishtiaq; Raja, Muhammad Asif Zahoor; Khan, Junaid Ali; Aslam, Muhammad Saeed
2013-01-01
A novel algorithm is developed based on a fractional signal processing approach for parameter estimation of input nonlinear control autoregressive (INCAR) models. The design scheme consists of parameterization of INCAR systems to obtain linear-in-parameter models and use of the fractional least mean square algorithm (FLMS) for adaptation of unknown parameter vectors. The performance analyses of the proposed scheme are carried out with third-order Volterra least mean square (VLMS) and kernel least mean square (KLMS) algorithms based on convergence to the true values of INCAR systems. It is found that the proposed FLMS algorithm provides more accurate and convergent results than VLMS and KLMS under different scenarios and over low-to-high signal-to-noise ratios. PMID:23853538
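For orientation, plain integer-order LMS on a linear-in-parameter model is sketched below; the paper's FLMS augments this gradient step with a fractional-order term, which is not reproduced here.

```python
import numpy as np

def lms(phi, y, mu=0.05):
    """Plain (integer-order) LMS for a linear-in-parameter model
    y_k = phi_k . w.  Each sample nudges the weight vector along the
    instantaneous gradient of the squared prediction error."""
    w = np.zeros(phi.shape[1])
    for phi_k, y_k in zip(phi, y):
        w += mu * (y_k - phi_k @ w) * phi_k
    return w
```

On a linear-in-parameter regression this converges to the true weights; the fractional-order variant is claimed to improve that convergence for INCAR systems.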
Fast and accurate propagation of coherent light
Lewis, R. D.; Beylkin, G.; Monzón, L.
2013-01-01
We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
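The small-separation Fourier-domain evaluation mentioned above corresponds to the standard angular-spectrum method, which is easy to sketch. The grid, wavelength, and field below are illustrative; this is not the authors' Gaussian-sum algorithm, only the regime it improves upon.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a monochromatic field between parallel planes by
    multiplying its angular spectrum by the exact transfer function
    exp(i z sqrt(k^2 - kx^2 - ky^2)).  Accurate when the plane
    separation z is small relative to the aliasing limit of the grid."""
    n = u0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    # Complex sqrt makes evanescent components decay rather than blow up.
    kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * z * kz))
```

At z = 0 the transfer function is unity and the field is returned unchanged; a uniform plane wave only acquires a global phase.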
NASA Astrophysics Data System (ADS)
Andréassian, Vazken; Perrin, Charles; Michel, Claude
2004-01-01
This paper attempts to assess the impact of improved estimates of areal potential evapotranspiration (PE) on the results of two rainfall-runoff models. A network of 42 PE stations was used for a sample of 62 watersheds and two watershed models of different complexity (the four-parameter GR4J model and an eight-parameter modified version of TOPMODEL), to test how sensitive rainfall-runoff models were to watershed PE estimated with the Penman equation. First, Penman PE estimates were regionalized in the Massif Central highlands of France, a mountainous area where PE is known to vary greatly with elevation, latitude, and longitude. The two watershed models were then used to assess changes in model efficiency with the improved PE input. Finally, the behavior of one of the model's parameters was analyzed, to understand how watershed models cope with systematic errors in the estimated PE input. In terms of model efficiency, in both models it was found that very simple assumptions on watershed PE input (the same average input for all watersheds) yield the same results as more accurate input obtained from regionalization. The detailed evaluation of the GR4J model calibrated with different PE input scenarios showed that the model is clearly sensitive to PE input, but that it uses its two production parameters to adapt to the various PE scenarios.
Estimating nonstationary input signals from a single neuronal spike train
NASA Astrophysics Data System (ADS)
Kim, Hideaki; Shinomoto, Shigeru
2012-11-01
Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic assumption that presynaptic activity is constant over time. Here, we propose tracking temporal variations in input parameters with a two-step analysis method. First, nonstationary firing characteristics comprising the firing rate and non-Poisson irregularity are estimated from a spike train using a computationally feasible state-space algorithm. Then, information about the firing characteristics is converted into likely input parameters over time using a transformation formula, which was constructed by inverting the neuronal forward transformation of the input current to output spikes. By analyzing spike trains recorded in vivo, we found that neuronal input parameters are similar in the primary visual cortex V1 and middle temporal area, whereas parameters in the lateral geniculate nucleus of the thalamus were markedly different.
Olivares, Alberto; Ruiz-Garcia, Gonzalo; Olivares, Gonzalo; Górriz, Juan Manuel; Ramirez, Javier
2013-01-01
Ellipsoid fitting algorithms are widely used to calibrate Magnetic Angular Rate and Gravity (MARG) sensors. These algorithms are based on the minimization of an error function that optimizes the parameters of a mathematical sensor model that is subsequently applied to calibrate the raw data. The convergence of this kind of algorithm to a correct solution is very sensitive to the input data. Input calibration datasets must be properly distributed in space so data can be accurately fitted to the theoretical ellipsoid model. Gathering a well-distributed set is not an easy task, as it is difficult for the operator carrying out the maneuvers to keep a visual record of all the positions that have already been covered, as well as the remaining ones. It would then be desirable to have a system that gives feedback to the operator when the dataset is ready, or to enable the calibration process in auto-calibrated systems. In this work, we propose two different algorithms that analyze the goodness of the distributions by computing four different indicators. The first approach is based on a thresholding algorithm that uses only one indicator as its input and the second one is based on a Fuzzy Logic System (FLS) that estimates the calibration error for a given calibration set using a weighted combination of two indicators. Very accurate classification between valid and invalid datasets is achieved with an average Area Under Curve (AUC) of up to 0.98. PMID:24013490
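The ellipsoid-fitting step itself can be sketched with a reduced, axis-aligned quadric model (an illustrative simplification of the full sensor model; real MARG calibration also handles rotation and cross-axis terms).

```python
import numpy as np

def fit_axis_aligned_ellipsoid(pts):
    """Least-squares fit of a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1,
    an axis-aligned ellipsoid.  Returns the centre and the semi-axes,
    from which hard-iron offset and per-axis scale would be derived."""
    x, y, z = pts.T
    D = np.column_stack([x * x, y * y, z * z, x, y, z])
    a, b, c, d, e, f = np.linalg.lstsq(D, np.ones(len(pts)), rcond=None)[0]
    centre = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
    # Complete the squares to recover the semi-axis lengths.
    g = 1 + d * d / (4 * a) + e * e / (4 * b) + f * f / (4 * c)
    axes = np.sqrt(np.array([g / a, g / b, g / c]))
    return centre, axes
```

The fit degrades sharply when the input points cluster on one side of the ellipsoid, which is exactly the data-distribution problem the paper's indicators are designed to detect.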
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
Waite, Anthony; /SLAC
2011-09-07
Serial Input/Output (SIO) is designed to be a long term storage format of a sophistication somewhere between simple ASCII files and the techniques provided by inter alia Objectivity and Root. The former tend to be low density, information lossy (floating point numbers lose precision) and inflexible. The latter require abstract descriptions of the data with all that that implies in terms of extra complexity. The basic building blocks of SIO are streams, records and blocks. Streams provide the connections between the program and files. The user can define an arbitrary list of streams as required. A given stream must be opened for either reading or writing. SIO does not support read/write streams. If a stream is closed during the execution of a program, it can be reopened in either read or write mode to the same or a different file. Records represent a coherent grouping of data. Records consist of a collection of blocks (see next paragraph). The user can define a variety of records (headers, events, error logs, etc.) and request that any of them be written to any stream. When SIO reads a file, it first decodes the record name and if that record has been defined and unpacking has been requested for it, SIO proceeds to unpack the blocks. Blocks are user provided objects which do the real work of reading/writing the data. The user is responsible for writing the code for these blocks and for identifying these blocks to SIO at run time. To write a collection of blocks, the user must first connect them to a record. The record can then be written to a stream as described above. Note that the same block can be connected to many different records. When SIO reads a record, it scans through the blocks written and calls the corresponding block object (if it has been defined) to decode it. Undefined blocks are skipped. Each of these categories (streams, records and blocks) have some characteristics in common. Every stream, record and block has a name with the condition that each
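The stream/record/block layering can be miniaturized with length-prefixed framing. The wire format below is invented, not SIO's actual encoding; it only demonstrates the key behavior described above: on read, the record name is decoded first, and blocks that have not been defined are skipped.

```python
import struct

def _framed(b: bytes) -> bytes:
    """Length-prefix a byte string (illustrative framing, not real SIO)."""
    return struct.pack('<I', len(b)) + b

def write_record(name, blocks):
    """Serialize one record: its name, then (block-name, float values)
    pairs.  A record is a coherent grouping of blocks, as in SIO."""
    out = _framed(name.encode())
    for bname, values in blocks:
        out += _framed(bname.encode())
        out += _framed(struct.pack(f'<{len(values)}d', *values))
    return out

def read_record(data, defined):
    """Decode the record name, then unpack only blocks whose names are
    in `defined`; undefined blocks are skipped, mirroring SIO reads."""
    pos = 0
    def take():
        nonlocal pos
        (n,) = struct.unpack_from('<I', data, pos)
        pos += 4
        chunk = data[pos:pos + n]
        pos += n
        return chunk
    name = take().decode()
    blocks = {}
    while pos < len(data):
        bname = take().decode()
        payload = take()
        if bname in defined:
            blocks[bname] = list(struct.unpack(f'<{len(payload) // 8}d', payload))
    return name, blocks
```

Because every block is length-prefixed, a reader can hop over blocks it does not understand, which is what makes the format forward-compatible.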
Solar astrophysical fundamental parameters
NASA Astrophysics Data System (ADS)
Meftah, M.; Irbah, A.; Hauchecorne, A.
2014-08-01
The accurate determination of the solar photospheric radius has been an important problem in astronomy for many centuries. From the measurements made by the PICARD spacecraft during the transit of Venus in 2012, we obtained a solar radius of 696,156±145 kilometres. This value is consistent with recent measurements carried out outside the atmosphere. This observation leads us to propose a change of the canonical value obtained by Arthur Auwers in 1891. An accurate value for total solar irradiance (TSI) is crucial for the Sun-Earth connection, and represents another solar astrophysical fundamental parameter. Based on measurements collected from different space instruments over the past 35 years, the absolute value of the TSI, representative of a quiet Sun, has gradually decreased from 1,371W.m-2 in 1978 to around 1,362W.m-2 in 2013, mainly due to radiometer calibration differences. Based on the PICARD data and in agreement with Total Irradiance Monitor measurements, we predicted the TSI input at the top of the Earth's atmosphere at a distance of one astronomical unit (149,597,870 kilometres) from the Sun to be 1,362±2.4W.m-2, which may be proposed as a reference value. To conclude, from the measurements made by the PICARD spacecraft, we obtained a solar photospheric equator-to-pole radius difference value of 5.9±0.5 kilometres. This value is consistent with measurements made by different space instruments, and can be given as a reference value.
Evaluation of severe accident risks: Quantification of major input parameters
Harper, F.T.; Payne, A.C.; Breeding, R.J.; Gorham, E.D.; Brown, T.D.; Rightley, G.S.; Gregory, J.J. ); Murfin, W. ); Amos, C.N. )
1991-04-01
This report records part of the vast amount of information received during the expert judgment elicitation process that took place in support of the NUREG-1150 effort sponsored by the U.S. Nuclear Regulatory Commission. The results of the Containment Loads and Molten Core/Containment Interaction Expert Panel Elicitation are presented in this part of Volume 2 of NUREG/CR-4551. The Containment Loads Expert Panel considered seven issues: (1) hydrogen phenomena at Grand Gulf; (2) hydrogen burn at vessel breach at Sequoyah; (3) BWR reactor building failure due to hydrogen; (4) Grand Gulf containment loads at vessel breach; (5) pressure increment in the Sequoyah containment at vessel breach; (6) loads at vessel breach: Surry; and (7) pressure increment in the Zion containment at vessel breach. The report begins with a brief discussion of the methods used to elicit the information from the experts. The information for each issue is then presented in five sections: (1) a brief definition of the issue, (2) a brief summary of the technical rationale supporting the distributions developed by each of the experts, (3) a brief description of the operations that the project staff performed on the raw elicitation results in order to aggregate the distributions, (4) the aggregated distributions, and (5) the individual expert elicitation summaries. The Molten Core/Containment Interaction Panel considered three issues. The results of the following two of these issues are presented in this document: (1) Peach Bottom drywell shell meltthrough; and (2) Grand Gulf pedestal erosion. 89 figs., 154 tabs.
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
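When individual load peaks are modeled as i.i.d. normal (an illustrative assumption; the report derives its distributions from dynamic load simulations), the extreme-value step has a closed form: the mission maximum of n peaks has CDF F(x)^n, which can be inverted directly for a design quantile.

```python
from scipy.stats import norm

def design_limit_load(mu, sigma, n, p=0.99):
    """p-quantile of the largest of n i.i.d. Normal(mu, sigma) peak
    loads: the mission maximum has CDF Phi((x - mu) / sigma)**n, so
    the design limit load is mu + sigma * Phi^-1(p**(1/n))."""
    return mu + sigma * norm.ppf(p ** (1.0 / n))
```

The design limit load grows slowly with the number of load peaks per mission, which is the qualitative behavior extreme-value theory predicts.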
SDR input power estimation algorithms
NASA Astrophysics Data System (ADS)
Briones, J. C.; Nappier, J. M.
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
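The straight-line estimator described in both records above can be sketched as an ordinary least-squares fit; the model form P = a*AGC + b*T + c and all coefficients below are assumptions for illustration, since the flight algorithm was characterized empirically over a specific input-power range.

```python
import numpy as np

def fit_power_estimator(agc, temp, power):
    """Least-squares fit of the straight-line estimator
    P_hat = a*AGC + b*T + c over a (narrow) input-power range."""
    A = np.column_stack([agc, temp, np.ones(len(agc))])
    coef, *_ = np.linalg.lstsq(A, power, rcond=None)
    return coef

def estimate_power(coef, agc, temp):
    """Apply the fitted estimator to new AGC and temperature readings."""
    return coef[0] * agc + coef[1] * temp + coef[2]
```

The adaptive-filter and neural-network algorithms in the paper address the nonlinear response over the wider input-power range, where a single straight line no longer fits.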
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
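One plausible reading of "determines the unknown value ... from the first input and the second input" is a nearest-neighbour lookup over the reference motors. The sketch below is an interpretation for illustration, not the patented method.

```python
import numpy as np

def estimate_unknown_parameter(known, reference_known, reference_unknown, k=3):
    """k-nearest-neighbour sketch: find the reference motors whose known
    parameters are closest to this motor's (first input), and average
    their values of the unknown parameter (second input)."""
    ref = np.asarray(reference_known, float)
    d = np.linalg.norm(ref - np.asarray(known, float), axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(reference_unknown, float)[nearest]))
```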
Third order TRANSPORT with MAD (Methodical Accelerator Design) input
Carey, D.C.
1988-09-20
This paper describes computer-aided design codes for particle accelerators. Among the topics discussed are: input beam description; parameters and algebraic expressions; the physical elements; beam lines; operations; and third-order transfer matrix. (LSP)
An Integrative Method for Accurate Comparative Genome Mapping
Swidan, Firas; Rocha, Eduardo P. C; Shmoish, Michael; Pinter, Ron Y
2006-01-01
We present MAGIC, an integrative and accurate method for comparative genome mapping. Our method consists of two phases: preprocessing for identifying “maximal similar segments,” and mapping for clustering and classifying these segments. MAGIC's main novelty lies in its biologically intuitive clustering approach, which aims towards both calculating reorder-free segments and identifying orthologous segments. In the process, MAGIC efficiently handles ambiguities resulting from duplications that occurred before the speciation of the considered organisms from their most recent common ancestor. We demonstrate both MAGIC's robustness and scalability: the former is asserted with respect to its initial input and with respect to its parameters' values. The latter is asserted by applying MAGIC to distantly related organisms and to large genomes. We compare MAGIC to other comparative mapping methods and provide detailed analysis of the differences between them. Our improvements allow a comprehensive study of the diversity of genetic repertoires resulting from large-scale mutations, such as indels and duplications, including explicitly transposable and phagic elements. The strength of our method is demonstrated by detailed statistics computed for each type of these large-scale mutations. MAGIC enabled us to conduct a comprehensive analysis of the different forces shaping prokaryotic genomes from different clades, and to quantify the importance of novel gene content introduced by horizontal gene transfer relative to gene duplication in bacterial genome evolution. We use these results to investigate the breakpoint distribution in several prokaryotic genomes. PMID:16933978
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode, using no a priori information, with control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented
Real-Time Parameter Estimation in the Frequency Domain
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1999-01-01
A method for real-time estimation of parameters in a linear dynamic state-space model was developed and studied. The application is aircraft dynamic model parameter estimation from measured data in flight for indirect adaptive or reconfigurable control. Equation error in the frequency domain was used with a recursive Fourier transform for the real-time data analysis. Linear and nonlinear simulation examples and flight test data from the F-18 High Alpha Research Vehicle (HARV) were used to demonstrate that the technique produces accurate model parameter estimates with appropriate error bounds. Parameter estimates converged in less than one cycle of the dominant dynamic mode's natural frequency, using control surface inputs measured in flight during ordinary piloted maneuvers. The real-time parameter estimation method has low computational requirements and could be implemented aboard an aircraft in real time.
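The frequency-domain equation-error idea in the two abstracts above can be sketched in a few lines. The example below is a hypothetical stand-in (a first-order scalar system with invented parameters, not Morelli's aircraft model): the signals are Fourier-transformed with a running sum, and the unknown parameters are recovered by least squares on the transformed equation error.

```python
import numpy as np

# Toy system xdot = a*x + b*u with true a = -2, b = 1 (illustrative values).
dt, T = 0.001, 50.0
t = np.arange(0.0, T, dt)
u = np.sin(2*np.pi*0.2*t) + 0.5*np.sin(2*np.pi*0.7*t)   # "pilot" input signal

a_true, b_true = -2.0, 1.0
x = np.zeros_like(t)
for i in range(len(t) - 1):              # Euler integration of the true system
    x[i+1] = x[i] + dt*(a_true*x[i] + b_true*u[i])
xdot = np.gradient(x, dt)                # numerical derivative for the equation error

# Running ("recursive") Fourier sums: each new sample adds x[i]*exp(-j*w*t[i])*dt;
# written in batch form here for brevity.
omega = 2*np.pi*np.array([0.1, 0.2, 0.5, 0.7, 1.0])     # analysis freqs, Hz -> rad/s
E = np.exp(-1j*np.outer(omega, t)) * dt
X, U, Xd = E @ x, E @ u, E @ xdot

# Equation error in the frequency domain: Xd = a*X + b*U; stack real and
# imaginary parts and solve by linear least squares.
A = np.column_stack([X, U])
A_ri = np.vstack([A.real, A.imag])
y_ri = np.concatenate([Xd.real, Xd.imag])
(a_hat, b_hat), *_ = np.linalg.lstsq(A_ri, y_ri, rcond=None)
print(a_hat, b_hat)                      # close to -2 and 1
```

In a real-time setting the Fourier sums and the small least-squares solve are updated at each new sample, which is what keeps the computational cost low.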
How to accurately bypass damage
Broyde, Suse; Patel, Dinshaw J.
2016-01-01
Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
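The pairing of a finite-difference discretization with Richardson's extrapolation can be illustrated on the harmonic oscillator, whose exact ground-state energy is 1 in scaled units. This sketch uses a standard second-order discretization, not necessarily the authors' exact scheme.

```python
import numpy as np

def ground_energy(h, L=6.0):
    """Finite-difference ground-state energy of -psi'' + x^2 psi = E psi
    on [-L, L] with mesh size h and Dirichlet boundaries."""
    n = int(round(2*L/h)) + 1
    x = np.linspace(-L, L, n)
    H = (np.diag(2.0/h**2 + x**2)
         + np.diag(-np.ones(n-1)/h**2, 1)
         + np.diag(-np.ones(n-1)/h**2, -1))
    return np.linalg.eigvalsh(H)[0]

h = 0.06
E1, E2 = ground_energy(h), ground_energy(h/2)
E_rich = (4*E2 - E1) / 3       # one Richardson step cancels the O(h^2) error
print(E1 - 1.0, E_rich - 1.0)  # extrapolated error is orders of magnitude smaller
```

The two-mesh difference also furnishes the built-in error estimate mentioned in the abstract: under the O(h²) error model, E1 - E ≈ (4/3)(E1 - E2).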
REL - English Bulk Data Input.
ERIC Educational Resources Information Center
Bigelow, Richard Henry
A bulk data input processor which is available for the Rapidly Extensible Language (REL) English versions is described. In REL English versions, statements that declare names of data items and their interrelationships normally are lines from a terminal or cards in a batch input stream. These statements provide a convenient means of declaring some…
Accurate wavelength calibration method for flat-field grating spectrometers.
Du, Xuewei; Li, Chaoyang; Xu, Zhe; Wang, Qiuping
2011-09-01
A portable spectrometer prototype is built to study wavelength calibration for flat-field grating spectrometers. An accurate calibration method called parameter fitting is presented. Both optical and structural parameters of the spectrometer are included in the wavelength calibration model, which accurately describes the relationship between wavelength and pixel position. Along with higher calibration accuracy, the proposed calibration method can provide information about errors in the installation of the optical components, which will be helpful for spectrometer alignment. PMID:21929865
Accurate Molecular Polarizabilities Based on Continuum Electrostatics
Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.
2013-01-01
A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R² of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
PREVIMER : Meteorological inputs and outputs
NASA Astrophysics Data System (ADS)
Ravenel, H.; Lecornu, F.; Kerléguer, L.
2009-09-01
PREVIMER is a pre-operational system aiming to provide a wide range of users, from private individuals to professionals, with short-term forecasts about the coastal environment along the French coastlines bordering the English Channel, the Atlantic Ocean, and the Mediterranean Sea. Observation data and digital modelling tools first provide 48-hour (probably 96-hour by summer 2009) forecasts of sea states, currents, sea water levels and temperatures. The follow-up of an increasing number of biological parameters will, in time, complete this overview of the coastal environment. Working in partnership with the French Naval Hydrographic and Oceanographic Service (Service Hydrographique et Océanographique de la Marine, SHOM), the French National Weather Service (Météo-France), the French public science and technology research institute (Institut de Recherche pour le Développement, IRD), the European Institute of Marine Studies (Institut Universitaire Européen de la Mer, IUEM) and many others, IFREMER (the French public institute for marine research) is supplying the technologies needed to ensure this pertinent information, available daily on the Internet at http://www.previmer.org and stored at the Operational Coastal Oceanographic Data Centre. Since 2006, PREVIMER has published the results of demonstrators assigned to limited geographic areas and to specific applications. This system remains experimental. The following topics are covered: hydrodynamic circulation, sea states, follow-up of passive tracers (conservative or non-conservative, specifically of microbiological origin), biogeochemical state, and primary production. Lastly, PREVIMER provides researchers and R&D departments with modelling tools and access to the database, in which the observation data and the modelling results are stored, to undertake environmental studies on new sites. The communication will focus on meteorological inputs to and outputs from PREVIMER. It will draw the lessons from almost 3 years during
Anomalous neuronal responses to fluctuated inputs
NASA Astrophysics Data System (ADS)
Hosaka, Ryosuke; Sakai, Yutaka
2015-10-01
The irregular firing of a cortical neuron is thought to result from a highly fluctuating drive that is generated by the balance of excitatory and inhibitory synaptic inputs. A previous study reported anomalous responses of the Hodgkin-Huxley neuron to the fluctuated inputs where an irregularity of spike trains is inversely proportional to an input irregularity. In the current study, we investigated the origin of these anomalous responses with the Hindmarsh-Rose neuron model, map-based models, and a simple mixture of interspike interval distributions. First, we specified the parameter regions for the bifurcations in the Hindmarsh-Rose model, and we confirmed that the model reproduced the anomalous responses in the dynamics of the saddle-node and subcritical Hopf bifurcations. For both bifurcations, the Hindmarsh-Rose model shows bistability in the resting state and the repetitive firing state, which indicated that the bistability was the origin of the anomalous input-output relationship. Similarly, the map-based model that contained bistability reproduced the anomalous responses, while the model without bistability did not. These results were supported by additional findings that the anomalous responses were reproduced by mimicking the bistable firing with a mixture of two different interspike interval distributions. Decorrelation of spike trains is important for neural information processing. For such spike train decorrelation, irregular firing is key. Our results indicated that irregular firing can emerge from fluctuating drives, even weak ones, under conditions involving bistability. The anomalous responses, therefore, contribute to efficient processing in the brain.
Anomalous neuronal responses to fluctuated inputs.
Hosaka, Ryosuke; Sakai, Yutaka
2015-10-01
The irregular firing of a cortical neuron is thought to result from a highly fluctuating drive that is generated by the balance of excitatory and inhibitory synaptic inputs. A previous study reported anomalous responses of the Hodgkin-Huxley neuron to the fluctuated inputs where an irregularity of spike trains is inversely proportional to an input irregularity. In the current study, we investigated the origin of these anomalous responses with the Hindmarsh-Rose neuron model, map-based models, and a simple mixture of interspike interval distributions. First, we specified the parameter regions for the bifurcations in the Hindmarsh-Rose model, and we confirmed that the model reproduced the anomalous responses in the dynamics of the saddle-node and subcritical Hopf bifurcations. For both bifurcations, the Hindmarsh-Rose model shows bistability in the resting state and the repetitive firing state, which indicated that the bistability was the origin of the anomalous input-output relationship. Similarly, the map-based model that contained bistability reproduced the anomalous responses, while the model without bistability did not. These results were supported by additional findings that the anomalous responses were reproduced by mimicking the bistable firing with a mixture of two different interspike interval distributions. Decorrelation of spike trains is important for neural information processing. For such spike train decorrelation, irregular firing is key. Our results indicated that irregular firing can emerge from fluctuating drives, even weak ones, under conditions involving bistability. The anomalous responses, therefore, contribute to efficient processing in the brain. PMID:26565270
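The mixture argument above is easy to reproduce: two individually regular (low-CV) interspike-interval distributions, one short ("repetitive firing") and one long ("resting"), combine into a highly irregular train. The gamma parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
# Two regular ISI distributions (gamma with shape 50 has CV = 1/sqrt(50) ~ 0.14)
short = rng.gamma(shape=50.0, scale=0.2, size=n)   # mean ~10 (short intervals)
long_ = rng.gamma(shape=50.0, scale=4.0, size=n)   # mean ~200 (long intervals)
isi = np.where(rng.random(n) < 0.5, short, long_)  # bistable mixture of the two

cv = lambda s: s.std() / s.mean()                  # coefficient of variation
print(cv(short), cv(long_), cv(isi))               # the mixture is far more irregular
```

Each component alone would look like regular firing; only the switching between the two regimes, i.e., the bistability, produces the high irregularity.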
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Among profile, helix, and tooth thickness, pitch is one of the most important parameters in the evaluation of involute gear measurements. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
Accurate determination of characteristic relative permeability curves
NASA Astrophysics Data System (ADS)
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
Dynamic Susceptibility Contrast MRI with Localized Arterial Input Functions
Lee, J.J.; Bretthorst, G.L.; Derdeyn, C.P.; Powers, W.J.; Videen, T.O.; Snyder, A.Z.; Markham, J.; Shimony, J.S.
2010-01-01
Compared to gold-standard measurements of cerebral perfusion with positron emission tomography (PET) using H2[15O] tracers, measurements with dynamic susceptibility contrast (DSC) MR are more accessible, less expensive and less invasive. However, existing methods for analyzing and interpreting data from DSC MR have characteristic disadvantages that include sensitivity to incorrectly modeled delay and dispersion in a single, global arterial input function (AIF). We describe a model of tissue microcirculation derived from tracer kinetics which estimates for each voxel a unique, localized AIF (LAIF). Parameters of the model were estimated using Bayesian probability theory and Markov-chain Monte Carlo, circumventing difficulties arising from numerical deconvolution. Applying the new method to imaging studies from a cohort of fourteen patients with chronic, atherosclerotic, occlusive disease showed strong correlations between perfusion measured by DSC MR with LAIF and perfusion measured by quantitative PET with H2[15O]. Regression to PET measurements enabled conversion of DSC MR to a physiological scale. Regression analysis for LAIF gave estimates of a scaling factor for quantitation which described perfusion accurately in patients with substantial variability in hemodynamic impairment. PMID:20432301
Nonlinear input-output systems
NASA Technical Reports Server (NTRS)
Hunt, L. R.; Luksic, Mladen; Su, Renjeng
1987-01-01
Necessary and sufficient conditions are found for the nonlinear system ẋ = f(x) + u g(x), y = h(x) to be locally feedback equivalent to the controllable linear system ξ̇ = Aξ + bv, y = Cξ, which has linear output. Only the single-input, single-output case is considered; however, the results generalize to multi-input, multi-output systems.
NASA Technical Reports Server (NTRS)
Sidar, M.
1976-01-01
The problem of identifying constant and variable parameters in multi-input, multi-output, linear and nonlinear systems is considered, using the maximum likelihood approach. An iterative algorithm, leading to recursive identification and tracking of the unknown parameters and the noise covariance matrix, is developed. Agile tracking and accurate, unbiased parameter estimates are obtained. Necessary conditions for a globally asymptotically stable identification process are provided; the conditions proved to be useful and efficient. Among the different cases studied, the stability derivatives of an aircraft were identified, and some of the results are shown as examples.
Guidance laws with input saturation and nonlinear robust H∞ observers.
Liao, Fei; Luo, Qiang; Ji, Haibo; Gai, Wen
2016-07-01
A novel three-dimensional guidance law based on input-to-state stability (ISS) and nonlinear robust H∞ filtering is proposed for interception of maneuvering targets in the presence of input saturation. A dead zone operator model is introduced to design an ISS-based guidance law that guarantees robust tracking of a maneuvering target. Input saturation and system stability are considered simultaneously, and global input-to-state stability is ensured in theory. Since the line-of-sight (LOS) rate is difficult for a pursuer to measure accurately in practice, the nonlinear robust H∞ filtering method is utilized to estimate it. Stability analyses and simulation results show that the presented approach is effective. PMID:27018143
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, J. A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1998-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional (3-D) electromagnetic computer code MAFIA (solution of MAxwell's equations by the Finite Integration Algorithm). Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes, making it possible, for the first time, to design a complete TWT via computer simulation.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1997-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Robust fault-tolerant tracking control design for spacecraft under control input saturation.
Bustan, Danyal; Pariz, Naser; Sani, Seyyed Kamal Hosseini
2014-07-01
In this paper, a continuous globally stable tracking control algorithm is proposed for a spacecraft in the presence of unknown actuator failure, control input saturation, uncertainty in the inertia matrix and external disturbances. The design method is based on variable structure control and has the following properties: (1) fast and accurate response in the presence of bounded disturbances; (2) robustness to the partial loss of actuator effectiveness; (3) explicit consideration of control input saturation; and (4) robustness to uncertainty in the inertia matrix. In contrast to traditional fault-tolerant control methods, the proposed controller does not require knowledge of the actuator faults and is implemented without explicit fault detection and isolation processes. In the proposed controller, a single parameter is adjusted dynamically in such a way that it is possible to prove that both attitude and angular velocity errors tend to zero asymptotically. The stability proof is based on a Lyapunov analysis and the properties of the singularity-free quaternion representation of spacecraft dynamics. Results of numerical simulations show that the proposed controller is successful in achieving high attitude performance in the presence of external disturbances, actuator failures, and control input saturation. PMID:24751476
Strategy Guideline. Accurate Heating and Cooling Load Calculations
Burdick, Arlan
2011-06-01
This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.
Strategy Guideline: Accurate Heating and Cooling Load Calculations
Burdick, A.
2011-06-01
This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.
Input-output dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Annoni, Jennifer; Jovanovic, Mihailo; Nichols, Joseph; Seiler, Peter
2015-11-01
The objective of this work is to obtain reduced-order models for fluid flows that can be used for control design. High-fidelity computational fluid dynamic models provide accurate characterizations of complex flow dynamics but are not suitable for control design due to their prohibitive computational complexity. A variety of methods, including proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD), can be used to extract the dominant flow structures and obtain reduced-order models. In this presentation, we introduce an extension to DMD that can handle problems with inputs and outputs. The proposed method, termed input-output dynamic mode decomposition (IODMD), utilizes a subspace identification technique to obtain models of low complexity. We show that, relative to standard DMD, the introduction of the external forcing in IODMD provides robustness with respect to small disturbances and noise. We use the linearized Navier-Stokes equations in a channel flow to demonstrate the utility of the proposed approach and to provide a comparison with standard techniques for obtaining reduced-order dynamical representations. NSF Career Grant No. NSFCMMI-1254129.
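A minimal sketch of how inputs enter a DMD-style regression is the closely related "DMD with control" formulation, shown below on an invented two-state system. This is a simplified stand-in for, not a reproduction of, the IODMD subspace method.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented low-order discrete-time system x[k+1] = A x[k] + B u[k]
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])

n_steps = 50
U = rng.standard_normal((1, n_steps))        # recorded input sequence
X = np.zeros((2, n_steps + 1))               # snapshot matrix, x[0] = 0
for k in range(n_steps):
    X[:, k+1] = A @ X[:, k] + B @ U[:, k]

# Fit [A B] jointly from snapshot pairs and inputs by least squares
Omega = np.vstack([X[:, :-1], U])
G = X[:, 1:] @ np.linalg.pinv(Omega)
A_hat, B_hat = G[:, :2], G[:, 2:]
print(np.allclose(A_hat, A), np.allclose(B_hat, B))
```

With noiseless, sufficiently exciting data this regression recovers (A, B) exactly; accounting for the external forcing this way is what gives input-aware variants their robustness to disturbances relative to standard DMD.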
Solar wind-magnetosphere energy input functions
Bargatze, L.F.; McPherron, R.L.; Baker, D.N.
1985-01-01
A new formula for the solar wind-magnetosphere energy input parameter, P_i, is sought by applying the constraints imposed by dimensional analysis. Applying these constraints yields a general equation for P_i equal to ρV³·l_CF²·F(M_A, θ), where ρV³ is the solar wind kinetic energy density and l_CF² is the scale size of the magnetosphere's effective energy "collection" region. The function F, which depends on M_A, the Alfvén Mach number, and on θ, the interplanetary magnetic field clock angle, is included in the general equation for P_i in order to model the magnetohydrodynamic processes responsible for solar wind-magnetosphere energy transfer. By assuming the form of the function F, it is possible to further constrain the formula for P_i. This is accomplished by using solar wind data, geomagnetic activity indices, and simple statistical methods. It is found that P_i is proportional to (ρV²)^(1/6)·V·B·G(θ), where ρV² is the solar wind dynamic pressure and V·B·G(θ) is a rectified version of the solar wind motional electric field. Furthermore, it is found that G(θ), the gating function which modulates the energy input to the magnetosphere, is well represented by a "leaky" rectifier function such as sin⁴(θ/2). This function allows for enhanced energy input when the interplanetary magnetic field is oriented southward, and for some energy input when the interplanetary magnetic field is oriented northward. 9 refs., 4 figs.
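The gating behavior described in this abstract can be evaluated directly. The sketch below uses arbitrary units and is a toy illustration of the functional form, not the authors' fit.

```python
import numpy as np

def gating(theta_deg):
    """'Leaky rectifier' gating function G(theta) = sin^4(theta/2)."""
    return np.sin(np.deg2rad(theta_deg) / 2.0) ** 4

def coupling(rho, V, B, theta_deg):
    """Energy input parameter ~ (rho V^2)^(1/6) * V * B * G(theta).
    Arbitrary units; only relative values matter for ranking activity."""
    return (rho * V**2) ** (1.0/6.0) * V * B * gating(theta_deg)

# Due-south IMF (theta = 180 deg) couples fully, theta = 90 deg gives 1/4,
# and due north gives zero; intermediate northward angles still "leak".
print(gating(180.0), gating(90.0), gating(0.0))
print(coupling(1.0, 1.0, 1.0, 180.0) / coupling(1.0, 1.0, 1.0, 90.0))  # ratio of 4
```

Holding the plasma parameters fixed, rotating the IMF from 90° to due south quadruples the predicted input, which is the rectification the abstract describes.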
ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS
Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.
2013-08-01
We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees yields ridge characteristics that do not correspond directly to the underlying mode characteristics. We used sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe this modeling and its validation in detail. The modeling has been extensively reviewed and refined, including an iterative process to improve its input parameters to better match the observations. The contribution of the leakage matrix to the accuracy of the procedure has also been carefully assessed. We present the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes. We present and discuss
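The noise-reduction step can be sketched with sine tapers, a simple multitaper family (used here for illustration; the abstract does not specify the taper family): averaging several tapered periodograms trades spectral resolution, which is expendable once modes blend into ridges, for lower realization noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
t = np.arange(n)
x = np.sin(2*np.pi*0.2*t) + rng.standard_normal(n)   # one spectral line plus noise

# Sine tapers: w_k(t) = sqrt(2/(n+1)) * sin(pi*(k+1)*(t+1)/(n+1))
K = 8
tapers = [np.sqrt(2.0/(n+1)) * np.sin(np.pi*(k+1)*(t+1)/(n+1)) for k in range(K)]

# Multitaper estimate: average the K tapered periodograms
psd = np.mean([np.abs(np.fft.rfft(w * x))**2 for w in tapers], axis=0)
freqs = np.fft.rfftfreq(n)             # cycles per sample
print(freqs[np.argmax(psd)])           # peak recovered near 0.2
```

Each taper yields a nearly independent periodogram of the same record, so averaging K of them reduces the variance of the estimate roughly K-fold while broadening each line by the taper bandwidth.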
The advanced LIGO input optics.
Mueller, Chris L; Arain, Muzammil A; Ciani, Giacomo; DeRosa, Ryan T; Effler, Anamaria; Feldbaum, David; Frolov, Valery V; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Kawabe, Keita; King, Eleanor J; Kokeyama, Keiko; Korth, William Z; Martin, Rodica M; Mullavey, Adam; Peold, Jan; Quetschke, Volker; Reitze, David H; Tanner, David B; Vorvick, Cheryl; Williams, Luke F; Mueller, Guido
2016-01-01
The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design. PMID:26827334
The advanced LIGO input optics
NASA Astrophysics Data System (ADS)
Mueller, Chris L.; Arain, Muzammil A.; Ciani, Giacomo; DeRosa, Ryan. T.; Effler, Anamaria; Feldbaum, David; Frolov, Valery V.; Fulda, Paul; Gleason, Joseph; Heintze, Matthew; Kawabe, Keita; King, Eleanor J.; Kokeyama, Keiko; Korth, William Z.; Martin, Rodica M.; Mullavey, Adam; Peold, Jan; Quetschke, Volker; Reitze, David H.; Tanner, David B.; Vorvick, Cheryl; Williams, Luke F.; Mueller, Guido
2016-01-01
The advanced LIGO gravitational wave detectors are nearing their design sensitivity and should begin taking meaningful astrophysical data in the fall of 2015. These resonant optical interferometers will have unprecedented sensitivity to the strains caused by passing gravitational waves. The input optics play a significant part in allowing these devices to reach such sensitivities. Residing between the pre-stabilized laser and the main interferometer, the input optics subsystem is tasked with preparing the laser beam for interferometry at the sub-attometer level while operating at continuous wave input power levels ranging from 100 mW to 150 W. These extreme operating conditions required every major component to be custom designed. These designs draw heavily on the experience and understanding gained during the operation of Initial LIGO and Enhanced LIGO. In this article, we report on how the components of the input optics were designed to meet their stringent requirements and present measurements showing how well they have lived up to their design.
Cerina, Federica; Zhu, Zhen; Chessa, Alessandro; Riccaboni, Massimo
2015-01-01
Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries. PMID:26222389
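As a sketch of the network-based ranking mentioned above, PageRank can be computed by power iteration on a toy three-industry flow matrix (values invented; not WIOD data).

```python
import numpy as np

# Toy inter-industry flows: F[i, j] is the monetary flow from industry i to j.
F = np.array([[0., 5., 1.],
              [2., 0., 6.],
              [1., 1., 0.]])

P = F / F.sum(axis=0)          # column-stochastic transition matrix
n = F.shape[0]
r = np.ones(n) / n
for _ in range(100):           # PageRank power iteration, damping factor 0.85
    r = 0.15/n + 0.85 * (P @ r)
r /= r.sum()
print(r, r.argmax())           # highest-ranked industry = key-industry candidate
```

Whether to rank along incoming or outgoing flows, and whether to weight by technical coefficients rather than raw flows, is a modeling choice; this sketch only shows the mechanics of the centrality computation.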
Regional Hospital Input Price Indexes
Freeland, Mark S.; Schendler, Carol Ellen; Anderson, Gerard
1981-01-01
This paper describes the development of regional hospital input price indexes that are consistent with the general methodology used for the National Hospital Input Price Index. The feasibility of developing regional indexes was investigated because individuals inquired whether different regions experienced different rates of increase in hospital input prices. The regional indexes incorporate variations in cost-share weights (the amount an expense category contributes to total spending) associated with hospital type and location, and variations in the rate of input price increases for various regions. We found that between 1972 and 1979 none of the regional price indexes increased at average annual rates significantly different from the national rate. For the more recent period, 1977 through 1979, the increase in one Census Region was significantly below the national rate. Further analyses indicated that variations in cost-share weights for various types of hospitals produced no substantial variations in the regional price indexes relative to the national index. We consider these findings preliminary because of limitations in the availability of current, relevant, and reliable data, especially for local area wage rate increases. PMID:10309557
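A fixed-market-basket input price index of this kind is a cost-share-weighted average of category price relatives. A minimal sketch, with illustrative shares and price relatives rather than the index's actual weights:

```python
# Illustrative cost-share weights (fractions of total spending) and
# one-year price relatives (current price / base price) per category.
cost_shares = {"wages": 0.55, "supplies": 0.25, "energy": 0.10, "capital": 0.10}
price_relatives = {"wages": 1.09, "supplies": 1.07, "energy": 1.15, "capital": 1.05}

def input_price_index(shares, relatives):
    """Laspeyres-type index: weights fixed at base-period cost shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return sum(shares[c] * relatives[c] for c in shares)

index = input_price_index(cost_shares, price_relatives)  # 1.087 -> 8.7% increase
```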
Analog Input Data Acquisition Software
NASA Technical Reports Server (NTRS)
Arens, Ellen
2009-01-01
DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.
NASA Technical Reports Server (NTRS)
Ozyazici, E. M.
1980-01-01
Module detects level changes in any of its 16 inputs, transfers changes to its outputs, and generates interrupts when changes are detected. Up to four changes-in-state per line are stored for later retrieval by controlling computer. Using standard TTL logic, module fits 19-inch rack-mounted console.
Moran, Robert F.; McKay, David; Pickard, Chris J.; Berry, Andrew J.; Griffin, John M.
2016-01-01
The structural chemistry of materials containing low levels of nonstoichiometric hydrogen is difficult to determine, and producing structural models is challenging where hydrogen has no fixed crystallographic site. Here we demonstrate a computational approach employing ab initio random structure searching (AIRSS) to generate a series of candidate structures for hydrous wadsleyite (β-Mg2SiO4 with 1.6 wt% H2O), a high-pressure mineral proposed as a repository for water in the Earth's transition zone. Aligning with previous experimental work, we solely consider models with Mg3 (over Mg1, Mg2 or Si) vacancies. We adapt the AIRSS method by starting with anhydrous wadsleyite, removing a single Mg2+ and randomly placing two H+ in a unit cell model, generating 819 candidate structures. 103 geometries were then subjected to more accurate optimisation under periodic DFT. Using this approach, we find the most favourable hydration mechanism involves protonation of two O1 sites around the Mg3 vacancy. The formation of silanol groups on O3 or O4 sites (with loss of stable O1–H hydroxyls) coincides with an increase in total enthalpy. Importantly, the approach we employ allows observables such as NMR parameters to be computed for each structure. We consider hydrous wadsleyite (∼1.6 wt%) to be dominated by protonated O1 sites, with O3/O4–H silanol groups present as defects, a model that maps well onto experimental studies at higher levels of hydration (J. M. Griffin et al., Chem. Sci., 2013, 4, 1523). The AIRSS approach adopted herein provides the crucial link between atomic-scale structure and experimental studies. PMID:27020937
Incorporating uncertainty in RADTRAN 6.0 input files.
Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John
2010-02-01
Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.
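The sampling step can be illustrated as follows. The parameter names, distributions, and batch structure are invented for illustration; the MELCOR Uncertainty Engine and the actual RADTRAN batch-file format are not reproduced here:

```python
import random

random.seed(42)  # reproducible sampling for this sketch

# Hypothetical distributed input parameters: (kind, bounds...).
param_specs = {
    "release_fraction": ("triangular", 0.001, 0.01, 0.1),  # min, mode, max
    "wind_speed_m_s":   ("uniform", 1.0, 10.0),
}

def sample_case(specs):
    """Draw one set of input values, one per distributed parameter."""
    case = {}
    for name, spec in specs.items():
        kind = spec[0]
        if kind == "uniform":
            case[name] = random.uniform(spec[1], spec[2])
        elif kind == "triangular":
            lo, mode, hi = spec[1], spec[2], spec[3]
            case[name] = random.triangular(lo, hi, mode)
    return case

# Each sampled case would become one run in the generated batch file.
batch = [sample_case(param_specs) for _ in range(100)]
```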
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware was used to validate net heat prediction models. The problem: net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor performance. Efficiency = Electrical Power Output (measured) divided by Net Heat Input (calculated). Efficiency is used to compare convertor designs and trade technology advantages for mission planning.
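The efficiency relation is a simple ratio; a small sketch with illustrative numbers, not measured convertor data:

```python
def convertor_efficiency(electrical_power_w, net_heat_input_w):
    """Efficiency = measured electrical output / calculated net heat input."""
    return electrical_power_w / net_heat_input_w

# Illustrative values: 88 W measured output, 250 W calculated net heat input.
eta = convertor_efficiency(88.0, 250.0)  # 0.352
```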
Sensitivity of piping seismic responses to input factors
O'Connell, W.J.
1985-05-01
This report summarizes the sensitivity of peak dynamic seismic responses to input parameters. The responses have been modeled and calculated for the Zion Unit 1 plant as part of a seismic probabilistic risk assessment (PRA) performed by the US NRC Seismic Safety Margins Research Program (SSMRP). The SSMRP was supported by the US NRC Office of Nuclear Regulatory Research. Two sensitivity topics motivated the study. The first is the sensitivity of piping response to the mean value of piping damping. The second is the sensitivity of all the responses to the earthquake and model input parameters, including soil, structure and piping parameters; this information is required for another study, the sensitivity of the plant system response (in terms of risk) to these dynamic input parameters and to other input factors. We evaluate the response sensitivities by performing a linear regression analysis (LRA) on the results of the computer code SMACS. With SMACS we have a detailed model of the Zion plant and of the important dynamic processes in the soil, structures and piping systems. The qualitative results change with the location of the individual response: different responses lie in locations where the many potential influences have different effectiveness. The results give an overview of the complexity of the seismic dynamic response of a plant. Within this diversity, trends are evident in the influences of the input variables on the responses.
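The regression idea can be sketched generically: fit a linear model of a computed response to the input parameters and read the standardized coefficients as sensitivities. The two inputs and the synthetic response below are illustrative stand-ins, not SMACS outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of model runs

# Two hypothetical input parameters, varied run to run.
soil_stiffness = rng.normal(1.0, 0.2, n)
piping_damping = rng.normal(0.05, 0.01, n)
# Synthetic response with known dependence plus noise.
response = 3.0 * soil_stiffness - 40.0 * piping_damping + rng.normal(0, 0.1, n)

X = np.column_stack([soil_stiffness, piping_damping])
# Standardize inputs and output so coefficients are comparable sensitivities.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (response - response.mean()) / response.std()
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xs]), ys, rcond=None)
sensitivities = coef[1:]  # standardized regression coefficients
```

The sign and magnitude of each standardized coefficient indicate how strongly, and in which direction, each input drives the response.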
Systems and methods for reconfiguring input devices
NASA Technical Reports Server (NTRS)
Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)
2012-01-01
A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members is associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory is coupled to the processor and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution by the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.
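A minimal sketch of the reconfiguration logic, assuming a simple in-memory mapping of input members to functions (the member and function names are hypothetical, not from the patent):

```python
class ReconfigurableInput:
    """Stores input-member-to-function assignments and reassigns on failure."""

    def __init__(self):
        self.functions = {"member1": "cursor_select", "member2": "menu_back"}
        self.operable = {"member1": True, "member2": True}

    def mark_inoperable(self, member):
        self.operable[member] = False
        orphaned = self.functions.pop(member)
        # Reassign the orphaned function to the first still-working member.
        for other, ok in self.operable.items():
            if ok:
                self.functions[other] = (self.functions[other], orphaned)
                return

pad = ReconfigurableInput()
pad.mark_inoperable("member1")
# member2 now carries both functions: ("menu_back", "cursor_select")
```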
RESRAD parameter sensitivity analysis
Cheng, J.J.; Yu, C.; Zielen, A.J.
1991-08-01
Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.
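Direct parameter perturbation, tedious as it is, reduces to re-running the model with each input nudged slightly and differencing the outputs. A sketch with a stand-in dose model (not the RESRAD equations):

```python
# Hypothetical stand-in for a dose calculation, used only to illustrate
# the perturbation bookkeeping.
def dose_model(params):
    return params["inventory"] * params["transfer"] / params["dilution"]

base = {"inventory": 100.0, "transfer": 0.2, "dilution": 50.0}

def perturbation_sensitivities(model, params, delta=0.01):
    """Normalized sensitivity: % change in output per % change in input."""
    y0 = model(params)
    sens = {}
    for name in params:
        bumped = dict(params)
        bumped[name] *= 1.0 + delta
        sens[name] = (model(bumped) - y0) / y0 / delta
    return sens

s = perturbation_sensitivities(dose_model, base)
# Linear inputs give sensitivity ~1; the divisor gives ~-1.
```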
Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs
Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.
2014-08-01
Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth, or EMPD, model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
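The least-squares fitting step can be sketched as below, assuming (for illustration) a one-exponential uptake response in place of the EMPD model's actual analytical solution; the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed illustrative response of absorbed moisture to a step change in
# relative humidity: saturating exponential with total uptake and time scale.
def absorption(t, m_total, tau):
    return m_total * (1.0 - np.exp(-t / tau))

t_hours = np.linspace(0.0, 48.0, 25)
# "Measured" curve: known parameters (1.8 kg, 6 h) plus a small wiggle.
measured = absorption(t_hours, 1.8, 6.0) + 0.01 * np.sin(t_hours)

(m_fit, tau_fit), _ = curve_fit(absorption, t_hours, measured, p0=(1.0, 1.0))
```

The fitted parameters recover the values used to generate the curve, which is the same consistency check one would apply before trusting parameters derived from field data.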
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
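One common comparability fix is to normalize read counts by target length before converting to a per-million relative abundance, so longer genes do not simply appear more abundant. A minimal sketch with invented counts:

```python
# Hypothetical raw read counts and target lengths for two genes.
counts = {"geneA": 900, "geneB": 300}
lengths_kb = {"geneA": 3.0, "geneB": 1.0}

# Length-normalized rates, then scaled to copies-per-million so values
# are comparable across samples with different sequencing depths.
rate = {g: counts[g] / lengths_kb[g] for g in counts}
total = sum(rate.values())
per_million = {g: 1e6 * r / total for g, r in rate.items()}
# geneA's 3x raw count advantage disappears once length is accounted for.
```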
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.
Predict amine solution properties accurately
Cheng, S.; Meisen, A.; Chakma, A.
1996-02-01
Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form. However, graphical data are not convenient for computer-based calculations. The equations developed here allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
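One way to make a thickness estimate less dependent on individual point pairs is to difference the mean substrate and film heights across a whole line profile. A sketch on synthetic AFM heights (the 0.68 nm step and noise level are illustrative, not measured values):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic AFM line profile: bare substrate around 0 nm, a single-layer
# film region around 0.68 nm, both with 0.05 nm measurement noise.
substrate = rng.normal(0.00, 0.05, 500)
film = rng.normal(0.68, 0.05, 500)
profile = np.concatenate([substrate, film])

# Split heights into two clusters with a simple threshold, then
# difference the cluster means to estimate the step height.
threshold = profile.mean()
thickness = (profile[profile > threshold].mean()
             - profile[profile <= threshold].mean())
```

Averaging over many pixels in each region suppresses the point-to-point noise that makes single-cursor measurements scatter so widely.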
NASA Astrophysics Data System (ADS)
Liu, Yi; Ren, Liliang; Hong, Yang; Zhu, Ye; Yang, Xiaoli; Yuan, Fei; Jiang, Shanhu
2016-07-01
Reasonable input data selection is of great significance for accurate computation of drought indices. In this study, a comprehensive comparison is conducted on the sensitivity of two commonly used standardization procedures (SP) in drought indices to datasets, namely the probability distribution based SP and the self-calibrating Palmer SP. The standardized Palmer drought index (SPDI) and the self-calibrating Palmer drought severity index (SC-PDSI) are selected as representatives of the two SPs, respectively. Using meteorological observations (1961-2012) in the Yellow River basin, 23 sub-datasets with a length of 30 years are firstly generated with the moving window method. Then we use the whole time series and 23 sub-datasets to compute two indices separately, and compare their spatiotemporal differences, as well as performances in capturing drought areas. Finally, a systematic investigation in terms of changing climatic conditions and varied parameters in each SP is conducted. Results show that SPDI is less sensitive to data selection than SC-PDSI. SPDI series derived from different datasets are highly correlated, and consistent in drought area characterization. Sensitivity analysis shows that among the three parameters in the generalized extreme value (GEV) distribution, SPDI is most sensitive to changes in the scale parameter, followed by the location and shape parameters. For SC-PDSI, its inconsistent behaviors among different datasets are primarily induced by the self-calibrated duration factors (p and q). In addition, it is found that the introduction of the self-calibrating procedure for duration factors further aggravates the dependence of the drought index on input datasets compared with the original empirical algorithm that Palmer used, making SC-PDSI more sensitive to variations in the data sample. This study clearly demonstrates the impacts of dataset selection on the sensitivity of drought index computation, which has significant implications for the proper usage of drought indices.
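The probability-distribution-based SP can be sketched as a fit-CDF-then-inverse-normal transform. The GEV series below is synthetic; the SPDI's actual moisture-anomaly inputs and fitting details are not reproduced:

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(0)
# Synthetic "moisture anomaly" series drawn from a GEV distribution.
anomaly = genextreme.rvs(-0.1, loc=0.0, scale=1.0, size=600, random_state=rng)

# Fit the GEV, map values through the fitted CDF, then through the
# inverse standard normal to obtain a standardized index.
shape, loc, scale = genextreme.fit(anomaly)
probs = genextreme.cdf(anomaly, shape, loc=loc, scale=scale)
spdi_like = norm.ppf(np.clip(probs, 1e-6, 1 - 1e-6))
```

Because the standardized values inherit a near-normal distribution regardless of the raw data's skew, indices built this way are comparable across sites and periods, which is exactly why their sensitivity to the fitted GEV parameters matters.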
National Hospital Input Price Index
Freeland, Mark S.; Anderson, Gerard; Schendler, Carol Ellen
1979-01-01
The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 percent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies. PMID:10309052
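Deflating nominal expense growth by the input price index yields the real (service-intensity) growth described above. A sketch with illustrative growth rates, not the index's published figures:

```python
def real_growth(nominal_growth_pct, price_growth_pct):
    """Real growth from nominal growth deflated by an input price index.

    Growth rates compound, so the factors are divided rather than the
    percentages subtracted.
    """
    factor = (1 + nominal_growth_pct / 100) / (1 + price_growth_pct / 100)
    return (factor - 1) * 100

# Illustrative: 12.9% nominal expense growth, 8.0% input price growth.
g = real_growth(12.9, 8.0)  # ~4.5% real (intensity) growth
```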
Accurate ab Initio Spin Densities
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740]. PMID:22707921
NASA Astrophysics Data System (ADS)
Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong
2015-08-01
For normal eyes without a history of ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, SRK-II, et al., are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, leading to poor satisfaction after cataract surgery. Although some methods have been carried out to solve this problem, such as the Haigis-L equation[1], or using preoperative data (data before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopted the measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and another patient after LASIK agreed very well with their visual outcomes after cataract surgery.
Dynamic Input Conductances Shape Neuronal Spiking
Franci, Alessio; Dethier, Julie; Sepulchre, Rodolphe
2015-01-01
Assessing the role of biophysical parameter variations in neuronal activity is critical to the understanding of modulation, robustness, and homeostasis of neuronal signalling. The paper proposes that this question can be addressed through the analysis of dynamic input conductances. Those voltage-dependent curves aggregate the concomitant activity of all ion channels in distinct timescales. They are shown to shape the current−voltage dynamical relationships that determine neuronal spiking. We propose an experimental protocol to measure dynamic input conductances in neurons. In addition, we provide a computational method to extract dynamic input conductances from arbitrary conductance-based models and to analyze their sensitivity to arbitrary parameters. We illustrate the relevance of the proposed approach for modulation, compensation, and robustness studies in a published neuron model based on data of the stomatogastric ganglion of the crab Cancer borealis. PMID:26464969
The IVS data input to ITRF2014
NASA Astrophysics Data System (ADS)
Nothnagel, Axel; Alef, Walter; Amagai, Jun; Andersen, Per Helge; Andreeva, Tatiana; Artz, Thomas; Bachmann, Sabine; Barache, Christophe; Baudry, Alain; Bauernfeind, Erhard; Baver, Karen; Beaudoin, Christopher; Behrend, Dirk; Bellanger, Antoine; Berdnikov, Anton; Bergman, Per; Bernhart, Simone; Bertarini, Alessandra; Bianco, Giuseppe; Bielmaier, Ewald; Boboltz, David; Böhm, Johannes; Böhm, Sigrid; Boer, Armin; Bolotin, Sergei; Bougeard, Mireille; Bourda, Geraldine; Buttaccio, Salvo; Cannizzaro, Letizia; Cappallo, Roger; Carlson, Brent; Carter, Merri Sue; Charlot, Patrick; Chen, Chenyu; Chen, Maozheng; Cho, Jungho; Clark, Thomas; Collioud, Arnaud; Colomer, Francisco; Colucci, Giuseppe; Combrinck, Ludwig; Conway, John; Corey, Brian; Curtis, Ronald; Dassing, Reiner; Davis, Maria; de-Vicente, Pablo; De Witt, Aletha; Diakov, Alexey; Dickey, John; Diegel, Irv; Doi, Koichiro; Drewes, Hermann; Dube, Maurice; Elgered, Gunnar; Engelhardt, Gerald; Evangelista, Mark; Fan, Qingyuan; Fedotov, Leonid; Fey, Alan; Figueroa, Ricardo; Fukuzaki, Yoshihiro; Gambis, Daniel; Garcia-Espada, Susana; Gaume, Ralph; Gaylard, Michael; Geiger, Nicole; Gipson, John; Gomez, Frank; Gomez-Gonzalez, Jesus; Gordon, David; Govind, Ramesh; Gubanov, Vadim; Gulyaev, Sergei; Haas, Ruediger; Hall, David; Halsig, Sebastian; Hammargren, Roger; Hase, Hayo; Heinkelmann, Robert; Helldner, Leif; Herrera, Cristian; Himwich, Ed; Hobiger, Thomas; Holst, Christoph; Hong, Xiaoyu; Honma, Mareki; Huang, Xinyong; Hugentobler, Urs; Ichikawa, Ryuichi; Iddink, Andreas; Ihde, Johannes; Ilijin, Gennadiy; Ipatov, Alexander; Ipatova, Irina; Ishihara, Misao; Ivanov, D. 
V.; Jacobs, Chris; Jike, Takaaki; Johansson, Karl-Ake; Johnson, Heidi; Johnston, Kenneth; Ju, Hyunhee; Karasawa, Masao; Kaufmann, Pierre; Kawabata, Ryoji; Kawaguchi, Noriyuki; Kawai, Eiji; Kaydanovsky, Michael; Kharinov, Mikhail; Kobayashi, Hideyuki; Kokado, Kensuke; Kondo, Tetsuro; Korkin, Edward; Koyama, Yasuhiro; Krasna, Hana; Kronschnabl, Gerhard; Kurdubov, Sergey; Kurihara, Shinobu; Kuroda, Jiro; Kwak, Younghee; La Porta, Laura; Labelle, Ruth; Lamb, Doug; Lambert, Sébastien; Langkaas, Line; Lanotte, Roberto; Lavrov, Alexey; Le Bail, Karine; Leek, Judith; Li, Bing; Li, Huihua; Li, Jinling; Liang, Shiguang; Lindqvist, Michael; Liu, Xiang; Loesler, Michael; Long, Jim; Lonsdale, Colin; Lovell, Jim; Lowe, Stephen; Lucena, Antonio; Luzum, Brian; Ma, Chopo; Ma, Jun; Maccaferri, Giuseppe; Machida, Morito; MacMillan, Dan; Madzak, Matthias; Malkin, Zinovy; Manabe, Seiji; Mantovani, Franco; Mardyshkin, Vyacheslav; Marshalov, Dmitry; Mathiassen, Geir; Matsuzaka, Shigeru; McCarthy, Dennis; Melnikov, Alexey; Michailov, Andrey; Miller, Natalia; Mitchell, Donald; Mora-Diaz, Julian Andres; Mueskens, Arno; Mukai, Yasuko; Nanni, Mauro; Natusch, Tim; Negusini, Monia; Neidhardt, Alexander; Nickola, Marisa; Nicolson, George; Niell, Arthur; Nikitin, Pavel; Nilsson, Tobias; Ning, Tong; Nishikawa, Takashi; Noll, Carey; Nozawa, Kentarou; Ogaja, Clement; Oh, Hongjong; Olofsson, Hans; Opseth, Per Erik; Orfei, Sandro; Pacione, Rosa; Pazamickas, Katherine; Petrachenko, William; Pettersson, Lars; Pino, Pedro; Plank, Lucia; Ploetz, Christian; Poirier, Michael; Poutanen, Markku; Qian, Zhihan; Quick, Jonathan; Rahimov, Ismail; Redmond, Jay; Reid, Brett; Reynolds, John; Richter, Bernd; Rioja, Maria; Romero-Wolf, Andres; Ruszczyk, Chester; Salnikov, Alexander; Sarti, Pierguido; Schatz, Raimund; Scherneck, Hans-Georg; Schiavone, Francesco; Schreiber, Ulrich; Schuh, Harald; Schwarz, Walter; Sciarretta, Cecilia; Searle, Anthony; Sekido, Mamoru; Seitz, Manuela; Shao, Minghui; Shibuya, Kazuo; Shu, 
Fengchun; Sieber, Moritz; Skjaeveland, Asmund; Skurikhina, Elena; Smolentsev, Sergey; Smythe, Dan; Sousa, Don; Sovers, Ojars; Stanford, Laura; Stanghellini, Carlo; Steppe, Alan; Strand, Rich; Sun, Jing; Surkis, Igor; Takashima, Kazuhiro; Takefuji, Kazuhiro; Takiguchi, Hiroshi; Tamura, Yoshiaki; Tanabe, Tadashi; Tanir, Emine; Tao, An; Tateyama, Claudio; Teke, Kamil; Thomas, Cynthia; Thorandt, Volkmar; Thornton, Bruce; Tierno Ros, Claudia; Titov, Oleg; Titus, Mike; Tomasi, Paolo; Tornatore, Vincenza; Trigilio, Corrado; Trofimov, Dmitriy; Tsutsumi, Masanori; Tuccari, Gino; Tzioumis, Tasso; Ujihara, Hideki; Ullrich, Dieter; Uunila, Minttu; Venturi, Tiziana; Vespe, Francesco; Vityazev, Veniamin; Volvach, Alexandr; Vytnov, Alexander; Wang, Guangli; Wang, Jinqing; Wang, Lingling; Wang, Na; Wang, Shiqiang; Wei, Wenren; Weston, Stuart; Whitney, Alan; Wojdziak, Reiner; Yatskiv, Yaroslav; Yang, Wenjun; Ye, Shuhua; Yi, Sangoh; Yusup, Aili; Zapata, Octavio; Zeitlhoefler, Reinhard; Zhang, Hua; Zhang, Ming; Zhang, Xiuzhong; Zhao, Rongbing; Zheng, Weimin; Zhou, Ruixian; Zubko, Nataliya
2015-01-01
Very Long Baseline Interferometry (VLBI) is a primary space-geodetic technique for determining precise coordinates on the Earth, for monitoring the variable Earth rotation and orientation with highest precision, and for deriving many other parameters of the Earth system. The International VLBI Service for Geodesy and Astrometry (IVS, http://ivscc.gsfc.nasa.gov/) is a service of the International Association of Geodesy (IAG) and the International Astronomical Union (IAU). The datasets published here are the results of individual Very Long Baseline Interferometry (VLBI) sessions in the form of normal equations in SINEX 2.0 format (http://www.iers.org/IERS/EN/Organization/AnalysisCoordinator/SinexFormat/sinex.html, the SINEX 2.0 description is attached as pdf) provided by IVS as the input for the next release of the International Terrestrial Reference System (ITRF): ITRF2014. This is a new version of the ITRF2008 release (Bockmann et al., 2009). For each session/ file, the normal equation systems contain elements for the coordinate components of all stations having participated in the respective session as well as for the Earth orientation parameters (x-pole, y-pole, UT1 and its time derivatives plus offset to the IAU2006 precession-nutation components dX, dY (https://www.iau.org/static/resolutions/IAU2006_Resol1.pdf). The terrestrial part is free of datum. The data sets are the result of a weighted combination of the input of several IVS Analysis Centers. The IVS contribution for ITRF2014 is described in Bachmann et al (2015), Schuh and Behrend (2012) provide a general overview on the VLBI method, details on the internal data handling can be found at Behrend (2013).
EFFECT OF CORRELATED INPUTS ON DO (DISSOLVED OXYGEN) UNCERTAINTY
Although uncertainty analysis has been discussed in recent water quality modeling literature, much of the work has assumed that all input variables and parameters are mutually independent. The objective of the paper is to evaluate the importance of correlation among the model inp...
Rapid Airplane Parametric Input Design (RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.
1995-01-01
RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulation. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are manifested by solving a fourth order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE equation for the lifting components and the fuselage definition equations. User specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool
Remote sensing inputs to water demand modeling
NASA Technical Reports Server (NTRS)
Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.
1975-01-01
In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. As a result it was determined that agricultural cropland inventories utilizing both high altitude photography and LANDSAT imagery can be conducted cost effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. However, more accurate estimates are possible if crop type, acreage, and crop-specific application rates are employed. The effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal benefit taxation are briefly touched upon.

Four-parameter model for polarization-resolved rough-surface BRDF.
Renhorn, Ingmar G E; Hallberg, Tomas; Bergström, David; Boreman, Glenn D
2011-01-17
A modeling procedure is demonstrated, which allows representation of polarization-resolved BRDF data using only four parameters: the real and imaginary parts of an effective refractive index with an added parameter taking grazing incidence absorption into account and an angular-scattering parameter determined from the BRDF measurement of a chosen angle of incidence, preferably close to normal incidence. These parameters allow accurate predictions of s- and p-polarized BRDF for a painted rough surface, over three decades of variation in BRDF magnitude. To characterize any particular surface of interest, the measurements required to determine these four parameters are the directional hemispherical reflectance (DHR) for s- and p-polarized input radiation and the BRDF at a selected angle of incidence. The DHR data describes the angular and polarization dependence, as well as providing the overall normalization constraint. The resulting model conserves energy and fulfills the reciprocity criteria. PMID:21263641
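The two central parameters of the model above, the real and imaginary parts of an effective refractive index, drive the s- and p-polarized reflectance through the Fresnel equations. The sketch below computes those Fresnel power reflectances for incidence from air; it assumes the n + ik sign convention and does not model the paper's added grazing-incidence absorption parameter or the angular-scattering parameter:

```python
import cmath
import math

def fresnel_rs_rp(n_complex, theta_i):
    """Fresnel power reflectances (Rs, Rp) at incidence angle theta_i
    (radians), from air into a medium with complex index n_complex.
    Assumes the n + ik convention for absorbing media."""
    ci = math.cos(theta_i)
    si = math.sin(theta_i)
    ct = cmath.sqrt(1 - (si / n_complex) ** 2)  # cosine of the (complex) refraction angle
    rs = (ci - n_complex * ct) / (ci + n_complex * ct)
    rp = (n_complex * ci - ct) / (n_complex * ci + ct)
    return abs(rs) ** 2, abs(rp) ** 2

# At normal incidence Rs = Rp = |(1 - n)/(1 + n)|^2; for n = 1.5 that is 0.04
print(fresnel_rs_rp(1.5 + 0j, 0.0))
```

For a lossless index the p-reflectance vanishes at the Brewster angle atan(n), which is a convenient sanity check on any such implementation.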
Optimization of precipitation inputs for SWAT modeling in mountainous catchment
NASA Astrophysics Data System (ADS)
Tuo, Ye; Chiogna, Gabriele; Disse, Markus
2016-04-01
Precipitation is often the most important input to hydrological models when simulating streamflow in mountainous catchments. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauging station nearest to the centroid of each subcatchment, optionally corrected using the elevation band method. In general this leads to an inaccurate representation of subcatchment precipitation, which results in unreliable simulation results in mountainous catchments. To investigate the impact of the precipitation inputs and account for the high spatial and temporal variability of precipitation, we first interpolated 21 years (1990-2010) of daily measured data using the Inverse Distance Weighting (IDW) method. Averaged IDW daily values were then calculated at the subcatchment scale and supplied as optimized precipitation inputs for SWAT. Both datasets (measured data and IDW data) are applied as precipitation inputs to three Alpine subcatchments of the Adige catchment (north-eastern Italy, 12,100 km2). Based on the calibration and validation results, model performance is evaluated according to the Nash-Sutcliffe Efficiency (NSE) and the Coefficient of Determination (R2). For all three subcatchments, the simulation results with IDW inputs are better than those of the original method, which uses measured inputs from the nearest station. This suggests that the IDW method can improve model performance in Alpine catchments to some extent. By weighting precipitation records according to their distance, IDW supplies more accurate precipitation inputs for each individual Alpine subcatchment, which as a whole leads to an improved description of the hydrological behavior of the entire Adige catchment.
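The IDW interpolation used above weights each gauge by an inverse power of its distance to the target point. A minimal sketch, with hypothetical gauge coordinates and precipitation values (not the Adige data):

```python
import math

def idw(stations, target, power=2.0):
    """Inverse Distance Weighting: estimate the value at `target` = (x, y)
    from (x, y, value) tuples in `stations`."""
    num, den = 0.0, 0.0
    for x, y, v in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v  # target coincides with a gauge
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Daily precipitation (mm) at three hypothetical gauges
gauges = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0), (0.0, 10.0, 30.0)]
print(idw(gauges, (5.0, 5.0)))  # equidistant gauges -> the plain mean, 20.0
```

Averaging such point estimates over the cells of each subcatchment gives the subcatchment-scale IDW input described in the abstract.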
Towards an accurate bioimpedance identification
NASA Astrophysics Data System (ADS)
Sanchez, B.; Louarroudi, E.; Bragos, R.; Pintelon, R.
2013-04-01
This paper describes the local polynomial method (LPM) for estimating the time-invariant bioimpedance frequency response function (FRF), considering both the output-error (OE) and the errors-in-variables (EIV) identification frameworks, and compares it with the traditional cross- and autocorrelation spectral analysis techniques. The bioimpedance FRF is measured with the multisine electrical impedance spectroscopy (EIS) technique. To demonstrate the superior accuracy of the LPM approach, both the LPM and the classical cross- and autocorrelation spectral analysis techniques are evaluated on the same experimental data, coming from a nonsteady-state measurement of time-varying in vivo myocardial tissue. The estimated error sources at the measurement frequencies due to noise, σnZ, and the stochastic nonlinear distortions, σZNL, have been converted to Ω and plotted over the bioimpedance spectrum for each framework. Finally, the impedance spectra have been fitted to a Cole impedance model using both an unweighted and a weighted complex nonlinear least squares (CNLS) algorithm. A table of the relative standard errors on the estimated parameters is provided to reveal which system identification framework should be used.
Shapiro, R.E.; Evans, A. Jr.
1981-01-01
This document is intended as an introduction to the use of RMS facilities via Praxis (this interface hereafter called Praxis-RMS). It is presumed that the reader is familiar with Praxis conventions as well as with RMS use (at the MACRO level). Since Praxis-RMS was designed to be functionally equivalent to MACRO-RMS, the explanations follow the pattern of the DEC MACRO-RMS documentation (particularly the programmer's reference manual). A complete list of the procedures that make up Praxis-RMS appears at the end of this document (with parameters), along with the constants (grouped by type) that can be used as actual parameters.
New input data for synthetic AGB evolution
NASA Astrophysics Data System (ADS)
Wagenhuber, J.; Groenewegen, M. A. T.
1998-12-01
Analytic formulae are presented to construct detailed secular lightcurves of both early asymptotic giant branch (AGB) and thermally pulsing AGB stars. They are based on an extensive grid of evolutionary calculations performed with an updated stellar evolution code. The basic input parameters are the initial mass M_i (0.8 <= M_i/M_sun <= 7), the metallicity Z_i (0.0001, 0.008, 0.02), and the mixing length theory (MLT) parameter. The formulae allow for two important effects, namely that the first pulses do not reach the full amplitude, and hot bottom burning (HBB) in massive stars, neither of which is accounted for by core mass - luminosity relations of the usual type. Furthermore, the dependence of the effective temperature and of a few other quantities characterizing the conditions at the base of the convective envelope, which are relevant for HBB, is investigated as a function of luminosity, total mass and core mass, for the different formulations of convection theory applied: MLT or Canuto & Mazzitelli's theory.
Investigation into on-road vehicle parameter identification based on subspace methods
NASA Astrophysics Data System (ADS)
Dong, Guangming; Chen, Jin; Zhang, Nong
2014-12-01
The randomness of road-tyre excitations can excite the low frequency ride vibrations of bounce, pitch and roll modes of an on-road vehicle. In this paper, modal parameters and mass moments of inertia of an on-road vehicle are estimated with an acceptable accuracy only by measuring accelerations of vehicle sprung mass and unsprung masses, which is based on subspace identification methods. The vehicle bounce, pitch and roll modes are characterized by their large damping (damping ratio 0.2-0.3). Two kinds of subspace identification methods, one that uses input/output data and the other that uses output data only, are compared for the highly damped modes. It is shown that, when the same data length is given, larger error of modal identification results can be clearly observed for the method using output data only; while additional use of input data will significantly reduce estimation variance. Instead of using tyre forces as inputs, which are difficult to be measured or estimated, vertical accelerations of unsprung masses are used as inputs. Theoretical analysis and Monte Carlo experiments show that, when the vehicle speed is not very high, subspace identification method using accelerations of unsprung masses as inputs can give more accurate results compared with the method using road-tyre forces as inputs. After the modal parameters are identified, and if vehicle mass and its center of gravity are pre-determined, roll and pitch moments of inertia of an on-road vehicle can be directly computed using the identified frequencies only, without requiring accurate estimation of mode shape vectors and multi-variable optimization algorithms.
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Modeling the Meteoroid Input Function at Mid-Latitude Using Meteor Observations by the MU Radar
NASA Technical Reports Server (NTRS)
Pifko, Steven; Janches, Diego; Close, Sigrid; Sparks, Jonathan; Nakamura, Takuji; Nesvorny, David
2012-01-01
The Meteoroid Input Function (MIF) model has been developed with the purpose of understanding the temporal and spatial variability of the meteoroid impact in the atmosphere. This model includes the assessment of potential observational biases, namely through the use of empirical measurements to characterize the minimum detectable radar cross-section (RCS) for the particular High Power Large Aperture (HPLA) radar utilized. This RCS sensitivity threshold allows for the characterization of the radar system's ability to detect particles at a given mass and velocity. The MIF has been shown to accurately predict the meteor detection rate of several HPLA radar systems, including the Arecibo Observatory (AO) and the Poker Flat Incoherent Scatter Radar (PFISR), as well as the seasonal and diurnal variations of the meteor flux at various geographic locations. In this paper, the MIF model is used to predict several properties of the meteors observed by the Middle and Upper atmosphere (MU) radar, including the distributions of meteor areal density, speed, and radiant location. This study offers new insight into the accuracy of the MIF, as it addresses the ability of the model to predict meteor observations at middle geographic latitudes and for a radar operating frequency in the low VHF band. Furthermore, the interferometry capability of the MU radar allows for the assessment of the model's ability to capture information about the fundamental input parameters of meteoroid source and speed. This paper demonstrates that the MIF is applicable to a wide range of HPLA radar instruments and increases the confidence of using the MIF as a global model, and it shows that the model accurately considers the speed and sporadic source distributions for the portion of the meteoroid population observable by MU.
Enhancing e-waste estimates: improving data quality by multivariate Input-Output Analysis.
Wang, Feng; Huisman, Jaco; Stevels, Ab; Baldé, Cornelis Peter
2013-11-01
Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to lack of high quality data referred to market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input-Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies. PMID:23899476
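The core mathematical relationship linking the three IOA pillars (sales, stock, lifespan) is a convolution: units sold in year t are discarded in later years according to a lifespan distribution. A minimal sketch with hypothetical sales figures and discard probabilities, not the Dutch case-study data:

```python
def waste_generated(sales, lifespan_pmf):
    """Estimate units discarded per year by convolving historical sales
    with a discrete lifespan distribution.
    sales[t]        = units put on the market in year t
    lifespan_pmf[k] = probability of discard k+1 years after sale"""
    horizon = len(sales) + len(lifespan_pmf)
    waste = [0.0] * horizon
    for t, sold in enumerate(sales):
        for k, p in enumerate(lifespan_pmf, start=1):
            waste[t + k] += sold * p
    return waste

sales = [100, 120, 150]           # units sold in years 0..2 (hypothetical)
lifespan = [0.1, 0.3, 0.4, 0.2]   # discard probabilities for ages 1..4
w = waste_generated(sales, lifespan)
```

Because the lifespan probabilities sum to one, every unit sold is eventually discarded, so the totals balance; the paper's time-varying lifespan parameters would replace the single fixed distribution here.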
NASA Technical Reports Server (NTRS)
Fox, Geoffrey C.; Ou, Chao-Wei
1997-01-01
The approach of this task was to apply leading parallel computing research to a number of existing techniques for data assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used in: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics' parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.
AC-DC converter with an improved input current waveform
Yuvarajan, S.; Weng, D.F.; Chen, M.S.
1995-12-31
The paper proposes a new control scheme for an ac-dc converter that will reduce the total harmonic distortion in the input current while operating at an improved power factor. The circuit uses a diode rectifier whose output is varied by a boost regulator with a second-harmonic injected PWM. An approximate analysis shows that the addition of a second harmonic component in the PWM helps to reduce the third harmonic in the input current. The design parameters are obtained using digital simulation. The results obtained on an experimental converter are compared with the ones obtained from a conventional scheme.
Identification of an object by input and output spectral characteristics
NASA Technical Reports Server (NTRS)
Redko, S. F.; Ushkalov, V. F.
1973-01-01
The problem discussed is the identification of a linear object of known structure whose motion is described by a system of differential equations of the type ẏ = Ay + Bu, where y is an n-dimensional output vector, u is an m-dimensional vector of stationary random disturbances (inputs), and A and B are matrices of unknown parameters of dimensions n x n and n x m, respectively. The spectral and cross-spectral densities of the inputs and outputs are used as the initial information on the object.
Repositioning Recitation Input in College English Teaching
ERIC Educational Resources Information Center
Xu, Qing
2009-01-01
This paper tries to discuss how recitation input helps overcome the negative influences on the basis of second language acquisition theory and confirms the important role that recitation input plays in improving college students' oral and written English.
A robust parameter design for multi-response problems
NASA Astrophysics Data System (ADS)
Zandieh, M.; Amiri, M.; Vahdani, B.; Soltani, R.
2009-08-01
Most real-world search and optimization problems naturally involve multiple responses. In this paper we investigate a multiple-response problem within the desirability function framework and try to determine values of the input variables that achieve a target value for each response, using three meta-heuristic algorithms: genetic algorithm (GA), simulated annealing (SA) and tabu search (TS). Each algorithm has some parameters that need to be accurately calibrated to ensure the best performance. For this purpose, a robust calibration is applied to the parameters by means of the Taguchi method. The computational results of the three algorithms are compared against each other. The superior performance of SA over TS, and of TS over GA, is inferred from the results obtained in various situations.
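To make the setup concrete, the sketch below couples a target-is-best desirability function with one of the three meta-heuristics named above (simulated annealing). The two toy responses, target values, tolerances, and SA tuning constants are all hypothetical stand-ins, not the paper's calibrated settings:

```python
import math
import random

def desirability(y, target, tol):
    """Target-is-best desirability: 1 at the target, falling linearly to 0 at +/- tol."""
    return max(0.0, 1.0 - abs(y - target) / tol)

def overall(x):
    """Geometric mean of per-response desirabilities for two toy responses of one input x."""
    y1 = x ** 2          # hypothetical response 1, target 4.0
    y2 = 10.0 - x        # hypothetical response 2, target 8.0
    return (desirability(y1, 4.0, 5.0) * desirability(y2, 8.0, 5.0)) ** 0.5

def anneal(steps=2000, temp=1.0, cool=0.995, seed=1):
    """Simulated annealing over x in [0, 5], maximizing the overall desirability."""
    random.seed(seed)
    x = random.uniform(0.0, 5.0)
    best_x, best_d = x, overall(x)
    for _ in range(steps):
        cand = min(5.0, max(0.0, x + random.gauss(0.0, 0.3)))
        delta = overall(cand) - overall(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = cand  # accept improvements always, worsenings with Boltzmann probability
        if overall(x) > best_d:
            best_x, best_d = x, overall(x)
        temp *= cool
    return best_x, best_d

best_x, best_d = anneal()  # both targets are met at x = 2, so best_x should be near 2
```

Calibrating `temp`, `cool`, and the step size is exactly the kind of parameter tuning the paper addresses with the Taguchi method.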
XINPUT: a program to edit "DOSRZ" input files.
Lauterbach, M H; Lehmann, J; Rosenow, U F
1999-05-01
The DOSRZ user code, which is part of the EGS4 standard distribution, is widely used in medical physics for the calculation of dose deposition in cylindrical geometries. The code supports advanced Monte Carlo techniques (PRESTA) and variance reduction methods. In the case of complex cylinder geometries the input of coordinates and radii is not only tedious but also prone to a high error rate. Coordinates are to be stated in absolute numbers. A change of one number, e.g., the slab thickness, requires the change of all subsequent numbers. Furthermore, parameters are only stated as numbers with no indication of their meaning. Obviously, there is a need for a user interface to facilitate the input for DOSRZ and to largely reduce the possibility for errors. We, therefore, wrote a graphical user interface (GUI) consisting of an input mask, a coordinate input interpreter, and a two-dimensional and/or pseudo-three-dimensional display section. The GUI is based on the scripting language Tcl/Tk, which runs under various platforms such as UNIX (Linux), Windows 95, and Windows NT. It consists of a main window which provides common-style menus and buttons to navigate through the edit dialog boxes. The most important tools are the region input, which enables the user to create the simulation geometry, and the graphics section where the scaled output can be displayed. Different media are shown in different, user-defined colors. Furthermore, the program contains some tools to reduce the probability of an erroneous input in the EGS4 input file. Since Tcl/Tk is a modern scripting language, it offers advanced tools to create the GUI and to "glue" different applications to it. XINPUT may also be considered as a model program for the development of a more general interface to other input areas of the EGS4 simulation code. PMID:10360538
Estimating Photometric Redshifts with Artificial Neural Networks and Multi-Parameters
NASA Astrophysics Data System (ADS)
Li, Li-Li; Zhang, Yan-Xia; Zhao, Yong-Heng; Yang, Da-Wei
2007-06-01
We calculate photometric redshifts from the Sloan Digital Sky Survey Data Release 2 (SDSS DR2) Galaxy Sample using artificial neural networks (ANNs). Different input sets based on various parameters (e.g. magnitude, color index, flux information) are explored. Mainly, parameters from broadband photometry are utilized and their performances in redshift prediction are compared. While any parameter may be easily incorporated in the input, our results indicate that using the dereddened magnitudes often produces more accurate photometric redshifts than using the Petrosian magnitudes or model magnitudes as input, although the model magnitudes are superior to the Petrosian magnitudes. Also, better performance results when more effective parameters are used in the training set. The method is tested on a sample of 79 346 galaxies from the SDSS DR2. When using 19 parameters based on the dereddened magnitudes, the rms error in redshift estimation is σz = 0.020184. The ANN is a highly competitive tool compared to the traditional template-fitting methods when a large and representative training set is available.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The knowledge of the system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
Effects of Auditory Input in Individuation Tasks
ERIC Educational Resources Information Center
Robinson, Christopher W.; Sloutsky, Vladimir M.
2008-01-01
Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Lee, F. C.
1984-01-01
Problems caused by input filter interaction and conventional input filter design techniques are discussed. The concept of feedforward control is modeled with an input filter and a buck regulator. Experimental measurement and comparison to the analytical predictions is carried out. Transient response and the use of a feedforward loop to stabilize the regulator system is described. Other possible applications for feedforward control are included.
Textual Enhancement of Input: Issues and Possibilities
ERIC Educational Resources Information Center
Han, ZhaoHong; Park, Eun Sung; Combs, Charles
2008-01-01
The input enhancement hypothesis proposed by Sharwood Smith (1991, 1993) has stimulated considerable research over the last 15 years. This article reviews the research on textual enhancement of input (TE), an area where the majority of input enhancement studies have aggregated. Methodological idiosyncrasies are the norm of this body of research.…
Input Devices for Young Handicapped Children.
ERIC Educational Resources Information Center
Morris, Karen
The versatility of the computer can be expanded considerably for young handicapped children by using input devices other than the typewriter-style keyboard. Input devices appropriate for young children can be classified into four categories: alternative keyboards, contact switches, speech input devices, and cursor control devices. Described are…
Accurate projector calibration method by using an optical coaxial camera.
Huang, Shujun; Xie, Lili; Wang, Zhangying; Zhang, Zonghua; Gao, Feng; Jiang, Xiangqian
2015-02-01
Digital light processing (DLP) projectors have been widely utilized to project digital structured-light patterns in 3D imaging systems. In order to obtain accurate 3D shape data, it is important to calibrate DLP projectors to obtain the internal parameters. The existing projector calibration methods have complicated procedures or low accuracy of the obtained parameters. This paper presents a novel method to accurately calibrate a DLP projector by using an optical coaxial camera. The optical coaxial geometry is realized by a plate beam splitter, so the DLP projector can be treated as a true inverse camera. A plate having discrete markers on the surface is used to calibrate the projector. The corresponding projector pixel coordinate of each marker on the plate is determined by projecting vertical and horizontal sinusoidal fringe patterns on the plate surface and calculating the absolute phase. The internal parameters of the DLP projector are obtained by the corresponding point pair between the projector pixel coordinate and the world coordinate of discrete markers. Experimental results show that the proposed method can accurately calibrate the internal parameters of a DLP projector. PMID:25967789
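The phase-calculation step described above, recovering phase from projected sinusoidal fringe patterns, is commonly done with an N-step phase-shifting algorithm. The paper does not state which variant it uses; the sketch below shows the standard four-step (90-degree shift) formula with synthetic intensities:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped fringe phase from four intensity samples taken
    with 90-degree phase shifts: I_k = A + B*cos(phi + k*pi/2), k = 0..3.
    Then I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic fringe samples at an assumed phase of 0.7 rad
A, B, phi = 100.0, 50.0, 0.7
samples = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(four_step_phase(*samples))  # ~0.7
```

The recovered phase is wrapped to (-pi, pi]; obtaining the absolute phase used for the marker correspondence additionally requires an unwrapping step (e.g. with multi-frequency fringes), which is outside this sketch.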
NASA Astrophysics Data System (ADS)
Hao, Wenrui; Lu, Zhenzhou; Li, Luyi
2013-05-01
In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.
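One ingredient of such variance-based measures, the variance of the conditional expectation Var(E[Y|X1]), can be estimated by double-loop Monte Carlo. The sketch below does this for a linear toy model Y = a1*X1 + a2*X2 with standard-normal inputs of correlation rho, where the analytic value is (a1 + a2*rho)^2; it is a rough illustration of how correlation inflates a single input's contribution, not an implementation of the paper's decomposition:

```python
import random
import statistics

def var_of_conditional_mean(a1=1.0, a2=1.0, rho=0.5,
                            n_outer=4000, n_inner=200, seed=0):
    """Double-loop Monte Carlo estimate of Var(E[Y|X1]) for
    Y = a1*X1 + a2*X2, with X1, X2 standard normal and corr(X1, X2) = rho.
    The conditional mean of X2 given X1 is rho*X1, so this quantity
    includes the contribution X1 inherits through its correlation with X2."""
    rng = random.Random(seed)
    s = (1.0 - rho ** 2) ** 0.5
    cond_means = []
    for _ in range(n_outer):
        x1 = rng.gauss(0.0, 1.0)
        # inner loop: average Y over the conditional distribution X2 | X1 = x1
        inner = [a1 * x1 + a2 * (rho * x1 + s * rng.gauss(0.0, 1.0))
                 for _ in range(n_inner)]
        cond_means.append(statistics.fmean(inner))
    return statistics.pvariance(cond_means)

est = var_of_conditional_mean()  # analytic value: (1 + 1*0.5)^2 = 2.25
```

Subtracting the analogous uncorrelated-input value a1^2 isolates the part contributed purely through correlation, which is the kind of split the proposed indices formalize.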
NASA Astrophysics Data System (ADS)
Daly, Peter M.; Hebenstreit, Gerald T.
2003-04-01
Deterministic source localization using matched-field processing (MFP) has yielded good results in propagation scenarios where the nonrandom model parameter input assumption is valid. In many shallow water environments, inputs to acoustic propagation models may be better represented using random distributions rather than fixed quantities. One can estimate the negative effect of random source inputs on deterministic MFP by (1) obtaining a realistic statistical representation of a signal model parameter, then (2) using the mean of the parameter as input to the MFP signal model (the so-called "replica vector"), (3) synthesizing a source signal using multiple realizations of the random parameter, and (4) estimating the source localization error by correlating the synthesized signal vector with the replica vector over a three-dimensional space. This approach allows one to quantify deterministic localization error introduced by random model parameters, including sound velocity profile, hydrophone locations, and sediment thickness and speed. [Work supported by DARPA Advanced Technology Office.]
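The correlation in step (4) is typically the normalized Bartlett processor, which scores how well a modeled replica vector matches a measured (or synthesized) array snapshot. A minimal sketch with hypothetical plane-wave-like vectors; the abstract does not specify which processor the study used:

```python
import cmath
import math

def bartlett_power(data, replica):
    """Normalized Bartlett output |w^H d|^2 / (|w|^2 |d|^2) between a
    measured snapshot `data` and a modeled replica vector `replica`
    (both sequences of complex numbers). Equals 1 for a perfect match."""
    inner = sum(r.conjugate() * d for r, d in zip(replica, data))
    norm = math.sqrt(sum(abs(r) ** 2 for r in replica) *
                     sum(abs(d) ** 2 for d in data))
    return abs(inner) ** 2 / norm ** 2

# Hypothetical 8-element snapshot with a linear phase ramp across the array
d = [cmath.exp(1j * 0.3 * k) for k in range(8)]
print(bartlett_power(d, d))  # 1.0 for a perfect match
```

Evaluating this score over a 3-D grid of candidate source positions (each with its own replica) and taking the peak gives the deterministic localization whose degradation under random inputs the study quantifies.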
COSMIC/NASTRAN Free-field Input
NASA Technical Reports Server (NTRS)
Chan, G. C.
1984-01-01
A user's guide to the COSMIC/NASTRAN free field input for the Bulk Data section of the NASTRAN program is proposed. The free field input is designed to be user friendly and the user is not forced out of the computer system due to input errors. It is easy to use, with only a few simple rules to follow. A stand alone version of the COSMIC/NASTRAN free field input is also available. The use of free field input is illustrated by a number of examples.
Turn customer input into innovation.
Ulwick, Anthony W
2002-01-01
It's difficult to find a company these days that doesn't strive to be customer-driven. Too bad, then, that most companies go about the process of listening to customers all wrong--so wrong, in fact, that they undermine innovation and, ultimately, the bottom line. What usually happens is this: Companies ask their customers what they want. Customers offer solutions in the form of products or services. Companies then deliver these tangibles, and customers just don't buy. The reason is simple--customers aren't expert or informed enough to come up with solutions. That's what your R&D team is for. Rather, customers should be asked only for outcomes--what they want a new product or service to do for them. The form the solutions take should be up to you, and you alone. Using Cordis Corporation as an example, this article describes, in fine detail, a series of effective steps for capturing, analyzing, and utilizing customer input. First come in-depth interviews, in which a moderator works with customers to deconstruct a process or activity in order to unearth "desired outcomes." Addressing participants' comments one at a time, the moderator rephrases them to be both unambiguous and measurable. Once the interviews are complete, researchers then compile a comprehensive list of outcomes that participants rank in order of importance and degree to which they are satisfied by existing products. Finally, using a simple mathematical formula called the "opportunity calculation," researchers can learn the relative attractiveness of key opportunity areas. These data can be used to uncover opportunities for product development, to properly segment markets, and to conduct competitive analysis. PMID:12964470
Volgushev, Maxim; Ilin, Vladimir; Stevenson, Ian H.
2015-01-01
Accurately describing synaptic interactions between neurons and how interactions change over time are key challenges for systems neuroscience. Although intracellular electrophysiology is a powerful tool for studying synaptic integration and plasticity, it is limited by the small number of neurons that can be recorded simultaneously in vitro and by the technical difficulty of intracellular recording in vivo. One way around these difficulties may be to use large-scale extracellular recording of spike trains and apply statistical methods to model and infer functional connections between neurons. These techniques have the potential to reveal large-scale connectivity structure based on the spike timing alone. However, the interpretation of functional connectivity is often approximate, since only a small fraction of presynaptic inputs are typically observed. Here we use in vitro current injection in layer 2/3 pyramidal neurons to validate methods for inferring functional connectivity in a setting where input to the neuron is controlled. In experiments with partially-defined input, we inject a single simulated input with known amplitude on a background of fluctuating noise. In a fully-defined input paradigm, we then control the synaptic weights and timing of many simulated presynaptic neurons. By analyzing the firing of neurons in response to these artificial inputs, we ask 1) How does functional connectivity inferred from spikes relate to simulated synaptic input? and 2) What are the limitations of connectivity inference? We find that individual current-based synaptic inputs are detectable over a broad range of amplitudes and conditions. Detectability depends on input amplitude and output firing rate, and excitatory inputs are detected more readily than inhibitory. Moreover, as we model increasing numbers of presynaptic inputs, we are able to estimate connection strengths more accurately and detect the presence of connections more quickly. These results illustrate the
Estimating Building Simulation Parameters via Bayesian Structure Learning
Edwards, Richard E; New, Joshua Ryan; Parker, Lynne Edwards
2013-01-01
Many key building design policies are made using sophisticated computer simulations such as EnergyPlus (E+), the DOE flagship whole-building energy simulation engine. E+ and other sophisticated computer simulations have several major problems. The two main issues are 1) gaps between the simulation model and the actual structure, and 2) limitations of the modeling engine's capabilities. Currently, these problems are addressed by having an engineer manually calibrate simulation parameters to real-world data or by using algorithmic optimization methods to adjust the building parameters. However, some simulation engines, like E+, are computationally expensive, which makes repeatedly evaluating the simulation engine costly. This work explores addressing this issue by automatically discovering the simulation's internal input and output dependencies from 20 gigabytes of E+ simulation data; future extensions will use 200 terabytes of E+ simulation data. The model is validated by inferring building parameters for E+ simulations with ground-truth building parameters. Our results indicate that the model accurately represents parameter means, with some deviation from the means, but does not support inferring parameter values that lie on the distribution's tail.
An input shaping controller enabling cranes to move without sway
Singer, N.; Singhose, W.; Kriikku, E.
1997-06-01
A gantry crane at the Savannah River Technology Center was retrofitted with an Input Shaping controller. The controller intercepts the operator's pendant commands and modifies them in real time so that the crane is moved without residual sway in the suspended load. Mechanical components on the crane were modified to make the crane suitable for the anti-sway algorithm. This paper describes the required mechanical modifications to the crane, as well as a new form of Input Shaping that was developed for use on the crane. Experimental results are presented which demonstrate the effectiveness of the new process. Several practical considerations are discussed, including a novel (patent pending) approach for making small, accurate moves without residual oscillations.
Accurate scatter compensation using neural networks in radionuclide imaging
Ogawa, Koichi; Nishizaki, N. (Dept. of Electrical Engineering)
1993-08-01
The paper presents a new method to estimate primary photons using an artificial neural network in radionuclide imaging. The neural network for 99mTc had three layers: one input layer with five units, one hidden layer with five units, and one output layer with two units. As input values to the input units, the authors used count ratios, i.e., the ratios of the counts acquired by narrow windows to the total count acquired by a broad window spanning the energy range from 125 to 154 keV. The outputs were a scatter count ratio and a primary count ratio. Using the primary count ratio and the total count, they calculated the primary count of the pixel directly. The neural network was trained with a back-propagation algorithm using calculated true energy spectra obtained by a Monte Carlo method. The simulation showed that an accurate estimation of primary photons was accomplished within an error ratio of 5% for primary photons.
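The 5-5-2 architecture described above can be sketched as a plain forward pass. The weights below are random placeholders standing in for the back-propagation-trained values, and the window ratios and total count are made-up numbers; only the layer shapes and the final primary-count step follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_5_5_2(count_ratios, w1, b1, w2, b2):
    """Forward pass of the 5-5-2 network: five narrow-window count
    ratios in, (scatter ratio, primary ratio) out."""
    hidden = sigmoid(w1 @ count_ratios + b1)
    return sigmoid(w2 @ hidden + b2)

# Placeholder weights (the paper trains these by back-propagation
# on Monte Carlo energy spectra).
w1, b1 = rng.standard_normal((5, 5)), rng.standard_normal(5)
w2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)

ratios = np.array([0.10, 0.20, 0.30, 0.25, 0.15])  # narrow/broad count ratios
scatter_ratio, primary_ratio = mlp_5_5_2(ratios, w1, b1, w2, b2)
primary_count = primary_ratio * 10_000  # primary = primary ratio * total count
```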
Pre-stack full wavefield inversion for elastic parameters of TI media
NASA Astrophysics Data System (ADS)
Zhang, Meigen; Huang, Zhongyu; Li, Xiaofan; Wang, Miaoyue; Xu, Guangyin
2006-03-01
Pre-stack full wavefield inversion for the elastic parameters of transversely isotropic media is implemented. The Jacobian matrix is derived directly with the finite element method, just like the full wavefield forward modelling. An absorbing boundary scheme combining Liao's transparent boundary condition with Sarma's attenuation boundary condition is applied to the forward modelling and Jacobian calculation. The input data are the complete ground-recorded wavefields containing full kinematic and dynamic information for the seismic waves. Inversion with such data is desirable as it should improve the accuracy of the estimated parameters and also reduce data pre-processing, such as wavefield identification and separation. A scheme called energy grading inversion is presented to deal with the instability caused by the large energy difference between different arrivals in the input data. With this method, parameters in the shallow areas, which mainly affect wave patterns with strong energy, converge before those of deeper media. Thus, the number of unknowns in each inversion step is reduced, and the stability and reliability of the inversion process are greatly improved. As a result, the scheme helps to reduce the non-uniqueness of the inversion. Two synthetic examples show that the inversion system is reliable and accurate even when initial models deviate significantly from the actual models. Also, the system can accurately invert for transversely isotropic model parameters even with the introduction of strong random noise.
Inferring Nonlinear Neuronal Computation Based on Physiologically Plausible Inputs
McFarland, James M.; Cui, Yuwei; Butts, Daniel A.
2013-01-01
The computation represented by a sensory neuron's response to stimuli is constructed from an array of physiological processes both belonging to that neuron and inherited from its inputs. Although many of these physiological processes are known to be nonlinear, linear approximations are commonly used to describe the stimulus selectivity of sensory neurons (i.e., linear receptive fields). Here we present an approach for modeling sensory processing, termed the Nonlinear Input Model (NIM), which is based on the hypothesis that the dominant nonlinearities imposed by physiological mechanisms arise from rectification of a neuron's inputs. Incorporating such ‘upstream nonlinearities’ within the standard linear-nonlinear (LN) cascade modeling structure implicitly allows for the identification of multiple stimulus features driving a neuron's response, which become directly interpretable as either excitatory or inhibitory. Because its form is analogous to an integrate-and-fire neuron receiving excitatory and inhibitory inputs, model fitting can be guided by prior knowledge about the inputs to a given neuron, and elements of the resulting model can often result in specific physiological predictions. Furthermore, by providing an explicit probabilistic model with a relatively simple nonlinear structure, its parameters can be efficiently optimized and appropriately regularized. Parameter estimation is robust and efficient even with large numbers of model components and in the context of high-dimensional stimuli with complex statistical structure (e.g. natural stimuli). We describe detailed methods for estimating the model parameters, and illustrate the advantages of the NIM using a range of example sensory neurons in the visual and auditory systems. We thus present a modeling framework that can capture a broad range of nonlinear response functions while providing physiologically interpretable descriptions of neural computation. PMID:23874185
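A minimal sketch of the NIM's generator structure as described above: rectified linear "upstream" subunits combined with signed (excitatory/inhibitory) weights and passed through a spiking nonlinearity. The softplus output stage and all parameter values are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def nim_rate(stimulus, filters, weights, bias):
    """Nonlinear Input Model generator: each subunit filters the
    stimulus and rectifies it ('upstream nonlinearity'); positive
    weights act as excitatory inputs, negative ones as inhibitory.
    A softplus output stage maps the summed signal to a rate."""
    subunits = np.maximum(filters @ stimulus, 0.0)   # rectified subunit outputs
    generator = weights @ subunits + bias
    return np.log1p(np.exp(generator))               # softplus spiking nonlinearity

rng = np.random.default_rng(0)
stimulus = rng.standard_normal(20)
filters = rng.standard_normal((3, 20))      # three stimulus features
weights = np.array([1.0, 0.5, -0.8])        # two excitatory, one inhibitory
rate = nim_rate(stimulus, filters, weights, bias=-1.0)
```

Because the subunit outputs are non-negative, the sign of each weight is directly interpretable as excitation or inhibition, which is the interpretability property the abstract emphasizes.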
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is determining what sensory inputs a human uses in controlling his tracking task. In the approach presented here, a simple canonical model (a PID, or proportional-integral-derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output-error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters related to inputs to the human (such as the error signal, its derivatives, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters that have the greatest effect on significantly reducing the loss function are thus obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
Handling Input and Output for COAMPS
NASA Technical Reports Server (NTRS)
Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine
2007-01-01
Two suites of software have been developed to handle the input and output of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model. Additional algorithms are needed to run COAMPS using other global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, downloading GFS data from an Internet file-transfer-protocol (FTP) server computer of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then converted to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites, (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data, and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.
Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.
Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza
2015-09-15
The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models for predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and dissolved oxygen (DO) as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily models of DO. For the daily models, the most accurate model has R equal to 0.96, while for the hourly model it reaches up to 0.98. Overall, the results show the ability of the model to monitor the ocean parameters under conditions of missing data, or when regular measurement and monitoring are impossible. PMID:26140748
Earth Reflected Solar Radiation Input to Spherical Satellites
NASA Technical Reports Server (NTRS)
Cunningham, F. G.
1961-01-01
A general calculation is given for the earth's albedo input to a spherical satellite, with the assumption that the earth can be considered a diffusely reflecting sphere. The results are presented in general form so that appropriate values for the solar constant and albedo of the earth can be used as more accurate values become available. The results are also presented graphically; the incident power is determined on the assumption that the mean solar constant is 1.353 x 10^6 erg/(sq cm-sec) and the albedo of the earth is 0.34.
Accurate method of modeling cluster scaling relations in modified gravity
NASA Astrophysics Data System (ADS)
He, Jian-hua; Li, Baojiu
2016-06-01
We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.
NASA Astrophysics Data System (ADS)
Milanesio, D.; Maggiora, R.
2013-04-01
The successful design of an ion cyclotron antenna relies mainly on the capability to accurately predict its behavior both in terms of input parameters, and therefore the power coupled to the plasma, and in terms of radiated fields. All these features depend essentially on the antenna itself (its geometry, the matching and tuning systems) and, obviously, on the plasma loading it faces. In this paper a number of plasma profiles are analysed with the help of the TOPICA code, a predictive tool for the design and optimization of radio frequency (RF) launchers in front of a plasma, in order to understand which plasma parameters have the most significant influence on the coupling performance of a typical IC antenna.
EVALUATION OF REMOTE SENSING DATA FOR INPUT INTO HYDROLOGICAL SIMULATION PROGRAM-FORTRAN (HSPF)
This report describes an evaluation of the feasibility of using a remotely sensed data base as input into the Hydrologic Simulation Program-Fortran (HSPF). Remotely sensed data from the satellite LANDSAT and conventionally obtained data were used to set up the input parameters of...
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Lee, F. C.; Kelkar, S. S.
1982-01-01
The problems caused by the interaction between the input filter, output filter, and the control loop are discussed. The input filter design is complicated by the need to avoid performance degradation while staying within weight and loss limitations. Conventional input filter design techniques are then discussed. The concept of pole-zero cancellation is reviewed; this concept is the basis for an approach to control the peaking of the output impedance of the input filter and thus mitigate some of the problems caused by the input filter. The proposed approach for controlling the peaking of the input filter's output impedance is to use a feedforward loop working in conjunction with feedback loops, thus forming a total state control scheme. The design of the feedforward loop for a buck regulator is described. A possible implementation of the feedforward loop design is suggested.
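The output-impedance peaking that motivates the compensation scheme can be seen numerically. The sketch below evaluates |Z_out| of a single-stage LC input filter for two damping-resistance values; the component values are hypothetical, and the model ignores the regulator's negative-resistance loading.

```python
import numpy as np

def zout_mag(freq_hz, L, C, R):
    """|Z_out| of a single-stage LC input filter, seen from the
    regulator side with the source shorted: the inductor branch
    (R + jwL) in parallel with the capacitor."""
    w = 2.0 * np.pi * freq_hz
    z_l = R + 1j * w * L
    z_c = 1.0 / (1j * w * C)
    return np.abs(z_l * z_c / (z_l + z_c))

f = np.logspace(2, 5, 2000)                        # 100 Hz to 100 kHz
lightly_damped = zout_mag(f, L=100e-6, C=10e-6, R=0.05)
well_damped = zout_mag(f, L=100e-6, C=10e-6, R=1.0)
# The lightly damped filter peaks sharply near the LC resonance
# (~5 kHz for these values); added series resistance flattens the
# peak at the cost of losses -- the trade-off the feedforward loop
# is meant to break.
```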
Input estimation from measured structural response
Harvey, Dustin; Cross, Elizabeth; Silva, Ramon A; Farrar, Charles R; Bement, Matt
2009-01-01
This report focuses on the estimation of unmeasured dynamic inputs to a structure given a numerical model of the structure and measured response acquired at discrete locations. While input estimation has not historically received as much attention as state estimation, there are many applications where an improved understanding of the unmeasured input to a structure is vital (e.g., validating temporally and spatially varying load models for large structures such as buildings and ships). The introduction contains a brief summary of previous input estimation studies. Next, an adjoint-based optimization method is used to estimate dynamic inputs to two experimental structures. The technique is evaluated in simulation and with experimental data, both on a cantilever beam and on a three-story frame structure. The performance and limitations of the adjoint-based input estimation technique are discussed.
NASA Technical Reports Server (NTRS)
Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.
2015-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis bias-corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.
The role of the input scale in parton distribution analyses
Pedro Jimenez-Delgado
2012-08-01
A first systematic study of the effects of the choice of the input scale in global determinations of parton distributions and QCD parameters is presented. It is shown that, although in principle the results should not depend on these choices, in practice a relevant dependence develops as a consequence of what is called procedural bias. This uncertainty should be considered in addition to other theoretical and experimental errors, and a practical procedure for its estimation is proposed. Possible sources of mistakes in the determination of QCD parameters from parton distribution analyses are pointed out.
UNCERTAINTY IN MODEL PREDICTIONS-PLAUSIBLE OUTCOMES FROM ESTIMATES OF INPUT RANGES
Models are commonly used to predict the future extent of contamination given estimates of hydraulic conductivity, porosity, hydraulic gradient, biodegradation rate, and other parameters. Often best estimates or averages of these are used as inputs to models, which then transform...
NASA Astrophysics Data System (ADS)
Harbert, W.; Hammack, R.; Veloski, G.; Hodge, G.
2011-12-01
In this study, airborne magnetic data were collected by Fugro Airborne Surveys from a helicopter platform (Figure 1) using the Midas II system over the 39 km2 NPR3 (Naval Petroleum Reserve No. 3) oilfield in east-central Wyoming. The Midas II system employs two Scintrex CS-2 cesium vapor magnetometers on opposite ends of a transversely mounted, 13.4-m-long horizontal boom located amidships (Fig. 1). Each magnetic sensor had an in-flight sensitivity of 0.01 nT. Real-time compensation of the magnetic data for magnetic noise induced by maneuvering of the aircraft was accomplished using two fluxgate magnetometers mounted just inboard of the cesium sensors. The total area surveyed was 40.5 km2 (NPR3) near Casper, Wyoming. The purpose of the survey was to accurately locate wells that had been drilled there during more than 90 years of continuous oilfield operation. The survey was conducted at low altitude and with closely spaced flight lines to improve the detection of wells with weak magnetic response and to increase the resolution of closely spaced wells. The survey was in preparation for a planned CO2 flood to enhance oil recovery, which requires a complete well inventory with accurate locations for all existing wells. The magnetic survey was intended to locate wells that are missing from the well database and to provide accurate locations for all wells. The well-location method combined an input dataset (for example, leveled total magnetic field reduced to the pole) with its first and second horizontal spatial derivatives; these layers were analyzed using focal statistics and finally merged using a fuzzy combination operation. Analytic signal and the Shi and Butt (2004) ZS attribute were also analyzed using this algorithm. A parameter could be adjusted to determine sensitivity. Depending on the input dataset, 88% to 100% of the wells were located, with typical values being 95% to 99% for the NPR3 field site.
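A rough sketch of the layer-combination idea: build derivative layers from a magnetic grid, rescale them to [0, 1], and merge them with a fuzzy-gamma operator. The focal-statistics step and the ZS attribute are omitted, and the synthetic field and gamma value are assumptions for illustration, not the survey's actual processing chain.

```python
import numpy as np

def well_score(grid, gamma=0.9):
    """Combine first- and second-horizontal-derivative layers of a
    magnetic grid with a fuzzy-gamma operator (an assumed stand-in
    for the paper's focal statistics + fuzzy combination step)."""
    gy, gx = np.gradient(grid)
    d1 = np.hypot(gx, gy)              # first horizontal derivative magnitude
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    d2 = np.abs(gxx + gyy)             # second-derivative (Laplacian) magnitude
    layers = []
    for layer in (d1, d2):
        span = layer.max() - layer.min()
        layers.append((layer - layer.min()) / span if span > 0 else layer * 0.0)
    a, b = layers
    fuzzy_sum = 1.0 - (1.0 - a) * (1.0 - b)     # fuzzy algebraic sum
    fuzzy_prod = a * b                          # fuzzy algebraic product
    return fuzzy_sum**gamma * fuzzy_prod**(1.0 - gamma)

# Synthetic field: a gentle regional trend plus one compact "well" anomaly.
y, x = np.mgrid[0:64, 0:64]
field = 0.01 * x + np.exp(-((x - 40.0) ** 2 + (y - 22.0) ** 2) / 4.0)
score = well_score(field)
peak = np.unravel_index(np.argmax(score), score.shape)
```

The gamma operator blends the permissive fuzzy sum with the strict fuzzy product, so pixels that score on both derivative layers stand out over the regional background.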
Input apparatus for dynamic signature verification systems
EerNisse, Errol P.; Land, Cecil E.; Snelling, Jay B.
1978-01-01
The disclosure relates to signature verification input apparatus comprising a writing instrument and platen containing piezoelectric transducers which generate signals in response to writing pressures.
NASA Astrophysics Data System (ADS)
Liu, T.; Miller, S. N.; Chitrakar, S.
2013-12-01
With abundant online data sources now available, hydrological simulation for American watershed management offers great advantages. Not only are conventional station-based data conveniently accessible, but spatial datasets also open up new possibilities for hydrological approaches. This case study demonstrates possible applications and access sources for hydrological modeling, and may serve as a reference. The model input time series and parameters originate from various sources: precipitation from TRMM (as spatial input to the hydrological model) and NOAA (station-based); evapotranspiration from the NASA MODIS platform (via ArcGIS access); temperature from the NOAA database (station-based) and NASA MODIS (spatial input); snow mask and depth from NOAA, NASA MODIS, and NRCS; and discharge data from the USGS hydro-climate data network (HCDN). Static parameters are well covered, such as the DEM contributed by NASA's SRTM, soil data from SSURGO, and land-use data from the USGS. Different institutes focus on different aspects, temporal spans, and geo-locations, but supported by these various sources, the hydrological model can be set up solidly by integrating the data. The daily-time-step simulation is manually calibrated over a 1-year period against 4 discharge gauging stations, followed by a 1-year validation period. The simulation resolution is set uniformly to 200 m x 200 m cells for the 2600 km2 watershed domain. The case study demonstrates that station-based and spatial data can complement each other and support accurate hydrological modeling. The established model can be further extended to assess water quality impacts and to simulate sediment transport. The final goal of the modeling approach is to serve land management through hydrological response.
Discretely disordered photonic bandgap structures: a more accurate invariant measure calculation
NASA Astrophysics Data System (ADS)
Kissel, Glen J.
2009-02-01
In the one-dimensional optical analog to Anderson localization, a periodically layered medium has one or more parameters randomly disordered. Such a randomized system can be modeled by an infinite product of 2x2 random transfer matrices with the upper Lyapunov exponent of the matrix product identified as the localization factor (inverse localization length) for the model. The theorem of Furstenberg allows us, at least theoretically, to calculate this upper Lyapunov exponent. In Furstenberg's formula we not only integrate with respect to the probability measure of the random matrices, but also with respect to the invariant probability measure of the direction of the vector propagated by the random matrices. This invariant measure is difficult to find analytically, and, as a result, the most successful approach is to determine the invariant measure numerically. A Monte Carlo simulation which uses accumulated bin counts to track the direction of the propagated vector through a long chain of random matrices does a good job of estimating the invariant probability measure, but with a level of uncertainty. A potentially more accurate numerical technique by Froyland and Aihara obtains the invariant measure as a left eigenvector of a large sparse matrix containing probability values determined by the action of the random matrices on input vectors. We first apply these two techniques to a random Fibonacci sequence whose Lyapunov exponent was determined by Viswanath. We then demonstrate these techniques on a quarter-wave stack model with binary discrete disorder in layer thickness, and compare results to the continuously disordered counterpart.
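The Monte Carlo approach mentioned above can be illustrated on the random Fibonacci sequence, whose growth rate Viswanath computed as |t_n|^(1/n) -> 1.13198824... Propagating a renormalized direction vector through the 2x2 transfer matrices and averaging the log-growth estimates the upper Lyapunov exponent (this sketch tracks only the exponent, not the invariant measure itself).

```python
import numpy as np

def lyapunov_random_fibonacci(n_steps, seed=0):
    """Estimate the upper Lyapunov exponent of the random matrix
    product behind t_n = t_{n-1} +/- t_{n-2} (signs i.i.d. with
    probability 1/2). The direction vector is renormalized each step
    so only the average log-growth accumulates."""
    rng = np.random.default_rng(seed)
    v = np.array([1.0, 1.0])
    log_growth = 0.0
    for _ in range(n_steps):
        sign = 1.0 if rng.random() < 0.5 else -1.0
        m = np.array([[0.0, 1.0], [1.0, sign]])  # (t_{n-2}, t_{n-1}) -> (t_{n-1}, t_n)
        v = m @ v
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm
    return log_growth / n_steps

lam = lyapunov_random_fibonacci(100_000)
growth = np.exp(lam)   # should approach Viswanath's constant, ~1.1319882
```

In the disordered quarter-wave stack, the same exponent plays the role of the localization factor; the invariant-measure methods in the paper refine exactly this quantity.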
Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R
2014-01-01
We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, which must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782
MODFLOW-Style parameters in underdetermined parameter estimation.
D'Oria, Marco; Fienen, Michael N
2012-01-01
In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes. PMID:21352210
MODFLOW-style parameters in underdetermined parameter estimation
D'Oria, Marco D.; Fienen, Michael J.
2012-01-01
In this article, we discuss the use of MODFLOW-Style parameters in the numerical codes MODFLOW_2005 and MODFLOW_2005-Adjoint for the definition of variables in the Layer Property Flow package. Parameters are a useful tool to represent aquifer properties in both codes and are the only option available in the adjoint version. Moreover, for overdetermined parameter estimation problems, the parameter approach for model input can make data input easier. We found that if each estimable parameter is defined by one parameter, the codes require a large computational effort and substantial gains in efficiency are achieved by removing logical comparison of character strings that represent the names and types of the parameters. An alternative formulation already available in the current implementation of the code can also alleviate the efficiency degradation due to character comparisons in the special case of distributed parameters defined through multiplication matrices. The authors also hope that lessons learned in analyzing the performance of the MODFLOW family codes will be enlightening to developers of other Fortran implementations of numerical codes.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
NASA Astrophysics Data System (ADS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-06-01
Mathematical models provide a formal description of neuron activity, which can help us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking events are treated as a Gamma stochastic process. The scale and shape parameters of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use the method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly across three different acupuncture stimulus frequencies: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
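The LIF response stage described above can be sketched with a minimal simulation; the membrane parameters and input values below are illustrative assumptions, not the quantities fitted in the paper:

```python
def simulate_lif(i_input, t_max=1.0, dt=1e-4, tau_m=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by a constant input;
    returns spike times (all parameter values are illustrative)."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        # Membrane equation: tau_m * dv/dt = -(v - v_rest) + i_input
        v += (dt / tau_m) * (-(v - v_rest) + i_input)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A stronger input parameter shortens the time to threshold and
# therefore raises the firing rate.
few = simulate_lif(1.2)
many = simulate_lif(3.0)
```

This monotone input-to-rate relationship is what makes it possible, in reverse, to map estimated spiking characteristics back onto input parameters.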
EDP Applications to Musical Bibliography: Input Considerations
ERIC Educational Resources Information Center
Robbins, Donald C.
1972-01-01
The application of Electronic Data Processing (EDP) has been a boon in the analysis and bibliographic control of music. However, an extra step of encoding must be undertaken for input of music. The best hope to facilitate musical input is the development of an Optical Character Recognition (OCR) music-reading machine. (29 references) (Author/NH)
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND..., requests for input and/or Web site), as well as through a notice in the Federal Register, from...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 15 2014-01-01 2014-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or Web site), as well as through a notice in the Federal Register, from the following...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 15 2012-01-01 2012-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or Web site), as well as through a notice in the Federal Register, from the following...
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 15 2012-01-01 2012-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 15 2013-01-01 2013-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or Web site), as well as through a notice in the Federal Register, from the following...
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 15 2014-01-01 2014-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 15 2013-01-01 2013-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND... input and/or via Web site), as well as through a notice in the Federal Register, from the...
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Managing Input during Assistive Technology Product Design
ERIC Educational Resources Information Center
Choi, Young Mi
2011-01-01
Many different sources of input are available to assistive technology innovators during the course of designing products. However, there is little information on which ones may be most effective or how they may be efficiently utilized within the design process. The aim of this project was to compare how three types of input--from simulation tools,…
39 CFR 3020.92 - Public input.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Public input. 3020.92 Section 3020.92 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL PRODUCT LISTS Requests Initiated by the Postal Service to Change the Mail Classification Schedule § 3020.92 Public input. The Commission shall publish...
Statistical identification of effective input variables. [SCREEN
Vaurio, J.K.
1982-09-01
A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code, SCREEN, has been developed for implementing the screening techniques. Its efficiency has been demonstrated on several examples, and it has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
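The ranking principle can be illustrated with a toy screening study: sample input combinations, run the code, and order the inputs by the strength of their correlation with the output. This is a generic sketch with an invented three-input "code", not the SCREEN implementation itself:

```python
import random

def toy_code(x1, x2, x3):
    """Stand-in 'large computer code': strongly sensitive to x1,
    weakly to x2, insensitive to x3."""
    return 10.0 * x1 + 0.5 * x2 + 0.0 * x3

def rank_inputs(n_runs=200, seed=1):
    rng = random.Random(seed)
    runs = [[rng.uniform(0.0, 1.0) for _ in range(3)] for _ in range(n_runs)]
    outputs = [toy_code(*r) for r in runs]

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Rank inputs by |correlation| with the output (sensitivity-importance).
    scores = {"x%d" % (i + 1): abs(corr([r[i] for r in runs], outputs))
              for i in range(3)}
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_inputs()
```

The dominant input surfaces at the top of the ranking without any prior elimination of variables, which is the economy the abstract emphasizes.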
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Remote balance weighs accurately amid high radiation
NASA Technical Reports Server (NTRS)
Eggenberger, D. N.; Shuck, A. B.
1969-01-01
Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.
NASA Astrophysics Data System (ADS)
Blackman, Jonathan; Field, Scott; Galley, Chad; Hemberger, Daniel; Scheel, Mark; Schmidt, Patricia; Smith, Rory; SXS Collaboration
2016-03-01
We are now in the advanced detector era of gravitational wave astronomy, and the merger of two black holes (BHs) is one of the most promising sources of gravitational waves that could be detected on earth. To infer the BH masses and spins, the observed signal must be compared to waveforms predicted by general relativity for millions of binary configurations. Numerical relativity (NR) simulations can produce accurate waveforms, but are prohibitively expensive to use for parameter estimation. Other waveform models are fast enough but may lack accuracy in portions of the parameter space. Numerical relativity surrogate models attempt to rapidly predict the results of a NR code with a small or negligible modeling error, after being trained on a set of input waveforms. Such surrogate models are ideal for parameter estimation, as they are both fast and accurate, and have already been built for the case of non-spinning BHs. Using 250 input waveforms, we build a surrogate model for waveforms from the Spectral Einstein Code (SpEC) for a subspace of precessing systems.
Detailed map of a cis-regulatory input function
NASA Astrophysics Data System (ADS)
Setty, Y.; Mayo, A. E.; Surette, M. G.; Alon, U.
2003-06-01
Most genes are regulated by multiple transcription factors that bind specific sites in DNA regulatory regions. These cis-regulatory regions perform a computation: the rate of transcription is a function of the active concentrations of each of the input transcription factors. Here, we used accurate gene expression measurements from living cell cultures, bearing GFP reporters, to map in detail the input function of the classic lacZYA operon of Escherichia coli, as a function of about a hundred combinations of its two inducers, cAMP and isopropyl β-D-thiogalactoside (IPTG). We found an unexpectedly intricate function with four plateau levels and four thresholds. This result compares well with a mathematical model of the binding of the regulatory proteins cAMP receptor protein (CRP) and LacI to the lac regulatory region. The model is also used to demonstrate that with few mutations, the same region could encode much purer AND-like or even OR-like functions. This possibility means that the wild-type region is selected to perform an elaborate computation in setting the transcription rate. The present approach can be generally used to map the input functions of other genes.
Understanding the Code: keeping accurate records.
Griffith, Richard
2015-10-01
In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404
2012-01-01
Background: Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples, which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Methods: Thirty-six historical FLT-PET datasets with concurrent arterial sampling were available for this study. A population average of baseline-scan blood data was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated, using, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived POP-IF3M, the only input function not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Results: Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R2 = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R2 = 0.81-0.94) and good correlation (R2 = 0.60-0.85) with Ki-67. Conclusions: Input functions generated using these simplified protocols over a scan duration of 60 min estimate net FLT retention with reasonable accuracy. PMID
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to their optimization and space-searching capabilities, genetic algorithms were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
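Genetic-algorithm input selection can be sketched as evolving a bit mask over candidate inputs. The fitness function and the set of "informative" inputs below are invented for illustration; they are not the SSME parameter lists or the fitness criterion used in the study:

```python
import random

rng = random.Random(42)
N_INPUTS = 8
INFORMATIVE = {0, 2}  # hypothetical "truly useful" inputs

def fitness(mask):
    """Reward masks that include the informative inputs and
    penalize carrying extra, uninformative ones."""
    hits = sum(1 for i in INFORMATIVE if mask[i])
    extras = sum(mask) - hits
    return 2.0 * hits - 0.5 * extras

def evolve(pop_size=30, generations=40, p_mut=0.1):
    pop = [[rng.randint(0, 1) for _ in range(N_INPUTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_INPUTS)       # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_INPUTS):              # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The search converges on masks containing the informative inputs without the fitness function ever naming them individually, mirroring the "no explicit domain knowledge" claim.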
Estimating soil organic carbon input to marine sediments (Invited)
NASA Astrophysics Data System (ADS)
Weijers, J.; Schouten, S.; Schefuss, E.; Schneider, R. R.; Sinninghe Damsté, J. S.
2009-12-01
Estimating (past) input of terrestrial organic carbon (OC) in marine sediments is complicated due to the heterogeneity of the OC. Two end member mixing models based on different parameters often give different results. This is in part due to the fact that terrestrial OC is represented by only one end member (often representing plant OC), whereas it in fact consists of two OC pools, i.e., plant and soil OC. The branched vs. isoprenoid tetraether (BIT) index is a new proxy for soil OC input, with the branched tetraether membrane lipids being derived from bacteria living in soils and peat bogs [1]. We have now applied this molecular proxy in a three end member mixing model, in conjunction with d13C and C/N values of total organic matter, in a marine sediment core from the Congo deep sea fan to estimate inputs of marine, soil and plant OC to this location over the last deglaciation. Results indicate an average of 45% of the OC being of soil origin, pointing to the importance of soil OC and the need for proper characterization of this fraction. [1] Hopmans et al. (2004) EPSL 224, 107-116. Figure 1: Composition of the organic carbon input to the Congo deep sea fan over the last 20 thousand years. YD = Younger Dryas; LGM = Last Glacial Maximum
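A three end member mixing calculation of this kind reduces to a small linear system: mass balance plus the d13C and BIT balances. The end-member signatures below are illustrative assumptions, not the calibration used for the Congo fan core:

```python
import numpy as np

# Hypothetical end-member signatures (order: marine, soil, plant).
d13c = np.array([-20.0, -26.0, -28.0])  # per mil
bit = np.array([0.02, 0.95, 0.05])      # BIT index

def unmix(d13c_obs, bit_obs):
    """Solve the three end member mixing model: mass balance plus
    d13C and BIT balances form a 3x3 linear system."""
    A = np.vstack([np.ones(3), d13c, bit])
    b = np.array([1.0, d13c_obs, bit_obs])
    return np.linalg.solve(A, b)  # [f_marine, f_soil, f_plant]

# Forward-model a known mixture, then recover its fractions.
f_true = np.array([0.35, 0.45, 0.20])
f_est = unmix(d13c @ f_true, bit @ f_true)
```

With only two end members, the soil and plant columns would be merged and the recovered "terrestrial" fraction would depend on which signature is chosen, which is the ambiguity the abstract describes.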
Input space-dependent controller for multi-hazard mitigation
NASA Astrophysics Data System (ADS)
Cao, Liang; Laflamme, Simon
2016-04-01
Semi-active and active structural control systems are advanced mechanical devices and systems capable of high damping performance, ideal for mitigation of multi-hazards. The implementation of these devices within structural systems is still in its infancy, because of the complexity in designing a robust closed-loop control system that can ensure reliable and high mitigation performance. Particular challenges in designing a controller for multi-hazard mitigation include: 1) very large uncertainties on dynamic parameters and unknown excitations; 2) limited measurements with probabilities of sensor failure; 3) immediate performance requirements; and 4) unavailable sets of input-output during design. To facilitate the implementation of structural control systems, a new type of controllers with high adaptive capabilities is proposed. It is based on real-time identification of an embedding that represents the essential dynamics found in the input space, or in the sensors measurements. This type of controller is termed input-space dependent controllers (ISDC). In this paper, the principle of ISDC is presented, their stability and performance derived analytically for the case of harmonic inputs, and their performance demonstrated in the case of different types of hazards. Results show the promise of this new type of controller at mitigating multi-hazards by 1) relying on local and limited sensors only; 2) not requiring prior evaluation or training; and 3) adapting to systems non-stationarities.
Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights
NASA Astrophysics Data System (ADS)
Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato
2016-07-01
Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.
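A lognormally distributed synaptic weight population can be sampled directly; the distribution parameters below are illustrative, not fitted values from the model:

```python
import random

rng = random.Random(3)

def lognormal_weights(n, mu=-0.7, sigma=1.0):
    """Sample n synaptic weights from a lognormal distribution
    (mu and sigma of the underlying normal are illustrative)."""
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

w = lognormal_weights(20000)
mean_w = sum(w) / len(w)
median_w = sorted(w)[len(w) // 2]
# Heavy tail: a few strong synapses pull the mean above the median.
```

This mean-above-median asymmetry is the signature of the heavy-tailed connectivity such networks rely on for sustaining irregular activity.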
Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator
Asaad, Sameh W.; Kapur, Mohit
2016-03-15
A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first number. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the given set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.
CIGALEMC: GALAXY PARAMETER ESTIMATION USING A MARKOV CHAIN MONTE CARLO APPROACH WITH CIGALE
Serra, Paolo; Amblard, Alexandre; Temi, Pasquale; Im, Stephen; Noll, Stefan
2011-10-10
We introduce a fast Markov Chain Monte Carlo (MCMC) exploration of the astrophysical parameter space using a modified version of the publicly available code Code Investigating GALaxy Emission (CIGALE). The original CIGALE builds a grid of theoretical spectral energy distribution (SED) models and fits to photometric fluxes from ultraviolet to infrared to put constraints on parameters related to both formation and evolution of galaxies. Such a grid-based method can lead to a long and challenging parameter extraction since the computation time increases exponentially with the number of parameters considered and results can be dependent on the density of sampling points, which must be chosen in advance for each parameter. MCMC methods, on the other hand, scale approximately linearly with the number of parameters, allowing a faster and more accurate exploration of the parameter space by using a smaller number of efficiently chosen samples. We test our MCMC version of the code CIGALE (called CIGALEMC) with simulated data. After checking the ability of the code to retrieve the input parameters used to build the mock sample, we fit theoretical SEDs to real data from the well-known and well-studied Spitzer Infrared Nearby Galaxy Survey sample. We discuss constraints on the parameters and show the advantages of our MCMC sampling method in terms of accuracy of the results and optimization of CPU time.
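The grid-versus-MCMC contrast can be illustrated with a minimal random-walk Metropolis sampler recovering a single parameter from mock data; this is a generic sketch, not the CIGALEMC implementation, and the data model is invented:

```python
import math
import random

rng = random.Random(0)

# Mock observations drawn around a "true" parameter value.
TRUE_MU, SIGMA = 4.0, 1.0
data = [TRUE_MU + SIGMA * rng.gauss(0.0, 1.0) for _ in range(200)]

def log_like(mu):
    return -0.5 * sum((x - mu) ** 2 for x in data) / SIGMA**2

def metropolis(n_samples=3000, step=0.2, mu0=0.0):
    """Random-walk Metropolis: propose a Gaussian move and accept
    it with probability min(1, L(proposed)/L(current))."""
    mu, ll = mu0, log_like(mu0)
    chain = []
    for _ in range(n_samples):
        prop = mu + rng.gauss(0.0, step)
        ll_prop = log_like(prop)
        if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
            mu, ll = prop, ll_prop
        chain.append(mu)
    return chain

chain = metropolis()
mu_hat = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

Adding a second parameter only adds one more proposal dimension, whereas a grid would square its number of evaluations; that is the scaling advantage the abstract refers to.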
Cumulative distribution function solutions of advection–reaction equations with uncertain parameters
Boso, F.; Broyda, S. V.; Tartakovsky, D. M.
2014-01-01
We derive deterministic cumulative distribution function (CDF) equations that govern the evolution of CDFs of state variables whose dynamics are described by the first-order hyperbolic conservation laws with uncertain coefficients that parametrize the advective flux and reactive terms. The CDF equations are subjected to uniquely specified boundary conditions in the phase space, thus obviating one of the major challenges encountered by more commonly used probability density function equations. The computational burden of solving CDF equations is insensitive to the magnitude of the correlation lengths of random input parameters. This is in contrast to both Monte Carlo simulations (MCSs) and direct numerical algorithms, whose computational cost increases as correlation lengths of the input parameters decrease. The CDF equations are, however, not exact because they require a closure approximation. To verify the accuracy and robustness of the large-eddy-diffusivity closure, we conduct a set of numerical experiments which compare the CDFs computed with the CDF equations with those obtained via MCSs. This comparison demonstrates that the CDF equations remain accurate over a wide range of statistical properties of the two input parameters, such as their correlation lengths and variance of the coefficient that parametrizes the advective flux. PMID:24910529
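The Monte Carlo baseline that the CDF equations are verified against can be sketched for a toy reaction equation du/dt = -k*u with an uncertain rate; the distribution and values below are invented for illustration, not the paper's setup:

```python
import bisect
import math
import random

rng = random.Random(11)

def mc_cdf(u0=1.0, t=1.0, n_samples=5000):
    """Monte Carlo estimate of the CDF of u(t) for du/dt = -k*u with
    uncertain rate k ~ Uniform(0.5, 1.5): sample k, evaluate the exact
    solution u(t) = u0*exp(-k*t), and return the empirical CDF."""
    samples = sorted(u0 * math.exp(-rng.uniform(0.5, 1.5) * t)
                     for _ in range(n_samples))
    def cdf(u):
        # Fraction of realizations with u(t) <= u.
        return bisect.bisect_right(samples, u) / n_samples
    return cdf

cdf_u = mc_cdf()
```

Each refinement of such an MCS estimate requires more model solves, whereas a deterministic CDF equation is solved once; that cost asymmetry motivates the approach in the abstract.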
Set Theory Applied to Uniquely Define the Inputs to Territorial Systems in Emergy Analyses
The language of set theory can be utilized to represent the emergy involved in all processes. In this paper we use set theory in an emergy evaluation to ensure an accurate representation of the inputs to territorial systems. We consider a generic territorial system and we describ...
The Effects of Input-Based Practice on Pragmatic Development of Requests in L2 Chinese
ERIC Educational Resources Information Center
Li, Shuai
2012-01-01
This study examined the effects of input-based practice on developing accurate and speedy requests in second-language Chinese. Thirty learners from intermediate-level Chinese classes were assigned to an intensive training group (IT), a regular training group (RT), and a control group. The IT and the RT groups practiced using four Chinese…
Reactive nitrogen inputs to US lands and waterways: how certain are we about sources and fluxes?
An overabundance of reactive nitrogen (N) as a result of anthropogenic activities has led to multiple human health and environmental concerns. Efforts to address these concerns require an accurate accounting of N inputs. Here, we present a novel synthesis of data describing N inp...
Mariño, Inés P; Míguez, Joaquín
2005-11-01
We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods. PMID:16383795
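The synchronization-based cost idea can be sketched for the Lorenz system: drive the (y, z) subsystem of a secondary model with the observed scalar x series and score how well sigma*(y - x) predicts the observed derivative. The grid search below stands in for the paper's recursive minimization, and all numerical settings are illustrative assumptions:

```python
import numpy as np

SIGMA, BETA, RHO_TRUE = 10.0, 8.0 / 3.0, 28.0
DT, N = 0.002, 30000

def lorenz_x_series(rho, n=N, dt=DT):
    """Euler-integrate the Lorenz system; return the scalar x series
    (the 'observed' output of the primary chaotic system)."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n)
    for i in range(n):
        dx = SIGMA * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - BETA * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

def sync_cost(rho_hat, x_obs, dt=DT):
    """Drive the (y, z) subsystem of a secondary Lorenz model with the
    observed x; the mismatch between dx/dt and sigma*(y - x) is
    smallest when rho_hat matches the primary system."""
    y, z = 0.0, 0.0
    cost, n = 0.0, len(x_obs)
    for i in range(n - 1):
        x = x_obs[i]
        if i > n // 2:  # skip the synchronization transient
            dx_obs = (x_obs[i + 1] - x) / dt
            cost += (dx_obs - SIGMA * (y - x)) ** 2
        y, z = (y + dt * (x * (rho_hat - z) - y),
                z + dt * (x * y - BETA * z))
    return cost

x_obs = lorenz_x_series(RHO_TRUE)
candidates = [20.0, 24.0, 28.0, 32.0, 36.0]
rho_est = min(candidates, key=lambda r: sync_cost(r, x_obs))
```

As in the abstract, the only external input to the secondary system is the parameter being adjusted; everything else is inferred from the observed scalar time series.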
Wireless, relative-motion computer input device
Holzrichter, John F.; Rosenbury, Erwin T.
2004-05-18
The present invention provides a system for controlling a computer display in a workspace using an input unit/output unit. A train of EM waves is sent out to flood the workspace. EM waves are reflected from the input unit/output unit. A relative-distance-moved information signal is created using the EM waves that are reflected from the input unit/output unit. Algorithms are used to convert the relative-distance-moved information signal to a display signal. The computer display is controlled in response to the display signal.
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
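The filtering step can be illustrated with a scalar Kalman filter estimating a single constant deviation parameter from noisy measurements. This is a minimal stand-in, not the paper's five-parameter extended Kalman filter with its nonlinear engine model; the noise levels and the "true" deviation are invented:

```python
import random

rng = random.Random(7)

TRUE_DEV = 0.8        # hypothetical unmeasured deviation parameter
MEAS_NOISE_SD = 0.5   # assumed sensor noise

def kalman_estimate(n_meas=300):
    """Scalar Kalman filter for a constant parameter observed through
    additive Gaussian noise."""
    x_hat, p = 0.0, 1.0           # initial estimate and variance
    r = MEAS_NOISE_SD ** 2        # measurement noise variance
    for _ in range(n_meas):
        z = TRUE_DEV + rng.gauss(0.0, MEAS_NOISE_SD)
        k = p / (p + r)           # Kalman gain
        x_hat += k * (z - x_hat)  # measurement update
        p *= (1.0 - k)            # variance update
    return x_hat

dev_hat = kalman_estimate()
```

The estimate converges toward the true deviation as measurements accumulate, which is the basic mechanism the flight estimator exploits, extended there to multiple parameters through a linearized model.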
Magnetospheric Energy Input during Intense Geomagnetic Storms in SC23
NASA Astrophysics Data System (ADS)
Besliu-Ionescu, Diana; Maris Muntean, Georgeta; Dobrica, Venera; Mierla, Marilena
2015-04-01
Geomagnetic storm connections to solar eruptive phenomena in solar cycle 23 (SC23) have been intensively studied and are a subject of great importance because of their various effects on our day-to-day life. We analyse the energy transfer from the solar wind into the magnetosphere during intense geomagnetic storms, defined by Dst ≤ -150 nT. There were 29 intense storms during SC23. We use the Akasofu parameter (Akasofu, 1981) to compute the ɛ function and study its time profile. We compute the energy input efficiency during the main phase of each geomagnetic storm. We compute the magnetospheric energy input using the formula introduced by Wang et al. (2014) and compare these results with the ɛ function for the geomagnetic storms of October 29-30, 2003.
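One common SI form of the Akasofu coupling function is ɛ = (4π/μ0) v B² sin⁴(θ/2) l0², with clock angle θ and an empirical scale length l0 of about 7 Earth radii. Conventions differ on whether B is the total or the transverse IMF magnitude; the transverse choice below, and the sample solar-wind values, are assumptions for illustration:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [H/m]
R_E = 6.371e6          # Earth radius [m]
L0 = 7.0 * R_E         # empirical scale length in Akasofu's formula

def epsilon_akasofu(v, by, bz):
    """Akasofu epsilon parameter [W] for solar wind speed v [m/s] and
    IMF components by, bz [T]; theta is the IMF clock angle."""
    b = math.hypot(by, bz)          # transverse IMF magnitude (assumed)
    theta = math.atan2(by, bz)      # clock angle
    return (4.0 * math.pi / MU0) * v * b**2 * math.sin(theta / 2.0) ** 4 * L0**2

# Southward IMF (bz < 0) maximizes the energy input ...
eps_south = epsilon_akasofu(450e3, 0.0, -10e-9)
# ... while purely northward IMF gives zero coupling.
eps_north = epsilon_akasofu(450e3, 0.0, 10e-9)
```

The sin⁴(θ/2) factor is what makes ɛ a useful storm-time diagnostic: it switches the energy input on only when the IMF turns southward.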
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
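The tolerance notation can be harvested from arbitrary input files with a simple pattern match, without parsing each code's file format. This is a sketch of the idea, not the tool described above; the field name is invented:

```python
import random
import re

# Matches e.g. "5.25 +/- 0.01" in any field of an input-file line.
TOL_RE = re.compile(r"([-+]?\d*\.?\d+)\s*\+/-\s*(\d*\.?\d+)")

def sample_line(line, rng):
    """Replace each 'value +/- tol' token with a uniform draw from
    [value - tol, value + tol] for one Monte Carlo realization."""
    def draw(match):
        val, tol = float(match.group(1)), float(match.group(2))
        return repr(rng.uniform(val - tol, val + tol))
    return TOL_RE.sub(draw, line)

rng = random.Random(0)
perturbed = sample_line("wall_temperature = 5.25 +/- 0.01", rng)
```

Because only the tolerance tokens are rewritten, the surrounding file structure passes through untouched, which is what makes the approach independent of any particular code's input format.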
Sinusoidal input describing function for hysteresis followed by elementary backlash
NASA Technical Reports Server (NTRS)
Ringland, R. F.
1976-01-01
The author proposes a new sinusoidal input describing function which accounts for the serial combination of hysteresis followed by elementary backlash in a single nonlinear element. The output of the hysteresis element drives the elementary backlash element. Various analytical forms of the describing function are given, depending on the a/A ratio, where a is the half width of the hysteresis band or backlash gap, and A is the amplitude of the assumed input sinusoid, and on the value of the parameter representing the fraction of a attributed to the backlash characteristic. The negative inverse describing function is plotted on a gain-phase plot, and it is seen that a relatively small amount of backlash leads to domination of the backlash character in the describing function. The extent of the region of the gain-phase plane covered by the describing function is such as to guarantee some form of limit cycle behavior in most closed-loop systems.
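The describing-function idea can be checked numerically: drive the nonlinearity with a sinusoid, discard the transient, and project the steady-state output onto the fundamental. The sketch below implements only the elementary backlash element (the series hysteresis stage is omitted for brevity) and illustrates the method, not the paper's closed-form expressions:

```python
import math

def backlash(x, a, y0=0.0):
    """Elementary backlash (free play) with half-gap a: the output holds its
    value until the input has taken up the gap, then follows it offset by a."""
    y, out = y0, []
    for xi in x:
        if xi - y > a:
            y = xi - a
        elif y - xi > a:
            y = xi + a
        out.append(y)
    return out

def describing_function(nonlin, A, a, n=4096, cycles=8):
    """Numerically estimate the sinusoidal-input describing function N(A):
    drive the nonlinearity with A*sin(wt), discard transient cycles, and
    project the last cycle of the output onto the fundamental."""
    t = [2 * math.pi * k / n for k in range(n * cycles)]
    y = nonlin([A * math.sin(tk) for tk in t], a)
    ys, ts = y[-n:], t[-n:]
    b1 = 2.0 / n * sum(yk * math.sin(tk) for yk, tk in zip(ys, ts))  # in-phase part
    a1 = 2.0 / n * sum(yk * math.cos(tk) for yk, tk in zip(ys, ts))  # quadrature part
    return complex(b1, a1) / A

N_exact = describing_function(backlash, 1.0, 0.0)  # no gap: unity gain, no lag
N_gap = describing_function(backlash, 1.0, 0.2)    # a/A = 0.2: gain drop, phase lag
```

Even a modest gap (a/A = 0.2) produces a negative imaginary part, i.e. the phase lag that drives the limit-cycle behavior noted in the abstract.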
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
Accurate camera calibration method specialized for virtual studios
NASA Astrophysics Data System (ADS)
Okubo, Hidehiko; Yamanouchi, Yuko; Mitsumine, Hideki; Fukaya, Takashi; Inoue, Seiki
2008-02-01
Virtual studios are a popular technology for TV programs that make it possible to synchronize computer graphics (CG) with real-shot images as the camera moves. Normally, high geometrical matching accuracy between CG and the real-shot image cannot be expected from a real-time system, and directors sometimes compromise on shot direction so that the mismatch does not become visible. We therefore developed a hybrid camera calibration method and CG generating system to achieve accurate geometrical matching of CG and real shots in a virtual studio. Our calibration method is intended for camera systems on a platform and tripod with rotary encoders that can measure pan/tilt angles. To solve for the camera model and initial pose, we enhanced the bundle adjustment algorithm to fit the camera model, using pan/tilt data as known parameters and optimizing all other parameters to be invariant against pan/tilt values. This initialization yields highly accurate camera positions and orientations consistent with any pan/tilt values. We also created a CG generator that implements the lens distortion function with GPU programming. By applying the lens distortion parameters obtained in the camera calibration process, we obtained good compositing results.
Robust ODF smoothing for accurate estimation of fiber orientation.
Beladi, Somaieh; Pathirana, Pubudu N; Brotchie, Peter
2010-01-01
Q-ball imaging was presented as a model-free, linear and multimodal diffusion-sensitive approach to reconstruct the diffusion orientation distribution function (ODF) using diffusion-weighted MRI data. ODFs are widely used to estimate fiber orientations. A smoothness constraint has been proposed to achieve a balance between angular resolution and noise stability in ODF reconstruction, and different regularization methods have been proposed for this purpose. However, these methods are not robust and are quite sensitive to the global regularization parameter. Although numerical methods such as the L-curve test can be used to define a globally appropriate regularization parameter, no single value is suitable for all regions of interest. This may result in over-smoothing and potentially in neglecting an existing fiber population. In this paper, we propose including an interpolation step prior to the spherical harmonic decomposition. This interpolation approach, based on Delaunay triangulation, provides a reliable, robust and accurate smoothing method. It is easy to implement and does not require other numerical methods to define the required parameters. The fiber orientations estimated using this approach are also more accurate than those obtained with other common approaches. PMID:21096202
García, Paul S; Wright, Terrence M; Cunningham, Ian R; Calabrese, Ronald L
2008-09-01
Previously we presented a quantitative description of the spatiotemporal pattern of inhibitory synaptic input from the heartbeat central pattern generator (CPG) to segmental motor neurons that drive heartbeat in the medicinal leech and the resultant coordination of CPG interneurons and motor neurons. To begin elucidating the mechanisms of coordination, we explore intersegmental and side-to-side coordination in an ensemble model of all heart motor neurons and their known synaptic inputs and electrical coupling. Model motor neuron intrinsic properties were kept simple, enabling us to determine the extent to which input and electrical coupling acting together can account for observed coordination in the living system in the absence of a substantive contribution from the motor neurons themselves. The living system produces an asymmetric motor pattern: motor neurons on one side fire nearly in synchrony (synchronous), whereas on the other they fire in a rear-to-front progression (peristaltic). The model reproduces the general trends of intersegmental and side-to-side phase relations among motor neurons, but the match with the living system is not quantitatively accurate. Thus realistic (experimentally determined) inputs do not produce similarly realistic output in our model, suggesting that motor neuron intrinsic properties may contribute to their coordination. By varying parameters that determine electrical coupling, conduction delays, intraburst synaptic plasticity, and motor neuron excitability, we show that the most important determinant of intersegmental and side-to-side phase relations in the model was the spatiotemporal pattern of synaptic inputs, although phasing was influenced significantly by electrical coupling. PMID:18579654
Accurate and efficient reconstruction of deep phylogenies from structured RNAs
Stocsits, Roman R.; Letsch, Harald; Hertel, Jana; Misof, Bernhard; Stadler, Peter F.
2009-01-01
Ribosomal RNA (rRNA) genes are probably the most frequently used data source in phylogenetic reconstruction. Individual columns of rRNA alignments are not independent as a consequence of their highly conserved secondary structures. Unless explicitly taken into account, these correlations can distort the phylogenetic signal and/or lead to gross overestimates of tree stability. Maximum likelihood and Bayesian approaches are of course amenable to using RNA-specific substitution models that treat conserved base pairs appropriately, but they require accurate secondary structure models as input. So far, however, no accurate and easy-to-use tool has been available for computing structure-aware alignments and consensus structures that can deal with the large rRNAs. The RNAsalsa approach is designed to fill this gap. Capitalizing on the improved accuracy of pairwise consensus structures and informed by a priori knowledge of group-specific structural constraints, the tool provides both alignments and consensus structures that are of sufficient accuracy for routine phylogenetic analysis based on RNA-specific substitution models. The power of the approach is demonstrated using two rRNA data sets: a mitochondrial rRNA set of 26 Mammalia, and a collection of 28S nuclear rRNAs representative of the five major echinoderm groups. PMID:19723687
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Scaling of global input-output networks
NASA Astrophysics Data System (ADS)
Liang, Sai; Qi, Zhengling; Qu, Shen; Zhu, Ji; Chiu, Anthony S. F.; Jia, Xiaoping; Xu, Ming
2016-06-01
Examining scaling patterns of networks can help understand how structural features relate to the behavior of the networks. Input-output networks consist of industries as nodes and inter-industrial exchanges of products as links. Previous studies consider limited measures for node strengths and link weights, and also ignore the impact of dataset choice. We consider a comprehensive set of indicators in this study that are important in economic analysis, and also examine the impact of dataset choice, by studying input-output networks in individual countries and the entire world. Results show that Burr, Log-Logistic, Log-normal, and Weibull distributions can better describe scaling patterns of global input-output networks. We also find that dataset choice has limited impacts on the observed scaling patterns. Our findings can help examine the quality of economic statistics, estimate missing data in economic statistics, and identify key nodes and links in input-output networks to support economic policymaking.
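The distribution-fitting step described above can be illustrated with one of the named families. The sketch below fits a log-normal to a set of node strengths by maximum likelihood and reports the Kolmogorov-Smirnov distance of the fit; the synthetic data and the restriction to the log-normal family are assumptions for demonstration (the study also considers Burr, log-logistic and Weibull fits):

```python
import math
import random
import statistics

def lognormal_ks(strengths):
    """Fit a log-normal by maximum likelihood (normal MLE on the logs) and
    return the Kolmogorov-Smirnov distance between data and fitted CDF."""
    logs = [math.log(s) for s in strengths]
    mu, sigma = statistics.fmean(logs), statistics.pstdev(logs)
    cdf = lambda x: 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))
    xs = sorted(strengths)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n) for i, x in enumerate(xs))

# Synthetic "node strengths" drawn from an actual log-normal for demonstration.
rng = random.Random(7)
data = [rng.lognormvariate(0.0, 1.0) for _ in range(1000)]
ks = lognormal_ks(data)
```

Comparing the KS distance across candidate families is one simple way to decide which distribution "better describes" a scaling pattern, in the sense used in the abstract.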
Multi-input distributed classifiers for synthetic genetic circuits.
Kanakov, Oleg; Kotelnikov, Roman; Alsaedi, Ahmed; Tsimring, Lev; Huerta, Ramón; Zaikin, Alexey; Ivanchenko, Mikhail
2015-01-01
For practical construction of complex synthetic genetic networks able to perform elaborate functions, it is important to have a pool of relatively simple modules with different functionality which can be compounded together. To complement the engineering of very different existing synthetic genetic devices such as switches, oscillators or logic gates, we propose and develop here a design of a synthetic multi-input classifier based on a recently introduced distributed classifier concept. A heterogeneous population of cells acts as a single classifier, whose output is obtained by summarizing the outputs of individual cells. The learning ability is achieved by pruning the population, instead of tuning parameters of an individual cell. The present paper is focused on evaluating two possible schemes of multi-input gene classifier circuits. We demonstrate their suitability for implementing a multi-input distributed classifier capable of separating data which are inseparable for single-input classifiers, and characterize the performance of the classifiers by analytical and numerical results. The simpler scheme implements a linear classifier in a single cell and is targeted at separable classification problems with simple class borders. A hard learning strategy is used to train a distributed classifier by removing from the population any cell answering incorrectly to at least one training example. The other scheme implements a circuit with a bell-shaped response in a single cell to allow a potentially arbitrary shape of the classification border in the input space of a distributed classifier. Inseparable classification problems are addressed using a soft learning strategy, characterized by a probabilistic decision to keep or discard a cell at each training iteration. We expect that our classifier design will contribute to the development of robust and predictable synthetic biosensors, which have the potential to affect applications in many fields, including medicine and industry.
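The hard learning strategy, discard any cell that misclassifies at least one training example, can be sketched with a population of random two-input linear "cells". All names and the toy data below are illustrative, not taken from the paper:

```python
import random

def make_cell(rng):
    """A 'cell' as a random two-input linear classifier: fires iff w1*x1 + w2*x2 > b."""
    return (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))

def cell_output(cell, x):
    w1, w2, b = cell
    return 1 if w1 * x[0] + w2 * x[1] > b else 0

def hard_learn(population, training):
    """Hard learning: discard every cell that answers incorrectly on at least one example."""
    return [c for c in population
            if all(cell_output(c, x) == y for x, y in training)]

def classify(population, x):
    """Distributed output: thresholded average response of the surviving population."""
    return int(sum(cell_output(c, x) for c in population) / len(population) > 0.5)

rng = random.Random(1)
population = [make_cell(rng) for _ in range(5000)]
training = [((0.9, 0.9), 1), ((0.8, 0.7), 1), ((0.1, 0.1), 0), ((0.2, 0.3), 0)]
survivors = hard_learn(population, training)
```

Note that no individual cell is tuned; learning happens entirely through pruning, which is the design point the abstract emphasizes.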
Fast and accurate estimation for astrophysical problems in large databases
NASA Astrophysics Data System (ADS)
Richards, Joseph W.
2010-10-01
A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems
Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture
Gao, Zhiquan; Yu, Yao; Zhou, Yu; Du, Sidan
2015-01-01
Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limits the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is temporally constant, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain. PMID:26402681
Input/output system for multiprocessors
Bernick, D.L.; Chan, K.K.; Chan, W.M.; Dan, Y.F.; Hoang, D.M.; Hussain, Z.; Iswandhi, G.I.; Korpi, J.E.; Sanner, M.W.; Zwangerman, J.A.
1989-04-11
A device controller is described, comprising: a first port-input/output controller coupled to a first input/output channel bus; and a second port-input/output controller coupled to a second input/output channel bus; each of the first and second port-input/output controllers having: a first ownership latch means for granting shared ownership of the device controller to a first host processor to provide a first data path on a first I/O channel through the first port I/O controller between the first host processor and any peripheral, and at least a second ownership latch means operative independently of the first ownership latch means for granting shared ownership of the device controller to a second host processor independently of the first port input/output controller to provide a second data path on a second I/O channel through the second port I/O controller between the second host processor and any peripheral devices coupled to the device controller.
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Kelkar, S. S.; Lee, F. C.
1983-01-01
A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.
Significance of Input Correlations in Striatal Function
Yim, Man Yi; Aertsen, Ad; Kumar, Arvind
2011-01-01
The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia. PMID:22125480
Influential input classification in probabilistic multimedia models
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.; Geng, Shu
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
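The influence-screening idea can be sketched as a Monte Carlo ranking: sample all inputs from their distributions, run the model, and rank inputs by the absolute correlation between each input and the output. This is a simplified stand-in for the authors' sensitivity/uncertainty method; the toy model and distributions below are assumptions:

```python
import random
from statistics import fmean

def pearson(xs, ys):
    """Sample Pearson correlation, written out to avoid version dependences."""
    mx, my = fmean(xs), fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def rank_influence(model, dists, n=4000, seed=0):
    """Rank inputs by |correlation| between each sampled input and the model output."""
    rng = random.Random(seed)
    samples = [{k: d(rng) for k, d in dists.items()} for _ in range(n)]
    outputs = [model(s) for s in samples]
    score = {k: abs(pearson([s[k] for s in samples], outputs)) for k in dists}
    return sorted(score, key=score.get, reverse=True)

# Toy model: output dominated by "a", barely affected by "c".
model = lambda v: 10 * v["a"] + v["b"] + 0.01 * v["c"]
dists = {k: (lambda rng: rng.gauss(0.0, 1.0)) for k in "abc"}
ranking = rank_influence(model, dists)
```

Inputs at the bottom of such a ranking are candidates for being fixed at point values, so that distribution-building effort goes only to the small influential set, as the abstract argues.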
A highly accurate interatomic potential for argon
NASA Astrophysics Data System (ADS)
Aziz, Ronald A.
1993-09-01
A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.
Accurate analysis of EBSD data for phase identification
NASA Astrophysics Data System (ADS)
Palizdar, Y.; Cochrane, R. C.; Brydson, R.; Leary, R.; Scott, A. J.
2010-07-01
This paper aims to investigate the reliability of software default settings in the analysis of EBSD results. To study the effect of software settings on the EBSD results, the presence of different phases in high Al steel has been investigated by EBSD. The results show the importance of appropriate automated analysis parameters for valid and reliable phase discrimination. Specifically, the importance of the minimum number of indexed bands and the maximum solution error have been investigated with values of 7-9 and 1.0-1.5° respectively, found to be needed for accurate analysis.
Zhang, Hao; Xiong, Jun; Luo, Jie; Qu, Anlian
2009-01-30
Accurate Cm measurements rely on accurate determination of specific parameters of a patch-clamp amplifier (PCA). Hardware-related parameters, such as the resistance Rf and the stray capacitance Cf of the feedback resistor, the input capacitance Ci, the injection capacitance Cj, and the extra capacitances introduced by the BNC connector, are of significance for obtaining absolute estimates of cell parameters. In the present paper, a frequency-domain method, or the f-method for simplicity, is put forward to experimentally determine the actual values of basic circuit elements for our self-developed PCA. The f-method makes use of sine waves and amplitude/phase measurements instead of square-wave responses to determine the above parameters of a PCA, and thereby calibrates the PCA for capacitance measurements. Experimental results prove that the f-method is excellent in determining hardware-related parameters, with 3-5% error in the impedance of the "10 MΩ setting", and about 2% error in the impedance of the "model cell" of the model circuit for our PCA. The f-method enables us not only to picture components of fast capacitances, but also to guarantee complete fast capacitance compensation; it may be applicable to other PCAs. PMID:18789969
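One way to picture the f-method is that each sine stimulus plus amplitude/phase measurement yields the complex impedance of the feedback network, Zf = Rf/(1 + jωRfCf), and the admittance then separates Rf and Cf linearly. A sketch under that assumption follows (the component values are illustrative, not the paper's):

```python
import math

def feedback_impedance(rf, cf, f):
    """Impedance of Rf in parallel with stray Cf at frequency f [Hz]."""
    w = 2 * math.pi * f
    return rf / (1 + 1j * w * rf * cf)

def estimate_rf_cf(z1, f1, z2, f2):
    """Recover Rf and Cf from complex impedances at two frequencies:
    1/Z = 1/Rf + j*w*Cf, so Re(1/Z) gives 1/Rf and Im(1/Z)/w gives Cf."""
    y1, y2 = 1 / z1, 1 / z2
    rf = 2 / (y1.real + y2.real)
    cf = 0.5 * (y1.imag / (2 * math.pi * f1) + y2.imag / (2 * math.pi * f2))
    return rf, cf

# Illustrative values: 500 Mohm feedback resistor, 0.1 pF stray capacitance.
RF, CF = 500e6, 0.1e-12
z1 = feedback_impedance(RF, CF, 1e3)
z2 = feedback_impedance(RF, CF, 1e4)
est_rf, est_cf = estimate_rf_cf(z1, 1e3, z2, 1e4)
```

Averaging over the two frequencies is the crudest possible estimator; the point is only that sine-wave amplitude/phase data determine Rf and Cf directly, without square-wave transient analysis.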
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight components, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in an overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
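The contrast between the NASA-STD-7009 minimum-score rule and a sensitivity-aware alternative can be sketched as follows. The weighting scheme here is a hypothetical illustration of the general proposal, not the presentation's actual formula:

```python
def input_pedigree_score(scores):
    """NASA-STD-7009 style: the whole input set is scored at the quality of its worst member."""
    return min(scores.values())

def sensitivity_weighted_score(scores, sensitivities):
    """Hypothetical alternative: weight each input's quality score by the normalized
    magnitude of the output's sensitivity to that input."""
    total = sum(abs(s) for s in sensitivities.values())
    return sum(scores[k] * abs(sensitivities[k]) / total for k in scores)

# A high-quality, high-sensitivity input paired with a poor but nearly irrelevant one.
scores = {"geometry": 4, "material": 1}
sens = {"geometry": 10.0, "material": 0.1}
conservative = input_pedigree_score(scores)
weighted = sensitivity_weighted_score(scores, sens)
```

When the poor-pedigree input barely affects the output, the minimum rule reports the worst case while the weighted score stays near the quality of the input that actually matters, which is the tension the presentation describes.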
Winant, Celeste D; Aparici, Carina Mari; Zelnik, Yuval R; Reutter, Bryan W; Sitek, Arkadiusz; Bacharach, Stephen L; Gullberg, Grant T
2012-01-01
Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum
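The one-compartment perfusion model used above relates the tissue time-activity curve to the arterial input function by C_T(t) = K1 ∫ C_a(τ) e^(-k2(t-τ)) dτ. A minimal sketch of estimating the wash-in rate K1 by discrete convolution and grid search follows; the synthetic curves and grids are illustrative (the authors fit spline-estimated TACs, not toy data):

```python
import math

def tissue_curve(ca, dt, k1, k2):
    """One-compartment model: C_T(t) = K1 * integral of Ca(tau)*exp(-k2*(t-tau)) dtau,
    evaluated by a recursive discrete convolution (rectangle rule)."""
    ct, acc = [], 0.0
    decay = math.exp(-k2 * dt)
    for c in ca:
        acc = acc * decay + c * dt
        ct.append(k1 * acc)
    return ct

def fit_k1(ca, ct, dt, k1_grid, k2_grid):
    """Crude least-squares grid search for (K1, k2); a dependency-free stand-in
    for a proper nonlinear fit."""
    return min(((k1, k2) for k1 in k1_grid for k2 in k2_grid),
               key=lambda p: sum((m - y) ** 2
                                 for m, y in zip(tissue_curve(ca, dt, *p), ct)))

# Synthetic arterial input (gamma-variate-like bolus) and a noiseless tissue curve.
dt = 0.1
ca = [i * dt * math.exp(-i * dt) for i in range(200)]
ct = tissue_curve(ca, dt, 0.8, 0.3)
best = fit_k1(ca, ct, dt, [0.4, 0.6, 0.8, 1.0], [0.1, 0.2, 0.3, 0.4])
```

The accuracy question studied in the abstract is precisely how errors in the measured input curve C_a, caused by slow camera rotation, propagate into the fitted K1.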
A new synthesis for terrestrial nitrogen inputs
NASA Astrophysics Data System (ADS)
Houlton, B. Z.; Morford, S. L.
2015-04-01
Nitrogen (N) inputs sustain many different aspects of local soil processes, their services, and their interactions with the broader Earth system. We present a new synthesis for terrestrial N inputs that explicitly considers both rock and atmospheric sources of N. We review evidence for state-factor regulation over biological fixation, deposition, and rock-weathering inputs from local to global scales and in transient vs. steady-state landscapes. Our investigation highlights strong organism and topographic (relief) controls over all three N input pathways, with the anthropogenic factor clearly important in rising N deposition rates. In addition, the climate, parent material, and time factors are shown to influence patterns of fixation and rock-weathering inputs of N in diverse soil systems. Data reanalysis suggests that weathering of N-rich parent material could resolve several known cases of "missing N inputs" in ecosystems, and demonstrates how the inclusion of rock N sources into modern concepts can lead to a richer understanding of spatial and temporal patterns of ecosystem N availability. For example, explicit consideration of rock N inputs into classic pedogenic models (e.g., the Walker and Syers model) yields a fundamentally different expectation from the standard case: weathering of N-rich parent material could enhance N availability and facilitate terrestrial succession in developmentally young sites even in the absence of N-fixing organisms. We conclude that a state-factor framework for N complements our growing understanding of multiple-source controls on phosphorus and cation availability in Earth's soil, but with significant exceptions given the lack of an N fixation analogue in all other biogeochemical cycles. Rather, non-symmetrical feedbacks among input pathways in which high N inputs via deposition or rock-weathering sources have the potential to reduce biological fixation rates mark N as fundamentally different from other nutrients. The new synthesis
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated tungsten is pointed accurately and quickly using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.
2014-12-01
Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain from first observations to potential end users and decision makers is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d) --- the distribution of source parameters m given observations d --- which can be evaluated quickly for new data. Owing to the flexibility of the pattern
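As a toy analogue of the pattern-recognition step, the sketch below weights entries of a pre-computed (parameter, synthetic waveform) database by their misfit to an observation to approximate p(m|d). The forward model, parameter grid and kernel width are invented for illustration; the actual method uses a Green's-function database and a supervised learning algorithm.

```python
import math

def forward(m, times):
    """Toy forward model standing in for the Green's-function synthetics:
    a damped sinusoid whose amplitude scales with the source parameter m."""
    return [m * math.exp(-0.5 * t) * math.sin(2 * math.pi * t) for t in times]

times = [0.1 * i for i in range(50)]
# pre-computed training database: (parameter, synthetic waveform) pairs
database = [(0.5 + 0.1 * k, forward(0.5 + 0.1 * k, times)) for k in range(30)]

def posterior_mean(d_obs, sigma=0.5):
    """Kernel-weighted estimate of E[m | d]: each database entry gets a
    Gaussian weight in its waveform misfit, a crude stand-in for p(m|d)."""
    num = den = 0.0
    for m, d in database:
        misfit = sum((a - b) ** 2 for a, b in zip(d_obs, d))
        w = math.exp(-misfit / (2.0 * sigma ** 2))
        num += w * m
        den += w
    return num / den

observed = forward(1.7, times)   # pretend observation with known answer
```

Because the database spans the observation symmetrically, the weighted mean lands on the generating parameter; a real implementation would return the full density, not just its mean.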
RECURSIVE PARAMETER ESTIMATION OF HYDROLOGIC MODELS
Proposed is a nonlinear filtering approach to recursive parameter estimation of conceptual watershed response models in state-space form. The conceptual model state is augmented by the vector of free parameters which are to be estimated from input-output data, and the extended Kal...
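A minimal sketch of the state-augmentation idea (not the filter of the abstract itself): a scalar linear store x[t+1] = a*x[t] + u[t] whose unknown parameter a is appended to the state and estimated by an extended Kalman filter. Model, noise levels and dimensions are invented for illustration.

```python
def mat2mul(A, B):
    """2x2 matrix product."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def ekf_step(state, P, u, y, q=1e-6, r=0.01):
    """One predict/update step of an EKF on the augmented state [x, a]
    for x[t+1] = a*x[t] + u[t], observing y = x. The parameter a rides
    along as an extra state with a small random-walk variance q."""
    x, a = state
    F = [[a, x], [0.0, 1.0]]                  # Jacobian of augmented dynamics
    xp, ap = a * x + u, a                     # predicted state
    Ft = [[F[0][0], F[1][0]], [F[0][1], F[1][1]]]
    Pp = mat2mul(mat2mul(F, P), Ft)
    Pp[0][0] += q
    Pp[1][1] += q
    S = Pp[0][0] + r                          # innovation variance, H = [1, 0]
    K = [Pp[0][0] / S, Pp[1][0] / S]          # Kalman gain
    innov = y - xp
    state_new = (xp + K[0] * innov, ap + K[1] * innov)
    P_new = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
             [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return state_new, P_new

# demo: recover a = 0.8 from noise-free observations of a driven store
true_a, x_true = 0.8, 0.0
state, P = (0.0, 0.5), [[1.0, 0.0], [0.0, 1.0]]
for _ in range(50):
    x_true = true_a * x_true + 1.0            # truth advances with unit input
    state, P = ekf_step(state, P, 1.0, x_true)
a_est = state[1]
```

The innovation (y - xp) is proportional to the parameter error times the state, so each step pulls the estimate of a toward its true value.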
Optimizing Input/Output Using Adaptive File System Policies
NASA Technical Reports Server (NTRS)
Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.
1996-01-01
Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
Transient analysis of intercalation electrodes for parameter estimation
NASA Astrophysics Data System (ADS)
Devan, Sheba
An essential part of integrating batteries as power sources in any application, be it a large-scale automotive application or a small-scale portable application, is an efficient Battery Management System (BMS). The combination of a battery with the microprocessor-based BMS (called a "smart battery") helps prolong the life of the battery by operating in the optimal regime and provides accurate information regarding the battery to the end user. The main purposes of a BMS are cell protection, monitoring and control, and communication between different components. These purposes are fulfilled by tracking the change in the parameters of the intercalation electrodes in the batteries. Consequently, the functions of the BMS must be prompt, which requires a time-efficient method for extracting the parameters. The traditional transient techniques applied so far may not be suitable, for reasons such as the inability to apply them while the battery is under operation and long experimental times. The primary aim of this research work is to design a fast, accurate and reliable technique that can be used to extract parameter values of the intercalation electrodes. A methodology based on analysis of the short-time response to a sinusoidal input perturbation in the time domain is demonstrated using a porous electrode model for an intercalation electrode. It is shown that the parameters associated with the interfacial processes occurring in the electrode can be determined rapidly, within a few milliseconds, by measuring the response in the transient region. The short-time analysis in the time domain is then extended to a single particle model that involves bulk diffusion in the solid phase in addition to interfacial processes. A systematic procedure for sequential parameter estimation using sensitivity analysis is described. Further, the short-time response and the input perturbation are transformed into the frequency domain using Fast Fourier Transform
On Markov parameters in system identification
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A detailed discussion of Markov parameters in system identification is given. Different forms of input-output representation of linear discrete-time systems are reviewed and discussed. Interpretation of sampled response data as Markov parameters is presented. Relations between the state-space model and particular linear difference models via the Markov parameters are formulated. A generalization of Markov parameters to observer and Kalman filter Markov parameters for system identification is explained. These extended Markov parameters play an important role in providing not only a state-space realization, but also an observer/Kalman filter for the system of interest.
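The relations h0 = D and hk = C A^(k-1) B, and their interpretation as sampled impulse-response data, can be checked directly. A minimal sketch with an invented 2-state system:

```python
def markov_parameters(A, B, C, D, n):
    """First n Markov parameters of (A, B, C, D): h0 = D, hk = C A^(k-1) B."""
    params = [D]
    Ak_B = B[:]                              # holds A^(k-1) B as a vector
    for _ in range(n - 1):
        params.append(sum(c * x for c, x in zip(C, Ak_B)))
        Ak_B = [sum(A[i][j] * Ak_B[j] for j in range(len(Ak_B)))
                for i in range(len(A))]
    return params

def impulse_response(A, B, C, D, n):
    """Simulate x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k]
    with a unit impulse input; the samples are the Markov parameters."""
    x = [0.0] * len(B)
    ys = []
    for k in range(n):
        u = 1.0 if k == 0 else 0.0
        ys.append(sum(c * xi for c, xi in zip(C, x)) + D * u)
        x = [sum(A[i][j] * x[j] for j in range(len(x))) + B[i] * u
             for i in range(len(x))]
    return ys

# invented example system
A = [[0.5, 0.1], [0.0, 0.3]]
B = [1.0, 1.0]
C = [1.0, 2.0]
D = 0.5
mp = markov_parameters(A, B, C, D, 6)
ir = impulse_response(A, B, C, D, 6)
```

The two sequences agree term by term, which is exactly the "sampled response data as Markov parameters" interpretation discussed in the abstract.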
DREAM-3D and the importance of model inputs and boundary conditions
NASA Astrophysics Data System (ADS)
Friedel, Reiner; Tu, Weichao; Cunningham, Gregory; Jorgensen, Anders; Chen, Yue
2015-04-01
Recent work on radiation belt 3D diffusion codes, such as the Los Alamos "DREAM-3D" code, has demonstrated the ability of such codes to reproduce the relativistic electron dynamics of realistic magnetospheric storm events, as long as sufficient "event-oriented" boundary conditions and code inputs such as wave powers, low-energy boundary conditions, background plasma densities, and the last closed drift shell (outer boundary) are available. In this talk we will argue that the main limiting factor in our modeling ability is no longer our inability to represent the key physical processes that govern the dynamics of the radiation belts (radial, pitch angle and energy diffusion) but rather our limitations in specifying accurate boundary conditions and code inputs. We use DREAM-3D runs to show the sensitivity of the modeled outcomes to these boundary conditions and inputs, and also discuss alternate "proxy" approaches to obtain the required inputs from other (ground-based) sources.
On-line breakage monitoring of small drills with input impedance of driving motor
NASA Astrophysics Data System (ADS)
Fu, Lianyu; Ling, Shih-Fu; Tseng, Ching-Huan
2007-01-01
The input current of the driving motor has been employed with success as a monitoring signature for tool breakage and wear detection in manufacturing processes for more than a decade. In micro-drilling, however, the accuracy of the current signature degrades significantly owing to the disturbances often seen in the electrical power supply. This paper reports investigation results on the effectiveness of using the input impedance of the spindle motor as a monitoring signature for detecting drill breakage in micro-drilling. Because input impedance is an inherent property of a dynamic system, independent of system inputs such as voltage or current fluctuations, it avoids the difficulties faced by methods using the current signature. Experimental results show that the impedance signature reflects torque variations well and properly indicates the health condition of drills during micro-drilling processes. When associated with an artificial neural network to recognise its waveform, the impedance signature is capable of identifying drill breakages promptly and accurately.
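The robustness argument can be illustrated numerically: estimating Z = V(ω)/I(ω) from sampled voltage and current cancels any common supply fluctuation, unlike the raw current signature. The signals and numbers below are synthetic.

```python
import cmath, math

def single_bin_dft(signal, k):
    """Complex amplitude of the k-th DFT bin (k cycles per record)."""
    n = len(signal)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(signal)) / n

def input_impedance(voltage, current, k):
    """Z(w) = V(w)/I(w) at the drive frequency. Any common scaling of the
    supply (voltage and current fluctuating together) cancels in the
    ratio, which is why impedance beats the raw current signature."""
    return single_bin_dft(voltage, k) / single_bin_dft(current, k)

# synthetic record: 5 cycles over 200 samples, current lagging by 45 deg
n, k = 200, 5
phase = [2 * math.pi * k * i / n for i in range(n)]
v = [3.0 * math.cos(x) for x in phase]                    # supply voltage
i_sig = [1.5 * math.cos(x - math.pi / 4) for x in phase]  # motor current
Z = input_impedance(v, i_sig, k)
```

Here |Z| = 2 and arg(Z) = 45 degrees, and scaling both records by the same supply fluctuation leaves Z unchanged.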
Scallops skeletons as tools for accurate proxy calibration
NASA Astrophysics Data System (ADS)
Lorrain, A.; Paulet, Y.-M.; Chauvaud, L.; Dunbar, R.; Mucciarone, D.; Pécheyran, C.; Amouroux, D.; Fontugne, M.
2003-04-01
Bivalve skeletons can yield excellent geochemical proxies, but general calibration of these proxies rests on an approximate time basis because growth rhythms are poorly understood. In this context, the Great scallop, Pecten maximus, appears to be a powerful tool, as a daily growth deposit has been clearly identified for this species (Chauvaud et al., 1998; Lorrain et al., 2000), allowing accurate environmental calibration. Indeed, using this species, a date can be assigned to each growth increment, and as a consequence environmental parameters can be closely compared (at a daily scale) to observed chemical and structural shell variations. This daily record provides an unequivocal basis for calibrating proxies. Isotopic (δ13C and δ15N) and trace element analyses (LA-ICP-MS) were performed on several individuals and in different years, depending on the analysed parameter. Seawater parameters measured one meter above the sea bottom were compared to chemical variations in the calcitic shell. The comparison showed that even with a daily basis for data interpretation, calibration remains a challenge. Inter-individual variations are found, and correlations are not always reproducible from one year to another. The first explanation could be an inaccurate appreciation of the proximate environment of the animal; notably, the water-sediment interface could best represent the Pecten maximus environment. Secondly, physiological parameters could account for these discrepancies. In particular, calcification takes place in the extrapallial fluid, whose composition may be very different from the external environment. Accurate calibration of chemical proxies should consider biological aspects to gain better insight into the processes controlling the incorporation of these chemical elements. Characterisation of the isotopic and trace element composition of the extrapallial fluid and hemolymph could greatly help our understanding of chemical shell variations.
Accurate and occlusion-robust multi-view stereo
NASA Astrophysics Data System (ADS)
Zhu, Zhaokun; Stamatopoulos, Christos; Fraser, Clive S.
2015-11-01
This paper proposes an accurate multi-view stereo method for image-based 3D reconstruction that features robustness in the presence of occlusions. The new method offers improvements in dealing with two fundamental image matching problems. The first concerns the selection of the support window model, while the second centers upon accurate visibility estimation for each pixel. The support window model is based on an approximate 3D support plane described by a depth and two per-pixel depth offsets. For the visibility estimation, the multi-view constraint is initially relaxed by generating separate support plane maps for each support image using a modified PatchMatch algorithm. Then the most likely visible support image, which represents the minimum visibility of each pixel, is extracted via a discrete Markov Random Field model and it is further augmented by parameter clustering. Once the visibility is estimated, multi-view optimization taking into account all redundant observations is conducted to achieve optimal accuracy in the 3D surface generation for both depth and surface normal estimates. Finally, multi-view consistency is utilized to eliminate any remaining observational outliers. The proposed method is experimentally evaluated using well-known Middlebury datasets, and results obtained demonstrate that it is amongst the most accurate of the methods thus far reported via the Middlebury MVS website. Moreover, the new method exhibits a high completeness rate.
Method and apparatus for accurately manipulating an object during microelectrophoresis
Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.
1997-01-01
An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.
NASA Astrophysics Data System (ADS)
Prokop, Alexander; Schön, Peter; Singer, Florian; Pulfer, Gaëtan; Naaim, Mohamed; Thibert, Emmanuel
2015-04-01
Dynamic avalanche modeling requires as input the volumes and areas of the snow released, entrained and deposited, as well as the fracture heights. Determining these parameters requires high-resolution spatial snow surface data from before and after the avalanche. In snow and avalanche research, terrestrial laser scanners are used increasingly to efficiently and accurately map snow surfaces and depths over an area of several km². In practice, however, several problems may occur, which must be recognized and accounted for during post-processing and interpretation, especially under the circumstances of surveying an artificially triggered avalanche at a test site, where time pressure due to operational time constraints may also cause less than ideal circumstances and surveying setups. Thus, we combine terrestrial laser scanning (TLS) with photogrammetry, total station measurements and field snow observations to document and accurately survey an artificially triggered avalanche at the Col du Lautaret test site (2058 m) in the French Alps. The ability of TLS to determine avalanche modeling input parameters efficiently and accurately is shown, and we demonstrate how merging TLS with the other methods facilitates and improves data post-processing and interpretation. Finally, we present for this avalanche the data required for the parameterization and validation of dynamic avalanche models and discuss, using the newest data, how the new generation of laser scanning devices (e.g. Riegl VZ6000) further improves such surveying campaigns.
Noise facilitates transcriptional control under dynamic inputs.
Kellogg, Ryan A; Tay, Savaş
2015-01-29
Cells must respond sensitively to time-varying inputs in complex signaling environments. To understand how signaling networks process dynamic inputs into gene expression outputs and the role of noise in cellular information processing, we studied the immune pathway NF-κB under periodic cytokine inputs using microfluidic single-cell measurements and stochastic modeling. We find that NF-κB dynamics in fibroblasts synchronize with oscillating TNF signal and become entrained, leading to significantly increased NF-κB oscillation amplitude and mRNA output compared to non-entrained response. Simulations show that intrinsic biochemical noise in individual cells improves NF-κB oscillation and entrainment, whereas cell-to-cell variability in NF-κB natural frequency creates population robustness, together enabling entrainment over a wider range of dynamic inputs. This wide range is confirmed by experiments where entrained cells were measured under all input periods. These results indicate that synergy between oscillation and noise allows cells to achieve efficient gene expression in dynamically changing signaling environments. PMID:25635454
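A minimal phase-oscillator analogue of the entrainment result (not the NF-κB model itself; all parameters invented): the oscillator locks to the drive frequency when the forcing strength exceeds the frequency detuning, and drifts near its natural frequency otherwise.

```python
import math

def mean_frequency(omega0, Omega, K, dt=0.01, steps=20000):
    """Forced phase oscillator dphi/dt = omega0 + K*sin(Omega*t - phi),
    integrated with forward Euler; returns the oscillator's average
    frequency over the second half of the run (transients discarded)."""
    phi, t, phi_mid = 0.0, 0.0, 0.0
    for step in range(steps):
        phi += dt * (omega0 + K * math.sin(Omega * t - phi))
        t += dt
        if step == steps // 2 - 1:
            phi_mid = phi
    return (phi - phi_mid) / ((steps - steps // 2) * dt)

# natural frequency 1.0, drive 1.2: strong forcing entrains, weak does not
locked = mean_frequency(1.0, 1.2, K=0.5)
drifting = mean_frequency(1.0, 1.2, K=0.05)
```

With K = 0.5 the detuning of 0.2 is inside the locking range and the output runs at exactly the drive frequency; with K = 0.05 it stays near the natural frequency, the analogue of a non-entrained cell.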
Six axis force feedback input device
NASA Technical Reports Server (NTRS)
Ohm, Timothy (Inventor)
1998-01-01
The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.
NASA Astrophysics Data System (ADS)
Katahira, Kentaro; Kawamura, Masaki; Okanoya, Kazuo; Okada, Masato
2007-04-01
We investigate a recurrent neural network model with common external and bias inputs that can retrieve branching sequences. Retrieval of memory sequences is one of the most important functions of the brain. A lot of research has been done on neural networks that process memory sequences. Most of it has focused on fixed memory sequences. However, many animals can remember and recall branching sequences. Therefore, we propose an associative memory model that can retrieve branching sequences. Our model has bias input and common external input. Kawamura and Okada reported that common external input enables sequential memory retrieval in an associative memory model with auto- and weak cross-correlation connections. We show that retrieval processes along branching sequences are controllable with both the bias input and the common external input. To analyze the behaviors of our model, we derived the macroscopic dynamical description as a probability density function. The results obtained by our theory agree with those obtained by computer simulations.
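A toy version of the branching-sequence mechanism (pattern sizes and bias strength invented; the actual model also uses auto-correlation connections and a common external input): asymmetric Hebbian weights store A→B plus the two branches B→C1 and B→C2, and a bias input selects which branch is retrieved.

```python
import random

random.seed(0)
N = 300
# random +-1 patterns: a sequence A -> B that branches to C1 or C2
pats = {name: [random.choice((-1, 1)) for _ in range(N)]
        for name in ("A", "B", "C1", "C2")}

# asymmetric (cross-correlation) Hebbian weights storing the transitions
pairs = [("A", "B"), ("B", "C1"), ("B", "C2")]
W = [[sum(pats[post][i] * pats[pre][j] for pre, post in pairs) / N
      for j in range(N)] for i in range(N)]

def step(state, bias_pat=None, b=0.0):
    """One parallel update; the bias input b*bias_pat selects the branch."""
    new = []
    for i in range(N):
        h = sum(W[i][j] * state[j] for j in range(N))
        if bias_pat is not None:
            h += b * bias_pat[i]
        new.append(1 if h >= 0 else -1)
    return new

def overlap(s, p):
    return sum(a * b for a, b in zip(s, p)) / N

# from state B, the bias input disambiguates the two stored successors
branch1 = step(pats["B"], pats["C1"], b=0.5)
branch2 = step(pats["B"], pats["C2"], b=0.5)
```

Without the bias, the local field from B is an even mixture of C1 and C2; a modest bias toward one successor tips every ambiguous unit, so the network lands cleanly on the chosen branch.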
NASA Astrophysics Data System (ADS)
Udayashankar, Paniveni
2016-07-01
I study the complexity of supergranular cells using intensity patterns from the Kodaikanal solar observatory. The chaotic and turbulent aspect of the solar supergranulation can be studied by examining the interrelationships amongst the parameters characterizing supergranular cells, namely size, horizontal flow field, lifetime and physical dimensions of the cells, and the fractal dimension deduced from the size data. The data consist of visually identified supergranular cells, from which a fractal dimension 'D' for supergranulation is obtained according to the relation P ∝ A^(D/2), where 'A' is the area and 'P' is the perimeter of the supergranular cells. I find a fractal dimension close to about 1.3, which is consistent with that for isobars and suggests a possible turbulent origin. The cell circularity shows a dependence on the perimeter, with a peak around (1.1-1.2) × 10^5 m. The findings are supportive of Kolmogorov's theory of turbulence.
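The relation P ∝ A^(D/2) becomes a straight line in log-log space, so D can be recovered as twice the slope of log P on log A. A sketch with synthetic cells generated from a known D = 1.3 (the areas and scatter are invented, not Kodaikanal data):

```python
import math, random

def fractal_dimension(areas, perimeters):
    """Least-squares slope of log P vs log A; P ~ A^(D/2) gives D = 2*slope."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2.0 * slope

# synthetic "cells" spanning three decades in area, 10% perimeter scatter
random.seed(1)
areas = [10 ** random.uniform(2, 5) for _ in range(200)]
perims = [a ** (1.3 / 2) * random.uniform(0.9, 1.1) for a in areas]
D_est = fractal_dimension(areas, perims)
```

With 200 cells over three decades, the slope estimate pins D to within a few hundredths despite the scatter.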
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Bayesian analysis of input uncertainty in hydrological modeling: 2. Application
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.
2006-03-01
The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) French Broad River and (2) Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.
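One ingredient of the input-error treatment can be sketched by forward-propagating multiplicative rainfall ("storm multiplier") errors through a toy runoff model to obtain prediction limits. This is only the propagation step under an assumed error model, not the full BATEA calibration; the model and all numbers are invented.

```python
import random

random.seed(42)

def runoff(rain, c=0.6, k=0.3):
    """Toy linear-store model: storage s gains c*rain and discharges k*s."""
    s, qs = 0.0, []
    for p in rain:
        s += c * p
        q = k * s
        s -= q
        qs.append(q)
    return qs

rain_obs = [random.uniform(0, 10) for _ in range(50)]

def prediction_limits(rain, n_samples=500, sigma_in=0.3):
    """Monte-Carlo 90% limits under multiplicative rainfall multipliers,
    a latent-variable view of input error in the spirit of BATEA."""
    sims = []
    for _ in range(n_samples):
        noisy = [p * random.gauss(1.0, sigma_in) for p in rain]
        sims.append(runoff(noisy))
    lo, hi = [], []
    for t in range(len(rain)):
        col = sorted(s[t] for s in sims)
        lo.append(col[int(0.05 * n_samples)])
        hi.append(col[int(0.95 * n_samples)])
    return lo, hi

lo, hi = prediction_limits(rain_obs)
q_model = runoff(rain_obs)
```

Because the model is linear in rainfall and the multipliers have mean one, the nominal hydrograph sits inside the widened limits; this is the widening of prediction limits the abstract attributes to precipitation error.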
Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.
Puzzarini, Cristina
2015-11-25
The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434
The input optics of Advanced LIGO
NASA Astrophysics Data System (ADS)
Tanner, D. B.; Arain, M. A.; Ciani, G.; Feldbaum, D.; Fulda, P.; Gleason, J.; Goetz, R.; Heintze, M.; Martin, R. M.; Mueller, C. L.; Williams, L. F.; Mueller, G.; Quetschke, V.; Korth, W. Z.; Reitze, D. H.; Derosa, R. T.; Effler, A.; Kokeyama, K.; Frolov, V. V.; Mullavey, A.; Poeld, J.
2016-03-01
The Input Optics (IO) of advanced LIGO will be described. The IO consists of all the optics between the laser and the power recycling mirror. The scope of the IO includes the following hardware: phase modulators, power control, input mode cleaner, an in-vacuum Faraday isolator, and mode matching telescopes. The IO group has developed and characterized RTP-based phase modulators capable of operation at 180 W cw input power. In addition, the Faraday isolator is compensated for depolarization and thermal lensing effects up to the same power and is capable of achieving greater than 40 dB isolation. This research has been supported by the NSF through Grants PHY-1205512 and PHY-1505598. LIGO-G1600067.
Computer Generated Inputs for NMIS Processor Verification
J. A. Mullens; J. E. Breeding; J. A. McEvers; R. W. Wysor; L. G. Chiang; J. R. Lenarduzzi; J. T. Mihalczo; J. K. Mattingly
2001-06-29
Proper operation of the Nuclear Materials Identification System (NMIS) processor can be verified using computer-generated inputs [BIST (Built-In Self-Test)] at the digital inputs. Preselected sequences of input pulses to all channels with known correlation functions are compared to the output of the processor. These types of verifications have been utilized in NMIS-type correlation processors at the Oak Ridge National Laboratory since 1984. The use of this test confirmed a malfunction in a NMIS processor at the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) in 1998. The NMIS processor boards were returned to the U.S. for repair and subsequently used in NMIS passive and active measurements with Pu at VNIIEF in 1999.
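The verification idea, feeding a preselected pulse sequence with a known correlation function and comparing against the processor output, can be sketched as follows (pulse train, delay and record length invented):

```python
import random

def cross_correlation(x, y, max_lag):
    """Circular cross-correlation r[lag] = sum_i x[i] * y[(i + lag) % n]."""
    n = len(x)
    return [sum(x[i] * y[(i + lag) % n] for i in range(n))
            for lag in range(max_lag + 1)]

# known input: a pseudo-random 0/1 pulse train; channel 2 is channel 1
# delayed by 3 samples, so the correlation function is known in advance
random.seed(7)
ch1 = [random.choice((0, 1)) for _ in range(256)]
delay = 3
ch2 = [ch1[(i - delay) % 256] for i in range(256)]

expected_peak = sum(v * v for v in ch1)   # analytic value at the true delay
r = cross_correlation(ch1, ch2, 10)
```

A correlation processor passes the self-test if its measured r matches this precomputed function: the peak must sit at the known delay with the known height, and any deviation flags a malfunction.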
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Accurate colorimetric feedback for RGB LED clusters
NASA Astrophysics Data System (ADS)
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
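The claim that spectral shift with temperature is captured by first- or second-order equations amounts to a small polynomial fit. Below, a quadratic is fitted via hand-rolled normal equations to synthetic peak-shift data; the coefficients are invented illustrations, not measured LED values.

```python
def fit_quadratic(ts, ys):
    """Least-squares fit y ~ a0 + a1*t + a2*t^2 via the 3x3 normal
    equations, solved by Gaussian elimination with partial pivoting."""
    S = [sum(t ** k for t in ts) for k in range(5)]     # moment sums
    M = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    v = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    for col in range(3):                                # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for cc in range(col, 3):
                M[r][cc] -= f * M[col][cc]
            v[r] -= f * v[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                 # back substitution
        coef[r] = (v[r] - sum(M[r][cc] * coef[cc]
                              for cc in range(r + 1, 3))) / M[r][r]
    return coef

# hypothetical peak-wavelength shift (nm) vs temperature rise (deg C)
ts = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0]
ys = [0.01 + 0.05 * t + 0.0002 * t * t for t in ts]
a0, a1, a2 = fit_quadratic(ts, ys)
```

A feedback controller would evaluate such a fitted polynomial at the measured junction temperature to correct each channel's drive level.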
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model's imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
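The tuning cue described above is the beat: two nearly equal tones played together produce an amplitude envelope at the difference frequency, which a listener can count even when pitch discrimination is poor. A minimal sketch of that relation (tone frequencies are illustrative):

```python
# Minimal illustration of the beat cue: two nearly equal pure tones mixed
# together beat at |f1 - f2| Hz. Counting beats turns the spectral task of
# pitch matching into a temporal one, as the abstract concludes.
import math

def beat_frequency(f1_hz, f2_hz):
    """Beat rate heard when two pure tones at f1 and f2 sound together."""
    return abs(f1_hz - f2_hz)

def mixed_amplitude(f1_hz, f2_hz, t_s):
    """Instantaneous sum of two unit-amplitude sinusoids at time t."""
    return math.sin(2 * math.pi * f1_hz * t_s) + math.sin(2 * math.pi * f2_hz * t_s)

# Tuning a string toward a 110 Hz reference: as the mistuning shrinks,
# the beats slow down and vanish when the string is in tune.
```

Matching to within the reported <0.5 Hz then corresponds to slowing the beats to less than one every two seconds, a purely temporal judgment.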
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling
Hamrick, Todd
2011-01-01
Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
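The conventional MSE relation the abstract refers to combines a thrust term and a rotary term. A hedged sketch of that relation (after Teale's widely used form) in consistent SI units; the numerical inputs are illustrative, not the study's data:

```python
# Sketch of the conventional Mechanical Specific Energy relation
# (after Teale) in consistent SI units. All values fed to it here are
# illustrative assumptions, not the study's measurements.
import math

def mse_pa(wob_n, torque_nm, rpm_rev_s, rop_m_s, bit_area_m2):
    """Energy input per unit volume of rock removed (Pa = J/m^3)."""
    thrust_term = wob_n / bit_area_m2
    rotary_term = (2 * math.pi * rpm_rev_s * torque_nm) / (bit_area_m2 * rop_m_s)
    return thrust_term + rotary_term

# The study's contribution is to substitute empirical Torque(WOB) and
# Penetration-per-Revolution(WOB) relationships into this expression, so
# MSE becomes a function of Weight on Bit alone and its minimum can be
# found analytically rather than by field trial and error.
```

Note the rotary term usually dominates: at realistic rates the energy delivered through rotation far exceeds that delivered through thrust.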
Replenishment of magma chambers by light inputs
NASA Astrophysics Data System (ADS)
Huppert, Herbert E.; Sparks, R. Stephen J.; Whitehead, John A.; Hallworth, Mark A.
1986-05-01
Magma chambers, particularly those of basaltic composition, are often replenished by an influx of magma whose density is less than that of the resident magma. This paper describes the fundamental fluid mechanics involved in the replenishment by light inputs. If ρ denotes the uniform density of the resident magma and ρ − Δρ that of the input, the situation is described by the reduced gravity g' = gΔρ/ρ, the volume flux Q, and the viscosities of the resident and input magmas, ν_e and ν_i, respectively. The (nondimensional) Reynolds numbers Re_e = (g'Q^3)^(1/5)/ν_e and Re_i = (g'Q^3)^(1/5)/ν_i and the chamber geometry then completely specify the system. For sufficiently low values of the two Reynolds numbers (each less than approximately 10), the input rises as a laminar conduit. For larger values of the Reynolds numbers, the conduit may break down and exhibit either a varicose or a meander instability and entrain some resident magma. At still larger Reynolds numbers, the flow will become quite unsteady and finally turbulent. The values of the Reynolds numbers at which these transitions occur have been documented by a series of experiments with water, glycerine, and corn syrup. If the input rises as a turbulent plume, significant entrainment of the resident magma can take place. The final spatial distribution of the mixed magma depends on the geometry of the chamber. If the chamber is much wider than it is high, the mixed magma forms a compositionally stratified region between the roof and a sharp front above uncontaminated magma. In the other geometrical extreme, the input magma is mixed with almost all of the resident magma. If the density of the resident magma is already stratified, the input plume may penetrate only part way into the chamber, even though its initial density is less than that of the lowest-density resident magma. The plume will then intrude horizontally and form a hybrid layer at an intermediate depth. This provides a mechanism for preventing even
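The two Reynolds numbers defined in the abstract can be evaluated directly. A small sketch, with illustrative (assumed) density and flux values rather than the paper's experimental ones:

```python
# Sketch of the replenishment Reynolds numbers defined in the abstract:
# Re = (g' Q^3)^(1/5) / nu, with reduced gravity g' = g * d_rho / rho.
# The densities, flux, and viscosity below are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def reduced_gravity(rho_resident, delta_rho):
    """g' = g * d_rho / rho for a light input into denser resident magma."""
    return G * delta_rho / rho_resident

def reynolds(g_prime, q_flux, nu):
    """Re = (g' Q^3)^(1/5) / nu for a buoyant input of volume flux Q."""
    return (g_prime * q_flux**3) ** 0.2 / nu

g_p = reduced_gravity(2700.0, 100.0)   # assumed basaltic-scale densities
re_input = reynolds(g_p, 1.0, 1.0)     # assumed Q = 1 m^3/s, nu = 1 m^2/s

# Per the abstract's classification, values of Re_e and Re_i each below
# about 10 put the input in the laminar-conduit regime.
```

Evaluating both Re_e and Re_i this way places a given replenishment event in the laminar-conduit, unstable-conduit, or turbulent-plume regime.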
An update of input instructions to TEMOD
NASA Technical Reports Server (NTRS)
1973-01-01
The theory and operation of a FORTRAN 4 computer code, designated as TEMOD, used to calculate tubular thermoelectric generator performance is described in WANL-TME-1906. The original version of TEMOD was developed in 1969. A description is given of additions to the mathematical model and an update of the input instructions to the code. Although the basic mathematical model described in WANL-TME-1906 has remained unchanged, a substantial number of input/output options were added to allow completion of module performance parametrics as required in support of the compact thermoelectric converter system technology program.
Input/Output Subroutine Library Program
NASA Technical Reports Server (NTRS)
Collier, James B.
1988-01-01
NAVIO, the Input/Output Subroutine Library, is an efficient, easy-to-use program that moves easily to different computers. Its purpose is to provide an input/output software package for FORTRAN programs that is portable, efficient, and easy to use. It is implemented as a hierarchy of libraries. At the bottom is a very small library, the "I/O Kernel," containing only the non-portable routines. This design makes NAVIO easy to move from one computer to another by simply changing the kernel. NAVIO is appropriate for a software system of almost any size wherein different programs communicate through files.
Effects of input uncertainty on cross-scale crop modeling
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy, as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input
An accurate registration technique for distorted images
NASA Technical Reports Server (NTRS)
Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis
1990-01-01
Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
Accurate maser positions for MALT-45
NASA Astrophysics Data System (ADS)
Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven
2013-10-01
MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. A number of masers were detected in the survey images, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.
Accurate phase-shift velocimetry in rock.
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
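In phase-shift PFG velocimetry, coherent displacement along the gradient direction maps linearly to a measurable phase. A hedged sketch of the standard narrow-pulse relation, φ = γ g δ Δ v (symbol names and the gradient values used below are illustrative):

```python
# Sketch of the standard phase-shift velocimetry relation underlying the
# abstract: for a pulsed-field-gradient pair of amplitude g, duration delta,
# and separation Delta, mean displacement along the gradient produces a
# phase shift phi = gamma * g * delta * Delta * v (narrow-pulse
# approximation), so velocity is read directly off the measured phase.
GAMMA_H = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def velocity_from_phase(phi_rad, g_t_per_m, delta_s, big_delta_s):
    """Mean velocity (m/s) along the gradient from the measured phase shift."""
    return phi_rad / (GAMMA_H * g_t_per_m * delta_s * big_delta_s)
```

The error mechanism the paper identifies enters here: within a voxel the phase reflects the full displacement distribution, so an asymmetric distribution biases the apparent mean phase and hence the inferred velocity.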
Development of force field parameters for molecular simulation of polylactide
McAliley, James H.; Bruce, David A.
2011-01-01
Polylactide is a biodegradable polymer that is widely used for biomedical applications, and it is a replacement for some petroleum based polymers in applications that range from packaging to carpeting. Efforts to characterize and further enhance polylactide based systems using molecular simulations have to this point been hindered by the lack of accurate atomistic models for the polymer. Thus, we present force field parameters specifically suited for molecular modeling of PLA. The model, which we refer to as PLAFF3, is based on a combination of the OPLS and CHARMM force fields, with modifications to bonded and nonbonded parameters. Dihedral angle parameters were adjusted to reproduce DFT data using newly developed CMAP dihedral cross terms, and the model was further adjusted to reproduce experimentally resolved crystal structure conformations, melt density, volume expansivity, and the glass transition temperature of PLA. We recommend the use of PLAFF3 in modeling PLA in its crystalline or amorphous states and have provided the necessary input files required for the publicly available molecular dynamics code GROMACS. PMID:22180734
Inertia Parameter Identification from Base Excitation Test Data
NASA Astrophysics Data System (ADS)
Fuellekrug, U.; Schedlinski, C.
2004-08-01
With the purpose to further investigate and improve a method for the identification of inertia parameters, tests with flexible test structures have been carried out. Reference data for the inertia parameters were obtained from a Finite Element model and from conventional weighing and pendulum measurements. For the realization of the base excitation a six-axis vibration simulator was utilized. The base forces were recorded with a special Force Measurement Device (FMD), and the base accelerations of the test structures were measured by accelerometers. Each of the 3 translational and 3 rotational axes of the multi-axial test facility was driven by a sine sweep signal with an appropriate base acceleration input. The application of the identification algorithm to the measured data showed that an acceptable identification of mass and mass moments of inertia is possible. However, a highly accurate identification of the center of gravity location could not be achieved. The results of the analyses are discussed and the advantages and limits of the present method are pointed out. Recommendations for the practical application and improved center of gravity identification are given. Keywords: Inertia parameters, base excitation, multi-axial test facilities, vibration testing.
Field measurement of moisture-buffering model inputs for residential buildings
Woods, Jason; Winkler, Jon
2016-02-05
Moisture adsorption and desorption in building materials impact indoor humidity. This effect should be included in building-energy simulations, particularly when humidity is being investigated or controlled. Several models can calculate this moisture-buffering effect, but accurate ones require model inputs that are not always known to the user of the building-energy simulation. This research developed an empirical method to extract whole-house model inputs for the effective moisture penetration depth (EMPD) model. The experimental approach was to subject the materials in the house to a square-wave relative-humidity profile, measure all of the moisture-transfer terms (e.g., infiltration, air-conditioner condensate), and calculate the only unmeasured term: the moisture sorption into the materials. We validated this method with laboratory measurements and then used it to measure the EMPD model inputs of two houses. After deriving these inputs, we measured the humidity of the same houses during tests with realistic latent and sensible loads and demonstrated the accuracy of this approach. Furthermore, these results show that the EMPD model, when given reasonable inputs, is an accurate moisture-buffering model.
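The buffering idea behind the EMPD model can be caricatured as a single thin material layer whose equilibrium humidity relaxes toward the room air's. This is a deliberately minimal sketch, not the paper's whole-house formulation; the exchange coefficient and humidities are assumed values:

```python
# A minimal single-node caricature of moisture buffering: a thin material
# layer's equilibrium RH relaxes (first order) toward the air RH. The
# exchange coefficient and RH values are illustrative assumptions, not the
# paper's measured whole-house EMPD inputs.

def empd_step(rh_material, rh_air, exchange_coeff, dt_s):
    """One explicit time step of first-order relaxation toward the air RH."""
    return rh_material + exchange_coeff * (rh_air - rh_material) * dt_s

# Driving the layer with a square-wave RH profile and summing the implied
# sorption flux mirrors, in spirit, how the unmeasured buffering term is
# backed out of the whole-house moisture balance.
rh = 0.40
for _ in range(3600):  # one hour at 1 s steps, air held at 60% RH
    rh = empd_step(rh, 0.60, 1e-3, 1.0)
```

After an hour the layer has nearly equilibrated with the air, illustrating why step changes in indoor humidity are damped by the surrounding materials.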
Investigation of Input Signal Curve Effect on Formed Pulse of Hydraulic-Powered Pulse Machine
NASA Astrophysics Data System (ADS)
Novoseltseva, M. V.; Masson, I. A.; Pashkov, E. N.
2016-04-01
Well drilling machines should have as high an efficiency factor as possible. This work investigates how changes in the input signal pulse curve affect the formed pulse. A series of runs is conducted on a mathematical model of a hydraulic-powered pulse machine. From this experiment, interrelations between the input pulse curve and the construction parameters are found. The results of the experiment are obtained with the help of the mathematical model, which is created in Simulink (MATLAB). Keywords: mathematical modelling; impact machine; output signal amplitude; input signal curve.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
NASA Astrophysics Data System (ADS)
Liao, Qifeng; Lin, Guang
2016-07-01
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be
Parameter Description Language Version 1.0
NASA Astrophysics Data System (ADS)
Zwolf, Carlo Maria; Harrison, Paul; Garrido, Julian; Ruiz, Jose Enrique; Le Petit, Franck
2014-05-01
This document discusses the definition of the Parameter Description Language (PDL). In this language, parameters are described in a rigorous data model. With no loss of generality, we will represent this data model using XML. It intends to be an expressive language for self-descriptive web services, exposing the semantic nature of input and output parameters as well as all necessary complex constraints. PDL is a step forward towards true web services interoperability.
Robust Blind Learning Algorithm for Nonlinear Equalization Using Input Decision Information.
Xu, Lu; Huang, Defeng David; Guo, Yingjie Jay
2015-12-01
In this paper, we propose a new blind learning algorithm, namely, the Benveniste-Goursat input-output decision (BG-IOD), to enhance the convergence performance of neural network-based equalizers for nonlinear channel equalization. In contrast to conventional blind learning algorithms, where only the output of the equalizer is employed for updating system parameters, the BG-IOD exploits a new type of extra information, the input decision information obtained from the input of the equalizer, to mitigate the influence of the nonlinear equalizer structure on parameter learning, thereby leading to improved convergence performance. We prove that, with the input decision information, a desirable convergence property can be achieved: the output symbol error rate (SER) is always less than the input SER, provided the input SER is below a threshold. Then, the BG soft-switching technique is employed to combine the merits of both input and output decision information, where the former is used to guarantee SER convergence and the latter is used to improve SER performance. Simulation results show that the proposed algorithm outperforms conventional blind learning algorithms, such as stochastic quadratic distance and the dual mode constant modulus algorithm, in terms of both convergence performance and SER performance for nonlinear equalization. PMID:25706894
Yang, Xiaoyan; Cui, Jianwei; Lao, Dazhong; Li, Donghai; Chen, Junhui
2016-05-01
In this paper, a composite control scheme based on Active Disturbance Rejection Control (ADRC) and input shaping is presented for a TRMS with two degrees of freedom (DOF). The control tasks consist of accurately tracking desired trajectories and rejecting disturbances in both the horizontal and vertical planes. Due to unmeasurable states as well as uncertainties stemming from modeling error and unknown disturbance torques, ADRC is employed, and feed-forward input shaping is used to improve the dynamic response. In the proposed approach, because the coupling effects are retained in the controller derivation, there is no need to decouple the TRMS into horizontal and vertical subsystems, as is usually done in the literature. Finally, the proposed method is implemented on the TRMS platform, and the results are compared with those of PID and ADRC controllers of similar structure. The experimental results demonstrate the effectiveness of the proposed method: the controller achieves excellent set-point tracking and disturbance rejection under system nonlinearity and complex coupling conditions. PMID:26922492
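One common feed-forward input-shaping element is the Zero-Vibration (ZV) shaper, which convolves the command with two impulses sized from a mode's natural frequency and damping ratio so the residual oscillation cancels. The abstract does not specify which shaper the authors used, so this is a sketch of the generic ZV form with illustrative mode parameters:

```python
# Sketch of a generic Zero-Vibration (ZV) input shaper: two impulses whose
# amplitudes and spacing are set by the flexible mode's natural frequency
# omega_n and damping ratio zeta. The mode parameters below are
# illustrative, not identified TRMS values.
import math

def zv_shaper(omega_n, zeta):
    """Return [(amplitude, time)] impulse pairs of a ZV shaper for one mode."""
    omega_d = omega_n * math.sqrt(1.0 - zeta * zeta)   # damped frequency
    k = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta * zeta))
    return [(1.0 / (1.0 + k), 0.0),                    # impulse at t = 0
            (k / (1.0 + k), math.pi / omega_d)]        # half damped period later

impulses = zv_shaper(omega_n=2.0, zeta=0.1)
# The amplitudes sum to one, so the shaped command settles at the same
# set-point while the second impulse cancels the first one's oscillation.
```

Convolving the reference trajectory with these impulses is what "improves the dynamic response" in a shaping scheme of this kind: the command reaching the closed loop simply no longer excites the flexible mode.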
A simple neural network model for the determination of aquifer parameters
NASA Astrophysics Data System (ADS)
Samani, N.; Gohari-Moghadam, M.; Safavi, A. A.
2007-06-01
A simple artificial neural network (ANN) model is developed for the determination of non-leaky confined aquifer parameters by normalizing and applying principal component analysis (PCA) to the training data pattern adopted from Lin and Chen [Lin, G.F., Chen, G.R., 2006. An improved neural network approach to the determination of aquifer parameters. Journal of Hydrology 316 (1-4), 281-289]. The proposed network uses the faster Levenberg-Marquardt training algorithm instead of gradient descent. The application of PCA greatly reduced the network topology, so that it has only one neuron in the input layer and eight neurons in the hidden layer regardless of the number of drawdown records in the pumping test data. The network was trained with 10,205 training sets and tested with 2000 sets of synthetic data. The network generates the coordinates of the match point for any individual pumping test case study, and the aquifer parameters are then calculated using the Theis equation. The simple ANN trains faster and determines the coordinates of the match point more accurately because of the simplified topology and the LM training algorithm. The accuracy, generalization ability and reliability of the proposed network are verified with two sets of real-time field data, and the results are compared with those of Lin and Chen as well as with graphical methods of aquifer parameter estimation. The proposed ANN appears to be a simpler and more accurate alternative to the type curve-matching techniques and previous ANN methods.
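The Theis relation that the network's match point feeds into can be sketched directly. This is a hedged illustration using the standard small-u series for the well function W(u); the pumping rate, transmissivity, and storativity values are assumed, not from the paper:

```python
# Hedged sketch of the Theis relation: drawdown s = Q/(4*pi*T) * W(u),
# with u = r^2 * S / (4*T*t). W(u) is approximated by its small-u series.
# The hydraulic values fed in below are illustrative assumptions.
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u):
    """Series approximation of W(u), adequate for small u (say u < 0.05)."""
    return -EULER_GAMMA - math.log(u) + u - u * u / 4.0

def theis_drawdown(q, T, S, r, t):
    """Drawdown (m) at radius r (m), time t (s), pumping rate q (m^3/s)."""
    u = r * r * S / (4.0 * T * t)
    return q / (4.0 * math.pi * T) * well_function(u)

# Curve matching runs this in reverse: given the match-point coordinates
# (from the type curve or, here, from the ANN), T and S follow by
# inverting these same two relations.
```

Seen this way, the ANN replaces only the manual overlay step of the classical type-curve method; the physics of the inversion is unchanged.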
Multichannel analyzers at high rates of input
NASA Technical Reports Server (NTRS)
Rudnick, S. J.; Strauss, M. G.
1969-01-01
A multichannel analyzer, used with a gating system incorporating pole-zero compensation, pile-up rejection, and baseline restoration, achieves good resolution at high input rates. It improves resolution, reduces tailing and rate-contributed continuum, and eliminates spectral shift.
Adaptive Random Testing with Combinatorial Input Domain
Lu, Yansheng
2014-01-01
Random testing (RT) is a fundamental testing technique to assess software reliability, by simply selecting test cases in a random manner from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for programs with combinatorial input domains (i.e., sets of categorical data). To extend these ideas to testing for combinatorial input domains, we have adopted different similarity measures that are widely used for categorical data in data mining and have proposed two similarity measures based on interaction coverage. We then propose a new version named ART-CID as an extension of ART in combinatorial input domains, which selects an element from the categorical data as the next test case such that it has the lowest similarity to already generated test cases. Experimental results show that ART-CID generally performs better than RT, with respect to different evaluation metrics. PMID:24772036
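The core ART selection step described above can be sketched in a few lines. This uses a simple matching similarity for categorical tuples as a stand-in; ART-CID's own measures are interaction-coverage-based, and the candidate domain below is invented for illustration:

```python
# Sketch of the ART selection idea for categorical inputs: from a pool of
# random candidates, execute next the one least similar to the already
# executed test cases. The simple matching similarity and the toy domain
# here are illustrative stand-ins for ART-CID's interaction-based measures.
import random

def similarity(a, b):
    """Fraction of categorical fields on which two test cases agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def next_test_case(executed, candidates):
    """Choose the candidate whose most similar executed neighbour is least similar."""
    return min(candidates,
               key=lambda c: max(similarity(c, e) for e in executed))

random.seed(0)
domain = [("gzip", "none"), ("utf8", "latin1"), ("tcp", "udp")]  # assumed fields
executed = [("gzip", "utf8", "tcp")]
pool = [tuple(random.choice(col) for col in domain) for _ in range(8)]
chosen = next_test_case(executed, pool)
```

The intuition is the same as in numeric ART: failures tend to cluster, so spreading test cases apart (here, in similarity space) finds the first failure sooner than uniform random selection.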
Soil Organic Carbon Input from Urban Turfgrasses
Technology Transfer Automated Retrieval System (TEKTRAN)
Turfgrass is a major vegetation type in the urban and suburban environment. Management practices such as species selection, irrigation, and mowing may affect carbon (C) input and storage in these systems. Research was conducted to determine the rate of soil organic carbon (SOC) changes, soil carbon ...
Multiple Input Microcantilever Sensor with Capacitive Readout
Britton, C.L., Jr.; Brown, G.M.; Bryan, W.L.; Clonts, L.G.; DePriest, J.C.; Emergy, M.S.; Ericson, M.N.; Hu, Z.; Jones, R.L.; Moore, M.R.; Oden, P.I.; Rochelle, J.M.; Smith, S.F.; Threatt, T.D.; Thundat, T.; Turner, G.W.; Warmack, R.J.; Wintenberg, A.L.
1999-03-11
A surface-micromachined MEMS process has been used to demonstrate multiple-input chemical sensing using selectively coated cantilever arrays. Combined hydrogen and mercury-vapor detection was achieved with a palm-sized, self-powered module with spread-spectrum telemetry reporting.
Input-Based Incremental Vocabulary Instruction
ERIC Educational Resources Information Center
Barcroft, Joe
2012-01-01
This fascinating presentation of current research undoes numerous myths about how we most effectively learn new words in a second language. In clear, reader-friendly text, the author details the successful approach of IBI vocabulary instruction, which emphasizes the presentation of target vocabulary as input early on and the incremental (gradual)…
7 CFR 3430.607 - Stakeholder input.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.607 Section 3430.607 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION, AND EXTENSION SERVICE, DEPARTMENT OF AGRICULTURE COMPETITIVE AND NONCOMPETITIVE NON-FORMULA...
7 CFR 3430.15 - Stakeholder input.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.15 Section 3430.15 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION, AND EXTENSION SERVICE, DEPARTMENT OF AGRICULTURE COMPETITIVE AND NONCOMPETITIVE NON-FORMULA...
7 CFR 3430.907 - Stakeholder input.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.907 Section 3430.907 Agriculture Regulations of the Department of Agriculture (Continued) COOPERATIVE STATE RESEARCH, EDUCATION, AND EXTENSION SERVICE, DEPARTMENT OF AGRICULTURE COMPETITIVE AND NONCOMPETITIVE NON-FORMULA...
7 CFR 3430.15 - Stakeholder input.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Stakeholder input. 3430.15 Section 3430.15 Agriculture Regulations of the Department of Agriculture (Continued) NATIONAL INSTITUTE OF FOOD AND AGRICULTURE COMPETITIVE AND NONCOMPETITIVE NON-FORMULA FEDERAL ASSISTANCE PROGRAMS-GENERAL...
Numerical simulation of LIGO input optics
NASA Astrophysics Data System (ADS)
Shivanand; Jamal, Nafis; Yoshida, Sanichiro
2005-11-01
Numerical analysis has been carried out to understand the performance of the input optics used in the first generation of the LIGO (Laser Interferometer Gravitational-wave Observatory) detector. The input optics is a subsystem consisting of a mode cleaner and a mode-matching telescope, where all the optics are suspended and installed in vacuum. Computer codes simulating the input optics were written using the end-to-end package (LIGO programming language). Applying realistic seismic noise to the suspension points of the optics and using the length sensing/alignment sensing control for the mode cleaner, the performance of the input optics has been simulated under various scenarios, such as seismic noise an order of magnitude higher than the normal level, and with/without the alignment sensing control feedback from the arm cavity to the mode-matching telescope. The results are assessed in terms of the pointing fluctuation of the laser beam entering the arm cavities, its influence on the optical coupling to the arm cavities, and the noise level at the gravitational wave port signal.
Treatments of Precipitation Inputs to Hydrologic Models
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models is dependent on correct parameterization; the most sensitive input being the rainfall time series. These records can come from land-based ...
Input, Interaction and Output: An Overview
ERIC Educational Resources Information Center
Gass, Susan; Mackey, Alison
2006-01-01
This paper presents an overview of what has come to be known as the "Interaction Hypothesis," the basic tenet of which is that through input and interaction with interlocutors, language learners have opportunities to notice differences between their own formulations of the target language and the language of their conversational…
NASA Astrophysics Data System (ADS)
Tsantaki, M.; Sousa, S. G.; Santos, N. C.; Montalto, M.; Delgado-Mena, E.; Mortier, A.; Adibekyan, V.; Israelian, G.
2014-10-01
Context. Planetary studies demand precise and accurate stellar parameters as input for inferring the planetary properties. Different methods often provide different results that could lead to biases in the planetary parameters. Aims: In this work, we present a refinement of the spectral synthesis technique designed to treat fast rotating stars better. This method is used to derive precise stellar parameters, namely effective temperature, surface gravity, metallicity, and rotational velocity. The procedure is tested for FGK stars with low and moderate-to-high rotation rates. Methods: The spectroscopic analysis is based on the spectral synthesis package Spectroscopy Made Easy (SME), which assumes Kurucz model atmospheres in LTE. The line list used for the synthesis comprises iron lines, and the atomic data are derived after solar calibration. Results: The comparison of our stellar parameters shows good agreement with literature values, for both slowly and fast rotating stars. In addition, our results are on the same scale as the parameters derived from the iron ionization and excitation method presented in our previous works. We present new atmospheric parameters for 10 transiting planet hosts as an update to the SWEET-Cat catalog. We also re-analyze their transit light curves to derive new updated planetary properties. Based on observations collected at the La Silla Observatory, ESO (Chile) with the FEROS spectrograph at the 2.2 m telescope (ESO runs ID 089.C-0444(A), 088.C-0892(A)) and with the HARPS spectrograph at the 3.6 m telescope (ESO runs ID 072.C-0488(E), 079.C-0127(A)); at the Observatoire de Haute-Provence (OHP, CNRS/OAMP), France, with the SOPHIE spectrograph at the 1.93 m telescope and at the Observatoire Midi-Pyrénées (CNRS), France, with the NARVAL spectrograph at the 2 m Bernard Lyot Telescope (Run ID L131N11). Appendix A is available in electronic form at http://www.aanda.org
Poulin, Eric; Lebel, Réjean; Croteau, Etienne; Blanchette, Marie; Tremblay, Luc; Lecomte, Roger; Bentourkia, M'hamed; Lepage, Martin
2013-03-01
Reaching the full potential of magnetic resonance imaging (MRI)-positron emission tomography (PET) dual modality systems requires new methodologies in quantitative image analyses. In this study, methods are proposed to convert an arterial input function (AIF) derived from gadolinium-diethylenetriaminepentaacetic acid (Gd-DTPA) in MRI, into a (18)F-fluorodeoxyglucose ((18)F-FDG) AIF in PET, and vice versa. The AIFs from both modalities were obtained from manual blood sampling in a F98-Fisher glioblastoma rat model. They were well fitted by a convolution of a rectangular function with a biexponential clearance function. The parameters of the biexponential AIF model were found statistically different between MRI and PET. Pharmacokinetic MRI parameters such as the volume transfer constant (K(trans)), the extravascular-extracellular volume fraction (ν(e)), and the blood volume fraction (ν(p)) calculated with the Gd-DTPA AIF and the Gd-DTPA AIF converted from (18)F-FDG AIF normalized with or without blood sample were not statistically different. Similarly, the tumor metabolic rates of glucose (TMRGlc) calculated with (18)F-FDG AIF and with (18)F-FDG AIF obtained from Gd-DTPA AIF were also found not statistically different. In conclusion, only one accurate AIF would be needed for dual MRI-PET pharmacokinetic modeling in small animal models. PMID:22570280
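The AIF model described above, a rectangular bolus convolved with a biexponential clearance, has a closed-form piecewise expression that is easy to evaluate. A minimal sketch (the parameter values below are illustrative assumptions, not the paper's fitted values):

```python
import math

def aif(t, tau, A1, l1, A2, l2):
    """Arterial input function: rectangular bolus of duration tau convolved
    with a biexponential clearance A1*exp(-l1*t) + A2*exp(-l2*t).
    The convolution integral is evaluated in closed form, piecewise in t."""
    def term(A, lam):
        if t <= tau:  # during the bolus: rising portion
            return (A / lam) * (1.0 - math.exp(-lam * t))
        # after the bolus: decaying portion
        return (A / lam) * (math.exp(-lam * (t - tau)) - math.exp(-lam * t))
    return term(A1, l1) + term(A2, l2)

# Illustrative curve sampled every 0.1 time units.
curve = [aif(t / 10.0, tau=0.5, A1=5.0, l1=3.0, A2=1.0, l2=0.3) for t in range(61)]
peak_index = max(range(len(curve)), key=curve.__getitem__)
print(peak_index / 10.0)  # the peak falls at the end of the bolus, t = tau = 0.5
```

With such a parametric form, converting between the Gd-DTPA and (18)F-FDG AIFs amounts to mapping one fitted parameter set onto the other.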
NASA Technical Reports Server (NTRS)
Chen, J. C.; Hunt, D. L.
1984-01-01
An experimental modal analysis of the Galileo spacecraft was required to verify a finite element model used in loads analysis. Multiple input random and polyreference analysis techniques were applied in this program to demonstrate their effectiveness in determining the modal characteristics of a complex space structure. The methods were successful in determining an accurate set of modal data from two days of data acquisition. A complete set of results was available within 24 hours of test completion. Final analysis shows the modes from the multiple input random tests to be more complete and orthogonal than those obtained from classical sine dwell methods.
A More Accurate Measurement of the ²⁸Si Lattice Parameter
Massa, E.; Sasso, C. P.; Mana, G.; Palmisano, C.
2015-09-15
In 2011, a discrepancy between the values of the Planck constant measured by counting Si atoms and by comparing mechanical and electrical powers prompted a review, among others, of the measurement of the spacing of ²⁸Si (220) lattice planes, either to confirm the measured value and its uncertainty or to identify errors. This exercise confirmed the result of the previous measurement and yields the additional value d₂₂₀ = 192 014 711.98(34) am, with a reduced uncertainty.
Chon, K H; Cohen, R J; Holstein-Rathlou, N H
1997-01-01
A linear and nonlinear autoregressive moving average (ARMA) identification algorithm is developed for modeling time series data. The algorithm uses Laguerre expansion of kernels (LEK) to estimate Volterra-Wiener kernels. However, instead of estimating linear and nonlinear system dynamics via moving average models, as is the case for the Volterra-Wiener analysis, we propose an ARMA model-based approach. The proposed algorithm is essentially the same as LEK, but this algorithm is extended to include past values of the output as well. Thus, all of the advantages associated with using the Laguerre function remain with our algorithm; but, by extending the algorithm to the linear and nonlinear ARMA model, a significant reduction in the number of Laguerre functions can be made, compared with the Volterra-Wiener approach. This translates into a more compact system representation and makes the physiological interpretation of higher order kernels easier. Furthermore, simulation results show better performance of the proposed approach in estimating the system dynamics than LEK in certain cases, and it remains effective in the presence of significant additive measurement noise. PMID:9236985
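The discrete Laguerre basis underlying LEK-style expansions can be generated by a simple filter recursion. The sketch below uses the standard discrete-time Laguerre construction (a cascade of all-pass sections), which is an assumption for illustration, not necessarily the exact form used by the authors:

```python
import math

def discrete_laguerre(order, alpha, N):
    """Generate discrete Laguerre functions l_0..l_order on n = 0..N-1 via
    the all-pass filter recursion:
      l_0(n) = alpha * l_0(n-1),  with  l_0(0) = sqrt(1 - alpha^2)
      l_j(n) = alpha * l_j(n-1) + l_{j-1}(n-1) - alpha * l_{j-1}(n)
    These form an orthonormal basis used to expand system kernels compactly."""
    L = [[0.0] * N for _ in range(order + 1)]
    L[0][0] = math.sqrt(1.0 - alpha * alpha)
    for n in range(1, N):
        L[0][n] = alpha * L[0][n - 1]
    for j in range(1, order + 1):
        L[j][0] = -alpha * L[j - 1][0]  # n = 0 case of the recursion
        for n in range(1, N):
            L[j][n] = alpha * L[j][n - 1] + L[j - 1][n - 1] - alpha * L[j - 1][n]
    return L

L = discrete_laguerre(order=2, alpha=0.5, N=400)
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(round(dot(L[0], L[0]), 6), round(dot(L[0], L[1]), 6))  # ~1.0 and ~0.0
```

The decay parameter alpha controls the memory of the basis; a kernel is then represented by a handful of coefficients in this basis rather than by its full impulse response.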
Accurate crop classification using hierarchical genetic fuzzy rule-based systems
NASA Astrophysics Data System (ADS)
Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.
2014-10-01
This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimum user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers of the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machines (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime required to produce the thematic map was orders of magnitude lower than that of the competitors.
Visual parameter optimisation for biomedical image processing
2015-01-01
Background Biomedical image processing methods require users to optimise input parameters to ensure high-quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output. Results We present a visualisation method that transforms users' ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm. Conclusions The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches. PMID:26329538
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.
1993-01-01
The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2016-07-01
In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known: the thermometers start at ambient temperature and are then immediately immersed in the saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located at its center. The fluid temperature was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple temperature correction based on a first- or second-order inertia model. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
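The median function mentioned above gives a compact way to code minmod-style monotonicity constraints in piecewise-linear reconstruction. A rough illustration (an MC-type limiter is shown for concreteness; it is not Huynh's exact constraint):

```python
def median(a, b, c):
    """Median of three numbers; note the identity
    median(a, b, c) = a + minmod(b - a, c - a)."""
    return max(min(a, b), min(max(a, b), c))

def minmod(a, b, c):
    """Zero at an extremum; otherwise the smallest-magnitude value."""
    if a > 0 and b > 0 and c > 0:
        return min(a, b, c)
    if a < 0 and b < 0 and c < 0:
        return max(a, b, c)
    return 0.0

def limited_slope(u_left, u_center, u_right):
    """Monotonicity-constrained slope for a piecewise-linear reconstruction:
    the central difference is clipped by twice the one-sided differences."""
    central = 0.5 * (u_right - u_left)
    return minmod(central, 2.0 * (u_right - u_center), 2.0 * (u_center - u_left))

print(limited_slope(0.0, 1.0, 2.0))  # smooth monotone data: central slope 1.0
print(limited_slope(0.0, 1.0, 0.0))  # local extremum: slope limited to 0.0
```

At smooth monotone data the limiter returns the second-order central slope, which is why such constraints preserve uniform second-order accuracy away from extrema.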
The first accurate description of an aurora
NASA Astrophysics Data System (ADS)
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Are Kohn-Sham conductances accurate?
Mera, H; Niquet, Y M
2010-11-19
We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333
Accurate density functional thermochemistry for larger molecules.
Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.
1997-06-20
Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. The experiment uses a function generator to provide the input signal and an oscilloscope to record the input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data and obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
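The least-squares step can be illustrated generically: for a known excitation frequency omega, fitting y(t) ≈ A sin(ωt) + B cos(ωt) + C recovers the response amplitude and phase at that frequency. This is a sketch under assumed sampling and signal parameters, not the authors' code:

```python
import math

def fit_sine(t, y, omega):
    """Least-squares fit of y(t) ~ A*sin(omega t) + B*cos(omega t) + C.
    Builds and solves the 3x3 normal equations directly; returns (amplitude, phase)."""
    s = [math.sin(omega * ti) for ti in t]
    c = [math.cos(omega * ti) for ti in t]
    cols = [s, c, [1.0] * len(t)]
    G = [[sum(ui * vi for ui, vi in zip(u, v)) for v in cols] for u in cols]
    rhs = [sum(ui * yi for ui, yi in zip(u, y)) for u in cols]
    # Gaussian elimination (forward sweep), then back substitution.
    for i in range(3):
        for j in range(i + 1, 3):
            f = G[j][i] / G[i][i]
            G[j] = [gj - f * gi for gj, gi in zip(G[j], G[i])]
            rhs[j] -= f * rhs[i]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (rhs[i] - sum(G[i][j] * x[j] for j in range(i + 1, 3))) / G[i][i]
    A, B, _ = x
    return math.hypot(A, B), math.atan2(B, A)

# Synthetic response: amplitude 2.0, phase 0.4 rad, small DC offset.
t = [i * 0.01 for i in range(500)]
y = [2.0 * math.sin(7.0 * ti + 0.4) + 0.1 for ti in t]
amp, phase = fit_sine(t, y, 7.0)
print(round(amp, 3), round(phase, 3))  # 2.0 and 0.4
```

Repeating the fit at several excitation frequencies around resonance yields the frequency response points from which the modal parameters are identified.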
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil; Abhyankar, S.; Ghosh, Donetta L.; Smith, Barry; Huang, Zhenyu; Tartakovsky, Alexandre M.
2015-09-22
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent the uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
NASA Astrophysics Data System (ADS)
Yuan, Jing; Hu, Jiangping
2016-03-01
Pairing symmetries of iron-based superconductors are investigated systematically in a five-orbital model within different regions of interaction parameters by functional renormalization group (FRG). Even for a fixed Fermi surface with both hole and electron pockets, it is found that, depending on the interaction parameters, a variety of pairing symmetries can emerge, including two types of d-wave and two types of s-wave pairing. Only the d_{x²-y²}- and s±-waves are robustly supported by the nearest-neighbor (NN) intra-orbital J₁ and the next-nearest-neighbor (NNN) intra-orbital J₂ antiferromagnetic (AFM) exchange couplings, respectively. This study suggests that accurate initial input of the interaction parameters is essential to make FRG a useful method for determining the leading channel of superconducting instability.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides tests to ensure that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
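The core of sequential verification is a tolerance-based comparison of calculations produced by consecutive code versions. A minimal sketch of such a check, with hypothetical names (the actual RELAP5-3D tooling and tolerances are not described at this level of detail in the abstract):

```python
import math

def sequential_verify(old_values, new_values, rel_tol=1e-12):
    """Compare calculations from consecutive code versions; any drift
    beyond rel_tol flags an unintended change between versions."""
    if len(old_values) != len(new_values):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=1e-300)
               for a, b in zip(old_values, new_values))
```

A very tight relative tolerance is the point of "highly accurate" verification: legitimate, intended changes are reviewed and re-baselined, while anything else is treated as a regression.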
AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D
George L Mesina; David Aumiller; Francis Buschman
2014-07-01
Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of coding for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, there has been increased emphasis on developing automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
Input to state stability in reservoir models
NASA Astrophysics Data System (ADS)
Müller, Markus; Sierra, Carlos
2016-04-01
Models in ecology and biogeochemistry, in particular models of the global carbon cycle, can be generalized as systems of non-autonomous ordinary differential equations (ODEs). For many applications, it is important to determine the stability properties for this type of system, but most methods available for autonomous systems are not necessarily applicable to the non-autonomous case. We discuss here stability notions for non-autonomous nonlinear models represented by systems of ODEs explicitly dependent on time and a time-varying input signal. We propose Input to State Stability (ISS) as a candidate for the necessary generalization of the established analysis with respect to equilibria or invariant sets for autonomous systems, and show its usefulness by applying it to reservoir models typical for element cycling in ecosystems, e.g., soil organic matter decomposition. We also show how ISS generalizes existing concepts formerly only available for Linear Time Invariant (LTI) and Linear Time Variant (LTV) systems to the nonlinear case.
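The ISS idea can be seen in the simplest possible reservoir model: a one-pool system dx/dt = −k·x + u(t) with decay rate k and bounded time-varying input u. The state eventually stays within sup|u|/k of zero, regardless of the initial condition. A minimal numerical sketch (a generic illustration, not the authors' model):

```python
import numpy as np

def simulate_reservoir(x0, k, u, dt, steps):
    """Forward-Euler integration of dx/dt = -k*x + u(t), a one-pool
    reservoir (e.g. soil carbon stock) with time-varying input u."""
    x = x0
    traj = [x]
    for n in range(steps):
        x = x + dt * (-k * x + u(n * dt))
        traj.append(x)
    return np.array(traj)

u = lambda t: 1.0 + 0.5 * np.sin(t)   # bounded input, sup|u| = 1.5
traj = simulate_reservoir(x0=10.0, k=0.5, u=u, dt=0.01, steps=5000)
```

Here the transient from x0 = 10 decays, and the long-run state is confined by the input bound (|x| ≤ 1.5/0.5 = 3), which is exactly the kind of bounded-input/bounded-state guarantee that ISS generalizes to nonlinear, non-autonomous systems.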
2012-10-03
Contains a class for connecting to the Xbox 360 controller, displaying the user inputs (buttons, triggers, analog sticks), and controlling the rumble motors. Also contains classes for converting the raw Xbox 360 controller inputs into meaningful commands for the following objects: Robot arms - provides joint control and several tool control schemes; UGVs - provides translational and rotational commands for "skid-steer" vehicles; Pan-tilt units - provides several modes of control including velocity, position, and point-tracking; Head-mounted displays (HMD) - controls the viewpoint of an HMD; Umbra frames - controls the position and orientation of an Umbra posrot object; Umbra graphics window - provides several modes of control for the Umbra OSG window viewpoint, including free-fly, cursor-focused, and object following.
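The skid-steer conversion mentioned above is a standard mixing of a translational and a rotational command into left/right track speeds. A minimal sketch with hypothetical names (the record does not give the library's actual API):

```python
def skid_steer(translate, rotate, max_speed=1.0):
    """Map stick axes in [-1, 1] to (left, right) track speeds for a
    skid-steer UGV: forward stick drives both tracks, sideways stick
    drives them in opposition to turn in place."""
    left = translate + rotate
    right = translate - rotate
    # Preserve the left/right ratio when a command would saturate.
    scale = max(1.0, abs(left), abs(right))
    return (max_speed * left / scale, max_speed * right / scale)
```

For example, full forward drives both tracks equally, pure rotation counter-rotates the tracks, and combined commands are rescaled rather than clipped so the commanded curvature is preserved.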
Multimodal interfaces with voice and gesture input
Milota, A.D.; Blattner, M.M.
1995-07-20
The modalities of speech and gesture have different strengths and weaknesses, but combined they create synergy where each modality corrects the weaknesses of the other. We believe that a multimodal system such as one intertwining speech and gesture must start from a different foundation than ones which are based solely on pen input. In order to provide a basis for the design of a speech and gesture system, we have examined the research in other disciplines such as anthropology and linguistics. The result of this investigation was a taxonomy that gave us material for the incorporation of gestures whose meanings are largely transparent to the users. This study describes the taxonomy and gives examples of applications to pen input systems.
Circadian light-input pathways in Drosophila.
Yoshii, Taishi; Hermann-Luibl, Christiane; Helfrich-Förster, Charlotte
2016-01-01
Light is the most important environmental cue to entrain the circadian clock in most animals. In the fruit fly Drosophila melanogaster, the light entrainment mechanisms of the clock have been well-studied. The Drosophila brain contains approximately 150 neurons that rhythmically express circadian clock genes. These neurons are called "clock neurons" and control behavioral activity rhythms. Many clock neurons express the Cryptochrome (CRY) protein, which is sensitive to UV and blue light, and thus enables clock neurons deep in the brain to directly perceive light. In addition to the CRY protein, external photoreceptors in the Drosophila eyes play an important role in circadian light-input pathways. Recent studies have provided new insights into the mechanisms that integrate these light inputs into the circadian network of the brain. In this review, we will summarize the current knowledge on the light entrainment pathways in the Drosophila circadian clock. PMID:27066180
Input on NIH Toolbox inclusion criteria
Victorson, David; Debb, Scott M.; Gershon, Richard C.
2013-01-01
Objective: The NIH Toolbox is intended to be responsive to the needs of investigators evaluating neurologic and behavioral function in diverse settings. Early phases of the project involved gathering information and input from potential end users. Methods: Information was collected through literature and instrument database reviews, requests for information, consensus meetings, and expert interviews and integrated into the NIH Toolbox development process in an iterative manner. Results: Criteria for instrument inclusion, subdomains to be assessed, and preferences regarding instrument cost and length were obtained. Existing measures suitable for inclusion in the NIH Toolbox and areas requiring new measure development were identified. Conclusion: The NIH Toolbox was developed with explicit input from potential end users regarding many of its key features. PMID:23479548