Total Dose Effects on Single Event Transients in Digital CMOS and Linear Bipolar Circuits
NASA Technical Reports Server (NTRS)
Buchner, S.; McMorrow, D.; Sibley, M.; Eaton, P.; Mavis, D.; Dusseau, L.; Roche, N. J-H.; Bernard, M.
2009-01-01
This presentation discusses the effects of ionizing radiation on single event transients (SETs) in circuits. The exposure of integrated circuits to ionizing radiation changes their electrical parameters. The total ionizing dose (TID) effect is observed in both complementary metal-oxide-semiconductor (CMOS) and bipolar circuits. In bipolar circuits, transistors exhibit gain degradation, while in CMOS circuits, transistors exhibit threshold voltage shifts. Changes in electrical parameters can change single event upset (SEU)/SET rates; depending on the effect, the rates may increase or decrease. Therefore, measures taken for SEU/SET mitigation might work at the beginning of a mission but not at the end, following TID exposure. The effect of TID on SET rates should be considered if SETs cannot be tolerated.
NASA Astrophysics Data System (ADS)
Hejri, Mohammad; Mokhtari, Hossein; Azizian, Mohammad Reza; Söder, Lennart
2016-04-01
Parameter extraction of the five-parameter single-diode model of solar cells and modules from experimental data is a challenging problem. These parameters are evaluated from a set of nonlinear equations that cannot be solved analytically. On the other hand, a numerical solution of such equations needs a suitable initial guess to converge. This paper presents a new set of approximate analytical solutions for the parameters of the five-parameter single-diode model of photovoltaic (PV) cells and modules. The proposed solutions provide a good initial point that guarantees convergence of the numerical analysis. The proposed technique needs only a few data points from the PV current-voltage characteristics, i.e., the open-circuit voltage Voc, the short-circuit current Isc, and the maximum power point current and voltage Im and Vm, making it a fast and low-cost parameter determination technique. The accuracy of the presented theoretical I-V curves is verified by experimental data.
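The implicit single-diode equation at the heart of the abstract above can be sketched directly; the damped fixed-point solver and the closed-form starting values below are illustrative textbook-style choices, not the paper's exact expressions:

```python
import math

def initial_guess(voc, isc, vm, im, n=1.3, vt=0.0259):
    """Starting values for (Iph, I0, n, Rs, Rsh) from the four I-V
    characteristics named in the abstract. These simple closed forms are
    common approximations, not the paper's derived solutions."""
    iph0 = isc                          # photocurrent ~ short-circuit current
    rs0 = (voc - vm) / im               # series-resistance proxy near Voc
    rsh0 = vm / (isc - im)              # shunt-resistance proxy near Isc
    i00 = isc * math.exp(-voc / (n * vt))  # saturation current from Voc
    return iph0, i00, n, rs0, rsh0

def diode_current(v, params, vt=0.0259, tol=1e-12, max_iter=500):
    """Solve the implicit single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for I by damped fixed-point iteration, starting from the initial guess."""
    iph, i0, n, rs, rsh = params
    i = iph
    for _ in range(max_iter):
        i_new = (iph
                 - i0 * (math.exp((v + i * rs) / (n * vt)) - 1.0)
                 - (v + i * rs) / rsh)
        if abs(i_new - i) < tol:
            break
        i = 0.5 * (i + i_new)  # damping stabilizes the iteration
    return i
```

A good starting point matters because, as the abstract notes, the undamped nonlinear system may otherwise fail to converge.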
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Few studies have addressed failure prediction for analog circuits. The existing methods lack correlation with circuit analysis when extracting and calculating features, so the fault indicator (FI) calculation often lacks rationality, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Because faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. These conclusions are verified by experiments. PMID:25147853
Reliability analysis of a sensitive and independent stabilometry parameter set
Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M
2018-01-01
Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature, or not for every stance type used in stabilometry assessments, for example, single-leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single-leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single-leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). In general, frequency-type and extreme-value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938
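The reliability statistics named above follow standard definitions; a minimal sketch (the study's exact ICC model and CV convention may differ):

```python
import math

def sem_mdc_cv(sd_between, icc, mean_value, z=1.96):
    """Standard error of measurement (SEM), minimal detectable change at the
    95% level (MDC95) and coefficient of variation (CV%) from a
    between-subject SD, a test-retest ICC and the parameter mean.
    Standard textbook formulas; the paper's ICC variant may differ."""
    sem = sd_between * math.sqrt(1.0 - icc)   # SEM = SD * sqrt(1 - ICC)
    mdc = z * sem * math.sqrt(2.0)            # MDC95 = 1.96 * SEM * sqrt(2)
    cv = 100.0 * sd_between / mean_value      # CV expressed in percent
    return sem, mdc, cv
```

For example, a parameter with SD = 10, ICC = 0.75 and mean = 50 gives SEM = 5, MDC95 ≈ 13.9 and CV = 20%.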
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then becomes finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. In preliminary verification runs using the tuned parameter settings, optimal solutions for multiple instances were found efficiently.
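A 2-level full factorial design of the kind compared above can be generated and analyzed in a few lines; the toy response and main-effects estimator are illustrative, not the paper's GA tuning setup:

```python
from itertools import product

def full_factorial_2k(k):
    """All 2^k corner points of a 2-level design, coded as -1/+1 levels."""
    return [list(p) for p in product((-1, 1), repeat=k)]

def main_effects(design, responses):
    """Main effect of each factor: mean response at the high level minus
    mean response at the low level (interactions are ignored here)."""
    k = len(design[0])
    effects = []
    for j in range(k):
        hi = [y for x, y in zip(design, responses) if x[j] == 1]
        lo = [y for x, y in zip(design, responses) if x[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

For a linear response y = 3 + 2·x1 + 5·x2, the estimated main effects are 4 and 10 (twice the coefficients), which is what the DOE regression would recover.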
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. The modeled road load is then compared to the measured load to evaluate the probability of each parameter set. Acceptance of a proposed parameter set is determined by its probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
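The accept/reject scheme described above is essentially random-walk Metropolis. A minimal sketch under assumed forms (a flat-road load equation, a Gaussian likelihood, and hypothetical parameter scales and step sizes):

```python
import math, random

def road_load(v, a, mass, cda, crr, rho=1.2, g=9.81):
    """Simplified road-load force: inertia + aerodynamic drag + rolling
    resistance, on a flat road. The paper's variant may include grade and
    other terms; this form is an illustrative assumption."""
    return mass * a + 0.5 * rho * cda * v ** 2 + mass * g * crr

def metropolis(data, n_steps=5000, sigma=50.0, seed=1):
    """Random-walk Metropolis over theta = (mass, CdA, Crr).
    `data` holds (speed, acceleration, measured_force) tuples; the chain
    history approximates the posterior over parameter sets."""
    rng = random.Random(seed)
    theta = [10000.0, 5.0, 0.01]     # hypothetical initial guess
    step = [200.0, 0.2, 0.001]       # hypothetical proposal scales

    def log_post(t):
        if t[0] <= 0 or t[1] <= 0 or t[2] <= 0:
            return float("-inf")     # flat prior on positive parameters
        return -sum((f - road_load(v, a, *t)) ** 2
                    for v, a, f in data) / (2.0 * sigma ** 2)

    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = [t + rng.gauss(0.0, s) for t, s in zip(theta, step)]
        lp_prop = log_post(prop)
        # accept with probability equal to the posterior ratio
        if lp_prop - lp > math.log(rng.random() + 1e-300):
            theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain
```

The spread of the retained chain gives exactly the "distribution of possible values" the abstract contrasts with a single point estimate.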
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
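A dual double-exponential current source of the kind the model uses can be written directly; the pulse parameters below are hypothetical placeholders, not values extracted in the paper:

```python
import math

def double_exp(t, i_amp, tau_rise, tau_fall):
    """One double-exponential current pulse, zero for t < 0. `i_amp` sets
    the amplitude scale (the true peak is somewhat lower than i_amp)."""
    if t < 0:
        return 0.0
    return i_amp * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def dual_double_exp(t, p1, p2):
    """Sum of two double-exponential sources in parallel, as in the
    dual-source SET model described above: typically one fast component
    (prompt collection) plus one slower component (diffusion tail)."""
    return double_exp(t, *p1) + double_exp(t, *p2)
```

The point of the second source is visible in the waveform: a single double-exponential cannot reproduce both the sharp prompt spike and the long diffusion tail at once.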
Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel
2016-01-01
This paper presents a novel method for improving the training step of single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as the fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422
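The fitness Az is the area under the ROC curve, which can be computed by pairwise comparison of scores (the Wilcoxon-Mann-Whitney form); a minimal sketch, independent of the Gabor filtering itself:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Wilcoxon-Mann-Whitney statistic:
    the fraction of (positive, negative) pairs ranked correctly, with ties
    counted as half. Here positives would be vessel pixels and negatives
    background pixels (illustrative use of the fitness function)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An Az of 1.0 means every vessel pixel outscores every background pixel; 0.5 is chance level, so the reported 0.95 indicates strong separation of the two classes.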
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
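The null-space idea above can be illustrated on a toy linear model with one observation and two parameters, where the null direction is known analytically; this is a stand-in for the SVD-based NSMC machinery, not the Culebra model:

```python
import math, random

# Toy linear model: a single observation y = p1 + p2 depends only on the
# sum of the parameters, so the Jacobian is J = [1, 1] and the direction
# (1, -1)/sqrt(2) spans its null space. NSMC keeps the calibrated
# solution-space component fixed and resamples the null-space component,
# so every sample reproduces the calibration exactly (to first order).

def predict(p):
    """Forward model: the single calibrated observation."""
    return p[0] + p[1]

def nsmc_samples(p_cal, n, scale=1.0, seed=0):
    """Calibration-constrained parameter sets: the calibrated point p_cal
    plus Gaussian perturbations along the analytic null direction."""
    rng = random.Random(seed)
    null_dir = (1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0))
    out = []
    for _ in range(n):
        a = rng.gauss(0.0, scale)  # null-space coordinate
        out.append((p_cal[0] + a * null_dir[0], p_cal[1] + a * null_dir[1]))
    return out
```

The samples vary widely in parameter space yet all fit the datum perfectly, which is exactly why an NSMC ensemble can remain biased toward the single calibrated field for predictions that the data do not constrain.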
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kano, Shinya; Maeda, Kosuke; Majima, Yutaka, E-mail: majima@msl.titech.ac.jp
2015-10-07
We present an analysis of chemically assembled double-dot single-electron transistors using the orthodox model, considering offset charges. First, we fabricate chemically assembled single-electron transistors (SETs) consisting of two Au nanoparticles between electroless Au-plated nanogap electrodes. Then, the extraordinarily stable Coulomb diamonds in the double-dot SETs are analyzed using the orthodox model, considering the offset charges on the respective quantum dots. We determine the equivalent circuit parameters from the Coulomb diamonds and the drain current vs. drain voltage curves of the SETs. The accuracies of the capacitances and offset charges on the quantum dots are within ±10% and ±0.04e (where e is the elementary charge), respectively. The parameters can be explained by the geometrical structures of the SETs observed in scanning electron microscopy images. Using this approach, we are able to understand the spatial characteristics of the double quantum dots, such as the relative distance from the gate electrode and the conditions for adsorption between the nanogap electrodes.
Influence parameters of impact grinding mills
NASA Technical Reports Server (NTRS)
Hoeffl, K.; Husemann, K.; Goldacker, H.
1984-01-01
Significant parameters for impact grinding mills were investigated. Final particle size was used to evaluate grinding results. Adjustment of the parameters toward increased charge load results in improved efficiency; however, it was not possible to define a single, unified set of optimum grinding conditions.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, are recorded with specific measurement errors. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e., the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are (i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and (ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.
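The single-time-step baseline that the proposed smoother extends is the bootstrap (SIR) particle filter; a minimal scalar sketch with an assumed random-walk state model and Gaussian observation noise:

```python
import math, random

def particle_filter(observations, n_particles=500, q=0.5, r=0.5, seed=0):
    """Minimal bootstrap (SIR) particle filter for a scalar random-walk
    state observed in Gaussian noise. The smoother described above would
    instead reweight particles over a window of time steps; this
    single-step filter is the baseline it extends (illustrative sketch)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate each particle through the random-walk state model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # weight by the Gaussian likelihood of the observation
        weights = [math.exp(-((y - x) ** 2) / (2.0 * r * r)) for x in particles]
        total = sum(weights)
        if total == 0.0:                       # guard against weight collapse
            weights = [1.0 / n_particles] * n_particles
        else:
            weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # systematic resampling
        cum, c = 0.0, []
        for w in weights:
            cum += w
            c.append(cum)
        c[-1] = 1.0
        i, new = 0, []
        for k in range(n_particles):
            pos = (k + rng.random()) / n_particles
            while c[i] < pos:
                i += 1
            new.append(particles[i])
        particles = new
    return estimates
```

Because each weight depends on only one observation, one outlier can dominate a resampling step; smoothing the weights over a window, as proposed above, dilutes that influence.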
Interaction of cadmium with phosphate on goethite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venema, P.; Hiemstra, T.; Riemsdijk, W.H. van
1997-08-01
Interactions between different ions are of importance in understanding chemical processes in natural systems. In this study simultaneous adsorption of phosphate and cadmium on goethite is studied in detail. The charge distribution (CD)-multisite complexation (MUSIC) model has been successful in describing extended data sets of cadmium adsorption and phosphate adsorption on goethite. In this study, the parameters of this model for these two data sets were combined to describe a new data set of simultaneous adsorption of cadmium and phosphate on goethite. Attention is focused on the surface speciation of cadmium. With the extra information that can be obtained from the interaction experiments, the cadmium adsorption model is refined. For a perfect description of the data, the singly coordinated surface groups at the 110 face of goethite were assumed to form both monodentate and bidentate surface species with cadmium. The CD-MUSIC model is able to describe data sets of both simultaneous and single adsorption of cadmium and phosphate with the same parameters. The model calculations confirmed the idea that only singly coordinated surface groups are reactive for specific ion binding.
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter-space mapping). The consideration of surfaces in parameter space, as opposed to local values or gradients, leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables a range of target data from multiple molecules to be combined in a robust manner and the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane, as well as the density and heat of vaporization of the liquid at atmospheric pressure, for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferable force fields is discussed.
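Parameter-space mapping reduces to evaluating an objective surface over a grid of parameter combinations; the sketch below uses the 6-12 Lennard-Jones form named above, with toy observables standing in for the simulated solvation and liquid properties:

```python
def lj(r, epsilon, sigma):
    """6-12 Lennard-Jones pair potential: 4*eps*((sig/r)^12 - (sig/r)^6)."""
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x * x - x)

def objective_surface(eps_values, sig_values, target):
    """Squared deviation from target observables over a grid of
    (epsilon, sigma) combinations, mimicking the parameter-space scan
    described above. The 'observables' here are just the potential at two
    separations, a placeholder for the free-enthalpy and density targets."""
    surface = {}
    for eps in eps_values:
        for sig in sig_values:
            calc = (lj(3.5, eps, sig), lj(4.5, eps, sig))
            surface[(eps, sig)] = sum((c - t) ** 2
                                      for c, t in zip(calc, target))
    return surface
```

Inspecting the whole surface, rather than a single gradient step, makes it obvious where the low-error region lies and whether targets from different molecules pull toward the same region.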
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.
1996-01-01
This paper first gives a heuristic description of the sensitivity of Interferometric Synthetic Aperture Radar to vertical vegetation distributions and underlying surface topography. A parameter estimation scenario is then described in which the Interferometric Synthetic Aperture Radar cross-correlation amplitude and phase are the observations from which vegetation and surface topographic parameters are estimated. It is shown that, even in the homogeneous-layer model of the vegetation, the number of parameters needed to describe the vegetation and underlying topography exceeds the number of Interferometric Synthetic Aperture Radar observations for single-baseline, single-frequency, single-incidence-angle, single-polarization Interferometric Synthetic Aperture Radar. Using ancillary ground-truth data to compensate for the underdetermination of the parameters, forest depths are estimated from the INSAR data. A recently-analyzed multibaseline data set is also discussed and the potential for stand-alone Interferometric Synthetic Aperture Radar parameter estimation is assessed. The potential of combining the information content of Interferometric Synthetic Aperture Radar with that of infrared/optical remote sensing data is briefly discussed.
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.
1996-01-01
Drawing from recently submitted work, this paper first gives a heuristic description of the sensitivity of interferometric synthetic aperture radar (INSAR) to vertical vegetation distribution and underlying surface topography. A parameter estimation scenario is then described in which the INSAR cross-correlation amplitude and phase are the observations from which vegetation and surface topographic parameters are estimated. It is shown that, even in the homogeneous-layer model of the vegetation, the number of parameters needed to describe the vegetation and underlying topography exceeds the number of INSAR observations for single-baseline, single-frequency, single-incidence-angle, single-polarization INSAR. Using ancillary ground-truth data to compensate for the underdetermination of the parameters, forest depths are estimated from the INSAR data. A recently analyzed multi-baseline data set is also discussed and the potential for stand-alone INSAR parameter estimation is assessed. The potential of combining the information content of INSAR with that of infrared/optical remote sensing data is briefly discussed.
The four fixed points of scale invariant single field cosmological models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, BingKan, E-mail: bxue@princeton.edu
2012-10-01
We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having a simultaneously time-varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.
NASA Astrophysics Data System (ADS)
Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.
1992-04-01
Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.
Quantum sensing of the phase-space-displacement parameters using a single trapped ion
NASA Astrophysics Data System (ADS)
Ivanov, Peter A.; Vitanov, Nikolay V.
2018-03-01
We introduce a quantum sensing protocol for detecting the parameters characterizing the phase-space displacement by using a single trapped ion as a quantum probe. We show that, thanks to the laser-induced coupling between the ion's internal states and the motion mode, the estimation of the two conjugated parameters describing the displacement can be efficiently performed by a set of measurements of the atomic state populations. Furthermore, we introduce a three-parameter protocol capable of detecting the magnitude, the transverse direction, and the phase of the displacement. We characterize the uncertainty of the two- and three-parameter problems in terms of the Fisher information and show that state projective measurement saturates the fundamental quantum Cramér-Rao bound.
NASA Astrophysics Data System (ADS)
Aioanei, Daniel; Samorì, Bruno; Brucale, Marco
2009-12-01
Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
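The piecewise linear approximation of the force vs. time data can be sketched as one ordinary least-squares line per inter-rupture segment; the segmentation convention below (each segment ends at a rupture time) is an illustrative assumption:

```python
def linear_fit(ts, fs):
    """Ordinary least-squares slope and intercept for one segment."""
    n = len(ts)
    mt = sum(ts) / n
    mf = sum(fs) / n
    sxx = sum((t - mt) ** 2 for t in ts)
    slope = sum((t - mt) * (f - mf) for t, f in zip(ts, fs)) / sxx
    return slope, mf - slope * mt

def piecewise_force_models(times, forces, rupture_times):
    """One linear force-vs-time model per inter-rupture segment, forming
    the heterogeneous set of force increase functions described above:
    each rupture event gets its own local force characteristic."""
    models, start = [], 0
    for rt in rupture_times:
        seg = [i for i in range(start, len(times)) if times[i] <= rt]
        ts = [times[i] for i in seg]
        fs = [forces[i] for i in seg]
        models.append(linear_fit(ts, fs))
        start = seg[-1] + 1
    return models
```

Each (slope, intercept) pair then feeds the per-event likelihood term, so no single global force model needs to describe the whole trace.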
Seasonal soybean crop reflectance
NASA Technical Reports Server (NTRS)
Lemaster, E. W. (Principal Investigator); Chance, J. E.
1983-01-01
Data are presented from field measurements of 1980, including 5 acquisitions of handheld radiometer reflectance measurements, 7 complete sets of parameters for implementing the Suits model, and other biophysical parameters to characterize the soybean canopy. LANDSAT calculations on the simulated Brazilian soybean reflectance are included, along with data collected during the summer and fall of 1981 on soybean single-leaf optical parameters for three irrigation treatments. Tests of the Suits vegetative canopy reflectance model for the full hemisphere of observer directions, as well as the nadir direction, show moderate agreement for the visible channels of the MSS and poor agreement in the near-infrared channel. Temporal changes in the spectral characteristics of the single leaves were seen to occur as a function of maturity, which demonstrates that the absorptance of a soybean single leaf is more a function of the transmittance characteristics than of the seasonally consistent single-leaf reflectance.
NASA Astrophysics Data System (ADS)
Lika, Konstadia; Kearney, Michael R.; Kooijman, Sebastiaan A. L. M.
2011-11-01
The covariation method for estimating the parameters of the standard Dynamic Energy Budget (DEB) model provides a single-step method of accessing all the core DEB parameters from commonly available empirical data. In this study, we assess the robustness of this parameter estimation procedure and analyse the role of pseudo-data using elasticity coefficients. In particular, we compare the performance of Maximum Likelihood (ML) vs. Weighted Least Squares (WLS) approaches and find that the two approaches tend to converge in performance as the number of uni-variate data sets increases, but that WLS is more robust when data sets comprise single points (zero-variate data). The efficiency of the approach is shown to be high, and the prior parameter estimates (pseudo-data) have very little influence if the real data contain information about the parameter values. For instance, the effects of the pseudo-value for the allocation fraction κ is reduced when there is information for both growth and reproduction, that for the energy conductance is reduced when information on age at birth and puberty is given, and the effects of the pseudo-value for the maturity maintenance rate coefficient are insignificant. The estimation of some parameters (e.g., the zoom factor and the shape coefficient) requires little information, while that of others (e.g., maturity maintenance rate, puberty threshold and reproduction efficiency) require data at several food levels. The generality of the standard DEB model, in combination with the estimation of all of its parameters, allows comparison of species on the basis of parameter values. We discuss a number of preliminary patterns emerging from the present collection of parameter estimates across a wide variety of taxa. We make the observation that the estimated value of the fraction κ of mobilised reserve that is allocated to soma is far away from the value that maximises reproduction. 
We recognise this as the reason why two very different parameter sets must exist that fit most data sets reasonably well, and give arguments why, in most cases, the set with the large value of κ should be preferred. The continued development of a parameter database through the estimation procedures described here will provide a strong basis for understanding evolutionary patterns in metabolic organisation across the diversity of life.
Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J
2014-01-01
Fitting parameter sets of non-linear equations in cardiac single cell ionic models to reproduce experimental behavior is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match for a single cell, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fitted parameters was also greatly reduced, with many parameters showing an order-of-magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting and better preserves tissue-level behaviour, and should be incorporated.
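The multi-objective fitting idea above can be sketched numerically. The following is a toy illustration, not the authors' code: a hypothetical two-term "action potential" model is fitted with a simple elitist evolutionary loop, once with an added membrane-resistance-like term probed at a few points and once without. The model forms, parameter values, and mutation settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)

def ap(p):                      # toy "action potential": sum of two decays
    return p[0] * np.exp(-t) + p[1] * np.exp(-3.0 * t)

def rm(p):                      # toy "membrane resistance" probed at 3 points
    return 1.0 / (p[0] + p[1] * np.array([0.5, 1.0, 2.0]))

target = np.array([2.0, 1.0])   # hypothetical "true" conductance scalings

def cost(p, use_rm):
    c = np.mean((ap(p) - ap(target)) ** 2)
    if use_rm:                  # extra objective constrains the parameters
        c += np.mean((rm(p) - rm(target)) ** 2)
    return c

def evolve(use_rm, gens=60, pop=30):
    P = rng.uniform(0.1, 4.0, size=(pop, 2))
    for _ in range(gens):
        kids = P + rng.normal(0.0, 0.1, size=P.shape)   # mutation
        both = np.vstack([P, kids])
        f = np.array([cost(p, use_rm) for p in both])
        P = both[np.argsort(f)[:pop]]                   # truncation selection
    # report AP-only error so both runs are compared on the same footing
    return P[0], cost(P[0], use_rm=False)

best_rm, err_rm = evolve(use_rm=True)
best_ap, err_ap = evolve(use_rm=False)
```

In this smooth two-parameter toy both runs converge; the paper's point is that on realistic, degenerate ionic models the extra Rm objective narrows the set of acceptable parameters much more sharply.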
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: i) free Brownian motion of the tracer; ii) hop diffusion of the tracer in a periodic meshwork of squares; and iii) transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
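A minimal sketch of the simulate-and-test idea, in our own construction rather than the authors' code: "experimental" single-step displacements are compared with Monte Carlo simulations on a grid of diffusion coefficients using a nonparametric two-sample Kolmogorov-Smirnov test, and the p-value peaks near the generating value. The grid, step count, and time step are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dt = 0.01
D_true = 1.0
# "experimental" 1-D step displacements for free Brownian motion
exp_steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=2000)

D_grid = [0.25, 0.5, 1.0, 2.0, 4.0]
pvals = []
for D in D_grid:
    sim_steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=2000)
    # KS test: how plausible is the data under this candidate parameter?
    pvals.append(stats.ks_2samp(exp_steps, sim_steps).pvalue)

best_D = D_grid[int(np.argmax(pvals))]
```

Scanning the p-values over the parameter grid (here one-dimensional, a matrix in the multi-parameter case) identifies the most consistent setting without any closed-form likelihood.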
Continued development of a detailed model of arc discharge dynamics
NASA Technical Reports Server (NTRS)
Beers, B. L.; Pine, V. W.; Ives, S. T.
1982-01-01
Using a previously developed set of codes (SEMC, CASCAD, ACORN), a parametric study was performed to quantify the parameters which describe the development of a single-electron-initiated avalanche into a negative tip streamer. The electron distribution function in Teflon is presented for values of the electric field in the range of four hundred million volts/meter to four billion volts/meter. A formulation of the scattering parameters is developed which shows that the transport can be represented by three independent variables. The distribution of ionization sites is used to initiate an avalanche. The self-consistent evolution of the avalanche is computed over the parameter range of the scattering set.
Digital Microwave System Design Guide.
1984-02-01
traffic analysis is a continuous effort, setting parameters for subsequent stages of expansion after the system design is finished. 2.1.3 Quality of...operational structure of the user for whom he is providing service. 2.2.3 Quality of Service. In digital communications, the basic performance parameter...the basic interpretation of system performance is measured in terms of a single parameter, throughput. Throughput can be defined as the number of
NASA Astrophysics Data System (ADS)
Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward
2016-12-01
SLM technology allows production of fully functional objects from metal and ceramic powders, with true density of more than 99.9%. The quality of items manufactured by the SLM method is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values should be defined before the process and maintained in an appropriate range during the process, e.g. chemical composition and morphology of the powder, oxygen level in the working chamber, and heating temperature of the substrate plate. In SLM technology, five parameters are variables whose optimal set allows parts to be produced without defects (pores, cracks) and at an acceptable speed. These parameters are: laser power, distance between points, time of exposure, distance between lines and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed, in which the selection of the best sets is narrowed to three parameters: laser power, exposure time and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In this experiment, a titanium (grade 2) substrate plate was used and scanned by a fibre laser of 1064 nm wavelength. For each track, the width, height and penetration depth of the laser beam were measured.
A Generalized QMRA Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2016-10-01
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required to cause infection, Kmin, is not fixed, but a random variable following a geometric distribution with parameter 0
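For context, the classical single-hit machinery that the generalized model extends can be written down directly. The sketch below implements the widely used two-parameter approximate beta-Poisson formula and the simpler exponential single-hit model; the paper's three-parameter form with the geometric Kmin parameter r* is not reproduced here.

```python
import math

def beta_poisson_approx(d, alpha, beta):
    """Approximate beta-Poisson: P(d) = 1 - (1 + d/beta)**(-alpha)."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

def exponential_model(d, r):
    """Simplest single-hit model: each organism infects with probability r."""
    return 1.0 - math.exp(-r * d)

# Example: infection probability rises monotonically with mean dose d
p_low = beta_poisson_approx(1.0, alpha=0.2, beta=10.0)
p_high = beta_poisson_approx(10.0, alpha=0.2, beta=10.0)
```

Both forms satisfy the single-hit boundary conditions P(0) = 0 and P(d) → 1 as d grows, which is what makes them convenient baselines for the generalized model.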
Protein Logic: A Statistical Mechanical Study of Signal Integration at the Single-Molecule Level
de Ronde, Wiet; Rein ten Wolde, Pieter; Mugler, Andrew
2012-01-01
Information processing and decision-making are based upon logic operations, which in cellular networks have been well characterized at the level of transcription. In recent years, however, both experimentalists and theorists have begun to appreciate that cellular decision-making can also be performed at the level of a single protein, giving rise to the notion of protein logic. Here we systematically explore protein logic using a well-known statistical mechanical model. As an example system, we focus on receptors that bind either one or two ligands, and their associated dimers. Notably, we find that a single heterodimer can realize any of the 16 possible logic gates, including the XOR gate, by variation of biochemical parameters. We then introduce what to our knowledge is a novel idea: that a set of receptors with fixed parameters can encode functionally unique logic gates simply by forming different dimeric combinations. An exhaustive search reveals that the simplest set of receptors (two single-ligand receptors and one double-ligand receptor) can realize several different groups of three unique gates, a result for which the parametric analysis of single receptors and dimers provides a clear interpretation. Both results underscore the surprising functional freedom readily available to cells at the single-protein level. PMID:23009860
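The heterodimer-as-logic-gate claim can be illustrated with a toy equilibrium model of our own construction (made-up dissociation constants, not the paper's parameterization): the dimer's four occupancy states carry Boltzmann-like statistical weights, and with only the singly bound states declared "active", the thresholded activity realizes XOR on the two ligand inputs.

```python
import itertools

K1, K2 = 1.0, 1.0            # hypothetical dissociation constants
ACTIVE = {(1, 0), (0, 1)}    # parameter choice: singly bound states active

def activity(c1, c2):
    # equilibrium weight of each occupancy state (n1, n2) at concentrations c1, c2
    weights = {}
    for n1, n2 in itertools.product((0, 1), repeat=2):
        weights[(n1, n2)] = (c1 / K1) ** n1 * (c2 / K2) ** n2
    Z = sum(weights.values())
    return sum(w for s, w in weights.items() if s in ACTIVE) / Z

def gate(in1, in2, high=100.0, low=0.0):
    # binary inputs map to low/high ligand concentrations; threshold the output
    return activity(high if in1 else low, high if in2 else low) > 0.5

truth_table = [gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

With both ligands present the doubly bound (inactive) state dominates, so the output falls again, which is exactly the XOR behaviour the abstract highlights as notable.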
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral
sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral
parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
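The hierarchical constraining described above can be sketched schematically. The code below is a toy with invented numbers (it does not run DHSVM): many sampled parameter sets are successively filtered by a regional, an observation-driven, and an expert-knowledge constraint, and each stage can only shrink the behavioral set.

```python
import numpy as np

rng = np.random.default_rng(42)
params = rng.uniform(0.0, 1.0, size=(10_000, 3))   # e.g. (K_sat, porosity, LAI)

# Stage 1: regional signature, e.g. runoff ratio within a plausible range
runoff_ratio = 0.2 + 0.6 * params[:, 0]            # stand-in relationship
stage1 = (runoff_ratio > 0.3) & (runoff_ratio < 0.6)

# Stage 2: observation-driven, e.g. hydrograph skill above a threshold
skill = 1.0 - np.abs(params[:, 1] - 0.5)           # stand-in for NSE
behavioral = stage1 & (skill > 0.8)

# Stage 3: expert knowledge, e.g. groundwater-table pattern expectation
behavioral &= params[:, 2] < 0.7

counts = [len(params), int(stage1.sum()), int(behavioral.sum())]
```

The monotone shrinkage of `counts` mirrors the study's finding: each constraint class removes part of the equifinal set, with the expert-knowledge pattern check providing the final reduction.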
1976-08-01
can easily change any of the parameters controlling the...experimenter. B.2.3.3 The PLATO Laboratory. A block diagram of the laboratory is...the parameters of an adaptive filter, or to perform the computations required by the more complex displays. In addition to its role as the prime...by the inherent response variability which precludes reliable estimates of attention-sensitive parameters from a single observation. Thus
NASA Astrophysics Data System (ADS)
Dumon, M.; Van Ranst, E.
2016-01-01
This paper presents a free and open-source program called PyXRD (short for Python X-ray diffraction) to improve the quantification of complex, poly-phasic mixed-layer phyllosilicate assemblages. The validity of the program was checked by comparing its output with Sybilla v2.2.2, which shares the same mathematical formalism. The novelty of this program is the ab initio incorporation of the multi-specimen method, making it possible to share phases and (a selection of) their parameters across multiple specimens. PyXRD thus allows for modelling multiple specimens side by side, and this approach speeds up the manual refinement process significantly. To check the hypothesis that this multi-specimen set-up - as it effectively reduces the number of parameters and increases the number of observations - can also improve automatic parameter refinements, we calculated X-ray diffraction patterns for four theoretical mineral assemblages. These patterns were then used as input for one refinement employing the multi-specimen set-up and one employing the single-pattern set-ups. For all of the assemblages, PyXRD was able to reproduce or approximate the input parameters with the multi-specimen approach. Diverging solutions only occurred in single-pattern set-ups, which do not contain enough information to discern all minerals present (e.g. patterns of heated samples). Assuming a correct qualitative interpretation was made and a single pattern exists in which all phases are sufficiently discernible, the obtained results indicate a good quantification can often be obtained with just that pattern. However, these results from theoretical experiments cannot automatically be extrapolated to all real-life experiments. In any case, PyXRD has proven to be useful when X-ray diffraction patterns are modelled for complex mineral assemblages containing mixed-layer phyllosilicates with a multi-specimen approach.
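The multi-specimen principle (shared parameters, concatenated residuals) can be demonstrated in miniature. This is a toy illustration, not PyXRD itself: two synthetic "patterns" share one physical parameter w, and a joint least-squares refinement over both specimens recovers it. The exponential model and all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 80)
w_true, a1_true, a2_true = 2.0, 1.0, 0.5

# two "specimens" of the same phase: shared w, specimen-specific amplitudes
y1 = a1_true * np.exp(-x / w_true) + rng.normal(0, 0.01, x.size)
y2 = a2_true * np.exp(-x / w_true) + rng.normal(0, 0.01, x.size)

def residuals(p):
    w, a1, a2 = p                     # w is shared across both specimens
    return np.concatenate([
        a1 * np.exp(-x / w) - y1,     # residuals of specimen 1
        a2 * np.exp(-x / w) - y2,     # residuals of specimen 2
    ])

fit = least_squares(residuals, x0=[1.0, 0.5, 0.5])
w_fit = fit.x[0]
```

Sharing w reduces the free parameter count (three instead of four) while doubling the observations, which is the same trade that makes the multi-specimen set-up better conditioned for automatic refinement.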
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some kind of parameter optimization is mostly performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user can select the two parameters to be visualised. Furthermore, an objective function and a time period of interest need to be selected. Based on this information, a two-dimensional parameter response surface is created: a scatter plot of the parameter combinations with a colour scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the colour mapping of the points.
The slider provides a threshold to exclude non-behavioural parameter sets, and the colour scale is only attributed to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
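The core computation behind the thresholded scatter plot can be sketched as a plain function (our reading of the tool, with names of our choosing): given sampled parameter sets and one objective value per set, a threshold splits behavioural from non-behavioural sets, and only the behavioural ones receive a colour value.

```python
import numpy as np

def response_surface(param_sets, objective, threshold):
    """Return a behavioural mask and per-point colour values in [0, 1]."""
    objective = np.asarray(objective, dtype=float)
    behavioural = objective <= threshold          # lower objective = better fit
    colours = np.full(objective.shape, np.nan)    # NaN = greyed out / not drawn
    if behavioural.any():
        sel = objective[behavioural]
        span = sel.max() - sel.min() or 1.0       # avoid division by zero
        colours[behavioural] = (sel - sel.min()) / span
    return behavioural, colours

# toy example: 500 random 2-D parameter sets, quadratic objective
rng = np.random.default_rng(7)
params = rng.uniform(size=(500, 2))
obj = (params[:, 0] - 0.5) ** 2 + (params[:, 1] - 0.5) ** 2
mask, cols = response_surface(params, obj, threshold=0.05)
```

Moving the slider corresponds to calling the function with a different `threshold` and redrawing the scatter plot with the returned colours.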
Using string invariants for prediction searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.
Automatic tissue characterization from ultrasound imagery
NASA Astrophysics Data System (ADS)
Kadah, Yasser M.; Farag, Aly A.; Youssef, Abou-Bakr M.; Badawi, Ahmed M.
1993-08-01
In this work, feature extraction algorithms are proposed to extract the tissue characterization parameters from liver images. Then the resulting parameter set is further processed to obtain the minimum number of parameters representing the most discriminating pattern space for classification. This preprocessing step was applied to over 120 pathology-investigated cases to obtain the learning data for designing the classifier. The extracted features are divided into independent training and test sets and are used to construct both statistical and neural classifiers. The optimal criteria for these classifiers are set to have minimum error, ease of implementation and learning, and the flexibility for future modifications. Various algorithms for implementing various classification techniques are presented and tested on the data. The best performance was obtained using a single layer tensor model functional link network. Also, the voting k-nearest neighbor classifier provided comparably good diagnostic rates.
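The voting k-nearest-neighbour classifier mentioned above is generic enough to sketch directly; the implementation below is ours (the paper's liver-texture features are not reproduced), using toy feature clusters standing in for the extracted tissue parameters.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=5):
    d = np.linalg.norm(train_X - query, axis=1)     # Euclidean distances
    votes = train_y[np.argsort(d)[:k]]              # labels of the k nearest
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]                # majority vote

rng = np.random.default_rng(5)
class_a = rng.normal(0.0, 0.3, size=(50, 4))        # toy "normal" features
class_b = rng.normal(2.0, 0.3, size=(50, 4))        # toy "pathology" features
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

pred = knn_predict(X, y, np.full(4, 2.0))           # query near class_b cluster
```

In practice the features would first be reduced to the most discriminating subset, as the abstract describes, before distances are computed.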
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Entangling measurements for multiparameter estimation with two qubits
NASA Astrophysics Data System (ADS)
Roccia, Emanuele; Gianani, Ilaria; Mancino, Luca; Sbroscia, Marco; Somma, Fabrizia; Genoni, Marco G.; Barbieri, Marco
2018-01-01
Carefully tailoring the quantum state of probes offers the capability of investigating matter at unprecedented precision. Rarely, however, is the interaction with the sample fully encompassed by a single parameter, and the information contained in the probe needs to be partitioned over multiple parameters. There exist, then, practical bounds on the ultimate joint-estimation precision set by the unavailability of a single optimal measurement for all parameters. Here, we discuss how these considerations are modified for two-level quantum probes (qubits) by the use of two copies and entangling measurements. We find that the joint estimation of phase and phase diffusion benefits from such a collective measurement, while for multiple phases no enhancement can be observed. We demonstrate this in a proof-of-principle photonics setup.
Complete set of essential parameters of an effective theory
NASA Astrophysics Data System (ADS)
Ioffe, M. V.; Vereshagin, V. V.
2018-04-01
The present paper continues the series [V. V. Vereshagin, True self-energy function and reducibility in effective scalar theories, Phys. Rev. D 89, 125022 (2014), 10.1103/PhysRevD.89.125022; A. Vereshagin and V. Vereshagin, Resultant parameters of effective theory, Phys. Rev. D 69, 025002 (2004), 10.1103/PhysRevD.69.025002; K. Semenov-Tian-Shansky, A. Vereshagin, and V. Vereshagin, S-matrix renormalization in effective theories, Phys. Rev. D 73, 025020 (2006), 10.1103/PhysRevD.73.025020] devoted to the systematic study of effective scattering theories. We consider matrix elements of the effective Lagrangian monomials (in the interaction picture) of arbitrary high dimension D and show that the full set of corresponding coupling constants contains parameters of both kinds: essential and redundant. Since it would be pointless to formulate renormalization prescriptions for redundant parameters, it is necessary to select the full set of the essential ones. This is done in the present paper for the case of the single scalar field.
Ensemble of hybrid genetic algorithm for two-dimensional phase unwrapping
NASA Astrophysics Data System (ADS)
Balakrishnan, D.; Quan, C.; Tay, C. J.
2013-06-01
Phase unwrapping is the final and trickiest step in any phase retrieval technique. Phase unwrapping by artificial intelligence methods (optimization algorithms) such as the hybrid genetic algorithm, reverse simulated annealing, particle swarm optimization and minimum cost matching has shown better results than conventional phase unwrapping methods. In this paper, an ensemble of hybrid genetic algorithms with parallel populations is proposed to solve the branch-cut phase unwrapping problem. In a single-population hybrid genetic algorithm, the selection, cross-over and mutation operators are applied to obtain a new population in every generation. The parameters and the choice of operators affect the performance of the hybrid genetic algorithm. The ensemble of hybrid genetic algorithms makes it possible to use different parameter sets and different choices of operators simultaneously. Each population uses its own set of parameters, and the offspring of each population compete against the offspring of all other populations, which use different sets of parameters. The effectiveness of the proposed algorithm is demonstrated by phase unwrapping examples, and the advantages of the proposed method are discussed.
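The ensemble idea (parallel populations with different settings, offspring competing in a common pool) can be sketched on a smooth surrogate cost rather than an actual branch-cut placement problem. Everything below is a toy of our own: the cost function, population sizes, and mutation scales are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def cost(x):                       # stand-in for a branch-cut placement cost
    return np.sum(x ** 2, axis=-1)

n_pop, size, dim = 3, 20, 5
sigmas = [0.5, 0.1, 0.02]          # each population has its own mutation scale
pops = [rng.uniform(-3, 3, size=(size, dim)) for _ in range(n_pop)]

initial_best = min(cost(p).min() for p in pops)
for _ in range(80):
    # each population produces offspring with its own parameters
    offspring = [p + rng.normal(0, s, p.shape) for p, s in zip(pops, sigmas)]
    pool = np.vstack(pops + offspring)             # global competition
    ranked = pool[np.argsort(cost(pool))]
    pops = [ranked[i * size:(i + 1) * size] for i in range(n_pop)]

final_best = cost(np.vstack(pops)).min()
```

Because parents remain in the pool, the best solution never degrades, and the mix of coarse and fine mutation scales lets the ensemble both explore and refine without hand-tuning a single setting.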
Advanced Microwave Ferrite Research (AMFeR): Phase Two
2006-12-31
motion for the single crystal LPE films were a qualitative success, but a complete set of parameters for these films has not yet been achieved. Key...biasing field. In order to address these issues, we investigated and optimized a new LPE flux system to grow high quality thick films and bulk single...self-biased circulators. III. Methodology: BaM thick film and bulk single crystal growth by LPE process. BaFe12O19 flux melt was prepared from a
Protein logic: a statistical mechanical study of signal integration at the single-molecule level.
de Ronde, Wiet; Rein ten Wolde, Pieter; Mugler, Andrew
2012-09-05
Information processing and decision-making are based upon logic operations, which in cellular networks have been well characterized at the level of transcription. In recent years, however, both experimentalists and theorists have begun to appreciate that cellular decision-making can also be performed at the level of a single protein, giving rise to the notion of protein logic. Here we systematically explore protein logic using a well-known statistical mechanical model. As an example system, we focus on receptors that bind either one or two ligands, and their associated dimers. Notably, we find that a single heterodimer can realize any of the 16 possible logic gates, including the XOR gate, by variation of biochemical parameters. We then introduce what to our knowledge is a novel idea: that a set of receptors with fixed parameters can encode functionally unique logic gates simply by forming different dimeric combinations. An exhaustive search reveals that the simplest set of receptors (two single-ligand receptors and one double-ligand receptor) can realize several different groups of three unique gates, a result for which the parametric analysis of single receptors and dimers provides a clear interpretation. Both results underscore the surprising functional freedom readily available to cells at the single-protein level. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surdoval, Wayne A.; Berry, David A.; Shultz, Travis R.
A set of equations is presented for calculating atomic principal spectral lines and fine-structure energy splits for single- and multi-electron atoms. Calculated results are presented and compared to the National Institute of Standards and Technology database, demonstrating very good accuracy. The equations do not require fitted parameters. The only experimental parameter required is the ionization energy for the electron of interest. The equations have comparable accuracy and broader applicability than the single-electron Dirac equation. Three appendices discuss the origin of the new equations and present calculated results. New insights into the special relativistic nature of the Dirac equation and its relationship to the new equations are presented.
A canonical correlation neural network for multicollinearity and functional data.
Gou, Zhenkun; Fyfe, Colin
2004-03-01
We review a recent neural implementation of Canonical Correlation Analysis and show, using ideas suggested by Ridge Regression, how to make the algorithm robust. The network is shown to operate on data sets which exhibit multicollinearity. We develop a second model which performs well not only on multicollinear data but also on general data sets. This model allows us to vary a single parameter so that the network can perform anything from Partial Least Squares regression (at one extreme) to Canonical Correlation Analysis (at the other), including every intermediate operation between the two. On multicollinear data, the parameter setting is shown to be important, but on more general data no particular parameter setting is required. Finally, we develop a second penalty term which acts on such data as a smoother, in that the resulting weight vectors are much smoother and more interpretable than the weights without the robustification term. We illustrate our algorithms on both artificial and real data.
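The single-parameter family interpolating between PLS and CCA can be illustrated with a direct (non-neural) numerical formulation of our own, via a ridge-blended whitening metric: c = 0 recovers CCA-like weights, c = 1 recovers PLS-like weights. The function name and the blending form are our assumptions, not the authors' network.

```python
import numpy as np

def cca_pls(X, Y, c):
    """First pair of weight vectors; c blends CCA (c=0) and PLS (c=1)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx, Cyy = X.T @ X / n, Y.T @ Y / n
    Cxy = X.T @ Y / n
    Rx = (1 - c) * Cxx + c * np.eye(X.shape[1])   # ridge-style blending
    Ry = (1 - c) * Cyy + c * np.eye(Y.shape[1])
    # whiten with the blended metrics, then take the leading singular pair
    Lx, Ly = np.linalg.cholesky(Rx), np.linalg.cholesky(Ry)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, _, Vt = np.linalg.svd(M)
    wx = np.linalg.solve(Lx.T, U[:, 0])
    wy = np.linalg.solve(Ly.T, Vt[0])
    return wx, wy

# toy data sharing a latent signal z in the first coordinate of each block
rng = np.random.default_rng(2)
z = rng.normal(size=500)
X = np.column_stack([z + rng.normal(0, 0.5, 500), rng.normal(size=500)])
Y = np.column_stack([z + rng.normal(0, 0.5, 500), rng.normal(size=500)])

wx, wy = cca_pls(X, Y, c=0.0)
rho = abs(np.corrcoef(X @ wx, Y @ wy)[0, 1])      # canonical correlation
```

At c = 0 the leading projections recover the shared latent signal; increasing c trades correlation maximisation for covariance maximisation, which is what stabilises the solution on multicollinear data.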
Generalized compliant motion primitive
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor)
1994-01-01
This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, which together produce telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. Virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include use of a single general motion primitive at a remote site to permit the shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
NASA Astrophysics Data System (ADS)
Subbulakshmi, N.; Kumar, M. Saravana; Sheela, K. Juliet; Krishnan, S. Radha; Shanmugam, V. M.; Subramanian, P.
2017-12-01
Electron Paramagnetic Resonance (EPR) spectroscopic studies of VO2+ ions as a paramagnetic impurity in Lithium Sodium Acid Phthalate (LiNaP) single crystals have been carried out at room temperature at X-band microwave frequency. The lattice parameter values for the chosen system are obtained from a single-crystal X-ray diffraction study. Of the numerous hyperfine lines in the EPR spectra, only two sets are reported from the EPR data. The principal values of the g and A tensors are evaluated for the two different VO2+ sites I and II. The crystalline field around the VO2+ ions is orthorhombic. The site II VO2+ ion is identified as substitutional at the Na1 location, and site I is identified as interstitial. For both sites in LiNaP, VO2+ is found in octahedral coordination with tetragonal distortion, as seen from the spin Hamiltonian parameter values. The ground state of the vanadyl ion in the LiNaP single crystal is dxy. Using optical absorption data, the octahedral and tetragonal parameters are calculated. By correlating EPR and optical data, the molecular orbital bonding parameters are discussed for both sites.
Multiple angles on the sterile neutrino - a combined view of cosmological and oscillation limits
NASA Astrophysics Data System (ADS)
Guzowski, Pawel
2017-09-01
The possible existence of sterile neutrinos is an important unresolved question for both particle physics and cosmology. Data sensitive to a sterile neutrino come from both particle physics experiments and from astrophysical measurements of the Cosmic Microwave Background. In this study, we address the question of whether these two contrasting data sets provide complementary information about sterile neutrinos. We focus on the muon disappearance oscillation channel, taking data from the MINOS, IceCube and Planck experiments and converting the limits into particle physics and cosmological parameter spaces to illustrate the different regions of parameter space where the data sets have the best sensitivity. For the first time, we combine the data sets into a single analysis to illustrate how the limits on the parameters of the sterile-neutrino model are strengthened. We investigate how data from a future accelerator neutrino experiment (SBN) will be able to further constrain this picture.
Cost-effectiveness of single versus double embryo transfer in IVF in relation to female age.
van Loendersloot, Laura L; Moolenaar, Lobke M; van Wely, Madelon; Repping, Sjoerd; Bossuyt, Patrick M; Hompes, Peter G A; van der Veen, Fulco; Mol, Ben Willem J
2017-07-01
To evaluate the cost-effectiveness of single embryo transfer followed by an additional frozen-thawed single embryo transfer, if more embryos are available, as compared to double embryo transfer, in relation to female age. We used a decision tree model to evaluate the costs, from a healthcare provider perspective, and the pregnancy rates of two embryo transfer policies: one fresh single embryo transfer followed by an additional frozen-thawed single embryo transfer if more embryos are available (strategy I), and double embryo transfer (strategy II). The analysis was performed on an intention-to-treat basis. Sensitivity analyses were carried out to evaluate the robustness of our model and to identify which model parameters had the strongest impact on the results. SET followed by an additional frozen-thawed single embryo transfer, if available, was dominant (less costly and more effective) over DET in women under 32 years. In women aged 32 or older, DET was more effective than SET followed by an additional frozen-thawed single embryo transfer if available, but also more costly. SET followed by an additional frozen-thawed single embryo transfer should be the preferred strategy in women under 32 undergoing IVF. The choice for SET followed by an additional frozen-thawed single embryo transfer or DET in women aged 32 or older depends on individual patient preferences and on how much society is willing to pay for an extra child. There is a strong need for a randomized clinical trial comparing the costs and effects of SET followed by an additional frozen-thawed single embryo transfer and DET in the latter category of women. Copyright © 2017 Elsevier B.V. All rights reserved.
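The decision-tree comparison reduces to expected-value arithmetic, which can be sketched with placeholder numbers. All probabilities and costs below are hypothetical illustrations of the structure, not the paper's estimates.

```python
def strategy_set(p_fresh, p_frozen, p_surplus, cost_fresh, cost_frozen):
    """Fresh SET; on failure, one frozen SET if a surplus embryo exists."""
    p_second = (1 - p_fresh) * p_surplus      # chance the frozen transfer happens
    preg = p_fresh + p_second * p_frozen      # expected pregnancy rate
    cost = cost_fresh + p_second * cost_frozen  # expected cost per started cycle
    return preg, cost

# Hypothetical inputs (arbitrary currency units, illustrative only)
preg_set, cost_set = strategy_set(0.30, 0.20, 0.70, 3000.0, 300.0)
preg_det, cost_det = 0.35, 3200.0             # assumed DET rate and cost

# With these made-up numbers, sequential SET dominates DET
set_dominant = preg_set > preg_det and cost_set < cost_det
```

Dominance (cheaper and more effective) corresponds to the paper's finding for women under 32; at older ages the numbers shift so that DET gains effectiveness at extra cost, and the comparison becomes a willingness-to-pay question.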
Standard Reference Material (SRM 1990) for Single Crystal Diffractometer Alignment
Wong-Ng, W.; Siegrist, T.; DeTitta, G.T.; Finger, L.W.; Evans, H.T.; Gabe, E.J.; Enright, G.D.; Armstrong, J.T.; Levenson, M.; Cook, L.P.; Hubbard, C.R.
2001-01-01
An international project was successfully completed which involved two major undertakings: (1) a round-robin to demonstrate the viability of the selected standard and (2) the certification of the lattice parameters of SRM 1990, a Standard Reference Material for single crystal diffractometer alignment. This SRM is a set of approximately 3500 units of Cr-doped Al2O3, or ruby spheres (0.42 ± 0.011 mole fraction % Cr, expanded uncertainty). The round-robin consisted of determination of lattice parameters of a pair of crystals: the ruby sphere as a standard, and a zeolite reference to serve as an unknown. Fifty pairs of crystals were dispatched from the Hauptman-Woodward Medical Research Institute to volunteers in x-ray laboratories world-wide. A total of 45 sets of data were received from 32 laboratories. The mean unit cell parameters of the ruby spheres were found to be a = 4.7608 ± 0.0062 Å and c = 12.9979 ± 0.020 Å (95 % intervals of the laboratory means). The sources of error in outlier data were identified. The SRM project involved the certification of lattice parameters using four well-aligned single crystal diffractometers at (Bell Laboratories) Lucent Technologies and at NRC of Canada (39 ruby spheres), the quantification of the Cr content using a combined microprobe and SEM/EDS technique, and the evaluation of the mosaicity of the ruby spheres using a double-crystal spectrometry method. A confirmation of the lattice parameters was also conducted using a Guinier-Hägg camera. Systematic corrections for thermal expansion and refraction were applied. These rubies are rhombohedral, with space group R-3c. The certified mean unit cell parameters are a = 4.76080 ± 0.00029 Å and c = 12.99568 ± 0.00087 Å (expanded uncertainty). These certified lattice parameters fall well within the results obtained from the international round-robin study. The Guinier-Hägg transmission measurements on five samples of powdered rubies (a = 4.7610 ± 0.0013 Å and c = 12.9954 ± 0.0034 Å) agreed well with the values obtained from the single crystal spheres.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors decide on treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The whole system encompassed three parts: a preprocessing module, a finite element mechanical analysis module, and a post-processing module. The preprocessing module included parametric modeling of bone, parametric modeling of the fracture face, parametric modeling of the fixation screw and fixation position, and input and transmission of model parameters. The finite element mechanical analysis module included grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing operation. The post-processing module included extraction and display of batch processing results, image generation for batch processing, optimization program operation, and display of optimization results. The system implemented the whole sequence of operations, from input of fracture parameters to output of the optimal fixation plan according to a specific patient's real fracture parameters and the optimization rules, which demonstrated the effectiveness of the system. Meanwhile, the system has a friendly interface and simple operation, and its functionality can be improved quickly by modifying single modules.
Elastic, inelastic, and 1-nucleon transfer channels in the 7Li+120Sn system
NASA Astrophysics Data System (ADS)
Kundu, A.; Santra, S.; Pal, A.; Chattopadhyay, D.; Tripathi, R.; Roy, B. J.; Nag, T. N.; Nayak, B. K.; Saxena, A.; Kailas, S.
2017-03-01
Background: Simultaneous description of major outgoing channels for a nuclear reaction by coupled-channels calculations using the same set of potential and coupling parameters is one of the difficult tasks to accomplish in nuclear reaction studies. Purpose: To measure the elastic, inelastic, and transfer cross sections for as many channels as possible in the 7Li+120Sn system at different beam energies and simultaneously describe them by a single set of model calculations using FRESCO. Methods: Projectile-like fragments were detected using six sets of Si-detector telescopes to measure the cross sections for elastic, inelastic, and 1-nucleon transfer channels at two beam energies of 28 and 30 MeV. Optical model analysis of elastic data and coupled-reaction-channels (CRC) calculations that include around 30 reaction channels coupled directly to the entrance channel, with respective structural parameters, were performed to understand the measured cross sections. Results: Structure information available in the literature for some of the identified states did not reproduce the present data. Cross sections obtained from CRC calculations using a modified but single set of potential and coupling parameters were able to simultaneously describe the measured data for all the channels at both the measured energies, as well as the existing data for elastic and inelastic cross sections at 44 MeV. Conclusions: The failure to reproduce some of the cross sections using structure information available in the literature, which was extracted from reactions involving different projectiles, indicates that such measurements are probe dependent. New structural parameters were assigned for such states as well as for several new transfer states whose spectroscopic factors were not known.
NASA Technical Reports Server (NTRS)
Treuhaft, Robert N.; Law, Beverly E.; Siqueira, Paul R.
2000-01-01
Parameters describing the vertical structure of forests, for example tree height, height-to-base-of-live-crown, underlying topography, and leaf area density, bear on land-surface, biogeochemical, and climate modeling efforts. Single, fixed-baseline interferometric synthetic aperture radar (INSAR) normalized cross-correlations constitute two observations from which to estimate forest vertical structure parameters: cross-correlation amplitude and phase. Multialtitude INSAR observations increase the effective number of baselines, potentially enabling the estimation of a larger set of vertical-structure parameters. Polarimetry and polarimetric interferometry can further extend the observation set. This paper describes the first acquisition of multialtitude INSAR for the purpose of estimating the parameters describing a vegetated land surface. These data were collected over ponderosa pine in central Oregon near longitude −121° 37′ 25″ and latitude 44° 29′ 56″. The JPL interferometric TOPSAR system was flown at the standard 8-km altitude, and also at 4-km and 2-km altitudes, in a race track. A reference line including the above coordinates was maintained at 35 deg for both the northeast heading and the return southwest heading, at all altitudes. In addition to the three altitudes for interferometry, one line was flown with full zero-baseline polarimetry at the 8-km altitude. A preliminary analysis of part of the data collected suggests that they are consistent with one of two physical models describing the vegetation: 1) a single-layer, randomly oriented forest volume with a very strong ground return or 2) a multilayered, randomly oriented volume; a homogeneous, single-layer model with no ground return cannot account for the multialtitude correlation amplitudes. Below, the inconsistency of the data with a single-layer model is discussed, followed by analysis scenarios that include either the ground or a layered structure. The ground returns suggested by this preliminary analysis seem too strong to be plausible, but the parameters describing a two-layer model compare reasonably well to a field-measured probability distribution of tree heights in the area.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using Kalman filter and recursive least square based input force estimation technique. Kalman filter based input force estimation technique requires state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
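Because a single-plane unbalance force is sinusoidal at the rotation frequency, its amplitude and phase are linear in a cosine/sine basis, so a recursive least squares estimator can recover them from response-like measurements. The sketch below is a generic RLS fit on synthetic, noiseless data, not the paper's SEREP-reduced Kalman filter formulation:

```python
import numpy as np

def rls_unbalance(t, y, omega, lam=0.99):
    """Recursive least squares fit of y ~ a*cos(omega*t) + b*sin(omega*t);
    returns the amplitude and phase of the equivalent A*cos(omega*t - phi)."""
    theta = np.zeros(2)           # [a, b]
    P = np.eye(2) * 1e6           # large initial covariance
    for ti, yi in zip(t, y):
        h = np.array([np.cos(omega * ti), np.sin(omega * ti)])
        k = P @ h / (lam + h @ P @ h)          # gain
        theta = theta + k * (yi - h @ theta)   # update estimate with residual
        P = (P - np.outer(k, h @ P)) / lam     # covariance update with forgetting
    a, b = theta
    return np.hypot(a, b), np.arctan2(b, a)

# Synthetic once-per-rev response at 25 Hz: amplitude 2.0, phase 0.5 rad.
w = 2 * np.pi * 25.0
t = np.linspace(0.0, 0.4, 2000)
y = 2.0 * np.cos(w * t - 0.5)
amp, phase = rls_unbalance(t, y, w)
```

The forgetting factor `lam` plays the role described in the abstract: values below 1 discount old samples so the estimator can track slowly varying unbalance.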
NASA Astrophysics Data System (ADS)
Sreekala, P. S.; Honey, John; Aanandan, C. K.
2018-05-01
In this communication, the broadband artificial dielectric plasma behavior of camphor sulphonic acid doped polyaniline (PANI-CSA) films at microwave frequencies is experimentally verified. The fabricated PANI-CSA films have been characterized by rectangular waveguide measurements over a broad range of frequencies within the X band, and the effective material parameters, skin depth, and conductivity have been extracted from the scattering parameters. Since most artificial materials available today are built by combining two structured materials that independently demonstrate negative permittivity and negative permeability, this opens another strategy for the creation of compact single-negative materials for microwave applications. The proposed doping can shift the material parameters of the sample from double positive to single negative.
NASA Astrophysics Data System (ADS)
Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar
2017-12-01
Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.
Magnetic properties of single crystal alpha-benzoin oxime: An EPR study
NASA Astrophysics Data System (ADS)
Sayin, Ulku; Dereli, Ömer; Türkkan, Ercan; Ozmen, Ayhan
2012-02-01
The electron paramagnetic resonance (EPR) spectra of gamma-irradiated single crystals of alpha-benzoin oxime (ABO) have been examined between 120 and 440 K. From the temperature dependence and the orientation dependence of the single-crystal spectra in the magnetic field, we identified two different radicals formed in irradiated ABO single crystals. To theoretically determine the types of radicals, the most stable structure of ABO was obtained by molecular mechanics and B3LYP/6-31G(d,p) calculations. Four possible radicals were modeled and EPR parameters were calculated for the modeled radicals using the B3LYP method and the TZVP basis set. Calculated values for two of the modeled radicals were in strong agreement with the experimental EPR parameters determined from the spectra. In addition, simulated spectra of the modeled radicals, generated using the calculated hyperfine coupling constants as starting points, matched the experimental spectra well.
Two-dimensional and three-dimensional evaluation of the deformation relief
NASA Astrophysics Data System (ADS)
Alfyorova, E. A.; Lychagin, D. V.
2017-12-01
This work presents experimental results on the surface morphology of face-centered cubic single crystals after compression deformation. Our aim is to identify how a quasiperiodic profile forms on single crystals with different crystallogeometrical orientations and to describe the deformation structures quantitatively. Modern methods such as optical and confocal microscopy are applied to determine the surface morphology parameters. The results show that octahedral slip is an integral part of the formation of the quasiperiodic surface profile, starting from the initial strain. The similarity of the surface profile formation process at different scale levels is demonstrated. The size of consistent deformation regions is found: 45 µm for slip lines ([001]-oriented single crystal) and 30 µm for mesobands ([110]-oriented single crystal). The possibility of using two- and three-dimensional roughness parameters to describe the deformation structures is shown.
NASA Technical Reports Server (NTRS)
Hadass, Z.
1974-01-01
The design procedure of feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance that compares favorably with a much more complicated digitally aided tracker. Parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.
2009-05-01
Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). 
The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
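The Nash-Sutcliffe efficiency used as a calibration metric above is straightforward to compute; a minimal sketch with hypothetical flow values:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS efficiency: 1 - SSE / variance of the observations.
    1 = perfect fit; 0 = no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

flow = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical observed flows
print(nash_sutcliffe(flow, flow))                      # perfect model -> 1.0
print(nash_sutcliffe(flow, np.full(5, flow.mean())))   # mean predictor -> 0.0
```

Separate NS scores restricted to high-flow and low-flow subsets of `obs` would give the pair of conflicting objectives that motivates the bi-objective calibration described above.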
Osche, G R
2000-08-20
Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.
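For a fully developed speckle field integrated over M coherence cells, the noise-free single-pulse photocount distribution is negative binomial; the generalized form in the abstract adds a coherent (specular) component on top of this. A sketch of the plain negative binomial case (Mandel's formula with gamma-distributed integrated intensity), offered only as an illustration of the limiting distribution named in the abstract:

```python
from math import exp, lgamma, log

def speckle_pmf(n, mean, M):
    """Photocount PMF for fully developed speckle integrated over M coherence
    cells: negative binomial with shape M and the given mean count."""
    return exp(lgamma(n + M) - lgamma(n + 1) - lgamma(M)
               + n * log(mean / (mean + M)) + M * log(M / (mean + M)))

# M = 1 (no aperture averaging) reduces to Bose-Einstein statistics,
# i.e. negative-exponential intensity.
p0 = speckle_pmf(0, 1.0, 1)   # 0.5 for unit mean count
```

Aperture averaging (M > 1) narrows the distribution, which is the mechanism behind the integration-loss discussion in the abstract.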
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process in which a parameter space is searched to locate the set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will carry some associated uncertainty, which in turn leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals to the reconstructed parameters and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole-shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole-shot reconstruction results over a time interval are used to validate the propagated uncertainty from a single time slice.
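First-order uncertainty propagation of the kind described, carrying signal covariances through a model's Jacobian to parameter covariances, can be sketched as follows. This is a generic linearized propagation under invented numbers, not V3FIT's actual implementation:

```python
import numpy as np

def propagate_covariance(jacobian, signal_cov):
    """First-order propagation: Cov(p) ~= J @ Sigma @ J.T for p = f(signals)."""
    J = np.atleast_2d(np.asarray(jacobian, float))
    return J @ np.asarray(signal_cov, float) @ J.T

# Two independent signals with variances 0.04 and 0.09; a scalar parameter
# p = s1 + 2*s2 then has variance 0.04 + 4*0.09 = 0.40.
param_cov = propagate_covariance([[1.0, 2.0]], np.diag([0.04, 0.09]))
```

Validating such propagated variances against the scatter of repeated reconstructions, as the abstract describes, amounts to comparing `param_cov` with a sample covariance.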
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
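Reducing a multi-criterion optimization to a single criterion by a weighted sum, as described above, is simple to express; the criterion values and weights below are illustrative only, not taken from the study:

```python
def weighted_objective(criteria, weights):
    """Collapse several error criteria into one scalar to minimize."""
    assert len(criteria) == len(weights)
    return sum(w * c for w, c in zip(weights, criteria))

# Illustrative values for the three criteria named in the abstract:
# conversion misfit, thermodynamic inconsistency, entropy-production mismatch.
score = weighted_objective([0.12, 0.05, 0.30], [1.0, 0.5, 0.25])
```

The choice of weights encodes the relative importance of the criteria and, as with any scalarization, different weights can select different parameter sets.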
A multi-objective approach to improve SWAT model calibration in alpine catchments
NASA Astrophysics Data System (ADS)
Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele
2018-04-01
Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
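A multi-objective calibration like the third procedure yields a set of non-dominated parameter sets rather than a single optimum. A brute-force sketch of extracting that Pareto front from hypothetical (discharge error, SWE error) pairs:

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# Hypothetical (discharge error, SWE error) pairs for candidate parameter sets.
candidates = [(0.2, 0.9), (0.3, 0.4), (0.5, 0.5), (0.7, 0.1)]
front = pareto_front(candidates)   # (0.5, 0.5) is dominated by (0.3, 0.4)
```

Only parameter sets on this front can claim an acceptable trade-off between discharge and SWE prediction, which is the acceptance criterion the abstract applies.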
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
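A generic simulated annealing loop over a box-bounded parameter space can be sketched as below; a toy quadratic objective stands in for the expensive BIOMAP runs, and the step size and cooling schedule are arbitrary choices:

```python
import math
import random

def simulated_annealing(cost, bounds, n_iter=5000, t0=1.0, seed=0):
    """Global search over a box-bounded parameter space."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    cur_cost = cost(x)
    best, best_cost = x[:], cur_cost
    for k in range(1, n_iter + 1):
        temp = t0 / k                              # simple cooling schedule
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        c = cost(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if c < cur_cost or rng.random() < math.exp(-(c - cur_cost) / temp):
            x, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost

# Toy stand-in objective with its optimum at (0.3, 0.7) inside the unit box.
toy = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best, best_cost = simulated_annealing(toy, [(0.0, 1.0), (0.0, 1.0)])
```

The occasional acceptance of uphill moves at high temperature is what lets the search escape local minima, the property that motivates using a global method for a complex, non-linear solution space.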
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
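As a simplified illustration of maximizing a LOD score over a grid of fixed parameter values, the sketch below uses the recombination fraction as the single free parameter; the paper's genetic models involve additional parameters such as penetrance and phenocopy rate, which are omitted here:

```python
from math import log10

def lod_score(recombinants, nonrecombinants, theta):
    """LOD of linkage at recombination fraction theta versus no linkage (0.5)."""
    r, n = recombinants, nonrecombinants
    return log10((theta ** r) * ((1 - theta) ** n) / 0.5 ** (r + n))

def max_lod(r, n, thetas):
    """Maximum LOD over a fixed grid of parameter values."""
    return max(lod_score(r, n, th) for th in thetas)

# 2 recombinants among 10 informative meioses; the grid maximum falls at
# theta = 0.2, which here is also the maximum-likelihood estimate r/(r+n).
peak = max_lod(2, 8, [0.05, 0.1, 0.2, 0.3, 0.4])
```

Maximizing over a grid (or a continuum) of parameter values inflates the statistic under the null, which is why the abstract notes that a higher critical value is then required.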
Moore, G.K.; Baten, L.G.; Allord, G.J.; Robinove, C.J.
1983-01-01
The Fox-Wolf River basin in east-central Wisconsin was selected to test concepts for a water-resources information system using digital mapping technology. This basin of 16,800 sq km is typical of many areas in the country. Fifty digital data sets were included in the Fox-Wolf information system. Many data sets were digitized from 1:500,000 scale maps and overlays. Some thematic data were acquired from WATSTORE and other digital data files. All data were geometrically transformed into a Lambert Conformal Conic map projection and converted to a raster format with a 1-km resolution. The result of this preliminary processing was a group of spatially registered, digital data sets in map form. Parameter evaluation, areal stratification, data merging, and data integration were used to achieve the processing objectives and to obtain analysis results for the Fox-Wolf basin. Parameter evaluation includes the visual interpretation of single data sets and digital processing to obtain new derived data sets. In the areal stratification stage, masks were used to extract from one data set all features that are within a selected area on another data set. Most processing results were obtained by data merging. Merging is the combination of two or more data sets into a composite product, in which the contribution of each original data set is apparent and can be extracted from the composite. One processing result was also obtained by data integration. Integration is the combination of two or more data sets into a single new product, from which the original data cannot be separated or calculated. (USGS)
Finite Nuclei in the Quark-Meson Coupling Model.
Stone, J R; Guichon, P A M; Reinhard, P G; Thomas, A W
2016-03-04
We report the first use of the effective quark-meson coupling (QMC) energy density functional (EDF), derived from a quark model of hadron structure, to study a broad range of ground state properties of even-even nuclei across the periodic table in the nonrelativistic Hartree-Fock+BCS framework. The novelty of the QMC model is that the nuclear medium effects are treated through modification of the internal structure of the nucleon. The density dependence is microscopically derived and the spin-orbit term arises naturally. The QMC EDF depends on a single set of four adjustable parameters having a clear physics basis. When applied to diverse ground state data, the QMC EDF already produces, in its present simple form, overall agreement with experiment of a quality comparable to a representative Skyrme EDF. There exist, however, multiple Skyrme parameter sets, frequently tailored to describe selected nuclear phenomena. The QMC EDF set of fewer parameters, derived in this work, is not open to such variation, the chosen set being applied, without adjustment, to both the properties of finite nuclei and nuclear matter.
Complex fuzzy soft expert sets
NASA Astrophysics Data System (ADS)
Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak
2017-04-01
Complex fuzzy sets and their accompanying theory, although still in their infancy, have proven to be superior to classical type-1 fuzzy sets, owing to their ability to represent time-periodic problem parameters and to capture the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real-world problems. However, two major problems are inherent in complex fuzzy sets: they lack a sufficient parameterization tool, and they have no mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, with the added advantage of allowing users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, while offering a higher level of computational efficiency than similar models in the literature.
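A minimal sketch of the underlying data structure (names, opinions, and membership values invented for illustration): each (parameter, expert, opinion) triple maps to a complex membership grade r·e^{iω}, whose amplitude r lies in [0, 1] and whose phase ω can carry the periodic, seasonal component of the fuzziness.

```python
import cmath

# Sketch only: a complex fuzzy soft expert set represented as a mapping from
# (parameter, expert, opinion) triples to complex membership grades
# r * exp(i * omega), with amplitude r in [0, 1].
def grade(r, omega):
    assert 0.0 <= r <= 1.0, "amplitude must lie in [0, 1]"
    return cmath.rect(r, omega)

cfses = {
    ("demand", "expert_1", "agree"):    grade(0.9, cmath.pi / 6),
    ("demand", "expert_2", "agree"):    grade(0.7, cmath.pi / 4),
    ("demand", "expert_2", "disagree"): grade(0.3, cmath.pi / 3),
}

# Amplitude and phase are recovered by polar decomposition.
r, omega = cmath.polar(cfses[("demand", "expert_1", "agree")])
```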
Multi-Scale Modeling to Improve Single-Molecule, Single-Cell Experiments
NASA Astrophysics Data System (ADS)
Munsky, Brian; Shepherd, Douglas
2014-03-01
Single-cell, single-molecule experiments are producing an unprecedented amount of data to capture the dynamics of biological systems. When integrated with computational models, observations of spatial, temporal and stochastic fluctuations can yield powerful quantitative insight. We concentrate on experiments that localize and count individual molecules of mRNA. These high-precision experiments have large imaging and computational processing costs, and we explore how improved computational analyses can dramatically reduce overall data requirements. In particular, we show how analyses of spatial, temporal and stochastic fluctuations can significantly enhance parameter estimation results for small, noisy data sets. We also show how full probability distribution analyses can constrain parameters with far less data than bulk analyses or statistical moment closures. Finally, we discuss how a systematic modeling progression from simple to more complex analyses can reduce total computational costs by orders of magnitude. We illustrate our approach using single-molecule, spatial mRNA measurements of Interleukin 1-alpha mRNA induction in human THP1 cells following stimulation. Our approach could improve the effectiveness of single-molecule gene regulation analyses for many other processes.
NASA Astrophysics Data System (ADS)
Petersson, George A.; Malick, David K.; Frisch, Michael J.; Braunstein, Matthew
2006-07-01
Examination of the convergence of full valence complete active space self-consistent-field configuration interaction including all single and double excitations (CASSCF-CISD) energies with expansion of the one-electron basis set reveals a pattern very similar to the convergence of single determinant energies. Calculations on the lowest four singlet states and the lowest four triplet states of N2 with the sequence of n-tuple-ζ augmented polarized (nZaP) basis sets (n = 2, 3, 4, 5, and 6) are used to establish the complete basis set limits. Full configuration-interaction (CI) and core electron contributions must be included for very accurate potential energy surfaces. However, a simple extrapolation scheme that has no adjustable parameters and requires nothing more demanding than CAS(10e-, 8orb)-CISD/3ZaP calculations gives the Re, ωe, ωexe, Te, and De for these eight states with rms errors of 0.0006 Å, 4.43 cm-1, 0.35 cm-1, 0.063 eV, and 0.018 eV, respectively.
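The abstract does not state the authors' extrapolation formula. A common parameter-free two-point form for basis-set convergence, E_n = E_CBS + A/n³ in the cardinal number n, illustrates the general idea (the energies below are invented, in hartree):

```python
# Two-point complete-basis-set (CBS) extrapolation sketch. Solving
#   E_n = E_CBS + A/n^3  and  E_m = E_CBS + A/m^3
# for E_CBS eliminates A, leaving no adjustable parameters.
def cbs_two_point(e_n, e_m, n, m):
    return (n**3 * e_n - m**3 * e_m) / (n**3 - m**3)

# Illustrative (invented) correlation energies from 3ZaP and 4ZaP bases:
e_cbs = cbs_two_point(-0.3000, -0.3200, 3, 4)
```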
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rastogi, Rupali, E-mail: rastogirupali@ymail.com; Tarannum, Nazia; Butcher, R. J.
2016-03-15
5-Bromosalicylalcohol was prepared by the reaction of NaBH4 with 5-bromosalicylaldehyde. The use of sodium borohydride makes the reaction easy, facile and economic, and does not require any toxic catalyst. The compound was characterized by FTIR, 1H NMR, 13C NMR, TEM and ESI mass spectra. The crystal structure was determined by single-crystal X-ray analysis. Quantum mechanical calculations of geometries, energies and thermodynamic parameters were carried out using the density functional theory (DFT/B3LYP) method with the 6-311G(d,p) basis set. The optimized geometrical parameters obtained by the B3LYP method show good agreement with the experimental data.
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e. the optimal product volume that should be purchased, and that the inventory level followed the given set point.
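A certainty-equivalence sketch of the control idea (not the paper's exact robust predictive formulation; set point, demand statistics and horizon are invented): at each period, order the volume that steers the expected next inventory level onto the set point, subject to a no-negative-order constraint.

```python
import random

random.seed(1)
set_point, mean_demand = 100.0, 20.0
x = 60.0  # initial inventory level

# Dynamics: x_{k+1} = x_k + u_k - d_k, with d_k random demand.
# Controller sketch: u_k = max(0, set_point - x_k + E[d_k]).
history = []
for k in range(30):
    u = max(0.0, set_point - x + mean_demand)   # purchase volume this period
    d = random.gauss(mean_demand, 4.0)          # realized random demand
    x = x + u - d                               # inventory update
    history.append(x)

# After the initial transient, the level hovers near the set point.
steady_error = sum(history[10:]) / len(history[10:]) - set_point
```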
Multiple-objective optimization in precision laser cutting of different thermoplastics
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.
2015-04-01
Thermoplastics are increasingly being used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because it is localized and non-contact, laser cutting can produce precise cuts with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce errors and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The optimized set of processing parameters is determined from the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). This result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of the process parameters on the cutting characteristics. It was found that laser power has the dominant effect on HAZ for all three thermoplastics.
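The grey relational calculation can be sketched on invented response data (rows are candidate parameter settings, columns are smaller-the-better responses such as HAZ for the three thermoplastics; ζ = 0.5 is the customary distinguishing coefficient).

```python
import numpy as np

# Illustrative data only, not the paper's measurements.
responses = np.array([[0.30, 0.25, 0.40],
                      [0.20, 0.22, 0.28],
                      [0.35, 0.30, 0.45]])

# Step 1: smaller-the-better normalisation onto [0, 1].
lo, hi = responses.min(axis=0), responses.max(axis=0)
norm = (hi - responses) / (hi - lo)

# Step 2: grey relational coefficients against the ideal sequence (all ones).
delta = np.abs(1.0 - norm)
zeta = 0.5
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: the grade is the row mean; the highest grade picks the single
# optimised parameter set covering all three responses at once.
grade = coeff.mean(axis=1)
best = int(np.argmax(grade))
```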
Papanastasiou, Giorgos; Williams, Michelle C; Kershaw, Lucy E; Dweck, Marc R; Alam, Shirjel; Mirsadraee, Saeed; Connell, Martin; Gray, Calum; MacGillivray, Tom; Newby, David E; Semple, Scott Ik
2015-02-17
Mathematical modeling of cardiovascular magnetic resonance perfusion data allows absolute quantification of myocardial blood flow. Saturation of left ventricle signal during standard contrast administration can compromise the input function used when applying these models. This saturation effect is evident during application of standard Fermi models in single bolus perfusion data. Dual bolus injection protocols have been suggested to eliminate saturation but are much less practical in the clinical setting. The distributed parameter model can also be used for absolute quantification but has not been applied in patients with coronary artery disease. We assessed whether distributed parameter modeling might be less dependent on arterial input function saturation than Fermi modeling in healthy volunteers. We validated the accuracy of each model in detecting reduced myocardial blood flow in stenotic vessels versus gold-standard invasive methods. Eight healthy subjects were scanned using a dual bolus cardiac perfusion protocol at 3T. We performed both single and dual bolus analysis of these data using the distributed parameter and Fermi models. For the dual bolus analysis, a scaled pre-bolus arterial input function was used. In single bolus analysis, the arterial input function was extracted from the main bolus. We also performed analysis using both models of single bolus data obtained from five patients with coronary artery disease and findings were compared against independent invasive coronary angiography and fractional flow reserve. Statistical significance was defined as two-sided P value < 0.05. Fermi models overestimated myocardial blood flow in healthy volunteers due to arterial input function saturation in single bolus analysis compared to dual bolus analysis (P < 0.05). No difference was observed in these volunteers when applying distributed parameter-myocardial blood flow between single and dual bolus analysis. 
In patients, distributed parameter modeling was able to detect reduced myocardial blood flow at stress (<2.5 mL/min/mL of tissue) in all 12 stenotic vessels compared to only 9 for Fermi modeling. Comparison of single bolus versus dual bolus values suggests that distributed parameter modeling is less dependent on arterial input function saturation than Fermi modeling. Distributed parameter modeling showed excellent accuracy in detecting reduced myocardial blood flow in all stenotic vessels.
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. The nondominated sorting genetic algorithm-II is used to predict the Pareto optimal sets of solutions. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
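The core of NSGA-II's nondominated sorting, extraction of a Pareto front, can be sketched as follows (objective values invented; pairs are (BSFC, NOx), both to be minimised):

```python
# Minimal Pareto-front extraction, the building block of nondominated sorting.
def dominates(a, b):
    """a dominates b if it is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Invented candidate solutions: (BSFC in g/kWh, NOx in g/kWh).
solutions = [(250.0, 4.0), (240.0, 5.0), (260.0, 3.5), (255.0, 4.5)]
front = pareto_front(solutions)
```

Repeatedly peeling off fronts like this, plus a crowding-distance tiebreak, yields NSGA-II's ranking of a population.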
NASA Astrophysics Data System (ADS)
Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.
2017-12-01
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model's results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit `equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against additional low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube Sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube experiment approach in single-metric assessed performance. 
However, it is also shown that there are many merits of the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring to calibrate the model for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
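The Latin Hypercube experiment described above can be sketched as follows. The parameter bounds are invented, not GR4J's actual ranges, and Nash-Sutcliffe efficiency (NSE) is shown as one example metric, not one of the study's drought-specific ones.

```python
import random

random.seed(42)

def latin_hypercube(n_samples, bounds):
    """Stratified sampling: each parameter range is cut into n_samples bins
    and each bin is used exactly once, in independently shuffled order."""
    dims = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        random.shuffle(strata)
        dims.append([lo + (hi - lo) * (s + random.random()) / n_samples
                     for s in strata])
    return list(zip(*dims))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Four conceptual parameters with illustrative bounds.
bounds = [(1.0, 1500.0), (-10.0, 5.0), (1.0, 500.0), (0.5, 4.0)]
parameter_sets = latin_hypercube(500, bounds)
```

Each sampled set would then be run through the hydrological model and scored against every metric, giving the probabilistic, multi-objective picture the presentation argues for.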
Er:YAG laser for endodontics: efficiency and safety
NASA Astrophysics Data System (ADS)
Hibst, Raimund; Stock, Karl; Gall, Robert; Keller, Ulrich
1997-12-01
Recently it has been shown that bacteria can be sterilized by Er:YAG laser irradiation. By optical fiber transmission the bactericidal effect can also be used in endodontics. In order to explore suitable laser parameters, we further investigated sterilization of caries and measured temperatures in models simulating endodontic treatment. It was found that the bactericidal effect is cumulative, with single pulses being active. This allows all laser parameters except pulse energy (radiant exposure) to be chosen on technical, practical or safety grounds. For clinical studies the following parameter set is proposed for efficient and safe application (teeth with a root wall thickness > 1 mm, and prepared up to ISO 50): pulse energy: 50 mJ, repetition rate: 15 Hz, fiber withdrawal velocity: 2 mm/s. With these settings 4 passes must be performed to accumulate the total dose for sterilization.
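A back-of-the-envelope sketch of the proposed setting. The fiber core diameter and the per-millimetre dose bookkeeping are assumptions made purely for illustration; only the pulse energy, repetition rate, withdrawal velocity and pass count come from the abstract.

```python
import math

pulse_energy_J = 0.050      # 50 mJ per pulse (from the abstract)
rep_rate_Hz = 15.0          # repetition rate (from the abstract)
withdrawal_mm_s = 2.0       # fiber withdrawal velocity (from the abstract)
passes = 4                  # passes needed for the sterilizing dose

# Pulses delivered per millimetre of canal wall during one pass.
pulses_per_mm = rep_rate_Hz / withdrawal_mm_s

# Assumed 400-um fiber core (radius 0.02 cm) to convert energy to fluence.
spot_area_cm2 = math.pi * 0.02 ** 2
fluence_per_pulse = pulse_energy_J / spot_area_cm2    # J/cm^2 per pulse

# Because the bactericidal effect is cumulative, doses simply add up
# across pulses and passes.
dose_per_pass = pulses_per_mm * fluence_per_pulse
total_dose = passes * dose_per_pass
```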
Continued fractions with limit periodic coefficients
NASA Astrophysics Data System (ADS)
Buslaev, V. I.
2018-02-01
The boundary properties of functions represented by limit periodic continued fractions of a fairly general form are investigated. Such functions are shown to have no single-valued meromorphic extension to any neighbourhood of any non-isolated boundary point of the set of convergence of the continued fraction. The boundary of the set of meromorphy has the property of symmetry in an external field determined by the parameters of the continued fraction. Bibliography: 26 titles.
Does History Repeat Itself? Wavelets and the Phylodynamics of Influenza A
Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.
2012-01-01
Unprecedented global surveillance of viruses will result in massive sequence data sets that require new statistical methods. These data sets press the limits of Bayesian phylogenetics as the high-dimensional parameters that comprise a phylogenetic tree increase the already sizable computational burden of these techniques. This burden often results in partitioning the data set, for example, by gene, and inferring the evolutionary dynamics of each partition independently, a compromise that results in stratified analyses that depend only on data within a given partition. However, parameter estimates inferred from these stratified models are likely strongly correlated, considering they rely on data from a single data set. To overcome this shortfall, we exploit the existing Monte Carlo realizations from stratified Bayesian analyses to efficiently estimate a nonparametric hierarchical wavelet-based model and learn about the time-varying parameters of effective population size that reflect levels of genetic diversity across all partitions simultaneously. Our methods are applied to complete genome influenza A sequences that span 13 years. We find that broad peaks and trends, as opposed to seasonal spikes, in the effective population size history distinguish individual segments from the complete genome. We also address hypotheses regarding intersegment dynamics within a formal statistical framework that accounts for correlation between segment-specific parameters. PMID:22160768
Modeling and analysis of energy quantization effects on single electron inverter performance
NASA Astrophysics Data System (ADS)
Dan, Surya Shankar; Mahapatra, Santanu
2009-08-01
In this paper, for the first time, the effects of energy quantization on single electron transistor (SET) inverter performance are analyzed through analytical modeling and Monte Carlo simulations. It is shown that energy quantization mainly changes the Coulomb blockade region and drain current of SET devices and thus affects the noise margin, power dissipation, and propagation delay of the SET inverter. A new analytical model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. A compact expression is developed for a novel parameter, the quantization threshold, which is introduced for the first time in this paper. The quantization threshold explicitly defines the maximum energy quantization that an SET inverter logic circuit can withstand before its noise margin falls below a specified tolerance level. It is found that an SET inverter designed with CT:CG=1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.
Drift and observations in cosmic-ray modulation, 1
NASA Technical Reports Server (NTRS)
Potgieter, M. S.
1985-01-01
It is illustrated that a relatively simple drift model can, in contrast with no-drift models, simultaneously fit proton and electron spectra observed in 1965-66 and 1977, using a single set of modulation parameters except for a change in the IMF polarity. This result is interpreted together with the observation of Evenson and Meyer that electrons recovered more rapidly than protons after 1980, in contrast with what Burger and Swanenburg observed in 1968-72, as a charge-sign-dependent effect due to the occurrence of drift in cosmic ray modulation. The same set of parameters produces a shift in the phase and amplitude of the diurnal anisotropy vector, consistent with observations in 1969-71 and 1980-81.
Kinetic analysis of single molecule FRET transitions without trajectories
NASA Astrophysics Data System (ADS)
Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.
2018-03-01
Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well-chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrated the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
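A toy version of the approach (two-state model with invented rates; a hand-rolled two-sample Kolmogorov-Smirnov statistic stands in for the paper's statistical tests): simulate the fraction of an exposure window spent in the high-FRET state, then compare simulated and "experimental" samples. Scanning the model rates and recording the resulting test statistic or p-value for each setting produces the parameter-likelihood map the abstract describes.

```python
import random

random.seed(7)

def simulate_occupancy(k_on, k_off, t_exp, n):
    """Fraction of an exposure window t_exp spent in the high-FRET state,
    for a two-state Markov model started from its stationary distribution."""
    out = []
    for _ in range(n):
        t, high = 0.0, 0.0
        state = random.random() < k_on / (k_on + k_off)  # True = high state
        while t < t_exp:
            dwell = random.expovariate(k_off if state else k_on)
            dwell = min(dwell, t_exp - t)
            if state:
                high += dwell
            t += dwell
            state = not state
        out.append(high / t_exp)
    return out

def ks_statistic(a, b):
    """Maximum gap between the two empirical CDFs."""
    xs = sorted(a + b)
    cdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in xs)

data = simulate_occupancy(5.0, 5.0, 1.0, 200)   # stand-in "experiment"
model = simulate_occupancy(5.0, 5.0, 1.0, 200)  # matching parameter setting
d_match = ks_statistic(data, model)             # small when parameters fit
```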
Propagation characteristics of two-color laser pulses in homogeneous plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemlata,; Saroch, Akanksha; Jha, Pallavi
2015-11-15
An analytical and numerical study of the evolution of two-color, sinusoidal laser pulses in cold, underdense, and homogeneous plasma has been presented. The wave equations for the radiation fields driven by linear as well as nonlinear contributions due to the two-color laser pulses have been set up. A variational technique is used to obtain the simultaneous equations describing the evolution of the laser spot size, pulse length, and chirp parameter. Numerical methods are used to graphically analyze the simultaneous evolution of these parameters due to the combined effect of the two-color laser pulses. Further, the pulse parameters are compared with those obtained for a single laser pulse. Significant focusing, compression, and enhanced positive chirp is obtained due to the combined effect of simultaneously propagating two-color pulses as compared to a single pulse propagating in plasma.
USDA-ARS?s Scientific Manuscript database
The complexity of the hydrologic system challenges the development of models. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the ...
Evaluation of innovative rocket engines for single-stage earth-to-orbit vehicles
NASA Astrophysics Data System (ADS)
Manski, Detlef; Martin, James A.
1988-07-01
Computer models of rocket engines and single-stage-to-orbit vehicles that were developed by the authors at DFVLR and NASA have been combined. The resulting code consists of engine mass, performance, trajectory and vehicle sizing models. The engine mass model includes equations for each subsystem and describes their dependences on various propulsion parameters. The engine performance model consists of multidimensional sets of theoretical propulsion properties and a complete thermodynamic analysis of the engine cycle. The vehicle analyses include an optimized trajectory analysis, mass estimation, and vehicle sizing. A vertical-takeoff, horizontal-landing, single-stage, winged, manned, fully reusable vehicle with a payload capability of 13.6 Mg (30,000 lb) to low earth orbit was selected. Hydrogen, methane, propane, and dual-fuel engines were studied with staged-combustion, gas-generator, dual bell, and the dual-expander cycles. Mixture ratio, chamber pressure, nozzle exit pressure liftoff acceleration, and dual fuel propulsive parameters were optimized.
Evaluation of innovative rocket engines for single-stage earth-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Manski, Detlef; Martin, James A.
1988-01-01
Computer models of rocket engines and single-stage-to-orbit vehicles that were developed by the authors at DFVLR and NASA have been combined. The resulting code consists of engine mass, performance, trajectory and vehicle sizing models. The engine mass model includes equations for each subsystem and describes their dependences on various propulsion parameters. The engine performance model consists of multidimensional sets of theoretical propulsion properties and a complete thermodynamic analysis of the engine cycle. The vehicle analyses include an optimized trajectory analysis, mass estimation, and vehicle sizing. A vertical-takeoff, horizontal-landing, single-stage, winged, manned, fully reusable vehicle with a payload capability of 13.6 Mg (30,000 lb) to low earth orbit was selected. Hydrogen, methane, propane, and dual-fuel engines were studied with staged-combustion, gas-generator, dual bell, and the dual-expander cycles. Mixture ratio, chamber pressure, nozzle exit pressure liftoff acceleration, and dual fuel propulsive parameters were optimized.
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
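The linear-prediction step can be sketched on a noiseless toy signal (not real channel data): dwell-time distributions are sums of decaying exponentials, and the roots of the prediction polynomial recover the time constants. `lstsq` solves the prediction equations via SVD, the decomposition from which, on noisy data, the number of significant singular values would reveal the number of exponentials.

```python
import numpy as np

dt = 0.1
t = np.arange(0, 5, dt)
y = 2.0 * np.exp(-t / 1.0) + 1.0 * np.exp(-t / 0.25)   # two components

p = 2                                   # model order (number of exponentials)
# Linear prediction: y[k] = c0*y[k-1] + c1*y[k-2] for a 2-exponential signal.
A = np.column_stack([y[p - 1 - i : len(y) - 1 - i] for i in range(p)])
b = y[p:]
c, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based least squares

# Roots z_i of z^2 - c0*z - c1 equal exp(-dt/tau_i).
roots = np.roots(np.concatenate(([1.0], -c)))
taus = sorted(-dt / np.log(roots.real))
```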
Properties of Martian Hematite at Meridiani Planum by Simultaneous Fitting of Mars Mossbauer Spectra
NASA Technical Reports Server (NTRS)
Agresti, D. G.; Fleischer, I.; Klingelhoefer, G.; Morris, R. V.
2010-01-01
Mossbauer spectrometers [1] on the two Mars Exploration Rovers (MERs) have been making measurements of surface rocks and soils since January 2004, recording spectra in 10-K-wide temperature bins ranging from 180 K to 290 K. Initial analyses focused on modeling individual spectra directly as acquired or, to increase statistical quality, as sums of single-rock or soil spectra over temperature or as sums over similar rock or soil type [2, 3]. Recently, we have begun to apply simultaneous fitting procedures [4] to Mars Mossbauer data [5-7]. During simultaneous fitting (simfitting), many spectra are modeled similarly and fit together to a single convergence criterion. A satisfactory simfit with parameter values consistent among all spectra is more likely than many single-spectrum fits of the same data because fitting parameters are shared among multiple spectra in the simfit. Consequently, the number of variable parameters, as well as the correlations among them, is greatly reduced. Here we focus on applications of simfitting to interpret the hematite signature in Moessbauer spectra acquired at Meridiani Planum, results of which were reported in [7]. The Spectra. We simfit two sets of spectra with large hematite content [7]: 1) 60 rock outcrop spectra from Eagle Crater; and 2) 46 spectra of spherule-rich lag deposits (Table 1). Spectra of 10 different targets acquired at several distinct temperatures are included in each simfit set. In the table, each Sol (martian day) represents a different target, NS is the number of spectra for a given sol, and NT is the number of spectra for a given temperature. The spectra are indexed to facilitate definition of parameter relations and constraints. An example spectrum is shown in Figure 1, together with a typical fitting model. Results. We have shown that simultaneous fitting is effective in analyzing a large set of related MER Mossbauer spectra. 
By using appropriate constraints, we derive target-specific quantities and the temperature dependence of certain parameters. By examining different fitting models, we demonstrate an improved fit for martian hematite modeled with two sextets rather than as a single sextet, and show that outcrop and spherule hematite are distinct. For outcrop, the weaker sextet indicates a Morin transition typical of well-crystallized and chemically pure hematite, while most of the outcrop hematite remains in a weakly ferromagnetic state at all temperatures. For spherule spectra, both sextets are consistent with weakly ferromagnetic hematite with no Morin transition. For both hematites, there is evidence for a range of particle sizes.
2016-06-13
[Fragmentary abstract snippet] In the motional ground state, the ratio of the Rabi frequencies of carrier and sideband couplings is given by the Lamb-Dicke parameter. Carrier Rabi frequencies determine the Lamb-Dicke parameters and allow the orientation of the modes to be found. A single ion near T0 is used to find coefficient settings that give a maximal Rabi rate of the detection transition and/or minimal Rabi rates of micromotion …
A possible loophole in the theorem of Bell.
Hess, K; Philipp, W
2001-12-04
The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.
The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.
Nowak, Markus; Castellini, Claudio
2016-01-01
Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment compared to the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks without the need of explicitly training multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
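The LET augmentation can be sketched on invented feature vectors (the channel count, the recorded activations, and the plain sum used as the linear combination are all illustrative assumptions; the paper's exact weighting may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 8
rest = np.zeros(n_channels)
dof1 = rng.uniform(0.2, 1.0, n_channels)   # e.g. recorded wrist flexion
dof2 = rng.uniform(0.2, 1.0, n_channels)   # e.g. recorded power grasp

# Calibration data: only single-DOF activations are ever recorded.
X = np.vstack([rest, dof1, dof2])
y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # activation per DOF

# LET: the combined activation is modelled as a linear combination of the
# single-DOF signals and appended to the training set, never recorded.
X_aug = np.vstack([X, dof1 + dof2])
y_aug = np.vstack([y, [1.0, 1.0]])
```

Any intent-detection learner can then be trained on (X_aug, y_aug) instead of (X, y).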
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Farhan, Y.H.; Scow, K.M.; Fan, S.
Trichloroethylene (TCE) biodegradation in soil under aerobic conditions requires the presence of another compound, such as toluene, to support growth of microbial populations and enzyme induction. The biodegradation kinetics of TCE and toluene were examined by conducting three groups of experiments in soil: toluene only, toluene combined with low TCE concentrations, and toluene with TCE concentrations similar to or higher than toluene. The biodegradation of TCE and toluene and their interrelationships were modeled using a combination of several biodegradation functions. In the model, the pollutants were described as existing in the solid, liquid, and gas phases of soil, with biodegradation occurring only in the liquid phase. The distribution of the chemicals between the solid and liquid phase was described by a linear sorption isotherm, whereas liquid-vapor partitioning was described by Henry's law. Results from 12 experiments with toluene only could be described by a single set of kinetic parameters. The same set of parameters could describe toluene degradation in 10 experiments where low TCE concentrations were present. From these 10 experiments a set of parameters describing TCE cometabolism induced by toluene also was obtained. The complete set of parameters was used to describe the biodegradation of both compounds in 15 additional experiments, where significant TCE toxicity and inhibition effects were expected. Toluene parameters were similar to values reported for pure culture systems. Parameters describing the interaction of TCE with toluene and biomass were different from reported values for pure cultures, suggesting that the presence of soil may have affected the cometabolic ability of the indigenous soil microbial populations.
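The kinetic structure described above can be sketched as a small ODE system. This is an illustrative aqueous-phase Monod growth / cometabolism sketch with made-up parameter values; the sorption and gas-phase partitioning terms of the full model are omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) parameters for a Monod growth / cometabolism sketch.
mu_max, Ks_tol, Y = 0.5, 2.0, 0.7   # 1/h, mg/L, mg biomass per mg toluene
k_tce, Ks_tce = 0.05, 1.0           # cometabolic rate constant (L/mg/h), mg/L

def rhs(t, state):
    tol, tce, X = state             # aqueous toluene, TCE, biomass (mg/L)
    growth = mu_max * tol / (Ks_tol + tol) * X
    d_tol = -growth / Y                          # toluene fuels growth
    d_tce = -k_tce * tce / (Ks_tce + tce) * X    # cometabolic TCE removal
    return [d_tol, d_tce, growth]

sol = solve_ivp(rhs, (0.0, 48.0), [10.0, 1.0, 0.1])   # 48 h simulation
tol_final, tce_final, X_final = sol.y[:, -1]
```

With these placeholder values, toluene is consumed for growth while TCE is removed only cometabolically by the toluene-grown biomass, reproducing the qualitative coupling the abstract describes.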
Folding and stability of helical bundle proteins from coarse-grained models.
Kapoor, Abhijeet; Travesset, Alex
2013-07-01
We develop a coarse-grained model where solvent is considered implicitly, electrostatics are included as short-range interactions, and side-chains are coarse-grained to a single bead. The model depends on three main parameters: hydrophobic, electrostatic, and side-chain hydrogen bond strength. The parameters are determined by considering three levels of approximation and characterizing the folding of three selected proteins (training set). Nine additional proteins (containing up to 126 residues) as well as mutated versions (test set) are folded with the given parameters. In all folding simulations, the initial state is a random coil configuration. Besides the native state, some proteins fold into an additional state differing in topology (structure of the helical bundle). We discuss the stability of the native states, and compare the dynamics of our model to all-atom molecular dynamics simulations, as well as some general properties of the interactions governing folding dynamics. Copyright © 2013 Wiley Periodicals, Inc.
Aerodynamic configuration design using response surface methodology analysis
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit
1993-01-01
An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
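The RSM workflow above can be sketched numerically: sample a response at central composite design points, fit a full quadratic surface, and solve for its stationary point. The response function here is a hypothetical stand-in for the vehicle dry-weight model, not the paper's actual data.

```python
import numpy as np
from itertools import product

# Central composite design for two coded factors: 2^2 factorial corners,
# axial points at +/- alpha, and a centre point (9 design points).
alpha = np.sqrt(2.0)
corners = np.array(list(product([-1.0, 1.0], repeat=2)))
axial = np.array([[alpha, 0.0], [-alpha, 0.0],
                  [0.0, alpha], [0.0, -alpha]])
centre = np.zeros((1, 2))
X = np.vstack([corners, axial, centre])

def response(x1, x2):
    # Hypothetical smooth response (e.g. dry weight), minimum at (0.5, -0.5).
    return (x1 - 0.5) ** 2 + (x2 + 0.5) ** 2 + 3.0

y = response(X[:, 0], X[:, 1])

# Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 by least squares.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b0, b1, b2, b11, b22, b12 = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad = 0.
H = np.array([[2.0 * b11, b12], [b12, 2.0 * b22]])
x_opt = np.linalg.solve(H, [-b1, -b2])       # recovers (0.5, -0.5)
```

In the paper's setting, constraints (landing speed, trim, and stability levels) would additionally be imposed on the fitted surfaces before choosing the optimum.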
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
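The standard sensitivity and elasticity calculations the abstract builds on can be sketched as follows, using an illustrative stage-structured projection matrix (not the killer-whale data). The sensitivity of λ to entry a_ij is v_i w_j / ⟨v, w⟩, where w and v are the right and left dominant eigenvectors; elasticities rescale this by a_ij/λ.

```python
import numpy as np

# Illustrative stage-structured projection matrix (hypothetical rates).
A = np.array([[0.0, 1.5, 2.0],     # fecundities into the first stage
              [0.5, 0.0, 0.0],     # survival/transition rates
              [0.0, 0.8, 0.9]])

eigvals, W = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]                          # finite rate of increase, lambda
w = np.abs(W[:, k].real)                       # stable stage distribution

eigvals_t, V = np.linalg.eig(A.T)
v = np.abs(V[:, np.argmax(eigvals_t.real)].real)  # reproductive values

# Sensitivities s_ij = v_i * w_j / <v, w>; elasticities e_ij = (a_ij/lam)*s_ij.
S = np.outer(v, w) / (v @ w)
E = A * S / lam
# Elasticities are proportional (scale-free) contributions and sum to 1,
# which is exactly the rescaling whose consequences the paper examines.
```

Sensitivities compare absolute perturbations of the rates; elasticities compare proportional ones, and the choice between these scales is the issue the paper addresses.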
NASA Technical Reports Server (NTRS)
Koval, L. R.
1975-01-01
In the context of sound transmission through aircraft fuselage panels, equations for the field-incidence transmission loss (TL) of a single-walled panel are derived that include the effects of external air flow, panel curvature, and internal fuselage pressurization. These effects are incorporated into the classical equations for the TL of single panels, and the resulting double integral for field-incidence TL is numerically evaluated for a specific set of parameters.
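The classical starting point the paper extends can be sketched numerically: the mass-law transmission coefficient of a limp flat panel, averaged over incidence angles up to about 78° to give field-incidence TL. The flow, curvature, and pressurization corrections that are the paper's contribution are omitted here, and the panel mass and frequency are illustrative.

```python
import numpy as np
from scipy.integrate import quad

rho_c = 415.0                     # characteristic impedance of air, rayl
m = 10.0                          # panel surface mass, kg/m^2
f = 1000.0                        # frequency, Hz
omega = 2.0 * np.pi * f

def tau(theta):
    """Mass-law transmission coefficient at incidence angle theta."""
    x = omega * m * np.cos(theta) / (2.0 * rho_c)
    return 1.0 / (1.0 + x * x)

# Field incidence: average tau over incidence angles up to ~78 degrees with
# the usual cos(theta)*sin(theta) weighting, then convert to a level in dB.
lim = np.radians(78.0)
num, _ = quad(lambda t: tau(t) * np.cos(t) * np.sin(t), 0.0, lim)
den, _ = quad(lambda t: np.cos(t) * np.sin(t), 0.0, lim)
TL_field = -10.0 * np.log10(num / den)
```

This is the "double integral for field-incidence TL" in its simplest form; the paper's version adds the flow, curvature, and pressurization terms inside the transmission coefficient before performing the same angular average.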
Stack Characterization in CryoSat Level1b SAR/SARin Baseline C
NASA Astrophysics Data System (ADS)
Scagliola, Michele; Fornari, Marco; Di Giacinto, Andrea; Bouffard, Jerome; Féménias, Pierre; Parrinello, Tommaso
2015-04-01
CryoSat was launched on the 8th April 2010 and is the first European ice mission dedicated to the monitoring of precise changes in the thickness of polar ice sheets and floating sea ice. CryoSat is the first altimetry mission operating in SAR mode and it carries an innovative radar altimeter called the Synthetic Aperture Interferometric Altimeter (SIRAL), which transmits pulses at a high pulse repetition frequency, thus making the received echoes phase coherent and suitable for azimuth processing. The current CryoSat IPF (Instrument Processing Facility), Baseline B, was released in operation in February 2012. After more than 2 years of development, the release in operations of Baseline C is expected in the first half of 2015. It is worth recalling here that the CryoSat SAR/SARin IPF1 generates 20 Hz waveforms at an approximately equally spaced set of ground locations on the Earth's surface, i.e. surface samples, and that a surface sample gathers a collection of single-look echoes coming from the processed bursts during the time of visibility. Thus, for a given surface sample, the stack can be defined as the collection of all the single-look echoes pointing to the current surface sample, after applying all the necessary range corrections. The L1B product contains the power average of all the single-look echoes in the stack: the multi-looked L1B waveform. This reduces the data volume, while removing some information contained in the single looks that is useful for characterizing the surface and modelling the L1B waveform. To recover such information, a set of parameters has been added to the L1B product: the stack characterization or beam behaviour parameters. The stack characterization, already included in previous Baselines, has been reviewed and expanded in Baseline C. This poster describes all the stack characterization parameters, detailing what they represent and how they have been computed.
In detail, these parameters can be summarized as: - Stack statistical parameters, such as skewness and kurtosis - Look angle (i.e. the angle at which the surface sample is seen with respect to the nadir direction of the satellite) and Doppler angle (i.e. the angle at which the surface sample is seen with respect to the normal to the velocity vector) for the first and last single-look echoes in the stack. - Number of single-looks averaged in the stack (in Baseline C a stack weighting has been applied that reduces the number of looks). With the correct use of these parameters, users will be able to retrieve some of the 'lost' information contained within the stack and fully exploit the L1B product.
Laboratory testing on infiltration in single synthetic fractures
NASA Astrophysics Data System (ADS)
Cherubini, Claudia; Pastore, Nicola; Li, Jiawei; Giasi, Concetta I.; Li, Ling
2017-04-01
An understanding of infiltration phenomena in unsaturated rock fractures is extremely important in many branches of engineering for numerous reasons. Sectors such as the oil, gas and water industries regularly interact with water seepage through rock fractures, yet the understanding of the mechanics and behaviour associated with this sort of flow is still incomplete. An apparatus has been set up to test infiltration in single synthetic fractures in both dry and wet conditions. To simulate the two fracture planes, concrete fractures have been moulded from 3D-printed fractures with varying geometrical configurations, in order to analyse the influence of aperture and roughness on infiltration. Water flows through the single fractures by means of a hydraulic system composed of an upstream and a downstream reservoir, the latter being subdivided into five equal sections in order to measure the flow rate in each part and detect zones of preferential flow. The fractures have been set at various angles of inclination to investigate the effect of this parameter on infiltration dynamics. The results obtained identified that altering certain fracture parameters and conditions produces relevant effects on the infiltration process through the fractures. The main variables influencing the formation of preferential flow are: the inclination angle of the fracture, the saturation level of the fracture and the mismatch wavelength of the fracture.
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identify the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experimental results for the kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. In a case study, a McPherson strut suspension system is modelled as a multi-body dynamic system. Three kinematic and compliance tests are examined, namely, vertical parallel wheel travel, opposite wheel travel and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. Then, a dynamic summation of rank values is used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated and experimental models. Once an accurate representation of the vehicle suspension model is achieved, further analysis, such as ride and handling performance, can be implemented for further optimization.
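The identification loop above can be sketched on a toy stand-in problem: recover two unknown "suspension" parameters from synthetic test measurements by minimizing the simulation/experiment mismatch. The response model and parameter values are hypothetical, and a minimal (1+1) evolution strategy stands in for the full CMA-ES used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

loads = np.linspace(1.0, 5.0, 10)            # applied test loads
true = np.array([20.0, 1.5])                 # hidden "stiffness" and second term

def model(p, loads):
    k, c = p
    return loads / k + 0.1 * c * np.sqrt(loads)   # toy response, not a real model

measured = model(true, loads)                # synthetic "experimental" data

def cost(p):
    return np.sum((model(p, loads) - measured) ** 2)

# Minimal (1+1) evolution strategy with a 1/5th-success-style step-size rule.
p, sigma = np.array([5.0, 0.0]), 1.0
for _ in range(2000):
    trial = p + sigma * rng.standard_normal(2)
    if cost(trial) < cost(p):                # elitist acceptance
        p, sigma = trial, sigma * 1.1        # widen search on success
    else:
        sigma *= 0.98                        # narrow it on failure
```

CMA-ES additionally adapts a full covariance matrix of the mutation distribution, which is what makes it effective on the paper's 49-parameter problem; the elitist accept/step-size logic above is the simplest member of the same family.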
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
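A global sensitivity analysis of the kind described can be sketched with the Saltelli pick-freeze estimator of first-order Sobol indices. The "black-box" here is a deliberately simple analytic stand-in for a morphogenesis simulation, with one dominant parameter and a weak interaction term.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x1, x2):
    # Toy black-box: x1 dominates the output variance; 0.5*x1*x2 is a weak
    # interaction (its contribution appears in neither first-order index).
    return 4.0 * x1 + x2 + 0.5 * x1 * x2

N = 200_000
A = rng.random((N, 2))               # two independent uniform parameters
B = rng.random((N, 2))               # independent resample
yA = model(A[:, 0], A[:, 1])
yB = model(B[:, 0], B[:, 1])

S = []
for i in range(2):                   # first-order Sobol index of each parameter
    AB = B.copy()
    AB[:, i] = A[:, i]               # freeze parameter i, resample the rest
    yAB = model(AB[:, 0], AB[:, 1])
    S.append(np.mean(yA * (yAB - yB)) / np.var(yA))

S1, S2 = S                           # analytically about 0.92 and 0.08 here
```

The gap between the sum of first-order indices and 1 measures the interaction contribution; for image-producing models like the CPM, the scalar output would be replaced by morphometric measures extracted from the images.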
Deep neural nets as a method for quantitative structure-activity relationships.
Ma, Junshui; Sheridan, Robert P; Liaw, Andy; Dahl, George E; Svetnik, Vladimir
2015-02-23
Neural networks were widely used for quantitative structure-activity relationships (QSAR) in the 1990s. Because of various practical issues (e.g., slow on large problems, difficult to train, prone to overfitting, etc.), they were superseded by more robust methods like support vector machine (SVM) and random forest (RF), which arose in the early 2000s. The last 10 years have witnessed a revival of neural networks in the machine learning community thanks to new methods for preventing overfitting, more efficient training algorithms, and advancements in computer hardware. In particular, deep neural nets (DNNs), i.e. neural nets with more than one hidden layer, have found great success in many applications, such as computer vision and natural language processing. Here we show that DNNs can routinely make better prospective predictions than RF on a set of large diverse QSAR data sets that are taken from Merck's drug discovery effort. The number of adjustable parameters needed for DNNs is fairly large, but our results show that it is not necessary to optimize them for individual data sets, and a single set of recommended parameters can achieve better performance than RF for most of the data sets we studied. The usefulness of the parameters is demonstrated on additional data sets not used in the calibration. Although training DNNs is still computationally intensive, using graphical processing units (GPUs) can make this issue manageable.
NASA Astrophysics Data System (ADS)
Siebenmorgen, R.; Voshchinnikov, N. V.; Bagnulo, S.; Cox, N. L. J.; Cami, J.; Peest, C.
2018-03-01
It is well known that the dust properties of the diffuse interstellar medium exhibit variations towards different sight-lines on a large scale. We have investigated the variability of the dust characteristics on a small scale, and from cloud to cloud. We use low-resolution spectro-polarimetric data obtained in the context of the Large Interstellar Polarisation Survey (LIPS) towards 59 sight-lines in the Southern Hemisphere, and we fit these data using a dust model composed of silicate and carbon particles with sizes from the molecular to the sub-micrometre domain. Large (≥6 nm) silicates of prolate shape account for the observed polarisation. For 32 sight-lines we complement our data set with UVES archive high-resolution spectra, which enable us to establish whether a single cloud or multiple clouds are present along individual sight-lines. We find that the majority of these 35 sight-lines intersect two or more clouds, while eight of them are dominated by a single absorbing cloud. We confirm several correlations between extinction and parameters of the Serkowski law with dust parameters, but we also find previously undetected correlations between these parameters that are valid only in single-cloud sight-lines. We find that interstellar polarisation from multiple-cloud sight-lines is smaller than from single-cloud sight-lines, showing that the presence of a second or more clouds depolarises the incoming radiation. We find large variations of the dust characteristics from cloud to cloud. However, when we average a sufficiently large number of clouds in single-cloud or multiple-cloud sight-lines, we always retrieve similar mean dust parameters. The typical dust abundances of the single-cloud cases are [C]/[H] = 92 ppm and [Si]/[H] = 20 ppm.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provide spectral response information that gives a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin-infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS), in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best operating point for the probability of correct classification (Pc). Initial results show that nonlinear features yield improved performance.
Adaptive single-pixel imaging with aggregated sampling and continuous differential measurements
NASA Astrophysics Data System (ADS)
Huo, Yaoran; He, Hongjie; Chen, Fan; Tai, Heng-Ming
2018-06-01
This paper proposes an adaptive compressive imaging technique with one single-pixel detector and a single arm. The aggregated sampling (AS) method enables the resolution of the reconstructed images to be reduced, lowering time and space consumption. The target image with a resolution up to 1024 × 1024 can be reconstructed successfully at a 20% sampling rate. The continuous differential measurement (CDM) method combined with a ratio factor of significant coefficient (RFSC) improves the imaging quality. Moreover, RFSC reduces the human intervention in parameter setting. This technique enhances the practicability of single-pixel imaging with the benefits of lower time and space consumption, better imaging quality and less human intervention.
NASA Astrophysics Data System (ADS)
Reif, Maria M.; Hünenberger, Philippe H.
2011-04-01
The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [M. A. Kastenholz and P. H. Hünenberger, J. Chem. Phys. 124, 224501 (2006), 10.1529/biophysj.106.083667; M. M. Reif and P. H. Hünenberger, J. Chem. Phys. 134, 144103 (2010)], the application of appropriate correction terms makes it possible to obtain methodology-independent results. The corrected values are then exclusively characteristic of the underlying molecular model, including in particular the ion-solvent van der Waals interaction parameters, which determine the effective ion size and the magnitude of its dispersion interactions. In the present study, the comparison of calculated (corrected) hydration free energies with experimental data (along with the consideration of ionic polarizabilities) is used to calibrate new sets of ion-solvent van der Waals (Lennard-Jones) interaction parameters for the alkali (Li+, Na+, K+, Rb+, Cs+) and halide (F-, Cl-, Br-, I-) ions along with either the SPC or the SPC/E water models. The experimental dataset is defined by conventional single-ion hydration free energies [Tissandier et al., J. Phys. Chem. A 102, 7787 (1998), 10.1021/jp982638r; Fawcett, J. Phys. Chem. B 103, 11181] along with three plausible choices for the (experimentally elusive) value of the absolute (intrinsic) hydration free energy of the proton, namely, ΔG°_hyd[H+] = -1100, -1075 or -1050 kJ mol-1, resulting in three sets L, M, and H for the SPC water model and three sets LE, ME, and HE for the SPC/E water model (alternative sets can easily be interpolated to intermediate ΔG°_hyd[H+] values).
The residual sensitivity of the calculated (corrected) hydration free energies to the volume-pressure boundary conditions and to the effective ionic radius entering into the calculation of the correction terms is also evaluated and found to be very limited. Ultimately, it is expected that comparison with other experimental ionic properties (e.g., derivative single-ion solvation properties, as well as data concerning ionic crystals, melts, solutions at finite concentrations, or nonaqueous solutions) will make it possible to validate one specific set and thus the associated ΔG°_hyd[H+] value (atomistic consistency assumption). Preliminary results (first-peak positions in the ion-water radial distribution functions, partial molar volumes of ionic salts in water, and structural properties of ionic crystals) support a value of ΔG°_hyd[H+] close to -1100 kJ mol-1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miltiadis Alamaniotis; Vivek Agarwal
This paper places itself in the realm of anticipatory systems and envisions monitoring and control methods capable of making predictions over system-critical parameters. Anticipatory systems allow intelligent control of complex systems by predicting their future state. In the current work, an intelligent model aimed at implementing anticipatory monitoring and control in the energy industry is presented and tested. More particularly, a set of support vector regressors (SVRs) are trained using both historical and observed data. The trained SVRs are used to predict the future value of the system based on current operational system parameters. The predicted values are then input to a fuzzy-logic-based module where the values are fused to obtain a single value, i.e., the final system output prediction. The methodology is tested on real turbine degradation datasets. The outcome of the approach presented in this paper highlights its superiority over single support vector regressors. In addition, it is shown that appropriate selection of fuzzy sets and fuzzy rules plays an important role in improving system performance.
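The predict-then-fuse structure can be sketched as follows. Simple windowed linear extrapolators stand in for the trained SVRs, the degradation signal is synthetic, and the error-weighted fusion is only a crude stand-in for the paper's fuzzy-logic module.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy degradation signal standing in for turbine data: slow drift plus noise.
t = np.arange(100.0)
signal = 1.0 - 0.004 * t + 0.02 * rng.standard_normal(100)

windows = (10, 30, 60)

def predict(window):
    """One-step-ahead extrapolation from a linear fit over the last `window` points."""
    w = signal[-window:]
    coef = np.polyfit(np.arange(window), w, 1)
    return np.polyval(coef, window)

def recent_error(window):
    """In-sample mean squared residual of the same fit (a crude skill measure)."""
    w = signal[-window:]
    coef = np.polyfit(np.arange(window), w, 1)
    return np.mean((np.polyval(coef, np.arange(window)) - w) ** 2)

preds = np.array([predict(w) for w in windows])       # the "SVR" predictions
errors = np.array([recent_error(w) for w in windows])

# Fuzzy-style fusion sketch: membership weights inversely proportional to
# recent error, normalized to sum to one, then a weighted combination.
weights = 1.0 / (errors + 1e-12)
weights /= weights.sum()
fused = float(weights @ preds)                        # single final prediction
```

In the paper, the fusion weights come from fuzzy sets and rules over the predictor outputs rather than a fixed inverse-error formula, which is precisely the design choice the authors show matters for performance.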
Allometric scaling: analysis of LD50 data.
Burzala-Kowalczyk, Lidia; Jongbloed, Geurt
2011-04-01
The need to identify toxicologically equivalent doses across different species is a major issue in toxicology and risk assessment. In this article, we investigate interspecies scaling based on the allometric equation applied to the single oral LD50 data previously analyzed by Rhomberg and Wolff. We focus on the statistical approach, namely, regression analysis of the mentioned data. In contrast to Rhomberg and Wolff's analysis of species pairs, we perform an overall analysis based on the whole data set. From our study it follows that if one assumes a single scaling rule for all species and substances in the data set, then β = 1 is the most natural choice among a set of candidates known in the literature. In fact, we obtain quite narrow confidence intervals for this parameter. However, the estimate of the variance in the model is relatively high, resulting in rather wide prediction intervals. © 2010 Society for Risk Analysis.
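The regression at the heart of this analysis can be sketched on synthetic data. The allometric equation dose = a·W^β becomes linear on a log-log scale, so β is estimated as an ordinary least-squares slope; the body weights and scatter level below are illustrative, not the Rhomberg and Wolff data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cross-species data generated with beta = 1 plus lognormal scatter.
W = np.array([0.02, 0.2, 2.0, 10.0, 70.0, 500.0])   # body weights, kg
true_a, true_beta = 50.0, 1.0
dose = true_a * W**true_beta * np.exp(0.2 * rng.standard_normal(W.size))

# log(dose) = log(a) + beta * log(W): fit by ordinary least squares.
X = np.column_stack([np.ones(W.size), np.log(W)])
y = np.log(dose)
coef = np.linalg.lstsq(X, y, rcond=None)[0]
log_a, beta = coef

# The residual variance on the log scale is what drives the width of the
# prediction intervals the abstract mentions.
resid = y - X @ coef
```

A narrow confidence interval for β can coexist with wide prediction intervals: the slope is pinned down by the large spread in log(W), while individual substances still scatter around the fitted line.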
Steady-state, lumped-parameter model for capacitor-run, single-phase induction motors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umans, S.D.
1996-01-01
This paper documents a technique for deriving a steady-state, lumped-parameter model for capacitor-run, single-phase induction motors. The objective of this model is to predict motor performance parameters such as torque, loss distribution, and efficiency as a function of applied voltage and motor speed as well as the temperatures of the stator windings and of the rotor. The model includes representations of both the main and auxiliary windings (including arbitrary external impedances) and also the effects of core and rotational losses. The technique can be easily implemented and the resultant model can be used in a wide variety of analyses to investigate motor performance as a function of load, speed, and winding and rotor temperatures. The technique is based upon a coupled-circuit representation of the induction motor. A notable feature of the model is the technique used for representing core loss. In equivalent-circuit representations of transformers and induction motors, core loss is typically represented by a core-loss resistance in shunt with the magnetizing inductance. In order to maintain the coupled-circuit viewpoint adopted in this paper, this technique was modified slightly; core loss is represented by a set of core-loss resistances connected to the "secondaries" of a set of windings which perfectly couple to the air-gap flux of the motor. An example of the technique is presented based upon a 3.5 kW, single-phase, capacitor-run motor and the validity of the technique is demonstrated by comparing predicted and measured motor performance.
Haskey, Shaun R.; Lanctot, Matthew J.; Liu, Y. Q.; ...
2015-01-05
Parameter scans show the strong dependence of the plasma response on the poloidal structure of the applied field, highlighting the importance of being able to control this parameter using non-axisymmetric coil sets. An extensive examination of the linear single-fluid plasma response to n = 3 magnetic perturbations in L-mode DIII-D lower single-null plasmas is presented. The effects of plasma resistivity, toroidal rotation and applied field structure are calculated using the linear single-fluid MHD code, MARS-F. Measures which separate the response into a pitch-resonant and resonant field amplification (RFA) component are used to demonstrate the extent to which resonant screening and RFA occur. The ability to control the ratio of pitch-resonant fields to RFA by varying the phasing between upper and lower resonant magnetic perturbation coil sets is shown. The predicted magnetic probe outputs and displacement at the x-point are also calculated for comparison with experiments. Additionally, modelling of the linear plasma response using experimental toroidal rotation profiles and Spitzer-like resistivity profiles is compared with results which provide experimental evidence of a direct link between the decay of the resonant screening response and the formation of a 3D boundary. As a result, good agreement is found during the initial application of the MP; however, later in the shot a sudden drop in the poloidal magnetic probe output occurs which is not captured in the linear single-fluid modelling.
3D Printing — The Basins of Tristability in the Lorenz System
NASA Astrophysics Data System (ADS)
Xiong, Anda; Sprott, Julien C.; Lyu, Jingxuan; Wang, Xilu
The famous Lorenz system is studied and analyzed for a particular set of parameters originally proposed by Lorenz. With those parameters, the system has a single globally attracting strange attractor, meaning that almost all initial conditions in its 3D state space approach the attractor as time advances. However, with a slight change in one of the parameters, the chaotic attractor coexists with a symmetric pair of stable equilibrium points, and the resulting tri-stable system has three intertwined basins of attraction. The advent of 3D printers now makes it possible to visualize the topology of such basins of attraction as the results presented here illustrate.
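The tristable regime can be explored numerically: for rho between roughly 24.06 and 24.74 (with the classic sigma = 10, beta = 8/3), the strange attractor coexists with the stable equilibria C±. The sketch below integrates the system and classifies which attractor an initial condition reaches; the specific rho value and classification threshold are illustrative choices, not necessarily those used for the 3D prints.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 24.4, 8.0 / 3.0     # inside the tristable window

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# The two symmetric equilibria C+ and C- (stable for rho below ~24.74).
c = np.sqrt(beta * (rho - 1.0))
C_plus = np.array([c, c, rho - 1.0])
C_minus = np.array([-c, -c, rho - 1.0])

def classify(s0, t_end=200.0):
    """Return 'C+', 'C-' or 'chaotic' for the attractor reached from s0."""
    sol = solve_ivp(lorenz, (0.0, t_end), s0, rtol=1e-8, atol=1e-10)
    end = sol.y[:, -1]
    if np.linalg.norm(end - C_plus) < 1.0:
        return "C+"
    if np.linalg.norm(end - C_minus) < 1.0:
        return "C-"
    return "chaotic"

# A point perturbed slightly off the stable equilibrium spirals back to it.
label = classify(C_plus + 1e-6)
```

Sweeping `classify` over a grid of initial conditions is the computation behind basin images; the intertwined boundaries between the three labels are what the 3D prints make tangible.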
Fisher information theory for parameter estimation in single molecule microscopy: tutorial
Chao, Jerry; Ward, E. Sally; Ober, Raimund J.
2016-01-01
Estimation of a parameter of interest from image data represents a task that is commonly carried out in single molecule microscopy data analysis. The determination of the positional coordinates of a molecule from its image, for example, forms the basis of standard applications such as single molecule tracking and localization-based superresolution image reconstruction. Assuming that the estimator used recovers, on average, the true value of the parameter, its accuracy, or standard deviation, is then at best equal to the square root of the Cramér-Rao lower bound. The Cramér-Rao lower bound can therefore be used as a benchmark in the evaluation of the accuracy of an estimator. Additionally, as its value can be computed and assessed for different experimental settings, it is useful as an experimental design tool. This tutorial demonstrates a mathematical framework that has been specifically developed to calculate the Cramér-Rao lower bound for estimation problems in single molecule microscopy and, more broadly, fluorescence microscopy. The material includes a presentation of the photon detection process that underlies all image data, various image data models that describe images acquired with different detector types, and Fisher information expressions that are necessary for the calculation of the lower bound. Throughout the tutorial, examples involving concrete estimation problems are used to illustrate the effects of various factors on the accuracy of parameter estimation, and more generally, to demonstrate the flexibility of the mathematical framework. PMID:27409706
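A standard special case of the framework the tutorial develops can be sketched directly: for an in-focus 2D Gaussian image profile with N detected photons and no background noise, the Fisher information for the x-coordinate reduces to N/sigma², giving the well-known localization limit sigma/sqrt(N). The photon counts and PSF width below are illustrative.

```python
import numpy as np

def localization_limit(n_photons, psf_sigma_nm):
    """sqrt of the Cramer-Rao lower bound for x, Gaussian PSF, no background."""
    fisher = n_photons / psf_sigma_nm**2     # Fisher information for x
    return 1.0 / np.sqrt(fisher)             # best possible std. dev., in nm

limit_nm = localization_limit(1000, 100.0)   # e.g. 1000 photons, 100 nm PSF

# Monte Carlo check: for this idealized model the centroid of the photon
# positions attains the bound, so its spread should match limit_nm.
rng = np.random.default_rng(5)
mc_std = rng.normal(0.0, 100.0, size=(4000, 1000)).mean(axis=1).std()
```

Real detectors add background, pixelation, and readout noise, each of which reduces the Fisher information and raises the bound; computing those corrections is exactly what the tutorial's general expressions are for.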
Paliwal, Himanshu; Shirts, Michael R
2013-11-12
Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single-point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
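The principle of predicting an unsampled state from reevaluated energies can be sketched with one-sided exponential reweighting (Zwanzig's free energy perturbation), the simplest member of the family MBAR generalizes. The harmonic "states" below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

beta = 1.0                         # 1/kT in reduced units

# Sampled state A: U_A = 0.5*x^2, so x ~ N(0, 1) at beta = 1.
x = rng.standard_normal(500_000)
U_A = 0.5 * x**2
U_B = 0.5 * 2.0 * x**2             # "unsampled" state B: stiffer spring, k = 2

# Single-point energy reevaluation at B, then exponential reweighting:
# dF = -(1/beta) * ln < exp(-beta * (U_B - U_A)) >_A
dF = -np.log(np.mean(np.exp(-beta * (U_B - U_A)))) / beta

# Analytic reference for a harmonic well: F = -(1/beta) ln sqrt(2*pi/(beta*k)),
# so dF = 0.5 * ln(2) ~ 0.3466 in these units.
```

MBAR improves on this one-sided estimator by optimally combining samples from all sampled states at once, which is what allows the paper to scan many cutoff settings from a handful of simulations.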
Generative Representations for Evolving Families of Designs
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2003-01-01
Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.
Rodrigues, Philip; Wilkinson, Callum; McFarland, Kevin
2016-08-24
The longstanding discrepancy between bubble chamber measurements of νμ-induced single pion production channels has led to large uncertainties in pion production cross section parameters for many years. We extend the reanalysis of pion production data in deuterium bubble chambers, in which this discrepancy is resolved, to include the νμn → μ⁻pπ⁰ and νμn → μ⁻nπ⁺ channels, and use the resulting data to fit the parameters of the GENIE pion production model. We find a set of parameters that describes the bubble chamber data better than the GENIE default parameters, and provide updated central values and reduced uncertainties for use in neutrino oscillation and cross section analyses that use the GENIE model. We also find that GENIE's non-resonant background prediction has to be significantly reduced to fit the data, which may help to explain the recent discrepancies between simulation and data observed in the MINERνA coherent pion and NOνA oscillation analyses.
Keskin, O.; Bahar, I.; Badretdinov, A. Y.; Ptitsyn, O. B.; Jernigan, R. L.
1998-01-01
Whether knowledge-based intra-molecular inter-residue potentials are valid to represent inter-molecular interactions taking place at protein-protein interfaces has been questioned in several studies. Differences in the chain connectivity effect and in residue packing geometry between interfaces and single chain monomers have been pointed out as possible sources of distinct energetics for the two cases. In the present study, the interfacial regions of protein-protein complexes are examined to extract inter-molecular inter-residue potentials, using the same statistical methods as those previously adopted for intra-molecular residue pairs. Two sets of energy parameters are derived, corresponding to solvent-mediation and "average residue" mediation. The former set is shown to be highly correlated (correlation coefficient 0.89) with that previously obtained for inter-residue interactions within single chain monomers, while the latter exhibits a weaker correlation (0.69) with its intra-molecular counterpart. In addition to the close similarity of intra- and inter-molecular solvent-mediated potentials, they are shown to be significantly more residue-specific and thereby discriminative compared to the residue-mediated ones, indicating that solvent-mediation plays a major role in controlling the effective inter-residue interactions, either at interfaces, or within single monomers. Based on this observation, a reduced set of energy parameters comprising 20 one-body and 3 two-body terms is proposed (as opposed to the 20 x 20 tables of inter-residue potentials), which reproduces the conventional 20 x 20 tables with a correlation coefficient of 0.99. PMID:9865952
Bell's theorem and the problem of decidability between the views of Einstein and Bohr.
Hess, K; Philipp, W
2001-12-04
Einstein, Podolsky, and Rosen (EPR) designed a gedanken experiment suggesting that a theory more complete than quantum mechanics should exist. The EPR design was later realized in various forms, with experimental results close to the quantum mechanical prediction. The experimental results by themselves have no bearing on the EPR claim that quantum mechanics must be incomplete, nor on the existence of hidden parameters. However, the well-known inequalities of Bell are based on the assumption that local hidden parameters exist and, when combined with conflicting experimental results, do appear to prove that local hidden parameters cannot exist. This fact leaves only instantaneous actions at a distance (called "spooky" by Einstein) to explain the experiments. The Bell inequalities are based on a mathematical model of the EPR experiments. They have no experimental confirmation, because they contradict the results of all EPR experiments. In addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions; for instance, he assumes that the hidden parameters are governed by a single probability measure independent of the analyzer settings. We argue that the mathematical model of Bell excludes a large set of local hidden variables and a large variety of probability densities. Our set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does permit derivation of the quantum result and is consistent with all known experiments.
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant because higher accuracy is required of machined components in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered and its performance is evaluated using the black hole algorithm (BHA). BHA is based on the fundamental idea of a black hole and has fewer operating parameters to tune. Two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of the process parameters with respect to the performance parameters are reported for a better and more effective understanding of the process, using a single objective at a time. The results obtained using BHA compare favourably with those of other metaheuristic algorithms attempted by previous researchers, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO).
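A minimal sketch of the BHA itself, assuming its standard formulation (random initial stars, the best star acting as the black hole, an event-horizon respawn rule); since the paper's ECM regression models for MRR and overcut are not reproduced here, a stand-in sphere objective is minimized instead.

```python
import numpy as np

def black_hole_optimize(f, bounds, n_stars=30, n_iter=200, seed=0):
    """Minimize f over a box with the black hole algorithm (standard form)."""
    rng = np.random.default_rng(seed)
    lo = np.asarray(bounds[0], dtype=float)
    hi = np.asarray(bounds[1], dtype=float)
    stars = rng.uniform(lo, hi, size=(n_stars, lo.size))
    for _ in range(n_iter):
        fitness = np.array([f(s) for s in stars])
        bh = stars[np.argmin(fitness)].copy()             # best star = black hole
        stars += rng.random((n_stars, 1)) * (bh - stars)  # pull stars toward it
        fitness = np.array([f(s) for s in stars])
        radius = fitness.min() / max(fitness.sum(), 1e-300)   # event horizon
        for i in range(n_stars):
            if np.linalg.norm(stars[i] - bh) < radius and not np.allclose(stars[i], bh):
                stars[i] = rng.uniform(lo, hi)            # swallowed: respawn randomly
    fitness = np.array([f(s) for s in stars])
    best = int(np.argmin(fitness))
    return stars[best], float(fitness[best])

# stand-in objective: the sphere function over [-5, 5]^2
best_x, best_f = black_hole_optimize(lambda x: float(np.sum(x * x)),
                                     ([-5.0, -5.0], [5.0, 5.0]))
```

The single tunable quantity is the population size, which matches the paper's point about BHA having few operating parameters.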
Microcavity morphology optimization
NASA Astrophysics Data System (ADS)
Ferdous, Fahmida; Demchenko, Alena A.; Vyatchanin, Sergey P.; Matsko, Andrey B.; Maleki, Lute
2014-09-01
High spectral mode density of conventional optical cavities is detrimental to the generation of broad optical frequency combs and to other linear and nonlinear applications. In this work we optimize the morphology of high-Q whispering gallery (WG) and Fabry-Perot (FP) cavities and find a set of parameters that allows treating them, essentially, as single-mode structures, thus removing limitations associated with a high density of cavity mode spectra. We show that both single-mode WGs and single-mode FP cavities have similar physical properties, in spite of their different loss mechanisms. The morphology optimization does not lead to a reduction of quality factors of modes belonging to the basic family. We study the parameter space numerically and find the region where the highest possible Q factor of the cavity modes can be realized while just having a single bound state in the cavity. The value of the Q factor is comparable with that achieved in conventional cavities. The proposed cavity structures will be beneficial for generation of octave spanning coherent frequency combs and will prevent undesirable effects of parametric instability in laser gravitational wave detectors.
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical ⁹⁹ᵐTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10⁵ total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
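The basis construction step can be sketched as follows. The time grid, rate range, and truncation order below are illustrative assumptions, not the paper's values, and the convolution with a measured input function is omitted.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 121)            # acquisition time grid (min), assumed
rates = np.logspace(-3.0, 0.0, 200)        # candidate decay rates (1/min), assumed
modes = np.exp(-np.outer(rates, t))        # each row: one first-order mode

# the SVD exposes the parameter redundancy: a handful of orthogonal
# functions span every exponential mode in the anticipated range
U, s, Vt = np.linalg.svd(modes, full_matrices=False)
basis = Vt[:8]                             # reduced orthogonal basis (8 functions)

# any mode in the range is reproduced accurately by projection onto the basis
target = np.exp(-0.05 * t)
recon = (basis @ target) @ basis           # rows of Vt are orthonormal
err = np.max(np.abs(recon - target))
```

In the paper this orthogonal set is additionally convolved with the measured input function, and the Moore-Penrose pseudoinverse then supplies the basis coefficients.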
van den Noort, Josien C; Verhagen, Rens; van Dijk, Kees J; Veltink, Peter H; Vos, Michelle C P M; de Bie, Rob M A; Bour, Lo J; Heida, Ciska T
2017-10-01
This proof-of-principle study describes the methodology and demonstrates the applicability of a system, consisting of miniature inertial sensors on the hand and a separate force sensor, to objectively quantify hand motor symptoms in patients with Parkinson's disease (PD) in a clinical setting (off- and on-medication conditions). Four PD patients were measured in the off- and on-dopaminergic-medication conditions. Finger tapping, rapid hand opening/closing, hand pro/supination, tremor (during rest, a mental task and a kinetic task) and wrist rigidity movements were measured with the system (called the PowerGlove). To demonstrate applicability, various outcome parameters of the measured hand motor symptoms in the off- vs. on-medication condition are presented. The methodology and results show the applicability of the PowerGlove in a clinical research setting to objectively quantify hand bradykinesia, tremor and rigidity in PD patients using a single system. The PowerGlove measured a difference between the off- and on-medication conditions in all tasks in the presented patients with most of its outcome parameters. Further study of the validity and reliability of the outcome parameters is required in a larger cohort of patients, to arrive at an optimal set of parameters that can assist clinical evaluation and decision-making.
Analyzing Single-Molecule Time Series via Nonparametric Bayesian Inference
Hines, Keegan E.; Bankston, John R.; Aldrich, Richard W.
2015-01-01
The ability to measure the properties of proteins at the single-molecule level offers an unparalleled glimpse into biological systems at the molecular scale. The interpretation of single-molecule time series has often been rooted in statistical mechanics and the theory of Markov processes. While existing analysis methods have been useful, they are not without significant limitations including problems of model selection and parameter nonidentifiability. To address these challenges, we introduce the use of nonparametric Bayesian inference for the analysis of single-molecule time series. These methods provide a flexible way to extract structure from data instead of assuming models beforehand. We demonstrate these methods with applications to several diverse settings in single-molecule biophysics. This approach provides a well-constrained and rigorously grounded method for determining the number of biophysical states underlying single-molecule data. PMID:25650922
Rain-rate data base development and rain-rate climate analysis
NASA Technical Reports Server (NTRS)
Crane, Robert K.
1993-01-01
The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. These were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distribution functions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set them. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
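Of the candidate models, the lognormal is the simplest to fit. A hedged sketch, using synthetic gauge data in place of the CCIR empirical distributions (which are not reproduced here), matches the moments of ln R and evaluates a model exceedance probability:

```python
import numpy as np
from math import erf

# Synthetic "gauge" record standing in for a real edf (assumed values):
# rain rates during rainy intervals drawn from a lognormal distribution.
rng = np.random.default_rng(2)
rain_rates = rng.lognormal(mean=1.0, sigma=0.8, size=5000)   # mm/h

# lognormal fit by matching the first two moments of ln R
log_r = np.log(rain_rates)
mu_hat, sigma_hat = log_r.mean(), log_r.std()

# modeled exceedance probability P(R > 10 mm/h) from the fitted parameters
z = (np.log(10.0) - mu_hat) / (sigma_hat * np.sqrt(2.0))
p_exceed = 0.5 * (1.0 - erf(z))
```

Mapping a pair (mu, sigma) globally, rather than full measured distributions, is the "limited set of parameters" idea described above.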
A study on parameter variation effects on battery packs for electric vehicles
NASA Astrophysics Data System (ADS)
Zhou, Long; Zheng, Yuejiu; Ouyang, Minggao; Lu, Languang
2017-10-01
Because one single cell cannot meet the power and driving-range requirements of an electric vehicle, battery packs with hundreds of single cells connected in parallel and series must be constructed. The most significant difference between a single cell and a battery pack is cell variation. Not only does cell variation affect pack energy density and power density, but it also causes early degradation of the battery and potential safety issues. The effects of cell variation on battery packs are studied here, which is of great significance for battery pack screening and management schemes. In this study, a description of the consistency characteristics of battery packs was first proposed and a pack model with 96 cells connected in series was established. A set of parameters is introduced to study cell variation, and their impacts on battery packs are analyzed through battery pack capacity loss simulations and experiments. Meanwhile, the capacity loss composition of the battery pack is obtained and verified by the temperature variation experiment. The results demonstrate that temperature, self-discharge rate and coulombic efficiency are the major parameters governing cell variation, and indicate that dissipative cell equalization is sufficient for the battery pack.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estrada Rodas, Ernesto A.; Neu, Richard W.
2017-09-11
A crystal viscoplasticity (CVP) model for the creep-fatigue interactions of nickel-base superalloy CMSX-8 is proposed. At the microstructure scale of relevance, the superalloys are a composite material composed of a γ phase and a γ' strengthening phase with unique deformation mechanisms that are highly dependent on temperature. Considering the differences in the deformation of the individual material phases is paramount to predicting the deformation behavior of superalloys over a wide range of temperatures. In this work, we account for the relevant deformation mechanisms that take place in both material phases by utilizing two additive strain rates to model the deformation in each material phase. The model is capable of representing the creep-fatigue interactions in single-crystal superalloys for realistic 3-dimensional components in an Abaqus User Material Subroutine (UMAT). Using a set of material parameters calibrated to superalloy CMSX-8, the model predicts the creep-fatigue, fatigue and thermomechanical fatigue behavior of this single-crystal superalloy. Finally, a sensitivity study of the material parameters explores the effect on the deformation of changes in the material parameters relevant to the microstructure.
Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis
NASA Astrophysics Data System (ADS)
Springer, Everett P.; Cundy, Terrance W.
1987-02-01
Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications for the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow responses using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and from texture-based predictors for two agricultural fields, each mapped as a single soil unit. The results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different from those using field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
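For reference, the Green-Ampt equation underlying both parameter sets can be solved for cumulative infiltration with a short fixed-point iteration. The loam-like parameter values below are assumptions for illustration, not values from either field site:

```python
import numpy as np

def green_ampt_cumulative(K, psi, dtheta, t, n_iter=100):
    """Cumulative infiltration F(t) under ponded conditions from the implicit
    Green-Ampt equation  F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t,
    solved by fixed-point iteration (the map is a contraction)."""
    a = psi * dtheta                      # suction head times moisture deficit
    F = max(K * t, 1e-9)                  # starting guess
    for _ in range(n_iter):
        F = K * t + a * np.log(1.0 + F / a)
    return F

# loam-like values, assumed for illustration only
K, psi, dtheta, t = 1.04, 8.89, 0.35, 2.0     # cm/h, cm, (-), h
F = green_ampt_cumulative(K, psi, dtheta, t)
f_rate = K * (1.0 + psi * dtheta / F)         # instantaneous infiltration rate
```

Texture-based predictors supply K, psi, and dtheta from soil class; the study's comparison amounts to feeding this same equation the two different parameter sets.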
Dynamic Response of Multiphase Porous Media
1993-06-16
[Fragmentary OCR of the report body; recoverable content: Figure 3.3 concerns extrapolation of linearly set parameters interpolated from a model fit (s = 1.03, t = R = 0.0); the test system was equipped with solenoid-operated valves so that tests could be conducted by a single operator, with nitrogen used for pressurization; Figure 6.6 shows the incident bar entering the pressure vessel that contains the test specimen, with the hose and valves used for filling.]
Plechawska, Małgorzata; Polańska, Joanna
2009-01-01
This article presents a method for the processing of mass spectrometry data. Mass spectra are modelled with Gaussian mixture models: every peak of the spectrum is represented by a single Gaussian, whose parameters describe the location, height and width of the corresponding peak. A custom implementation of the expectation-maximisation (EM) algorithm was used to perform all calculations. Errors were estimated with a virtual mass spectrometer; the tool discussed was originally designed to generate sets of spectra with defined parameters.
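A plain 1-D EM iteration of the kind described, run on synthetic two-peak data, can be sketched as follows. This is our simplified stand-in for the authors' custom implementation, which also handles peak-number selection and spectrum-specific details:

```python
import numpy as np

def em_gmm_1d(x, n_comp, n_iter=200):
    """Plain EM for a 1-D Gaussian mixture; each fitted component stands for
    one spectral peak (location = mu, height ~ weight, width = sigma)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_comp))   # spread-out initial means
    sigma = np.full(n_comp, x.std() / n_comp)
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        z = (x[:, None] - mu) / sigma
        pdf = w * np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, peak locations and widths
        n_k = r.sum(axis=0)
        w = n_k / x.size
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-9
    return w, mu, sigma

# synthetic "spectrum": samples drawn around two peak positions, 0 and 10
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(10.0, 1.0, 500)])
w, mu, sigma = em_gmm_1d(x, n_comp=2)
```

The virtual-spectrometer error estimation mentioned above corresponds to comparing recovered (w, mu, sigma) against the parameters used to generate such synthetic spectra.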
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
Branon search in hadronic colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cembranos, J. A. R. (Departamento de Física Teórica, Universidad Complutense de Madrid, 28040 Madrid); Dobado, A.
2004-11-01
In the context of brane-world scenarios with compactified extra dimensions, we study the production of brane fluctuations (branons) in hadron colliders (pp, pp̄, and e±p) in terms of the brane tension parameter f, the branon mass M, and the number of branons N. From the absence of monojet events at HERA and Tevatron (run I), we set bounds on these parameters, and we also study how such bounds could be improved at Tevatron (run II) and the future LHC. The single-photon channel is also analyzed for the latter two colliders.
Spiral Galaxy Lensing: A Model with Twist
NASA Astrophysics Data System (ADS)
Bell, Steven R.; Ernst, Brett; Fancher, Sean; Keeton, Charles R.; Komanduru, Abi; Lundberg, Erik
2014-12-01
We propose a single galaxy gravitational lensing model with a mass density that has a spiral structure. Namely, we extend the arcsine gravitational lens (a truncated singular isothermal elliptical model), adding an additional parameter that controls the amount of spiraling in the structure of the mass density. An important feature of our model is that, even though the mass density is sophisticated, we succeed in integrating the deflection term in closed form using a Gauss hypergeometric function. When the spiraling parameter is set to zero, this reduces to the arcsine lens.
Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidable...
Effect Size Measure and Analysis of Single Subject Designs
ERIC Educational Resources Information Center
Society for Research on Educational Effectiveness, 2013
2013-01-01
One of the vexing problems in the analysis of single-subject designs (SSDs) is the assessment of the effect of intervention. Serial dependence notwithstanding, the linear-model approach that has been advanced involves, in general, fitting regression lines (or curves) to the set of observations within each phase of the design and comparing the parameters of these…
Prediction of Environmental Impact of High-Energy Materials with Atomistic Computer Simulations
2010-11-01
…from a training set of compounds. Other methods include Quantitative Structure-Activity Relationship (QSAR) and Quantitative Structure-Property Relationship (QSPR) approaches; the development of QSPR/QSAR models, in contrast to boiling points and critical parameters derived from empirical correlations, aims to improve… [fragmentary indexed text; the remainder is acronym-glossary residue: Quadratic Configuration Interaction Singles Doubles; QSAR, Quantitative Structure-Activity Relationship; QSPR, Quantitative Structure-Property Relationship]
Bustamante, P; Pena, M A; Barra, J
2000-01-20
Sodium salts are often used in drug formulation, but their partial solubility parameters are not available. Sodium alters the physical properties of the drug, and knowledge of these parameters would help to predict adhesion properties that cannot be estimated using the solubility parameters of the parent acid. This work tests the applicability of the modified extended Hansen method for determining partial solubility parameters of sodium salts of acidic drugs containing a single hydrogen-bonding group (ibuprofen, sodium ibuprofen, benzoic acid and sodium benzoate). The method uses a regression analysis of the logarithm of the experimental mole fraction solubility of the drug against the partial solubility parameters of the solvents, using models with three and four parameters. The solubility of the drugs was determined in a set of solvents representative of several chemical classes, ranging from low to high solubility parameter values. The best results were obtained with the four-parameter model for the acidic drugs and with the three-parameter model for the sodium derivatives. The four-parameter model includes both a Lewis-acid and a Lewis-base term. Since the Lewis-acid properties of the sodium derivatives are blocked by sodium, the three-parameter model is recommended for this kind of compound. Comparison of the parameters obtained shows that sodium greatly changes the polar parameters whereas the dispersion parameter is not much affected. Consequently, the total solubility parameters of the salts are larger than those of the parent acids, in good agreement with the larger hydrophilicity expected from the introduction of sodium. The results indicate that the modified extended Hansen method can be applied to determine the partial solubility parameters of acidic drugs and their sodium salts.
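The regression step of the method can be sketched with synthetic data. The solvent parameter ranges and coefficients below are placeholders, not measured drug/solvent values, and the four-parameter variant would replace the single hydrogen-bonding term with separate Lewis-acid and Lewis-base terms:

```python
import numpy as np

# Synthetic stand-in data: partial solubility parameters for 12 hypothetical
# solvents, and mole-fraction solubilities generated from known coefficients.
rng = np.random.default_rng(0)
n = 12
dd = rng.uniform(14.0, 20.0, n)      # dispersion parameter of each solvent
dp = rng.uniform(2.0, 14.0, n)       # polar parameter
dh = rng.uniform(4.0, 20.0, n)       # hydrogen-bonding parameter
ln_x = 1.5 - 0.04 * dd + 0.09 * dp + 0.05 * dh + rng.normal(0.0, 0.01, n)

# three-parameter model: ln x2 = C0 + C1*dd + C2*dp + C3*dh
X = np.column_stack([np.ones(n), dd, dp, dh])
coef, *_ = np.linalg.lstsq(X, ln_x, rcond=None)
residuals = ln_x - X @ coef
```

The fitted coefficients play the role of the drug's partial solubility parameters; choosing solvents that span low to high parameter values, as in the study, keeps the regression well conditioned.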
Zelovich, Tamar; Hansen, Thorsten; Liu, Zhen-Fei; ...
2017-03-02
A parameter-free version of the recently developed driven Liouville-von Neumann equation [T. Zelovich et al., J. Chem. Theory Comput. 10(8), 2927-2941 (2014)] for electronic transport calculations in molecular junctions is presented. The single driving rate, appearing as a fitting parameter in the original methodology, is replaced by a set of state-dependent broadening factors applied to the different single-particle lead levels. These broadening factors are extracted explicitly from the self-energy of the corresponding electronic reservoir and are fully transferable to any junction incorporating the same lead model. The performance of the method is demonstrated via tight-binding and extended Hückel calculations of simple junction models. Our analytic considerations and numerical results indicate that the developed methodology constitutes a rigorous framework for the design of "black-box" algorithms to simulate electron dynamics in open quantum systems out of equilibrium.
Single Object & Time Series Spectroscopy with JWST NIRCam
NASA Technical Reports Server (NTRS)
Greene, Tom; Schlawin, Everett A.
2017-01-01
JWST will enable high signal-to-noise spectroscopic observations of the atmospheres of transiting planets with high sensitivity at wavelengths that are inaccessible with HST or other existing facilities. We plan to exploit this by measuring abundances, chemical compositions, cloud properties, and temperature-pressure parameters of a set of mostly warm (T ≈ 600-1200 K), low-mass (14-200 Earth-mass) planets in our guaranteed time program. These planets are expected to have significant molecular absorptions of H2O, CH4, CO2, CO, and other molecules that are key to determining these parameters and illuminating how and where the planets formed. We describe how we will use the NIRCam grisms to observe slitless transmission and emission spectra of these planets over the 2.4-5.0 micron wavelength range and how well these observations can measure our desired parameters, including how we set integration times and exposure parameters and obtain simultaneous shorter-wavelength images to track telescope pointing and stellar variability. We illustrate this with specific examples showing model spectra, simulated observations, expected information retrieval results, completed Astronomer's Proposal Tool observing templates, target visibility, and other considerations.
Crack Instability Predictions Using a Multi-Term Approach
NASA Technical Reports Server (NTRS)
Zanganeh, Mohammad; Forman, Royce G.
2015-01-01
Present crack instability analysis for fracture-critical flight hardware is normally performed using a single-parameter fracture toughness value, K(sub C), obtained from standard ASTM 2D geometry test specimens made from the appropriate material. These specimens do not sufficiently match the boundary conditions and the elastic-plastic constraint characteristics of the hardware component; moreover, most commonly used aircraft and aerospace structural materials exhibit some amount of stable crack growth before fracture, which makes the normal use of a single-parameter K(sub C) toughness value highly approximate. In the past, extensive studies have been conducted to improve the single-parameter (K- or J-controlled) approaches by introducing parameters accounting for the geometry or in-plane constraint effects. Using the J-integral together with the 'A' parameter as a measure of constraint is one of the most accurate elastic-plastic crack solutions currently available. In this work the feasibility of the J-A approach for predicting crack instability was investigated, first by ignoring the effects of stable crack growth (i.e., using critical J and A values) and second by accounting for stable crack growth using the J-delta a curve corrected with the 'A' parameter. A broad range of initial crack lengths and a wide range of specimen geometries, including C(T), M(T), ESE(T), SE(T), Double Edge Crack (DEC), Three-Hole-Tension (THT) and NC (crack from a notch) specimens manufactured from Al7075, were studied. Improvements in crack instability predictions were observed compared to the other methods available in the literature.
NASA Astrophysics Data System (ADS)
García-Morales, Vladimir; Manzanares, José A.; Mafe, Salvador
2017-04-01
We present a weakly coupled map lattice model for patterning that explores the effects exerted by weakening the local dynamic rules on model biological and artificial networks composed of two-state building blocks (cells). To this end, we use two cellular automata models based on (i) a smooth majority rule (model I) and (ii) a set of rules similar to those of Conway's Game of Life (model II). The normal and abnormal cell states evolve according to local rules that are modulated by a parameter κ. This parameter quantifies the effective weakening of the prescribed rules due to the limited coupling of each cell to its neighborhood and can be experimentally controlled by appropriate external agents. The emergent spatiotemporal maps of single-cell states should be of significance for positional information processes as well as for intercellular communication in tumorigenesis, where the collective normalization of abnormal single-cell states by a predominantly normal neighborhood may be crucial.
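A minimal stand-in for a κ-modulated majority rule (the paper's model I uses a smooth rule; the hard-threshold version below is our simplification) shows the normalization of an abnormal cell by its neighborhood:

```python
import numpy as np

def majority_step(grid, kappa=1.0):
    """One synchronous update of a two-state CA on a periodic grid.
    kappa in [0, 1] scales the neighborhood's influence: kappa = 1 is the
    strict majority rule, kappa = 0 freezes every cell in its current state."""
    neigh = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
                for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    field = (1.0 - kappa) * grid + kappa * neigh / 8.0
    return (field > 0.5).astype(int)

# a single abnormal cell (0) in a normal neighborhood (1) is corrected
g = np.ones((5, 5), dtype=int)
g[2, 2] = 0
normalized = majority_step(g, kappa=1.0)
```

Sweeping κ between these extremes is the kind of rule-weakening experiment the model is built to explore.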
Arefin, Md Shamsul
2012-01-01
This work presents a technique for the chirality (n, m) assignment of semiconducting single-wall carbon nanotubes by solving a set of empirical equations of the tight-binding model parameters. Empirical equations for the nearest-neighbor hopping parameters, relating the term (2n − m) to the first and second optical transition energies of semiconducting single-wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower- and higher-diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using the radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between experimental and theoretical Kataura plots. PMID:28348319
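The assignment step described above can be sketched as a small search: given an RBM frequency and a (2n − m) family (fixed in practice by the measured transition energy), enumerate semiconducting indices and keep the best diameter match. This is an illustrative reconstruction, not the authors' fitted equations; the relation d = 248/ω_RBM (cm⁻¹·nm) is one commonly used empirical form and is an assumption here.

```python
import math

A_CC = 0.246  # graphene lattice constant (nm)

def diameter(n, m):
    """Nanotube diameter in nm for chiral index (n, m)."""
    return A_CC * math.sqrt(n * n + n * m + m * m) / math.pi

def assign_chirality(omega_rbm, family, c_rbm=248.0, n_max=20):
    """Return the semiconducting (n, m) whose diameter best matches
    d = c_rbm / omega_rbm within the branch 2n - m = family.
    c_rbm = 248 cm^-1 nm is an assumed empirical RBM relation."""
    d_target = c_rbm / omega_rbm
    best = None
    for n in range(1, n_max + 1):
        for m in range(0, n + 1):
            if (n - m) % 3 == 0:        # metallic tube: skip
                continue
            if 2 * n - m != family:     # wrong (2n - m) family branch
                continue
            err = abs(diameter(n, m) - d_target)
            if best is None or err < best[0]:
                best = (err, (n, m))
    return best[1] if best else None
```

For instance, an RBM near 332 cm⁻¹ matches a diameter of about 0.75 nm, which the family index then resolves to a unique tube: (6,5) and (9,1) share that diameter, which is exactly why the transition energies are needed.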
A Functional Varying-Coefficient Single-Index Model for Functional Response Data
Li, Jialiang; Huang, Chao; Zhu, Hongtu
2016-01-01
Motivated by the analysis of imaging data, we propose a novel functional varying-coefficient single index model (FVCSIM) to carry out the regression analysis of functional response data on a set of covariates of interest. FVCSIM represents a new extension of varying-coefficient single index models for scalar responses collected from cross-sectional and longitudinal studies. An efficient estimation procedure is developed to iteratively estimate varying coefficient functions, link functions, index parameter vectors, and the covariance function of individual functions. We systematically examine the asymptotic properties of all estimators including the weak convergence of the estimated varying coefficient functions, the asymptotic distribution of the estimated index parameter vectors, and the uniform convergence rate of the estimated covariance function and their spectrum. Simulation studies are carried out to assess the finite-sample performance of the proposed procedure. We apply FVCSIM to investigating the development of white matter diffusivities along the corpus callosum skeleton obtained from Alzheimer’s Disease Neuroimaging Initiative (ADNI) study. PMID:29200540
Sampling-based ensemble segmentation against inter-operator variability
NASA Astrophysics Data System (ADS)
Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew
2011-03-01
Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing the user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging, and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated in a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p<.001).
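As a minimal sketch of the fusion step, the majority-voting rule named above can be written in a few lines (illustrative only; the study's framework also includes the sampling-based perturbation and the averaging and EM combiners):

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks (same shape) into a consensus mask:
    a voxel is labeled foreground when more than half the masks agree."""
    stack = np.stack([np.asarray(m, dtype=int) for m in masks])
    return (stack.sum(axis=0) * 2 > len(masks)).astype(int)
```

The averaging combiner differs only in producing a soft map first (`stack.mean(axis=0)`) and thresholding it afterwards.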
NASA Astrophysics Data System (ADS)
Cardenas, Nelson; Kyrish, Matthew; Taylor, Daniel; Fraelich, Margaret; Lechuga, Oscar; Claytor, Richard; Claytor, Nelson
2015-03-01
Electro-chemical polishing is routinely used in the anodizing industry to achieve specular surface finishes on various metal products prior to anodizing. Electro-chemical polishing functions by leveling the microscopic peaks and valleys of the substrate, thereby increasing specularity and reducing light scattering. The rate of attack depends on the physical characteristics (height, depth, and width) of the microscopic structures that constitute the surface finish. To prepare the sample, mechanical polishing such as buffing or grinding is typically required before etching. This type of mechanical polishing produces random microscopic structures at varying depths and widths, so the electropolishing parameters are determined on an ad hoc basis. Alternatively, single-point diamond turning offers excellent repeatability and highly specific control of substrate polishing parameters. While polishing, the diamond tool leaves behind an associated tool mark, which is related to the diamond tool geometry and machining parameters. Machine parameters such as tool cutting depth, speed, and step-over can be changed in situ, thus providing control over the spatial frequency of the microscopic structures characteristic of the surface topography of the substrate. By combining single-point diamond turning with subsequent electro-chemical etching, ultra-smooth polishing of both rotationally symmetric and free-form mirrors and molds is possible. Additionally, machining parameters can be set to optimize post-polishing for increased surface quality and reduced processing times. In this work, we present a study of substrate surface finish based on diamond-turning tool-mark spatial frequency with subsequent electro-chemical polishing.
Data Reorganization for Optimal Time Series Data Access, Analysis, and Visualization
NASA Astrophysics Data System (ADS)
Rui, H.; Teng, W. L.; Strub, R.; Vollmer, B.
2012-12-01
The way data are archived is often not optimal for their access by many user communities (e.g., hydrological), particularly if the data volumes and/or number of data files are large. The number of data records of a non-static data set generally increases with time. Therefore, most data sets are commonly archived by time steps, one step per file, often containing multiple variables. However, many research and application efforts need time series data for a given geographical location or area, i.e., a data organization that is orthogonal to the way the data are archived. The retrieval of a time series of the entire temporal coverage of a data set for a single variable at a single data point, in an optimal way, is an important and longstanding challenge, especially for large science data sets (i.e., with volumes greater than 100 GB). Two examples of such large data sets are the North American Land Data Assimilation System (NLDAS) and Global Land Data Assimilation System (GLDAS), archived at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC; Hydrology Data Holdings Portal, http://disc.sci.gsfc.nasa.gov/hydrology/data-holdings). To date, the NLDAS data set, hourly 0.125x0.125° from Jan. 1, 1979 to present, has a total volume greater than 3 TB (compressed). The GLDAS data set, 3-hourly and monthly 0.25x0.25° and 1.0x1.0° Jan. 1948 to present, has a total volume greater than 1 TB (compressed). Both data sets are accessible, in the archived time step format, via several convenient methods, including Mirador search and download (http://mirador.gsfc.nasa.gov/), GrADS Data Server (GDS; http://hydro1.sci.gsfc.nasa.gov/dods/), direct FTP (ftp://hydro1.sci.gsfc.nasa.gov/data/s4pa/), and Giovanni Online Visualization and Analysis (http://disc.sci.gsfc.nasa.gov/giovanni). However, users who need long time series currently have no efficient way to retrieve them. 
Continuing a longstanding tradition of facilitating data access, analysis, and visualization that contribute to knowledge discovery from large science data sets, the GES DISC recently began a NASA ACCESS-funded project to, in part, optimally reorganize selected large data sets for access and use by the hydrological user community. This presentation discusses the following aspects of the project: (1) explorations of approaches, such as database and file system; (2) findings for each approach, such as limitations, concerns, pros, and cons; (3) implementation of data reorganization via the file system approach, including data processing (parameter and spatial subsetting), metadata and file structure of the reorganized time series data (a true "Data Rod": single variable, single grid point, and entire data range per file), and production and quality control. The reorganized time series data will be integrated into several broadly used data tools, such as NASA Giovanni and those provided by CUAHSI HIS (http://his.cuahsi.org/) and EPA BASINS (http://water.epa.gov/scitech/datait/models/basins/), as well as accessible via direct FTP, along with documentation and sample reading software. The data reorganization is initially, as part of the project, applied to selected popular hydrology-related parameters, with other parameters to be added as resources permit.
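The "Data Rod" reorganization amounts to transposing a time-major archive into per-grid-point series. A toy sketch, with shapes and names that are illustrative rather than the actual GES DISC pipeline:

```python
import numpy as np

def build_data_rods(cube):
    """cube: array of shape (n_time, n_lat, n_lon) for a single variable,
    as if each time step came from one archived file. Returns the
    'data rod' layout: {(lat_idx, lon_idx): full time series at that point},
    i.e., single variable, single grid point, entire record per entry."""
    n_time, n_lat, n_lon = cube.shape
    return {(i, j): cube[:, i, j].copy()
            for i in range(n_lat) for j in range(n_lon)}
```

The real challenge the abstract describes is doing this transposition efficiently at terabyte scale across millions of files, not the transposition itself.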
Theoretical study of the XP3 (X = Al, B, Ga) clusters
NASA Astrophysics Data System (ADS)
Ueno, Leonardo T.; Lopes, Cinara; Malaspina, Thaciana; Roberto-Neto, Orlando; Canuto, Sylvio; Machado, Francisco B. C.
2012-05-01
The lowest singlet and triplet states of AlP3, GaP3 and BP3 molecules with Cs, C2v and C3v symmetries were characterized using the B3LYP functional and the aug-cc-pVTZ and aug-cc-pVQZ correlation-consistent basis sets. Geometrical parameters and vibrational frequencies were calculated and compared with existing experimental and theoretical data. Relative energies were obtained with single-point CCSD(T) calculations using the aug-cc-pVTZ, aug-cc-pVQZ and aug-cc-pV5Z basis sets, and then extrapolated to the complete basis set (CBS) limit.
Liang, D.; Xu, X.; Tsang, L.; Andreadis, K.M.; Josberger, E.G.
2008-01-01
A model for the microwave emissions of multilayer dry snowpacks, based on dense media radiative transfer (DMRT) theory with the quasicrystalline approximation (QCA), provides more accurate results than emissions determined by a homogeneous-snowpack model and other scattering models. The DMRT model accounts for adhesive aggregate effects, which lead to dense media Mie scattering, by using a sticky particle model. With the multilayer model, we examined both the frequency and polarization dependence of brightness temperatures (Tb's) from representative snowpacks and compared them to results from a single-layer model; the multilayer model predicts higher polarization differences, by as much as a factor of two, and weaker frequency dependence. We also studied the temporal evolution of Tb from multilayer snowpacks. The difference between Tb's at 18.7 and 36.5 GHz can be 5 K lower than the single-layer model prediction in this paper. Using the snowpack observations from the Cold Land Processes Field Experiment as input for both the multilayer and single-layer models shows that the multilayer Tb's are in better agreement with the data than those of the single-layer model. With one set of physical parameters, the multilayer QCA/DMRT model matched all four channels of Tb observations simultaneously, whereas the single-layer model could only reproduce vertically polarized Tb's. Also, the polarization difference and frequency dependence were accurately matched by the multilayer model using the same set of physical parameters. Hence, algorithms for the retrieval of snowpack depth or water equivalent should be based on multilayer scattering models to achieve greater accuracy. © 2008 IEEE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbiener, W.A.; Cudnik, R.A.; Dykhuizen, R.C.
Experimental studies were conducted in a 2/15-scale model of a four-loop pressurized water reactor at pressures to 75 psia to extend the understanding of steam-water interaction phenomena and processes associated with a loss-of-coolant accident. Plenum filling studies were conducted with hydraulic communication between the cold leg and core steam supplies and hot walls, with both fixed and ramped steam flows. Comparisons of correlational fits have been made for penetration data obtained with hydraulic communication, fixed cold leg steam, and no cold leg steam. Statistical tests applied to these correlational fits have indicated that the hydraulic communication and fixed cold leg steam data can be considered to be a common data set. Comparing either of these data sets to the no cold leg steam data using the statistical test indicated that it was unlikely that these sets could be considered a common data set. The introduction of cold leg steam results in a slight decrease in penetration relative to that obtained without cold leg steam at the same value of subcooling of water entering the downcomer. A dimensionless parameter which is a weighted mean of a modified Froude number and the Weber number has been proposed as a scaling parameter for penetration data. This parameter contains an additional degree of freedom which allows data from different scales to collapse more closely to a single curve than current scaling parameters permit.
Sánchez-Jiménez, Pedro E; Pérez-Maqueda, Luis A; Perejón, Antonio; Criado, José M
2013-02-05
This paper provides some clarifications regarding the use of model-fitting methods of kinetic analysis for estimating the activation energy of a process, in response to some results recently published in Chemistry Central Journal. The model-fitting methods of Arrhenius and Šatava are used to determine the activation energy of a single simulated curve. It is shown that most kinetic models correctly fit the data, each providing a different value for the activation energy. Therefore, it is not really possible to determine the correct activation energy from a single non-isothermal curve. On the other hand, when a set of curves recorded under different heating schedules is used, the correct kinetic parameters can be clearly discerned. Here, it is shown that the activation energy and the kinetic model cannot be unambiguously determined from a single experimental curve recorded under non-isothermal conditions. Thus, the use of a set of curves recorded under different heating schedules is mandatory if model-fitting methods are employed.
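The point about multiple heating schedules can be illustrated with a Kissinger-type exercise (an assumption here, not the method of the cited paper): simulate first-order peak temperatures at several heating rates β, then recover the activation energy from the linear plot of ln(β/Tp²) against 1/Tp, whose slope is −E/R.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def peak_temperature(E, A, beta, lo=300.0, hi=2000.0):
    """Solve the first-order peak condition E*beta/(R*T^2) = A*exp(-E/(R*T))
    for T by bisection (T in K, beta in K/s); assumes the root lies in [lo, hi]."""
    f = lambda T: E * beta / (R * T * T) - A * math.exp(-E / (R * T))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def kissinger_activation_energy(betas, peaks):
    """Least-squares slope of ln(beta/Tp^2) vs 1/Tp gives -E/R."""
    xs = [1.0 / T for T in peaks]
    ys = [math.log(b / (T * T)) for b, T in zip(betas, peaks)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return -slope * R
```

A single heating rate gives one point on this line and thus no slope; several rates pin E down, which is the abstract's central claim.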
Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters
NASA Astrophysics Data System (ADS)
Kim, A. G.
2011-02-01
I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
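A stripped-down version of the idea, fitting a constant-magnitude candle rather than a cosmology, shows the single joint fit of the mean and the intrinsic dispersion; the grid search and all numbers below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_with_intrinsic_dispersion(mags, errs, sig_grid):
    """Jointly fit the mean magnitude and the intrinsic dispersion by
    maximising the Gaussian likelihood with total variance
    err_i^2 + sig_int^2 (simple grid search over sig_int)."""
    best = None
    for sig in sig_grid:
        var = errs ** 2 + sig ** 2
        w = 1.0 / var
        mu = np.sum(w * mags) / np.sum(w)                  # weighted mean
        nll = np.sum((mags - mu) ** 2 * w + np.log(var))   # -2 ln L (up to const)
        if best is None or nll < best[0]:
            best = (nll, mu, sig)
    return best[1], best[2]
```

The ln(var) penalty term is what lets the data determine sig_int: without it, the likelihood would always prefer the largest dispersion on the grid.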
Drake, Andrew W; Klakamp, Scott L
2007-01-10
A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard Plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism®) used for analysis of ligand/receptor binding data assumes only the K(D) influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K(D) being measured, this assumption of always being under K(D)-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data.
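The depletion effect the authors warn about is captured by the exact single-site quadratic solution, sketched below (a standard mass-action result, not the authors' specific 4-parameter equation):

```python
import math

def bound_complex(r_tot, l_tot, kd):
    """Exact equilibrium concentration of receptor-ligand complex for a
    single site with ligand depletion (quadratic mass-action solution),
    rather than the hyperbola bound = r_tot * l_tot / (l_tot + kd),
    which assumes free ligand ~ total ligand."""
    s = r_tot + l_tot + kd
    return (s - math.sqrt(s * s - 4.0 * r_tot * l_tot)) / 2.0
```

When total receptor is small relative to K(D) the quadratic collapses to the familiar hyperbola; at high cell (receptor) numbers the two diverge, which is the regime where the 2-parameter fit becomes unreliable.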
Edenharter, Günther M; Gartner, Daniel; Pförringer, Dominik
2017-06-01
Increasing costs of material resources challenge hospitals to stay profitable. Particularly in anesthesia departments and intensive care units, bronchoscopes are used for various indications. Inefficient management of single- and multiple-use systems can influence the hospitals' material costs substantially. Using mathematical modeling, we developed a strategic decision support tool to determine the optimum mix of disposable and reusable bronchoscopy devices in the setting of an intensive care unit. A mathematical model with the objective to minimize costs in relation to demand constraints for bronchoscopy devices was formulated. The stochastic model decides whether single-use, multi-use, or a strategically chosen mix of both device types should be used. A decision support tool was developed in which parameters for uncertain demand such as mean, standard deviation, and a reliability parameter can be inserted. Furthermore, reprocessing costs per procedure, procurement, and maintenance costs for devices can be parameterized. Our experiments show for which demand pattern and reliability measure it is efficient to only use reusable or disposable devices and under which circumstances the combination of both device types is beneficial. To determine the optimum mix of single-use and reusable bronchoscopy devices effectively and efficiently, managers can enter their hospital-specific parameters such as demand and prices into the decision support tool. The software can be downloaded at: https://github.com/drdanielgartner/bronchomix/.
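A toy version of the device-mix decision might look like the following, where the amortized fixed cost per reusable device, the per-procedure reprocessing cost, and the disposable cost are all hypothetical parameters rather than the authors' model:

```python
def optimal_reusable_count(demand_pmf, c_fixed, c_reproc, c_disp, k_max=50):
    """demand_pmf: {procedures_demanded: probability} for one period.
    k reusable devices cover up to k procedures (reprocessed at c_reproc
    each); excess demand is met with disposables at c_disp each; each
    reusable costs an amortized c_fixed per period. Return the k that
    minimises expected total cost."""
    def expected_cost(k):
        cost = k * c_fixed
        for d, p in demand_pmf.items():
            cost += p * (min(d, k) * c_reproc + max(d - k, 0) * c_disp)
        return cost
    return min(range(k_max + 1), key=expected_cost)
```

The qualitative behavior matches the abstract: with cheap disposables the optimum is k = 0, with cheap reprocessing it is a high demand quantile, and in between a mix of both device types wins.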
Software Analytical Instrument for Assessment of the Process of Casting Slabs
NASA Astrophysics Data System (ADS)
Franěk, Zdeněk; Kavička, František; Štětina, Josef; Masarik, Miloš
2010-06-01
The paper describes the original proposal of ways of solution and function of the program equipment for assessment of the process of casting slabs. The program system LITIOS was developed and implemented in EVRAZ Vitkovice Steel Ostrava on the equipment of continuous casting of steel (further only ECC). This program system works on the data warehouse of technological parameters of casting and quality parameters of slabs. It enables an ECC technologist to analyze the course of casting melt and with using statistics methods to set the influence of single technological parameters on the duality of final slabs. The system also enables long term monitoring and optimization of the production.
Cooperative inversion of magnetotelluric and seismic data sets
NASA Astrophysics Data System (ADS)
Markovic, M.; Santos, F.
2012-04-01
Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, both in academia and industry, to use two or more different data sets and thus obtain the subsurface property distribution; in our case, we are dealing with magnetotelluric and seismic data sets. In our approach, we are developing an algorithm based on the fuzzy c-means clustering technique for pattern recognition of geophysical data. A separate inversion is performed at every step, and information is exchanged for model integration. Interrelationships between parameters from different models are not required in analytical form. We are investigating how different numbers of clusters affect zonation and the spatial distribution of parameters. In our study, optimization in fuzzy c-means clustering (for magnetotelluric and seismic data) is compared for two cases: first alternating optimization, and then a hybrid method (alternating optimization + quasi-Newton method). Acknowledgment: This work is supported by FCT Portugal.
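A minimal 1-D fuzzy c-means loop, sketching the alternating optimization mentioned above (illustrative only; the authors' implementation couples two data sets and also tests a hybrid quasi-Newton step):

```python
import numpy as np

def fuzzy_c_means_1d(x, c, m=2.0, n_iter=100, seed=0):
    """Alternating optimisation for fuzzy c-means on 1-D data x:
    update centers as membership-weighted means, then memberships
    from inverse distances. m > 1 is the fuzziness exponent."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # memberships: columns sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = (w @ x) / w.sum(axis=1)    # weighted cluster centers
        d = np.abs(centers[:, None] - x[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)            # standard FCM membership update
    return centers, u
```

In the cooperative-inversion setting, the cluster memberships computed on one model (e.g., resistivity) are what gets exchanged to constrain the other (e.g., velocity).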
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model: the first consists of pumping/injection wells, and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to a poor quantification of predictive uncertainty.
Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective to provide support in management of the in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology of utilizing model predictive uncertainty methods in environmental management.
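To first order, NSMC-style calibration-constrained sampling perturbs the calibrated parameters only along the null space of the model's sensitivity matrix, so the fit to the observations is (approximately) preserved. A linearized sketch with a hypothetical Jacobian, not the PEST-style implementation typically used in practice:

```python
import numpy as np

def null_space_samples(J, p_cal, n_samples, scale=1.0, seed=0):
    """Generate parameter sets that, to first order, leave model outputs
    at their calibrated values: perturb p_cal only along the null space
    of the sensitivity (Jacobian) matrix J (shape n_obs x n_params)."""
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-10 * s[0]))
    V_null = Vt[rank:].T                  # orthonormal basis of the null space
    samples = []
    for _ in range(n_samples):
        z = rng.normal(size=V_null.shape[1]) * scale
        samples.append(p_cal + V_null @ z)
    return np.array(samples)
```

For a nonlinear model each sample still needs a (cheap) re-calibration check, which is where the two variants discussed above differ.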
The CHIC Model: A Global Model for Coupled Binary Data
ERIC Educational Resources Information Center
Wilderjans, Tom; Ceulemans, Eva; Van Mechelen, Iven
2008-01-01
Often problems result in the collection of coupled data, which consist of different N-way N-mode data blocks that have one or more modes in common. To reveal the structure underlying such data, an integrated modeling strategy, with a single set of parameters for the common mode(s), that is estimated based on the information in all data blocks, may…
Somatic deleterious mutation rate in a woody plant: estimation from phenotypic data
Bobiwash, K; Schultz, S T; Schoen, D J
2013-01-01
We conducted controlled crosses in populations of the long-lived clonal shrub, Vaccinium angustifolium (lowbush blueberry) to estimate inbreeding depression and mutation parameters associated with somatic deleterious mutation. Inbreeding depression level was high, with many plants failing to set fruit after self-pollination. We also compared fruit set from autogamous pollinations (pollen collected from within the same inflorescence) with fruit set from geitonogamous pollinations (pollen collected from the same plant but from inflorescences separated by several meters of branch growth). The difference between geitonogamous versus autogamous fitness within single plants is referred to as 'autogamy depression' (AD). AD can be caused by somatic deleterious mutation. AD was significantly different from zero for fruit set. We developed a maximum-likelihood procedure to estimate somatic mutation parameters from AD, and applied it to geitonogamous and autogamous fruit set data from this experiment. We infer that, on average, approximately three sublethal, partially dominant somatic mutations exist within the crowns of the plants studied. We conclude that somatic mutation in this woody plant results in an overall genomic deleterious mutation rate that exceeds the rate measured to date for annual plants. Some implications of this result for evolutionary biology and agriculture are discussed. PMID:23778990
NASA Astrophysics Data System (ADS)
Liou, Cheng-Dar
2015-09-01
This study investigates an infinite-capacity Markovian queue with a single unreliable service station, in which customers may balk (not enter) and renege (leave the queue after entering). The service station is subject to working breakdowns even if no customers are in the system. The matrix-analytic method is used to compute the steady-state probabilities for the number of customers, the rate matrix, and the stability condition of the system. A single-objective model for cost and a bi-objective model for cost and expected waiting time are derived for the system to fit practical applications. The particle swarm optimisation algorithm is implemented to find the optimal combinations of parameters in pursuit of minimum cost. Two different approaches to identifying the Pareto optimal set are used and compared: the epsilon-constraint method and the non-dominated sorting genetic algorithm. The compared results support using the traditional epsilon-constraint method, which is computationally faster and permits a direct sensitivity analysis of the solution under constraint or parameter perturbation. The Pareto front and the set of non-dominated solutions are obtained and illustrated. Decision makers can use these to improve their decision-making quality.
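The epsilon-constraint idea reduces the bi-objective problem to a sequence of single-objective ones: bound the second objective and minimise the first. A toy sketch on an M/M/1-style cost/waiting-time trade-off (all parameters hypothetical, not the queue analysed above):

```python
def epsilon_constraint(candidates, f1, f2, epsilons):
    """For each bound eps on objective f2, minimise f1 over the feasible
    candidates; collect the distinct minimisers as a Pareto set."""
    pareto = []
    for eps in epsilons:
        feasible = [x for x in candidates if f2(x) <= eps]
        if not feasible:
            continue
        best = min(feasible, key=f1)
        if best not in pareto:
            pareto.append(best)
    return pareto
```

Sweeping the bound traces the Pareto front one point per epsilon, which is also why the method lends itself to direct sensitivity analysis with respect to the constraint.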
Basis for paraxial surface-plasmon-polariton packets
NASA Astrophysics Data System (ADS)
Martinez-Herrero, Rosario; Manjavacas, Alejandro
2016-12-01
We present a theoretical framework for the study of surface-plasmon polariton (SPP) packets propagating along a lossy metal-dielectric interface within the paraxial approximation. Using a rigorous formulation based on the plane-wave spectrum formalism, we introduce a set of modes that constitute a complete basis set for the solutions of Maxwell's equations for a metal-dielectric interface in the paraxial approximation. The use of this set of modes allows us to fully analyze the evolution of the transversal structure of SPP packets beyond the single plane-wave approximation. As a paradigmatic example, we analyze the case of a Gaussian SPP mode, for which, exploiting the analogy with paraxial optical beams, we introduce a set of parameters that characterize its propagation.
Next-Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements only takes 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, and petal size, suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS; this makes integration of these models into large telescope or satellite models possible.
Next Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William; Fitzgerald, Matthew; Stahl, Philip
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics has created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000 + element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements, suspensions can be created either for each individual petal or the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements only takes 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, and petal size, suspension geometry with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor, all the key shell thickness parameters are accessible and comments in deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN as GRIDPOINT SETS, this make integration of these models into large telescope or satellite models possible.
Next Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, H. Philip
2013-01-01
Advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep-core low-temperature fusion, Corning's continued improvements in the Frit bonding process, and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling models of 400,000+ elements can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software accommodates any current mirror manufacturing technique, single substrates, and multiple arrays of substrates, and can merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size, and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files that can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS, which makes integration of these models into large telescope or satellite models easier.
Automated palpation for breast tissue discrimination based on viscoelastic biomechanical properties.
Tsukune, Mariko; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, G Masakatsu
2015-05-01
Accurate, noninvasive methods are sought for breast tumor detection and diagnosis. In particular, a need for noninvasive techniques that measure both the nonlinear elastic and viscoelastic properties of breast tissue has been identified. For diagnostic purposes, it is important to select a nonlinear viscoelastic model with a small number of parameters that correlate highly with histological structure. However, the combination of conventional viscoelastic models with nonlinear elastic models requires a large number of parameters. A nonlinear viscoelastic model of breast tissue based on a simple equation with few parameters was therefore developed and tested. The nonlinear viscoelastic properties of soft tissues in porcine breast were measured experimentally using fresh ex vivo samples. Robotic palpation was used for measurements employed in a finite element model, and these measurements were used to calculate nonlinear viscoelastic parameters for fat, fibroglandular breast parenchyma and muscle. The ability of these parameters to distinguish the tissue types was evaluated in a two-step statistical analysis that included Holm's pairwise [Formula: see text] test. The discrimination error rate of a set of parameters was evaluated by the Mahalanobis distance. Ex vivo testing in porcine breast revealed significant differences in the nonlinear viscoelastic parameters among combinations of the three tissue types, and the discrimination error rate was low for all tested combinations. Although tissue discrimination was not achieved using only a single nonlinear viscoelastic parameter, a set of four nonlinear viscoelastic parameters was able to reliably and accurately discriminate fat, breast fibroglandular tissue and muscle.
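The Mahalanobis-distance discrimination step can be sketched as follows. The four-parameter feature vectors, class means and shared covariance below are invented placeholders for illustration, not the paper's measured porcine values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-parameter "viscoelastic" feature vectors for three tissue
# classes; means and the pooled covariance are illustrative only.
means = {
    "fat":            np.array([1.0, 0.2, 0.5, 0.1]),
    "fibroglandular": np.array([2.0, 0.6, 1.0, 0.3]),
    "muscle":         np.array([3.5, 1.1, 1.8, 0.6]),
}
cov = np.diag([0.05, 0.01, 0.03, 0.005])   # shared (pooled) covariance
cov_inv = np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def classify(x):
    # Assign x to the class whose mean is nearest in Mahalanobis distance.
    return min(means, key=lambda k: mahalanobis(x, means[k], cov_inv))

# Draw one noisy sample per class and check that it is recovered.
for name, mu in means.items():
    sample = rng.multivariate_normal(mu, cov)
    print(name, "->", classify(sample))
```

The Mahalanobis metric weights each parameter by its (co)variance, which is why a set of jointly discriminative parameters can separate classes that overlap in any single parameter.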
Evaluation of a new 240-μm single-use holmium:YAG optical fiber for flexible ureteroscopy.
Khemees, Tariq A; Shore, David M; Antiporda, Michael; Teichman, Joel M H; Knudsen, Bodo E
2013-04-01
Numerous holmium:yttrium-aluminum-garnet laser fibers are available for flexible ureteroscopy. Performance and durability can vary widely among manufacturers and product lines, and differences within a single product line have been reported. We sought to evaluate a newly developed nontapered, single-use 240-μm fiber, Flexiva™ 200 (Boston Scientific, Natick, MA), during clinical use and in a bench-testing model. A total of 100 new fibers were tested after their use in 100 consecutive flexible ureteroscopic lithotripsy procedures by a single surgeon (B.K.). Prospectively recorded clinical parameters were laser pulse energy and frequency settings, total energy delivered, and fiber failure. Subsequently, each fiber was bench-tested using an established protocol, evaluating true fiber diameter, flexibility, tip degradation, energy transmission in straight and 180° bend configurations, and fiber failure threshold under stress testing. The mean total energy delivered was 2.20 kJ (range 0-18.24 kJ), and the most common laser settings were 0.8 J at 8 Hz, 0.2 J at 50 Hz, and 1.0 J at 10 Hz. No fiber fractured during clinical procedures. The true fiber diameter was 450 μm. Fiber tips burnt back an average of 1.664 mm, but burn-back was highly variable. With a laser setting of 400 mJ at 5 Hz, the mean energy transmitted was 451 and 441 mJ in the straight and 180° bend configurations, respectively. Thirteen percent of fibers fractured at a bend radius of 0.5 cm, and a positive correlation with the total energy delivered during clinical use was identified. Fiber performance was consistent in terms of energy transmission and resistance to fracture when activated in a bent configuration. Fiber failure during stress testing showed a significant correlation with the total energy delivered during the clinical procedure. The lack of fiber fracture during clinical use may reduce the risk of flexible endoscope damage due to fiber failure.
NASA Astrophysics Data System (ADS)
Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong
2014-03-01
A novel flow-mode magneto-rheological (MR) engine mount, integrating a diaphragm de-coupler and a spoiler plate, is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome the stiffness increase at high frequencies. A lumped-parameter model of the MR engine mount in a single-degree-of-freedom system is further developed, based on the bond graph method, to predict the performance of the MR engine mount accurately. An optimization model is established to minimize the total force transmissibility over several frequency ranges of interest. In this model, the lumped parameters are the design variables, while the maximum force transmissibility and the corresponding frequency in the low-frequency range, together with bounds on the individual lumped parameters, serve as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process, and an improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated, and a set of real design parameters is then obtained from the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. A program flowchart for the improved NSGA-II is given. The results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges addressed.
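The force-transmissibility objective being minimized can be illustrated for a plain single-degree-of-freedom mount (the full MR bond-graph model has more lumped elements). The stiffness, damping and mass values below are arbitrary, and the band totals stand in for the multi-band cost an NSGA-II search would score.

```python
import numpy as np

def transmissibility(f, k, c, m):
    """Force transmissibility of a single-DOF mount with stiffness k [N/m],
    viscous damping c [N s/m] and mass m [kg] at frequency f [Hz]."""
    wn = np.sqrt(k / m)                        # natural frequency [rad/s]
    zeta = c / (2.0 * np.sqrt(k * m))          # damping ratio
    r = 2.0 * np.pi * f / wn                   # frequency ratio
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2
    return np.sqrt(num / den)

def band_total(k, c, m, f_lo, f_hi, n=2000):
    # Trapezoid-rule total of the transmissibility over one band -- one term
    # of the multi-band cost a multi-objective search would minimize.
    f = np.linspace(f_lo, f_hi, n)
    T = transmissibility(f, k, c, m)
    return float(np.sum((T[1:] + T[:-1]) * np.diff(f)) / 2.0)

k, m = 2.0e5, 40.0                    # natural frequency ~ 11.3 Hz
for c in (200.0, 2000.0):             # light vs heavy damping
    print(c, band_total(k, c, m, 5.0, 20.0), band_total(k, c, m, 50.0, 200.0))
# Heavier damping tames the resonance band but raises the high-frequency
# band -- the trade-off that motivates a multi-objective formulation.
```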
Modeling the bidirectional reflectance distribution function of mixed finite plant canopies and soil
NASA Technical Reports Server (NTRS)
Schluessel, G.; Dickinson, R. E.; Privette, J. L.; Emery, W. J.; Kokaly, R.
1994-01-01
An analytical model of the bidirectional reflectance for optically semi-infinite plant canopies has been extended to describe the reflectance of finite-depth canopies with contributions from the underlying soil. The model depends on 10 independent parameters describing vegetation and soil optical and structural properties. The model is inverted with a nonlinear minimization routine using directional reflectance data for lawn (leaf area index (LAI) = 9.9), soybeans (LAI = 2.9) and simulated reflectance data (LAI = 1.0) from a numerical bidirectional reflectance distribution function (BRDF) model (Myneni et al., 1988). While the ten-parameter model results in relatively low rms differences for the BRDF, most of the retrieved parameters exhibit poor stability; the most stable parameter was the single-scattering albedo of the vegetation. Canopy albedo could be derived with less than 5% relative error in the visible and less than 1% in the near-infrared. Sensitivity analyses were performed to determine which of the 10 parameters were most important and to assess the effects of Gaussian noise on the parameter retrievals. Three of the 10 parameters were identified as describing most of the BRDF variability. At low LAI values the most influential parameters were the single-scattering albedos (both soil and vegetation) and LAI, while at higher LAI values (greater than 2.5) these shifted to the two scattering phase function parameters for vegetation and the single-scattering albedo of the vegetation. The three-parameter model, formed by fixing the seven least significant parameters, gave higher rms values but was less sensitive to noise in the BRDF than the full ten-parameter model. A full hemispherical reflectance data set for lawn was then interpolated to yield BRDF values corresponding to advanced very high resolution radiometer (AVHRR) scan geometries collected over a period of nine days.
The resulting parameters and BRDFs are similar to those for the full sampling geometry, suggesting that the limited geometry of AVHRR measurements might be used to reliably retrieve BRDF and canopy albedo with this model.
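The inversion workflow (forward model plus nonlinear minimization of the residuals) can be sketched on a stand-in model. The paper's 10-parameter canopy model is not reproduced here, so a hypothetical 3-parameter reflectance function with a Henyey-Greenstein phase term is used instead; all parameter names and values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, mu):
    # Hypothetical 3-parameter stand-in: single-scattering albedo `omega`,
    # Henyey-Greenstein asymmetry `g`, and a linear soil term `r_s`.
    omega, g, r_s = params
    phase = (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5
    return omega * phase / (4.0 * np.pi) + r_s * mu

mu = np.linspace(0.2, 1.0, 30)          # cosines of the view zenith angle
true = np.array([0.8, 0.3, 0.1])
rng = np.random.default_rng(1)
obs = model(true, mu) + 0.002 * rng.standard_normal(mu.size)

# Invert by nonlinear least squares from a deliberately rough first guess.
fit = least_squares(lambda p: model(p, mu) - obs,
                    x0=[0.5, 0.0, 0.05],
                    bounds=([0.0, -0.99, 0.0], [1.0, 0.99, 1.0]))
print(fit.x)   # should land near [0.8, 0.3, 0.1]
```

Repeating such fits with perturbed observations is one simple way to probe the retrieval stability the abstract discusses.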
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitman, A.J.
The sensitivity of a land-surface scheme (the Biosphere Atmosphere Transfer Scheme, BATS) to its parameter values was investigated using a single column model. Identifying which parameters were important in controlling the turbulent energy fluxes, temperature, soil moisture, and runoff was dependent upon many factors. In the simulation of a nonmoisture-stressed tropical forest, results were dependent on a combination of reservoir terms (soil depth, root distribution), flux efficiency terms (roughness length, stomatal resistance), and available energy (albedo). If moisture became limited, the reservoir terms increased in importance because the total fluxes predicted depended on moisture availability and not on the rate of transfer between the surface and the atmosphere. The sensitivity shown by BATS depended on which vegetation type was being simulated, which variable was used to determine sensitivity, the magnitude and sign of the parameter change, the climate regime (precipitation amount and frequency), and soil moisture levels and proximity to wilting. The interactions between these factors made it difficult to identify the most important parameters in BATS. Therefore, this paper does not argue that a particular set of parameters is important in BATS; rather, it shows that no general ranking of parameters is possible. It is also emphasized that using 'stand-alone' forcing to examine the sensitivity of a land-surface scheme to perturbations, in either parameters or the atmosphere, is unreliable due to the lack of surface-atmosphere feedbacks.
Station coordinates, baselines, and earth rotation from Lageos laser ranging - 1976-1984
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Schultz, B. E.; Eanes, R. J.
1985-01-01
The orbit of the Lageos satellite is well suited as a reference frame for studying the rotation of the earth and the relative motion of points on the earth's crust. The satellite laser measurements can determine the location of a set of tracking stations in an appropriate terrestrial coordinate system. The motion of the earth's rotation axis relative to this system can be studied on the basis of the established tracking station locations. The present investigation is concerned with an analysis of 7.7 years of Lageos laser ranging data. In the first solution considered, the entire data span was used to adjust a single set of station positions simultaneously with orbit and earth rotation parameters. Attention is given to the accuracy of earth rotation parameters which are determined as an inherent part of the solution process.
Substructure based modeling of nickel single crystals cycled at low plastic strain amplitudes
NASA Astrophysics Data System (ADS)
Zhou, Dong
In this dissertation a meso-scale, substructure-based, composite single crystal model is developed in stages, from a simple uniaxial model to a 3-D finite element method (FEM) model with explicit substructures and, further, with substructure evolution parameters, to simulate the completely reversed, strain-controlled, low-plastic-strain-amplitude cyclic deformation of nickel single crystals. Rate-dependent viscoplasticity and Armstrong-Frederick-type kinematic hardening rules are applied to substructures on slip systems to describe the kinematic hardening behavior of the crystals. Three explicit substructure components are assumed in the composite single crystal model: "loop patches" and "channels," aligned in parallel within a "vein matrix," and persistent slip bands (PSBs) connected in series with the vein matrix. A magnetic domain rotation model is presented to describe the reverse magnetostriction of single-crystal nickel. Kinematic hardening parameters are obtained by fitting responses to experimental data in the uniaxial model, and the validity of the uniaxial assumption is verified in the 3-D FEM model with explicit substructures. With information gathered from experiments, all control parameters in the model, including the hardening parameters, the volume fractions of loop patches and PSBs, and the variation of Young's modulus, are correlated to cumulative plastic strain and/or plastic strain amplitude, and the whole cyclic deformation history of single-crystal nickel at low plastic strain amplitudes is simulated in the uniaxial model. These parameters are then implanted in the 3-D FEM model to simulate the formation of PSB bands. A resolved shear stress criterion is set to trigger the formation of PSBs, and stress perturbation in the specimen is obtained by assigning several elements PSB material properties a priori.
Displacement increment and plastic strain amplitude control, together with overall stress-strain monitoring and output, are carried out in the ABAQUS user subroutines DISP and URDFIL, respectively, while the constitutive formulations of the FEM model are coded and implemented in UMAT. The results of the simulations are compared to experiments. The model verified the validity of Winter's two-phase model and Taylor's uniform stress assumption, explored substructure evolution and "intrinsic" behavior within substructures, and successfully simulated the process of PSB band formation and propagation.
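The Armstrong-Frederick kinematic hardening rule at the core of the model can be illustrated in its simplest 1-D rate-independent form. The dissertation's model is rate-dependent, crystallographic and multi-substructure; the constants below are illustrative, not fitted nickel values.

```python
import numpy as np

# Minimal 1-D rate-independent plasticity with an Armstrong-Frederick (AF)
# backstress X: dX = C*deps_p - gamma*X*|deps_p|. Illustrative constants.
E, sig_y = 200e3, 100.0          # Young's modulus, yield stress [MPa]
C, gamma = 50e3, 300.0           # AF hardening modulus and recovery term

def run(strain_path):
    eps_p, X = 0.0, 0.0
    stress = []
    for eps in strain_path:
        sig_tr = E * (eps - eps_p)               # elastic predictor
        f = abs(sig_tr - X) - sig_y
        if f > 0.0:                              # plastic corrector
            n = np.sign(sig_tr - X)
            d_lam = f / (E + C - gamma * X * n)  # linearized consistency
            eps_p += d_lam * n
            X += C * d_lam * n - gamma * X * d_lam
        stress.append(E * (eps - eps_p))
    return np.array(stress)

# One and a half fully reversed cycles at 1% strain amplitude.
path = np.concatenate([np.linspace(0.0, 0.01, 200),
                       np.linspace(0.01, -0.01, 400),
                       np.linspace(-0.01, 0.01, 400)])
sig = run(path)
# The recovery term makes the backstress saturate, so the peak stress stays
# below sig_y + C/gamma -- the hysteresis-loop shape AF rules are chosen for.
```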
General Boundary Conditions for a Majorana Single-Particle in a Box in (1 + 1) Dimensions
NASA Astrophysics Data System (ADS)
De Vincenzo, Salvatore; Sánchez, Carlet
2018-05-01
We consider the problem of a Majorana single-particle in a box in (1 + 1) dimensions. We show that the most general set of boundary conditions for the equation that models this particle is composed of two families of boundary conditions, each depending on a real parameter. Within this set, there are only four confining boundary conditions but infinitely many non-confining ones. Our results remain valid when a Lorentz scalar potential is included in the equation; no other Lorentz potential can be added. We also show that the four confining boundary conditions for the Majorana particle are precisely the four boundary conditions that can arise mathematically from the general linear boundary condition used in the MIT bag model. Certainly, the four boundary conditions for the Majorana particle are also subject to the Majorana condition.
Cost-estimating relationships for space programs
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1992-01-01
Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed, the sources of error in cost models are examined, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture, single-system methods; and (2) the Price paradigms, which incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
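A minimal example of a parameter-based CER of the kind discussed is a power-law relationship cost = a · weight^b, fitted in log space. The weight/cost pairs below are fabricated for illustration only, not data from any real program.

```python
import numpy as np

# Hypothetical historical systems: dry weight [kg] vs development cost [$M].
weight = np.array([500.0, 1200.0, 2500.0, 4000.0, 8000.0])
cost   = np.array([ 90.0,  180.0,  320.0,  450.0,  760.0])

# Fit log(cost) = b*log(weight) + log(a): an ordinary least-squares line.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

def cer(w):
    """Estimated cost [$M] for a new system of weight w [kg]."""
    return a * w ** b

print(round(b, 2), round(cer(3000.0), 1))
```

An exponent b below 1, as here, encodes the economy of scale such weight-based CERs typically assume; the cultural adjustments the paper discusses would enter as additional multiplicative factors.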
NASA Technical Reports Server (NTRS)
Wilson, R. Gale
1994-01-01
The potential capabilities and limitations of single ball lenses for coupling laser diode radiation to single-mode optical fibers have been analyzed; parameters important to optical communications were specifically considered. These parameters included coupling efficiency, effective numerical apertures, lens radius, lens refractive index, wavelength, magnification in imaging the laser diode on the fiber, and defocus to counterbalance spherical aberration of the lens. Limiting numerical apertures in object and image space were determined under the constraint that the lens perform to the Rayleigh criterion of 0.25-wavelength (Strehl ratio = 0.80). The spherical aberration-defocus balance to provide an optical path difference of 0.25 wavelength units was shown to define a constant coupling efficiency (i.e., 0.56). The relative numerical aperture capabilities of the ball lens were determined for a set of wavelengths and associated fiber-core diameters of particular interest for single-mode fiber-optic communication. The results support general continuing efforts in the optical fiber communications industry to improve coupling links within such systems with emphasis on manufacturing simplicity, system packaging flexibility, relaxation of assembly alignment tolerances, cost reduction of opto-electronic components and long term reliability and stability.
NASA Astrophysics Data System (ADS)
Montazeri, A.; West, C.; Monk, S. D.; Taylor, C. J.
2017-04-01
This paper concerns the problem of dynamic modelling and parameter estimation for a seven degree of freedom hydraulic manipulator. The laboratory example is a dual-manipulator mobile robotic platform used for research into nuclear decommissioning. In contrast to earlier control model-orientated research using the same machine, the paper develops a nonlinear, mechanistic simulation model that can subsequently be used to investigate physically meaningful disturbances. The second contribution is to optimise the parameters of the new model, i.e. to determine reliable estimates of the physical parameters of a complex robotic arm which are not known in advance. To address the nonlinear and non-convex nature of the problem, the research relies on the multi-objectivisation of an output error single-performance index. The developed algorithm utilises a multi-objective genetic algorithm (GA) in order to find a proper solution. The performance of the model and the GA is evaluated using both simulated (i.e. with a known set of 'true' parameters) and experimental data. Both simulation and experimental results show that multi-objectivisation has improved convergence of the estimated parameters compared to the single-objective output error problem formulation. This is achieved by integrating the validation phase inside the algorithm implicitly and exploiting the inherent structure of the multi-objective GA for this specific system identification problem.
Correlation Filtering of Modal Dynamics using the Laplace Wavelet
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Lind, Rick; Brenner, Martin J.
1997-01-01
Wavelet analysis allows processing of transient response data commonly encountered in vibration health monitoring tasks such as aircraft flutter testing. The Laplace wavelet is formulated as the impulse response of a single-mode system so as to resemble data features commonly encountered in these tasks. A correlation filtering approach is introduced that uses the Laplace wavelet to decompose a signal into impulse responses of single-mode subsystems. Applications using responses from flutter testing of aeroelastic systems demonstrate that modal parameters and stability estimates can be obtained by correlation filtering free-decay data with a set of Laplace wavelets.
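A minimal sketch of the correlation-filtering idea: a dictionary of single-mode impulse responses over a (frequency, damping) grid is correlated against a free-decay signal, and the best-matching atom yields the modal parameter estimates. The wavelet is written here in a real damped-sinusoid form, and the grid and signal are synthetic.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)

# Synthetic single-mode free decay: 12 Hz, 2% damping (stand-in flutter data).
f0, z0 = 12.0, 0.02
w0 = 2.0 * np.pi * f0
signal = np.exp(-z0 * w0 * t) * np.sin(w0 * np.sqrt(1.0 - z0**2) * t)

def laplace_wavelet(f, zeta, t):
    """Unit-norm impulse response of a single-DOF mode (damped sinusoid)."""
    w = 2.0 * np.pi * f
    psi = np.exp(-zeta * w * t) * np.sin(w * np.sqrt(1.0 - zeta**2) * t)
    return psi / np.linalg.norm(psi)

# Correlation filtering: scan a small (f, zeta) dictionary, keep best match.
best = max(((f, z) for f in np.arange(8.0, 16.1, 0.5)
                   for z in (0.005, 0.01, 0.02, 0.05)),
           key=lambda p: abs(np.dot(laplace_wavelet(*p, t), signal)))
print(best)   # -> (12.0, 0.02): the generating parameters are recovered
```

Because the atoms are unit-norm, the correlation magnitude is maximized exactly when an atom is proportional to the signal, which is what makes the dictionary scan a parameter estimator.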
NASA Technical Reports Server (NTRS)
Wallace, Terryl A.; Bey, Kim S.; Taminger, Karen M. B.; Hafley, Robert A.
2004-01-01
A study was conducted to evaluate the relative significance of input parameters on Ti-6Al-4V deposits produced by an electron beam freeform fabrication process under development at the NASA Langley Research Center. Five input parameters were chosen (beam voltage, beam current, translation speed, wire feed rate, and beam focus), and a design of experiments (DOE) approach was used to develop a set of 16 experiments to evaluate the relative importance of these parameters on the resulting deposits. Both single-bead and multi-bead stacks were fabricated using the 16 combinations, and the resulting heights and widths of the stack deposits were measured. The resulting microstructures were also characterized to determine the impact of these parameters on the size of the melt pool and heat-affected zone. The relative importance of each input parameter on the height and width of the multi-bead stacks will be discussed.
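Sixteen runs for five two-level factors is consistent with a half-fraction 2^(5-1) design. A standard construction (not necessarily the study's actual design matrix) looks like this:

```python
from itertools import product

# 2^(5-1) fractional factorial: 16 runs for five two-level factors, with the
# fifth factor aliased as E = A*B*C*D (defining relation I = ABCDE, res. V).
# Factor names follow the abstract; levels are coded -1 / +1.
factors = ("voltage", "current", "speed", "wire_feed", "focus")
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d                 # generator for the fifth factor
    runs.append(dict(zip(factors, (a, b, c, d, e))))

print(len(runs))                      # 16 distinct treatment combinations
# At resolution V, main effects are aliased only with 4-factor interactions,
# so the relative importance of each parameter can be estimated cleanly.
```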
Finite Element Analysis of Copper Single Crystal Shape Memory Alloy-Based Endodontic Instruments
NASA Astrophysics Data System (ADS)
Vincent, Marin; Thiebaud, Frédéric; Bel Haj Khalifa, Saifeddine; Engels-Deutsch, Marc; Ben Zineb, Tarak
2015-10-01
The aim of the present paper is the development of endodontic Cu-based single crystal Shape Memory Alloy (SMA) instruments in order to eliminate the antimicrobial and mechanical deficiencies observed with conventional Nickel-Titanium (NiTi) SMA files. A thermomechanical constitutive law, already developed and implemented in a finite element code by our research group, is adopted for the simulation of the single crystal SMA behavior. The corresponding material parameters were identified from experimental results for a tensile test at room temperature. A computer-aided design geometry was created and used for a finite element structural analysis of the endodontic Cu-based single crystal SMA files. The files are meshed with tetrahedral continuum elements to improve computation time and the accuracy of the results. The geometric parameters tested in this study are the length of the active blade, the rod length, the pitch, the taper, the tip diameter, and the rod diameter. For each set of adopted parameters, a finite element model is built and tested in combined bending-torsion loading in accordance with the ISO 3630-1 standard. The finite element analysis allowed an optimal geometry to be proposed for Cu-based single crystal SMA endodontic files. The same analysis was carried out for classical NiTi SMA files, and a comparison between the two kinds of files showed that Cu-based single crystal SMA files are less stiff than NiTi files. The Cu-based endodontic files could thus be used to improve root canal treatments. However, the finite element analysis brought out the need for further investigation based on experiments.
Application of modern radiative transfer tools to model laboratory quartz emissivity
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.
2005-08-01
Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm-1) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.
Direct Sensor Orientation of a Land-Based Mobile Mapping System
Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua
2011-01-01
A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This limitation is the major drawback due to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the estimated mounting parameters using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015
NASA Astrophysics Data System (ADS)
Bai, Jianwen; Shen, Zhenyao; Yan, Tiezhu
2017-09-01
An essential task in evaluating global water resource and pollution problems is to obtain the optimum set of parameters in hydrological models through calibration and validation. For a large-scale watershed, single-site calibration and validation may ignore spatial heterogeneity and may not meet the needs of the entire watershed. The goal of this study is to apply a multi-site calibration and validation of the Soil and Water Assessment Tool (SWAT), using the observed flow data at three monitoring sites within the Baihe watershed of the Miyun Reservoir watershed, China. Our results indicate that the multi-site calibration parameter values are more reasonable than those obtained from single-site calibrations. These results are mainly due to significant differences in topographic factors over the large-scale area, human activities and climate variability. The multi-site method involves dividing the large watershed into smaller subwatersheds and applying the calibrated parameters of the multi-site calibration to the entire watershed. It is anticipated that this case study provides experience of multi-site calibration in a large-scale basin, and a good foundation for the simulation of other pollutants in follow-up work in the Miyun Reservoir watershed and other similar large areas.
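A multi-site objective of the kind used in such calibrations can be sketched as a weighted Nash-Sutcliffe efficiency (NSE) over all gauged sites; the site data and the equal weights below are hypothetical, and the abstract does not state which efficiency metric or optimizer was used.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is a poor fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_site_objective(site_data, weights=None):
    # Aggregate NSE across monitoring sites; a calibrator would maximize
    # this instead of a single outlet's NSE, penalizing any poorly fit site.
    scores = [nse(o, s) for o, s in site_data]
    w = weights or [1.0 / len(scores)] * len(scores)
    return sum(wi * si for wi, si in zip(w, scores))

# Three hypothetical sites: flow fitted well at two, poorly at the third.
obs1, sim1 = [5, 9, 14, 8], [5.2, 8.7, 13.5, 8.4]
obs2, sim2 = [2, 4, 6, 3], [2.1, 3.8, 6.3, 3.0]
obs3, sim3 = [20, 35, 50, 30], [28, 30, 38, 36]
print(multi_site_objective([(obs1, sim1), (obs2, sim2), (obs3, sim3)]))
```

The aggregate score is dragged down by the badly simulated site, which is exactly how a multi-site objective prevents a single-outlet calibration from hiding spatial heterogeneity.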
Henzlova, Daniela; Menlove, Howard Olsen; Croft, Stephen; ...
2015-06-15
In the field of nuclear safeguards, passive neutron multiplicity counting (PNMC) is a method typically employed in non-destructive assay (NDA) of special nuclear material (SNM) for nonproliferation, verification and accountability purposes. PNMC is generally performed using a well-type thermal neutron counter and relies on the detection of correlated pairs or higher-order multiplets of neutrons emitted by an assayed item. To assay SNM, a set of parameters for a given well-counter is required to link the measured multiplicity rates to the assayed item properties. Detection efficiency, die-away time, gate utilization factors (tightly connected to die-away time), and the optimum gate width setting are among the key parameters. These parameters, along with the underlying model assumptions, directly affect the accuracy of the SNM assay. In this paper we examine the role of gate utilization factors and the single-exponential die-away time assumption and their impact on measurements for a range of plutonium materials. In addition, we examine the importance of item-optimized coincidence gate width settings as opposed to a universal gate width value. Finally, the traditional PNMC based on multiplicity shift register electronics is extended to Feynman-type analysis, and the application of this approach to Pu mass assay is demonstrated.
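Under the single-exponential die-away assumption examined in the paper, the doubles gate utilization factor takes the standard shift-register form f_d = e^(-P/τ)(1 - e^(-G/τ)) for predelay P, gate width G and die-away time τ. The counter values below are illustrative, not those of a specific instrument.

```python
import math

def doubles_gate_fraction(predelay_us, gate_us, die_away_us):
    """Doubles gate utilization factor f_d under a single-exponential
    die-away profile: f_d = exp(-P/tau) * (1 - exp(-G/tau))."""
    tau = die_away_us
    return math.exp(-predelay_us / tau) * (1.0 - math.exp(-gate_us / tau))

# Illustrative well-counter values: 4.5 us predelay, 50 us die-away time.
for gate in (24, 48, 64, 100, 128):
    print(gate, round(doubles_gate_fraction(4.5, gate, 50.0), 3))
# A longer gate captures more correlated pairs but also admits more
# accidentals, which is why an item-optimized gate width can beat a
# universal setting -- and why a non-exponential die-away profile would
# bias f_d and hence the assayed mass.
```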
Protein and quality characterization of complete and partial near isogenic lines of waxy wheat
USDA-ARS?s Scientific Manuscript database
The objective of this study was to evaluate protein composition and its effects on flour quality and physical dough test parameters using waxy wheat near-isogenic lines. Partial waxy (single and double nulls) and waxy (null at all three waxy loci, Wx-A1, Wx-B1, and Wx-D1) lines of N11 set (bread whe...
Relativistic effects in electron impact ionization from the p-orbital
NASA Astrophysics Data System (ADS)
Haque, A. K. F.; Uddin, M. A.; Basak, A. K.; Karim, K. R.; Saha, B. C.; Malik, F. B.
2006-06-01
The parameters of our recent modification of BELI formula (MBELL) [A.K.F. Haque, M.A. Uddin, A.K. Basak, K.R. Karim, B.C. Saha, Phys. Rev. A 73 (2006) 012708] are generalized in terms of the orbital quantum numbers nl to evaluate the electron impact ionization (EII) cross sections of a wide range of isoelectronic targets (H to Ne series) and incident energies. For both the open and closed p-shell targets, the present MBELL results with a single parameter set, agree nicely with the experimental cross sections. The relativistic effect of ionization in the 2p subshell of U82+ for incident energies up to 250 MeV is well accounted for by the prescribed parameters of the model.
Link between alginate reaction front propagation and general reaction diffusion theory.
Braschler, Thomas; Valero, Ana; Colella, Ludovica; Pataky, Kristopher; Brugger, Jürgen; Renaud, Philippe
2011-03-15
We provide a common theoretical framework reuniting specific models for the Ca(2+)-alginate system and general reaction diffusion theory along with experimental validation on a microfluidic chip. As a starting point, we use a set of nonlinear, partial differential equations that are traditionally solved numerically: the Mikkelsen-Elgsaeter model. Applying the traveling-wave hypothesis as a major simplification, we obtain an analytical solution. The solution indicates that the fundamental properties of the alginate reaction front are governed by a single dimensionless parameter λ. For small λ values, a large depletion zone accompanies the reaction front. For large λ values, the alginate reacts before having the time to diffuse significantly. We show that the λ parameter is of general importance beyond the alginate model system, as it can be used to classify known solutions for second-order reaction diffusion schemes, along with the novel solution presented here. For experimental validation, we develop a microchip model system, in which the alginate gel formation can be carried out in a highly controlled, essentially 1D environment. The use of a filter barrier enables us to rapidly renew the CaCl(2) solution, while maintaining flow speeds lower than 1 μm/s for the alginate compartment. This allows one to impose an exactly known bulk CaCl(2) concentration and diffusion resistance. This experimental model system, taken together with the theoretical development, enables the determination of the entire set of physicochemical parameters governing the alginate reaction front in a single experiment.
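As a rough numerical companion to the analytical treatment above, the sketch below integrates a generic second-order reaction-diffusion scheme (a diffusing species consumed by an immobile binding partner, loosely mimicking Ca2+ invading alginate) with made-up parameters; it is not the Mikkelsen-Elgsaeter model itself, only an illustration of a reaction front advancing into unreacted material:

```python
def front_position(D=1.0, k=5.0, nx=100, dx=0.1, dt=0.001, steps=2000):
    """Explicit finite-difference sketch of dc/dt = D d2c/dx2 - k*c*b,
    with an immobile reactant b consumed at the same rate.
    All parameters are illustrative; returns the front position."""
    c = [0.0] * nx          # diffusing species (e.g. Ca2+)
    b = [1.0] * nx          # immobile binding sites (e.g. alginate)
    for _ in range(steps):
        c[0] = 1.0          # fixed bulk concentration at the inlet
        new_c = c[:]
        for i in range(1, nx - 1):
            lap = (c[i - 1] - 2.0 * c[i] + c[i + 1]) / dx ** 2
            rate = k * c[i] * b[i]
            new_c[i] = c[i] + dt * (D * lap - rate)
            b[i] = max(0.0, b[i] - dt * rate)
        c = new_c
    # front position: first interior cell whose sites are mostly unreacted
    return dx * next(i for i in range(1, nx) if b[i] > 0.5)
```

Varying the ratio of reaction to diffusion rates in such a sketch is one way to build intuition for the role of the dimensionless parameter λ described above: fast reaction confines consumption near the inlet, slow reaction lets a broad depletion zone develop.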
Fisher information of a single qubit interacting with a spin-qubit in the presence of a magnetic field
NASA Astrophysics Data System (ADS)
Metwally, N.
2018-06-01
In this contribution, quantum Fisher information is utilized to estimate the parameters of a central qubit interacting with a single-spin qubit. The effect of the longitudinal, transverse and rotating strengths of the magnetic field on the estimation degree is discussed. It is shown that, in the resonance case, the number of peaks and consequently the size of the estimation regions increase as the rotating magnetic field strength increases. The precision of the estimation of the central qubit parameters depends on the initial state settings of the central and spin qubits, namely whether they encode classical or quantum information. The upper bounds of the estimation degree are large if the two qubits encode classical information. In the non-resonance case, the estimation degree depends on which of the longitudinal and transverse strengths is larger. The coupling constant between the central qubit and the spin qubit affects the estimation degree of the weight and phase parameters differently: the possibility of estimating the weight parameter decreases as the coupling constant increases, while it increases for the phase parameter. For a large number of spin particles, i.e. a spin bath, the upper bounds of the Fisher information with respect to the weight parameter of the central qubit decrease as the number of spin particles increases. As the interaction time increases, the upper bounds appear at different initial values of the weight parameter.
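For intuition about how quantum Fisher information bounds the estimation degree, here is a minimal sketch for a pure qubit state parameterized by a weight-like angle. The state is purely illustrative, not the central-spin model of the paper; for pure states F = 4(⟨∂ψ|∂ψ⟩ - |⟨∂ψ|ψ⟩|²), and the Cramér-Rao bound gives estimation variance ≥ 1/F:

```python
import math

def qfi(theta, h=1e-6):
    """Quantum Fisher information for the illustrative pure state
    |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>, evaluated via a
    central numerical derivative (real amplitudes, so no complex parts)."""
    def psi(t):
        return (math.cos(t / 2.0), math.sin(t / 2.0))
    p = psi(theta)
    pp, pm = psi(theta + h), psi(theta - h)
    d = ((pp[0] - pm[0]) / (2 * h), (pp[1] - pm[1]) / (2 * h))
    dd = d[0] ** 2 + d[1] ** 2          # <dpsi|dpsi>
    dp = d[0] * p[0] + d[1] * p[1]      # <dpsi|psi>
    return 4.0 * (dd - dp ** 2)
```

For this state the result is F = 1 independent of theta; in the interacting models discussed above, F instead varies with the field strengths and coupling constant, which is exactly what shifts the estimation regions.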
Chaĭkovskiĭ, I A; Baum, O V; Popov, L A; Voloshin, V I; Budnik, N N; Frolov, Iu A; Kovalenko, A S
2014-01-01
Discussion of the diagnostic value of the single-channel electrocardiogram inevitably raises a set of theoretical questions, one of the most important being the dependence of electrocardiogram parameters on the direction of the electrical axis of the heart. In other words, which electrocardiogram parameters actually reflect pathological processes in the myocardium, and which are determined by extracardiac factors, primarily the anatomical characteristics of patients? It is arguable that analysis of the electrocardiogram should rely on physiologically based informative indexes such as ST-segment displacement. The symmetry of the T-wave shape is another important parameter that is independent of a patient's anatomical features. The results obtained are of interest for both theoretical and applied aspects of the biophysics of the cardiac electric field.
Smith, Erin; Cusack, Tara; Cunningham, Caitriona; Blake, Catherine
2017-10-01
This review examines the effect of a dual task on the gait parameters of older adults with a mean gait speed of 1.0 m/s or greater, and the effect of type and complexity of task. A systematic review of Web of Science, PubMed, SCOPUS, Embase, and PsycINFO was performed in July 2016. Twenty-three studies (28 data sets) were reviewed and pooled for meta-analysis. The effect size on seven gait parameters was measured as the raw mean difference between single- and dual-task performance. Gait speed significantly reduced with the addition of a dual task, with increasing complexity showing greater decrements. Cadence, stride time, and measures of gait variability were all negatively affected under the dual-task condition. In older adults, the addition of a dual task significantly reduces gait speed and cadence, with possible implications for the assessment of older people, as the addition of a dual task may expose deficits not observed under single-task assessment.
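The pooling step described above (raw mean differences between single- and dual-task performance across data sets) can be sketched with generic fixed-effect inverse-variance weighting. The numbers in the test are made up for illustration, not data from the review:

```python
import math

def pool(diffs_and_ses):
    """Fixed-effect inverse-variance pooling of raw mean differences.
    Each entry is (mean difference, standard error); returns the pooled
    estimate and its standard error."""
    wsum = sum(1.0 / se ** 2 for _, se in diffs_and_ses)
    est = sum(d / se ** 2 for d, se in diffs_and_ses) / wsum
    return est, math.sqrt(1.0 / wsum)
```

Random-effects pooling, often preferred when the 28 data sets are heterogeneous, would add a between-study variance term to each weight; the fixed-effect version above is the simplest form of the idea.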
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
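In 1D the Voronoi construction is easy to reproduce directly, which makes a useful sanity check against the fragmentation model: each cell of a homogeneous point set runs between the midpoints to its neighbours. A quick numerical sketch (generic uniform points, not the model itself):

```python
import random

def voronoi_sizes_1d(n=10000, length=1.0, seed=1):
    """Sizes of 1D Voronoi cells for n uniform random points on [0, length],
    normalized by the mean cell size. Boundary cells extend to the segment
    ends. The normalized sizes are broadly distributed (roughly gamma-like)."""
    rng = random.Random(seed)
    pts = sorted(rng.uniform(0.0, length) for _ in range(n))
    sizes = []
    for i, p in enumerate(pts):
        left = 0.0 if i == 0 else (p + pts[i - 1]) / 2.0
        right = length if i == n - 1 else (p + pts[i + 1]) / 2.0
        sizes.append(right - left)
    mean = sum(sizes) / n
    return [s / mean for s in sizes]
```

Histogramming the output approximates the cell-size distribution that the fragmentation model describes analytically; in 2D the same comparison can be made with a computational-geometry Voronoi routine.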
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
NASA Astrophysics Data System (ADS)
Francisco, Arthur; Blondel, Cécile; Brunetière, Noël; Ramdarshan, Anusha; Merceron, Gildas
2018-03-01
Tooth wear and, more specifically, dental microwear texture is a dietary proxy that has been used for years in vertebrate paleoecology and ecology. DMTA, dental microwear texture analysis, relies on a few parameters related to the surface complexity, anisotropy and heterogeneity of the enamel facets at the micrometric scale. Working with few but physically meaningful parameters helps in comparing published results and in defining levels for classification purposes. Other dental microwear approaches are based on ISO parameters coupled with statistical tests to find the most relevant ones. The present study draws on most of the aforementioned parameters, in some cases in modified form. Beyond the parameters themselves, we propose a new approach: instead of a single parameter characterizing the whole surface, we sample the surface and thus generate 9 derived parameters, broadening the parameter set. The identification of the most discriminative parameters is performed with an automated procedure, an extended and refined version of the workflows encountered in earlier studies. The procedure in its initial form includes the most common tools, such as ANOVA and correlation analysis, along with the required mathematical tests. The discrimination results show that a simplified form of the procedure identifies the desired number of discriminative parameters more efficiently. Also highlighted are some trends, such as the relevance of working with both height and spatial parameters and the potential benefits of dimensionless surfaces. On a set of 45 surfaces obtained from 45 specimens of three modern ruminants with different feeding preferences (grazing, leaf-browsing and fruit-eating), the level of wear discrimination is clearly improved with the new methodology compared with the other approaches.
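The surface-sampling idea above can be sketched generically: split a height map into a 3x3 grid of sub-surfaces and derive, from one roughness parameter, nine per-cell values whose median and spread become additional descriptors. The grid size and the choice of RMS height (Sq) here are illustrative, not necessarily the parameters used in the study:

```python
def sq(cells):
    """RMS height (Sq-like roughness) of a flat list of height values."""
    n = len(cells)
    mean = sum(cells) / n
    return (sum((h - mean) ** 2 for h in cells) / n) ** 0.5

def sampled_sq(height, rows=3, cols=3):
    """Split a 2D height map into rows x cols sub-surfaces and return
    the per-cell Sq values (9 derived parameters for a 3x3 grid)."""
    nr, nc = len(height), len(height[0])
    out = []
    for bi in range(rows):
        for bj in range(cols):
            cells = [height[i][j]
                     for i in range(bi * nr // rows, (bi + 1) * nr // rows)
                     for j in range(bj * nc // cols, (bj + 1) * nc // cols)]
            out.append(sq(cells))
    return out
```

Summary statistics over the nine values (median, interquartile range) then play the role of the derived parameters fed into the discrimination procedure.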
Issues in the inverse modeling of a soil infiltration process
NASA Astrophysics Data System (ADS)
Kuraz, Michal; Jacka, Lukas; Leps, Matej
2017-04-01
This contribution addresses issues in the evaluation of soil hydraulic parameters (SHPs) from a Richards equation based inverse model. The inverse model represented a single-ring infiltration experiment on a mountainous podzolic soil profile and searched for the SHPs of the top soil layer. Since the thickness of the top soil layer is often much lower than the depth required to embed the single ring or Guelph permeameter device, the SHPs of the top soil layer are very difficult to measure directly. They were therefore identified here by inverse modeling of the single-ring infiltration process, where especially the initial unsteady part of the experiment is expected to provide very useful data for evaluating the retention curve parameters (excluding the residual water content) and the saturated hydraulic conductivity. The main issue addressed in this contribution is the uniqueness of the Richards equation inverse model. We tried to answer whether it is possible to characterize the unsteady infiltration experiment with a unique set of SHP values, and whether all SHPs are vulnerable to non-uniqueness. This is an important issue, since it determines whether the popular gradient methods are appropriate here. Further, the issues of assigning the initial and boundary condition setup, the influence of spatial and temporal discretization on the values of the identified SHPs, and convergence issues with the Richards equation nonlinear operator during the automatic calibration procedure are also covered.
Mean field treatment of heterogeneous steady state kinetics
NASA Astrophysics Data System (ADS)
Geva, Nadav; Vaissier, Valerie; Shepherd, James; Van Voorhis, Troy
2017-10-01
We propose a method to quickly compute steady state populations of species undergoing a set of chemical reactions whose rate constants are heterogeneous. Using an average environment in place of an explicit nearest neighbor configuration, we obtain a set of equations describing a single fluctuating active site in the presence of an averaged bath. We apply this Mean Field Steady State (MFSS) method to a model of H2 production on a disordered surface for which the activation energy for the reaction varies from site to site. The MFSS populations quantitatively reproduce the kinetic Monte Carlo (KMC) results across the range of rate parameters considered.
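The role of rate heterogeneity can be illustrated with a toy adsorption/desorption steady state (a hypothetical model, not the H2-surface mechanism of the paper): sites have Arrhenius desorption rates drawn from a spread of barriers, and the exact site-averaged occupancy differs from the naive treatment that averages the rates first:

```python
import math
import random

def occupancies(k_ads=1.0, kT=0.025, n=1000, seed=2):
    """Site-averaged steady-state occupancy for heterogeneous desorption
    barriers Ea_i (eV), versus a naive single-averaged-rate estimate.
    Occupancy per site: theta_i = k_ads / (k_ads + k_des_i).
    All numeric values are illustrative."""
    rng = random.Random(seed)
    barriers = [0.5 + 0.1 * rng.random() for _ in range(n)]
    k_des = [1e10 * math.exp(-ea / kT) for ea in barriers]
    exact = sum(k_ads / (k_ads + kd) for kd in k_des) / n
    kd_mean = sum(k_des) / n
    naive = k_ads / (k_ads + kd_mean)
    return exact, naive
```

Because occupancy is a convex function of the desorption rate, averaging the rates first systematically underestimates the site-averaged occupancy (Jensen's inequality); a self-consistent mean-field treatment such as MFSS is designed to do better than this naive average.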
NASA Astrophysics Data System (ADS)
Šarolić, A.; Živković, Z.; Reilly, J. P.
2016-06-01
The electrostimulation excitation threshold of a nerve depends on temporal and frequency parameters of the stimulus. These dependences were investigated in terms of: (1) strength-duration (SD) curve for a single monophasic rectangular pulse, and (2) frequency dependence of the excitation threshold for a continuous sinusoidal current. Experiments were performed on the single-axon measurement setup based on Lumbricus terrestris having unmyelinated nerve fibers. The simulations were performed using the well-established SENN model for a myelinated nerve. Although the unmyelinated experimental model differs from the myelinated simulation model, both refer to a single axon. Thus we hypothesized that the dependence on temporal and frequency parameters should be very similar. The comparison was made possible by normalizing each set of results to the SD time constant and the rheobase current of each model, yielding the curves that show the temporal and frequency dependencies regardless of the model differences. The results reasonably agree, suggesting that this experimental setup and method of comparison with SENN model can be used for further studies of waveform effect on nerve excitability, including unmyelinated neurons.
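The normalization described above (rheobase current and SD time constant) makes classical strength-duration forms directly comparable across preparations. A sketch of the two textbook curves, in normalized units; these are generic forms, not output of the SENN model:

```python
import math

def lapicque(t_over_tau):
    """Threshold / rheobase for the exponential (Lapicque) SD curve:
    I_th / I_rh = 1 / (1 - exp(-t/tau))."""
    return 1.0 / (1.0 - math.exp(-t_over_tau))

def weiss(t_over_tau):
    """Threshold / rheobase for the hyperbolic (Weiss) SD curve:
    I_th / I_rh = 1 + tau/t (chronaxie-style formulation)."""
    return 1.0 + 1.0 / t_over_tau
```

Both curves fall toward the rheobase for long pulses and diverge for short ones; plotting normalized experimental and simulated thresholds against these forms is the kind of model-independent comparison the study relies on.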
Latash, M; Gottleib, G
1990-01-01
Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda-model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts'-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.
Accelerating calculations of RNA secondary structure partition functions using GPUs
2013-01-01
Background RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision, and for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu.
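For readers unfamiliar with these dynamic-programming recursions, a toy Nussinov-style maximum base-pairing recursion illustrates the O(N^3) structure that the partition-function calculation shares. This is a far simpler relative of the McCaskill-type recursions above: it counts pairs rather than summing Boltzmann weights, uses no nearest-neighbor energies and no GPU:

```python
# Canonical Watson-Crick pairs plus G-U wobble
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs in seq, with at least
    min_loop unpaired bases inside every hairpin loop."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                # j unpaired
            for k in range(i, j - min_loop):   # j paired with k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1]
```

Replacing the max with a sum of exponentiated energies over the same decomposition yields a partition function, which is exactly the kind of recursion the paper maps onto GPU hardware.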
PMID:24180434
Complex absorbing potentials within EOM-CC family of methods: Theory, implementation, and benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zuev, Dmitry; Jagau, Thomas-C.; Krylov, Anna I.
2014-07-14
A production-level implementation of equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) for electron attachment and excitation energies augmented by a complex absorbing potential (CAP) is presented. The new method enables the treatment of metastable states within the EOM-CC formalism in a similar manner as bound states. The numeric performance of the method and the sensitivity of resonance positions and lifetimes to the CAP parameters and the choice of one-electron basis set are investigated. A protocol for studying molecular shape resonances based on the use of standard basis sets and a universal criterion for choosing the CAP parameters are presented. Our results for a variety of π* shape resonances of small to medium-size molecules demonstrate that CAP-augmented EOM-CCSD is competitive relative to other theoretical approaches for the treatment of resonances and is often able to reproduce experimental results.
Self-Adaptive Stepsize Search Applied to Optimal Structural Design
NASA Astrophysics Data System (ADS)
Nolle, L.; Bland, J. A.
Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure and hence the weight of its constituent members has to be as low as possible for economical reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs for analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes the drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
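The single-control-parameter idea can be sketched with a minimal self-adaptive stepsize random search on a toy objective: the step size grows on success and shrinks on failure, so the algorithm tunes its own scale. This is a generic sketch with made-up growth/shrink factors, not the exact SASS variant applied to the 25-bar truss:

```python
import random

def sass(f, x, steps=2000, s=1.0, grow=2.0, shrink=0.5, seed=3):
    """Self-adaptive stepsize random search: perturb x with Gaussian
    steps of size s; double s after an improvement, halve it otherwise.
    Returns the best point and objective value found."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(steps):
        cand = [xi + s * rng.gauss(0.0, 1.0) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx, s = cand, fc, s * grow
        else:
            s *= shrink
        s = max(s, 1e-12)  # keep the step size from collapsing entirely
    return x, fx

def sphere(v):
    """Toy objective standing in for the (expensive) structural analysis."""
    return sum(t * t for t in v)
```

In the structural setting, f would be the frame analysis returning weight plus penalty terms for violated load constraints, which is where the cost of evaluating a single design dominates.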
Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui
2016-01-01
Phellinus is a fungus known as a key component of drugs intended to prevent cancers. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, a large number of single-factor experiments were performed and a large amount of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis, obtaining a mathematical model for predicting Phellinus production. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized parameter values agree with biological experimental results, which indicates that our method has good predictive power for culture condition optimization. PMID:27610365
A surface hydrology model for regional vector borne disease models
NASA Astrophysics Data System (ADS)
Tompkins, Adrian; Asare, Ernest; Bomblies, Arne; Amekudzi, Leonard
2016-04-01
Small, sun-lit temporary pools that form during the rainy season are important breeding sites for many key mosquito vectors responsible for the transmission of malaria and other diseases. The representation of this surface hydrology in mathematical disease models is challenging, due to their small-scale, dependence on the terrain and the difficulty of setting soil parameters. Here we introduce a model that represents the temporal evolution of the aggregate statistics of breeding sites in a single pond fractional coverage parameter. The model is based on a simple, geometrical assumption concerning the terrain, and accounts for the processes of surface runoff, pond overflow, infiltration and evaporation. Soil moisture, soil properties and large-scale terrain slope are accounted for using a calibration parameter that sets the equivalent catchment fraction. The model is calibrated and then evaluated using in situ pond measurements in Ghana and ultra-high (10m) resolution explicit simulations for a village in Niger. Despite the model's simplicity, it is shown to reproduce the variability and mean of the pond aggregate water coverage well for both locations and validation techniques. Example malaria simulations for Uganda will be shown using this new scheme with a generic calibration setting, evaluated using district malaria case data. Possible methods for implementing regional calibration will be briefly discussed.
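The aggregate pond scheme described above can be caricatured as a daily water balance on the pond fractional coverage: rain plus runoff from an equivalent catchment fraction fills the pond, infiltration and evaporation drain it, and overflow caps the coverage. All parameter values below are hypothetical, not the paper's calibrated settings:

```python
def pond_coverage(rain_mm, k_catch=5.0, infil_mm=2.0, evap_mm=5.0,
                  w_max=0.1, depth_mm=200.0):
    """Daily evolution of pond fractional coverage w. Each day, inflow is
    rain scaled by (1 + equivalent catchment fraction); losses are fixed
    infiltration and evaporation depths; w is bounded by dry-out (0)
    and overflow (w_max). Returns the daily series."""
    w = 0.0
    series = []
    for r in rain_mm:
        dw = (r * (1.0 + k_catch) - infil_mm - evap_mm) / depth_mm
        w = min(max(w + dw, 0.0), w_max)
        series.append(w)
    return series
```

Calibrating the equivalent catchment fraction against observed pond coverage is the step that absorbs soil moisture, soil properties and terrain slope in the actual model.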
Polarization-modulated second harmonic generation ellipsometric microscopy at video rate.
DeWalt, Emma L; Sullivan, Shane Z; Schmitt, Paul D; Muir, Ryan D; Simpson, Garth J
2014-08-19
Fast 8 MHz polarization modulation coupled with analytical modeling, fast beam-scanning, and synchronous digitization (SD) have enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and polarized laser transmittance imaging with image acquisition rates up to video rate. In contrast to polarimetry, in which the polarization state of the exiting beam is recorded, NOSE enables recovery of the complex-valued Jones tensor of the sample that describes all polarization-dependent observables of the measurement. Every video-rate scan produces a set of 30 images (10 for each detector with three detectors operating in parallel), each of which corresponds to a different polarization-dependent result. Linear fitting of this image set contracts it down to a set of five parameters for each detector in second harmonic generation (SHG) and three parameters for the transmittance of the incident beam. These parameters can in turn be used to recover the Jones tensor elements of the sample. Following validation of the approach using z-cut quartz, NOSE microscopy was performed for microcrystals of both naproxen and glucose isomerase. When weighted by the measurement time, NOSE microscopy was found to provide a substantial (>7 decades) improvement in the signal-to-noise ratio relative to our previous measurements based on the rotation of optical elements and a 3-fold improvement relative to previous single-point NOSE approaches.
Structural identifiability analysis of a cardiovascular system model.
Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas
2016-05-01
The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from any output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made, assuming known cardiac valve resistances. Because of the poor practical identifiability of these four parameters, this assumption is usual. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis.
Relativistic excited state binding energies and RMS radii of Λ-hypernuclei
NASA Astrophysics Data System (ADS)
Nejad, S. Mohammad Moosavi; Armat, A.
2018-02-01
Using an analytical solution for the relativistic equation of single Λ-hypernuclei in the presence of Woods-Saxon (WS) potential we present, for the first time, an analytical form for the excited state binding energies of 1p, 1d, 1f and 1g shells of a number of hypernuclei. Based on phenomenological analysis of the Λ binding energies in a set of Λ-hypernuclei, the WS potential parameters are obtained phenomenologically for the set of Λ-hypernuclei. Systematic study of the energy levels of single Λ-hypernuclei enables us to extract more detailed information about the Λ-nucleon interaction. We also study the root mean square (RMS) radii of the Λ orbits in the hypernuclear ground states. Our results are presented for several hypernuclei and it is shown that our results for the binding energies are in good agreement with experimental data.
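The Woods-Saxon shape underlying the analysis above is simple to reproduce; the sketch below uses illustrative depth, radius and diffuseness values rather than the parameters fitted phenomenologically to the Λ binding energies:

```python
import math

def woods_saxon(r, v0=30.0, radius=3.0, a=0.6):
    """Woods-Saxon potential V(r) = -V0 / (1 + exp((r - R)/a)), in MeV,
    with r, R and the surface diffuseness a in fm. The values here are
    illustrative, not the fitted hypernuclear parameters."""
    return -v0 / (1.0 + math.exp((r - radius) / a))
```

The potential is flat near the center, reaches half depth at r = R, and falls off over a few diffuseness lengths, which is what makes the shell ordering (1p, 1d, 1f, 1g) of the Λ single-particle levels sensitive to R and a.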
Sánchez, Ariel G.; Grieb, Jan Niklas; Salazar-Albornoz, Salvador; ...
2016-09-30
The cosmological information contained in anisotropic galaxy clustering measurements can often be compressed into a small number of parameters whose posterior distribution is well described by a Gaussian. Here, we present a general methodology to combine these estimates into a single set of consensus constraints that encode the total information of the individual measurements, taking into account the full covariance between the different methods. We also illustrate this technique by applying it to combine the results obtained from different clustering analyses, including measurements of the signature of baryon acoustic oscillations and redshift-space distortions, based on a set of mock catalogues of the final SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). Our results show that the region of the parameter space allowed by the consensus constraints is smaller than that of the individual methods, highlighting the importance of performing multiple analyses on galaxy surveys even when the measurements are highly correlated. Our paper is part of a set that analyses the final galaxy clustering data set from BOSS. The methodology presented here is used in Alam et al. to produce the final cosmological constraints from BOSS.
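A one-parameter version of the consensus combination makes the role of the cross-method covariance concrete: with two correlated Gaussian estimates, the best linear unbiased combination weights them by the inverse of the full 2x2 covariance. This is a minimal sketch of the idea, not the multi-parameter machinery applied to the BOSS analyses:

```python
def consensus(x1, x2, s1, s2, rho):
    """Consensus estimate and variance for two Gaussian measurements
    x1 +/- s1 and x2 +/- s2 with correlation rho, using
    inverse-covariance (BLUE) weighting."""
    c12 = rho * s1 * s2
    det = s1 ** 2 * s2 ** 2 - c12 ** 2
    # elements of the inverse 2x2 covariance matrix
    a, b, c = s2 ** 2 / det, -c12 / det, s1 ** 2 / det
    w1, w2 = a + b, b + c
    var = 1.0 / (w1 + w2)
    return var * (w1 * x1 + w2 * x2), var
```

With rho = 0 this reduces to ordinary inverse-variance weighting; as rho grows, the consensus variance shrinks less, which is why accounting for the full covariance between highly correlated clustering analyses matters.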
NASA Astrophysics Data System (ADS)
Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. 
K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kostioukhine, V.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Libby, J.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; Mc Nulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nemecek, S.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Radojicic, D.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Tegenfeldt, F.; Terranova, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Lysebetten, A.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.; DELPHI Collaboration
2010-03-01
The data taken by DELPHI at centre-of-mass energies between 189 and 209 GeV are used to place limits on the CP-conserving trilinear gauge boson couplings Δg^Z_1, λ_γ and Δκ_γ associated with W⁺W⁻ and single-W production at LEP2. Using data from the jjℓν, jjjj, jjX and ℓX final states, where j, ℓ and X represent a jet, a lepton and missing four-momentum, respectively, the following limits are set on the couplings when one parameter is allowed to vary and the others are set to their Standard Model values of zero: Δg^Z_1 = -0.025^{+0.033}_{-0.030}, λ_γ = 0.002^{+0.035}_{-0.035} and Δκ_γ = 0.024^{+0.077}_{-0.081}. Results are also presented when two or three parameters are allowed to vary. All observations are consistent with the predictions of the Standard Model and supersede the previous results on these gauge coupling parameters published by DELPHI.
Teodoro, Tiago Quevedo; Visscher, Lucas; da Silva, Albérico Borges Ferreira; Haiduke, Roberto Luiz Andrade
2017-03-14
The f-block elements are addressed in this third part of a series of prolapse-free basis sets of quadruple-ζ quality (RPF-4Z). Relativistic adapted Gaussian basis sets (RAGBSs) are used as primitive sets of functions while correlating/polarization (C/P) functions are chosen by analyzing energy lowerings upon basis set increments in Dirac-Coulomb multireference configuration interaction calculations with single and double excitations of the valence spinors. These function exponents are obtained by applying the RAGBS parameters in a polynomial expression. Moreover, through the choice of C/P characteristic exponents from functions of lower angular momentum spaces, a reduction in the computational demand is attained in relativistic calculations based on the kinetic balance condition. The present study thus complements the RPF-4Z sets for the whole periodic table (Z ≤ 118). The sets are available as Supporting Information and can also be found at http://basis-sets.iqsc.usp.br .
Torres, Edmanuel; DiLabio, Gino A
2013-08-13
Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
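The parameter-fitting step described above can be sketched as a least-squares problem: given reference interaction energies, find the Lennard-Jones well depth ε and size parameter σ that reproduce them. The separations and energies below are invented illustrative numbers, not the paper's CCSD(T) or PBE0-DCP data, and the simple 12-6 pair form is an assumption.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical reference data: centre-of-mass separations (Angstrom) and
# interaction energies (kcal/mol) for a methane dimer. Illustrative only.
r_ref = np.array([3.6, 3.8, 4.0, 4.2, 4.6, 5.0, 6.0])
e_ref = np.array([-0.20, -0.48, -0.53, -0.50, -0.35, -0.22, -0.07])

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def residuals(params):
    epsilon, sigma = params
    return lj_energy(r_ref, epsilon, sigma) - e_ref

# Fit epsilon (well depth) and sigma (zero-crossing distance).
fit = least_squares(residuals, x0=[0.3, 3.7], bounds=([0.0, 2.0], [2.0, 6.0]))
epsilon_fit, sigma_fit = fit.x
print(f"epsilon = {epsilon_fit:.3f} kcal/mol, sigma = {sigma_fit:.3f} A")
```

In practice the residuals would run over the force map described in the abstract rather than a single dimer curve, but the optimization structure is the same.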
Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong
2015-02-01
The purpose of this study was to establish a depth-control method in enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, and 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities each, with the additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error reached a minimum of 2.25 μm, and 450-μm-deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.
Non-line-of-sight ultraviolet link loss in noncoplanar geometry.
Wang, Leijie; Xu, Zhengyuan; Sadler, Brian M
2010-04-15
Various path loss models have been developed for solar blind non-line-of-sight UV communication links under an assumption of coplanar source beam axis and receiver pointing direction. This work further extends an existing single-scattering coplanar analytical model to noncoplanar geometry. The model is derived as a function of geometric parameters and atmospheric characteristics. Its behavior is numerically studied in different noncoplanar geometric settings.
NASA Astrophysics Data System (ADS)
McQuilkin, Martin
The Two-Parameter Fracture Criterion (TPFC) was validated using an elastic-plastic two-dimensional (2D) finite-element code, ZIP2D, with the plane-strain-core concept. Fracture simulations were performed on three crack configurations: (1) middle-crack-tension, M(T), (2) single-edge-crack-tension, SE(T), and (3) single-edge-crack-bend, SE(B), specimens. They were made of 2014-T6 (TL) aluminum alloy. Fracture test data from Thomas Orange's work at NASA were only available for M(T) specimens (one-half width, w = 1.5 to 6 in.), and these were all tested at cryogenic temperature (-320 °F). All crack configurations were analysed over a very wide range of widths (w = 0.75 to 24 in.) and crack-length-to-width ratios from 0.2 to 0.8. The TPFC was shown to fit the simulated fracture data fairly well (within 6.5%) for all crack configurations for net-section stresses less than the material proportional limit. For M(T) specimens, a simple approximation was shown to work well for net-section stresses greater than the proportional limit. Further study is needed for net-section stresses greater than the proportional limit for the SE(T) and SE(B) specimens.
Spérandio, Mathieu; Pocquet, Mathieu; Guo, Lisha; Ni, Bing-Jie; Vanrolleghem, Peter A; Yuan, Zhiguo
2016-03-01
Five activated sludge models describing N2O production by ammonium oxidising bacteria (AOB) were compared to four different long-term process data sets. Each model considers one of the two known N2O production pathways by AOB, namely the AOB denitrification pathway and the hydroxylamine oxidation pathway, with specific kinetic expressions. Satisfactory calibration could be obtained in most cases, but none of the models was able to describe all the N2O data obtained in the different systems with a similar parameter set. Variability of the parameters can be related to difficulties related to undescribed local concentration heterogeneities, physiological adaptation of micro-organisms, a microbial population switch, or regulation between multiple AOB pathways. This variability could be due to a dependence of the N2O production pathways on the nitrite (or free nitrous acid-FNA) concentrations and other operational conditions in different systems. This work gives an overview of the potentialities and limits of single AOB pathway models. Indicating in which condition each single pathway model is likely to explain the experimental observations, this work will also facilitate future work on models in which the two main N2O pathways active in AOB are represented together.
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice.
Geographically weighted regression and multicollinearity: dispelling the myth
NASA Astrophysics Data System (ADS)
Fotheringham, A. Stewart; Oshan, Taylor M.
2016-10-01
Geographically weighted regression (GWR) extends the familiar regression framework by estimating a set of parameters for any number of locations within a study area, rather than producing a single parameter estimate for each relationship specified in the model. Recent literature has suggested that GWR is highly susceptible to the effects of multicollinearity between explanatory variables and has proposed a series of local measures of multicollinearity as an indicator of potential problems. In this paper, we employ a controlled simulation to demonstrate that GWR is in fact very robust to the effects of multicollinearity. Consequently, the contention that GWR is highly susceptible to multicollinearity issues needs rethinking.
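The local estimation at the heart of GWR can be sketched as weighted least squares with a distance-decay kernel, one fit per calibration location. The synthetic data, Gaussian kernel, and fixed bandwidth below are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n locations with coordinates, one covariate, and a
# spatially varying slope (illustrative, not from the paper).
n = 200
coords = rng.uniform(0, 10, size=(n, 2))
x = rng.normal(size=n)
beta_true = 1.0 + 0.2 * coords[:, 0]          # slope drifts west-to-east
y = beta_true * x + rng.normal(scale=0.1, size=n)

def gwr_at(point, coords, X, y, bandwidth):
    """Weighted least-squares estimate at one calibration location
    using a Gaussian distance kernel (the core of GWR)."""
    d = np.linalg.norm(coords - point, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    XtW = X.T * w                              # row-scale X^T by the weights
    return np.linalg.solve(XtW @ X, XtW @ y)

X = np.column_stack([np.ones(n), x])           # intercept + covariate
beta_west = gwr_at(np.array([1.0, 5.0]), coords, X, y, bandwidth=2.0)
beta_east = gwr_at(np.array([9.0, 5.0]), coords, X, y, bandwidth=2.0)
print(beta_west[1], beta_east[1])              # local slope estimates
```

Repeating `gwr_at` over a grid of points yields the surface of local parameter estimates that distinguishes GWR from a single global regression.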
Kinetics of Mixed Microbial Assemblages Enhance Removal of Highly Dilute Organic Substrates
Lewis, David L.; Hodson, Robert E.; Hwang, Huey-Min
1988-01-01
Our experiments with selected organic substrates reveal that the rate-limiting process governing microbial degradation rates changes with substrate concentration, S, in such a manner that substrate removal is enhanced at lower values of S. This enhancement is the result of the dominance of very efficient systems for substrate removal at low substrate concentrations. The variability of dominant kinetic parameters over a range of S causes the kinetics of complex assemblages to be profoundly dissimilar to those of systems possessing a single set of kinetic parameters; these findings necessitate taking a new approach to predicting substrate removal rates over wide ranges of S. PMID:16347715
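The point that a mixed assemblage behaves unlike any single kinetic parameter set can be illustrated with two Michaelis-Menten terms: a high-affinity/low-capacity system dominates removal at low S, a low-affinity/high-capacity one at high S. The (Vmax, Km) values below are invented for illustration, not the paper's measurements.

```python
def mm_rate(S, Vmax, Km):
    """Michaelis-Menten removal rate for one uptake system."""
    return Vmax * S / (Km + S)

# Two hypothetical sub-populations: (Vmax, Km) for a high-affinity
# system and a low-affinity one (illustrative units and values).
systems = [(0.1, 0.01), (10.0, 100.0)]

def assemblage_rate(S):
    """Total removal rate of the mixed assemblage."""
    return sum(mm_rate(S, Vmax, Km) for Vmax, Km in systems)

for S in (1e-4, 1e-2, 1.0, 100.0):
    # The first-order rate constant v/S and the high-affinity share show
    # which system controls removal at each substrate concentration.
    share = mm_rate(S, *systems[0]) / assemblage_rate(S)
    print(f"S={S:g}: v/S={assemblage_rate(S)/S:.3g}, high-affinity share={share:.2f}")
```

No single (Vmax, Km) pair reproduces this v/S curve over the whole range of S, which is the abstract's argument for abandoning single-parameter-set predictions.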
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D P; Ritts, W D; Wharton, S
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors, nor about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
Integrating Analysis Goals for EOP, CRF and TRF
NASA Technical Reports Server (NTRS)
Ma, Chopo; MacMillan, Daniel; Petrov, Leonid
2002-01-01
In a simplified, idealized way the TRF (Terrestrial Reference Frame) can be considered a set of positions at epoch and corresponding linear rates of change while the CRF (Celestial Reference Frame) is a set of fixed directions in space. VLBI analysis can be optimized for CRF and TRF separately while handling some of the complexity of geodetic and astrometric reality. For EOP (Earth Orientation Parameter) time series both CRF and TRF should be accurate at the epoch of interest and well defined over time. The optimal integration of EOP, TRF and CRF in a single VLBI solution configuration requires a detailed consideration of the data set and the possibly conflicting nature of the reference frames. A possible approach for an integrated analysis is described.
An arena for model building in the Cohen-Glashow very special relativity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheikh-Jabbari, M. M., E-mail: jabbari@theory.ipm.ac.i; Tureanu, A., E-mail: anca.tureanu@helsinki.f
2010-02-15
The Cohen-Glashow Very Special Relativity (VSR) algebra is defined as the part of the Lorentz algebra which, upon addition of CP or T invariance, enhances to the full Lorentz group, plus the space-time translations. We show that noncommutative space-time, in particular the noncommutative Moyal plane with light-like noncommutativity, provides a robust mathematical setting for quantum field theories which are VSR invariant and hence sets the stage for building VSR-invariant particle physics models. In our setting the VSR-invariant theories are specified by a single deformation parameter, the noncommutativity scale Λ_NC. Preliminary analysis with the available data leads to Λ_NC ≥ 1-10 TeV.
Worldwide Historical Estimates of Leaf Area Index, 1932-2000
NASA Technical Reports Server (NTRS)
Scurlock, J. M. O.; Asner, G. P.; Gower, S. T.
2001-01-01
Approximately 1000 published estimates of leaf area index (LAI) from nearly 400 unique field sites, covering the period 1932-2000, have been compiled into a single data set. LAI is a key parameter for global and regional models of biosphere/atmosphere exchange of carbon dioxide, water vapor, and other materials. It also plays an integral role in determining the energy balance of the land surface. This data set provides a benchmark of typical values and ranges of LAI for a variety of biomes and land cover types, in support of model development and validation of satellite-derived remote sensing estimates of LAI and other vegetation parameters. The LAI data are linked to a bibliography of over 300 original source references. This report documents the development of this data set, its contents, and its availability on the Internet from the Oak Ridge National Laboratory Distributed Active Archive Center for Biogeochemical Dynamics. Caution is advised in using these data, which were collected using a wide range of methodologies and assumptions that may not allow comparisons among sites.
Local Variability of Parameters for Characterization of the Corneal Subbasal Nerve Plexus.
Winter, Karsten; Scheibe, Patrick; Köhler, Bernd; Allgeier, Stephan; Guthoff, Rudolf F; Stachs, Oliver
2016-01-01
The corneal subbasal nerve plexus (SNP) offers high potential for early diagnosis of diabetic peripheral neuropathy. Changes in subbasal nerve fibers can be assessed in vivo by confocal laser scanning microscopy (CLSM) and quantified using specific parameters. While current study results agree regarding parameter tendency, there are considerable differences in terms of absolute values. The present study set out to identify factors that might account for this high parameter variability. In three healthy subjects, we used a novel method of software-based large-scale reconstruction that provided SNP images of the central cornea, decomposed the image areas into all possible image sections corresponding to the size of a single conventional CLSM image (0.16 mm2), and calculated a set of parameters for each image section. In order to carry out a large number of virtual examinations within the reconstructed image areas, an extensive simulation procedure (10,000 runs per image) was implemented. The three analyzed images ranged in size from 3.75 mm2 to 4.27 mm2. The spatial configuration of the subbasal nerve fiber networks varied greatly across the cornea and thus caused heavily location-dependent results as well as wide value ranges for the parameters assessed. Distributions of SNP parameter values varied greatly between the three images and showed significant differences between all images for every parameter calculated (p < 0.001 in each case). The relatively small size of the conventionally evaluated SNP area is a contributory factor in high SNP parameter variability. Averaging of parameter values based on multiple CLSM frames does not necessarily result in good approximations of the respective reference values of the whole image area. This illustrates the potential for examiner bias when selecting SNP images in the central corneal area.
Detecting influential observations in nonlinear regression modeling of groundwater flow
Yager, Richard M.
1998-01-01
Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
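The two influence statistics can be sketched on an ordinary least-squares example (the paper applies them to an approximately linear nonlinear-regression model). The synthetic data and injected outlier below are illustrative; DFBETAS is computed by brute-force leave-one-out refits rather than the closed-form update.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data with one influential outlier (illustrative).
n = 30
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=0.2, size=n)
y[5] += 5.0                                   # inject an outlier

X = np.column_stack([np.ones(n), x])
p = X.shape[1]

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = ols(X, y)
resid = y - X @ beta
s2 = resid @ resid / (n - p)
XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverages (hat-matrix diagonal)

# Cook's D: effect of omitting observation i on all fitted values.
cooks_d = (resid**2 / (p * s2)) * h / (1 - h) ** 2

# DFBETAS: scaled change in each coefficient when observation i is omitted.
dfbetas = np.empty((n, p))
for i in range(n):
    keep = np.arange(n) != i
    beta_i = ols(X[keep], y[keep])
    resid_i = y[keep] - X[keep] @ beta_i
    s2_i = resid_i @ resid_i / (n - 1 - p)
    se = np.sqrt(s2_i * np.diag(XtX_inv))
    dfbetas[i] = (beta - beta_i) / se

print("most influential observation:", int(np.argmax(cooks_d)))
```

In the groundwater setting, X would hold sensitivities of simulated heads and flows to the parameters, but the diagnostics are read the same way: large Cook's D flags observations that move the whole parameter set, and large |DFBETAS| flags observations that control an individual parameter.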
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar, typically to a single molecule lead. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set by using several performance metrics including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule to a family by the maximum similarity, or minimum ranking, obtained across the family. 
One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel ETD and TPD methods, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
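The parameter-free MAX-SIM rule described above can be sketched with Tanimoto similarity on binary fingerprints: a database molecule is scored by its best similarity to any member of the query family. The toy fingerprints below are illustrative stand-ins, not molecules drawn from ChemDB.

```python
import numpy as np

rng = np.random.default_rng(2)

def tanimoto(a, b):
    """Tanimoto similarity between binary fingerprint vectors."""
    both = np.sum(a & b)
    either = np.sum(a | b)
    return both / either if either else 0.0

# Toy binary fingerprints (illustrative stand-ins for chemical fingerprints).
family = rng.integers(0, 2, size=(5, 64)).astype(bool)        # query family
candidates = rng.integers(0, 2, size=(100, 64)).astype(bool)  # database
candidates[17] = family[0] ^ (rng.random(64) < 0.05)          # near-duplicate

def max_sim(mol, family):
    """MAX-SIM: score a molecule by its best similarity to any family member."""
    return max(tanimoto(mol, f) for f in family)

scores = np.array([max_sim(m, family) for m in candidates])
ranking = np.argsort(-scores)
print("top hit:", int(ranking[0]), "score:", round(scores[ranking[0]], 3))
```

MIN-RANK differs only in the aggregation: each family member produces its own similarity ranking of the database, and a molecule's score is its best (minimum) rank across those lists.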
Caranica, C; Al-Omari, A; Deng, Z; Griffith, J; Nilsen, R; Mao, L; Arnold, J; Schüttler, H-B
2018-01-01
A major challenge in systems biology is to infer the parameters of regulatory networks that operate in a noisy environment, such as in a single cell. In a stochastic regime it is hard to distinguish noise from the real signal and to infer the noise contribution to the dynamical behavior. When the genetic network displays oscillatory dynamics, it is even harder to infer the parameters that produce the oscillations. To address this issue we introduce a new estimation method built on a combination of stochastic simulations, mass action kinetics and ensemble network simulations in which we match the average periodogram and phase of the model to that of the data. The method is relatively fast (compared to Metropolis-Hastings Monte Carlo Methods), easy to parallelize, applicable to large oscillatory networks and large (~2000 cells) single cell expression data sets, and it quantifies the noise impact on the observed dynamics. Standard errors of estimated rate coefficients are typically two orders of magnitude smaller than the mean from single cell experiments with on the order of ~1000 cells. We also provide a method to assess the goodness of fit of the stochastic network using the Hilbert phase of single cells. An analysis of phase departures from the null model with no communication between cells is consistent with a hypothesis of Stochastic Resonance describing single cell oscillators. Stochastic Resonance provides a physical mechanism whereby intracellular noise plays a positive role in establishing oscillatory behavior, but may require model parameters, such as rate coefficients, that differ substantially from those extracted at the macroscopic level from measurements on populations of millions of communicating, synchronized cells.
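The core idea of matching the ensemble-averaged periodogram of a model to that of single-cell data can be sketched with a grid search over a single period parameter. The surrogate phase-randomized oscillators below are an assumption for illustration, far simpler than the stochastic regulatory network and mass-action kinetics used in the paper.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(4)

# Surrogate "data": ~1000 noisy single-cell traces sharing one true
# period but with random phases (illustrative, not the paper's model).
n_cells, n_t, dt, true_period = 1000, 256, 0.5, 21.8
t = np.arange(n_t) * dt
cells = np.array([np.sin(2*np.pi*t/true_period + rng.uniform(0, 2*np.pi))
                  + rng.normal(scale=1.0, size=n_t) for _ in range(n_cells)])

# Ensemble-averaged periodogram of the "data".
freqs, pxx = periodogram(cells, fs=1/dt, axis=1)
avg_pxx = pxx.mean(axis=0)

def model_avg_periodogram(period, n_sim=200):
    """Average periodogram of an ensemble simulated at a candidate period."""
    sims = np.array([np.sin(2*np.pi*t/period + rng.uniform(0, 2*np.pi))
                     + rng.normal(scale=1.0, size=n_t) for _ in range(n_sim)])
    return periodogram(sims, fs=1/dt, axis=1)[1].mean(axis=0)

# Estimate the period by minimizing the periodogram mismatch.
candidates = np.linspace(15.0, 30.0, 31)
errors = [np.sum((model_avg_periodogram(p) - avg_pxx)**2) for p in candidates]
best = float(candidates[int(np.argmin(errors))])
print("estimated period:", best)
```

The paper additionally matches the Hilbert phase and replaces the grid search with ensemble simulations over many rate coefficients, but the fit criterion has this same periodogram-mismatch structure.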
Barua, Nabanita; Sitaraman, Chitra; Goel, Sonu; Chakraborti, Chandana; Mukherjee, Sonai; Parashar, Hemandra
2016-01-01
Context: Analysis of diagnostic ability of macular ganglionic cell complex and retinal nerve fiber layer (RNFL) in glaucoma. Aim: To correlate functional and structural parameters and comparing predictive value of each of the structural parameters using Fourier-domain (FD) optical coherence tomography (OCT) among primary open angle glaucoma (POAG) and ocular hypertension (OHT) versus normal population. Setting and Design: Single centric, cross-sectional study done in 234 eyes. Materials and Methods: Patients were enrolled in three groups: POAG, ocular hypertensive and normal (40 patients in each group). After comprehensive ophthalmological examination, patients underwent standard automated perimetry and FD-OCT scan in optic nerve head and ganglion cell mode. The relationship was assessed by correlating ganglion cell complex (GCC) parameters with mean deviation. Results were compared with RNFL parameters. Statistical Analysis: Data were analyzed with SPSS, analysis of variance, t-test, Pearson's coefficient, and receiver operating curve. Results: All parameters showed strong correlation with visual field (P < 0.001). Inferior GCC had highest area under curve (AUC) for detecting glaucoma (0.827) in POAG from normal population. However, the difference was not statistically significant (P > 0.5) when compared with other parameters. None of the parameters showed significant diagnostic capability to detect OHT from normal population. In diagnosing early glaucoma from OHT and normal population, only inferior GCC had statistically significant AUC value (0.715). Conclusion: In this study, GCC and RNFL parameters showed equal predictive capability in perimetric versus normal group. In early stage, inferior GCC was the best parameter. In OHT population, single day cross-sectional imaging was not valuable. PMID:27221682
Arjunan, Sridhar Poosapadi; Kumar, Dinesh Kant
2010-10-21
Identifying finger and wrist flexion based actions using a single channel surface electromyogram (sEMG) can lead to a number of applications such as sEMG based controllers for near elbow amputees, human computer interface (HCI) devices for the elderly and for defence personnel. These are currently infeasible because classification of sEMG is unreliable when the level of muscle contraction is low and there are multiple active muscles. The presence of noise and cross-talk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active, such as during sustained wrist and finger flexion. This paper reports the use of fractal properties of sEMG to reliably identify individual wrist and finger flexions, overcoming the earlier shortcomings. The sEMG signal was recorded while the participant maintained pre-specified wrist and finger flexion movements for a period of time. Various established sEMG signal parameters such as root mean square (RMS), mean absolute value (MAV), variance (VAR) and waveform length (WL), and the proposed fractal features, fractal dimension (FD) and maximum fractal length (MFL), were computed. Multivariate analysis of variance (MANOVA) was conducted to determine the p value, indicative of the significance of the relationships between each of these parameters and the wrist and finger flexions. Classification accuracy was also computed using a trained artificial neural network (ANN) classifier to decode the desired subtle movements. The results indicate that the p value for the proposed feature set consisting of FD and MFL of single channel sEMG was 0.0001, while that of various combinations of the five established features ranged between 0.009 and 0.0172. From the accuracy of classification by the ANN, the average accuracy in identifying the wrist and finger flexions using the proposed feature set of single channel sEMG was 90%, while the average accuracy when using a combination of other features ranged between 58% and 73%. 
The results show that the MFL and FD of a single channel sEMG recorded from the forearm can be used to accurately identify a set of finger and wrist flexions even when the muscle activity is very weak. A comparison with other features demonstrates that this feature set offers a dramatic improvement in the accuracy of identification of the wrist and finger movements. It is proposed that such a system could be used to control a prosthetic hand or for a human computer interface.
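The established time-domain features and a Higuchi-style fractal estimate can be sketched as follows. The abstract does not give the authors' exact MFL formula, so taking the log of the curve length at the finest scale of Higuchi's algorithm is an assumption, and the low-amplitude Gaussian trace stands in for a weak-contraction sEMG recording.

```python
import numpy as np

def semg_features(x):
    """Established time-domain sEMG features named in the abstract."""
    return {
        "RMS": np.sqrt(np.mean(x**2)),
        "MAV": np.mean(np.abs(x)),
        "VAR": np.var(x, ddof=1),
        "WL": np.sum(np.abs(np.diff(x))),
    }

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of fractal dimension. Also returns the log of the
    curve length at the finest scale (k=1) as a stand-in for maximum
    fractal length (an assumption, not the authors' exact definition)."""
    n = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            d = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalisation: rescale to the full record, divide by k^2
            lengths.append(d * (n - 1) / ((len(idx) - 1) * k * k))
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    fd = np.polyfit(log_inv_k, log_L, 1)[0]   # slope of log L vs log(1/k)
    mfl = np.log10(np.exp(log_L[0]))          # length at the finest scale
    return fd, mfl

rng = np.random.default_rng(3)
signal = rng.normal(scale=0.05, size=2000)    # weak-contraction surrogate
fd, mfl = higuchi_fd(signal)
print(semg_features(signal), fd, mfl)
```

For uncorrelated noise the Higuchi estimate approaches a fractal dimension of 2; real sEMG during sustained weak flexion would sit somewhere between 1 and 2 and, per the paper, separate the movement classes better than the amplitude-based features.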
2010-01-01
Background Identifying finger and wrist flexion based actions using a single channel surface electromyogram (sEMG) can lead to a number of applications such as sEMG based controllers for near elbow amputees, human computer interface (HCI) devices for elderly and for defence personnel. These are currently infeasible because classification of sEMG is unreliable when the level of muscle contraction is low and there are multiple active muscles. The presence of noise and cross-talk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active such as during sustained wrist and finger flexion. This paper reports the use of fractal properties of sEMG to reliably identify individual wrist and finger flexion, overcoming the earlier shortcomings. Methods SEMG signal was recorded when the participant maintained pre-specified wrist and finger flexion movements for a period of time. Various established sEMG signal parameters such as root mean square (RMS), Mean absolute value (MAV), Variance (VAR) and Waveform length (WL) and the proposed fractal features: fractal dimension (FD) and maximum fractal length (MFL) were computed. Multi-variant analysis of variance (MANOVA) was conducted to determine the p value, indicative of the significance of the relationships between each of these parameters with the wrist and finger flexions. Classification accuracy was also computed using the trained artificial neural network (ANN) classifier to decode the desired subtle movements. Results The results indicate that the p value for the proposed feature set consisting of FD and MFL of single channel sEMG was 0.0001 while that of various combinations of the five established features ranged between 0.009 - 0.0172. 
Based on classification by the ANN, the average accuracy in identifying the wrist and finger flexions using the proposed feature set from single-channel sEMG was 90%, while the average accuracy using combinations of the other features ranged between 58% and 73%. Conclusions: The results show that the MFL and FD of a single-channel sEMG recorded from the forearm can be used to accurately identify a set of finger and wrist flexions even when the muscle activity is very weak. A comparison with other features demonstrates that this feature set offers a dramatic improvement in the accuracy of identification of wrist and finger movements. It is proposed that such a system could be used to control a prosthetic hand or as a human-computer interface. PMID:20964863
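The two proposed features can be made concrete with a small sketch. The fractal dimension of a 1-D signal is commonly estimated with Higuchi's method, and the maximum fractal length is often taken as the log of the curve length at the smallest scale; the implementation below is an illustrative stdlib-only version under those assumptions, not the authors' code.

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension (FD) of a 1-D signal by Higuchi's
    method, and return the maximum fractal length (MFL) as the log10 of
    the curve length at the smallest scale (k = 1)."""
    n = len(x)
    log_inv_k, log_len, l1 = [], [], 0.0
    for k in range(1, kmax + 1):
        lk = 0.0
        for m in range(k):
            pts = x[m::k]
            # normalized curve length of the sub-series starting at offset m
            dist = sum(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
            lk += dist * (n - 1) / ((len(pts) - 1) * k) / k
        lk /= k                      # average over the k offsets
        if k == 1:
            l1 = lk
        log_inv_k.append(math.log(1.0 / k))
        log_len.append(math.log(lk))
    # FD is the least-squares slope of log L(k) against log(1/k)
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_len) / len(log_len)
    fd = sum((a - mx) * (b - my) for a, b in zip(log_inv_k, log_len)) / \
         sum((a - mx) ** 2 for a in log_inv_k)
    return fd, math.log10(l1)
```

A smooth ramp yields FD near 1, while an irregular, noise-like signal pushes FD toward 2, which is the property the classifier exploits.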
NASA Astrophysics Data System (ADS)
Padhi, Amit; Mallick, Subhashis
2014-03-01
Inversion of band- and offset-limited single component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but adds to the complexity of the inversion algorithm because it requires simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, which is then followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters.
Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure for extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not limited to seismic inversion: it could be used to invert different data types that require not only multiple objectives but also multiple physics to describe them.
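The distinction between a weighted-sum compromise and the full Pareto-optimal set rests on the notion of non-domination, the building block of non-dominated sorting genetic algorithms. A generic filter for the first Pareto front (a sketch, not the authors' inversion code) looks like this:

```python
def pareto_front(points):
    """Return the indices of non-dominated points, all objectives minimized.

    A point p is dominated by q if q is no worse in every objective and
    strictly better in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = False
        for j, q in enumerate(points):
            if j != i and all(a <= b for a, b in zip(q, p)) \
                      and any(a < b for a, b in zip(q, p)):
                dominated = True
                break
        if not dominated:
            front.append(i)
    return front
```

A weighted-sum optimizer would return a single point from this front, whereas the multi-objective approach estimates the whole set.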
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, the what-if and the goal-seeking problem, are explained and defined using an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single-run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
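A standard stochastic-approximation scheme for such goal-seeking problems is the Robbins-Monro iteration, which drives a noisy response toward a target with a decaying gain. The following is a generic illustration under the assumption that the response increases with the input on average; it does not reproduce the paper's specific algorithm:

```python
import random

def robbins_monro(noisy_response, target, theta0, n_iter=4000, gain=1.0):
    """Seek the input theta for which E[noisy_response(theta)] == target.

    Uses the decaying step gain/k, so noise averages out while the
    iterate still reaches the root (Robbins-Monro conditions)."""
    theta = theta0
    for k in range(1, n_iter + 1):
        theta -= (gain / k) * (noisy_response(theta) - target)
    return theta
```

For example, with a noisy linear response 2x plus Gaussian noise and a target of 10, the iterate settles near x = 5.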
Earthquake Early Warning: New Strategies for Seismic Hardware
NASA Astrophysics Data System (ADS)
Allardice, S.; Hill, P.
2017-12-01
Implementing Earthquake Early Warning System (EEWS) triggering algorithms in seismic networks has been a hot topic of discussion for some years now. With digitizer technology now available, such as the Güralp Minimus, with an average delay time (latency) of 40-60 ms from earthquake origin to issuing an alert, the next step is to provide network operators with a simple interface for on-board parameter calculation at a seismic station. A voting mechanism is implemented on board, which mitigates the risk of false positives being communicated. Each Minimus can be configured with a `score' from various sources, i.e. the Z channel of a seismometer, the N/S and E/W channels of an accelerometer, and the MEMS sensors inside the Minimus. If the score exceeds the set threshold, an alert is sent to the `Master Minimus'. The Master Minimus within the network is also configured to determine when the alert should be issued, e.g. only after at least three stations have triggered. Industry-standard algorithms focus on the calculation of Peak Ground Acceleration (PGA), Peak Ground Velocity (PGV), Peak Ground Displacement (PGD) and C. Calculating these single-station parameters on board, so that only the results are streamed, could help network operators with possible issues such as restricted bandwidth. Developments on the Minimus allow these parameters to be calculated and distributed through the Common Alert Protocol (CAP), the XML-based data format used for exchanging and describing public warnings and emergencies. Whenever the trigger conditions are met, the Minimus can send a signed UDP packet to the configured CAP receiver, which can then send the alert via SMS, e-mail or CAP forwarding. Increasing network redundancy is also a consideration when developing these features; therefore the forwarded CAP message can be sent to multiple destinations.
This allows for a hierarchical approach by which the single-station (or network) parameters can be streamed to another Minimus, a data centre, or both, so that there is no single point of failure. Developments on the Güralp Minimus to calculate these parameters on board and stream single-station results, combined with ultra-low latency, represent the next generation of EEWS and Güralp's contribution to the community.
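The two-stage voting described above can be sketched as a pair of threshold checks; the source names, scores, and thresholds here are illustrative assumptions, not Güralp's actual configuration schema:

```python
def station_triggered(source_scores, threshold):
    """A station triggers when the summed votes from its sources
    (e.g. seismometer Z, accelerometer N/S and E/W, on-board MEMS)
    reach the configured threshold."""
    return sum(source_scores.values()) >= threshold

def network_alert(station_votes, min_stations=3):
    """The master unit issues an alert only when enough stations have
    triggered, mitigating single-station false positives."""
    return sum(station_votes) >= min_stations
```

Requiring agreement both across sensors within a station and across stations within the network is what suppresses spurious triggers from a single noisy channel.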
Topology and geometry of the dark matter web: A multi-stream view
NASA Astrophysics Data System (ADS)
Ramachandra, Nesar S.; Shandarin, Sergei F.
2017-05-01
Topological connections in the single-streaming voids and multistreaming filaments and walls reveal a cosmic web structure different from traditional mass density fields. A single void structure not only percolates the multistream field in all the directions, but also occupies over 99 per cent of all the single-streaming regions. Sub-grid analyses on scales smaller than simulation resolution reveal tiny pockets of voids that are isolated by membranes of the structure. For the multistreaming excursion sets, the percolating structure is significantly thinner than the filaments in overdensity excursion approach. Hessian eigenvalues of the multistream field are used as local geometrical indicators of dark matter structures. Single-streaming regions have most of the zero eigenvalues. Parameter-free conditions on the eigenvalues in the multistream region may be used to delineate primitive geometries with concavities corresponding to filaments, walls and haloes.
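The use of Hessian eigenvalues as local geometrical indicators can be illustrated with a small classifier. The counting rule below (number of eigenvalues above a threshold mapping to void, wall, filament, halo) is a common T-web-style convention assumed for illustration, not necessarily the paper's exact parameter-free criterion; the eigenvalues of the symmetric 3x3 Hessian are obtained analytically:

```python
import math

def sym3_eigvals(a):
    """Eigenvalues of a symmetric 3x3 matrix via the trigonometric formula."""
    p1 = a[0][1] ** 2 + a[0][2] ** 2 + a[1][2] ** 2
    if p1 == 0:  # already diagonal
        return sorted((a[0][0], a[1][1], a[2][2]), reverse=True)
    q = (a[0][0] + a[1][1] + a[2][2]) / 3.0
    p2 = (a[0][0] - q) ** 2 + (a[1][1] - q) ** 2 + (a[2][2] - q) ** 2 + 2 * p1
    p = math.sqrt(p2 / 6.0)
    b = [[(a[i][j] - (q if i == j else 0.0)) / p for j in range(3)] for i in range(3)]
    detb = (b[0][0] * (b[1][1] * b[2][2] - b[1][2] * b[2][1])
            - b[0][1] * (b[1][0] * b[2][2] - b[1][2] * b[2][0])
            + b[0][2] * (b[1][0] * b[2][1] - b[1][1] * b[2][0]))
    r = max(-1.0, min(1.0, detb / 2.0))
    phi = math.acos(r) / 3.0
    e1 = q + 2 * p * math.cos(phi)
    e3 = q + 2 * p * math.cos(phi + 2 * math.pi / 3)
    return [e1, 3 * q - e1 - e3, e3]

def classify(hessian, lam_th=0.0):
    """Count eigenvalues above the threshold: 0 void, 1 wall, 2 filament, 3 halo."""
    n_pos = sum(1 for e in sym3_eigvals(hessian) if e > lam_th)
    return ["void", "wall", "filament", "halo"][n_pos]
```

In practice the Hessian would be evaluated on the multistream grid by finite differences; here only the classification step is shown.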
Design considerations for eye-safe single-aperture laser radars
NASA Astrophysics Data System (ADS)
Starodubov, D.; McCormick, K.; Volfson, L.
2015-05-01
The design considerations for low-cost, shock-resistant, compact and efficient laser radars and ranging systems are discussed. The reviewed single-optical-aperture approach allows the size, weight and power of the system to be reduced. Additional design benefits include improved stability, reliability and rigidity of the overall system. The proposed modular architecture provides a simplified way of varying the performance parameters of the range finder product family by selecting sets of specific illumination and detection modules. The performance and operational challenges are presented. The implementation of non-reciprocal optical elements is considered. The cross-talk between illumination and detection channels in the single-aperture design is reviewed. 3D imaging capability for ranging applications is considered. The simplified assembly and testing process for single-aperture range finders, which allows the design to be mass produced, is discussed. The eye safety of range finder operation is summarized.
Rosado-Souza, Laise; Scossa, Federico; Chaves, Izabel S; Kleessen, Sabrina; Salvador, Luiz F D; Milagre, Jocimar C; Finger, Fernando; Bhering, Leonardo L; Sulpice, Ronan; Araújo, Wagner L; Nikoloski, Zoran; Fernie, Alisdair R; Nunes-Nesi, Adriano
2015-09-01
Collectively, the results presented improve upon the utility of an important genetic resource and attest to a complex genetic basis for differences in both leaf metabolism and fruit morphology between natural populations. Diversity of accessions within the same species provides an alternative method to identify physiological and metabolic traits that have large effects on growth regulation, biomass and fruit production. Here, we investigated physiological and metabolic traits as well as parameters related to plant growth and fruit production of 49 phenotypically diverse pepper accessions of Capsicum chinense grown ex situ under controlled conditions. Although single-trait analysis identified up to seven distinct groups of accessions, working with the whole data set by multivariate analyses allowed the separation of the 49 accessions in three clusters. Using all 23 measured parameters and data from the geographic origin for these accessions, positive correlations between the combined phenotypes and geographic origin were observed, supporting a robust pattern of isolation-by-distance. In addition, we found that fruit set was positively correlated with photosynthesis-related parameters, which, however, do not explain alone the differences in accession susceptibility to fruit abortion. Our results demonstrated that, although the accessions belong to the same species, they exhibit considerable natural intraspecific variation with respect to physiological and metabolic parameters, presenting diverse adaptation mechanisms and being a highly interesting source of information for plant breeders. This study also represents the first study combining photosynthetic, primary metabolism and growth parameters for Capsicum to date.
Chawla, A; Mukherjee, S; Karthikeyan, B
2009-02-01
The objective of this study is to identify the dynamic material properties of human passive muscle tissue at the strain rates relevant to automobile crashes. A novel methodology involving a genetic algorithm (GA) and the finite element method is implemented to estimate the material parameters by inverse mapping of impact test data. Isolated unconfined impact tests at average strain rates ranging from 136 s⁻¹ to 262 s⁻¹ are performed on muscle tissue. Passive muscle tissue is modelled as an isotropic, linear viscoelastic material using the three-element Zener model available in the PAMCRASH™ explicit finite element software. In the GA-based identification process, fitness values are calculated by comparing the estimated finite element forces with the measured experimental forces. Linear viscoelastic material parameters (bulk modulus, short-term shear modulus and long-term shear modulus) are thus identified at strain rates of 136 s⁻¹, 183 s⁻¹ and 262 s⁻¹ for modelling muscle. The optimal parameters extracted in this study are comparable with parameters reported in the literature. Bulk modulus and short-term shear modulus are found to be more influential than long-term shear modulus in predicting the stress-strain response at the considered strain rates. Variations within the sets of parameters identified at different strain rates indicate the need for a new or improved material model capable of capturing the strain-rate dependency of passive muscle response with a single set of material parameters over a wide range of strain rates.
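The inverse-identification loop can be conveyed with a toy genetic algorithm fitting a single-branch stress-relaxation function G(t) = G_inf + G_dec * exp(-beta*t). The model, bounds, and GA operators below are deliberate simplifications standing in for the PAMCRASH/Zener setup of the study; "measured" data are synthetic:

```python
import math, random

def relaxation(params, t):
    """Toy stand-in for the viscoelastic response: g_inf + g_dec * exp(-beta*t)."""
    g_inf, g_dec, beta = params
    return g_inf + g_dec * math.exp(-beta * t)

def sse(params, data):
    """Fitness surrogate: sum of squared errors against 'measured' forces."""
    return sum((relaxation(params, t) - g) ** 2 for t, g in data)

def ga_identify(data, bounds, pop=60, gens=150, seed=1):
    rnd = random.Random(seed)
    population = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for gen in range(gens):
        population.sort(key=lambda p: sse(p, data))
        elite = population[: pop // 4]           # elitist selection
        sigma = 0.97 ** gen                      # shrinking mutation scale
        children = [list(e) for e in elite]
        while len(children) < pop:
            a, b = rnd.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            i = rnd.randrange(len(child))                 # single-gene mutation
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rnd.gauss(0, 0.1 * (hi - lo) * sigma)))
            children.append(child)
        population = children
    return min(population, key=lambda p: sse(p, data))
```

In the actual study, each fitness evaluation is a full explicit finite element run rather than a closed-form curve, but the selection/crossover/mutation loop has the same shape.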
Absolute Isotopic Abundance Ratios and the Accuracy of Δ47 Measurements
NASA Astrophysics Data System (ADS)
Daeron, M.; Blamart, D.; Peral, M.; Affek, H. P.
2016-12-01
Conversion from raw IRMS data to clumped isotope anomalies in CO2 (Δ47) relies on four external parameters: the (13C/12C) ratio of VPDB, the (17O/16O) and (18O/16O) ratios of VSMOW (or VPDB-CO2), and the slope of the triple oxygen isotope line (λ). Here we investigate the influence that these isotopic parameters exert on measured Δ47 values, using real-world data corresponding to 7 months of measurements; simulations based on randomly generated data; precise comparisons between water-equilibrated CO2 samples and between carbonate standards believed to share quasi-identical Δ47 values; and reprocessing of two carbonate calibration data sets with different slopes of Δ47 versus T. Using different sets of isotopic parameters generally produces systematic offsets as large as 0.04 ‰ in final Δ47 values. What is more, even using a single set of isotopic parameters can produce intra- and inter-laboratory discrepancies in final Δ47 values if some of these parameters are inaccurate. Depending on the isotopic compositions of the standards used for conversion to "absolute" values, these errors should correlate strongly with either δ13C or δ18O, or more weakly with both. Based on measurements of samples expected to display identical Δ47 values, such as 25°C water-equilibrated CO2 with different carbon and oxygen isotope compositions, or the high-temperature standards ETH-1 and ETH-2, we conclude that the isotopic parameters used so far in most clumped isotope studies produce large, systematic errors controlled by the relative bulk isotopic compositions of samples and standards, which should be one of the key factors responsible for current inter-laboratory discrepancies. By contrast, the isotopic parameters of Brand et al. [2010] appear to yield accurate Δ47 values regardless of bulk isotopic composition. References: Brand, Assonov and Coplen [2010], http://dx.doi.org/10.1351/PAC-REP-09-01-05
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
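The role of a homotopy parameter that nests the extreme immunity assumptions can be sketched with a minimal compartment model, where sigma scales the susceptibility of recovered hosts: sigma = 0 recovers solid lifelong immunity (SIR-like) and sigma = 1 no immunity (SIS-like). This is a toy with invented rates, not the paper's eight-model family:

```python
def simulate(sigma, beta=0.5, gamma=0.2, days=400, dt=0.1):
    """Euler integration of S (never infected), I (infected), R (recovered,
    re-susceptible with relative susceptibility sigma)."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(days / dt)):
        foi = beta * i                       # force of infection
        new_si = foi * s * dt                # first infections
        new_ri = sigma * foi * r * dt        # re-infection of partially immune
        rec = gamma * i * dt                 # recoveries
        s -= new_si
        i += new_si + new_ri - rec
        r += rec - new_ri
    return s, i, r
```

At the two extremes the long-run behavior differs qualitatively: the epidemic burns out under solid immunity, but settles near the endemic level 1 - gamma/beta when immunity is absent, which is why neither extreme could reproduce the observed hRSV dynamics.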
SCOPE: a web server for practical de novo motif discovery.
Carlson, Jonathan M; Chakravarty, Arijit; DeZiel, Charles E; Gross, Robert H
2007-07-01
SCOPE is a novel parameter-free method for the de novo identification of potential regulatory motifs in sets of coordinately regulated genes. The SCOPE algorithm combines the output of three component algorithms, each designed to identify a particular class of motifs. Using an ensemble learning approach, SCOPE identifies the best candidate motifs from its component algorithms. In tests on experimentally determined datasets, SCOPE identified motifs with a significantly higher level of accuracy than a number of other web-based motif finders run with their default parameters. Because SCOPE has no adjustable parameters, the web server has an intuitive interface, requiring only a set of gene names or FASTA sequences and a choice of species. The most significant motifs found by SCOPE are displayed graphically on the main results page with a table containing summary statistics for each motif. Detailed motif information, including the sequence logo, PWM, consensus sequence and specific matching sites can be viewed through a single click on a motif. SCOPE's efficient, parameter-free search strategy has enabled the development of a web server that is readily accessible to the practising biologist while providing results that compare favorably with those of other motif finders. The SCOPE web server is at
Design of integration-ready metasurface-based infrared absorbers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogando, Karim, E-mail: karim@cab.cnea.gov.ar; Pastoriza, Hernán
2015-07-28
We introduce an integration-ready design of a metamaterial infrared absorber, highly compatible with many kinds of fabrication processes. We present the results of an exhaustive experimental characterization, including an analysis of the effects of single meta-atom geometrical parameters and of the collective arrangement. We compare the results with the theoretical interpretations proposed in the literature. Based on these results, we develop a set of practical design rules for metamaterial absorbers in the infrared region.
Total Dose Effects on Single Event Transients in Linear Bipolar Systems
NASA Technical Reports Server (NTRS)
Buchner, Stephen; McMorrow, Dale; Bernard, Muriel; Roche, Nicholas; Dusseau, Laurent
2008-01-01
Single Event Transients (SETs) originating in linear bipolar integrated circuits are known to undermine the reliability of electronic systems operating in the radiation environment of space. Ionizing particle radiation produces a variety of SETs in linear bipolar circuits. The extent to which these SETs threaten system reliability depends on both their shapes (amplitude and width) and their threshold energies. In general, SETs with large amplitudes and widths are the most likely to propagate from a bipolar circuit's output through a subsystem. The danger these SETs pose is that, if they become latched in a follow-on circuit, they could cause an erroneous system response. Long-term exposure of linear bipolar circuits to particle radiation produces total ionizing dose (TID) and/or displacement damage dose (DDD) effects that are characterized by a gradual degradation in some of the circuit's electrical parameters. For example, an operational amplifier's gain-bandwidth product is reduced by exposure to ionizing radiation, and it is this reduction that contributes to the distortion of the SET shapes. In this paper, we compare SETs produced in a pristine LM124 operational amplifier with those produced in one exposed to ionizing radiation for three different operating configurations - voltage follower (VF), inverter with gain (IWG), and non-inverter with gain (NIWG). Each configuration produces a unique set of transient shapes that change following exposure to ionizing radiation. An important finding is that the changes depend on operating configuration; some SETs decrease in amplitude, some remain relatively unchanged, some become narrower and some become broader.
Next generation lightweight mirror modeling software
NASA Astrophysics Data System (ADS)
Arnold, William R.; Fitzgerald, Matthew; Rosa, Rubin Jaca; Stahl, H. Philip
2013-09-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep-core low-temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper introduces model generation software developed under NASA sponsorship for the design of both terrestrial and space-based mirrors. The software deals with any current mirror manufacturing technique, handles single substrates or multiple arrays of substrates, and can merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 3-5 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any text editor; all the shell thickness parameters and suspension spring rates are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier.
With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETs, and in NASTRAN GRIDPOINT SETs. This makes integration of these models into larger telescope or satellite models easier.
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a binary (zero-one) covariate to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov chain Monte Carlo (MCMC) method, carried out with Bayesian inference Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, and hence the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets, with 20 simulations from each data set, is used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice within the framework of model-based Bayesian inference using the MCMC method.
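The flavor of MCMC estimation of a scale parameter under a Gaussian prior can be conveyed with a minimal random-walk Metropolis sampler. BUGS itself uses adaptive rejection sampling on the log-concave conditional, and the paper's model is a probit regression; the zero-mean normal likelihood below is only an illustrative stand-in:

```python
import math, random

def log_post(sigma, data, prior_mu=1.0, prior_sd=1.0):
    """Log posterior for the scale of zero-mean normal data, with a
    Gaussian prior on sigma truncated to sigma > 0."""
    if sigma <= 0:
        return -math.inf
    ll = sum(-0.5 * (x / sigma) ** 2 - math.log(sigma) for x in data)
    lp = -0.5 * ((sigma - prior_mu) / prior_sd) ** 2
    return ll + lp

def metropolis(data, n=5000, step=0.1, seed=2):
    """Random-walk Metropolis chain over the scale parameter."""
    rnd = random.Random(seed)
    sigma, samples = 1.0, []
    lp = log_post(sigma, data)
    for _ in range(n):
        prop = sigma + rnd.gauss(0.0, step)
        lpp = log_post(prop, data)
        if math.log(max(rnd.random(), 1e-300)) < lpp - lp:
            sigma, lp = prop, lpp
        samples.append(sigma)
    return samples
```

With enough data the posterior concentrates near the true scale, and the prior contributes only a small shrinkage, which is what makes the Gaussian prior on the scale parameter practical.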
NASA Astrophysics Data System (ADS)
Tuttle, William D.; Thorington, Rebecca L.; Viehland, Larry A.; Breckenridge, W. H.; Wright, Timothy G.
2018-03-01
Accurate interatomic potentials were calculated for the interaction of a singly charged carbon cation, C⁺, with a single rare gas atom, RG (RG = Ne-Xe). The RCCSD(T) method and basis sets of quadruple-ζ and quintuple-ζ quality were employed; each interaction energy was counterpoise corrected and extrapolated to the basis set limit. The lowest C⁺(²P) electronic term of the carbon cation was considered, and the interatomic potentials were calculated for the diatomic terms that arise from it: ²Π and ²Σ⁺. Additionally, the interatomic potentials for the respective spin-orbit levels were calculated, and the effect on the spectroscopic parameters was examined. In doing this, anomalously large spin-orbit splittings for RG = Ar-Xe were found, and these were investigated using multi-reference configuration interaction calculations. The latter indicated a small amount of RG → C⁺ electron transfer, which was used to rationalize the observations. This is taken as evidence of an incipient chemical interaction, which was also examined via contour plots, Birge-Sponer plots and various population analyses across the C⁺-RG series (RG = He-Xe), with the latter showing unexpected results. Trends in several spectroscopic parameters were examined as a function of the increasing atomic number of the RG atom. Finally, each set of RCCSD(T) potentials was employed, including spin-orbit coupling, to calculate the transport coefficients for C⁺ in RG, and the results were compared with the limited available data. This article is part of the theme issue `Modern theoretical chemistry'.
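The quadruple-ζ/quintuple-ζ extrapolation to the basis-set limit is typically done with a two-point inverse-cube formula of the Helgaker type; the abstract does not name the exact scheme, so the formula below is an assumed, standard choice:

```python
def cbs_extrapolate(e_x, x, e_y, y):
    """Two-point 1/X^3 extrapolation to the complete-basis-set limit.

    Assumes E_X = E_CBS + A / X^3 for cardinal numbers x and y,
    and solves the pair of equations for E_CBS."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)
```

If the energies truly follow the assumed 1/X³ form, the formula recovers the limit exactly; in practice it removes most of the residual basis-set incompleteness error.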
Effect of an Additional, Parallel Capacitor on Pulsed Inductive Plasma Accelerator Performance
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Sivak, Amy D.; Balla, Joseph V.
2011-01-01
A model of pulsed inductive plasma thrusters consisting of a set of coupled circuit equations and a one-dimensional momentum equation has been used to study the effects of adding a second, parallel capacitor into the system. The equations were nondimensionalized, permitting the recovery of several already-known scaling parameters and leading to the identification of a parameter that is unique to the particular topology studied. The current rise rate through the inductive acceleration coil was used as a proxy measurement of the effectiveness of inductive propellant ionization since higher rise rates produce stronger, potentially better ionizing electric fields at the coil face. Contour plots representing thruster performance (exhaust velocity and efficiency) and current rise rate in the coil were generated numerically as a function of the scaling parameters. The analysis reveals that when the value of the second capacitor is much less than the first capacitor, the performance of the two-capacitor system approaches that of the single-capacitor system. In addition, as the second capacitor is decreased in value the current rise rate can grow to be twice as great as the rise rate attained in the single capacitor case.
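The use of the coil current rise rate as a proxy can be illustrated with a bare series-RLC discharge integrated explicitly; at the instant of switching, with zero initial current, dI/dt equals V0/L. The component values are arbitrary and the paper's two-capacitor topology is not modeled here:

```python
def discharge(v0=5000.0, c=1e-4, l=1e-7, r=5e-3, dt=1e-9, steps=2000):
    """Euler integration of a capacitor discharging into a coil
    (series RLC). Returns the peak current rise rate in A/s, which
    occurs at t = 0 and equals v0 / l."""
    v, i, max_didt = v0, 0.0, 0.0
    for _ in range(steps):
        didt = (v - r * i) / l       # coil voltage over inductance
        max_didt = max(max_didt, didt)
        i += didt * dt               # current through the coil
        v -= (i / c) * dt            # capacitor voltage droop
    return max_didt
```

This makes the scaling intuition concrete: a stiffer (lower-inductance) drive circuit raises the initial dI/dt and hence the induced electric field at the coil face.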
Evaluation of RAPID for a UNF cask benchmark problem
NASA Astrophysics Data System (ADS)
Mascolino, Valerio; Haghighat, Alireza; Roskoff, Nathan J.
2017-09-01
This paper examines the accuracy and performance of the RAPID (Real-time Analysis for Particle transport and In-situ Detection) code system for the simulation of a used nuclear fuel (UNF) cask. RAPID is capable of determining eigenvalue, subcritical multiplication, and pin-wise, axially-dependent fission density throughout a UNF cask. We study the source convergence based on an analysis of the different parameters used in an eigenvalue calculation in the MCNP Monte Carlo code. For this study, we consider a single assembly surrounded by absorbing plates with reflective boundary conditions. Based on the best combination of eigenvalue parameters, a reference MCNP solution for the single assembly is obtained. RAPID results are in excellent agreement with the reference MCNP solutions, while requiring significantly less computation time (i.e., minutes vs. days). A similar set of eigenvalue parameters is used to obtain a reference MCNP solution for the whole UNF cask. Because of time limitations, the MCNP results near the cask boundaries have significant uncertainties. Except for these, the RAPID results are in excellent agreement with the MCNP predictions, and its computation time is significantly lower: 35 seconds on one core versus 9.5 days on 16 cores.
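Source convergence in an eigenvalue calculation can be illustrated with power iteration on a toy two-region fission matrix, the kind of precomputed-response formulation that underlies fission-matrix methods; the matrix values here are invented:

```python
def power_iteration(f, n_iter=200):
    """Power iteration on a fission matrix F.

    The dominant eigenvalue plays the role of k-eff; the converged
    vector is the normalized fission source distribution."""
    n = len(f)
    src = [1.0 / n] * n              # flat initial source guess
    k = 1.0
    for _ in range(n_iter):
        new = [sum(f[i][j] * src[j] for j in range(n)) for i in range(n)]
        k = sum(new)                 # production per source neutron
        src = [x / k for x in new]   # renormalize the source
    return k, src
```

The number of iterations needed before the source stops changing is exactly the "source convergence" question studied with MCNP's eigenvalue parameters (skipped cycles, histories per cycle).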
Generalized gas-solid adsorption modeling: Single-component equilibria
Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas; ...
2015-01-07
Over the last several decades, modeling of gas-solid adsorption at equilibrium has generally been accomplished through the use of isotherms such as the Freundlich, Langmuir, Tóth, and other similar models. While these models are relatively easy to adapt to experimental data, their simplicity limits their generality across many different data sets. This limitation forces engineers and scientists to test each model in order to evaluate which one best describes their data. Additionally, the parameters of these models all have different physical interpretations, which may affect how they can be further extended into kinetic, thermodynamic, and/or mass transfer models for engineering applications. Therefore, it is paramount to adopt not only a more general isotherm model, but also a concise methodology to reliably optimize for and obtain the parameters of that model. A model of particular interest is the Generalized Statistical Thermodynamic Adsorption (GSTA) isotherm. The GSTA isotherm has enormous flexibility, which could potentially be used to describe a variety of different adsorption systems, but utilizing this model can be fairly difficult because of that flexibility. To circumvent this complication, a comprehensive methodology and computer code have been developed that can perform a full equilibrium analysis of adsorption data for any gas-solid system using the GSTA model. The code has been developed in C/C++ and utilizes a Levenberg-Marquardt algorithm to handle the non-linear optimization of the model parameters. Since the GSTA model has an adjustable number of parameters, the code iterates through all plausible numbers of parameters for each data set and then returns the best solution based on a set of scrutiny criteria. Data sets at different temperatures are analyzed serially, and linear correlations with temperature are then made for the parameters of the model.
The end result is a full set of optimal GSTA parameters, both dimensional and non-dimensional, as well as the corresponding thermodynamic parameters necessary to predict the behavior of the system at temperatures for which data were not available. It will be shown that this code, utilizing the GSTA model, was able to describe a wide variety of gas-solid adsorption systems at equilibrium. In addition, a physical interpretation of these results is provided, as well as an alternate derivation of the GSTA model, which is intended to reaffirm its physical meaning.
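The GSTA functional form is written in slightly different ways across the literature; the sketch below assumes one common normalization, q(p) = (q_max/m) * sum(n*K_n*p^n) / (1 + sum(K_n*p^n)), which reduces to the Langmuir isotherm when a single site parameter is used (m = 1):

```python
def gsta(p, q_max, ks):
    """Evaluate an assumed GSTA isotherm at pressure p.

    q_max: maximum adsorption capacity; ks: list [K_1, ..., K_m] of
    equilibrium parameters, one per adsorption-site index n = 1..m."""
    m = len(ks)
    num = sum((n + 1) * k * p ** (n + 1) for n, k in enumerate(ks))
    den = 1.0 + sum(k * p ** (n + 1) for n, k in enumerate(ks))
    return q_max / m * num / den
```

The adjustable number of K_n parameters is what the described code iterates over: each candidate m is fit (there by Levenberg-Marquardt) and the winner is selected by the scrutiny criteria.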
NASA Astrophysics Data System (ADS)
Mavkov, B.; Witrant, E.; Prieur, C.; Maljaars, E.; Felici, F.; Sauter, O.; the TCV-Team
2018-05-01
In this paper, model-based closed-loop algorithms are derived for distributed control of the inverse of the safety factor profile and the plasma pressure parameter β of the TCV tokamak. The simultaneous control of the two plasma quantities is performed by combining two different control methods. The control design for the plasma safety factor is based on an infinite-dimensional setting using Lyapunov analysis for partial differential equations, while the control of the plasma pressure parameter is designed using control techniques for single-input single-output systems. The performance and robustness of the proposed controller are analyzed in simulations using the fast plasma transport simulator RAPTOR. The control is then implemented and tested in experiments in TCV L-mode discharges using the RAPTOR model predicted estimates for the q-profile. The distributed control in TCV is performed using one co-current and one counter-current electron cyclotron heating actuator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; Lu, Z; MacMahon, H
Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35 mm diameter PMMA spheres was placed adjacent to the phantom (in the x-ray path). Due to magnification, the projected simulated nodules had a diameter of approximately 7.5 mm in the radiographs. The images were processed using one of GE's default chest settings (Factory3) and reprocessed by varying the "Edge" and "Tissue Contrast" processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer (CHO) with 10 Laguerre-Gauss channel functions. Results: The "Edge" and "Tissue Contrast" parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for "Edge" = 4 and "Tissue Contrast" = −0.15. In general, detectability tended to decrease as "Edge" was increased and as "Tissue Contrast" was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography.
While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
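The estimation step described above can be sketched in a few lines. This is a hedged stand-in, not the authors' code: the grid size, channel width `a`, signal blob, and white-noise backgrounds are illustrative assumptions, and only the channelized-Hotelling arithmetic itself follows the standard recipe.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(size, n_channels, a):
    # Laguerre-Gauss channel functions sampled on a size x size grid.
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    u = 2.0 * np.pi * (x**2 + y**2) / a**2
    chans = []
    for n in range(n_channels):
        coef = np.zeros(n + 1)
        coef[n] = 1.0                      # select the n-th Laguerre polynomial
        ch = np.exp(-u / 2.0) * lagval(u, coef)
        chans.append(ch.ravel() / np.linalg.norm(ch))
    return np.array(chans)

def cho_detectability(signal, backgrounds, channels):
    # Channelized Hotelling observer: channelize the data (v = T x),
    # estimate the channel covariance S from background samples, and
    # return d' = sqrt(dv^T S^-1 dv) for the mean signal profile dv.
    vb = backgrounds @ channels.T
    dv = channels @ signal
    S = np.cov(vb, rowvar=False)
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

rng = np.random.default_rng(0)
T = lg_channels(32, 10, 12.0)
backgrounds = rng.normal(size=(180, 32 * 32))    # stand-in lung backgrounds
yy, xx = np.mgrid[:32, :32] - 15.5
signal = 0.5 * np.exp(-(xx**2 + yy**2) / 18.0)   # stand-in nodule blob
d = cho_detectability(signal.ravel(), backgrounds, T)
d2 = cho_detectability(2.0 * signal.ravel(), backgrounds, T)
```

Because d' is linear in the signal amplitude for a fixed covariance, doubling the blob amplitude exactly doubles the CHO detectability, which is a convenient sanity check.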
Decomposition of Fuzzy Soft Sets with Finite Value Spaces
Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad
2014-01-01
The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342
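The t-level construction is straightforward to state in code. The sketch below (the cell-phone-style example data are invented for illustration) computes t-level soft sets and the finite set of grade values at which the decomposition can change; with a finite value space, only finitely many distinct t-level soft sets exist.

```python
def t_level_soft_set(F, t):
    # F maps each parameter to a fuzzy set given as {object: grade};
    # the t-level soft set keeps, per parameter, the objects whose
    # membership grade is at least t.
    return {a: {x for x, mu in fa.items() if mu >= t} for a, fa in F.items()}

def threshold_set(F):
    # The grades themselves are the only thresholds where the
    # decomposition can change (finite value space).
    return sorted({mu for fa in F.values() for mu in fa.values()})

# Invented example: rating two phones ("p1".."p3") under two parameters.
F = {"camera": {"p1": 0.9, "p2": 0.4, "p3": 0.7},
     "battery": {"p1": 0.3, "p2": 0.8, "p3": 0.8}}
```

Between two consecutive thresholds every t yields the same level soft set, which is the computational content of the decomposition theorems.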
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-03-01
Inequality indices are widely applied in economics and in the social sciences as quantitative measures of the socioeconomic inequality of human societies. The application of inequality indices extends to size-distributions at large, where these indices can be used as general gauges of statistical heterogeneity. Moreover, as inequality indices are plentiful, arrays of such indices facilitate high-detail quantification of statistical heterogeneity. In this paper we elevate from arrays of inequality indices to inequality spectra: continuums of inequality indices that are parameterized by a single control parameter. We present a general methodology of constructing Lorenz-based inequality spectra, apply the general methodology to establish four sets of inequality spectra, investigate the properties of these sets, and show how these sets generalize known inequality gauges such as the Gini index, the extended Gini index, the Rényi index, and hill curves.
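A Lorenz-based one-parameter spectrum can be sketched numerically. The weighting below follows Yitzhaki's extended Gini, one of the gauges the paper generalizes; the trapezoidal quadrature and grid size are implementation choices, not part of the paper's construction.

```python
import numpy as np

def lorenz(x):
    # Empirical Lorenz curve of a nonnegative sample.
    xs = np.sort(np.asarray(x, float))
    L = np.concatenate([[0.0], np.cumsum(xs)]) / xs.sum()
    u = np.linspace(0.0, 1.0, xs.size + 1)
    return u, L

def extended_gini(x, nu=2.0, grid=20001):
    # One-parameter inequality spectrum (extended Gini):
    #   G(nu) = 1 - nu*(nu - 1) * integral (1 - u)^(nu - 2) * L(u) du,
    # where nu = 2 recovers the ordinary Gini index.
    u0, L0 = lorenz(x)
    u = np.linspace(0.0, 1.0, grid)
    L = np.interp(u, u0, L0)
    w = nu * (nu - 1.0) * (1.0 - u) ** (nu - 2.0)
    y = w * L
    return 1.0 - float(np.sum((y[1:] + y[:-1]) * np.diff(u)) / 2.0)

g_equal = extended_gini([1.0, 1.0, 1.0, 1.0])          # no inequality
g_conc = extended_gini([0.0, 0.0, 0.0, 1.0])           # maximal concentration
g2 = extended_gini([1.0, 2.0, 3.0, 10.0], nu=2.0)
g4 = extended_gini([1.0, 2.0, 3.0, 10.0], nu=4.0)
```

Raising the control parameter nu puts more weight on the low end of the distribution, so G(nu) grows with nu for an unequal sample.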
Jarvis, Stuart; Kovacs, Caroline; Briggs, Jim; Meredith, Paul; Schmidt, Paul E; Featherstone, Peter I; Prytherch, David R; Smith, Gary B
2015-02-01
The Royal College of Physicians (RCPL) National Early Warning Score (NEWS) escalates care to a doctor at NEWS values of ≥5 and when the score for any single vital sign is 3. We calculated the 24-h risk of serious clinical outcomes for vital-sign observation sets with NEWS values of 3, 4 and 5, separately determining risks when the score did/did not include a single component score of 3. We compared the workload generated by the RCPL's escalation protocol with that generated by escalating on the aggregate NEWS value alone. Aggregate NEWS values of 3 or 4 (n=142,282) formed 15.1% of all vital-sign sets measured; those containing a single vital sign scoring 3 (n=36,207) constituted 3.8% of all sets. Aggregate NEWS values of either 3 or 4 with a component score of 3 carry significantly lower risks (OR: 0.26 and 0.53) than an aggregate value of 5 (OR: 1.0). Escalating care to a doctor when any single component of NEWS scores 3, compared to escalating only when the aggregate NEWS value is ≥5, would have increased doctors' workload by 40% with only a small increase in detected adverse outcomes, from 2.99 to 3.08 per day (a 3% improvement in detection). The recommended NEWS escalation protocol produces additional work for the bedside nurse and responding doctor that is disproportionate to the modest benefit in increased detection of adverse outcomes. It may have significant ramifications for efficient staff resource allocation, distort patient safety focus and risk alarm fatigue. Our findings suggest that the RCPL escalation guidance warrants review. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
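The two escalation rules compared in the study are easy to state in code; the component-score lists below are invented for illustration (seven vital-sign components, each scored 0-3).

```python
def escalate_rcp(scores):
    # RCPL protocol: refer to a doctor when the aggregate NEWS is >= 5
    # OR any single vital sign scores 3.
    return sum(scores) >= 5 or 3 in scores

def escalate_aggregate_only(scores):
    # Alternative examined in the study: aggregate NEWS >= 5 alone.
    return sum(scores) >= 5
```

The workload difference reported above comes precisely from observation sets like `[3, 0, 0, 0, 0, 0, 0]`: they trigger the first rule but not the second.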
CMB constraints on the inflaton couplings and reheating temperature in α-attractor inflation
NASA Astrophysics Data System (ADS)
Drewes, Marco; Kang, Jin U.; Mun, Ui Ri
2017-11-01
We study reheating in α-attractor models of inflation in which the inflaton couples to other scalars or fermions. We show that the parameter space contains viable regions in which the inflaton couplings to radiation can be determined from the properties of CMB temperature fluctuations, in particular the spectral index. This may be the only way to measure these fundamental microphysical parameters, which shaped the universe by setting the initial temperature of the hot big bang and contain important information about the embedding of a given model of inflation into a more fundamental theory of physics. The method can be applied to other models of single field inflation.
pypet: A Python Toolkit for Data Management of Parameter Explorations
Meyer, Robert; Obermayer, Klaus
2016-01-01
pypet (Python parameter exploration toolkit) is a new multi-platform Python toolkit for managing numerical simulations. Sampling the space of model parameters is a key aspect of simulations and numerical experiments. pypet is designed to allow easy and arbitrary sampling of trajectories through a parameter space beyond simple grid searches. pypet collects and stores both simulation parameters and results in a single HDF5 file. This collective storage allows fast and convenient loading of data for further analyses. pypet provides various additional features such as multiprocessing and parallelization of simulations, dynamic loading of data, integration of git version control, and supervision of experiments via the electronic lab notebook Sumatra. pypet supports a rich set of data formats, including native Python types, Numpy and Scipy data, Pandas DataFrames, and BRIAN(2) quantities. Besides these formats, users can easily extend the toolkit to allow customized data types. pypet is a flexible tool suited for both short Python scripts and large scale projects. pypet's various features, especially the tight link between parameters and results, promote reproducible research in computational neuroscience and simulation-based disciplines. PMID:27610080
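pypet's core idea, that every explored parameter combination becomes one run whose parameters and results are stored together, can be illustrated without the library itself. The sketch below is not pypet's API: a plain cartesian product and a list of dicts stand in for pypet's trajectory and HDF5 storage.

```python
from itertools import product

def explore(run, param_grid):
    # Cartesian-product exploration: every combination of parameter
    # values becomes one run, and parameters and results are stored
    # together, keyed by run index.
    names = sorted(param_grid)
    runs = []
    for idx, values in enumerate(product(*(param_grid[n] for n in names))):
        params = dict(zip(names, values))
        runs.append({"run": idx, "parameters": params, "result": run(**params)})
    return runs

# Toy simulation: the "model" is just x * y.
trials = explore(lambda x, y: x * y, {"x": [1.0, 2.0], "y": [3.0, 4.0]})
```

pypet extends this pattern with arbitrary (non-grid) trajectories, multiprocessing, and persistent HDF5 storage.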
Singh, R.; Archfield, S.A.; Wagener, T.
2014-01-01
Daily streamflow information is critical for solving various hydrologic problems, though observations of continuous streamflow for model calibration are available at only a small fraction of the world's rivers. One approach to estimate daily streamflow at an ungauged location is to transfer rainfall–runoff model parameters calibrated at a gauged (donor) catchment to an ungauged (receiver) catchment of interest. Central to this approach is the selection of a hydrologically similar donor. No single metric or set of metrics of hydrologic similarity has been demonstrated to consistently select a suitable donor catchment. We design an experiment to diagnose the dominant controls on successful hydrologic model parameter transfer. We calibrate a lumped rainfall–runoff model to 83 stream gauges across the United States. All locations are USGS reference gauges with minimal human influence. Parameter sets from the calibrated models are then transferred to each of the other catchments, and the performance of the transferred parameters is assessed. This transfer experiment is carried out both at the scale of the entire US and for six geographic regions. We use classification and regression tree (CART) analysis to determine the relationship between catchment similarity and the performance of transferred parameters. Similarity is defined using physical/climatic catchment characteristics as well as streamflow response characteristics (signatures such as baseflow index and runoff ratio). Across the entire US, successful parameter transfer is governed by similarity in elevation and climate, and by high similarity in streamflow signatures. Controls vary across geographic regions, however: geology followed by drainage, topography and climate constitute the dominant similarity metrics in forested eastern mountains and plateaus, whereas agricultural land use relates most strongly to successful parameter transfer in the humid plains.
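Donor selection by similarity can be sketched as a nearest-neighbour search in normalized attribute space. The catchments and attributes below are invented for illustration, and the paper's CART analysis is not reproduced; this only shows the "pick the most similar gauged catchment" step.

```python
import numpy as np

def nearest_donor(receiver, donors):
    # Rank gauged donor catchments by Euclidean distance in a
    # min-max-normalized attribute space and return the most similar.
    names = list(donors)
    X = np.array([donors[n] for n in names], float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    d = np.linalg.norm((X - np.asarray(receiver, float)) / scale, axis=1)
    return names[int(np.argmin(d))]

# Invented attributes per catchment: [mean elevation (m), baseflow index]
donors = {"A": [100.0, 0.20], "B": [500.0, 0.90], "C": [120.0, 0.60]}
best = nearest_donor([110.0, 0.22], donors)
```

In the paper's terms, the choice of which attributes (physical/climatic versus streamflow signatures) enter the distance is exactly what determines whether transfer succeeds.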
Analyzing ROC curves using the effective set-size model
NASA Astrophysics Data System (ADS)
Samuelson, Frank W.; Abbey, Craig K.; He, Xin
2018-03-01
The Effective Set-Size model has been used to describe uncertainty in various signal detection experiments. The model regards an image as an effective number (M*) of searchable locations: a rational observer behaves as if searching M* independent locations, treating each as a location-known-exactly detection task with average signal detectability d' and following signal detection theory at each location. The location-known-exactly detectability d' and the effective number of independent locations M* thus fully characterize search performance. In this model, the image rating in a single-response task is assumed to be the maximum response that the observer would assign across these locations. The model has been used by a number of other researchers and is well corroborated. We examine this model as a way of differentiating imaging tasks that radiologists perform: tasks involving more searching or location uncertainty may have higher estimated M* values. In this work we applied the Effective Set-Size model to a number of medical imaging data sets, including radiologists reading screening and diagnostic mammography with and without computer-aided diagnosis (CAD), and breast tomosynthesis. We developed an algorithm to fit the model parameters using two-sample maximum-likelihood ordinal regression, similar to the classic bi-normal model. The resulting model ROC curves are rational and fit the observed data well. We find that the distributions of M* and d' differ significantly among these data sets, and differ between pairs of imaging systems within studies. For example, on average tomosynthesis increased readers' d' values, while CAD reduced the M* parameters. We demonstrate that the model parameters M* and d' are correlated.
We conclude that the Effective Set-Size model may be a useful way of differentiating location uncertainty from the diagnostic uncertainty in medical imaging tasks.
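The max-of-M* reading of the model can be checked with a small Monte Carlo sketch. This is an assumption-laden stand-in for the authors' maximum-likelihood fitting, with unit-variance normal location responses assumed; it illustrates the model's key qualitative behaviour, namely that AUC falls as the effective set size grows at fixed d'.

```python
import numpy as np

def set_size_auc(d_prime, m_star, n=100000, seed=1):
    # Effective set-size model: the single image rating is the maximum
    # of m_star independent location responses; under signal-present,
    # exactly one location has mean d_prime. AUC is estimated as the
    # Monte Carlo probability that a signal-present rating exceeds a
    # signal-absent rating.
    rng = np.random.default_rng(seed)
    absent = rng.normal(size=(n, m_star)).max(axis=1)
    present = rng.normal(size=(n, m_star))
    present[:, 0] += d_prime
    present = present.max(axis=1)
    return float(np.mean(present > absent))

auc_focal = set_size_auc(2.0, 1)    # no location uncertainty
auc_search = set_size_auc(2.0, 10)  # same d', more searchable locations
auc_null = set_size_auc(0.0, 5)     # no signal: AUC should be ~0.5
```

For m_star = 1 the model reduces to the classic equal-variance case, where AUC = Phi(d'/sqrt(2)), about 0.921 at d' = 2.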
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was firstly used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment.
Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing by how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets with respect to the likely costs that would be incurred in obtaining the data sets themselves.
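The generalised likelihood uncertainty estimation (GLUE) step can be sketched with a toy model in place of CRUM3. The Nash-Sutcliffe likelihood measure, the behavioural threshold, and the grid of parameter samples below are illustrative assumptions; the point is the pattern of retaining an ensemble of behavioural parameter sets rather than one "optimal" set.

```python
import numpy as np

def glue_behavioural(model, obs, samples, threshold=0.5, keep=100):
    # GLUE sketch: score Monte Carlo parameter sets with Nash-Sutcliffe
    # efficiency (NSE) and retain the behavioural ones (here the `keep`
    # best whose NSE exceeds the threshold).
    denom = np.sum((obs - obs.mean()) ** 2)
    scored = [(1.0 - np.sum((model(p) - obs) ** 2) / denom, p) for p in samples]
    behavioural = sorted((sp for sp in scored if sp[0] >= threshold), reverse=True)
    return behavioural[:keep]

t = np.linspace(0.0, 1.0, 20)
obs = 2.0 * t                                   # toy "discharge" record
samples = [(a,) for a in np.linspace(0.0, 4.0, 81)]
top = glue_behavioural(lambda p: p[0] * t, obs, samples)
```

In the study, derived quantities (transit-time distributions, mean residence times) from the top behavioural sets are then compared against the tracer data, which is the multi-criteria evaluation step.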
Molecular dynamic simulation for nanometric cutting of single-crystal face-centered cubic metals.
Huang, Yanhua; Zong, Wenjun
2014-01-01
In this work, molecular dynamics simulations are performed to investigate the influence of material properties on the nanometric cutting of single crystal copper and aluminum with a diamond cutting tool. The atomic interactions in the two metallic materials are modeled by two sets of embedded atom method (EAM) potential parameters. Simulation results show that although the plastic deformation of the two materials is achieved by dislocation activities, the deformation behavior and related physical phenomena, such as the machining forces, machined surface quality, and chip morphology, are significantly different for different materials. Furthermore, the influence of material properties on the nanometric cutting has a strong dependence on the operating temperature.
Yeager, John D.; Luscher, Darby J.; Vogel, Sven C.; ...
2016-02-02
Triaminotrinitrobenzene (TATB) is a highly anisotropic molecular crystal used in several plastic-bonded explosive (PBX) formulations. TATB-based explosives exhibit irreversible volume expansion ("ratchet growth") when thermally cycled. A theoretical understanding of the relationship between the anisotropy of the crystal, the crystal orientation distribution (texture) of polycrystalline aggregates, and the intergranular interactions leading to this irreversible growth is necessary to develop accurate physics-based predictive models for TATB-based PBXs in various thermal environments. In this work, TATB lattice parameters were measured using neutron diffraction during thermal cycling of loose powder and a pressed pellet. The measured lattice parameters help clarify conflicting reports in the literature, as these new results are more consistent with one set of previous results than another. The lattice parameters of pressed TATB were also measured as a function of temperature, showing some differences from the powder. These data are used along with anisotropic single-crystal stiffness moduli reported in the literature to model the nominal stresses associated with intergranular constraints during thermal expansion. The texture of both specimens was characterized; the pressed pellet exhibits preferential orientation of (001) poles along the pressing direction, whereas no preferred orientation was found for the loose powder. Lastly, thermal strains for single-crystal TATB computed from the powder lattice-parameter data are input to a self-consistent micromechanical model, which predicts the lattice parameters of the constrained TATB crystals within the pellet. The agreement of these model results with the diffraction data obtained from the pellet is discussed along with future directions of research.
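Lattice parameters from diffraction translate into cell volume, and hence into the volume strain that quantifies ratchet growth, via the standard triclinic formula (TATB is triclinic). The numeric values below are illustrative placeholders, not the measured TATB parameters.

```python
import math

def cell_volume(a, b, c, alpha, beta, gamma):
    # Triclinic unit-cell volume from lattice parameters (angles in degrees):
    # V = abc * sqrt(1 - cos^2(al) - cos^2(be) - cos^2(ga)
    #                + 2*cos(al)*cos(be)*cos(ga))
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1.0 - ca*ca - cb*cb - cg*cg + 2.0*ca*cb*cg)

def volume_strain(params_after, params_ref):
    # Irreversible ("ratchet") growth shows up as a nonzero residual
    # volume strain after a thermal cycle.
    return cell_volume(*params_after) / cell_volume(*params_ref) - 1.0

ref = (9.00, 9.00, 6.80, 108.0, 92.0, 120.0)         # illustrative only
after = (9.00, 9.00, 6.80 * 1.01, 108.0, 92.0, 120.0)  # c grew by 1%
```

Tracking this volume through heating and cooling legs is how the powder and pellet data sets above are reduced to thermal-expansion and growth curves.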
Sastry, Madhavi; Lowrie, Jeffrey F; Dixon, Steven L; Sherman, Woody
2010-05-24
A systematic virtual screening study on 11 pharmaceutically relevant targets has been conducted to investigate the interrelation between 8 two-dimensional (2D) fingerprinting methods, 13 atom-typing schemes, 13 bit scaling rules, and 12 similarity metrics using the new cheminformatics package Canvas. In total, 157 872 virtual screens were performed to assess the ability of each combination of parameters to identify actives in a database screen. In general, fingerprint methods, such as MOLPRINT2D, Radial, and Dendritic that encode information about local environment beyond simple linear paths outperformed other fingerprint methods. Atom-typing schemes with more specific information, such as Daylight, Mol2, and Carhart were generally superior to more generic atom-typing schemes. Enrichment factors across all targets were improved considerably with the best settings, although no single set of parameters performed optimally on all targets. The size of the addressable bit space for the fingerprints was also explored, and it was found to have a substantial impact on enrichments. Small bit spaces, such as 1024, resulted in many collisions and in a significant degradation in enrichments compared to larger bit spaces that avoid collisions.
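Screen performance in such studies is commonly summarized by the enrichment factor; the abstract does not specify the exact enrichment definition used, so the sketch below uses the common top-fraction form as an assumption.

```python
def enrichment_factor(scores, labels, fraction=0.01):
    # EF = (fraction of actives found in the top-ranked portion of the
    # screen) / (fraction expected from random selection).
    n = len(scores)
    n_top = max(1, int(round(n * fraction)))
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    hits = sum(labels[i] for i in order[:n_top])
    return (hits / n_top) / (sum(labels) / n)

scores = list(range(100, 0, -1))      # a perfect ranking...
labels = [1] * 10 + [0] * 90          # ...of 10 actives among 100 compounds
```

With 10% of the library selected, a perfect ranking gives the maximum possible EF of 10 here, and a fully inverted ranking gives 0.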
Indications of a late-time interaction in the dark sector.
Salvatelli, Valentina; Said, Najla; Bruni, Marco; Melchiorri, Alessandro; Wands, David
2014-10-31
We show that a general late-time interaction between cold dark matter and vacuum energy is favored by current cosmological data sets. We characterize the strength of the coupling by a dimensionless parameter q_V that is free to take different values in four redshift bins from the primordial epoch up to today. This interacting scenario is in agreement with measurements of cosmic microwave background temperature anisotropies from the Planck satellite, supernovae Ia from Union 2.1 and redshift space distortions from a number of surveys, as well as with combinations of these different data sets. Our analysis of the 4-bin interaction shows that a nonzero interaction is likely at late times. We then focus on the case q_V ≠ 0 in a single low-redshift bin, obtaining a nested one-parameter extension of the standard ΛCDM model. We study the Bayesian evidence, with respect to ΛCDM, of this late-time interaction model, finding moderate evidence for an interaction starting at z=0.9, dependent upon the prior range chosen for the interaction strength parameter q_V. For this case the null interaction (q_V=0, i.e., ΛCDM) is excluded at 99% C.L.
An improved nuclear mass model: FRDM (2012)
NASA Astrophysics Data System (ADS)
Moller, Peter
2011-10-01
We have developed an improved nuclear mass model which we plan to finalize in 2012, so we designate it FRDM(2012). Relative to our previous mass table in 1995, we perform a full four-dimensional variation of the shape coordinates ε2, ε3, ε4, and ε6, we consider axially asymmetric shape degrees of freedom, and we vary the density symmetry parameter L. Other additional features are also implemented. With respect to the Audi 2003 data base, we now achieve an accuracy of 0.57 MeV. We have carefully tested the extrapolation properties of the new mass table by adjusting model parameters to limited data sets and testing on extended data sets, and we find it is highly reliable in new regions of nuclei. We discuss what the remaining differences between model calculations and experiment tell us about the limitations of the currently used effective single-particle potential and possible extensions. DOE No. DE-AC52-06NA25396.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Yi-Geng; Wu, Yong
2016-02-07
K-vacancy Auger states of N^(q+) (q = 2-5) ions are studied using the complex multireference single- and double-excitation configuration interaction (CMRD-CI) method. The calculated resonance parameters are in good agreement with the available experimental and theoretical data. The resonance positions and widths converge quickly as the atomic basis sets in the CMRD-CI calculations are enlarged; standard atomic basis sets can be employed to describe the atomic K-vacancy Auger states well. Strong correlations between the valence and core electrons play an important role in accurately determining these resonance parameters, whereas Rydberg electrons contribute negligibly in the calculations. Note that this is the first time the complex scaling method has been successfully applied to B-like nitrogen. CMRD-CI can readily be extended to treat the resonance states of molecules in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slopsema, R. L.; Flampouri, S.; Yeung, D.
2014-09-15
Purpose: The purpose of this investigation is to determine if a single set of beam data, described by a minimal set of equations and fitting variables, can be used to commission different installations of a proton double-scattering system in a commercial pencil-beam dose calculation algorithm. Methods: The beam model parameters required to commission the pencil-beam dose calculation algorithm (virtual and effective SAD, effective source size, and pristine-peak energy spread) are determined for a commercial double-scattering system. These parameters are measured in a first room and parameterized as functions of proton energy and nozzle settings by fitting four analytical equations to the measured data. The combination of these equations and fitting values constitutes the golden beam data (GBD). To determine the variation in dose delivery between installations, the same dosimetric properties are measured in two additional rooms at the same facility, as well as in a single room at another facility. The difference between the room-specific measurements and the GBD is evaluated against tolerances that guarantee the 3D dose distribution in each of the rooms matches the GBD-based dose distribution within clinically reasonable limits. The pencil-beam treatment-planning algorithm is commissioned with the GBD. The three-dimensional dose distribution in water is evaluated in the four treatment rooms and compared to the treatment-planning calculated dose distribution. Results: The virtual and effective SAD measurements fall between 226 and 257 cm. The effective source size varies between 2.4 and 6.2 cm for the large-field options, and 1.0 and 2.0 cm for the small-field options. The pristine-peak energy spread decreases from 1.05% at the lowest range to 0.6% at the highest. The virtual SAD as well as the effective source size can be accurately described by a linear relationship as a function of the inverse of the residual energy.
An additional linear correction term as a function of RM-step thickness is required for accurate parameterization of the effective SAD. The GBD energy spread is given by a linear function of the exponential of the beam energy. Except for a few outliers, the measured parameters match the GBD within the specified tolerances in all four rooms investigated. For a SOBP field with a range of 15 g/cm^2 and an air gap of 25 cm, the maximum difference in the 80%-20% lateral penumbra between the GBD-commissioned treatment-planning system and measurements in any of the four rooms is 0.5 mm. Conclusions: The beam model parameters of the double-scattering system can be parameterized with a limited set of equations and parameters. This GBD closely matches the measured dosimetric properties in four different rooms.
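The linear-in-1/E parameterization described above can be reproduced with an ordinary least-squares fit. The residual energies, virtual-SAD values, and tolerance numbers below are invented for illustration; only the functional form v = c0 + c1/E comes from the abstract.

```python
import numpy as np

def fit_inverse_energy(E, values):
    # Least-squares fit of v = c0 + c1 / E_residual, the linear-in-1/E
    # form used for the virtual SAD and effective source size.
    E = np.asarray(E, float)
    A = np.column_stack([np.ones_like(E), 1.0 / E])
    coef, *_ = np.linalg.lstsq(A, np.asarray(values, float), rcond=None)
    return coef

def within_tolerance(measured, golden, tol):
    # Room-acceptance check: every room-specific measurement must match
    # the golden beam data within the specified tolerance.
    return bool(np.all(np.abs(np.asarray(measured) - np.asarray(golden)) <= tol))

E = np.array([100.0, 130.0, 160.0, 190.0, 220.0])   # invented residual energies
sad = 230.0 + 500.0 / E                             # invented virtual-SAD data
coef = fit_inverse_energy(E, sad)
```

A per-room commissioning check then reduces to evaluating the fitted curve at each room's settings and applying `within_tolerance`.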
Padró, Juan M; Ponzinibbio, Agustín; Mesa, Leidy B Agudelo; Reta, Mario
2011-03-01
The partition coefficients, P(IL/w), for different probe molecules as well as for compounds of biological interest between the room-temperature ionic liquids (RTILs) 1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM][PF(6)], 1-hexyl-3-methylimidazolium hexafluorophosphate, [HMIM][PF(6)], 1-octyl-3-methylimidazolium tetrafluoroborate, [OMIM][BF(4)], and water were accurately measured. [BMIM][PF(6)] and [OMIM][BF(4)] were synthesized by adapting a procedure from the literature to a simpler, single-vessel and faster methodology with much lower consumption of organic solvent. We employed the solvation-parameter model to elucidate the general chemical interactions involved in RTIL/water partitioning. For this purpose, we selected solute descriptor parameters that measure polarity, polarizability, hydrogen-bond-donor and hydrogen-bond-acceptor interactions, and cavity formation for a set of specifically selected probe molecules (the training set). The resulting multiparametric equations were used to predict the partition coefficients for compounds not present in the training set (the test set), most of biological interest. Partial solubility of the ionic liquid in water (and of water in the ionic liquid) was taken into account to explain the results; this factor has not been considered in depth to date. Solute descriptors were obtained from the literature when available, or else calculated with commercial software. Excellent agreement between calculated and experimental log P(IL/w) values was obtained, demonstrating that the resulting multiparametric equations are robust and allow partitioning to be predicted for any organic molecule in the biphasic systems studied.
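The solvation-parameter model is a multilinear free-energy relationship of the Abraham type, log P = c + e·E + s·S + a·A + b·B + v·V, fitted on the training set and then used to predict the test set. The sketch below fits such an equation by least squares; the coefficients and descriptor values are synthetic, invented purely to exercise the fit.

```python
import numpy as np

def fit_lfer(descriptors, log_p):
    # Abraham-type solvation-parameter model:
    #   log P = c + e*E + s*S + a*A + b*B + v*V
    # fitted by ordinary least squares on training-set probes.
    X = np.hstack([np.ones((len(descriptors), 1)), np.asarray(descriptors, float)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(log_p, float), rcond=None)
    return coef

def predict_lfer(coef, descriptors):
    # Apply a fitted equation to (test-set) descriptor rows.
    X = np.hstack([np.ones((len(descriptors), 1)), np.asarray(descriptors, float)])
    return X @ coef

rng = np.random.default_rng(3)
true = np.array([0.5, 0.1, -0.8, 1.2, -2.0, 3.1])   # invented coefficients
D_train = rng.uniform(0.0, 1.0, size=(20, 5))        # invented E,S,A,B,V rows
logp_train = predict_lfer(true, D_train)
coef = fit_lfer(D_train, logp_train)
```

With noise-free synthetic data the fit recovers the generating coefficients, which is the basic self-consistency one expects before applying the equation to real training/test partitions.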
Effect of a Single Musical Cakra Activation Manoeuvre on Body Temperature: An Exploratory Study
Sumathy, Sundar; Parmar, Parin N
2016-01-01
Cakra activation/balancing and music therapy are part of the traditional Indian healing system. Little is known about effect of musical (vocal) technique of cakra activation on body temperature. We conducted a single-session exploratory study to evaluate effects of a single musical (vocal) cakra activation manoeuvre on body temperature in controlled settings. Seven healthy adults performed a single musical (vocal) cakra activation manoeuvre for approximately 12 minutes in controlled environmental conditions. Pre- and post-manoeuvre body temperatures were recorded with a clinical mercury thermometer. After a single manoeuvre, increase in body temperature was recorded in all seven subjects. The range of increase in body temperature was from 0.2°F to 1.4°F; with mean temperature rise being 0.5°F and median temperature rise being 0.4°F. We conclude that a single session of musical (vocal) technique of cakra activation elevated body temperatures in all 7 subjects. Further research is required to study effects of various cakra activation techniques on body temperature and other physiological parameters. PMID:28182030
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena.
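The core retrieval idea can be sketched in a few lines. This is a hypothetical toy, not the Parallel Episodic Processing model itself: each trial stores a (stimulus, response) episode, and retrieval of matching episodes speeds the next response, producing a practice/contingency-learning curve. The constants `BASE_RT` and `BOOST` are illustrative assumptions, not fitted parameters.

```python
# Hypothetical episodic-retrieval sketch: each trial stores a
# (stimulus, response) episode; retrieved matches speed the next
# response. Constants are illustrative, not fitted model parameters.
BASE_RT, BOOST = 600.0, 200.0   # ms

memory = []

def trial(stimulus, response):
    # Count stored episodes that match the current stimulus-response pair.
    matches = sum(1 for s, r in memory if s == stimulus and r == response)
    # Retrieval benefit saturates as more matching episodes accumulate.
    rt = BASE_RT - BOOST * matches / (1 + matches)
    memory.append((stimulus, response))
    return rt

# Repeating a high-contingency pairing produces a practice curve:
rts = [trial("red", "left") for _ in range(10)]
print(round(rts[0]), round(rts[-1]))
```

Because the benefit saturates, the simulated effect shrinks with practice, mirroring the paper's second point.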
NASA Astrophysics Data System (ADS)
Kuhn, A. M.; Fennel, K.; Bianucci, L.
2016-02-01
A key feature of the North Atlantic Ocean's biological dynamics is the annual phytoplankton spring bloom. In the region comprising the continental shelf and adjacent deep ocean of the northwest North Atlantic, we identified two patterns of bloom development: 1) locations with cold temperatures and deep winter mixed layers, where the spring bloom peaks around April and the annual chlorophyll cycle has a large amplitude, and 2) locations with warmer temperatures and shallow winter mixed layers, where the spring bloom peaks earlier in the year, sometimes indiscernible from the fall bloom. These patterns result from a combination of limiting environmental factors and interactions among planktonic groups with different optimal requirements. Simple models that represent the ecosystem with a single phytoplankton (P) and a single zooplankton (Z) group are challenged to reproduce these ecological interactions. Here we investigate the effect that added complexity has on determining spatio-temporal chlorophyll. We compare two ecosystem models, one that contains one P and one Z group, and one with two P and three Z groups. We consider three types of changes in complexity: 1) added dependencies among variables (e.g., temperature dependent rates), 2) modified structural pathways, and 3) added pathways. Subsets of the most sensitive parameters are optimized in each model to replicate observations in the region. For computational efficiency, the parameter optimization is performed using 1D surrogates of a 3D model. We evaluate how model complexity affects model skill, and whether the optimized parameter sets found for each model modify the interpretation of ecosystem functioning. Spatial differences in the parameter sets that best represent different areas hint at the existence of different ecological communities or at physical-biological interactions that are not represented in the simplest model. 
Our methodology emphasizes the combined use of observations, 1D models to help identify patterns, and 3D models able to simulate the environment more realistically, as a means to acquire predictive understanding of the ocean's ecology.
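As a concrete sketch of the simpler of the two model classes (one P and one Z group), a minimal nutrient-phytoplankton-zooplankton (NPZ) box model can be integrated with forward Euler. All parameter values below are illustrative assumptions, not the optimized values from the study:

```python
# Euler sketch of a minimal NPZ ecosystem model (one P, one Z box).
# Parameter values are illustrative, not the study's optimized set.
mu, K = 1.0, 0.5        # phytoplankton growth rate (1/d), half-saturation
g, eps = 0.4, 0.3       # grazing rate (1/(d * mmol N m^-3)), efficiency
mP, mZ = 0.05, 0.05     # mortality rates (1/d)

N, P, Z = 4.0, 0.1, 0.05    # nutrient, phyto, zoo (mmol N / m^3)
dt = 0.1                    # days
for step in range(int(60 / dt)):        # integrate 60 days
    uptake = mu * N / (K + N) * P       # Michaelis-Menten uptake
    grazing = g * P * Z
    # Unassimilated grazing and mortality are remineralized to N,
    # so total nitrogen is conserved by construction.
    dN = -uptake + mP * P + mZ * Z + (1 - eps) * grazing
    dP = uptake - grazing - mP * P
    dZ = eps * grazing - mZ * Z
    N, P, Z = N + dt * dN, P + dt * dP, Z + dt * dZ

print(round(N + P + Z, 6))   # total nitrogen is conserved
```

The two-P, three-Z model in the paper adds compartments and pathways to this same skeleton, which is what makes the complexity comparison well posed.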
Satellite accretion on to massive galaxies with central black holes
NASA Astrophysics Data System (ADS)
Boylan-Kolchin, Michael; Ma, Chung-Pei
2007-02-01
Minor mergers of galaxies are expected to be common in a hierarchical cosmology such as Λ cold dark matter. Though less disruptive than major mergers, minor mergers are more frequent and thus have the potential to affect galactic structure significantly. In this paper, we dissect the case-by-case outcome from a set of numerical simulations of a single satellite elliptical galaxy accreting on to a massive elliptical galaxy. We take care to explore cosmologically relevant orbital parameters and to set up realistic initial galaxy models that include all three relevant dynamical components: dark matter haloes, stellar bulges, and central massive black holes (BHs). The effects of several different parameters are considered, including orbital energy and angular momentum, satellite density and inner density profile, satellite-to-host mass ratio, and presence of a BH at the centre of the host. BHs play a crucial role in protecting the shallow stellar cores of the hosts, as satellites merging on to a host with a central BH are more strongly disrupted than those merging on to hosts without BHs. Orbital parameters play an important role in determining the degree of disruption: satellites on less-bound or more-eccentric orbits are more easily destroyed than those on more-bound or more-circular orbits as a result of an increased number of pericentric passages and greater cumulative effects of gravitational shocking and tidal stripping. In addition, satellites with densities typical of faint elliptical galaxies are disrupted relatively easily, while denser satellites can survive much better in the tidal field of the host. Over the range of parameters explored, we find that the accretion of a single satellite elliptical galaxy can result in a broad variety of changes, of either sign, in the surface brightness profile and colour of the central part of an elliptical galaxy. 
Our results show that detailed properties of the stellar components of merging satellites can strongly affect the properties of the remnants.
NASA Astrophysics Data System (ADS)
Oby, Emily R.; Perel, Sagi; Sadtler, Patrick T.; Ruff, Douglas A.; Mischel, Jessica L.; Montez, David F.; Cohen, Marlene R.; Batista, Aaron P.; Chase, Steven M.
2016-06-01
Objective. A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain-computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach. We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results. The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance. How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. 
Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue.
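The threshold-sweep procedure can be illustrated on synthetic data. Everything below is an assumption for demonstration (noise level, spike amplitude and rate, sampling rate): the sketch embeds negative-going "spikes" in Gaussian noise, then sweeps a detection threshold and counts the resulting threshold crossings, the raw events whose information content the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic extracellular voltage trace: unit-variance Gaussian noise plus
# a few large negative deflections standing in for spikes (all assumed).
fs = 30_000                      # samples per second
v = rng.normal(0.0, 1.0, fs)     # 1 s of noise
spike_times = np.arange(500, fs, 3000)
v[spike_times] -= 8.0            # embed 10 negative-going "spikes"

def threshold_crossings(v, thresh):
    """Count negative-going crossings of `thresh` (in units of the trace SD)."""
    below = v < thresh * v.std()
    # A crossing is a transition from above-threshold to below-threshold.
    return int(np.sum(below[1:] & ~below[:-1]))

# Sweep the detection threshold, as in the paper's procedure: permissive
# thresholds admit many noise events, strict ones isolate the spikes.
for t in (-3.0, -4.5, -6.0):
    print(t, threshold_crossings(v, t))
```

In the paper the sweep is scored by decoded information rather than by raw counts, but the mechanics of varying the threshold and re-extracting events are the same.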
Oby, Emily R; Perel, Sagi; Sadtler, Patrick T; Ruff, Douglas A; Mischel, Jessica L; Montez, David F; Cohen, Marlene R; Batista, Aaron P; Chase, Steven M
2018-01-01
Objective A traditional goal of neural recording with extracellular electrodes is to isolate action potential waveforms of an individual neuron. Recently, in brain–computer interfaces (BCIs), it has been recognized that threshold crossing events of the voltage waveform also convey rich information. To date, the threshold for detecting threshold crossings has been selected to preserve single-neuron isolation. However, the optimal threshold for single-neuron identification is not necessarily the optimal threshold for information extraction. Here we introduce a procedure to determine the best threshold for extracting information from extracellular recordings. We apply this procedure in two distinct contexts: the encoding of kinematic parameters from neural activity in primary motor cortex (M1), and visual stimulus parameters from neural activity in primary visual cortex (V1). Approach We record extracellularly from multi-electrode arrays implanted in M1 or V1 in monkeys. Then, we systematically sweep the voltage detection threshold and quantify the information conveyed by the corresponding threshold crossings. Main Results The optimal threshold depends on the desired information. In M1, velocity is optimally encoded at higher thresholds than speed; in both cases the optimal thresholds are lower than are typically used in BCI applications. In V1, information about the orientation of a visual stimulus is optimally encoded at higher thresholds than is visual contrast. A conceptual model explains these results as a consequence of cortical topography. Significance How neural signals are processed impacts the information that can be extracted from them. Both the type and quality of information contained in threshold crossings depend on the threshold setting. There is more information available in these signals than is typically extracted. 
Adjusting the detection threshold to the parameter of interest in a BCI context should improve our ability to decode motor intent, and thus enhance BCI control. Further, by sweeping the detection threshold, one can gain insights into the topographic organization of the nearby neural tissue. PMID:27097901
Oby, Emily R; Perel, Sagi; Sadtler, Patrick T; Ruff, Douglas A; Mischel, Jessica L; Montez, David F; Cohen, Marlene R; Batista, Aaron P; Chase, Steven M
2016-06-01
Tanabe, Katsuaki
2016-01-01
We modeled the dynamics of hydrogen and deuterium adsorbed on palladium nanoparticles, including the heat generation induced by chemical adsorption and desorption as well as by palladium-catalyzed reactions. Our calculations based on the proposed model reproduce the experimental time evolution of pressure and temperature with a single set of fitting parameters for both hydrogen and deuterium injection. Because the model is built on a highly generalized set of formulations, it can be applied to any combination of a gas species and a catalytic adsorbent/absorbent. Our model can be used as a basis for future research into hydrogen storage and solid-state nuclear fusion technologies.
Mesoscopic kinetic Monte Carlo modeling of organic photovoltaic device characteristics
NASA Astrophysics Data System (ADS)
Kimber, Robin G. E.; Wright, Edward N.; O'Kane, Simon E. J.; Walker, Alison B.; Blakesley, James C.
2012-12-01
Measured mobility and current-voltage characteristics of single layer and photovoltaic (PV) devices composed of poly{9,9-dioctylfluorene-co-bis[N,N'-(4-butylphenyl)]bis(N,N'-phenyl-1,4-phenylene)diamine} (PFB) and poly(9,9-dioctylfluorene-co-benzothiadiazole) (F8BT) have been reproduced by a mesoscopic model employing the kinetic Monte Carlo (KMC) approach. Our aim is to show how to avoid the uncertainties common in electrical transport models arising from the need to fit a large number of parameters when little information is available, for example, a single current-voltage curve. Here, simulation parameters are derived from a series of measurements using a self-consistent "building-blocks" approach, starting from data on the simplest systems. We found that site energies show disorder and that correlations in the site energies and a distribution of deep traps must be included in order to reproduce measured charge mobility-field curves at low charge densities in bulk PFB and F8BT. The parameter set from the mobility-field curves reproduces the unipolar current in single layers of PFB and F8BT and allows us to deduce charge injection barriers. Finally, by combining these disorder descriptions and injection barriers with an optical model, the external quantum efficiency and current densities of blend and bilayer organic PV devices can be successfully reproduced across a voltage range encompassing reverse and forward bias, with the recombination rate the only parameter to be fitted, found to be 1 × 10^7 s^-1. These findings demonstrate an approach that removes some of the arbitrariness present in transport models of organic devices, which validates the KMC as an accurate description of organic optoelectronic systems, and provides information on the microscopic origins of the device behavior.
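The KMC machinery itself is simple to sketch. Below is a minimal single-charge hop on a 1-D disordered lattice with Miller-Abrahams rates; the lattice size, disorder width, and attempt frequency are assumptions for illustration, not the paper's mesoscopic device model.

```python
import math
import random

random.seed(1)

# Minimal kinetic Monte Carlo sketch: one charge hopping on a 1-D
# lattice with Gaussian site-energy disorder (all parameters assumed).
kT = 0.025          # thermal energy, eV (room temperature)
nu0 = 1e12          # attempt frequency, s^-1
sites = [random.gauss(0.0, 0.1) for _ in range(50)]   # site energies, eV

def rate(e_from, e_to):
    """Miller-Abrahams hopping rate between neighbouring sites."""
    dE = e_to - e_from
    return nu0 * (math.exp(-dE / kT) if dE > 0 else 1.0)

pos, t = 0, 0.0
for _ in range(1000):
    nbrs = [p for p in (pos - 1, pos + 1) if 0 <= p < len(sites)]
    rates = [rate(sites[pos], sites[p]) for p in nbrs]
    total = sum(rates)
    t += -math.log(random.random()) / total     # exponential waiting time
    # Pick the destination with probability proportional to its rate.
    r, acc = random.random() * total, 0.0
    for p, k in zip(nbrs, rates):
        acc += k
        if r < acc:
            pos = p
            break

print(pos, t)
```

The mesoscopic device model extends this kernel to 3-D morphologies with injection, recombination, and correlated energetic disorder, but each event is selected and clocked in exactly this way.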
Annual survival of Snail Kites in Florida: Radio telemetry versus capture-resighting data
Bennetts, R.E.; Dreitz, V.J.; Kitchens, W.M.; Hines, J.E.; Nichols, J.D.
1999-01-01
We estimated annual survival of Snail Kites (Rostrhamus sociabilis) in Florida using the Kaplan-Meier estimator with data from 271 radio-tagged birds over a three-year period and capture-recapture (resighting) models with data from 1,319 banded birds over a six-year period. We tested the hypothesis that survival differed among three age classes using both data sources. We tested additional hypotheses about spatial and temporal variation using a combination of data from radio telemetry and single- and multistrata capture-recapture models. Results from these data sets were similar in their indications of the sources of variation in survival, but they differed in some parameter estimates. Both data sources indicated that survival was higher for adults than for juveniles, but they did not support delineation of a subadult age class. Our data also indicated that survival differed among years and regions for juveniles but not for adults. Estimates of juvenile survival using radio telemetry data were higher than estimates using capture-recapture models for two of three years (1992 and 1993). Ancillary evidence based on censored birds indicated that some mortality of radio-tagged juveniles went undetected during those years, resulting in biased estimates. Thus, we have greater confidence in our estimates of juvenile survival using capture-recapture models. Precision of estimates reflected the number of parameters estimated and was surprisingly similar between radio telemetry and single-stratum capture-recapture models, given the substantial differences in sample sizes. Not having to estimate resighting probability likely offsets, to some degree, the smaller sample sizes from our radio telemetry data. Precision of capture-recapture models was lower using multistrata models where region-specific parameters were estimated than using single-stratum models, where spatial variation in parameters was not taken into account.
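The Kaplan-Meier estimator used for the radio-telemetry data reduces to a short product over event times. The follow-up data below are hypothetical, not the Snail Kite observations:

```python
# Kaplan-Meier survival estimate from (time, event) pairs, where
# event=0 marks a censored observation (e.g., a lost radio tag).
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk          # survival step-down
            curve.append((t, s))
        n_at_risk -= at_t                          # drop deaths and censored
    return curve

# Hypothetical follow-up times in days; 0 = censored, 1 = death.
times  = [30, 90, 90, 150, 210, 300, 365]
events = [1,  1,  0,  1,   0,   1,   0]
print(kaplan_meier(times, events))
```

Censored birds leave the risk set without forcing a survival step-down, which is exactly why undetected mortality among censored radio-tagged juveniles biases the estimate upward.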
Acoustical characterization and parameter optimization of polymeric noise control materials
NASA Astrophysics Data System (ADS)
Homsi, Emile N.
2003-10-01
The sound transmission loss (STL) characteristics of polymer-based materials are considered. Analytical models that predict, characterize and optimize the STL of polymeric materials, with respect to physical parameters that affect performance, are developed for a single layer panel configuration and adapted for layered panel construction with a homogeneous core. An optimum set of material parameters is selected and translated into practical applications for validation. Sound attenuating thermoplastic materials designed to be used as barrier systems in the automotive and consumer industries have certain acoustical characteristics that vary as a function of the stiffness and density of the selected material. The validity and applicability of existing theory are explored, and since STL is influenced by factors such as the surface mass density of the panel's material, a method is modified to improve STL performance and optimize load-bearing attributes. An experimentally derived function is applied to the model for better correlation. In-phase and out-of-phase motion of the top and bottom layers are considered. It was found that a layered construction of the co-injection type exhibits fused planes at the interface and moves in-phase, so the model for the single layer case is adapted to the layered case, where it behaves as a single panel. Primary physical parameters that affect STL are identified and manipulated. The theoretical analysis is linked to the resin's matrix attributes. A high-STL material with representative characteristics is evaluated against standard resins. It was found that high STL could be achieved by altering the material's matrix and by integrating design solutions in the low frequency range. A numerical approach is suggested for STL evaluation of simple and complex geometries. In practice, validation on actual vehicle systems proved the adequacy of the acoustical characterization process.
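The dependence of STL on surface mass density is captured by the standard field-incidence mass law, TL ≈ 20·log10(m_s·f) − 47 dB. The sketch below uses this textbook approximation with illustrative parameter values; it is not the dissertation's modified model.

```python
import math

# Field-incidence mass law for a single panel:
# TL ~= 20*log10(m_s * f) - 47 dB, m_s in kg/m^2, f in Hz.
# (Standard textbook approximation; values below are illustrative.)
def mass_law_tl(surface_density, freq_hz):
    return 20.0 * math.log10(surface_density * freq_hz) - 47.0

# Doubling the surface mass density buys ~6 dB at any frequency,
# which is why STL tracks the panel material's density and thickness.
for m in (2.0, 4.0):               # kg/m^2, plausible polymer-barrier range
    print(m, round(mass_law_tl(m, 1000.0), 1))
```

The 6 dB-per-doubling behaviour is the baseline against which matrix modifications and layered constructions are judged.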
Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement
NASA Astrophysics Data System (ADS)
Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.
2013-09-01
Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to pole placement for the synthesis of a linear controller was presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system, and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computing a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require only the Gauss-Jordan elimination of a small matrix and the computation of the roots of a single-variable polynomial. The maximum degree of this polynomial is not greater than six in general, and for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
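The final computational step the Groebner-basis solutions reduce to can be sketched directly: Gauss-Jordan elimination of a small matrix followed by the roots of one univariate polynomial. The linear system and the degree-4 polynomial below are illustrative stand-ins, not the satellite compensator equations.

```python
import numpy as np

# Sketch of the solution's final step: Gauss-Jordan elimination of a
# small matrix, then roots of a single univariate polynomial.
def gauss_jordan(A):
    """Reduce an augmented matrix to reduced row-echelon form."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols - 1):
        pivot = r + np.argmax(np.abs(A[r:, c]))   # partial pivoting
        if abs(A[pivot, c]) < 1e-12:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        A[r] /= A[r, c]
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]
        r += 1
    return A

# Illustrative 3x3 linear system (augmented), then x^4 - 5x^2 + 4 = 0,
# standing in for the degree <= 6 polynomial of the paper.
A = np.array([[2., 1., 0., 3.],
              [1., 3., 1., 5.],
              [0., 1., 2., 3.]])
R = gauss_jordan(A)
x_lin = R[:, -1]                          # solution of the linear part
roots = np.roots([1., 0., -5., 0., 4.])   # roots of x^4 - 5x^2 + 4
print(x_lin, sorted(roots.real))
```

The appeal of the Groebner route is precisely that everything beyond these two cheap numerical steps is done symbolically, offline.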
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial NVIDIA GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ^2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating n_sys > 1024 model planetary systems, each containing n_pl = 4 planets, and assuming n_obs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
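The kernel being parallelized is Newton's iteration on Kepler's equation, M = E − e·sin(E). A scalar CPU sketch (the starting-guess rule and the illustrative epoch values are assumptions, not the paper's GPU code):

```python
import math

# Newton iteration for Kepler's equation M = E - e*sin(E),
# the kernel evaluated millions of times in the GPU search.
def solve_kepler(M, e, tol=1e-12):
    E = M if e < 0.8 else math.pi      # common starting-guess heuristic
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Mean anomaly from time of observation: M = 2*pi*((t - t_p) mod P)/P.
# The subtraction (t - t_p) of two large, nearly equal epochs is where
# standard single precision loses accuracy, motivating mixed precision.
t, t_p, P = 2455000.5, 2454000.0, 365.25   # illustrative epochs (days)
M = 2.0 * math.pi * ((t - t_p) % P) / P
E = solve_kepler(M, 0.3)
print(E, E - 0.3 * math.sin(E) - M)        # residual of Kepler's equation
```

Each (system, planet, observation) triple needs one such solve, which is why the workload maps so cleanly onto thousands of GPU threads.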
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) by using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE of May to October (summer growing and fall seasons) between 2002 and 2012 for diverse wetland ecosystems, as available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009 to 2012; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square-error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied for simulating continuous (e.g., hourly) NEE time-series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for a robust gap-filling of missing data in observed time-series of periodic ecohydrological variables for wetland or other ecosystems.
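The scaling step can be shown with toy numbers: each day's hourly NEE is divided by its reference-time observation, so days that differ only in amplitude collapse onto one dimensionless curve. The reference hour and the NEE values below are assumptions for illustration; ESHA itself then fits harmonics to the collapsed curve.

```python
# Sketch of the scaling step: hourly NEE of each day is normalized by
# a reference-time observation so different days collapse onto one
# dimensionless diurnal curve (hypothetical data, assumed midday ref).
ref_hour = 12

days = [
    [0.5, 0.2, -1.0, -3.0, -4.0, -3.2, -1.1, 0.3],   # hypothetical NEE
    [1.0, 0.4, -2.0, -6.0, -8.0, -6.4, -2.2, 0.6],   # same shape, 2x amplitude
]
hours = [0, 4, 8, 10, 12, 14, 18, 22]

ref_idx = hours.index(ref_hour)
normalized = [[v / day[ref_idx] for v in day] for day in days]
# After scaling, both days give the same dimensionless curve, so a
# single parameter set can regenerate hourly NEE from one observation.
print(normalized[0] == normalized[1])
```

Inverting the scaling (multiplying the fitted curve by a new day's reference observation) is what lets the model predict, or gap-fill, a full diurnal cycle from a single measurement.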
Empirical Green's function analysis: Taking the next step
Hough, S.E.
1997-01-01
An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.
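The corner-frequency-to-stress-drop step uses the standard Brune (1970) relations: source radius r = 2.34β/(2πfc), then Δσ = 7M0/(16r³). The magnitude, corner frequency, and shear velocity below are illustrative assumptions, not values from the Ridgecrest analysis.

```python
import math

# Brune source model: corner frequency -> source radius -> stress drop,
# the step that converts EGF corner frequencies into stress drops.
def brune_stress_drop(M0, fc, beta):
    """M0 in N*m, fc in Hz, beta (shear-wave velocity) in m/s -> Pa."""
    r = 2.34 * beta / (2.0 * math.pi * fc)   # source radius, m
    return 7.0 * M0 / (16.0 * r ** 3)

# Illustrative magnitude-3.5 event: M0 ~ 10^(1.5*M + 9.1) N*m
# (Hanks-Kanamori moment-magnitude relation).
M0 = 10 ** (1.5 * 3.5 + 9.1)
print(brune_stress_drop(M0, fc=5.0, beta=3500.0) / 1e6, "MPa")
```

Because Δσ scales as fc³, even a modest site-response bias in the fitted corner frequency propagates into a large stress-drop error, which is the paper's cautionary point.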
Full hyperfine structure analysis of singly ionized molybdenum
NASA Astrophysics Data System (ADS)
Bouazza, Safa
2017-03-01
For the first time, a parametric study of the hyperfine structure of Mo II configuration levels is presented. The newly measured A and B hyperfine structure (hfs) constants of Mo II 4d^5, 4d^4 5s and 4d^3 5s^2 configuration levels, for both the 95 and 97 isotopes, obtained using fast-ion-beam laser-induced fluorescence spectroscopy [1], are gathered with the few other data available in the literature. A fitting procedure for an isolated set of these three lowest even-parity configurations has been performed, taking into account second-order perturbation theory including the effects of closed shell-open shell excitations. The same study was done for the Mo II odd-parity levels; for both parities, two sets of fine structure parameters as well as the leading eigenvector percentages of the levels and the Landé factors gJ relevant for this paper are given. We also present predicted singlet, triplet and quintet positions of missing experimental levels up to 85 000 cm^-1. The single-electron hfs parameter values were extracted in their entirety for 97Mo II and for 95Mo II: for instance, for 95Mo II, a_4d^01 = -133.37 MHz and a_5p^01 = -160.25 MHz for 4d^4 5p; a_4d^01 = -140.84 MHz, a_5p^01 = -170.18 MHz and a_5s^10 = -2898 MHz for 4d^3 5s5p; a_5s^10 = -2529(2) MHz and a_4d^01 = -135.17(0.44) MHz for 4d^4 5s. These parameter values were analysed and compared with diverse ab initio calculations. We close this work by giving predicted values of the magnetic dipole and electric quadrupole hfs constants of all known levels whose splittings are not yet measured.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramadhar, Timothy R.; Zheng, Shao-Liang; Chen, Yu-Sheng
A detailed set of synthetic and crystallographic guidelines for the crystalline sponge method based upon the analysis of expediently synthesized crystal sponges using third-generation synchrotron radiation are reported. The procedure for the synthesis of the zinc-based metal–organic framework used in initial crystal sponge reports has been modified to yield competent crystals in 3 days instead of 2 weeks. These crystal sponges were tested on some small molecules, with two being unexpectedly difficult cases for analysis with in-house diffractometers in regard to data quality and proper space-group determination. These issues were easily resolved by the use of synchrotron radiation using data-collection times of less than an hour. One of these guests induced a single-crystal-to-single-crystal transformation to create a larger unit cell with over 500 non-H atoms in the asymmetric unit. This led to a non-trivial refinement scenario that afforded the best Flack x absolute stereochemical determination parameter to date for these systems. The structures did not require the use of PLATON/SQUEEZE or other solvent-masking programs, and are the highest-quality crystalline sponge systems reported to date where the results are strongly supported by the data. A set of guidelines for the entire crystallographic process were developed through these studies. In particular, the refinement guidelines include strategies to refine the host framework, locate guests and determine occupancies, discussion of the proper use of geometric and anisotropic displacement parameter restraints and constraints, and whether to perform solvent squeezing/masking. The single-crystal-to-single-crystal transformation process for the crystal sponges is also discussed. 
The presented general guidelines will be invaluable for researchers interested in using the crystalline sponge method at in-house diffraction or synchrotron facilities, will facilitate the collection and analysis of reliable high-quality data, and will allow construction of chemically and physically sensible models for guest structural determination.
Ramadhar, Timothy R.; Zheng, Shao-Liang; Chen, Yu-Sheng; ...
2015-01-01
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and can thus be regarded as an accurate engineering approximation suitable for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
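The two-step recipe described in this abstract, spectrally shaping Gaussian white noise and then applying a memoryless inverse-transform mapping to the target marginal distribution, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the Lorentzian target spectrum and the unit-mean exponential target distribution are assumed examples.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
N = 2 ** 14

# Step 1: color Gaussian white noise by shaping its spectrum.
white = rng.standard_normal(N)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(N)
# Assumed target power spectrum: a simple Lorentzian low-pass shape.
shaping = 1.0 / np.sqrt(1.0 + (freqs / 0.01) ** 2)
colored = np.fft.irfft(spectrum * shaping, n=N)
colored /= colored.std()  # renormalize to unit variance (marginal stays Gaussian)

# Step 2: memoryless inverse-transform mapping to the desired marginal:
# u = Phi(x) is uniform, then y = F^{-1}(u) for the target distribution
# (here Exp(1), whose inverse CDF is -ln(1 - u)).
nd = NormalDist()
u = np.array([nd.cdf(x) for x in colored])
u = np.clip(u, 1e-12, 1 - 1e-12)
y = -np.log1p(-u)

print(round(float(y.mean()), 2))  # close to 1, the Exp(1) mean
```

As the abstract notes, the nonlinear step distorts the power spectrum slightly, so the output spectrum only approximates the target; for many marginals the approximation is adequate for simulation purposes.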
Single-Receiver GPS Phase Bias Resolution
NASA Technical Reports Server (NTRS)
Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.
2010-01-01
Existing software has been modified to yield the benefits of integer-fixed double-differenced GPS phase ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth. The method has parameters to account for this. In particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp.
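The double-differenced combination at the heart of this technique can be illustrated with a toy example. The satellite identifiers and phase values below are hypothetical; the point the sketch makes is the standard one, that biases common to one receiver (e.g. its clock error) cancel in the double difference.

```python
def double_difference(phase_rx_a, phase_rx_b, sat_i, sat_j):
    """Double-differenced carrier phase: the between-satellite difference
    of the between-receiver single differences.  Each phase_rx_* maps a
    satellite id to a one-way phase observation (in cycles)."""
    sd_i = phase_rx_a[sat_i] - phase_rx_b[sat_i]  # single difference, sat i
    sd_j = phase_rx_a[sat_j] - phase_rx_b[sat_j]  # single difference, sat j
    return sd_i - sd_j

# Hypothetical one-way phases for two receivers and two satellites.
rx_a = {"G01": 1234.567, "G07": 987.123}
rx_b = {"G01": 1230.100, "G07": 980.900}
dd = double_difference(rx_a, rx_b, "G01", "G07")

# A common clock bias on every observation of one receiver leaves the
# double difference unchanged (up to floating-point rounding).
rx_a_biased = {k: v + 5.5 for k, v in rx_a.items()}
assert abs(double_difference(rx_a_biased, rx_b, "G01", "G07") - dd) < 1e-9
print(round(dd, 3))
```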
The evolution of phylogeographic data sets.
Garrick, Ryan C; Bonatelli, Isabel A S; Hyseni, Chaz; Morales, Ariadna; Pelletier, Tara A; Perez, Manolo F; Rice, Edwin; Satler, Jordan D; Symula, Rebecca E; Thomé, Maria Tereza C; Carstens, Bryan C
2015-03-01
Empirical phylogeographic studies have progressively sampled greater numbers of loci over time, in part motivated by theoretical papers showing that estimates of key demographic parameters improve as the number of loci increases. Recently, next-generation sequencing has been applied to questions about organismal history, with the promise of revolutionizing the field. However, no systematic assessment of how phylogeographic data sets have changed over time with respect to overall size and information content has been performed. Here, we quantify the changing nature of these genetic data sets over the past 20 years, focusing on papers published in Molecular Ecology. We found that the number of independent loci, the total number of alleles sampled and the total number of single nucleotide polymorphisms (SNPs) per data set has improved over time, with particularly dramatic increases within the past 5 years. Interestingly, uniparentally inherited organellar markers (e.g. animal mitochondrial and plant chloroplast DNA) continue to represent an important component of phylogeographic data. Single-species studies (cf. comparative studies) that focus on vertebrates (particularly fish and to some extent, birds) represent the gold standard of phylogeographic data collection. Based on the current trajectory seen in our survey data, forecast modelling indicates that the median number of SNPs per data set for studies published by the end of the year 2016 may approach ~20,000. This survey provides baseline information for understanding the evolution of phylogeographic data sets and underscores the fact that development of analytical methods for handling very large genetic data sets will be critical for facilitating growth of the field. © 2015 John Wiley & Sons Ltd.
Berthele, H; Sella, O; Lavarde, M; Mielcarek, C; Pense-Lheritier, A-M; Pirnay, S
2014-02-01
Ethanol, pH and water activity are three well-known parameters that can influence the preservation of cosmetic products. With the new constraints regarding antimicrobial effectiveness and the restricted use of preservatives, a D-optimal design was set up to evaluate the influence of these three parameters on microbiological conservation. To monitor the effectiveness of the different combinations of these set parameters, a challenge test in compliance with the International Standard ISO 11930:2012 was implemented. The formulations established in our study could support wide variations of ethanol concentration, pH values and glycerin concentration without noticeable effects on the stability of the products. Under the conditions of the study, setting the value of a single parameter, at the tested concentrations, could not guarantee microbiological conservation. However, a high concentration of ethanol associated with an extreme pH could inhibit bacterial growth from the first day (D0). Moreover, it appears that despite an aw above 0.6 (even 0.8) and without any preservatives incorporated in the formulas, it was possible to guarantee the microbiological stability of the cosmetic product when maintaining the right combination of the selected parameters. Following the analysis of the different values obtained during the experimentation, there seems to be a correlation between the aw and the selected parameters aforementioned. An application of this relationship could be to define the aw of cosmetic products by using the formula, thus avoiding the evaluation of this parameter with a measuring device. © 2013 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Predicting distant failure in early stage NSCLC treated with SBRT using clinical parameters.
Zhou, Zhiguo; Folkert, Michael; Cannon, Nathan; Iyengar, Puneeth; Westover, Kenneth; Zhang, Yuanyuan; Choy, Hak; Timmerman, Robert; Yan, Jingsheng; Xie, Xian-J; Jiang, Steve; Wang, Jing
2016-06-01
The aim of this study is to predict early distant failure in early stage non-small cell lung cancer (NSCLC) treated with stereotactic body radiation therapy (SBRT) using clinical parameters and machine learning algorithms. The dataset used in this work includes 81 early stage NSCLC patients with at least 6 months of follow-up who underwent SBRT between 2006 and 2012 at a single institution. The clinical parameters (n=18) for each patient include demographic parameters, tumor characteristics, treatment fraction schemes, and pretreatment medications. Three predictive models were constructed based on different machine learning algorithms: (1) artificial neural network (ANN), (2) logistic regression (LR) and (3) support vector machine (SVM). Furthermore, to select an optimal clinical parameter set for model construction, three strategies were adopted: (1) a clonal selection algorithm (CSA) based selection strategy; (2) a sequential forward selection (SFS) method; and (3) a statistical analysis (SA) based strategy. Five-fold cross-validation was used to validate the performance of each predictive model. Accuracy was assessed by the area under the receiver operating characteristic (ROC) curve (AUC); the sensitivity and specificity of the system were also evaluated. The AUCs for ANN, LR and SVM were 0.75, 0.73, and 0.80, respectively. The sensitivity values for ANN, LR and SVM were 71.2%, 72.9% and 83.1%, while the specificity values for ANN, LR and SVM were 59.1%, 63.6% and 63.6%, respectively. Meanwhile, the CSA-based strategy outperformed SFS and SA in terms of AUC, sensitivity and specificity. Based on clinical parameters, the SVM with the CSA-based optimal parameter set selection strategy achieves better performance than the other strategies for predicting distant failure in lung SBRT patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
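The AUC figures reported above can be computed directly from classifier scores without tracing the full ROC curve, via the Mann-Whitney rank identity. The labels and scores below are synthetic illustrations, not the study's data.

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count as 1/2).  O(n_pos * n_neg), fine for small samples."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
print(roc_auc(labels, scores))  # 8/9 ≈ 0.889
```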
Ultimate Limit to the Spatial Resolution in Magnetic Imaging
NASA Astrophysics Data System (ADS)
Matthews, John; Wellstood, Frederick C.; Chatraphorn, Sojiphong
2003-03-01
Motivated by the continual improvement in the spatial resolution of source currents detected by magnetic field imaging, in particular scanning SQUID microscopy, we have determined a theoretical limit to the spatial resolution for a given set of parameters. The guiding principle here is that by adding known information (e.g. CAD diagram) about the source currents into the inversion algorithm, we reduce the number of unknown parameters and hence lower the uncertainty in the remaining parameters. We consider the ultimate limit to be the case where all the information about the system is known, except for a single parameter, e.g. the separation w of two long, straight wires each carrying a current I/2. For this particular example we find that for a current I = 100 μA, with magnetic field noise ΔB = 10 pT, at a standoff z = 100 μm, the minimum resolvable separation is 2 μm, about an order of magnitude less than the present limit.
Derivatives of Horn hypergeometric functions with respect to their parameters
NASA Astrophysics Data System (ADS)
Ancarani, L. U.; Del Punta, J. A.; Gasaneo, G.
2017-07-01
The derivatives of eight Horn hypergeometric functions [four Appell F1, F2, F3, and F4, and four (degenerate) confluent Φ1, Φ2, Ψ1, and Ξ1] with respect to their parameters are studied. The first derivatives are expressed, systematically, as triple infinite summations or, alternatively, as single summations of two-variable Kampé de Fériet functions. Taking advantage of previously established expressions for the derivative of the confluent or Gaussian hypergeometric functions, the generalization to the nth derivative of Horn's functions with respect to their parameters is rather straightforward in most cases; the results are expressed in terms of n + 2 infinite summations. Following a similar procedure, mixed derivatives are also treated. An illustration of the usefulness of the derivatives of F1, with respect to the first and third parameters, is given with the study of autoionization of atoms occurring as part of a post-collisional process. Their evaluation setting the Coulomb charge to zero provides the coefficients of a Born-like expansion of the interaction.
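For reference, the simplest instance of the parameter derivatives this abstract builds on (the Gauss hypergeometric case; a standard identity, not a result quoted from this paper) follows from differentiating the Pochhammer symbol, $\frac{d}{da}(a)_n = (a)_n\,[\psi(a+n) - \psi(a)]$, where $\psi$ is the digamma function:

\[
\frac{\partial}{\partial a}\,{}_2F_1(a,b;c;z)
  \;=\; \sum_{n=0}^{\infty} \frac{(a)_n\,(b)_n}{(c)_n\, n!}\,
        \bigl[\psi(a+n) - \psi(a)\bigr]\, z^n .
\]

The Horn-function derivatives treated in the paper generalize this pattern to double series, which is why the first derivatives emerge as triple summations or, after resummation, as Kampé de Fériet functions.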
NASA Astrophysics Data System (ADS)
Xing, Wanqiu; Wang, Weiguang; Shao, Quanxi; Yong, Bin
2018-01-01
Quantifying the partition of precipitation (P) into evapotranspiration (E) and runoff (Q) is of great importance for global and regional water availability assessment. The Budyko framework serves as a powerful tool for making a simple and transparent estimate of this partition, using a single parameter to characterize the shape of the Budyko curve for a specific basin, where the single parameter reflects the overall effect of climatic seasonality, catchment characteristics (e.g., soil, topography and vegetation) and agricultural activities (e.g., cultivation and irrigation). At the regional scale, these influencing factors are interconnected, and the interactions between them can also affect the estimation of the single parameter of Budyko-type equations. Here we employ the multivariate adaptive regression splines (MARS) model to estimate the Budyko curve shape parameter (n in Choudhury's equation, one form of the Budyko framework) for 96 selected catchments across China, using a data set of long-term averages for climatic seasonality, catchment characteristics and agricultural activities. Results show that average storm depth (ASD), vegetation coverage (M), and the seasonality index of precipitation (SI) are the three statistically significant factors affecting the Budyko parameter. More importantly, four pairs of interactions are recognized by the MARS model: the interaction between CA (percentage of cultivated land area to total catchment area) and ASD shows that cultivation can weaken the reducing effect of high ASD (>46.78 mm) on the estimated Budyko parameter. Drought (represented by a Palmer drought severity index value < -0.74) and uneven distribution of annual rainfall (represented by a coefficient of variation of precipitation > 0.23) tend to enhance the reduction of the Budyko parameter by large SI (>0.797).
Low vegetation coverage (34.56%) is likely to intensify the raising effect of IA (percentage of irrigation area to total catchment area) on the evapotranspiration ratio. The Budyko n values estimated by the MARS model reproduce those calculated from observations well for the selected 96 catchments (R = 0.817, MAE = 4.09). Compared to a multiple stepwise regression model estimating the parameter n with the influencing factors as independent inputs, the MARS model enhances the capability of the Budyko framework for assessing water availability at the regional scale using readily available data.
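Choudhury's equation, the form of the Budyko framework whose shape parameter n the MARS model estimates, is simple enough to sketch directly. The P and PET values below are illustrative, not drawn from the study's catchments.

```python
def choudhury_evaporation(P, PET, n):
    """Mean annual evapotranspiration E from precipitation P and
    potential evapotranspiration PET (same units), using Choudhury's
    form of the Budyko framework; n is the catchment shape parameter.
        E = P * PET / (P**n + PET**n) ** (1/n)
    Larger n pushes E toward the supply/demand limit min(P, PET)."""
    return P * PET / (P ** n + PET ** n) ** (1.0 / n)

P, PET = 800.0, 1200.0   # mm/yr, illustrative values only
for n in (1.0, 1.8, 5.0):
    E = choudhury_evaporation(P, PET, n)
    print(n, round(E, 1), round(1 - E / P, 3))  # n=1 gives E = 480.0; Q/P = 1 - E/P
```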
Retrieval of Aerosol Parameters from Continuous H24 Lidar-Ceilometer Measurements
NASA Astrophysics Data System (ADS)
Dionisi, D.; Barnaba, F.; Costabile, F.; Di Liberto, L.; Gobbi, G. P.; Wille, H.
2016-06-01
Ceilometer technology is increasingly applied to the monitoring and characterization of tropospheric aerosols. In this work, a method to estimate some key aerosol parameters (extinction coefficient, surface area concentration and volume concentration) from ceilometer measurements is presented. A numerical model has been set up to derive mean functional relationships between backscatter and the above-mentioned parameters, based on a large set of simulated aerosol optical properties. Good agreement was found between the modeled backscatter and extinction coefficients and those measured by the EARLINET Raman lidars. The developed methodology has then been applied to the measurements acquired by a prototype Polarization Lidar-Ceilometer (PLC). This PLC instrument was developed within the EC-LIFE+ project "DIAPASON" as an upgrade of the commercial, single-channel Jenoptik CHM15k system. The PLC ran continuously (24 h per day) close to Rome (Italy) for a whole year (2013-2014). Retrievals of the aerosol backscatter coefficient at 1064 nm and of the relevant aerosol properties were performed using the proposed methodology. This information, coupled to some key aerosol type identification made possible by the depolarization channel, allowed a year-round characterization of the aerosol field at this site. Examples are given to show how this technology, coupled to appropriate data inversion methods, is potentially useful in the operational monitoring of parameters of air quality and meteorological interest.
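The crudest version of the backscatter-to-extinction relationship the paper refines is a constant extinction-to-backscatter (lidar) ratio, alpha = S * beta. The sketch below uses that gross simplification with made-up profile values; the paper's model derives the relationship from simulated aerosol optical properties instead.

```python
import numpy as np

def extinction_from_backscatter(beta, lidar_ratio=50.0):
    """Aerosol extinction profile alpha(z) [1/m] from a backscatter
    profile beta(z) [1/(m sr)] assuming a constant lidar ratio S [sr].
    A simplification of a full backscatter-to-extinction model."""
    return lidar_ratio * np.asarray(beta)

beta = np.array([2.0e-6, 1.5e-6, 0.5e-6])        # 1/(m sr), illustrative
alpha = extinction_from_backscatter(beta, 50.0)
# Aerosol optical depth: trapezoidal integration over 500 m range bins.
aod = float(((alpha[:-1] + alpha[1:]) / 2.0 * 500.0).sum())
print(alpha.tolist(), round(aod, 5))
```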
Urzhumtseva, Ludmila; Lunina, Natalia; Fokine, Andrei; Samama, Jean Pierre; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2004-09-01
The connectivity-based phasing method has been demonstrated to be capable of finding molecular packing and envelopes even for difficult cases of structure determination, as well as of identifying, in favorable cases, secondary-structure elements of protein molecules in the crystal. This method uses a single set of structure factor magnitudes and general topological features of a crystallographic image of the macromolecule under study. This information is expressed through a number of parameters. Most of these parameters are easy to estimate, and the results of phasing are practically independent of these parameters when they are chosen within reasonable limits. By contrast, the correct choice for such parameters as the expected number of connected regions in the unit cell is sometimes ambiguous. To study these dependencies, numerous tests were performed with simulated data, experimental data and mixed data sets, where several reflections missed in the experiment were completed by computed data. This paper demonstrates that the procedure is able to control this choice automatically and helps in difficult cases to identify the correct number of molecules in the asymmetric unit. In addition, the procedure behaves abnormally if the space group is defined incorrectly and therefore may distinguish between the rotation and screw axes even when high-resolution data are not available.
Prediction and Computation of Corrosion Rates of A36 Mild Steel in Oilfield Seawater
NASA Astrophysics Data System (ADS)
Paul, Subir; Mondal, Rajdeep
2018-04-01
The parameters that primarily control the corrosion rate and life of steel structures are numerous, and they vary across different oceans and seawaters as well as along the depth. While the effect of a single parameter on corrosion behavior is known, the conjoint effects of multiple parameters and the interrelationships among the variables are complex. Millions of sets of experiments would be required to understand the mechanism of corrosion failure. Statistical modeling such as an artificial neural network (ANN) is one solution that can reduce the amount of experimentation. An ANN model was developed using 170 sets of experimental data for A36 mild steel in simulated seawater, varying the corrosion-influencing parameters SO₄²⁻, Cl⁻, HCO₃⁻, CO₃²⁻, CO₂, O₂, pH and temperature as input and the corrosion current as output. About 60% of the experimental data were used to train the model, 20% for testing and 20% for validation. The model was developed by programming in Matlab. 80% of the validated data could predict the corrosion rate correctly. Corrosion rates predicted by the ANN model are displayed in 3D graphics, which show many interesting phenomena of the conjoint effects of multiple variables that might suggest new ideas for mitigating corrosion by simply modifying the chemistry of the constituents. The model could predict the corrosion rates of some real systems.
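The ANN workflow described here (8 water-chemistry/temperature inputs, one corrosion-current output, a 60/20/20 split) can be sketched with a minimal single-hidden-layer network trained by gradient descent. The data below are synthetic stand-ins, not the paper's 170 experimental sets, and the 16-unit hidden layer is an assumed size; the paper's Matlab model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for 170 experiments: 8 normalized inputs
# (SO4, Cl, HCO3, CO3, CO2, O2, pH, T), 1 output (corrosion current).
X = rng.uniform(-1.0, 1.0, (170, 8))
y = np.sin(X.sum(axis=1, keepdims=True))      # toy target, for illustration

# One hidden layer of 16 tanh units, linear output, full-batch
# gradient descent on mean squared error.
W1 = rng.normal(0.0, 0.5, (8, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)                  # forward pass
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    dpred = 2.0 * err / len(X)                # backpropagation
    dW2, db2 = H.T @ dpred, dpred.sum(0)
    dH = (dpred @ W2.T) * (1.0 - H ** 2)      # tanh' = 1 - tanh^2
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print(round(losses[0], 3), round(losses[-1], 3))  # training error decreases
```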
New insights into time series analysis. II - Non-correlated observations
NASA Astrophysics Data System (ADS)
Ferreira Lopes, C. E.; Cross, N. J. G.
2017-08-01
Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather, industry, and science. These parameters are also used to identify variability patterns in photometric data in order to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was used to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of a statistical distribution such as average, dispersion, and shape parameters. Many dispersion and shape parameters are unbound parameters, i.e., equations that do not require calculation of the average. Unbound parameters are computed in a single loop, hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample-size corrections, and a new noise model, along with tests of different apertures and cut-offs on the data (BAS approach), are introduced. The number of mis-selections is reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database.
Even-statistics allows us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of the mean and median for almost all statistical distributions analyzed. The correlated variability indices, which were proposed in the first paper of this series, are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, where we set out new techniques and methods that provide a huge improvement in the efficiency of selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project regarding the analysis of period-search methods.
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm.
JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
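The Pareto-optimality criterion that JuPOETs combines with simulated annealing can be sketched in a few lines (this is the generic dominance test for minimization, not JuPOETs' Julia code; the objective vectors are made-up examples standing in for errors on conflicting training sets).

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two conflicting objectives, e.g. fit error on data set 1 vs data set 2:
pts = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
print(pareto_front(pts))  # [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)]
```

Members of the front trade one objective against the other; an ensemble drawn from or near this surface is what gives the robust, conflict-aware predictions described above.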
Distribution-centric 3-parameter thermodynamic models of partition gas chromatography.
Blumberg, Leonid M
2017-03-31
If both parameters (the entropy, ΔS, and the enthalpy, ΔH) of the classic van't Hoff model of the dependence of distribution coefficients (K) of analytes on temperature (T) are treated as temperature-independent constants, then the accuracy of the model is known to be insufficient for the needed accuracy of retention time prediction. A more accurate 3-parameter Clarke-Glew model offers a way to treat ΔS and ΔH as functions, ΔS(T) and ΔH(T), of T. A known T-centric construction of these functions is based on relating them to the reference values (ΔS_ref and ΔH_ref) corresponding to a predetermined reference temperature (T_ref). Choosing a single T_ref for all analytes in a complex sample or in a large database might lead to practically irrelevant values of ΔS_ref and ΔH_ref for those analytes that have too small or too large retention factors at T_ref. Breaking all analytes into several subsets, each with its own T_ref, leads to discontinuities in the analyte parameters. These problems are avoided in the K-centric modeling, where ΔS(T) and ΔH(T) and other analyte parameters are described in relation to their values corresponding to a predetermined reference distribution coefficient (K_ref), the same for all analytes. In this report, the mathematics of the K-centric modeling are described and the properties of several types of K-centric parameters are discussed. It has been shown that the earlier introduced characteristic parameters of the analyte-column interaction (the characteristic temperature, T_char, and the characteristic thermal constant, θ_char) are a special, chromatographically convenient case of the K-centric parameters. Transformations of T-centric parameters into K-centric ones and vice versa, as well as transformations of one set of K-centric parameters into another set and vice versa, are described. Copyright © 2017 Elsevier B.V. All rights reserved.
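The 2-parameter baseline the abstract starts from, the classic van't Hoff model with constant ΔH and ΔS, is easy to sketch. The enthalpy and entropy values below are illustrative, not fitted to any chromatographic data.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def vant_hoff_K(T, dH, dS):
    """Classic 2-parameter van't Hoff model with temperature-independent
    enthalpy dH [J/mol] and entropy dS [J/(mol K)]:
        ln K = -dH / (R T) + dS / R
    The paper's point is that constant dH, dS is too crude for accurate
    retention prediction; the 3-parameter Clarke-Glew model lets them
    vary with T."""
    return math.exp(-dH / (R * T) + dS / R)

# Retention falls as temperature rises (dH < 0 for exothermic sorption):
K_320 = vant_hoff_K(320.0, dH=-40e3, dS=-80.0)
K_400 = vant_hoff_K(400.0, dH=-40e3, dS=-80.0)
print(round(K_320, 1), round(K_400, 1))
```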
High-contrast imaging in the cloud with klipReduce and Findr
NASA Astrophysics Data System (ADS)
Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.
2016-08-01
Astronomical data sets are growing ever larger, and the area of high-contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible-wavelength high-contrast data set of a hydrogen-accreting brown dwarf companion.
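The core of KLIP is PCA-based PSF subtraction: build Karhunen-Loève modes from reference frames, project the target image onto the leading modes, and subtract that stellar-speckle model so faint companions survive. The sketch below shows that core step with synthetic 1-D "images"; the mode count is one of the many tunable parameters the reduction system explores, and none of this reproduces klipReduce itself.

```python
import numpy as np

def klip_subtract(target, refs, n_modes):
    """PCA/KL-based PSF subtraction (the core idea of KLIP): build an
    orthonormal basis from mean-subtracted reference frames, project the
    target onto the first n_modes, and subtract that speckle model.
    target: (n_pix,), refs: (n_frames, n_pix)."""
    mean_ref = refs.mean(axis=0)
    R = refs - mean_ref
    # rows of Vt are orthonormal principal components in pixel space
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:n_modes]
    t = target - mean_ref
    psf_model = Z.T @ (Z @ t)    # projection onto the KL subspace
    return t - psf_model

rng = np.random.default_rng(0)
speckles = rng.normal(size=100)                 # shared quasi-static PSF pattern
refs = speckles + 0.01 * rng.normal(size=(20, 100))
planet = np.zeros(100); planet[42] = 1.0        # point source in target only
target = speckles + planet
residual = klip_subtract(target, refs, n_modes=5)
print(int(np.argmax(np.abs(residual))))         # the planet pixel, 42
```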
Modelling and simulation of parallel triangular triple quantum dots (TTQD) by using SIMON 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fathany, Maulana Yusuf, E-mail: myfathany@gmail.com; Fuada, Syifaul, E-mail: fsyifaul@gmail.com; Lawu, Braham Lawas, E-mail: bram-labs@rocketmail.com
2016-04-19
This research presents an analysis of the modeling of parallel Triple Quantum Dots (TQD) using SIMON (SIMulation Of Nano-structures). The Single Electron Transistor (SET) is used as the basic concept of the modeling. We design the structure of the parallel TQD in metal with a triangular geometry, called Triangular Triple Quantum Dots (TTQD). We simulate it under several scenarios using different parameters, such as different values of capacitance, various gate voltages, and different thermal conditions.
Taking the Measure of the Universe: Precision Astrometry with SIM Planetquest (Preprint)
2006-10-09
...the orbits of nearby galaxies and groups going out to the distance of the Virgo Cluster. The orbits are in comoving coordinates. This is just a single solution of a set of several solutions using present 3-d positions as inputs. The four massive objects (Virgo Cluster, Coma Group, CenA Group, and ...). ...Virgo Cluster from a Numerical Action Method calculation with parameters M/L = 90 for spirals and 155 for ellipticals, Ωm = 0.24, ΩΛ = 0.76. The axes are ...
1984-05-23
...the disorder was accurately known. Inverse Transform: To isolate the EXAFS contribution due to a single feature in the Fourier transform, the inverse ... is associated with setting the "fold" components to zero in r-space. An inverse transform (real part) of the major feature of the Fig. 4 Fourier ... phase of the resulting inverse transform represents only any differences between the material being studied and the reference. This residual is ...
The Stratway Program for Strategic Conflict Resolution: User's Guide
NASA Technical Reports Server (NTRS)
Hagen, George E.; Butler, Ricky W.; Maddalon, Jeffrey M.
2016-01-01
Stratway is a strategic conflict detection and resolution program. It provides both intent-based conflict detection and conflict resolution for a single ownship in the presence of multiple traffic aircraft and weather cells defined by moving polygons. It relies on a set of heuristic search strategies to solve conflicts. These strategies are user configurable through multiple parameters. The program can be called from other programs through an application program interface (API) and can also be executed from a command line.
NASA Astrophysics Data System (ADS)
Frigenti, G.; Arjmand, M.; Barucci, A.; Baldini, F.; Berneschi, S.; Farnesi, D.; Gianfreda, M.; Pelli, S.; Soria, S.; Aray, A.; Dumeige, Y.; Féron, P.; Nunzi Conti, G.
2018-06-01
An original method able to fully characterize high-Q resonators in an add-drop configuration has been implemented. The method is based on the study of two cavity ringdown (CRD) signals, which are produced at the transmission and drop ports by wavelength sweeping a resonance in a time interval comparable with the photon cavity lifetime. All the resonator parameters can be assessed with a single set of simultaneous measurements. We first developed a model describing the two CRD output signals and a fitting program able to deduce the key parameters from the measured profiles. We successfully validated the model with an experiment based on a fiber ring resonator of known characteristics. Finally, we characterized a high-Q, home-made, MgF2 whispering gallery mode disk resonator in the add-drop configuration, assessing its intrinsic and coupling parameters.
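The cavity ringdown signals analyzed here decay on the photon-lifetime scale, and the simplest version of the fitting step, recovering a decay time from a single exponential, can be sketched directly. The simulated trace and its parameters are assumed examples; the paper's two-port CRD model with swept-wavelength interference is considerably richer than this.

```python
import numpy as np

# Simulated ringdown: V(t) = A * exp(-t / tau), photon lifetime tau.
tau_true = 2.0e-6                       # s, illustrative value
t = np.linspace(0.0, 10e-6, 200)
V = 0.8 * np.exp(-t / tau_true)

# Linear least-squares fit of ln V = ln A - t / tau gives tau directly.
slope, intercept = np.polyfit(t, np.log(V), 1)
tau_fit = -1.0 / slope
print(round(tau_fit * 1e6, 3))          # recovered lifetime in microseconds
```

With real, noisy data a nonlinear fit with an offset term is preferable to the log-linear shortcut, since taking the log amplifies noise in the signal tail.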
A Study of Chaos in Cellular Automata
NASA Astrophysics Data System (ADS)
Kamilya, Supreeti; Das, Sukanta
This paper presents a study of chaos in one-dimensional cellular automata (CAs). The communication of information from one part of the system to another is taken into consideration in this study. This communication is formalized as a binary relation over the set of cells. It is shown that this relation is an equivalence relation and that all the cells form a single equivalence class when the cellular automaton (CA) is chaotic. However, the communication between two cells is sometimes blocked in some CAs by a subconfiguration which appears between the cells during evolution. This blocking of communication by a subconfiguration is analyzed in this paper with the help of the de Bruijn graph. We identify two types of blocking: full and partial. Finally, a parameter is developed for the CAs. We show that the proposed parameter performs better than the existing parameters.
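A minimal sketch of the communication idea, under stated assumptions: elementary CAs with periodic boundaries, and damage spreading from a single flipped cell as a proxy for information communication between cells. This is not the paper's formal relation or its parameter, only an illustration of why a chaotic rule lets all cells communicate while a trivial rule does not.

```python
import numpy as np

def step(config, rule):
    # One synchronous update of an elementary CA (periodic boundary).
    table = [(rule >> k) & 1 for k in range(8)]  # Wolfram rule table
    left = np.roll(config, 1)
    right = np.roll(config, -1)
    idx = 4 * left + 2 * config + right          # neighborhood code 0..7
    return np.array([table[k] for k in idx], dtype=int)

def damage_spread(rule, n=101, steps=50, seed=1):
    # Flip one cell and count how many cells differ after evolution.
    rng = np.random.default_rng(seed)
    a = rng.integers(0, 2, n)
    b = a.copy()
    b[n // 2] ^= 1
    for _ in range(steps):
        a, b = step(a, rule), step(b, rule)
    return int(np.count_nonzero(a != b))

spread_chaotic = damage_spread(30)   # rule 30 is chaotic: damage spreads
spread_trivial = damage_spread(0)    # rule 0 kills all cells: no spread
```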
Simultaneous fits in ISIS on the example of GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern
2015-04-01
Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly in the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow, in principle, fitting data from multiple data sets individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on the X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
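The core idea, tying one physical parameter across datasets instead of fitting each dataset separately, can be sketched outside ISIS with a joint linear least-squares fit. This is a toy model with invented numbers, not the GRO J1008-57 analysis: two synthetic datasets share one slope but have independent offsets, and the joint design matrix treats the slope as a single parameter.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 20)
y1 = 2.5 * x + 1.0                     # dataset 1 (synthetic)
y2 = 2.5 * x + 4.0                     # dataset 2 (synthetic)

ones, zeros = np.ones((20, 1)), np.zeros((20, 1))
# One column for the shared slope, one offset column per dataset.
A = np.block([[x[:, None], ones, zeros],   # rows for dataset 1
              [x[:, None], zeros, ones]])  # rows for dataset 2
b = np.concatenate([y1, y2])
slope, c1, c2 = np.linalg.lstsq(A, b, rcond=None)[0]
```

A separate fit of each dataset would yield two slope estimates; the joint fit constrains them to be identical, which is exactly what tying parameters in a simultaneous fit accomplishes.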
Liu, Jing; Chen, Chaoyang; Yang, Guangsong; Chen, Yushan; Yang, Cheng-Fu
2017-01-01
The nanosphere lithography (NSL) method can be developed to deposit Au-Ag triangle hexagonal nanoparticle arrays for the generation of localized surface plasmon resonance. Previously, we have found that the parameters used to form the NSL masks and the physical methods required to deposit the Au-Ag thin films had large effects on the geometry properties of the nanoparticle arrays. Considering this, the different parameters used to grow the Au-Ag triangle hexagonal nanoparticle arrays were investigated. A single-layer NSL mask was formed by using self-assembled nano-scale polystyrene (PS) nanospheres with an average radius of 265 nm. At first, the concentration of the nano-scale PS nanospheres in the solution was set at 6 wt %. Two coating methods, drop-coating and spin-coating, were used to coat the nano-scale PS nanospheres as a single-layer NSL mask. From the observations of scanning electron microscopy (SEM), we found that the matrices of the PS nanosphere masks fabricated by the drop-coating method were more uniform and exhibited a smaller gap than those fabricated by the spin-coating method. Next, the drop-coating method was used to form the single-layer NSL mask, and the concentration of nano-scale PS nanospheres in the solution was changed from 4 to 10 wt % for further study. The SEM images showed that when the concentrations of PS nanospheres in the solution were 6 and 8 wt %, the matrices of the PS nanosphere masks were more uniform than those at 4 and 10 wt %. The effects of the one-side lifting angle of the substrates and the vaporization temperature for the solvent of one-layer self-assembled PS nanosphere thin films were also investigated. Finally, the concentration of the nano-scale PS nanospheres in the solution was set at 8 wt % to form the PS nanosphere masks by the drop-coating method.
Three different physical deposition methods, including thermal evaporation, radio-frequency magnetron sputtering, and e-gun deposition, were used to deposit the Au-Ag triangle hexagonal periodic nanoparticle arrays. The SEM images showed that as the single-layer PS nanosphere mask was well controlled, the thermal evaporation could deposit the Au-Ag triangle hexagonal nanoparticle arrays with a higher quality than the other two methods. PMID:28772741
[Subjective Gait Stability in the Elderly].
Hirsch, Theresa; Lampe, Jasmin; Michalk, Katrin; Röder, Lotte; Munsch, Karoline; Marquardt, Jonas
2017-07-10
It can be assumed that the feeling of gait stability or gait instability in the elderly may be independent of a possible fear of falling or a history of falls. Up to now, there has been a lack of spatiotemporal gait parameter data for older people who subjectively feel secure when walking. The aim of the study is to analyse the distribution of various gait parameters for older people who subjectively feel secure when walking. In a cross-sectional study, the gait parameters stride time, step time, stride length, step length, double support, single support, and walking speed were measured using a Vicon three-dimensional motion capture system (Plug-In Gait Lower-Body Marker Set) in 31 healthy people aged 65 years and older (mean age 72 ± 3.54 years) who subjectively feel secure when walking. There was a homogeneous distribution in the gait parameters examined, with no abnormalities. The mean values have a low variance with narrow confidence intervals. This study provides evidence that people who subjectively feel secure when walking demonstrate similar objective gait parameters.
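The "narrow confidence intervals" claim rests on a standard computation that can be sketched directly. The values below are illustrative only, not the study's data; the parameter shown (walking speed in m/s) and the sample are invented for the example.

```python
import math

# 95% confidence interval for the mean of one gait parameter.
speeds = [1.21, 1.30, 1.18, 1.25, 1.27, 1.22, 1.24, 1.29]  # made-up values
n = len(speeds)
mean = sum(speeds) / n
var = sum((s - mean) ** 2 for s in speeds) / (n - 1)  # sample variance
sem = math.sqrt(var / n)                              # standard error
t975 = 2.365                        # t-quantile for df = n - 1 = 7
ci = (mean - t975 * sem, mean + t975 * sem)
```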
NASA Astrophysics Data System (ADS)
Chan, C.; Drake, T. E.; Abegg, R.; Frekers, D.; Häusser, O.; Hicks, K.; Hutcheon, D. A.; Lee, L.; Miller, C. A.; Schubank, R.; Yen, S.
1990-04-01
The complete set of Wolfenstein parameters, the polarization, the asymmetry of scattering and the unpolarized double-differential cross section are presented for inclusive quasielastic proton scattering from 12C at a central momentum transfer of q = 1.9 fm^-1 and incident energies of 290 and 420 MeV. The spin observables D0, Dx, Dy and Dz as well as the longitudinal-to-transverse ratio of spin-flip probabilities are extracted from the data. Across the quasielastic continuum, the experimental data are compared to the variations expected from a single-scattering Fermi-gas approximation using the free NN amplitudes. Medium effects are evident in the pronounced quenching of the polarization parameter relative to the free value.
A practical guide to the design and construction of a single wire beverage antenna
NASA Astrophysics Data System (ADS)
Spong, H. L.
1980-09-01
Theoretical results are presented which show the performance likely to result from using differing antenna heights, lengths and wire sizes and from operating with different ground conductivities. These studies were undertaken to provide practical advice for constructors and operators. Design parameters can be easily obtained with the aid of computer programs and an antenna can be rapidly constructed from readily available materials. Directivity can be increased by adding more elements, either in parallel or on a radial basis. A particular performance can be achieved with great latitude in the parameters. Good low angle performance can be achieved without large ground screens. A directional array can be made by switching between a number of elements set up on different bearings.
Single-particle strength from nucleon transfer in oxygen isotopes: Sensitivity to model parameters
NASA Astrophysics Data System (ADS)
Flavigny, F.; Keeley, N.; Gillibert, A.; Obertelli, A.
2018-03-01
In the analysis of transfer reaction data to extract nuclear structure information, the choice of input parameters to the reaction model, such as distorting potentials and overlap functions, has a significant impact. In this paper we consider a set of data for the (d,t) and (d,3He) reactions on 14,16,18O as a well-delimited subject for a study of the sensitivity of such analyses to different choices of distorting potentials and overlap functions, with particular reference to a previous investigation of the variation of valence nucleon correlations as a function of the difference in nucleon separation energy ΔS = |Sp - Sn| [Phys. Rev. Lett. 110, 122503 (2013), 10.1103/PhysRevLett.110.122503].
EPR study of a gamma-irradiated (2-hydroxyethyl)triphenylphosphonium chloride single crystal
NASA Astrophysics Data System (ADS)
Karakaş, E.; Türkkan, E.; Dereli, Ö.; Sayιn, Ü.; Tapramaz, R.
2011-12-01
In this study, gamma-irradiated single crystals of (2-hydroxyethyl)triphenylphosphonium chloride [CH2CH2OHP(C6H5)3Cl] were investigated with electron paramagnetic resonance (EPR) spectroscopy at room temperature for different orientations in the magnetic field. The single crystals were irradiated with a 60Co γ-ray source at 0.818 kGy/h for about 36 h. Taking the chemical structure and the experimental spectra of the irradiated single crystal of the title compound into consideration, a paramagnetic species was produced with the unpaired electron delocalized around the 31P and several 1H nuclei. The anisotropic hyperfine values due to the 31P nucleus, the slightly anisotropic hyperfine values due to the 1H nuclei, and the g-tensor of the radical were measured from the spectra. Based on the molecular structure and the measured parameters, three possible radicals were modeled using the B3LYP/6-31+G(d) level of density-functional theory, and EPR parameters were calculated for the modeled radicals using the B3LYP/TZVP method/basis set combination. The calculated hyperfine coupling constants were found to be in good agreement with the observed EPR parameters. The experimental and theoretically simulated spectra for each of the three crystallographic axes were well matched with one of the modeled radicals (discussed in the text). We thus identified the radical ĊH2CH2P(C6H5)3Cl as the paramagnetic species produced in a single crystal of the title compound in two magnetically distinct sites. The experimental g-factor and hyperfine coupling constants of the radical were found to be anisotropic, with the isotropic values g_iso = 2.0032, ? G, ? G, ? G and ? G for site 1 and g_iso = 2.0031, ? G, ? G, ? G and ? G for site 2.
Multifunctional and Context-Dependent Control of Vocal Acoustics by Individual Muscles
Srivastava, Kyle H.; Elemans, Coen P.H.
2015-01-01
The relationship between muscle activity and behavioral output determines how the brain controls and modifies complex skills. In vocal control, ensembles of muscles are used to precisely tune single acoustic parameters such as fundamental frequency and sound amplitude. If individual vocal muscles were dedicated to the control of single parameters, then the brain could control each parameter independently by modulating the appropriate muscle or muscles. Alternatively, if each muscle influenced multiple parameters, a more complex control strategy would be required to selectively modulate a single parameter. Additionally, it is unknown whether the function of single muscles is fixed or varies across different vocal gestures. A fixed relationship would allow the brain to use the same changes in muscle activation to, for example, increase the fundamental frequency of different vocal gestures, whereas a context-dependent scheme would require the brain to calculate different motor modifications in each case. We tested the hypothesis that single muscles control multiple acoustic parameters and that the function of single muscles varies across gestures using three complementary approaches. First, we recorded electromyographic data from vocal muscles in singing Bengalese finches. Second, we electrically perturbed the activity of single muscles during song. Third, we developed an ex vivo technique to analyze the biomechanical and acoustic consequences of single-muscle perturbations. We found that single muscles drive changes in multiple parameters and that the function of single muscles differs across vocal gestures, suggesting that the brain uses a complex, gesture-dependent control scheme to regulate vocal output. PMID:26490859
Reconstructing Folding Energy Landscapes by Single-Molecule Force Spectroscopy
Woodside, Michael T.; Block, Steven M.
2015-01-01
Folding may be described conceptually in terms of trajectories over a landscape of free energies corresponding to different molecular configurations. In practice, energy landscapes can be difficult to measure. Single-molecule force spectroscopy (SMFS), whereby structural changes are monitored in molecules subjected to controlled forces, has emerged as a powerful tool for probing energy landscapes. We summarize methods for reconstructing landscapes from force spectroscopy measurements under both equilibrium and nonequilibrium conditions. Other complementary, but technically less demanding, methods provide a model-dependent characterization of key features of the landscape. Once reconstructed, energy landscapes can be used to study critical folding parameters, such as the characteristic transition times required for structural changes and the effective diffusion coefficient setting the timescale for motions over the landscape. We also discuss issues that complicate measurement and interpretation, including the possibility of multiple states or pathways and the effects of projecting multiple dimensions onto a single coordinate. PMID:24895850
Using single top rapidity to measure V_td, V_ts, V_tb at hadron colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilar-Saavedra, J. A.; Onofre, A.; Departamento de Fisica, Universidade do Minho, P-4710-057 Braga
2011-04-01
Single top production processes are usually regarded as the ones in which V_tb can be directly measured at hadron colliders. We show that the analysis of the single top rapidity distribution in t-channel and tW production can also set direct limits on V_td. At the LHC with 10 fb^-1 at 14 TeV, the combined limits on V_td may be reduced by almost a factor of 2 when the top rapidity distribution is used. This also implies that the limits on V_tb can be reduced by 15%, since both parameters, as well as V_ts, must be simultaneously obtained from a global fit to data. At the Tevatron, the exploitation of this distribution would require very high statistics.
Peng, Ran; Li, Dongqing
2016-10-07
The ability to create reproducible and inexpensive nanofluidic chips is essential to the fundamental research and applications of nanofluidics. This paper presents a novel and cost-effective method for fabricating a single nanochannel or multiple nanochannels in PDMS chips with controllable channel size and spacing. Single nanocracks or nanocrack arrays, positioned by artificial defects, are first generated on a polystyrene surface with controllable size and spacing by a solvent-induced method. Two sets of optimal working parameters are developed to replicate the nanocracks onto the polymer layers to form the nanochannel molds. The nanochannel molds are used to make the bi-layer PDMS microchannel-nanochannel chips by simple soft lithography. An alignment system is developed for bonding the nanofluidic chips under an optical microscope. Using this method, high quality PDMS nanofluidic chips with a single nanochannel or multiple nanochannels of sub-100 nm width and height and centimeter length can be obtained with high repeatability.
Supercontinuum as a light source for miniaturized endoscopes.
Lu, M K; Lin, H Y; Hsieh, C C; Kao, F J
2016-09-01
In this work, we have successfully implemented supercontinuum-based illumination through single fiber coupling. The integration of single fiber illumination with a miniature CMOS sensor forms a very slim and powerful camera module for endoscopic imaging. A set of tests and in vivo animal experiments are conducted accordingly to characterize the corresponding illuminance, spectral profile, intensity distribution, and image quality. The key illumination parameters of the supercontinuum, including color rendering index (CRI: 72%~97%) and correlated color temperature (CCT: 3,100K~5,200K), are modified with external filters and compared with those from an LED light source (CRI~76% & CCT~6,500K). The very high spatial coherence of the supercontinuum allows high luminosity conduction through a single multimode fiber (core size~400μm), whose distal end tip is fitted with a diffusion tip to broaden the solid angle of illumination (from less than 10° to more than 80°).
NASA Astrophysics Data System (ADS)
Buterakos, Donovan; Throckmorton, Robert E.; Das Sarma, S.
2018-01-01
In addition to magnetic field and electric charge noise adversely affecting spin-qubit operations, performing single-qubit gates on one of multiple coupled singlet-triplet qubits presents a new challenge: crosstalk, which is inevitable (and must be minimized) in any multiqubit quantum computing architecture. We develop a set of dynamically corrected pulse sequences that are designed to cancel the effects of both types of noise (i.e., field and charge) as well as crosstalk to leading order, and provide parameters for these corrected sequences for all 24 of the single-qubit Clifford gates. We then provide an estimate of the error as a function of the noise and capacitive coupling to compare the fidelity of our corrected gates to their uncorrected versions. Dynamical error correction protocols presented in this work are important for the next generation of singlet-triplet qubit devices where coupling among many qubits will become relevant.
NASA Astrophysics Data System (ADS)
Cermak, P.; Ruleova, P.; Holy, V.; Prokleska, J.; Kucek, V.; Palka, K.; Benes, L.; Drasar, C.
2018-02-01
Thermoelectric effects are one of the promising ways to utilize waste heat. Novel approaches have appeared in recent decades aiming to enhance thermoelectric conversion. The theory of energy filtering of free carriers by inclusions is among the most recently developed methods. Although the basic idea is clear, experimental evidence of this phenomenon is rare. Based on this concept, we searched for suitable systems with stable structures that show energy filtering. Here, we report on the anomalous behavior of Cr-doped single-crystal Bi2Se3 that indicates energy filtering. The solubility of chromium in Bi2Se3 was studied, as it is the key parameter in the formation process of inclusions. We present recent results on the effect of Cr-doping on the transport coefficients for a wide set of single-crystalline samples. Magnetic measurements were used to corroborate the conclusions drawn from the transport and X-ray measurements.
Simultaneous Retrieval of Multiple Aerosol Parameters Using a Multi-Angular Approach
NASA Technical Reports Server (NTRS)
Kuo, K.-S.; Weger, R. C.; Welch, R. M.
1997-01-01
Atmospheric aerosol particles, both natural and anthropogenic, are important to the earth's radiative balance through their direct and indirect effects. They scatter the incoming solar radiation (direct effect) and modify the shortwave reflective properties of clouds by acting as cloud condensation nuclei (indirect effect). Although it has been suggested that aerosols exert a net cooling influence on climate, this effect has received less attention than the radiative forcing due to clouds and greenhouse gases. In order to understand the role that aerosols play in a changing climate, detailed and accurate observations are a prerequisite. The retrieval of aerosol optical properties by satellite remote sensing has proven to be a difficult task. The difficulty results mainly from the tenuous nature and variable composition of aerosols. To date, with single-angle satellite observations, we can only retrieve reliably against dark backgrounds, such as over oceans and dense vegetation. Even then, assumptions must be made concerning the chemical composition of aerosols. In this investigation we examine the feasibility of simultaneous retrieval of multiple aerosol optical parameters using reflectances from a typical set of twelve angles observed by the French POLDER instrument. The retrieved aerosol optical parameters consist of asymmetry factor, single scattering albedo, surface albedo, and optical thickness.
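The simultaneous retrieval of several parameters from multi-angle reflectances can be illustrated with a linearized toy inversion. Everything here is invented for the sketch: the Jacobian, the twelve-angle count matching POLDER, and the parameter values standing in for asymmetry factor, single scattering albedo, surface albedo, and optical thickness. The real retrieval uses a full radiative-transfer forward model, not a linear one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_angles, n_params = 12, 4                 # 12 view angles, 4 parameters
K = rng.normal(size=(n_angles, n_params))  # linearized Jacobian (assumed)
true = np.array([0.7, 0.95, 0.1, 0.3])     # g, omega0, A_s, tau (invented)

reflectance = K @ true                     # simulated multi-angle signal
retrieved = np.linalg.lstsq(K, reflectance, rcond=None)[0]
```

The point of the sketch is dimensional: with twelve angular measurements and four unknowns, the system is overdetermined, which is what makes the simultaneous retrieval feasible where a single-angle observation is not.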
Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes
NASA Astrophysics Data System (ADS)
Teppati Losè, L.; Chiabrando, F.; Spanò, A.
2018-05-01
The research presented in this paper is focused on a preliminary evaluation of a 360 multi-camera rig: the possibilities to use the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, both from an operative and theoretical point of view. The consistency of the six cameras that compose the 360 system was in depth analysed adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was projected and created, and several topographic measurements were performed in order to have a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras were analyse both in the different phases of the photogrammetric workflow (reprojection errors on the single tie point, dense cloud generation, geometrical description of the surveyed object, etc.), both in the stitching of the different images into a single spherical panorama (some consideration on the influence of the camera parameters on the overall quality of the spherical image are reported also in these section).
Jacobsen, Svein; Stauffer, Paul R
2007-02-21
The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.
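The underlying principle can be sketched in one dimension. This is not the authors' 3D simulation: the temperature profile, the exponential sensing weight, and the 4 cm depth range below are all assumed, and serve only to show that the radiometric brightness temperature is a weighting-function average of the depth-dependent tissue temperature.

```python
import numpy as np

z = np.linspace(0.0, 0.04, 400)      # depth, 0 to 4 cm (assumed)
dz = z[1] - z[0]
# Heated tissue profile: 37 C baseline, ~43 C peak at 1 cm depth (assumed).
T = 37.0 + 6.0 * np.exp(-((z - 0.01) / 0.008) ** 2)
# Radiometer sensing weight: exponential decay, 1 cm e-folding (assumed).
W = np.exp(-z / 0.01)
W /= W.sum() * dz                    # normalize so that sum(W)*dz = 1
T_B = float((W * T).sum() * dz)      # brightness temperature (weighted mean)
```

T_B necessarily lies between the baseline and peak tissue temperatures; in the paper, the mapping from a single T_B reading back to the peak temperature comes from precomputed simulations of such heated profiles.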
Clinical assessment of spatiotemporal gait parameters in patients and older adults.
Item-Glatthorn, Julia F; Maffiuletti, Nicola A
2014-11-07
Spatial and temporal characteristics of human walking are frequently evaluated to identify possible gait impairments, mainly in orthopedic and neurological patients, but also in healthy older adults. The quantitative gait analysis described in this protocol is performed with a recently-introduced photoelectric system (see Materials table) which has the potential to be used in the clinic because it is portable, easy to set up (no subject preparation is required before a test), and does not require maintenance and sensor calibration. The photoelectric system consists of series of high-density floor-based photoelectric cells with light-emitting and light-receiving diodes that are placed parallel to each other to create a corridor, and are oriented perpendicular to the line of progression. The system simply detects interruptions in light signal, for instance due to the presence of feet within the recording area. Temporal gait parameters and 1D spatial coordinates of consecutive steps are subsequently calculated to provide common gait parameters such as step length, single limb support and walking velocity, whose validity against a criterion instrument has recently been demonstrated. The measurement procedures are very straightforward; a single patient can be tested in less than 5 min and a comprehensive report can be generated in less than 1 min.
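The step from raw contact events to the reported parameters is simple arithmetic, and can be sketched as follows. The timestamps and positions below are illustrative, not measurements from the system; only the 1D position and time of each foot contact are assumed, matching what the photoelectric corridor records.

```python
# Each heel strike: (time in s, position along corridor in m, foot).
heel_strikes = [
    (0.00, 0.00, "L"), (0.55, 0.65, "R"), (1.10, 1.31, "L"),
    (1.64, 1.95, "R"), (2.19, 2.62, "L"),
]

# Step = consecutive contacts of opposite feet.
step_times = [t2 - t1 for (t1, _, _), (t2, _, _)
              in zip(heel_strikes, heel_strikes[1:])]
step_lengths = [x2 - x1 for (_, x1, _), (_, x2, _)
                in zip(heel_strikes, heel_strikes[1:])]
# Stride = consecutive contacts of the same foot (here left-to-left).
stride_length_L = heel_strikes[2][1] - heel_strikes[0][1]
walking_speed = heel_strikes[-1][1] / heel_strikes[-1][0]  # m/s
```

Support phases (single and double support) additionally require toe-off events, which the photoelectric system also detects as the light beams clear.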
Wei, Ying-Chieh; Wei, Ying-Yu; Chang, Kai-Hsiung; Young, Ming-Shing
2012-04-01
The objective of this study is to design and develop a programmable electrocardiogram (ECG) generator with frequency domain characteristics of heart rate variability (HRV) which can be used to test the efficiency of ECG algorithms and to calibrate and maintain ECG equipment. We simplified and modified the three coupled ordinary differential equations in McSharry's model to a single differential equation to obtain the ECG signal. This system not only allows the signal amplitude, heart rate, QRS-complex slopes, and P- and T-wave position parameters to be adjusted, but can also be used to adjust the very low frequency, low frequency, and high frequency components of HRV frequency domain characteristics. The system can be tuned to function with HRV or not. When the HRV function is on, the average heart rate can be set to a value ranging from 20 to 122 beats per minute (BPM) with an adjustable variation of 1 BPM. When the HRV function is off, the heart rate can be set to a value ranging from 20 to 139 BPM with an adjustable variation of 1 BPM. The amplitude of the ECG signal can be set from 0.0 to 330 mV at a resolution of 0.005 mV. These parameters can be adjusted either via input through a keyboard or through a graphical user interface (GUI) control panel that was developed using LABVIEW. The GUI control panel depicts a preview of the ECG signal such that the user can adjust the parameters to establish a desired ECG morphology. A complete set of parameters can be stored in the flash memory of the system via a USB 2.0 interface. Our system can generate three different types of synthetic ECG signals for testing the efficiency of an ECG algorithm or calibrating and maintaining ECG equipment. © 2012 American Institute of Physics
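The morphology side of such a generator can be sketched compactly. The published McSharry-style model integrates a differential equation; summing the five Gaussian wave events (P, Q, R, S, T) directly, as below, gives a similar single-beat morphology and shows how the position, amplitude, and width parameters act. The numeric wave parameters are typical textbook values, not the ones used in this system.

```python
import math

WAVES = {  # theta (rad), amplitude (mV), width (rad); assumed values
    "P": (-math.pi / 3, 0.12, 0.25),
    "Q": (-math.pi / 12, -0.10, 0.10),
    "R": (0.0, 1.00, 0.10),
    "S": (math.pi / 12, -0.25, 0.10),
    "T": (math.pi / 2, 0.35, 0.40),
}

def ecg_sample(theta):
    """ECG amplitude at phase theta: sum of Gaussian wave events."""
    z = 0.0
    for th_i, a_i, b_i in WAVES.values():
        d = theta - th_i
        z += a_i * math.exp(-d * d / (2.0 * b_i * b_i))
    return z

# One cardiac cycle sampled over theta in [-pi, pi).
beat = [ecg_sample(-math.pi + 2.0 * math.pi * k / 500) for k in range(500)]
```

Heart-rate variability would then enter by modulating how fast theta advances from beat to beat, which is where the VLF/LF/HF spectral components described above are injected.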
Modified neural networks for rapid recovery of tokamak plasma parameters for real time control
NASA Astrophysics Data System (ADS)
Sengupta, A.; Ranjan, P.
2002-07-01
Two modified neural network techniques are used to identify the equilibrium plasma parameters of the Superconducting Steady State Tokamak I from external magnetic measurements, with the ultimate aim of assisting real-time plasma control. In the first method, unlike the conventional structure in which a single network with an optimal number of processing elements calculates the outputs, a multinetwork system connected in parallel performs the calculations; this is called the double neural network. The accuracy of the recovered parameters is clearly higher than that of the conventional network. The second method combines statistical function parametrization with a neural network: a principal component transformation removes linear dependences from the measurements, a dimensional reduction process reduces the dimensionality of the input space, and this reduced, transformed set, rather than the entire set, is fed into the network input. This is known as the principal component transformation-based neural network. The accuracy of the parameters recovered by this second modified network is a further improvement over that of the double neural network, a result that differs from earlier work in which the double neural network showed better performance. The conventional network and function parametrization methods have also been used for comparison. The conventional network has been used to optimize the set of magnetic diagnostics; the effective set of sensors it identifies is compared with that of the principal component-based network. Fault tolerance of the neural networks has been tested: the double neural network showed the greatest resistance to faults in the diagnostics, while the principal component-based network performed poorly. Finally, the processing times of the methods have been compared.
The double network and the principal component network involve the minimum computation time, although the conventional network also performs well enough to be used in real time.
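The principal-component route can be illustrated with a toy example: correlated "magnetic measurements" are reduced to a few principal components, and a simple linear least-squares map (standing in for the neural network) is fitted from the reduced inputs to a "plasma parameter". All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 30 correlated "sensor" channels driven by 3 latent
# degrees of freedom, and one "plasma parameter" linear in those latents.
n_samples, n_sensors = 200, 30
latent = rng.normal(size=(n_samples, 3))
mixing = rng.normal(size=(3, n_sensors))
X = latent @ mixing + 0.01 * rng.normal(size=(n_samples, n_sensors))
y = latent @ np.array([1.0, -2.0, 0.5])

# Principal component transformation: center, SVD, keep the top components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                      # reduced, decorrelated input space

# Least-squares fit on the reduced inputs (a linear stand-in for the network).
w, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
rms = float(np.sqrt(np.mean((Z @ w + y.mean() - y) ** 2)))
```

Three principal components capture the three latent degrees of freedom, so the fit on the 3-dimensional reduced space recovers the target almost as well as a fit on all 30 raw channels would.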
Tuning fuzzy PD and PI controllers using reinforcement learning.
Boubertakh, Hamid; Tadjine, Mohamed; Glorennec, Pierre-Yves; Labiod, Salim
2010-10-01
In this paper, we propose new auto-tuning fuzzy PD and PI controllers that use the reinforcement Q-learning (QL) algorithm for SISO (single-input single-output) and TITO (two-input two-output) systems. We first investigate the design parameters and settings of a typical class of fuzzy PD (FPD) and fuzzy PI (FPI) controllers: zero-order Takagi-Sugeno controllers with equidistant triangular membership functions for the inputs, equidistant singleton membership functions for the output, Larsen's implication method, and the average sum defuzzification method. Second, the analytical structures of these typical fuzzy PD and PI controllers are compared to their classical PD and PI counterparts. Finally, the effectiveness of the proposed method is demonstrated through simulation examples. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
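The reinforcement-learning ingredient is the standard tabular Q-learning update, Q(s,a) ← Q(s,a) + α[r + γ·max Q(s',·) − Q(s,a)]. The sketch below runs this update on a toy environment; the plant and the fuzzy controller are abstracted away, and the discrete "actions" stand in for adjustments of a controller gain.

```python
import random

random.seed(1)
n_states, n_actions = 5, 3          # hypothetical discretization of the tuning problem
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: action 1 is always best (reward 1), others earn 0."""
    reward = 1.0 if action == 1 else 0.0
    return random.randrange(n_states), reward

state = 0
for _ in range(2000):
    if random.random() < epsilon:                       # explore
        action = random.randrange(n_actions)
    else:                                               # exploit current estimates
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # The Q-learning temporal-difference update.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

best = [max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)]
```

After training, the greedy policy (`best`) selects the rewarded action in every state; in the auto-tuning setting the reward would instead reflect the closed-loop tracking error.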
Lot sizing and unequal-sized shipment policy for an integrated production-inventory system
NASA Astrophysics Data System (ADS)
Giri, B. C.; Sharma, S.
2014-05-01
This article develops a single-manufacturer single-retailer production-inventory model in which the manufacturer delivers the retailer's ordered quantity in unequal shipments. The manufacturer's production process is imperfect and it may produce some defective items during a production run. The retailer performs a screening process immediately after receiving the order from the manufacturer. The expected average total cost of the integrated production-inventory system is derived using renewal theory and a solution procedure is suggested to determine the optimal production and shipment policy. An extensive numerical study based on different sets of parameter values is conducted and the optimal results so obtained are analysed to examine the relative performance of the models under equal and unequal shipment policies.
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Krewald, S.; Reinhard, P.-G.
2016-09-01
We present results of the time blocking approximation (TBA) for giant resonances in light-, medium-, and heavy-mass nuclei. The TBA is an extension of the widely used random-phase approximation (RPA) adding complex configurations by coupling to phonon excitations. A new method for handling the single-particle continuum is developed and applied in the present calculations. We investigate in detail the dependence of the numerical results on the size of the single-particle space and the number of phonons as well as on nuclear matter properties. Our approach is self-consistent, based on an energy-density functional of Skyrme type where we used seven different parameter sets. The numerical results are compared with experimental data.
Single evolution equation in a light-matter pairing system
NASA Astrophysics Data System (ADS)
Bugaychuk, S.; Tobisch, E.
2018-03-01
The coupled system including wave mixing and nonlinear dynamics of a nonlocal optical medium is usually studied (1) numerically, with the medium being regarded as a black box, or (2) experimentally, making use of some empirical assumptions. In this paper we deduce for the first time a single evolution equation describing the dynamics of the pairing system as a holistic complex. For a non-degenerate set of parameters, we obtain the nonlinear Schrödinger equation with its coefficients written out explicitly. Analytical solutions of this equation can be realized experimentally in nonlocal optical media, e.g. in photorefractive, liquid or photonic crystals. For instance, a soliton-like solution can be used in dynamical holography for designing an artificial grating with maximal amplification of an image.
Acetabular rim and surface segmentation for hip surgery planning and dysplasia evaluation
NASA Astrophysics Data System (ADS)
Tan, Sovira; Yao, Jianhua; Yao, Lawrence; Summers, Ronald M.; Ward, Michael M.
2008-03-01
Knowledge of the acetabular rim and surface can be invaluable for hip surgery planning and dysplasia evaluation. The acetabular rim can also be used as a landmark for registration purposes. At the present time acetabular features are mostly extracted manually at great cost of time and human labor. Using a recent level set algorithm that can evolve on the surface of a 3D object represented by a triangular mesh we automatically extracted rims and surfaces of acetabulae. The level set is guided by curvature features on the mesh. It can segment portions of a surface that are bounded by a line of extremal curvature (ridgeline or crestline). The rim of the acetabulum is such an extremal curvature line. Our material consists of eight hemi-pelvis surfaces. The algorithm is initiated by putting a small circle (level set seed) at the center of the acetabular surface. Because this surface distinctively has the form of a cup we were able to use the Shape Index feature to automatically extract an approximate center. The circle then expands and deforms so as to take the shape of the acetabular rim. The results were visually inspected. Only minor errors were detected. The algorithm also proved to be robust. Seed placement was satisfactory for the eight hemi-pelvis surfaces without changing any parameters. For the level set evolution we were able to use a single set of parameters for seven out of eight surfaces.
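The Shape Index used to locate the cup-shaped acetabular centre can be computed from the two principal curvatures; the function below follows Koenderink's definition, S = (2/π)·arctan((k1 + k2)/(k2 − k1)) with k1 ≤ k2, under which a spherical cup gives S = −1 and a spherical cap S = +1. The curvature values are illustrative.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index from principal curvatures (k1 <= k2 enforced)."""
    k1, k2 = np.minimum(k1, k2), np.maximum(k1, k2)
    # arctan2 handles the umbilic case k1 == k2 (spherical cup/cap -> -1/+1).
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k2 - k1)

cup = shape_index(-1.0, -0.5)      # concave region: S close to -1
cap = shape_index(0.5, 1.0)        # convex region: S close to +1
saddle = shape_index(-1.0, 1.0)    # symmetric saddle: S = 0
```

Thresholding S near −1 over a triangular mesh is one plausible way to seed the level set inside the acetabular cup, consistent with the automatic seed placement described above.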
The compression–error trade-off for large gridded data sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silver, Jeremy D.; Zender, Charles S.
The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
2017-01-27
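A minimal sketch of the layer-packing idea: rather than one (scale, offset) pair for the whole array, store one pair per vertical layer, so that layers spanning different orders of magnitude each retain good relative precision in 16-bit integers. The packing scheme below (16-bit codes, per-layer min/max scaling) is an assumption for illustration, not the exact implementation compared in the paper.

```python
import numpy as np

def layer_pack(data):
    """Pack a (nlev, ny, nx) float array into int16 with per-layer scale/offset."""
    nlev = data.shape[0]
    scale = np.empty(nlev)
    offset = np.empty(nlev)
    packed = np.empty(data.shape, dtype=np.int16)
    for k in range(nlev):
        lo, hi = float(data[k].min()), float(data[k].max())
        offset[k] = lo
        scale[k] = (hi - lo) / 65534.0 or 1.0   # guard against constant layers
        packed[k] = np.round((data[k] - lo) / scale[k]) - 32767
    return packed, scale, offset

def layer_unpack(packed, scale, offset):
    """Invert layer_pack, broadcasting the per-layer parameters."""
    return (packed.astype(np.float64) + 32767.0) * scale[:, None, None] + offset[:, None, None]

rng = np.random.default_rng(0)
# A field whose magnitude varies by orders of magnitude across layers.
data = np.stack([10.0**k * (1.0 + 0.1 * rng.random((4, 4))) for k in range(5)])
packed, scale, offset = layer_pack(data)
restored = layer_unpack(packed, scale, offset)
max_rel_err = float(np.max(np.abs(restored - data) / data))
```

With a single scalar (scale, offset) pair, the quantization step would be set by the largest layer and the smallest layer would lose nearly all its precision; the per-layer parameters keep the relative error uniformly small.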
NASA Astrophysics Data System (ADS)
Menezes, Marcos; Capaz, Rodrigo
Black phosphorus (BP) is a promising material for applications in electronics, especially because its band gap can be tuned by changing the number of layers. In single-layer BP, also called phosphorene, the P atoms form two staggered chains bonded by sp3 hybridization, while neighboring layers are bound by van der Waals interactions. In this work, we present a tight-binding (TB) parametrization of the electronic structure of single- and few-layer BP, based on the Slater-Koster model within the two-center approximation. Our model includes all 3s and 3p orbitals, which makes this problem more complex than that of graphene, where only 2pz orbitals are needed for most purposes. The TB parameters are obtained from a least-squares fit to DFT calculations carried out with the SIESTA code. We compare the results for different basis sets used to expand the ab initio wavefunctions and discuss their applicability. Our model can fit a larger number of bands than previously reported calculations based on Wannier functions. Moreover, our parameters have a clear physical interpretation based on chemical bonding. As such, we expect our results to be useful for a further understanding of multilayer BP and other 2D materials characterized by strong sp3 hybridization. CNPq, FAPERJ, INCT-Nanomateriais de Carbono.
Improving automatic peptide mass fingerprint protein identification by combining many peak sets.
Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim
2004-08-05
An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry standard automated peak picking on a set of mass spectra obtained after tryptic in gel digestion of 2D-gel samples from human fetal fibroblasts. The set of spectra contain samples ranging from strong to weak spectra, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry standard method and a human operator, and equal in performance to these on strong and medium strong spectra. It is also demonstrated that peak sets selected by a human operator display a considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
Speaker verification system using acoustic data and non-acoustic data
Gable, Todd J [Walnut Creek, CA; Ng, Lawrence C [Danville, CA; Holzrichter, John F [Berkeley, CA; Burnett, Greg C [Livermore, CA
2006-03-21
A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of "template" parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
Oyeyemi, Victor B; Pavone, Michele; Carter, Emily A
2011-12-09
Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these particular properties (BDEs) is challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: 1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; 2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and 3) DFT-B3LYP calculations of minimum-energy structures and vibrational frequencies to account for zero-point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values for C-C and C-H bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to make predictions of BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
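A common form of a two-parameter complete-basis-set extrapolation assumes E(X) = E_CBS + A·X⁻³ for cardinal number X (e.g. X = 3 for triple-zeta, 4 for quadruple-zeta) and solves for the two unknowns from two calculations. The abstract does not specify the authors' exact formula, and the energies below are made up for illustration.

```python
def cbs_extrapolate(e_x, e_y, x, y):
    """Solve E_CBS and A from E(X) = E_CBS + A * X**-3 at two cardinal numbers."""
    a = (e_x - e_y) / (x ** -3 - y ** -3)
    return e_x - a * x ** -3, a

# Hypothetical triple- and quadruple-zeta energies (hartree), for illustration only.
e_tz, e_qz = -76.332, -76.360
e_cbs, a = cbs_extrapolate(e_tz, e_qz, 3, 4)   # e_cbs lies below both inputs
```

The extrapolated limit overshoots the quadruple-zeta value by the residual basis-set incompleteness that the X⁻³ model attributes to the finite basis.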
NASA Astrophysics Data System (ADS)
Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Helvie, Mark A.; Cha, Kenny H.; Richter, Caleb D.
2017-12-01
Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the ‘knowledge’ learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p = 0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.
Reichardt, J; Hess, M; Macke, A
2000-04-20
Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients, the general pattern of the multiple-scattering parameters shows a steep onset at cloud base, with values of 0.5-0.7, followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area-equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on cirrus particle spectrum, base height, and geometric depth, and on the lidar parameters (laser wavelength and receiver field of view), are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.
A Set of Functional Brain Networks for the Comprehensive Evaluation of Human Characteristics.
Sung, Yul-Wan; Kawachi, Yousuke; Choi, Uk-Su; Kang, Daehun; Abe, Chihiro; Otomo, Yuki; Ogawa, Seiji
2018-01-01
Many human characteristics must be evaluated to comprehensively understand an individual, and measurements of the corresponding cognition/behavior are required. Brain imaging by functional MRI (fMRI) has been widely used to examine brain function related to human cognition/behavior. However, few aspects of the cognition/behavior of individuals or experimental groups can be examined through task-based fMRI. Recently, resting-state fMRI (rs-fMRI) signals have been shown to represent functional infrastructure in the brain that is highly involved in processing information related to cognition/behavior. Because rs-fMRI does not require stimulus tasks, it may allow diverse information about the brain to be obtained from a single MRI scan. In this study, we attempted to identify a set of functional networks representing cognition/behavior that are related to a wide variety of human characteristics and to evaluate these characteristics using rs-fMRI data. Such findings would support the potential of rs-fMRI to provide diverse information about the brain. We used resting-state fMRI and a set of 130 psychometric parameters that cover most human characteristics, including those related to the intelligence and emotional quotients and to social ability/skill. We identified 163 brain regions by VBM analysis using regression analysis with the 130 psychometric parameters. Next, using a 163 × 163 correlation matrix, we identified functional networks related to 111 of the 130 psychometric parameters. Finally, we built 8-class support vector machine classifiers corresponding to these 111 functional networks. Our results demonstrate that rs-fMRI signals contain intrinsic information about brain function related to cognition/behavior and that this set of 111 networks/classifiers can be used to comprehensively evaluate human characteristics.
Optimized positioning of autonomous surgical lamps
NASA Astrophysics Data System (ADS)
Teuber, Jörn; Weller, Rene; Kikinis, Ron; Oldhafer, Karl-Jürgen; Lipp, Michael J.; Zachmann, Gabriel
2017-03-01
We consider the problem of automatically finding optimal positions for surgical lamps throughout an entire surgical procedure, under the assumption that future lamps could be robotized. We propose a two-tiered optimization technique for the real-time autonomous positioning of these robotized surgical lamps. Typically, finding optimal positions for surgical lamps is a multi-dimensional problem with several, in part conflicting, objectives, such as optimal lighting conditions at every point in time while minimizing the movement of the lamps in order to avoid distracting the surgeon. Consequently, we use multi-objective optimization (MOO) to find optimal positions in real time during the entire surgery. Because of the conflicting objectives, such problems usually have not a single optimal solution but a set of solutions that forms a Pareto front. When our algorithm selects a solution from this set, it must additionally consider the individual preferences of the surgeon. This is a highly non-trivial task because the relationship between the solution and the parameters is not obvious. We have developed a novel meta-optimization that addresses exactly this challenge. It delivers an easy-to-understand set of presets for the parameters and allows a balance between lamp movement and lamp obstruction. This meta-optimization can be pre-computed for different kinds of operations and is then used by our online optimization for the selection of the appropriate Pareto solution. Both optimization approaches use data obtained by a depth camera that captures the surgical site as well as the environment around the operating table. We have evaluated our algorithms with data recorded during a real open abdominal surgery; the data set is available for scientific use. The results show that our meta-optimization produces viable parameter sets for different parts of an intervention even when trained on a small portion of it.
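The notion of a Pareto front used by the online optimizer can be made concrete with a minimal dominance filter over candidate solutions scored on two costs to be minimized; the two costs below (say, illumination loss and lamp travel) and the candidate values are purely illustrative, and the paper's actual objectives and solver are richer.

```python
def pareto_front(points):
    """Return the points not dominated by any other (all costs minimized).

    p dominates q if p <= q in every cost and p < q in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and any(qi < pi for qi, pi in zip(q, p))
            for q in points if q is not p
        )
        if not dominated:
            front.append(p)
    return front

# (illumination loss, lamp travel) for five candidate lamp placements
candidates = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
front = pareto_front(candidates)    # drops the dominated (3,3) and (5,5)
```

The surviving points are the trade-off curve; selecting one of them according to surgeon presets is the role of the meta-optimization described above.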
Cotter, Christopher; Turcotte, Julie Catherine; Crawford, Bruce; Sharp, Gregory; Mah'D, Mufeed
2015-01-01
This work aims at three goals: first, to define a set of statistical parameters and plan structures for a 3D pretreatment thoracic and prostate intensity‐modulated radiation therapy (IMRT) quality assurance (QA) protocol; secondly, to test if the 3D QA protocol is able to detect certain clinical errors; and third, to compare the 3D QA method with QA performed with single ion chamber and 2D gamma test in detecting those errors. The 3D QA protocol measurements were performed on 13 prostate and 25 thoracic IMRT patients using IBA's COMPASS system. For each treatment planning structure included in the protocol, the following statistical parameters were evaluated: average absolute dose difference (AADD), percent structure volume with absolute dose difference greater than 6% (ADD6), and 3D gamma test. To test the 3D QA protocol error sensitivity, two prostate and two thoracic step‐and‐shoot IMRT patients were investigated. Errors introduced to each of the treatment plans included energy switched from 6 MV to 10 MV, multileaf collimator (MLC) leaf errors, linac jaws errors, monitor unit (MU) errors, MLC and gantry angle errors, and detector shift errors. QA was performed on each plan using a single ion chamber and 2D array of ion chambers for 2D and 3D QA. Based on the measurements performed, we established a uniform set of tolerance levels to determine if QA passes for each IMRT treatment plan structure: maximum allowed AADD is 6%; maximum 4% of any structure volume can be with ADD6 greater than 6%, and maximum 4% of any structure volume may fail 3D gamma test with test parameters 3%/3 mm DTA. Out of the three QA methods tested the single ion chamber performed the worst by detecting 4 out of 18 introduced errors, 2D QA detected 11 out of 18 errors, and 3D QA detected 14 out of 18 errors. PACS number: 87.56.Fc PMID:26699299
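The two per-structure statistics of the protocol, AADD and ADD6, are straightforward voxel-wise quantities. A sketch with synthetic dose grids, using the tolerance levels quoted in the abstract (AADD ≤ 6%; at most 4% of the volume with absolute dose difference above 6%):

```python
import numpy as np

def qa_stats(planned, measured, threshold_pct=6.0):
    """Per-structure AADD (%) and ADD6 (% of voxels over the threshold)."""
    planned = np.asarray(planned, dtype=float)
    measured = np.asarray(measured, dtype=float)
    pct_diff = 100.0 * np.abs(measured - planned) / planned  # voxel-wise % dose difference
    aadd = float(pct_diff.mean())
    add6 = float(100.0 * np.mean(pct_diff > threshold_pct))
    return aadd, add6

planned = np.full((10, 10), 70.0)       # a uniform 70 Gy structure (synthetic)
measured = planned.copy()
measured[0, :3] = 70.0 * 1.08           # 3 of 100 voxels off by 8%
aadd, add6 = qa_stats(planned, measured)
# Tolerances quoted in the abstract: AADD <= 6%, ADD6 on at most 4% of the volume.
passes = aadd <= 6.0 and add6 <= 4.0
```

Here three hot voxels leave the structure-averaged AADD tiny while ADD6 directly flags the fraction of volume exceeding the 6% threshold, which is why the protocol tracks both.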
Single and tandem Fabry-Perot etalons as solar background filters for lidar.
McKay, J A
1999-09-20
Atmospheric lidar is difficult in daylight because of sunlight scattered into the receiver field of view. In this research methods for the design and performance analysis of Fabry-Perot etalons as solar background filters are presented. The factor by which the signal to background ratio is enhanced is defined as a measure of the performance of the etalon as a filter. Equations for evaluating this parameter are presented for single-, double-, and triple-etalon filter systems. The role of reflective coupling between etalons is examined and shown to substantially reduce the contributions of the second and third etalons to the filter performance. Attenuators placed between the etalons can improve the filter performance, at modest cost to the signal transmittance. The principal parameter governing the performance of the etalon filters is the etalon defect finesse. Practical limitations on etalon plate smoothness and parallelism cause the defect finesse to be relatively low, especially in the ultraviolet, and this sets upper limits to the capability of tandem etalon filters to suppress the solar background at tolerable cost to the signal.
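The single-etalon building block is the lossless Airy transmission, T = [1 + (2F/π)² sin²(δ/2)]⁻¹, with finesse F and round-trip phase δ = 4πnd·cosθ/λ. The sketch below evaluates it near an assumed 532 nm resonance; the plate spacing and finesse are illustrative values, not the paper's design parameters.

```python
import numpy as np

def airy_transmission(wavelength, d=0.01, n=1.0, theta=0.0, finesse=30.0):
    """Lossless single-etalon Airy transmission (SI units; illustrative defaults)."""
    delta = 4.0 * np.pi * n * d * np.cos(theta) / wavelength   # round-trip phase
    return 1.0 / (1.0 + (2.0 * finesse / np.pi) ** 2 * np.sin(delta / 2.0) ** 2)

# Choose an exact resonance order m near 532 nm: delta = 2*pi*m  <=>  wl = 2*n*d/m.
m = round(2.0 * 0.01 / 532e-9)
wl_res = 2.0 * 0.01 / m
t_peak = airy_transmission(wl_res)                  # ~1 on resonance
t_off = airy_transmission(wl_res * (1.0 + 1e-5))    # strongly suppressed off resonance
```

The sharp drop away from resonance is what rejects the broadband solar background while passing the narrowband lidar return; the achievable contrast is limited by the defect finesse discussed in the abstract.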
Determination of the stability and control derivatives of the NASA F/A-18 HARV using flight data
NASA Technical Reports Server (NTRS)
Napolitano, Marcello R.; Spagnuolo, Joelle M.
1993-01-01
This report documents the research conducted for the NASA-Ames Cooperative Agreement No. NCC 2-759 with West Virginia University. A complete set of the stability and control derivatives for varying angles of attack from 10 deg to 60 deg were estimated from flight data of the NASA F/A-18 HARV. The data were analyzed with the use of the pEst software which implements the output-error method of parameter estimation. Discussions of the aircraft equations of motion, parameter estimation process, design of flight test maneuvers, and formulation of the mathematical model are presented. The added effects of the thrust vectoring and single surface excitation systems are also addressed. The results of the longitudinal and lateral directional derivative estimates at varying angles of attack are presented and compared to results from previous analyses. The results indicate a significant improvement due to the independent control surface deflections induced by the single surface excitation system, and at the same time, a need for additional flight data especially at higher angles of attack.
NASA Technical Reports Server (NTRS)
Harrington, W. W.
1973-01-01
The reduction of the discrete tones generated by jet engines, which is essential for jet aircraft to meet present and proposed noise standards, is discussed. The discrete tones generated by the blades and vanes propagate in the inlet and exhaust ducts in the form of spiraling acoustic waves, or spinning modes. The reduction of these spinning modes through the cancellation effect of combining two acoustic fields was investigated. The spinning mode synthesizer provided the means for an effective study of this noise reduction scheme. Two sets of electrical-acoustical transducers located in an equally spaced circular array simultaneously generate a specified spinning mode and the cancelling mode. Analysis of the wave equation for the synthesizer established the optimum cancelling-array acoustic parameters for maximum sound pressure level reduction. The parameter dependence of the frequency ranges over which single, specified circumferential modes generated by a single array propagate, and over which the modes generated by two arrays are effectively cancelled, was determined. Substantial sound pressure level reduction was obtained for modes within these limits.
Accessing the molecular frame through strong-field alignment of distributions of gas phase molecules
NASA Astrophysics Data System (ADS)
Reid, Katharine L.
2018-03-01
A rationale for creating highly aligned distributions of molecules is that it enables vector properties referenced to molecule-fixed axes (the molecular frame) to be determined. In the present work, the degree of alignment that is necessary for this to be achieved in practice is explored. Alignment is commonly parametrized in experiments by a single parameter, ⟨cos²θ⟩, which is insufficient to enable predictive calculations to be performed. Here, it is shown that, if the full distribution of molecular axes takes a Gaussian form, this single parameter can be used to determine the complete set of alignment moments needed to characterize the distribution. In order to demonstrate the degree of alignment that is required to approach the molecular frame, the alignment moments corresponding to a few chosen values of ⟨cos²θ⟩ are used to project a model molecular frame photoelectron angular distribution into the laboratory frame. These calculations show that ⟨cos²θ⟩ needs to approach 0.9 in order for significant blurring caused by averaging to be avoided. This article is part of the theme issue `Modern theoretical chemistry'.
Khachatrian, Ani; Roche, Nicolas J. -H.; Buchner, Stephen P.; ...
2016-12-19
A focused, pulsed x-ray beam was used to compare SET characteristics in pristine and proton-irradiated Al 0.3Ga 0.7N/GaN HEMTs. Measured SET amplitudes and trailing-edge decay times were analyzed as was the collected charge, obtained by integrating the SET pulses over time. SETs generated in proton-irradiated HEMTs differed significantly from those in pristine HEMTs with regard to the decay times and collected charge. The decay times have previously been shown to be attributed to charge trapping by defect states that are caused either by imperfect material growth conditions or by protoninduced displacement damage. The longer decay times observed for proton-irradiated HEMTsmore » are attributed to the presence of additional deep traps created when protons lose energy as they collide with the nuclei of constituent atoms. Comparison of electrical parameters measured before and immediately following exposure to the focused x-ray beam showed little change, confirming the absence of significant charge buildup in passivation layers by the x-rays themselves. In conclusion, a major advantage of the pulsed x-ray technique is that the region under the metal gate can be probed for single-event transients from the top side, an approach incompatible with pulsed-laser SEE testing that involves the use of visible light.« less
NASA Astrophysics Data System (ADS)
Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove
2018-02-01
We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested and no other choices with regard to the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
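The advantage of a fit basis with the same inherent shape as the potential cut can be shown with a toy linear least-squares fit (this is an illustrative sketch, not the paper's MidasCpp implementation; the Morse parameters and grid are hypothetical, and the non-linear parameter a is held fixed rather than optimized):

```python
import numpy as np

# Toy 1-D potential cut: a Morse curve V(r) = D * (1 - exp(-a (r - r0)))^2
# (all parameter values here are hypothetical).
D, a, r0 = 0.18, 1.2, 1.4
r = np.linspace(0.8, 4.0, 40)                       # grid of "single points"
V = D * (1.0 - np.exp(-a * (r - r0))) ** 2

def fit_rmse(basis, order):
    """Linear least-squares fit of V on a polynomial or Morse-type basis."""
    if basis == "poly":
        X = np.vander(r - r0, order + 1, increasing=True)
    else:  # Morse-type basis in y = 1 - exp(-a (r - r0))
        X = np.vander(1.0 - np.exp(-a * (r - r0)), order + 1, increasing=True)
    coef, *_ = np.linalg.lstsq(X, V, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - V) ** 2)))
```

A second-order Morse-type basis reproduces this cut essentially exactly, whereas a polynomial of the same order leaves a visible residual, mirroring the abstract's point that shape-matched fit-basis functions reduce the number of terms needed.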
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index alpha-1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV. Two procedures for estimating alpha-1, the method of moments and maximum likelihood (ML), are developed and their statistical performance compared. It is concluded that the ML procedure attains the most desirable statistical properties and is hence the recommended statistical estimation procedure for estimating alpha-1. The ML procedure is then generalized for application to a set of real cosmic-ray data and thereby makes this approach applicable to existing cosmic-ray data sets. Several other important results, such as the relationship between collecting power and detector energy resolution, as well as inclusion of a non-Gaussian detector response function, are presented. These results have many practical benefits in the design phase of a cosmic-ray detector as they permit instrument developers to make important trade studies in design parameters as a function of one of the science objectives. This is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
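For a pure power-law spectrum dN/dE ∝ E^(−α) above a threshold E_min, the ML estimate of the index has the standard closed form α̂ = 1 + n / Σ ln(E_i/E_min). A minimal sketch (illustrative; the paper's generalized treatment with detector response is not reproduced here):

```python
import math
import random

def sample_power_law(alpha, e_min, n, rng):
    """Inverse-CDF sampling from dN/dE ~ E^(-alpha) on [e_min, inf), alpha > 1."""
    return [e_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def ml_index(events, e_min):
    """Closed-form maximum-likelihood estimate of the spectral index:
    alpha_hat = 1 + n / sum(ln(E_i / e_min))."""
    return 1.0 + len(events) / sum(math.log(e / e_min) for e in events)

rng = random.Random(0)
alpha_hat = ml_index(sample_power_law(2.7, 1.0, 200000, rng), 1.0)
```

With 2 x 10^5 simulated events and a true index of 2.7, the estimate lands within a few thousandths of the truth, illustrating the low variance that motivates the ML recommendation.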
A statistical characterization of the finger tapping test: modeling, estimation, and applications.
Austin, Daniel; McNames, James; Klein, Krystal; Jimison, Holly; Pavel, Misha
2015-03-01
Sensory-motor performance is indicative of both cognitive and physical function. The Halstead-Reitan finger tapping test is a measure of sensory-motor speed commonly used to assess function as part of a neuropsychological evaluation. Despite the widespread use of this test, the underlying motor and cognitive processes driving tapping behavior during the test are not well characterized or understood. This lack of understanding may make clinical inferences from test results about health or disease state less accurate because important aspects of the task such as variability or fatigue are unmeasured. To overcome these limitations, we enhanced the tapper with a sensor that enables us to more fully characterize all the aspects of tapping. This modification enabled us to decompose the tapping performance into six component phases and represent each phase with a set of parameters having clear functional interpretation. This results in a set of 29 total parameters for each trial, including change in tapping over time, and trial-to-trial and tap-to-tap variability. These parameters can be used to more precisely link different aspects of cognition or motor function to tapping behavior. We demonstrate the benefits of this new instrument with a simple hypothesis-driven trial comparing single and dual-task tapping.
A cluster pattern algorithm for the analysis of multiparametric cell assays.
Kaufman, Menachem; Bloch, David; Zurgil, Naomi; Shafran, Yana; Deutsch, Mordechai
2005-09-01
Multiparametric analysis of complex single-cell assays in both static and flow cytometry (SC and FC, respectively) has become common in recent years. In such assays, the analysis of changes, applying common statistical parameters and tests, often fails to detect significant differences between the investigated samples. The cluster pattern similarity (CPS) measure between two sets of gated clusters is based on computing the difference between their density distribution functions' set points. The CPS was applied for the discrimination between two observations in a four-dimensional parameter space. The similarity coefficient (r) ranges from 0 (perfect similarity) to 1 (dissimilar). Three CPS validation tests were carried out: on the same stock samples of fluorescent beads, yielding very low r's (0, 0.066); and on two cell models: mitogenic stimulation of peripheral blood mononuclear cells (PBMC), and apoptosis induction in the Jurkat T cell line by H2O2. In both of the latter cases, r indicated similarity (r < 0.23) within the same group, and dissimilarity (r > 0.48) otherwise. This classification approach offers a measure of similarity between samples. It relies on the multidimensional pattern of the sample parameters. The algorithm compensates for environmental drifts in this apparatus and assay; it may also be applied to more than four dimensions.
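A dissimilarity coefficient with the same 0-to-1 range can be sketched as a total-variation distance between normalized histograms. Note this is only a stand-in in the spirit of the CPS measure, not the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def density_dissimilarity(a, b, bins=20, lo=0.0, hi=1.0):
    """Illustrative dissimilarity between two 1-D samples: total-variation
    distance between their normalized histograms, ranging from
    0 (identical densities) to 1 (fully disjoint). NOT the paper's CPS."""
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi))
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi))
    pa = pa / pa.sum()
    pb = pb / pb.sum()
    return 0.5 * float(np.abs(pa - pb).sum())
```

Identical samples give 0, samples drawn from non-overlapping ranges give 1; intermediate overlap gives intermediate values, analogous to the r thresholds quoted in the abstract.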
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sabes, Philip N.
2013-01-01
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
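The learning rule named above, contrastive divergence for a restricted Boltzmann machine, can be sketched in a few lines. This toy (binary units, one CD-1 step, a single fixed "stimulus" pattern, hypothetical sizes) is far smaller than the paper's multisensory networks but implements the same update:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.
    W: visible-hidden weights, b: visible biases, c: hidden biases."""
    ph0 = sigmoid(v0 @ W + c)                         # positive phase
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(h0 @ W.T + b)                       # reconstruction
    ph1 = sigmoid(pv1 @ W + c)                        # negative phase
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Train on a single fixed pattern and inspect the reconstruction.
v_data = np.tile([1.0, 0.0, 1.0, 0.0], (8, 1))
W = 0.01 * rng.standard_normal((4, 3))
b, c = np.zeros(4), np.zeros(3)
for _ in range(300):
    W, b, c = cd1_step(W, b, c, v_data)
h = (rng.random((8, 3)) < sigmoid(v_data @ W + c)).astype(float)
recon = sigmoid(h @ W.T + b)
```

After training, the reconstruction assigns high probability to the units that were on in the data and low probability to the others, the basic density-estimation behavior the paper builds on.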
Reuse Requirements for Generating Long Term Climate Data Sets
NASA Astrophysics Data System (ADS)
Fleig, A. J.
2007-12-01
Creating long term climate data sets from remotely sensed data requires a specialized form of code reuse. To detect long term trends in a geophysical parameter, such as global ozone amount or mean sea surface temperature, it is essential to be able to differentiate between real changes in the measurement and artifacts related to changes in processing algorithms or instrument characteristics. The ability to rerun the exact algorithm used to produce a given data set many years after the data was originally made is essential to create consistent long term data sets. It is possible to quickly develop a basic algorithm that will convert a perfect instrument measurement into a geophysical parameter value for a well specified set of conditions. However, the devil is in the details and it takes a massive effort to develop and verify a processing system to generate high quality global climate data over all necessary conditions. As an example, from 1976 until now, over a hundred man years and eight complete reprocessings have been spent on deriving thirty years of total ozone data from multiple backscattered ultraviolet instruments. To obtain a global data set it is necessary to make numerous assumptions and to handle many special conditions (e.g., "What happens at high solar zenith angles with scattered clouds for snow-covered terrain at high altitudes?"). It is easier to determine the precision of a remotely sensed data set than to determine its absolute accuracy. Fortunately, if the entire data set is made with a single instrument and a constant algorithm the ability to detect long term trends is primarily determined by the precision of the measurement system rather than its absolute accuracy. However, no instrument runs forever and new processing algorithms are developed over time. 
Introducing the resulting changes can impact the estimate of product precision and reduce the ability to estimate long term trends. Given an extended period of time when both the initial measurement system and the new one provide simultaneous measurements it may be possible to identify differences between the two systems and produce a consistent merged long term data set. Unfortunately this is often not the case. Instead it is necessary to understand the exact details of all the assumptions built into the initial processing system and to evaluate the impact of changes in each of these assumptions and of new features introduced into the next generation processing system. This is not possible without complete understanding of exactly how the original data was produced. While scientific papers and algorithm theoretical basis documents provide substantial details about the concepts they do not provide the necessary detail. Only exact processing codes with all the necessary ancillary data to run them provide the needed information. Since it will be necessary to modify the code for the new instrument it is also necessary to provide all of the tools such as table generation routines and input parameters used to generate the code. This has not been a problem for the people who make the first set of measurements of a given parameter. There was no similar predecessor global data set to match and they know what they assumed in making their measurements. But we are entering an era when it is necessary to consider the next generation. For instance, the entire 30 year global ozone data set that started with the Total Ozone Mapping Spectrometer instrument launched in 1978 on the Nimbus 7 spacecraft was produced by a single science team. 
Similar measurements will be made well into the middle of the coming century with instruments to be flown on the National Polar Orbiting Environmental Satellite System, but the original science team (unfortunately) will not be there to explain what they did over that period.
A sequence-dependent rigid-base model of DNA
NASA Astrophysics Data System (ADS)
Gonzalez, O.; Petkevičiutė, D.; Maddocks, J. H.
2013-02-01
A novel hierarchy of coarse-grain, sequence-dependent, rigid-base models of B-form DNA in solution is introduced. The hierarchy depends on both the assumed range of energetic couplings, and the extent of sequence dependence of the model parameters. A significant feature of the models is that they exhibit the phenomenon of frustration: each base cannot simultaneously minimize the energy of all of its interactions. As a consequence, an arbitrary DNA oligomer has an intrinsic or pre-existing stress, with the level of this frustration dependent on the particular sequence of the oligomer. Attention is focussed on the particular model in the hierarchy that has nearest-neighbor interactions and dimer sequence dependence of the model parameters. For a Gaussian version of this model, a complete coarse-grain parameter set is estimated. The parameterized model allows, for an oligomer of arbitrary length and sequence, a simple and explicit construction of an approximation to the configuration-space equilibrium probability density function for the oligomer in solution. The training set leading to the coarse-grain parameter set is itself extracted from a recent and extensive database of a large number of independent, atomic-resolution molecular dynamics (MD) simulations of short DNA oligomers immersed in explicit solvent. The Kullback-Leibler divergence between probability density functions is used to make several quantitative assessments of our nearest-neighbor, dimer-dependent model, which is compared against others in the hierarchy to assess various assumptions pertaining both to the locality of the energetic couplings and to the level of sequence dependence of its parameters. It is also compared directly against all-atom MD simulation to assess its predictive capabilities. The results show that the nearest-neighbor, dimer-dependent model can successfully resolve sequence effects both within and between oligomers. 
For example, due to the presence of frustration, the model can successfully predict the nonlocal changes in the minimum energy configuration of an oligomer that are consequent upon a local change of sequence at the level of a single point mutation.
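The Kullback-Leibler divergence between two Gaussian configuration-space densities, the comparison tool named in the abstract, has a closed form. A small illustrative implementation (not the paper's code; dimensions and covariances here are arbitrary):

```python
import numpy as np

def kl_gauss(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for multivariate Gaussians, the kind
    of divergence used to compare Gaussian equilibrium densities."""
    mu0, mu1 = np.asarray(mu0, float), np.asarray(mu1, float)
    S0, S1 = np.asarray(S0, float), np.asarray(S1, float)
    d = len(mu0)
    S1inv = np.linalg.inv(S1)
    dm = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - d
                  + logdet1 - logdet0)
```

It vanishes only when the two densities coincide, so it gives a single non-negative number quantifying how far one model's predicted density is from another's, or from the MD reference.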
de Mattos, D; Bertrand, J K; Misztal, I
2000-08-01
The objective of this study was to investigate the possibility of genotype x environment interactions for weaning weight (WWT) between different regions of the United States (US) and between Canada (CA), Uruguay (UY), and US for populations of Hereford cattle. Original data were composed of 487,661, 102,986, and 2,322,722 edited weaning weight records from CA, UY, and US, respectively. A total of 359 sires were identified as having progeny across all three countries; 240 of them had at least one progeny with a record in each environment. The data sets within each country were reduced by retaining records from herds with more than 500 WWT records, with an average contemporary group size of greater than nine animals, and that contained WWT records from progeny or maternal grand-progeny of the across-country sires. Data sets within each country were further reduced by randomly selecting among remaining herds. Four regions within US were defined: Upper Plains (UP), Cornbelt (CB), South (S), and Gulf Coast (GC). Similar sampling criteria and common international sires were used to form the within-US regional data sets. A pairwise analysis was done between countries and regions within US (UP-CB vs S-GC, UP vs CB, and S vs GC) for the estimation of (co)variance components and genetic correlation between environments. An accelerated EM-REML algorithm and a multiple-trait animal model that considered WWT as a different trait in each environment were used to estimate parameters in each pairwise analysis. Direct and maternal (in parentheses) estimated genetic correlations for CA vs UY, CA vs US, US vs UY, UP-CB vs S-GC, UP vs CB, and S vs GC were .88 (.84), .86 (.82), .90 (.85), .88 (.87), .88 (.84), and .87 (.85), respectively. 
The general absence of genotype x country interactions observed in this study, together with a prior study that showed the similarity of genetic and environmental parameters across the three countries, strongly indicates that a joint WWT genetic evaluation for Hereford cattle could be conducted using a model that treated the information from CA, UY, and US as a single population using single population-wide genetic parameters.
Darkhovskii, M B; Pletnev, I V; Tchougréeff, A L
2003-11-15
A computational method targeted to Werner-type complexes is developed on the basis of quantum mechanical effective Hamiltonian crystal field (EHCF) methodology (previously proposed for describing electronic structure of transition metal complexes) combined with the Gillespie-Kepert version of molecular mechanics (MM). It is a special version of the hybrid quantum/MM approach. The MM part is responsible for representing the whole molecule, including ligand atoms and the metal ion coordination sphere, but leaving out the effects of the d-shell. The quantum mechanical EHCF part is limited to the metal ion d-shell. The method reproduces with reasonable accuracy the geometry and spin states of the Fe(II) complexes with monodentate and polydentate aromatic ligands with nitrogen donor atoms. In this setting, a single set of MM parameters is shown to be sufficient for handling all spin states of the complexes under consideration. Copyright 2003 Wiley Periodicals, Inc.
McKisson, John E.; Barbosa, Fernando
2015-09-01
A method for designing a completely passive bias compensation circuit to stabilize the gain of multi-pixel avalanche photodetector devices. The method includes determining the circuitry design and component values to achieve a desired precision of gain stability, and can be used with any temperature-sensitive device having a nominally linear temperature coefficient of a voltage-dependent parameter that must be stabilized. The circuitry design includes a negative temperature coefficient (NTC) resistor in thermal contact with the photodetector device to provide a varying resistance, and a second, fixed resistor to form a voltage divider whose values can be chosen to set the desired slope and intercept for the characteristic with a specific voltage source value. The addition of a third resistor to the divider network provides a solution set for a set of SiPM devices that requires only a single stabilized voltage source value.
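The qualitative behavior of such a divider can be sketched numerically. In this toy (all component values and the beta-model thermistor parameters are hypothetical, not taken from the patent), the NTC sits in the top leg, so its falling resistance with temperature raises the divider output, the direction needed to track a SiPM operating voltage with a positive temperature coefficient:

```python
import math

def r_ntc(t_c, r25=10e3, beta=3950.0):
    """Beta-model NTC resistance at t_c degrees Celsius (assumed part values)."""
    return r25 * math.exp(beta * (1.0 / (t_c + 273.15) - 1.0 / (25.0 + 273.15)))

def divider_out(t_c, v_src, r_fix=10e3):
    """Output of a divider with the NTC in the top leg and a fixed
    resistor in the bottom leg, fed from source v_src."""
    return v_src * r_fix / (r_ntc(t_c) + r_fix)
```

At 25 degrees C the NTC equals its nominal r25, so the output is exactly v_src/2; the choice of v_src, r25, and r_fix (plus the patent's third resistor) is what sets the slope and intercept of the compensation characteristic.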
3-Iodobenzaldehyde: XRD, FT-IR, Raman and DFT studies.
Kumar, Chandraju Sadolalu Chidan; Parlak, Cemal; Tursun, Mahir; Fun, Hoong-Kun; Rhyman, Lydia; Ramasami, Ponnadurai; Alswaidan, Ibrahim A; Keşan, Gürkan; Chandraju, Siddegowda; Quah, Ching Kheng
2015-06-15
The structure of 3-iodobenzaldehyde (3IB) was characterized by FT-IR, Raman and single-crystal X-ray diffraction techniques. The conformational isomers, optimized geometric parameters, normal mode frequencies and corresponding vibrational assignments of 3IB were examined using the density functional theory (DFT) method, with the Becke-3-Lee-Yang-Parr (B3LYP) functional and the 6-311+G(3df,p) basis set for all atoms except for iodine. The LANL2DZ effective core basis set was used for iodine. Potential energy distribution (PED) analysis of normal modes was performed to identify characteristic frequencies. 3IB crystallizes in monoclinic space group P21/c with the O-trans form. There is a good agreement between the theoretically predicted structural parameters and vibrational frequencies and those obtained experimentally. In order to understand the halogen effect, 3-halogenobenzaldehyde [XC6H4CHO; X=F, Cl and Br] was also studied theoretically. The free energy difference between the isomers is small but the rotational barrier is about 8 kcal/mol. An atypical behavior of fluorine affecting conformational preference is observed. Copyright © 2015 Elsevier B.V. All rights reserved.
Zhang, Xinyuan; Zheng, Nan; Rosania, Gus R
2008-09-01
Cell-based molecular transport simulations are being developed to facilitate exploratory cheminformatic analysis of virtual libraries of small drug-like molecules. For this purpose, mathematical models of single cells are built from equations capturing the transport of small molecules across membranes. In turn, physicochemical properties of small molecules can be used as input to simulate intracellular drug distribution, through time. Here, with mathematical equations and biological parameters adjusted so as to mimic a leukocyte in the blood, simulations were performed to analyze steady state, relative accumulation of small molecules in lysosomes, mitochondria, and cytosol of this target cell, in the presence of a homogenous extracellular drug concentration. Similarly, with equations and parameters set to mimic an intestinal epithelial cell, simulations were also performed to analyze steady state, relative distribution and transcellular permeability in this non-target cell, in the presence of an apical-to-basolateral concentration gradient. With a test set of ninety-nine monobasic amines gathered from the scientific literature, simulation results helped analyze relationships between the chemical diversity of these molecules and their intracellular distributions.
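The pH-dependent lysosomal accumulation such simulations predict for monobasic amines can be illustrated with the classic ion-trapping relation: only the neutral species crosses membranes freely, and its fraction follows Henderson-Hasselbalch. This is a one-compartment sketch, much simpler than the paper's full cell model:

```python
def neutral_fraction(pH, pKa):
    """Fraction of a monobasic amine in its neutral, membrane-permeant
    form (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def steady_state_ratio(pH_in, pH_out, pKa):
    """Ion-trapping sketch: steady-state total-concentration ratio
    (inside/outside) when only the neutral species equilibrates across
    the membrane. Simplified relative to a multi-compartment cell model."""
    return neutral_fraction(pH_out, pKa) / neutral_fraction(pH_in, pKa)
```

For an amine with pKa around 9, an acidic lysosome (pH about 5) bathed in medium at pH 7.4 traps the drug at concentrations hundreds of times the extracellular level, the kind of organelle-selective accumulation the simulations screen for.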
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
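The initial MSM-construction step can be sketched as counting transitions in a discretized trajectory and row-normalizing. This toy omits the reversibility constraints and the experimental-refinement step the paper adds:

```python
import numpy as np

def estimate_msm(traj, n_states, lag=1):
    """Maximum-likelihood MSM transition matrix from one discrete state
    trajectory: row-normalized transition counts at the given lag time
    (no reversibility constraint, no refinement against experiments)."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-lag], traj[lag:]):
        C[i, j] += 1.0
    return C / C.sum(axis=1, keepdims=True)

def stationary(T):
    """Equilibrium populations: left eigenvector of T for eigenvalue 1."""
    w, v = np.linalg.eig(T.T)
    p = np.real(v[:, np.argmax(np.real(w))])
    return p / p.sum()
```

The transition matrix gives the transition probabilities directly, and its dominant left eigenvector gives the equilibrium populations, the two quantities the abstract says time-series learning can recover.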
A global analysis of Y-chromosomal haplotype diversity for 23 STR loci
Purps, Josephine; Siegert, Sabine; Willuweit, Sascha; Nagy, Marion; Alves, Cíntia; Salazar, Renato; Angustia, Sheila M.T.; Santos, Lorna H.; Anslinger, Katja; Bayer, Birgit; Ayub, Qasim; Wei, Wei; Xue, Yali; Tyler-Smith, Chris; Bafalluy, Miriam Baeta; Martínez-Jarreta, Begoña; Egyed, Balazs; Balitzki, Beate; Tschumi, Sibylle; Ballard, David; Court, Denise Syndercombe; Barrantes, Xinia; Bäßler, Gerhard; Wiest, Tina; Berger, Burkhard; Niederstätter, Harald; Parson, Walther; Davis, Carey; Budowle, Bruce; Burri, Helen; Borer, Urs; Koller, Christoph; Carvalho, Elizeu F.; Domingues, Patricia M.; Chamoun, Wafaa Takash; Coble, Michael D.; Hill, Carolyn R.; Corach, Daniel; Caputo, Mariela; D’Amato, Maria E.; Davison, Sean; Decorte, Ronny; Larmuseau, Maarten H.D.; Ottoni, Claudio; Rickards, Olga; Lu, Di; Jiang, Chengtao; Dobosz, Tadeusz; Jonkisz, Anna; Frank, William E.; Furac, Ivana; Gehrig, Christian; Castella, Vincent; Grskovic, Branka; Haas, Cordula; Wobst, Jana; Hadzic, Gavrilo; Drobnic, Katja; Honda, Katsuya; Hou, Yiping; Zhou, Di; Li, Yan; Hu, Shengping; Chen, Shenglan; Immel, Uta-Dorothee; Lessig, Rüdiger; Jakovski, Zlatko; Ilievska, Tanja; Klann, Anja E.; García, Cristina Cano; de Knijff, Peter; Kraaijenbrink, Thirsa; Kondili, Aikaterini; Miniati, Penelope; Vouropoulou, Maria; Kovacevic, Lejla; Marjanovic, Damir; Lindner, Iris; Mansour, Issam; Al-Azem, Mouayyad; Andari, Ansar El; Marino, Miguel; Furfuro, Sandra; Locarno, Laura; Martín, Pablo; Luque, Gracia M.; Alonso, Antonio; Miranda, Luís Souto; Moreira, Helena; Mizuno, Natsuko; Iwashima, Yasuki; Neto, Rodrigo S. 
Moura; Nogueira, Tatiana L.S.; Silva, Rosane; Nastainczyk-Wulf, Marina; Edelmann, Jeanett; Kohl, Michael; Nie, Shengjie; Wang, Xianping; Cheng, Baowen; Núñez, Carolina; Pancorbo, Marian Martínez de; Olofsson, Jill K.; Morling, Niels; Onofri, Valerio; Tagliabracci, Adriano; Pamjav, Horolma; Volgyi, Antonia; Barany, Gusztav; Pawlowski, Ryszard; Maciejewska, Agnieszka; Pelotti, Susi; Pepinski, Witold; Abreu-Glowacka, Monica; Phillips, Christopher; Cárdenas, Jorge; Rey-Gonzalez, Danel; Salas, Antonio; Brisighelli, Francesca; Capelli, Cristian; Toscanini, Ulises; Piccinini, Andrea; Piglionica, Marilidia; Baldassarra, Stefania L.; Ploski, Rafal; Konarzewska, Magdalena; Jastrzebska, Emila; Robino, Carlo; Sajantila, Antti; Palo, Jukka U.; Guevara, Evelyn; Salvador, Jazelyn; Ungria, Maria Corazon De; Rodriguez, Jae Joseph Russell; Schmidt, Ulrike; Schlauderer, Nicola; Saukko, Pekka; Schneider, Peter M.; Sirker, Miriam; Shin, Kyoung-Jin; Oh, Yu Na; Skitsa, Iulia; Ampati, Alexandra; Smith, Tobi-Gail; Calvit, Lina Solis de; Stenzl, Vlastimil; Capal, Thomas; Tillmar, Andreas; Nilsson, Helena; Turrina, Stefania; De Leo, Domenico; Verzeletti, Andrea; Cortellini, Venusia; Wetton, Jon H.; Gwynne, Gareth M.; Jobling, Mark A.; Whittle, Martin R.; Sumita, Denilce R.; Wolańska-Nowak, Paulina; Yong, Rita Y.Y.; Krawczak, Michael; Nothnagel, Michael; Roewer, Lutz
2014-01-01
In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent. PMID:24854874
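One of the standard haplotype-based forensic parameters mentioned above is Nei's haplotype (gene) diversity, which has a simple closed form. A minimal sketch:

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's haplotype (gene) diversity, a standard forensic summary
    parameter: D = n / (n - 1) * (1 - sum(p_i**2)) over haplotype
    frequencies p_i in a sample of size n."""
    n = len(haplotypes)
    freqs = Counter(haplotypes).values()
    return n / (n - 1) * (1.0 - sum((c / n) ** 2 for c in freqs))
```

All-distinct haplotypes give a diversity of 1 (every pair of individuals is distinguishable, the regime a high-discrimination kit like PPY23 aims for), while a monomorphic sample gives 0.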
Process Parameter Optimization for Wobbling Laser Spot Welding of Ti6Al4V Alloy
NASA Astrophysics Data System (ADS)
Vakili-Farahani, F.; Lungershausen, J.; Wasmer, K.
Laser beam welding (LBW) coupled with the "wobble effect" (fast oscillation of the laser beam) is very promising for the high-precision micro-joining industry. For this process, as in conventional LBW, the welding process parameters play a very significant role in determining the quality of a weld joint. Consequently, four process parameters (laser power, wobble frequency, number of rotations within a single laser pulse, and focus position) and five responses (penetration, width of the heat-affected zone (HAZ), area of the fusion zone, area of the HAZ, and hardness) were investigated for spot welding of Ti6Al4V alloy (grade 5) using a design of experiments (DoE) approach. This paper presents experimental results showing the effects of varying the most important process parameters on the spot weld quality of Ti6Al4V alloy. Semi-empirical mathematical models were developed to correlate the laser welding parameters to each of the measured weld responses. The adequacy of the models was then examined by various methods such as ANOVA. These models not only allow a better understanding of the wobble laser welding process and prediction of its performance but also determine optimal process parameters. The optimal combination of process parameters was therefore determined against a set of quality criteria.
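Fitting a semi-empirical response-surface model to a coded DoE can be sketched with ordinary least squares. The factor names and the synthetic response below are hypothetical stand-ins for the measured weld responses:

```python
import numpy as np

# Hypothetical coded DoE: two factors at three levels (say, laser power and
# wobble frequency, coded to -1/0/+1), with a synthetic response standing in
# for a measured weld quantity such as penetration.
X = np.array([[x1, x2] for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)], dtype=float)
y = 2.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]

def quad_design(X):
    """Design matrix for a full quadratic response-surface model:
    intercept, linear, interaction, and squared terms."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
```

With a full 3-by-3 factorial the quadratic design is full rank, so the fit recovers the underlying coefficients exactly; with real, noisy responses the same fit yields the semi-empirical model whose adequacy ANOVA then tests.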
NASA Astrophysics Data System (ADS)
Song, Li; Shan-Jun, Chen; Yan, Chen; Peng, Chen
2016-03-01
The SF radical and its singly charged cation and anion, SF+ and SF-, have been investigated at the MRCI/aug-cc-pVXZ (X = Q, 5, 6) levels of theory with Davidson correction. Both the core-valence correlation and the relativistic effect are considered. Extrapolation to the complete basis set (CBS) limit is adopted to remove the basis set truncation error. Geometrical parameters, potential energy curves (PECs), vibrational energy levels, spectroscopic constants, ionization potentials, and electron affinities of the ground electronic state for all these species are obtained. The information with respect to molecular characteristics of the SFn (n = -1, 0, +1) systems derived in this work will help to extend our knowledge and to guide further experimental or theoretical research. Project supported by the National Natural Science Foundation of China (Grant Nos. 11304023 and 11447172), the Young and Middle-Aged Talent of the Education Bureau of Hubei Province, China (Grant No. Q20151307), and the Yangtze Youth Talents Fund of Yangtze University, China (Grant No. 2015cqr21).
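A common form of the CBS extrapolation used to remove basis-set truncation error assumes the correlation energy converges as E(X) = E_CBS + A·X⁻³ in the cardinal number X; two basis sets then determine E_CBS in closed form. This two-point scheme is a standard illustration, not necessarily the paper's exact formula:

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point extrapolation to the complete-basis-set limit, assuming
    E(X) = E_CBS + A * X**-3 (a common scheme; the abstract does not
    specify the exact extrapolation formula used)."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)
```

If the energies truly follow the assumed X⁻³ form, the extrapolation recovers the limit exactly from, e.g., the X = 4 and X = 5 results.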
Modifying and reacting to the environmental pH can drive bacterial interactions
Ratzke, Christoph
2018-01-01
Microbes usually exist in communities consisting of myriad different but interacting species. These interactions are typically mediated through environmental modifications; microbes change the environment by taking up resources and excreting metabolites, which affects the growth of both themselves and also other microbes. We show here that the way microbes modify their environment and react to it sets the interactions within single-species populations and also between different species. A very common environmental modification is a change of the environmental pH. We find experimentally that these pH changes create feedback loops that can determine the fate of bacterial populations; they can either facilitate or inhibit growth, and in extreme cases will cause extinction of the bacterial population. Understanding how single species change the pH and react to these changes allowed us to estimate their pairwise interaction outcomes. Those interactions lead to a set of generic interaction motifs—bistability, successive growth, extended suicide, and stabilization—that may be independent of which environmental parameter is modified and thus may reoccur in different microbial systems. PMID:29538378
Liébana, Susana; Brandão, Delfina; Cortés, Pilar; Campoy, Susana; Alegret, Salvador; Pividori, María Isabel
2016-01-21
A magneto-genosensing approach is presented for the detection of the three most common pathogenic bacteria in food safety: Salmonella, Listeria and Escherichia coli. The methodology is based on the detection of tagged amplified DNA obtained by single-tagging PCR with a set of specific primers for each pathogen, followed by electrochemical magneto-genosensing on silica magnetic particles (MPs). Primer sets were selected for the amplification of invA (278 bp), prfA (217 bp) and eaeA (151 bp), with one primer in each set tagged with fluorescein, biotin and digoxigenin, coding for Salmonella enterica, Listeria monocytogenes and E. coli, respectively. The single-tagged amplicons were then immobilized on silica MPs, exploiting the nucleic acid-binding properties of silica particles in the presence of a chaotropic agent, guanidinium thiocyanate. The assessment of the silica MPs as a platform for electrochemical magneto-genosensing is described, including the main parameters for selectively attaching the longer dsDNA fragments rather than the shorter ssDNA primers based on the negative charge density of their sugar-phosphate backbones. This approach proved to be a promising detection tool, with rapidity and sensitivity well suited for implementation on DNA biosensors and microfluidic platforms. Copyright © 2015 Elsevier B.V. All rights reserved.
Venkatesan, Hariram; Godwin, John J; Sivamani, Seralathan
2017-10-01
The article presents experimental data on the extraction and transesterification of bio-oil derived from Stoechospermum marginatum, a brown macro marine alga. The samples were collected from the Mandapam region, Gulf of Mannar, Tamil Nadu, India. The bio-oil was extracted using the Soxhlet technique with a lipid extraction efficiency of 24.4%. Single-stage transesterification was adopted due to the low free fatty acid content. The yield of biodiesel was optimized by varying the process parameters. The obtained data showed the optimum process parameters to be a reaction time of 90 min, reaction temperature of 65 °C, catalyst concentration of 0.50 g, and an 8:1 molar ratio. Furthermore, data pertaining to the physico-chemical properties of the derived algal biodiesel are also presented.
Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains
NASA Astrophysics Data System (ADS)
Zhang, Xiong-Peng; Shao, Bin; Zou, Jian
2017-05-01
Motivated by some recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how a maximally entangled state can be prepared. To achieve this goal, we use a numerical optimization algorithm (the Krotov algorithm, which has been shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses corresponding to the target fidelity. We find that the minimum time for preparing our target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained results for the optimal fidelities and the effectiveness of the Krotov method under realistic conditions.
A Method of Trajectory Design for Manned Asteroid Explorations
NASA Astrophysics Data System (ADS)
Gan, Qing-Bo; Zhang, Yang; Zhu, Zheng-Fan; Han, Wei-Hua; Dong, Xin
2015-07-01
A trajectory optimization method for nuclear-electric-propulsion manned asteroid explorations is presented. For launches between 2035 and 2065, the phases of departure from and return to the Earth are first searched, based on the two-pulse, single-cycle Lambert transfer orbit. Then the optimal flight trajectory is selected by pruning the flight sequences in two feasible regions. Adopting a propelling-taxiing-propelling flight strategy and taking minimal fuel consumption as the performance index, the nuclear-electric-propulsion flight trajectory is optimized using a hybrid method. Finally, taking the segment-wise optimized parameters as initial values and in accordance with the overall mission constraints, the globally optimized parameters are obtained. Numerical and graphical results are also given.
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
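The simplest instance of the consensus idea above is a voxel-wise majority vote over the ensemble of warped atlas labelings. MUSE itself goes further (similarity-ranked atlas selection and intensity-based boundary modulation); the toy vote below, on invented label arrays, only illustrates why an ensemble can beat any single warped atlas.

```python
import numpy as np

# Majority-vote label fusion sketch: each "atlas" contributes one integer
# label per voxel in target space; the fused label is the per-voxel mode.

def majority_vote(labelings):
    """labelings: list of equally shaped integer label arrays in target space."""
    stacked = np.stack(labelings)             # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)               # per-voxel winning label

# Three toy "warped atlas" labelings of a 4-voxel image
a = np.array([1, 0, 2, 2])
b = np.array([1, 1, 2, 0])
c = np.array([1, 0, 0, 2])
fused = majority_vote([a, b, c])              # per-voxel mode of a, b, c
```

Each atlas disagrees with the fused result somewhere, yet the consensus suppresses their individual registration errors; weighting votes by local similarity, as MUSE does, refines this further.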
Performance comparison of extracellular spike sorting algorithms for single-channel recordings.
Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert
2012-01-30
Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p<0.01) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p<0.01) than WaveClus for signals with a noise level in the range 0.15-0.30. KlustaKwik achieved similar scores to WaveClus for signals with low noise level 0.00-0.15 and was worse otherwise. In conclusion, none of the three compared algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
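The study's evaluation score, Adjusted Mutual Information (AMI), is plain mutual information (MI) corrected for chance agreement between a sorter's output and the ground-truth unit labels. The chance correction is lengthy, so this sketch computes only the underlying MI between two labelings of the same spikes, with invented labels.

```python
from collections import Counter
from math import log

# Mutual information (in nats) between two labelings of the same spikes.
# AMI, as used in the study, additionally subtracts the expected MI under
# random labelings and normalizes; that correction is omitted here.

def mutual_information(labels_a, labels_b):
    n = len(labels_a)
    pa = Counter(labels_a)                     # marginal counts, labeling A
    pb = Counter(labels_b)                     # marginal counts, labeling B
    pab = Counter(zip(labels_a, labels_b))     # joint counts
    mi = 0.0
    for (a, b), nab in pab.items():
        mi += (nab / n) * log(n * nab / (pa[a] * pb[b]))
    return mi

# Identical labelings: MI equals the entropy of the labeling, log(2) here
truth = [0, 0, 1, 1]
sorter_out = [0, 0, 1, 1]
mi = mutual_information(truth, sorter_out)
```

A sorter that merges or splits units lowers this score, which is why an MI-family measure is a natural objective for tuning sorter parameters against artificial signals with known ground truth.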
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those of neighboring basins. Considering that physical similarity exists among neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
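The selection step above can be made concrete with a small sketch: given a basin's Pareto-optimal parameter sets and a neighboring basin's parameters, pick the Pareto member minimizing a weighted distance. The weighting by assumed similarity stands in for the a priori weights derived from Koren et al. (2000); the exact form of the abstract's closeness measure is not given, so a weighted Euclidean distance and all numbers below are assumptions.

```python
# Regionalization selection sketch: choose the Pareto parameter set
# closest (in a similarity-weighted sense) to a neighboring basin's set.

def closeness(params, neighbor, weights):
    """Weighted Euclidean distance; higher weight = assumed more similar."""
    return sum(w * (p - q) ** 2
               for p, q, w in zip(params, neighbor, weights)) ** 0.5

def select_regionalized(pareto_sets, neighbor, weights):
    return min(pareto_sets, key=lambda s: closeness(s, neighbor, weights))

# Invented example: three equally acceptable Pareto sets for one basin
pareto = [(0.9, 1.4, 55.0), (1.1, 1.0, 40.0), (1.0, 1.2, 48.0)]
neighbor = (1.0, 1.2, 50.0)     # calibrated parameters of a nearby basin
weights = (1.0, 1.0, 0.01)      # third parameter assumed less transferable
best = select_regionalized(pareto, neighbor, weights)
```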
KAMO: towards automated data processing for microcrystals.
Yamashita, Keitaro; Hirata, Kunio; Yamamoto, Masaki
2018-05-01
In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5-10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate the whole data-processing task for multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejection, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals.
Theoretical performance model for single image depth from defocus.
Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme
2014-12-01
In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2017-11-01
We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model. The proposed method is used to estimate consecutively the values of the two sets of model parameters. Numerical results corresponding to both synthetic and real functional magnetic resonance imaging measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model. Copyright © 2017 John Wiley & Sons, Ltd.
Superconductivity-induced features in the electronic Raman spectrum of monolayer graphene
NASA Astrophysics Data System (ADS)
García-Ruiz, A.; Mucha-Kruczyński, M.; Fal'ko, V. I.
2018-04-01
Using the continuum model, we investigate theoretically the contribution of the low-energy electronic excitations to the Raman spectrum of superconducting monolayer graphene. We consider superconducting phases characterised by an isotropic order parameter in a single valley and find a Raman peak at a shift set by the size of the superconducting gap. The height of this peak is proportional to the square root of the gap and the third power of the Fermi level, and we estimate its quantum efficiency as I ~ 10^-14.
Ahmed, Raees; Iqbal, Mobeen; Kashef, Sayed H; Almomatten, Mohammed I
2005-01-01
Whole lung lavage is still the most effective treatment for pulmonary alveolar proteinosis. We report a 21-year-old male diagnosed with pulmonary alveolar proteinosis by open lung biopsy and who underwent whole lung lavage with a modified technique. He showed significant improvement in clinical and functional parameters. The technique of intermittent double lung ventilation during lavage procedure keeps the oxygen saturation in acceptable limits in patients at risk for severe hypoxemia and allows the procedure to be completed in a single setting.
Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems
NASA Technical Reports Server (NTRS)
Hou, Gene J. W.; Kenny, Sean P.
1991-01-01
A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for approximate eigenvalue and eigenvector analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
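The starting point for eigenvalue sensitivity is the standard distinct-eigenvalue identity: for a symmetric matrix A(p) with a simple eigenpair (λ, v), v normalized, dλ/dp = vᵀ (dA/dp) v. The paper's contribution concerns the harder repeated-eigenvalue case; the sketch below only verifies this baseline identity, with an invented matrix, against a finite difference.

```python
import numpy as np

# First-order eigenvalue sensitivities of a symmetric matrix with
# distinct eigenvalues: dlam_i/dp = v_i.T @ (dA/dp) @ v_i.

def eig_sensitivity(A, dA):
    lam, V = np.linalg.eigh(A)                 # ascending eigenvalues
    return np.array([V[:, i] @ dA @ V[:, i] for i in range(len(lam))])

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                     # eigenvalues are distinct
dA = np.array([[1.0, 0.0],
               [0.0, 0.0]])                    # dA/dp for A(p) = A + p*e1*e1.T

analytic = eig_sensitivity(A, dA)
h = 1e-6                                       # finite-difference check
fd = (np.linalg.eigvalsh(A + h * dA) - np.linalg.eigvalsh(A)) / h
```

When eigenvalues coalesce, the eigenvectors are no longer unique and this formula breaks down, which is precisely the situation the single-parameter reparameterization in the paper is designed to handle.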
Cloud GPU-based simulations for SQUAREMR.
Kantasis, George; Xanthis, Christos G; Haris, Kostas; Heiberg, Einar; Aletras, Anthony H
2017-01-01
Quantitative Magnetic Resonance Imaging (MRI) is a research tool, used more and more in clinical practice, as it provides objective information with respect to the tissues being imaged. Pixel-wise T1 quantification (T1 mapping) of the myocardium is one such application with diagnostic significance. A number of mapping sequences have been developed for myocardial T1 mapping with a wide range in terms of measurement accuracy and precision. Furthermore, measurement results obtained with these pulse sequences are affected by errors introduced by the particular acquisition parameters used. SQUAREMR is a new method which has the potential to improve the accuracy of these mapping sequences through the use of massively parallel simulations on Graphical Processing Units (GPUs), taking into account different acquisition parameter sets. This method has been shown to be effective in myocardial T1 mapping; however, execution times may exceed 30 min, which is prohibitively long for clinical applications. The purpose of this study was to accelerate the construction of SQUAREMR's multi-parametric database to more clinically acceptable levels by developing a cloud-based cluster that distributes the computational load to several GPU-enabled nodes. This would accommodate high demands for computational resources without the need for major upfront equipment investment. Moreover, the parameter space explored by the simulations was optimized in order to reduce the computational load without compromising the T1 estimates relative to a non-optimized parameter-space approach. A cloud-based cluster with 16 nodes resulted in a speedup of up to 13.5 times compared to a single-node execution. Finally, the optimized parameter set approach allowed for an execution time of 28 s using the 16-node cluster, without compromising the T1 estimates by more than 10 ms.
The developed cloud-based cluster and optimization of the parameter set reduced the execution time of the simulations involved in constructing the SQUAREMR multi-parametric database thus bringing SQUAREMR's applicability within time frames that would be likely acceptable in the clinic. Copyright © 2016 Elsevier Inc. All rights reserved.
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background: To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximative rate laws in step two, as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influence between them, makes it hard to single out the best modeling approach for a given problem. Results: We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion: A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model.
A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
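Step (3) above, parameter calibration, can be made concrete with a minimal sketch: fit the Michaelis-Menten rate law v = Vmax·S/(Km + S) to rate measurements by minimizing squared error over a coarse grid. The paper benchmarks far more capable optimizers (Tribes, particle swarm, differential evolution); the grid search and the data below are illustrative assumptions only.

```python
# Toy parameter calibration: grid-search fit of the Michaelis-Menten
# rate law to (substrate, rate) measurements by least squares.

def mm_rate(s, vmax, km):
    return vmax * s / (km + s)

def fit_mm(substrate, rates, vmax_grid, km_grid):
    best = None
    for vmax in vmax_grid:
        for km in km_grid:
            err = sum((mm_rate(s, vmax, km) - v) ** 2
                      for s, v in zip(substrate, rates))
            if best is None or err < best[0]:
                best = (err, vmax, km)
    return best[1], best[2]

# Synthetic noise-free data generated with Vmax = 2.0, Km = 0.5
S = [0.1, 0.25, 0.5, 1.0, 2.0, 5.0]
V = [mm_rate(s, 2.0, 0.5) for s in S]
grid = [x / 10 for x in range(1, 31)]           # 0.1 .. 3.0
vmax_hat, km_hat = fit_mm(S, V, grid, grid)
```

With noisy data, many (Vmax, Km) pairs fit almost equally well, which is why the paper's analysis of the space of best parameters (clustering, variance, correlation) matters as much as the optimizer itself.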
Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R
2012-08-01
A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. 
For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng
2012-01-01
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied on a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply on compartmental models and multiple data sets. 
Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632
2012-01-01
Background: Single embryo transfer (SET) remains underutilized as a strategy to reduce multiple gestation risk in IVF, and its overall lower pregnancy rate underscores the need for improved techniques to select one embryo for fresh transfer. This study explored the use of comprehensive chromosomal screening by array CGH (aCGH) to provide this advantage and improve the pregnancy rate from SET. Methods: First-time IVF patients with a good prognosis (age <35, no prior miscarriage) and normal karyotype seeking elective SET were prospectively randomized into two groups: in Group A, embryos were selected on the basis of morphology and comprehensive chromosomal screening via aCGH (from d5 trophectoderm biopsy), while Group B embryos were assessed by morphology only. All patients had a single fresh blastocyst transferred on d6. Laboratory parameters and clinical pregnancy rates were compared between the two groups. Results: For patients in Group A (n = 55), 425 blastocysts were biopsied and analyzed via aCGH (7.7 blastocysts/patient). Aneuploidy was detected in 191/425 (44.9%) of blastocysts in this group. For patients in Group B (n = 48), 389 blastocysts were microscopically examined (8.1 blastocysts/patient). The clinical pregnancy rate was significantly higher in the morphology + aCGH group than in the morphology-only group (70.9% and 45.8%, respectively; p = 0.017); ongoing pregnancy rates for Groups A and B were 69.1% vs. 41.7%, respectively (p = 0.009). There were no twin pregnancies. Conclusion: Although aCGH followed by frozen embryo transfer has been used to screen at-risk embryos (e.g., known parental chromosomal translocation or history of recurrent pregnancy loss), this is the first description of aCGH fully integrated with a clinical IVF program to select single blastocysts for fresh SET in good-prognosis patients. The observed aneuploidy rate (44.9%) among biopsied blastocysts highlights the inherent imprecision of SET when conventional morphology is used alone.
Embryos randomized to the aCGH group implanted with greater efficiency, resulted in clinical pregnancy more often, and yielded a lower miscarriage rate than those selected without aCGH. Additional studies are needed to verify our pilot data and confirm a role for on-site, rapid aCGH for IVF patients contemplating fresh SET. PMID:22551456
Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia
2012-01-01
Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.
NASA Technical Reports Server (NTRS)
Florschuetz, L. W.; Su, C. C.
1985-01-01
Spanwise-averaged heat fluxes, resolved in the streamwise direction to one streamwise hole spacing, were measured for two-dimensional arrays of circular air jets impinging on a heat transfer surface parallel to the jet orifice plate. The jet flow, after impingement, was constrained to exit in a single direction along the channel formed by the jet orifice plate and the heat transfer surface. The crossflow originated from the jets following impingement, and an initial crossflow was present that approached the array through an upstream extension of the channel. The regional average heat fluxes are considered as a function of parameters associated with the corresponding individual spanwise rows within the array. A linear superposition model was employed to formulate appropriate governing parameters for the individual row domain. The effects of flow history upstream of an individual row domain are also considered. The results are formulated in terms of individual spanwise row parameters. A corresponding set of streamwise-resolved heat transfer characteristics, formulated in terms of flow and geometric parameters characterizing the overall arrays, is also described.
NASA Astrophysics Data System (ADS)
Sethuramalingam, Prabhu; Vinayagam, Babu Kupusamy
2016-07-01
A carbon-nanotube-mixed grinding wheel is used in the grinding process to analyze the surface characteristics of AISI D2 tool steel. To date, no work has been reported using a carbon-nanotube-based grinding wheel. Such a wheel has excellent thermal conductivity and good mechanical properties, which help improve the surface finish of the workpiece. In the present study, the multi-response optimization of grinding responses, namely surface roughness and metal removal rate, with single-wall carbon nanotube (CNT)-mixed cutting fluids is undertaken using an orthogonal array with grey relational analysis. Experiments are performed under the grinding conditions designated by the L9 orthogonal array. Based on the results of the grey relational analysis, a set of optimum grinding parameters is obtained. Using the analysis of variance approach, the significant machining parameters are found. An empirical model for the prediction of output parameters has been developed using regression analysis, and the results are compared for grinding with and without the CNT grinding wheel.
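The grey relational ranking described above can be sketched in a few lines: normalize each response (smaller-the-better for roughness, larger-the-better for removal rate), convert deviations from the ideal sequence into grey relational coefficients, and average them into a grade per trial. All nine trial values below are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical L9 results: surface roughness Ra (um, smaller is better)
# and metal removal rate MRR (mm^3/min, larger is better).
ra  = np.array([0.42, 0.38, 0.45, 0.35, 0.40, 0.33, 0.37, 0.44, 0.36])
mrr = np.array([120., 150., 110., 160., 140., 170., 155., 115., 145.])

def normalize(x, larger_is_better):
    # Map each response onto [0, 1], with 1 at the preferred extreme.
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_coeff(z, zeta=0.5):
    # Grey relational coefficient w.r.t. the ideal sequence (all ones);
    # zeta is the usual distinguishing coefficient.
    delta = np.abs(1.0 - z)
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

xi = np.column_stack([grey_coeff(normalize(ra, False)),
                      grey_coeff(normalize(mrr, True))])
grade = xi.mean(axis=1)             # grey relational grade per trial
best_trial = int(np.argmax(grade))  # trial closest to the multi-response optimum
```

The highest-grade trial indicates the most favourable factor-level combination; ANOVA on the grades would then apportion the contribution of each factor.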
JMOSFET: A MOSFET parameter extractor with geometry-dependent terms
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Moore, B. T.
1985-01-01
The parameters of metal-oxide-silicon field-effect transistors (MOSFETs) included on the Combined Release and Radiation Effects Satellite (CRRES) test chips must be extracted by a method that is simple and comprehensive enough for wafer acceptance, yet sufficiently accurate for use with integrated circuits. A set of MOSFET parameter extraction procedures that are directly linked to the MOSFET model equations and that facilitate the use of simple, direct curve-fitting techniques is developed. In addition, the major physical effects that influence MOSFET operation in the linear and saturation regions are included for devices fabricated in 1.2 to 3.0 micrometer CMOS technology. The fitting procedures were designed to establish single values for such parameters as threshold voltage and transconductance and to provide slope matching between the linear and saturation regions of the MOSFET output current-voltage curves. Four sizes of transistors that cover a rectangular region of the channel length-width plane are analyzed.
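One common way to obtain single values for threshold voltage and the transconductance parameter, in the spirit of the direct curve fitting described above, is a straight-line fit to linear-region transfer data. The sketch below uses synthetic data for an idealized device (values hypothetical, not CRRES measurements).

```python
import numpy as np

# Hypothetical linear-region transfer data, Vds = 0.1 V held small.
vgs = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])               # gate voltage (V)
ids = np.array([0.0, 22.5, 45.0, 67.5, 90.0, 112.5]) * 1e-6  # drain current (A)

# Simple model above threshold: Id = beta * (Vgs - Vt) * Vds, so Id vs
# Vgs is a straight line whose x-intercept is the threshold voltage Vt
# and whose slope yields the transconductance parameter beta.
slope, intercept = np.polyfit(vgs[ids > 0], ids[ids > 0], 1)
vt = -intercept / slope   # threshold voltage (V)
vds = 0.1
beta = slope / vds        # transconductance parameter (A/V^2)
```

Real extraction must also handle mobility degradation and subthreshold curvature, which is why the paper ties its procedures to the full model equations rather than a bare line fit.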
GEMAS: Unmixing magnetic properties of European agricultural soil
NASA Astrophysics Data System (ADS)
Fabian, Karl; Reimann, Clemens; Kuzina, Dilyara; Kosareva, Lina; Fattakhova, Leysan; Nurgaliev, Danis
2016-04-01
High-resolution magnetic measurements provide new methods for worldwide characterization and monitoring of agricultural soil, which is essential for quantifying geologic and human impact on the critical-zone environment and the consequences of climatic change, for planning economic and ecological land use, and for forensic applications. Hysteresis measurements of all Ap samples from the GEMAS survey yield a comprehensive overview of mineral magnetic properties in European agricultural soil on a continental scale. Low-frequency (460 Hz) and high-frequency (4600 Hz) magnetic susceptibility k were measured using a Bartington MS2B sensor. Hysteresis properties were determined by a J-coercivity spectrometer, built at the paleomagnetic laboratory of Kazan University, providing for each sample a modified hysteresis loop, a backfield curve, an acquisition curve of isothermal remanent magnetization (IRM), and a viscous IRM decay spectrum. Each measurement set is obtained in a single run from zero field up to 1.5 T and back to -1.5 T. The resulting data are used to create the first continental-scale maps of magnetic soil parameters. Because the GEMAS geochemical atlas contains a comprehensive set of geochemical data for the same soil samples, the new data can be used to map magnetic parameters in relation to chemical and geological parameters. The data set also provides a unique opportunity to analyze the magnetic mineral fraction of the soil samples by unmixing their IRM acquisition curves. The endmember coefficients are interpreted by linear inversion for other magnetic, physical and chemical properties, which results in an unprecedented, detailed view of the mineral magnetic composition of European agricultural soils.
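The unmixing step can be sketched as non-negative least squares against endmember coercivity distributions. A cumulative log-Gaussian is a standard parameterization for one mineral component; the endmember parameters and the "measured" curve below are synthetic, not GEMAS data.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

# Fields at which IRM acquisition is sampled (log10 of field in mT).
logb = np.linspace(0.5, 3.0, 60)

def endmember(mean, std):
    # Cumulative log-Gaussian coercivity distribution for one component.
    return norm.cdf(logb, loc=mean, scale=std)

# Hypothetical endmembers: soft (magnetite-like) and hard (hematite-like).
E = np.column_stack([endmember(1.5, 0.3), endmember(2.6, 0.25)])

# Synthetic "measured" IRM acquisition curve: 70% soft + 30% hard + noise.
rng = np.random.default_rng(0)
measured = E @ np.array([0.7, 0.3]) + rng.normal(0, 0.005, logb.size)

coeffs, residual = nnls(E, measured)  # non-negative endmember coefficients
fractions = coeffs / coeffs.sum()     # relative contribution of each endmember
```

The recovered fractions are the endmember coefficients that the abstract then inverts against the geochemical variables.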
Pinto, Anabela; Almeida, José Pedro; Pinto, Susana; Pereira, João; Oliveira, António Gouveia; de Carvalho, Mamede
2010-11-01
Non-invasive ventilation (NIV) is an efficient method for treating respiratory failure in patients with amyotrophic lateral sclerosis (ALS). However, it requires a process of adaptation that is not always achieved due to poor compliance. The role of telemonitoring of NIV is not yet established. To test the advantage of using modem communication in NIV of ALS patients. Prospective, single-blinded controlled trial. Population and methods: According to their residence, 40 consecutive ventilated ALS patients were assigned to one of two groups: a control group (G1, n=20) in which compliance and ventilator parameter settings were assessed during office visits; or an intervention group (G2, n=20) in which patients received a modem device connected to the ventilator. The number of office and emergency room visits and hospital admissions during the entire span of NIV use, and the number of parameter setting changes needed to achieve full compliance, were the primary outcome measurements. Demographic and clinical features were similar between the two groups at admission. No difference in compliance was found between the groups. The incidence of changes in parameter settings throughout the survival period with NIV was lower in G2 (p<0.0001), but it was increased during the initial period needed to achieve full compliance. The number of office or emergency room visits and in-hospital admissions was significantly lower in G2 (p<0.0001). Survival showed a trend favouring G2 (p=0.13). This study shows that telemonitoring reduces health care utilisation, with probable favourable implications for costs, survival and functional status.
Conical Fourier shell correlation applied to electron tomograms.
Diebolder, C A; Faas, F G A; Koster, A J; Koning, R I
2015-05-01
The resolution of electron tomograms is anisotropic due to geometrical constraints during data collection, such as the limited tilt range and single-axis tilt series acquisition. Acquisition of dual-axis tilt series can decrease these effects. However, in cryo-electron tomography, to limit the electron radiation damage that occurs during imaging, the total dose cannot be increased and must be fractionated over the two tilt series. Here we set out to determine whether it is beneficial to fractionate the electron dose over dual-axis cryo-electron tilt series or better to perform single-axis acquisition. To assess the quality of tomographic reconstructions in different directions, we introduce conical Fourier shell correlation (cFSCe/o). Employing cFSCe/o, we compared the resolution isotropy of single-axis and dual-axis (cryo-)electron tomograms using even/odd split data sets. We show that the resolution of dual-axis simulated and cryo-electron tomograms in the plane orthogonal to the electron beam becomes more isotropic compared to single-axis tomograms, and that high-resolution peaks along the tilt axis disappear. cFSCe/o also allowed us to compare different methods for the alignment of dual-axis tomograms. We show that different tomographic reconstruction programs produce different anisotropic resolution in dual-axis tomograms. We anticipate that cFSCe/o can also be useful for comparisons of acquisition and reconstruction parameters, and of different hardware implementations.
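The conical FSC idea can be sketched directly (this is a minimal illustration, not the authors' implementation): correlate the Fourier transforms of two half-set volumes shell by shell, but restrict each shell to voxels whose wave vector lies within a cone about a chosen direction.

```python
import numpy as np

def conical_fsc(vol_even, vol_odd, axis, half_angle_deg, n_shells=16):
    """FSC between two half-set volumes, restricted to Fourier voxels whose
    wave vector lies within a cone of half_angle_deg about `axis`."""
    F1, F2 = np.fft.fftn(vol_even), np.fft.fftn(vol_odd)
    n = vol_even.shape[0]
    freqs = np.fft.fftfreq(n)
    kz, ky, kx = np.meshgrid(freqs, freqs, freqs, indexing="ij")
    k = np.stack([kz, ky, kx])
    kmag = np.sqrt((k ** 2).sum(axis=0))
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    with np.errstate(invalid="ignore", divide="ignore"):
        cosang = np.abs((k * axis[:, None, None, None]).sum(axis=0)) / kmag
    in_cone = cosang >= np.cos(np.radians(half_angle_deg))
    in_cone[kmag == 0] = False

    fsc = []
    edges = np.linspace(0.0, 0.5, n_shells + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = in_cone & (kmag >= lo) & (kmag < hi)
        num = (F1[sel] * np.conj(F2[sel])).sum()
        den = np.sqrt((np.abs(F1[sel]) ** 2).sum() *
                      (np.abs(F2[sel]) ** 2).sum())
        fsc.append(np.real(num) / den if den > 0 else 0.0)
    return np.array(fsc)
```

Sweeping the cone axis over directions in the plane orthogonal to the beam (or along the tilt axis) yields the direction-resolved resolution curves the paper uses to compare single- and dual-axis schemes.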
Raymond, G M; Bassingthwaighte, J B
This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of a phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high-concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low-dose aspirin for cardiovascular health. Three models were tested: (1) a first-order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave reliable estimates of the Michaelis constant only for the medium-level data (K m = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave K m = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates.)
Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints and a lower estimated K m = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model improves confidence in the results. This modeling effort is also an example of reproducible science, available at http://www.physiome.org/jsim/models/webmodel/NSR/SalicylicAcidClearance.
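The core of the M-M pooling argument (model 2) can be sketched as a single nonlinear fit over pooled concentrations spanning all three dose ranges. The data below are synthetic, generated from assumed Vmax and Km values, not the clinical measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm_rate(c, vmax, km):
    # Michaelis-Menten elimination rate as a function of concentration.
    return vmax * c / (km + c)

# Hypothetical pooled concentrations (mg/L) spanning the three studies:
# low-dose, medium (research), and overdose ranges.
c = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 60.0, 120.0, 250.0, 400.0])
true_vmax, true_km = 50.0, 18.0
rng = np.random.default_rng(1)
v = mm_rate(c, true_vmax, true_km) * (1 + rng.normal(0, 0.03, c.size))

# Fitting all ranges together constrains Km far better than any single
# range would: low concentrations pin Vmax/Km, high ones pin Vmax.
popt, pcov = curve_fit(mm_rate, c, v, p0=[30.0, 10.0])
vmax_hat, km_hat = popt
km_se = np.sqrt(pcov[1, 1])  # standard error of the Km estimate
```

Restricting the fit to only the low- or only the high-concentration points would inflate `km_se` dramatically, which is the quantitative content of the "consilience" claim.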
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Sweta; Nelemans, Gijs, E-mail: s.shah@astro.ru.nl
The space-based gravitational wave (GW) detector, the evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that the EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements such as the inclination, radial velocities, distances, and/or individual masses with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance with factors of two to ∼40%. The single-lined spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity is the most useful in improving the estimate of the secondary mass, inclination, and/or distance.
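As one concrete link in the parameter chain discussed above, the chirp mass that GW observations constrain is a simple function of the two component masses. A minimal helper (masses in solar units, example values hypothetical):

```python
def chirp_mass(m1, m2):
    # Chirp mass M_c = (m1 * m2)**(3/5) / (m1 + m2)**(1/5),
    # in the same units as m1 and m2.
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Example double white-dwarf component masses (hypothetical), solar masses:
mc = chirp_mass(0.6, 0.3)
```

Because the GW amplitude scales with the chirp mass and inversely with distance, an EM distance plus an EM primary mass breaks the degeneracy and yields the secondary-mass constraints quoted in the abstract.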
The cloud radiation impact from optics simulation and airborne observation
NASA Astrophysics Data System (ADS)
Melnikova, Irina; Kuznetsov, Anatoly; Gatebe, Charles
2017-02-01
The analytical approach of inverse asymptotic formulas of radiative transfer theory is used for solving inverse problems of cloud optics. The method is advantageous because it imposes no strict constraints and is not tied to an assumed solution. Observations were made in extended stratus cloudiness above a homogeneous ocean surface. Data from NASA's Cloud Absorption Radiometer (CAR) during two airborne experiments (SAFARI-2000 and ARCTAS-2008) were analyzed. The analytical method of inverse asymptotic formulas was used to retrieve cloud optical parameters (optical thickness, single scattering albedo and the asymmetry parameter of the phase function) and ground albedo in all 8 spectral channels independently. The method is free from a priori restrictions, makes no links to assumed parameters, and has been applied to data sets of different origins and observation geometries. Results obtained from different airborne, satellite and ground radiative experiments appeared consistent and showed common features in the values of cloud parameters and their spectral dependence (Vasiluev, Melnikova, 2004; Gatebe et al., 2014). The optical parameters retrieved here are used for calculation of radiative divergence, reflected and transmitted irradiance, and heating rates in the cloudy atmosphere, which agree with previous observational data.
Lo, Kam W
2017-03-01
When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
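The frequency observable exploited above can be sketched with a simple kinematic model: for straight, level, constant-speed flight, the received frequency of a constant-frequency harmonic line is Doppler-shifted by the range rate at the emission time. This is a simplified illustration (parameterized by emission time, with hypothetical flight parameters), not the paper's estimation algorithm.

```python
import numpy as np

C_AIR = 343.0  # nominal speed of sound in air (m/s)

def received_freq(t_emit, f0, v, d_cpa, t_cpa):
    """Instantaneous frequency at a ground sensor from a constant-frequency
    harmonic line (f0) on a source in straight, level, constant-speed flight.

    The observed frequency is Doppler-shifted by the radial velocity at the
    emission time t_emit: f = f0 / (1 + rdot / c). The delayed arrival of
    each emission is the retardation effect discussed in the abstract.
    """
    x = v * (t_emit - t_cpa)           # along-track offset from closest point
    r = np.sqrt(d_cpa ** 2 + x ** 2)   # slant range at emission
    rdot = v * x / r                   # range rate (positive when receding)
    return f0 / (1.0 + rdot / C_AIR)
```

Fitting this IF curve jointly with the DOA track over-determines the flight parameters (speed, altitude, CPA time and distance, source frequency), which is why adding IF measurements tightens the Cramer-Rao bounds.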
Effect of Electron Beam Freeform Fabrication (EBF3) Processing Parameters on Composition of Ti-6-4
NASA Technical Reports Server (NTRS)
Lach, Cynthia L.; Taminger, Karen; Schuszler, A. Bud, II; Sankaran, Sankara; Ehlers, Helen; Nasserrafi, Rahbar; Woods, Bryan
2007-01-01
The Electron Beam Freeform Fabrication (EBF3) process developed at NASA Langley Research Center was evaluated using a design of experiments approach to determine the effect of processing parameters on the composition and geometry of Ti-6-4 deposits. The effects of three processing parameters: beam power, translation speed, and wire feed rate, were investigated by varying one while keeping the remaining parameters constant. A three-factorial, three-level, fully balanced mutually orthogonal array (L27) design of experiments approach was used to examine the effects of low, medium, and high settings for the processing parameters on the chemistry, geometry, and quality of the resulting deposits. Single bead high deposits were fabricated and evaluated for 27 experimental conditions. Loss of aluminum in Ti-6-4 was observed in EBF3 processing due to selective vaporization of the aluminum from the sustained molten pool in the vacuum environment; therefore, the chemistries of the deposits were measured and compared with the composition of the initial wire and base plate to determine if the loss of aluminum could be minimized through careful selection of processing parameters. The influence of processing parameters and coupling between these parameters on bulk composition, measured by Direct Current Plasma (DCP), local microchemistries determined by Wavelength Dispersive Spectrometry (WDS), and deposit geometry will also be discussed.
Simultaneous Retrieval of Multiple Aerosol Parameters Using a Multi-Angular Approach
NASA Technical Reports Server (NTRS)
Kuo, K. S.; Weger, R. C.; Welch, R. M.
1997-01-01
Atmospheric aerosol particles, both natural and anthropogenic, are important to the earth's radiative balance through their direct and indirect effects. They scatter the incoming solar radiation (direct effect) and modify the shortwave reflective properties of clouds by acting as cloud condensation nuclei (indirect effect). Although it has been suggested that aerosols exert a net cooling influence on climate, this effect has received less attention than the radiative forcing due to clouds and greenhouse gases. In order to understand the role that aerosols play in a changing climate, detailed and accurate observations are a prerequisite. The retrieval of aerosol optical properties by satellite remote sensing has proven to be a difficult task. The difficulty results mainly from the tenuous nature and variable composition of aerosols. To date, with single-angle satellite observations, we can only retrieve reliably against dark backgrounds, such as over oceans and dense vegetation. Even then, assumptions must be made concerning the chemical composition of aerosols. The best hope we have for aerosol retrievals over bright backgrounds are observations from multiple angles, such as those provided by the MISR and POLDER instruments. In this investigation we examine the feasibility of simultaneous retrieval of multiple aerosol optical parameters using reflectances from a typical set of twelve angles observed by the French POLDER instrument. The retrieved aerosol optical parameters consist of asymmetry factor, single scattering albedo, surface albedo, and optical thickness.
A phase I study to assess the single and multiple dose pharmacokinetics of THC/CBD oromucosal spray.
Stott, C G; White, L; Wright, S; Wilbraham, D; Guy, G W
2013-05-01
A Phase I study to assess the single- and multiple-dose pharmacokinetics (PKs), safety and tolerability of oromucosally administered Δ(9)-tetrahydrocannabinol (THC)/cannabidiol (CBD) spray, an endocannabinoid system modulator, in healthy male subjects. Subjects received either single doses of THC/CBD spray as multiple sprays [2 (5.4 mg THC and 5.0 mg CBD), 4 (10.8 mg THC and 10.0 mg CBD) or 8 (21.6 mg THC and 20.0 mg CBD) daily sprays] or multiple doses of THC/CBD spray (2, 4 or 8 sprays once daily) for nine consecutive days, following fasting for a minimum of 10 h overnight prior to each dosing. Plasma samples were analyzed by gas chromatography-mass spectrometry for CBD, THC, and its primary metabolite 11-hydroxy-THC, and various PK parameters were investigated. Δ(9)-Tetrahydrocannabinol and CBD were rapidly absorbed following single-dose administration. With increasing single and multiple doses of THC/CBD spray, the mean peak plasma concentration (Cmax) increased for all analytes. There was evidence of dose-proportionality in the single- but not the multiple-dosing data sets. The bioavailability of THC was greater than that of CBD at single and multiple doses, and there was no evidence of accumulation for any analyte with multiple dosing. Inter-subject variability ranged from moderate to high for all PK parameters in this study. The time to peak plasma concentration (Tmax) was longest for all analytes in the eight-spray group, but was similar in the two- and four-spray groups. THC/CBD spray was well tolerated in this study and no serious adverse events were reported. The mean Cmax values (<12 ng/mL) recorded in this study were well below those reported in patients who smoked/inhaled cannabis, which is reassuring since elevated Cmax values are linked to significant psychoactivity. There was also no evidence of accumulation on repeated dosing.
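One standard way to screen for dose proportionality (not necessarily the analysis this study used) is the power model: Cmax = a·dose^b, with b ≈ 1 indicating proportionality, estimated by linear regression in log-log space. The Cmax values below are illustrative only, not the study's data.

```python
import numpy as np

# Hypothetical mean Cmax values (ng/mL) for the 2-, 4- and 8-spray doses.
dose = np.array([2.0, 4.0, 8.0])
cmax = np.array([1.5, 3.1, 6.0])

# Power model: Cmax = a * dose**b. Dose proportionality corresponds to
# b ~= 1; estimate b as the slope of log(Cmax) against log(dose).
b, log_a = np.polyfit(np.log(dose), np.log(cmax), 1)
proportional = abs(b - 1.0) < 0.2  # crude screening criterion (assumed threshold)
```

In practice the conclusion also requires a confidence interval for b against a prespecified acceptance region, not just a point estimate.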
Reconstructing population histories from single nucleotide polymorphism data.
Sirén, Jukka; Marttinen, Pekka; Corander, Jukka
2011-01-01
Population genetics encompasses a strong theoretical and applied research tradition on the multiple demographic processes that shape genetic variation present within a species. When several distinct populations exist in the current generation, it is often natural to consider the pattern of their divergence from a single ancestral population in terms of a binary tree structure. Inference about such population histories based on molecular data has been an intensive research topic in recent years. The most common approach uses coalescent theory to model genealogies of individuals sampled from the current populations. Such methods are able to compare several different evolutionary scenarios and to estimate demographic parameters. However, their major limitation is the enormous computational complexity associated with the indirect modeling of the demographies, which limits their application to small data sets. Here, we propose a novel Bayesian method for inferring population histories from unlinked single nucleotide polymorphisms, which is applicable also to data sets harboring large numbers of individuals from distinct populations. We use an approximation to the neutral Wright-Fisher diffusion to model random fluctuations in allele frequencies. The population histories are modeled as binary rooted trees that represent the historical order of divergence of the different populations. A combination of analytical, numerical, and Monte Carlo integration techniques is utilized for the inferences. A particularly important feature of our approach is that it provides intuitive measures of statistical uncertainty related to the computed estimates, which may be entirely lacking for the alternative methods in this context. The potential of our approach is illustrated by analyses of both simulated and real data sets.
Quantification of dental prostheses on cone‐beam CT images by the Taguchi method
Kuo, Rong‐Fu; Fang, Kwang‐Ming; Wong, TY
2016-01-01
The gray-value accuracy of dental cone‐beam computed tomography (CBCT) is affected by dental metal prostheses. The distortion of dental CBCT gray values could lead to inaccuracies in orthodontic and implant treatment. The aim of this study was to quantify the effect of scanning parameters and dental metal prostheses on the accuracy of dental CBCT gray values using the Taguchi method. Eight dental model casts of an upper jaw including prostheses, and a ninth prosthesis‐free dental model cast, were scanned by two dental CBCT devices. The mean gray values of selected circular regions of interest (ROIs) were measured on dental CBCT images of the eight dental model casts and compared with those measured on CBCT images of the prosthesis‐free dental model cast. For each image set, four consecutive slices of gingiva were selected. Seven factors (CBCT device, occlusal plane canting, implant connection, prosthesis position, coping material, coping thickness, and type of dental restoration) were used to evaluate scanning parameter and dental prosthesis effects. The statistical methods of signal-to-noise ratio (S/N) and analysis of variance (ANOVA) with 95% confidence were applied to quantify the effects of scanning parameters and dental prostheses on dental CBCT gray-value accuracy. For ROIs surrounding dental prostheses, the accuracy of CBCT gray values was affected primarily by implant connection (42%), followed by type of restoration (29%), prosthesis position (19%), coping material (4%), and coping thickness (4%). For a single-crown prosthesis (without support of implants) placed in dental model casts, gray-value differences for ROIs 1–9 were below 12% and gray-value differences for ROIs 13–18, away from prostheses, were below 10%.
We found gray-value differences of between 7% and 8% for regions next to a single implant‐supported titanium prosthesis, and between 46% and 59% for regions between double implant‐supported nickel‐chromium alloy (Ni‐Cr) prostheses. Quantification of the effect of prostheses and scanning parameters on dental CBCT gray values was assessed. PACS numbers: 87.59.bd, 87.57.Q. PMID: 26894354
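The Taguchi S/N step referred to above is a one-line computation once the responses are framed as "smaller-the-better" (here, the percent gray-value deviation from the prosthesis-free scan). The deviation values below are hypothetical.

```python
import numpy as np

def sn_smaller_is_better(y):
    # Taguchi signal-to-noise ratio for smaller-the-better responses:
    # S/N = -10 * log10(mean(y^2)). Larger S/N means less deviation.
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical gray-value deviations (%) for one factor-level combination,
# one value per selected gingiva slice.
deviations = [7.2, 7.9, 8.1, 7.5]
sn = sn_smaller_is_better(deviations)
```

Comparing mean S/N across the levels of each of the seven factors, and then running ANOVA on those effects, yields the percentage contributions (42%, 29%, ...) reported in the abstract.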
Mech, Agnieszka; Gajek, Zbigniew; Karbowiak, Mirosław; Rudowicz, Czesław
2008-09-24
Optical absorption measurements of Nd(3+) ions in single crystals of [Nd(hfa)(4)(H(2)O)](N(C(2)H(5))(4)) (hfa = hexafluoroacetyloacetonate), denoted Nd(hfa) for short, have been carried out at 4.2 and 298 K. This compound crystallizes in the monoclinic system (space group P 2(1)/n). Each Nd ion is coordinated to eight oxygen atoms that originate from the hexafluoroacetylacetonate ligands and one oxygen atom from the water molecule. A total of 85 experimental crystal-field (CF) energy levels arising from the Nd(3+) (4f(3)) electronic configuration were identified in the optical spectra and assigned. A three-step CF analysis was carried out in terms of a parametric Hamiltonian for the actual C(1) symmetry at the Nd(3+) ion sites. In the first step, a total of 27 CF parameters (CFPs) in the Wybourne notation B(kq), admissible by group theory, were determined in a preliminary fitting constrained by the angular overlap model predictions. The resulting CFP set was reduced to 24 specific independent CFPs using appropriate standardization transformations. Optimizations of the second-rank CFPs and extended scanning of the parameter space were employed in the second step to improve reliability of the CFP sets, which is rather a difficult task in the case of no site symmetry. Finally, seven free-ion parameters and 24 CFPs were freely varied, yielding an rms deviation between the calculated energy levels and the 85 observed ones of 11.1 cm(-1). Our approach also allows prediction of the energy levels of Nd(3+) ions that are hidden in the spectral range overlapping with strong ligand absorption, which is essential for understanding the inter-ionic energy transfer. The orientation of the axis system associated with the fitted CF parameters w.r.t. the crystallographic axes is established. The procedure adopted in our calculations may be considered as a general framework for analysis of CF levels of lanthanide ions at low (triclinic) symmetry sites.
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with the number of concepts learned, and latent set-shifting ability was negatively correlated with the number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
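The item response theory building block embedded in the Markov learning models can be illustrated with the two-parameter logistic (2PL) item response function. The item parameters below are hypothetical, chosen to sit in the below-average ability range where the abstract reports the PCET is most precise.

```python
import math

def p_correct(theta, a, b):
    # Two-parameter logistic (2PL) IRT model: probability of a correct
    # response given latent ability `theta`, item discrimination `a`,
    # and item difficulty `b`.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: moderate discrimination (a = 1.2), difficulty 1.5 SD
# below the mean ability (b = -1.5).
p_impaired = p_correct(-1.5, 1.2, -1.5)  # examinee ability equal to difficulty
p_average = p_correct(0.0, 1.2, -1.5)    # examinee at the mean ability
```

Measurement precision peaks where this curve is steepest (theta near b), which is why items with difficulties in the impaired range give the precision pattern the study describes.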
Granatum: a graphical single-cell RNA-Seq analysis pipeline for genomics scientists.
Zhu, Xun; Wolfgruber, Thomas K; Tasato, Austin; Arisdakessian, Cédric; Garmire, David G; Garmire, Lana X
2017-12-05
Single-cell RNA sequencing (scRNA-Seq) is an increasingly popular platform to study heterogeneity at the single-cell level. Computational methods to process scRNA-Seq data are not very accessible to bench scientists as they require a significant amount of bioinformatic skills. We have developed Granatum, a web-based scRNA-Seq analysis pipeline to make analysis more broadly accessible to researchers. Without a single line of programming code, users can click through the pipeline, setting parameters and visualizing results via the interactive graphical interface. Granatum conveniently walks users through various steps of scRNA-Seq analysis. It has a comprehensive list of modules, including plate merging and batch-effect removal, outlier-sample removal, gene-expression normalization, imputation, gene filtering, cell clustering, differential gene expression analysis, pathway/ontology enrichment analysis, protein network interaction visualization, and pseudo-time cell series construction. Granatum enables broad adoption of scRNA-Seq technology by empowering bench scientists with an easy-to-use graphical interface for scRNA-Seq data analysis. The package is freely available for research use at http://garmiregroup.org/granatum/app.
Fidelity under isospectral perturbations: a random matrix study
NASA Astrophysics Data System (ADS)
Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.
2013-07-01
The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group can be generated from Hermitian matrices we can take the ones generated by the Gaussian unitary ensemble with a small parameter as small perturbations. Similarly, the transformations generated by Hermitian antisymmetric matrices from orthogonal matrices form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.
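The isospectral-perturbation construction described above can be sketched numerically: a GUE Hamiltonian is conjugated by a unitary generated from a second GUE matrix with a small parameter, and the fidelity amplitude is formed from the two propagators. The matrix size, seed and perturbation strength are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def gue(n, rng):
    """Draw an n x n GUE matrix (Hermitian, Gaussian entries)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2.0

def fidelity_amplitude(h, hp, t):
    """f(t) = Tr[exp(i Hp t) exp(-i H t)] / N for Hamiltonians H, Hp."""
    n = h.shape[0]
    return np.trace(expm(1j * hp * t) @ expm(-1j * h * t)) / n

rng = np.random.default_rng(0)
n, eps = 16, 0.05
h = gue(n, rng)
v = gue(n, rng)            # perturbation generator, also drawn from the GUE
u = expm(1j * eps * v)     # unitary close to the identity
hp = u @ h @ u.conj().T    # isospectral partner of h
f0 = fidelity_amplitude(h, hp, 0.0)
```

By construction `h` and `hp` share a spectrum exactly, so any fidelity decay comes purely from the rotation of the eigenbasis.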
Star clusters: age, metallicity and extinction from integrated spectra
NASA Astrophysics Data System (ADS)
González Delgado, Rosa M.; Cid Fernandes, Roberto
2010-01-01
Integrated optical spectra of star clusters in the Magellanic Clouds and a few Galactic globular clusters are fitted using high-resolution spectral models for single stellar populations. The goal is to estimate the age, metallicity and extinction of the clusters, and evaluate the degeneracies among these parameters. Several sets of evolutionary models that were computed with recent high-spectral-resolution stellar libraries (MILES, GRANADA, STELIB), are used as inputs to the starlight code to perform the fits. The comparison of the results derived from this method and previous estimates available in the literature allow us to evaluate the pros and cons of each set of models to determine star cluster properties. In addition, we quantify the uncertainties associated with the age, metallicity and extinction determinations resulting from variance in the ingredients for the analysis.
Ab initio atomic recombination reaction energetics on model heat shield surfaces
NASA Technical Reports Server (NTRS)
Senese, Fredrick; Ake, Robert
1992-01-01
Ab initio quantum mechanical calculations on small hydration complexes involving the nitrate anion are reported. The self-consistent field method with accurate basis sets has been applied to compute completely optimized equilibrium geometries, vibrational frequencies, thermochemical parameters, and stable site labilities of complexes involving 1, 2, and 3 waters. The most stable geometries in the first hydration shell involve in-plane waters bridging pairs of nitrate oxygens with two equal and bent hydrogen bonds. A second extremely labile local minimum involves out-of-plane waters with a single hydrogen bond and lies about 2 kcal/mol higher. The potential in the region of the second minimum is extremely flat and qualitatively sensitive to changes in the basis set; it does not correspond to a true equilibrium structure.
Hassani, S. A.; Oemisch, M.; Balcarras, M.; Westendorff, S.; Ardid, S.; van der Meer, M. A.; Tiesinga, P.; Womelsdorf, T.
2017-01-01
Noradrenaline is believed to support cognitive flexibility through the alpha 2A noradrenergic receptor (a2A-NAR) acting in prefrontal cortex. Enhanced flexibility has been inferred from improved working memory with the a2A-NA agonist Guanfacine. But it has been unclear whether Guanfacine improves specific attention and learning mechanisms beyond working memory, and whether the drug effects can be formalized computationally to allow single-subject predictions. We tested and confirmed these suggestions in a case study with a healthy nonhuman primate performing a feature-based reversal learning task, evaluating performance using Bayesian and reinforcement learning models. In an initial dose-testing phase we found a Guanfacine dose that increased performance accuracy, decreased distractibility and improved learning. In a second experimental phase using only that dose, we examined the faster feature-based reversal learning under Guanfacine with single-subject computational modeling. Parameter estimation suggested that improved learning is not accounted for by varying a single reinforcement learning mechanism, but by changing the set of parameter values to higher learning rates and stronger suppression of non-chosen over chosen feature information. These findings provide an important starting point for developing nonhuman primate models to discern the synaptic mechanisms of attention and learning functions within the context of a computational neuropsychiatry framework. PMID:28091572
AVR Microcontroller-based automated technique for analysis of DC motors
NASA Astrophysics Data System (ADS)
Kaur, P.; Chatterji, S.
2014-01-01
This paper provides essential information on the development of a 'dc motor test and analysis control card' using the AVR series ATMega32 microcontroller. This card can be interfaced to a PC and calculates parameters such as motor losses and efficiency, and plots characteristics for dc motors. Presently, different tests and methods are available to evaluate motor parameters, but this paper discusses a single, universal, user-friendly automated set-up. It has been accomplished by designing data acquisition and SCR bridge firing hardware based on the AVR ATMega32 microcontroller. This hardware has the capability to drive phase-controlled rectifiers and acquire real-time values of current, voltage, temperature and speed of the motor. The various analyses feasible with the designed hardware are of immense importance for dc motor manufacturers and quality-sensitive users. Through this paper, the authors aim to provide details of this AVR-based hardware, which can be used for dc motor parameter analysis and also for motor control applications.
Analytical design of modified Smith predictor for unstable second-order processes with time delay
NASA Astrophysics Data System (ADS)
Ajmeri, Moina; Ali, Ahmad
2017-06-01
In this paper, a modified Smith predictor using three controllers, namely, stabilising (Gc), set-point tracking (Gc1), and load disturbance rejection (Gc2) controllers, is proposed for second-order unstable processes with time delay. Controllers of the proposed structure are tuned using the direct synthesis approach, as this method enables the user to achieve a trade-off between performance and robustness by adjusting a single design parameter. Furthermore, suitable values of the tuning parameters are recommended after studying their effect on the closed-loop performance and robustness. This is the main advantage of the proposed work over other recently published manuscripts, where authors provide only suitable ranges for the tuning parameters rather than specific recommended values. Simulation studies show that the proposed method results in satisfactory performance and improved robustness as compared to recently reported control schemes. The proposed scheme is also observed to work well in a noisy environment.
Sah, Parimal; Das, Bijoy Krishna
2018-03-20
It has been shown that a fundamental mode adiabatically launched into a multimode SOI waveguide with a submicron grating offers well-defined flat-top bandpass filter characteristics in transmission. The transmitted spectral bandwidth is controlled by adjusting both waveguide and grating design parameters. The bandwidth is further narrowed by cascading two gratings with detuned parameters. A semi-analytical model is used to analyze the filter characteristics (1500 nm≤λ≤1650 nm) of the device operating in transverse-electric polarization. The proposed devices were fabricated with an optimized set of design parameters in an SOI substrate with a device layer thickness of 250 nm. The pass bandwidth of waveguide devices integrated with single-stage gratings is measured to be ∼24 nm, whereas the device with two cascaded gratings with slightly detuned periods (ΔΛ=2 nm) exhibits a pass bandwidth down to ∼10 nm.
NASA Astrophysics Data System (ADS)
Huang, Xian Bin; Ren, Xiao Dong; Dan, Jia Kun; Wang, Kun Lun; Xu, Qiang; Zhou, Shao Tong; Zhang, Si Qun; Cai, Hong Chun; Li, Jing; Wei, Bing; Ji, Ce; Feng, Shu Ping; Wang, Meng; Xie, Wei Ping; Deng, Jian Jun
2017-09-01
The preliminary experimental results of Z-pinch dynamic hohlraums conducted on the Primary Test Stand (PTS) facility are presented herein. Six different types of dynamic hohlraums were used in order to study the influence of load parameters on radiation characteristics and implosion dynamics, including dynamic hohlraums driven by single and nested arrays with different array parameters and different foams. The PTS facility can deliver to dynamic hohlraum loads a peak current of 6-8 MA with a 10%-90% rise time of 60-70 ns. A set of diagnostics monitors the implosion dynamics of the plasmas, the evolution of shock waves in the foam and the axial/radial X-ray radiation, giving the key parameters characterizing the features of dynamic hohlraums, such as the trajectory and related velocity of shock waves, radiation temperature, and so on. The experimental results presented here put our future study of Z-pinch dynamic hohlraums on the PTS facility on a firm basis.
NASA Astrophysics Data System (ADS)
Seelos, F. P.; Arvidson, R. E.; Guinness, E. A.; Wolff, M. J.
2004-12-01
The Mars Exploration Rover (MER) Panoramic Camera (Pancam) observation strategy included the acquisition of multispectral data sets specifically designed to support the photometric analysis of Martian surface materials (J. R. Johnson, this conference). We report on the numerical inversion of observed Pancam radiance-on-sensor data to determine the best-fit surface bidirectional reflectance parameters as defined by Hapke theory. The model bidirectional reflectance parameters for the Martian surface provide constraints on physical and material properties and allow for the direct comparison of Pancam and orbital data sets. The parameter optimization procedure consists of a spatial multigridding strategy driving a Levenberg-Marquardt nonlinear least squares optimization engine. The forward radiance models and partial derivatives (via finite-difference approximation) are calculated using an implementation of the DIScrete Ordinate Radiative Transfer (DISORT) algorithm with the four-parameter Hapke bidirectional reflectance function and the two-parameter Henyey-Greenstein phase function defining the lower boundary. The DISORT implementation includes a plane-parallel model of the Martian atmosphere derived from a combination of Thermal Emission Spectrometer (TES), Pancam, and Mini-TES atmospheric data acquired near in time to the surface observations. This model accounts for bidirectional illumination from the attenuated solar beam and hemispherical-directional skylight illumination. The initial investigation was limited to treating the materials surrounding the rover as a single surface type, consistent with the spatial resolution of orbital observations. For more detailed analyses the observation geometry can be calculated from the correlation of Pancam stereo pairs (J. M. Soderblom et al., this conference). 
With improved geometric control, the radiance inversion can be applied to constituent surface material classes such as ripple and dune forms in addition to the soils on the Meridiani plain. Under the assumption of a Henyey-Greenstein phase function, initial results for the Opportunity site suggest a single scattering albedo on the order of 0.25 and a Henyey-Greenstein forward fraction approaching unity at an effective wavelength of 753 nm. As an extension of the photometric modeling, the radiance inversion also provides a means of calculating surface reflectance independent of the radiometric calibration target. This method for determining observed reflectance will provide an additional constraint on the dust deposition model for the calibration target.
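The single-term Henyey-Greenstein phase function quoted in the results can be written down directly; the brute-force grid fit below is a simplified stand-in (synthetic data, no atmosphere, no Hapke model) for the Levenberg-Marquardt inversion the authors actually use.

```python
import numpy as np

def henyey_greenstein(cos_g, g):
    """Single-term Henyey-Greenstein phase function (unnormalized) as a
    function of the cosine of the phase angle and asymmetry parameter g."""
    return (1 - g**2) / (1 + g**2 - 2 * g * cos_g) ** 1.5

# Hypothetical illustration: recover g from synthetic "observations"
# by brute-force least squares over a parameter grid.
cos_theta = np.linspace(-1, 1, 181)
obs = henyey_greenstein(cos_theta, 0.6)           # truth: g = 0.6
grid = np.linspace(-0.95, 0.95, 381)
sse = [np.sum((henyey_greenstein(cos_theta, g) - obs) ** 2) for g in grid]
g_best = grid[int(np.argmin(sse))]
```

Positive `g` corresponds to forward scattering, which is the regime the Opportunity-site results point to.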
Corona-Strauss, Farah I; Delb, Wolfgang; Bloching, Marc; Strauss, Daniel J
2008-01-01
We have recently shown that single sweeps of click-evoked auditory brainstem responses (ABRs) can efficiently be processed by a hybrid novelty detection system. This approach allowed for the objective detection of hearing thresholds in a fraction of the time of conventional schemes, making it appropriate for the efficient implementation of newborn hearing screening procedures. The objective of this study is to evaluate whether this approach might be further improved by different stimulation paradigms and electrode settings. In particular, we evaluate chirp stimulations, which compensate for basilar-membrane dispersion, and active electrodes, which are less sensitive to movements. This is the first study directed at single-sweep processing of chirp-evoked ABRs. By concentrating on transparent features and a minimum number of adjustable parameters, we present an objective comparison of click vs. chirp stimulations and active vs. passive electrodes in ultrafast ABR detection. We show that chirp-evoked brainstem responses and active electrodes might improve the single-sweep analysis of ABRs. Consequently, we conclude that single-sweep processing of ABRs for the objective determination of hearing thresholds can be further improved by the use of optimized chirp stimulations and active electrodes.
Guo, Xuezhen; Claassen, G D H; Oude Lansink, A G J M; Saatkamp, H W
2014-06-01
Economic analysis of hazard surveillance in livestock production chains is essential for surveillance organizations (such as food safety authorities) when making scientifically based decisions on optimization of resource allocation. To enable this, quantitative decision support tools are required at two levels of analysis: (1) single-hazard surveillance system and (2) surveillance portfolio. This paper addresses the first level by presenting a conceptual approach for the economic analysis of single-hazard surveillance systems. The concept includes objective and subjective aspects of single-hazard surveillance system analysis: (1) a simulation part to derive an efficient set of surveillance setups based on the technical surveillance performance parameters (TSPPs) and the corresponding surveillance costs, i.e., objective analysis, and (2) a multi-criteria decision making model to evaluate the impacts of the hazard surveillance, i.e., subjective analysis. The conceptual approach was checked for (1) conceptual validity and (2) data validity. Issues regarding the practical use of the approach, particularly the data requirement, were discussed. We concluded that the conceptual approach is scientifically credible for economic analysis of single-hazard surveillance systems and that the practicability of the approach depends on data availability. Copyright © 2014 Elsevier B.V. All rights reserved.
Off-line tracking of series parameters in distribution systems using AMI data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Sun, Yannan; Schneider, Kevin
2016-05-01
Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
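A toy version of the snapshot-aggregation idea, combining many noisy measurement snapshots into one least-squares estimate of a series resistance so that a 5% change stands out above 2% measurement error, might look like the following; the "network" is reduced to a single branch and all numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
r_nominal = 0.50    # assumed nominal series resistance (ohms), illustrative
r_actual = 0.525    # 5% increase to be detected
currents = rng.uniform(20.0, 100.0, size=200)    # 200 measurement snapshots
noise = rng.normal(0.0, 0.02, size=200)          # 2% relative measurement error
v_drop = r_actual * currents * (1.0 + noise)     # measured series voltage drops

# Least-squares estimate of R pooling all snapshots into a single estimate,
# mimicking the paper's aggregation of residuals across snapshots.
r_hat = np.sum(currents * v_drop) / np.sum(currents ** 2)
change_pct = 100.0 * (r_hat - r_nominal) / r_nominal
```

Pooling shrinks the estimator's noise by roughly the square root of the snapshot count, which is why a 5% change survives 2% per-measurement error.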
Regression with Small Data Sets: A Case Study using Code Surrogates in Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, C.; Fan, Y. J.
There has been an increasing interest in recent years in the mining of massive data sets whose sizes are measured in terabytes. While it is easy to collect such large data sets in some application domains, there are others where collecting even a single data point can be very expensive, so the resulting data sets have only tens or hundreds of samples. For example, when complex computer simulations are used to understand a scientific phenomenon, we want to run the simulation for many different values of the input parameters and analyze the resulting output. The data set relating the simulation inputs and outputs is typically quite small, especially when each run of the simulation is expensive. However, regression techniques can still be used on such data sets to build an inexpensive "surrogate" that could provide an approximate output for a given set of inputs. A good surrogate can be very useful in sensitivity analysis, uncertainty analysis, and in designing experiments. In this paper, we compare different regression techniques to determine how well they predict melt-pool characteristics in the problem domain of additive manufacturing. Our analysis indicates that some of the commonly used regression methods do perform quite well even on small data sets.
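A minimal example of the surrogate idea: fit a cheap polynomial regression to a handful of mock "simulation" samples and check its predictive error against the underlying response. The data-generating function and sizes are stand-ins; the paper's melt-pool codes and regression-method comparison are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
# Small "simulation" data set: 20 (input, output) samples from an
# expensive code, mocked here by a smooth response plus noise.
x = np.sort(rng.uniform(0.0, 1.0, 20))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05, 20)

# Cheap surrogate: degree-5 polynomial fitted by least squares
surrogate = np.poly1d(np.polyfit(x, y, deg=5))

# Predictive error against the true response on held-out inputs
x_test = np.linspace(0.05, 0.95, 50)
rmse = np.sqrt(np.mean((surrogate(x_test) - np.sin(2 * np.pi * x_test)) ** 2))
```

Once fitted, the surrogate answers "what-if" queries at negligible cost, which is what makes it useful for sensitivity and uncertainty analysis.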
Optimal radiotherapy dose schedules under parametric uncertainty
NASA Astrophysics Data System (ADS)
Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin
2016-01-01
We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
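The linear-quadratic bookkeeping behind such schedule comparisons can be sketched with the standard biologically effective dose (BED) formula; the alpha/beta ratios and OAR sparing factor below are generic textbook-style values, not the paper's parameters. The two schedules shown are iso-effective for the tumor, yet the hypofractionated one deposits more BED in the OAR, illustrating why extremal solutions such as a few very large fractions must be avoided.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    d = dose_per_fraction
    return n_fractions * d * (1.0 + d / alpha_beta)

# Illustrative values: tumor alpha/beta = 10 Gy, OAR alpha/beta = 3 Gy,
# and the OAR receives a sparing factor of 0.5 of the prescribed dose.
def oar_bed(n, d, sparing=0.5, alpha_beta=3.0):
    return bed(n, sparing * d, alpha_beta)

conventional = (30, 2.0)   # 30 fractions of 2 Gy
hypofx = (5, 8.0)          # 5 fractions of 8 Gy
```

Both schedules give a tumor BED of 72 Gy (alpha/beta = 10), but the hypofractionated OAR BED exceeds the conventional one.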
NASA Astrophysics Data System (ADS)
Thiboult, A.; Anctil, F.
2015-10-01
Forecast reliability and accuracy is a prerequisite for successful hydrological applications. This aim may be attained by using data assimilation techniques such as the popular Ensemble Kalman filter (EnKF). Despite its recognized capacity to enhance forecasting by creating a new set of initial conditions, implementation tests have mostly been carried out with a single model and few catchments, leading to case-specific conclusions. This paper performs extensive testing to assess ensemble bias and reliability on 20 conceptual lumped models and 38 catchments in the Province of Québec with perfect meteorological forecast forcing. The study confirms that the EnKF is a powerful tool for short-range forecasting but also that it requires a more subtle setting than is frequently recommended. The success of the updating procedure depends to a great extent on the specification of the hyper-parameters. In the implementation of the EnKF, the identification of the hyper-parameters is very unintuitive if the model error is not explicitly accounted for, and best estimates of forcing and observation error lead to overconfident forecasts. It is shown that performance is also related to the choice of updated state variables, and that not all state variables should systematically be updated. Additionally, the improvement over the open-loop scheme depends on the watershed and hydrological model structure, as some models exhibit poor compatibility with EnKF updating. Thus, it is not possible to prescribe a single ideal implementation in detail; conclusions drawn from a unique event, catchment, or model are likely to be misleading, since transferring hyper-parameters from one case to another may be hazardous. Finally, achieving reliability and low bias jointly is a daunting challenge, as the optimization of one score is done at the cost of the other.
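The EnKF analysis step discussed above reduces to a few lines for a linear observation operator. This stochastic (perturbed-observation) variant on a toy two-variable state is only a sketch of the filter itself, not of the hydrological configuration or hyper-parameter choices tested in the paper.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_std, h, rng):
    """Stochastic EnKF analysis step for a state ensemble of shape
    (n_members, n_state) and a single scalar observation.

    h: linear observation operator mapping state -> observed scalar.
    Each member assimilates an independently perturbed observation."""
    n_members = ensemble.shape[0]
    hx = ensemble @ h                        # predicted observations
    x_mean, hx_mean = ensemble.mean(0), hx.mean()
    p_xy = ((ensemble - x_mean).T @ (hx - hx_mean)) / (n_members - 1)
    p_yy = np.var(hx, ddof=1) + obs_std ** 2
    gain = p_xy / p_yy                       # Kalman gain, shape (n_state,)
    obs_perturbed = obs + rng.normal(0.0, obs_std, n_members)
    return ensemble + np.outer(obs_perturbed - hx, gain)

rng = np.random.default_rng(3)
ens = rng.normal([1.0, 0.0], [0.5, 0.5], size=(500, 2))   # prior ensemble
h = np.array([1.0, 0.0])                                  # observe state 0 only
analysis = enkf_update(ens, obs=2.0, obs_std=0.1, h=h, rng=rng)
```

The update pulls the observed component toward the observation and shrinks its spread, which is exactly the "new set of initial conditions" mechanism the abstract refers to.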
NASA Astrophysics Data System (ADS)
Karuppasamy, Ayyanar; Udhaya kumar, Chandran; Karthikeyan, Subramanian; Velayutham Pillai, Muthiah Pillai; Ramalingan, Chennan
2017-11-01
A novel conjugated octylcarbazole-ornamented 3-phenothiazinal, 10-(9-octyl-9H-carbazol-3-yl)-10H-phenothiazine-3-carbaldehyde (OCPTC), was synthesized and fully characterized by 1H-NMR, 13C-NMR, elemental and single-crystal XRD analyses. The optimized geometrical structure, vibrational frequencies and NMR have been computed with the M06-2X method using the 6-31+G(d,p) basis set. Total electronic energies and HOMO-LUMO energy gaps in the gas phase are discussed. The geometrical parameters of the title compound obtained from single-crystal XRD studies are in accord with the calculated (DFT) values. The experimental and theoretical FT-IR and NMR results of the title molecule have been investigated. The experimentally observed vibrational frequencies have been compared with the calculated ones, and they are in good agreement with each other. Single-crystal X-ray structural analysis of OCPTC confirms the "butterfly" conformation of the phenothiazine ring, with a nearly perpendicular orientation of the carbazole structural motif to the phenothiazine moiety.
Conformation of repaglinide: A solvent dependent structure
NASA Astrophysics Data System (ADS)
Chashmniam, Saeed; Tafazzoli, Mohsen
2017-09-01
An experimental and theoretical conformational study of repaglinide in chloroform and dimethyl sulfoxide was carried out. By applying potential energy scanning (PES) at the B3LYP/6-311++g** and B3LYP-D3/6-311++g** levels of theory to the rotatable single bonds, four stable conformers (R1-R4) were identified. Spin-spin coupling constant values were obtained from a set of 2D NMR spectra (H-H COSY, H-C HMQC and H-C HMBC) and compared to the calculated values. Interestingly, from 1H NMR and 2D-NOESY NMR, it has been found that the repaglinide structure is folded in CDCl3, which causes all single bonds to rotate at an extremely slow rate. On the other hand, in DMSO-d6, with strong solvent-solute intermolecular interactions, the single bonds rotate freely. Also, the energy barrier and thermodynamic parameters for chair-to-chair interconversion were measured (13.04 kcal mol-1) in CDCl3 solvent by using temperature-dependent dynamic NMR.
A Single Mode Study of a Quasi-Geostrophic Convection-Driven Dynamo Model
NASA Astrophysics Data System (ADS)
Plumley, M.; Calkins, M. A.; Julien, K. A.; Tobias, S.
2017-12-01
Planetary magnetic fields are thought to be the product of hydromagnetic dynamo action. For Earth, this process occurs within the convecting, turbulent and rapidly rotating outer core, where the dynamics are characterized by low Rossby, low magnetic Prandtl and high Rayleigh numbers. Progress in studying dynamos has been limited by current computing capabilities and the difficulties in replicating the extreme values that define this setting. Asymptotic models that embrace these extreme parameter values and enforce the dominant balance of geostrophy provide an option for the study of convective flows with actual relevance to geophysics. The quasi-geostrophic dynamo model (QGDM) is a multiscale, fully-nonlinear Cartesian dynamo model that is valid in the asymptotic limit of low Rossby number. We investigate the QGDM using a simplified class of solutions that consist of a single horizontal wavenumber which enforces a horizontal structure on the solutions. This single mode study is used to explore multiscale time stepping techniques and analyze the influence of the magnetic field on convection.
Single-hidden-layer feed-forward quantum neural network based on Grover learning.
Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min
2013-09-01
In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles from quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing unit in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as activation functions in the hidden layer of the network, and the Grover searching algorithm iteratively finds the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and promising future applications. Simulations are performed to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
Precise regional baseline estimation using a priori orbital information
NASA Technical Reports Server (NTRS)
Lindqwister, Ulf J.; Lichten, Stephen M.; Blewitt, Geoffrey
1990-01-01
A solution using GPS measurements acquired during the CASA Uno campaign has resulted in 3-4 mm horizontal daily baseline repeatability and 13 mm vertical repeatability for a 729 km baseline located in North America. The agreement with VLBI is at the level of 10-20 mm for all components. The results were obtained with the GIPSY orbit determination and baseline estimation software and are based on five single-day data arcs spanning 20, 21, 25, 26, and 27 January 1988. The estimation strategy included resolving the carrier phase integer ambiguities, utilizing an optimal set of fixed reference stations, and constraining GPS orbit parameters by applying a priori information. A multiday GPS orbit and baseline solution has yielded similar 2-4 mm horizontal daily repeatabilities for the same baseline, consistent with the constrained single-day arc solutions. The application of weak constraints to the orbital state for single-day data arcs produces solutions which approach the precise orbits obtained with unconstrained multiday arc solutions.
Strategy selection in structured populations.
Tarnita, Corina E; Ohtsuki, Hisashi; Antal, Tibor; Fu, Feng; Nowak, Martin A
2009-08-07
Evolutionary game theory studies frequency-dependent selection. The fitness of a strategy is not constant, but depends on the relative frequencies of strategies in the population. This type of evolutionary dynamics occurs in many settings of ecology, infectious disease dynamics, animal behavior and social interactions of humans. Traditionally, evolutionary game dynamics are studied in well-mixed populations, where the interaction between any two individuals is equally likely. There have also been several approaches to study evolutionary games in structured populations. In this paper we present a simple result that holds for a large variety of population structures. We consider the game between two strategies, A and B, described by the payoff matrix (a, b; c, d). We study a mutation and selection process. For weak selection, strategy A is favored over B if and only if σa + b > c + σd. This means the effect of population structure on strategy selection can be described by a single parameter, σ. We present the values of σ for various examples including the well-mixed population, games on graphs, games in phenotype space and games on sets. We give a proof for the existence of such a σ, which holds for all population structures and update rules that have certain (natural) properties. We assume weak selection, but allow any mutation rate. We discuss the relationship between σ and the critical benefit-to-cost ratio for the evolution of cooperation. The single parameter σ allows us to quantify the ability of a population structure to promote the evolution of cooperation or to choose efficient equilibria in coordination games.
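The structural condition σa + b > c + σd is easy to encode; the payoff values in the example are arbitrary illustrations.

```python
def favored(a, b, c, d, sigma):
    """Under weak selection, strategy A is favored over B in a structured
    population iff sigma*a + b > c + sigma*d, where sigma is the single
    structure parameter described in the abstract."""
    return sigma * a + b > c + sigma * d

# A well-mixed population has sigma = 1, reducing the condition to the
# standard risk-dominance inequality a + b > c + d; larger sigma weights
# the diagonal (same-strategy) payoffs more heavily.
```

For example, with payoffs (a, b; c, d) = (3, 0; 5, 1), A is disfavored in a well-mixed population but becomes favored once σ exceeds 2.5.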
Heller, David N; Nochetto, Cristina B; Rummel, Nathan G; Thomas, Michael H
2006-07-26
A method was developed for detection of a variety of polar drug residues in eggs via liquid chromatography/tandem mass spectrometry (LC/MS/MS) with electrospray ionization (ESI). A total of twenty-nine target analytes from four drug classes (sulfonamides, tetracyclines, fluoroquinolones, and beta-lactams) were extracted from eggs using a hydrophilic-lipophilic balance polymer solid-phase extraction (SPE) cartridge. The extraction technique was developed for use at a target concentration of 100 ng/mL (ppb), and it was applied to eggs containing incurred residues from dosed laying hens. The ESI source was tuned using a single, generic set of tuning parameters, and analytes were separated with a phenyl-bonded silica cartridge column using an LC gradient. In a related study, residues of beta-lactam drugs were not found by LC/MS/MS in eggs from hens dosed orally with beta-lactam drugs. LC/MS/MS performance was evaluated on two generations of ion trap mass spectrometers, and key operational parameters were identified for each instrument. The ion trap acquisition methods could be set up for screening (a single product ion) or confirmation (multiple product ions). The lower limit of detection for screening purposes was 10-50 ppb (sulfonamides), 10-20 ppb (fluoroquinolones), and 10-50 ppb (tetracyclines), depending on the drug, instrument, and acquisition method. Development of this method demonstrates the feasibility of generic SPE, LC, and MS conditions for multiclass LC/MS residue screening.
NASA Astrophysics Data System (ADS)
Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting
2017-04-01
Synthetic Aperture Radar (SAR) is important for polar remote sensing since it can provide continuous observations day and night and in all weather conditions. SAR can be used for extracting surface roughness information characterized by the variance of dielectric properties and different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea ice zone. An accurate spatial distribution of leads in the sea ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions in SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering the contextual information and the statistical characteristics of sea ice to improve leads detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. The Method of Logarithmic Cumulants (MoLC) is exploited to estimate the parameters of each single statistical distribution, and an iterative Expectation Maximization (EM) algorithm is investigated to calculate the parameters of the mixture-statistical-distribution-based CRF model. In the posterior probability inference, a graph-cut energy minimization method is adopted for the initial leads detection. Post-processing procedures, including an aspect-ratio constraint and spatial smoothing, are utilized to improve the visual result.
The proposed method is validated on Sentinel-1A C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a pixel spacing of 40 m near the Prydz Bay area, East Antarctica. The main contributions are as follows: 1) a mixture-statistical-distribution-based CRF algorithm has been developed for leads detection from Sentinel-1A dual-polarization images; 2) an assessment of the proposed mixture-distribution-based CRF method against a single-distribution-based CRF algorithm has been presented; 3) preferable parameter settings, including the statistical distributions, the aspect-ratio threshold, and the spatial smoothing window size, have been provided. In the future, the proposed algorithm will be extended to operational processing of the Sentinel data series, given its low computational cost and high accuracy in leads detection.
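As a sketch of the parameter-estimation step, the code below runs EM on a two-component one-dimensional Gaussian mixture of simulated log-backscatter values. The paper fits SAR-specific statistical distributions initialized via MoLC, so the Gaussian form, the simulated dB values, and the component fractions here are all stand-in assumptions for illustration only.

```python
import numpy as np

def em_two_component(x, iters=200, tol=1e-8):
    """EM for a 2-component 1-D Gaussian mixture (simplified stand-in for
    the SAR-specific mixture distributions used in the paper)."""
    # Initialize the two means from spread-out quantiles of the data.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([np.var(x), np.var(x)])
    w = np.array([0.5, 0.5])
    ll_old = -np.inf
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each pixel.
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        joint = w * pdf
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances from responsibilities.
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        ll = np.log(joint.sum(axis=1)).sum()
        if ll - ll_old < tol:  # stop when the log-likelihood plateaus
            break
        ll_old = ll
    return w, mu, var

# Simulated backscatter (dB): dark leads vs. brighter sea ice (assumed values).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-18, 1.5, 800),    # leads
                    rng.normal(-10, 2.0, 3200)])  # sea ice
w, mu, var = em_two_component(x)
```

The recovered component means and weights would then feed the unary potentials of the CRF.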
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When site-specific data are minimal, the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes.
These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
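The averaging step can be sketched as follows; the posterior probabilities, kriged means, and kriging variances below are purely illustrative placeholders, not values from the report. The total variance of the averaged prediction combines the within-model kriging variance with the between-model spread of the means.

```python
import numpy as np

# Hypothetical posterior model probabilities and per-model kriging output
# for one prediction location (illustrative numbers only).
p = np.array([0.45, 0.30, 0.15, 0.10])           # posterior model probabilities
mean_k = np.array([-13.1, -12.8, -13.4, -12.9])  # kriged log permeability per model
var_k = np.array([0.40, 0.55, 0.35, 0.60])       # kriging variance per model

# Model-averaged prediction: probability-weighted mean.
mean_avg = np.sum(p * mean_k)

# Total variance = expected within-model variance + between-model variance.
var_avg = np.sum(p * var_k) + np.sum(p * (mean_k - mean_avg) ** 2)
```

Note that even when one model has the smallest kriging variance, the between-model term can dominate if the alternative models disagree.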
NASA Astrophysics Data System (ADS)
Merka, J.; Dolan, C. F.
2015-12-01
Finding and retrieving space physics data is often a complicated task, even for publicly available data sets: thousands of relatively small and many large data sets are stored in various formats and, in the better cases, accompanied by at least some documentation. The Virtual Heliospheric and Magnetospheric Observatories (VHO and VMO) help researchers by creating a single point of uniform discovery, access, and use of heliospheric (VHO) and magnetospheric (VMO) data. The VMO and VHO functionality relies on metadata expressed using the SPASE data model. This data model is developed by the SPASE Working Group, which is currently the only international group supporting global data management for solar and space physics. The two Virtual Observatories (VxOs) have initiated and led the development of a SPASE-related standard named SPASE Query Language (SPASEQL), which provides a standard way of submitting queries and receiving results. The VMO and VHO use SPASE and SPASEQL for searches based on various criteria such as spatial location, time of observation, measurement type, and parameter values. The parameter values are represented by their statistical estimators, calculated typically over 10-minute intervals: mean, median, standard deviation, minimum, and maximum. The use of statistical estimators enables science-driven data queries that simplify and shorten the effort to find where and/or how often a sought phenomenon is observed, as we will present.
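The reduction of a raw time series to per-interval statistical estimators can be sketched as below; the 1-second sampling, the sine-wave signal, and the function name are assumptions for illustration, not part of the VxO software.

```python
import numpy as np

def interval_estimators(t, values, window=600.0):
    """Reduce a time series to per-interval statistical estimators
    (mean, median, std, min, max), as used for VxO parameter searches.
    t is in seconds; window defaults to 10 minutes (600 s)."""
    bins = np.floor(t / window).astype(int)
    out = {}
    for b in np.unique(bins):
        v = values[bins == b]
        out[b] = dict(mean=v.mean(), median=np.median(v),
                      std=v.std(), min=v.min(), max=v.max())
    return out

# Example: 30 minutes of 1-s samples reduce to three summary records.
t = np.arange(1800.0)
v = np.sin(t / 300.0)
summ = interval_estimators(t, v)
```

A query such as "intervals where the maximum exceeds a threshold" then scans only the summary records rather than the full-resolution data.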
NASA Astrophysics Data System (ADS)
Zhang, Shun-Rong; Holt, John M.; Erickson, Philip J.; Goncharenko, Larisa P.
2018-05-01
Perrone and Mikhailov (2017, https://doi.org/10.1002/2017JA024193) and Mikhailov et al. (2017, https://doi.org/10.1002/2017JA023909) have recently examined thermospheric and ionospheric long-term trends using a data set of four thermospheric parameters (Tex, [O], [N2], and [O2]) and solar EUV flux. These data were derived from one single ionospheric parameter, foF1, using a nonlinear fitting procedure involving a photochemical model for the F1 peak. The F1 peak is assumed at the transition height ht with the linear recombination for atomic oxygen ions being equal to the quadratic recombination for molecular ions. This procedure has a number of obvious problems that are not addressed or not sufficiently justified. The potentially large ambiguities and biases in derived parameters make them unsuitable for precise quantitative ionospheric and thermospheric long-term trend studies. Furthermore, we assert that Perrone and Mikhailov (2017, https://doi.org/10.1002/2017JA024193) conclusions regarding incoherent scatter radar (ISR) ion temperature analysis for long-term trend studies are incorrect and in particular are based on a misunderstanding of the nature of the incoherent scatter radar measurement process. Large ISR data sets remain a consistent and statistically robust method for determining long term secular plasma temperature trends.
Echocardiography and cardiac resynchronisation therapy, friends or foes?
van Everdingen, W M; Schipper, J C; van 't Sant, J; Ramdat Misier, K; Meine, M; Cramer, M J
2016-01-01
Echocardiography is used in cardiac resynchronisation therapy (CRT) to assess cardiac function, in particular left ventricular (LV) volumetric status, and to predict response. Despite its widespread applicability, LV volumes determined by echocardiography have inherent measurement errors, interobserver and intraobserver variability, and discrepancies with the gold standard, magnetic resonance imaging. Echocardiographic predictors of CRT response are based on mechanical dyssynchrony. However, these parameters have mainly been tested in single-centre studies or lack feasibility. Speckle tracking echocardiography can guide LV lead placement, improving volumetric response and clinical outcome by steering lead positioning towards the latest contracting segment. Results on optimisation of CRT device settings using echocardiographic indices have so far been rather disappointing, as the indices suffer from noise. Defining response by echocardiography seems valid, although re-assessment after 6 months is advisable, as patients can show both continuous improvement and deterioration after the initial response. Three-dimensional echocardiography is promising for future applications, as it can determine volume, dyssynchrony and viability in a single recording, although image quality needs to be adequate. Deformation patterns from the septum and the derived parameters are promising, although validation in a multicentre trial is required. We conclude that echocardiography has a pivotal role in CRT, although clinicians should know its shortcomings.
NASA Astrophysics Data System (ADS)
Arjunan, V.; Marchewka, Mariusz K.; Pietraszko, A.; Kalaivani, M.
2012-11-01
The structural investigations of the molecular complex of 2-methyl-4-nitroaniline with trichloroacetic acid, namely 2-methyl-4-nitroanilinium trichloroacetate trichloroacetic acid (C11H10Cl6N2O6), have been performed by means of single-crystal and powder X-ray diffraction methods. The complex was formed with accompanying proton transfer from a trichloroacetic acid molecule to 2-methyl-4-nitroaniline. The studied crystal is built up of singly protonated 2-methyl-4-nitroanilinium cations, trichloroacetate anions and neutral trichloroacetic acid molecules. The crystals are monoclinic, space group P21/c, with a = 14.947 Å, b = 6.432 Å, c = 19.609 Å and Z = 4. The vibrational assignments and analysis of 2-methyl-4-nitroanilinium trichloroacetate trichloroacetic acid have also been performed by FTIR, FT-Raman and far-infrared spectral studies. Further support for the experimental findings was provided by quantum chemical studies performed with the DFT (B3LYP) method using the 6-31G**, cc-pVDZ, 6-31G and 6-31++G basis sets. The structural parameters, energies, thermodynamic parameters and NBO charges of 2M4NATCA were also determined by the DFT methods.
Structure parameters in rotating Couette-Poiseuille channel flow
NASA Technical Reports Server (NTRS)
Knightly, George H.; Sather, D.
1986-01-01
It is well known that a number of steady-state problems in fluid mechanics involving systems of nonlinear partial differential equations can be reduced to the problem of solving a single operator equation of the form v + λAv + λB(v) = 0, with v ∈ H and λ ∈ R, where H is an appropriate (real or complex) Hilbert space. Here λ is a typical load parameter, e.g., the Reynolds number, A is a linear operator, and B is a quadratic operator generated by a bilinear form. In this setting many bifurcation and stability results were obtained. A rotating Couette-Poiseuille channel flow was studied, and it was shown that, in general, the superposition of a Poiseuille flow on a rotating Couette channel flow is destabilizing.
Equilibrium properties of dense hydrogen isotope gases based on the theory of simple fluids.
Kowalczyk, Piotr; MacElroy, J M D
2006-08-03
We present a new method for the prediction of the equilibrium properties of dense gases containing hydrogen isotopes. The proposed approach combines the Feynman-Hibbs effective potential method and a deconvolution scheme introduced by Weeks et al. The resulting equations of state and the chemical potentials as functions of pressure for each of the hydrogen isotope gases depend on a single set of Lennard-Jones parameters. In addition to its simplicity, the proposed method with optimized Lennard-Jones potential parameters accurately describes the equilibrium properties of hydrogen isotope fluids in the regime of moderate temperatures and pressures. The present approach should find applications in the nonlocal density functional theory of inhomogeneous quantum fluids and should also be of particular relevance to hydrogen (clean energy) storage and to the separation of quantum isotopes by novel nanomaterials.
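The Feynman-Hibbs effective-potential idea can be sketched for a Lennard-Jones pair interaction as below: the quadratic FH correction adds a term proportional to the Laplacian of the classical potential, scaled by ħ²/(24 μ k_B T). The H2 parameter values here are rough textbook-style numbers, not the optimized set from the paper.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant (J s)
KB = 1.380649e-23        # Boltzmann constant (J/K)

def lj(r, eps, sigma):
    """Classical 12-6 Lennard-Jones potential (J); r in metres."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def feynman_hibbs(r, eps, sigma, mu, T):
    """Quadratic Feynman-Hibbs effective potential:
    U_FH = U + (hbar^2 / (24 mu kB T)) * (U'' + 2 U'/r),
    with the Laplacian of the LJ potential written in closed form."""
    lap = 4.0 * eps * (132.0 * sigma**12 / r**14 - 30.0 * sigma**6 / r**8)
    return lj(r, eps, sigma) + HBAR**2 / (24.0 * mu * KB * T) * lap

# Illustrative H2 parameters (eps/kB ~ 34 K, sigma ~ 2.96 A, assumed values);
# the reduced mass of an H2-H2 pair is m_H2 / 2.
eps = 34.0 * KB
sigma = 2.96e-10
m_h2 = 2.016 * 1.66053906660e-27
u_cl = lj(3.5e-10, eps, sigma)
u_fh = feynman_hibbs(3.5e-10, eps, sigma, m_h2 / 2.0, 100.0)
```

Because the correction scales with 1/μ, the lighter isotope feels a larger effective repulsion at the same temperature, which is the physical basis for quantum isotope separation.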
Impedance Flow Cytometry as a Tool to Analyze Microspore and Pollen Quality.
Heidmann, Iris; Di Berardino, Marco
2017-01-01
Analyzing pollen quality in an efficient and reliable manner is of great importance to the industries involved in seed and fruit production, plant breeding, and plant research. Pollen quality parameters, viability and germination capacity, are analyzed by various staining methods or by in vitro germination assays, respectively. These methods are time-consuming, species-dependent, and require a lab environment. Furthermore, the obtained viability data are often poorly related to in vivo pollen germination and seed set. Here, we describe a quick, label-free method to analyze pollen using microfluidic chips inserted into an impedance flow cytometer (IFC). Using this approach, pollen quality parameters are determined by a single measurement in a species-independent manner. The advantage of this protocol is that pollen viability and germination can be analyzed quickly by a reliable and standardized method.
Kadam, Ashish A; Karbowiak, Thomas; Voilley, Andrée; Debeaufort, Frédéric
2015-05-01
The mass transfer parameters diffusion and sorption in food and packaging or between them are the key parameters for assessing a food product's shelf-life in reference to consumer safety. This has become of paramount importance owing to the legislations set by the regulated markets. The technical capabilities that can be exploited for analyzing product-package interactions have been growing rapidly. Different techniques categorized according to the state of the diffusant (gas or liquid) in contact with the packaging material are emphasized in this review. Depending on the diffusant and on the analytical question under review, the different ways to study sorption and/or migration are presented and compared. Some examples have been suggested to reach the best possible choice, consisting of a single technique or a combination of different approaches. © 2014 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Lei, Jie
2011-03-01
In order to understand the electronic and transport properties of organic field-effect transistor (FET) materials, we theoretically studied polarons in two-dimensional systems using a tight-binding model with Holstein-type and Su-Schrieffer-Heeger-type electron-lattice couplings. Numerical calculations showed that a carrier adopts four kinds of localization, named the point polaron, the two-dimensional polaron, the one-dimensional polaron, and the extended state. The degree of localization is sensitive to the following model parameters: the strength and type of the electron-lattice couplings, and the signs and relative magnitudes of the transfer integrals. When a parameter set for the single-crystal phase of pentacene is applied within the Holstein model, a considerably delocalized hole polaron is found, consistent with the bandlike transport mechanism.
Carbon Nanotubes as FET Channel: Analog Design Optimization considering CNT Parameter Variability
NASA Astrophysics Data System (ADS)
Samar Ansari, Mohd.; Tripathi, S. K.
2017-08-01
Carbon nanotubes (CNTs), both single-walled and multi-walled, have been employed in a plethora of applications pertinent to semiconductor materials and devices including, but not limited to, biotechnology, materials science, nanoelectronics and nano-electro-mechanical systems (NEMS). The Carbon Nanotube Field Effect Transistor (CNFET) is one such electronic device which effectively utilizes CNTs to boost channel conduction, thereby yielding superior performance over standard MOSFETs. This paper explores the effects of variability in CNT physical parameters, viz. nanotube diameter, pitch, and the number of CNTs in the transistor channel, on the performance of a chosen analog circuit. It is further shown that, from the analyses performed, an optimal CNFET design can be derived to optimize the performance of the analog circuit as per a given specification set.
NASA Technical Reports Server (NTRS)
Arain, Altaf M.; Shuttleworth, W. James; Yang, Z-Liang; Michaud, Jene; Dolman, Johannes
1997-01-01
A coupled model, which combines the Biosphere-Atmosphere Transfer Scheme (BATS) with an advanced atmospheric boundary-layer model, was used to validate hypothetical aggregation rules for BATS-specific surface cover parameters. The model was initialized and tested with observations from the Anglo-Brazilian Amazonian Climate Observational Study and used to simulate surface fluxes for rain forest and pasture mixes at a site near Manaus in Brazil. The aggregation rules are shown to estimate parameters which give area-average surface fluxes similar to those calculated with explicit representation of forest and pasture patches for a range of meteorological and surface conditions relevant to this site, but the agreement deteriorates somewhat when there are large patch-to-patch differences in soil moisture. The aggregation rules, validated as above, were then applied to a remotely sensed 1-km land cover data set to obtain grid-average values of BATS vegetation parameters for 2.8 deg x 2.8 deg and 1 deg x 1 deg grids within the conterminous United States. There are significant differences in key vegetation parameters (aerodynamic roughness length, albedo, leaf area index, and stomatal resistance) when aggregate parameters are compared to parameters for the single, dominant cover within the grid. However, the surface energy fluxes calculated by stand-alone BATS with the 2-year forcing data from the International Satellite Land Surface Climatology Project (ISLSCP) CD-ROM were reasonably similar using aggregate-vegetation parameters and dominant-cover parameters, although there were some significant differences, particularly in the western USA.
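A minimal sketch of area-weighted parameter aggregation versus the dominant-cover alternative follows; the patch fractions and parameter values are invented for illustration, and the actual BATS aggregation rules differ in detail (roughness length, in particular, is often aggregated logarithmically rather than linearly because fluxes depend on ln(z/z0)).

```python
import numpy as np

# Hypothetical patch fractions and BATS-style parameters for a mixed grid
# cell (forest, pasture); values are illustrative, not from the BATS tables.
frac = np.array([0.6, 0.4])            # cover fractions
albedo = np.array([0.12, 0.19])
lai = np.array([5.0, 2.0])             # leaf area index
z0 = np.array([2.0, 0.05])             # roughness length (m)

# Linear (area-weighted) aggregation for albedo and LAI.
albedo_agg = np.sum(frac * albedo)
lai_agg = np.sum(frac * lai)

# Logarithmic aggregation for roughness length.
z0_agg = np.exp(np.sum(frac * np.log(z0)))

# Dominant-cover alternative: take the largest patch's parameters as-is.
dom = np.argmax(frac)
albedo_dom, z0_dom = albedo[dom], z0[dom]
```

The gap between `z0_agg` and `z0_dom` illustrates why the two parameterization strategies can yield different surface fluxes.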
NASA Astrophysics Data System (ADS)
Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf
2017-04-01
In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
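A minimal sketch of the preconditioning idea: in conjugate gradients, applying the preconditioner to a residual becomes a plain multiplication with the prior autocovariance Q, so Q's inverse never has to be applied inside the iteration. The toy covariance, observation operator, and problem size below are assumptions for illustration, not the paper's test cases.

```python
import numpy as np

def pcg(apply_A, b, apply_M, maxiter=200, tol=1e-10):
    """Preconditioned Conjugate Gradients for A x = b. apply_M applies the
    preconditioner; in the geostatistical setting this is a multiplication
    with the prior autocovariance Q, so Q^{-1} is never formed."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy example: A = Q^{-1} + H^T H, the typical Gauss-Newton Hessian
# structure; preconditioning with Q turns Q A into a low-rank update of
# the identity, so CG converges in a handful of iterations.
rng = np.random.default_rng(1)
n = 50
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Q = np.exp(-d / 10.0)                  # simple exponential covariance
H = rng.normal(size=(5, n))            # 5 "measurements"
A = np.linalg.inv(Q) + H.T @ H         # explicit only for this tiny demo
b = rng.normal(size=n)
x = pcg(lambda v: A @ v, b, lambda r: Q @ r)
```

In a real application A is never formed; its action is computed via forward and adjoint solves, which is what keeps the method feasible for 10^4-10^7 parameters.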
Systems and methods for optimal power flow on a radial network
Low, Steven H.; Peng, Qiuyu
2018-04-24
Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node. The node controller is configured to: send node operating parameters to the nodes in the set; receive operating parameters from the nodes in the set; calculate a plurality of updated node operating parameters using an iterative process over the operating parameters of the node and of the set, where the iterative process involves evaluation of a closed-form solution; and adjust the node operating parameters.
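The exchange-and-update loop described in the claim can be sketched with a toy consensus iteration on a small tree; the tree topology, the node values, and the averaging update rule are all assumptions standing in for the patent's (unspecified) closed-form optimal power flow update.

```python
# Toy radial (tree) network: node -> children, plus parent links.
tree = {0: [1, 2], 1: [3], 2: [], 3: []}
parent = {1: 0, 2: 0, 3: 1}
value = {0: 1.0, 1: 0.0, 2: 0.5, 3: 0.25}  # one operating parameter per node

for _ in range(100):
    new = {}
    for node, v in value.items():
        # Each node exchanges parameters with its ancestor and child nodes,
        # then updates toward their average (stand-in closed-form step).
        neigh = tree[node] + ([parent[node]] if node in parent else [])
        new[node] = (v + sum(value[k] for k in neigh)) / (1 + len(neigh))
    value = new
```

On a connected tree this iteration converges geometrically to a common value, illustrating how purely local ancestor/child exchanges can drive the whole network to agreement.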
Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki
2011-04-01
In this study we sought to evaluate and emphasize the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the sets published in the literature, and a theoretical mean parameter set was computed. To investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model spanned a certain dose-response range when the selected parameter sets were applied, and comparison of these ranges revealed a large area of coincidence. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence may be enlarged, further constraining their predictive ability. The selection of the proper radiobiological parameter set for a given clinical endpoint is therefore crucial. Published parameter values are not definitive; they should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provide a reasonably accurate fit of the NTCP models to the considered endpoint.
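As one concrete example of how parameter-set choice enters an NTCP model, here is a minimal LKB sketch. The three-bin DVH is invented, and the TD50/m/n values are placeholders (one widely cited whole-lung set is roughly TD50 ≈ 24.5 Gy, m ≈ 0.18, n ≈ 0.87, but treat these as assumptions); swapping in a different published set shifts the predicted complication probability directly.

```python
import math

def geud(dose_bins, vol_fracs, n):
    """Generalized EUD from a differential DVH; n is the LKB volume parameter."""
    return sum(v * d ** (1.0 / n) for d, v in zip(dose_bins, vol_fracs)) ** n

def ntcp_lkb(dose_bins, vol_fracs, td50, m, n):
    """Lyman-Kutcher-Burman NTCP: probit of (gEUD - TD50) / (m * TD50)."""
    t = (geud(dose_bins, vol_fracs, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Invented lung DVH: bin-centre doses (Gy) and fractional volumes (sum to 1).
dvh_dose = [5.0, 15.0, 25.0]
dvh_vol = [0.5, 0.3, 0.2]
p = ntcp_lkb(dvh_dose, dvh_vol, td50=24.5, m=0.18, n=0.87)
```

Because gEUD is positively homogeneous in dose, doubling every bin dose doubles the gEUD, which moves the probit argument and can swing the NTCP from a few percent to tens of percent, illustrating the sensitivity the abstract warns about.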
Proportional-delayed controllers design for LTI-systems: a geometric approach
NASA Astrophysics Data System (ADS)
Hernández-Díez, J.-E.; Méndez-Barrios, C.-F.; Mondié, S.; Niculescu, S.-I.; González-Galván, E. J.
2018-04-01
This paper focuses on the design of P-δ controllers for single-input single-output linear time-invariant systems. The basis of this work is a geometric approach that partitions the parameter space into regions with a constant number of unstable roots. This methodology defines the hyper-planes separating these regions and characterises the way in which the number of unstable roots changes when such a hyper-plane is crossed. The main contribution of the paper is an explicit tool for finding P-δ gains that ensure the stability of the closed-loop system. In addition, the proposed methodology allows the design of a non-fragile controller with a desired exponential decay rate σ. Several numerical examples illustrate the results, and a haptic experimental set-up demonstrates the effectiveness of P-δ controllers.
Search for Production of Single Top Quarks Via tcg and tug Flavor-Changing-Neutral-Current Couplings
NASA Astrophysics Data System (ADS)
Abazov, V. M.; Abbott, B.; Abolins, M.; Acharya, B. S.; Adams, M.; Adams, T.; Aguilo, E.; Ahn, S. H.; Ahsan, M.; Alexeev, G. D.; Alkhazov, G.; Alton, A.; Alverson, G.; Alves, G. A.; Anastasoaie, M.; Ancu, L. S.; Andeen, T.; Anderson, S.; Andrieu, B.; Anzelc, M. S.; Arnoud, Y.; Arov, M.; Askew, A.; Åsman, B.; Assis Jesus, A. C. S.; Atramentov, O.; Autermann, C.; Avila, C.; Ay, C.; Badaud, F.; Baden, A.; Bagby, L.; Baldin, B.; Bandurin, D. V.; Banerjee, P.; Banerjee, S.; Barberis, E.; Barfuss, A.-F.; Bargassa, P.; Baringer, P.; Barnes, C.; Barreto, J.; Bartlett, J. F.; Bassler, U.; Bauer, D.; Beale, S.; Bean, A.; Begalli, M.; Begel, M.; Belanger-Champagne, C.; Bellantoni, L.; Bellavance, A.; Benitez, J. A.; Beri, S. B.; Bernardi, G.; Bernhard, R.; Berntzon, L.; Bertram, I.; Besançon, M.; Beuselinck, R.; Bezzubov, V. A.; Bhat, P. C.; Bhatnagar, V.; Binder, M.; Biscarat, C.; Blackler, I.; Blazey, G.; Blekman, F.; Blessing, S.; Bloch, D.; Bloom, K.; Boehnlein, A.; Boline, D.; Bolton, T. A.; Boos, E. E.; Borissov, G.; Bos, K.; Bose, T.; Brandt, A.; Brock, R.; Brooijmans, G.; Bross, A.; Brown, D.; Buchanan, N. J.; Buchholz, D.; Buehler, M.; Buescher, V.; Bunichev, V.; Burdin, S.; Burke, S.; Burnett, T. H.; Busato, E.; Buszello, C. P.; Butler, J. M.; Calfayan, P.; Calvet, S.; Cammin, J.; Caron, S.; Carvalho, W.; Casey, B. C. K.; Cason, N. M.; Castilla-Valdez, H.; Chakrabarti, S.; Chakraborty, D.; Chan, K.; Chan, K. M.; Chandra, A.; Charles, F.; Cheu, E.; Chevallier, F.; Cho, D. K.; Choi, S.; Choudhary, B.; Christofek, L.; Christoudias, T.; Claes, D.; Clément, B.; Clément, C.; Coadou, Y.; Cooke, M.; Cooper, W. E.; Corcoran, M.; Couderc, F.; Cousinou, M.-C.; Cox, B.; Crépé-Renaudin, S.; Cutts, D.; Ćwiok, M.; da Motta, H.; Das, A.; Davies, B.; Davies, G.; de, K.; de Jong, P.; de Jong, S. J.; de La Cruz-Burelo, E.; de Oliveira Martins, C.; Degenhardt, J. D.; Déliot, F.; Demarteau, M.; Demina, R.; Denisov, D.; Denisov, S. P.; Desai, S.; Diehl, H. 
T.; Diesburg, M.; Doidge, M.; Dominguez, A.; Dong, H.; Dudko, L. V.; Duflot, L.; Dugad, S. R.; Duggan, D.; Duperrin, A.; Dyer, J.; Dyshkant, A.; Eads, M.; Edmunds, D.; Ellison, J.; Elvira, V. D.; Enari, Y.; Eno, S.; Ermolov, P.; Evans, H.; Evdokimov, A.; Evdokimov, V. N.; Ferapontov, A. V.; Ferbel, T.; Fiedler, F.; Filthaut, F.; Fisher, W.; Fisk, H. E.; Ford, M.; Fortner, M.; Fox, H.; Fu, S.; Fuess, S.; Gadfort, T.; Galea, C. F.; Gallas, E.; Galyaev, E.; Garcia, C.; Garcia-Bellido, A.; Gavrilov, V.; Gay, P.; Geist, W.; Gelé, D.; Gerber, C. E.; Gershtein, Y.; Gillberg, D.; Ginther, G.; Gollub, N.; Gómez, B.; Goussiou, A.; Grannis, P. D.; Greenlee, H.; Greenwood, Z. D.; Gregores, E. M.; Grenier, G.; Gris, Ph.; Grivaz, J.-F.; Grohsjean, A.; Grünendahl, S.; Grünewald, M. W.; Guo, F.; Guo, J.; Gutierrez, G.; Gutierrez, P.; Haas, A.; Hadley, N. J.; Haefner, P.; Hagopian, S.; Haley, J.; Hall, I.; Hall, R. E.; Han, L.; Hanagaki, K.; Hansson, P.; Harder, K.; Harel, A.; Harrington, R.; Hauptman, J. M.; Hauser, R.; Hays, J.; Hebbeker, T.; Hedin, D.; Hegeman, J. G.; Heinmiller, J. M.; Heinson, A. P.; Heintz, U.; Hensel, C.; Herner, K.; Hesketh, G.; Hildreth, M. D.; Hirosky, R.; Hobbs, J. D.; Hoeneisen, B.; Hoeth, H.; Hohlfeld, M.; Hong, S. J.; Hooper, R.; Houben, P.; Hu, Y.; Hubacek, Z.; Hynek, V.; Iashvili, I.; Illingworth, R.; Ito, A. S.; Jabeen, S.; Jaffré, M.; Jain, S.; Jakobs, K.; Jarvis, C.; Jenkins, A.; Jesik, R.; Johns, K.; Johnson, C.; Johnson, M.; Jonckheere, A.; Jonsson, P.; Juste, A.; Käfer, D.; Kahn, S.; Kajfasz, E.; Kalinin, A. M.; Kalk, J. M.; Kalk, J. R.; Kappler, S.; Karmanov, D.; Kasper, J.; Kasper, P.; Katsanos, I.; Kau, D.; Kaur, R.; Kehoe, R.; Kermiche, S.; Khalatyan, N.; Khanov, A.; Kharchilava, A.; Kharzheev, Y. M.; Khatidze, D.; Kim, H.; Kim, T. J.; Kirby, M. H.; Klima, B.; Kohli, J. M.; Konrath, J.-P.; Kopal, M.; Korablev, V. M.; Kotcher, J.; Kothari, B.; Koubarovsky, A.; Kozelov, A. 
V.; Krop, D.; Kryemadhi, A.; Kuhl, T.; Kumar, A.; Kunori, S.; Kupco, A.; Kurča, T.; Kvita, J.; Lam, D.; Lammers, S.; Landsberg, G.; Lazoflores, J.; Lebrun, P.; Lee, W. M.; Leflat, A.; Lehner, F.; Lesne, V.; Leveque, J.; Lewis, P.; Li, J.; Li, L.; Li, Q. Z.; Lietti, S. M.; Lima, J. G. R.; Lincoln, D.; Linnemann, J.; Lipaev, V. V.; Lipton, R.; Liu, Z.; Lobo, L.; Lobodenko, A.; Lokajicek, M.; Lounis, A.; Love, P.; Lubatti, H. J.; Lynker, M.; Lyon, A. L.; Maciel, A. K. A.; Madaras, R. J.; Mättig, P.; Magass, C.; Magerkurth, A.; Makovec, N.; Mal, P. K.; Malbouisson, H. B.; Malik, S.; Malyshev, V. L.; Mao, H. S.; Maravin, Y.; Martin, B.; McCarthy, R.; Melnitchouk, A.; Mendes, A.; Mendoza, L.; Mercadante, P. G.; Merkin, M.; Merritt, K. W.; Meyer, A.; Meyer, J.; Michaut, M.; Miettinen, H.; Millet, T.; Mitrevski, J.; Molina, J.; Mommsen, R. K.; Mondal, N. K.; Monk, J.; Moore, R. W.; Moulik, T.; Muanza, G. S.; Mulders, M.; Mulhearn, M.; Mundal, O.; Mundim, L.; Nagy, E.; Naimuddin, M.; Narain, M.; Naumann, N. A.; Neal, H. A.; Negret, J. P.; Neustroev, P.; Nilsen, H.; Noeding, C.; Nomerotski, A.; Novaes, S. F.; Nunnemann, T.; O'Dell, V.; O'Neil, D. C.; Obrant, G.; Ochando, C.; Oguri, V.; Oliveira, N.; Onoprienko, D.; Oshima, N.; Osta, J.; Otec, R.; Otero Y Garzón, G. J.; Owen, M.; Padley, P.; Pangilinan, M.; Parashar, N.; Park, S.-J.; Park, S. K.; Parsons, J.; Partridge, R.; Parua, N.; Patwa, A.; Pawloski, G.; Perea, P. M.; Perfilov, M.; Peters, K.; Peters, Y.; Pétroff, P.; Petteni, M.; Piegaia, R.; Piper, J.; Pleier, M.-A.; Podesta-Lerma, P. L. M.; Podstavkov, V. M.; Pogorelov, Y.; Pol, M.-E.; Pompoš, A.; Pope, B. G.; Popov, A. V.; Potter, C.; Prado da Silva, W. L.; Prosper, H. B.; Protopopescu, S.; Qian, J.; Quadt, A.; Quinn, B.; Rangel, M. S.; Rani, K. J.; Ranjan, K.; Ratoff, P. N.; Renkel, P.; Reucroft, S.; Rijssenbeek, M.; Ripp-Baudot, I.; Rizatdinova, F.; Robinson, S.; Rodrigues, R. F.; Royon, C.; Rubinov, P.; Ruchti, R.; Sajot, G.; Sánchez-Hernández, A.; Sanders, M. 
P.; Santoro, A.; Savage, G.; Sawyer, L.; Scanlon, T.; Schaile, D.; Schamberger, R. D.; Scheglov, Y.; Schellman, H.; Schieferdecker, P.; Schmitt, C.; Schwanenberger, C.; Schwartzman, A.; Schwienhorst, R.; Sekaric, J.; Sengupta, S.; Severini, H.; Shabalina, E.; Shamim, M.; Shary, V.; Shchukin, A. A.; Shivpuri, R. K.; Shpakov, D.; Siccardi, V.; Sidwell, R. A.; Simak, V.; Sirotenko, V.; Skubic, P.; Slattery, P.; Smirnov, D.; Smith, R. P.; Snow, G. R.; Snow, J.; Snyder, S.; Söldner-Rembold, S.; Sonnenschein, L.; Sopczak, A.; Sosebee, M.; Soustruznik, K.; Souza, M.; Spurlock, B.; Stark, J.; Steele, J.; Stolin, V.; Stone, A.; Stoyanova, D. A.; Strandberg, J.; Strandberg, S.; Strang, M. A.; Strauss, M.; Ströhmer, R.; Strom, D.; Strovink, M.; Stutte, L.; Sumowidagdo, S.; Svoisky, P.; Sznajder, A.; Talby, M.; Tamburello, P.; Taylor, W.; Telford, P.; Temple, J.; Tiller, B.; Tissandier, F.; Titov, M.; Tokmenin, V. V.; Tomoto, M.; Toole, T.; Torchiani, I.; Trefzger, T.; Trincaz-Duvoid, S.; Tsybychev, D.; Tuchming, B.; Tully, C.; Tuts, P. M.; Unalan, R.; Uvarov, L.; Uvarov, S.; Uzunyan, S.; Vachon, B.; van den Berg, P. J.; van Eijk, B.; van Kooten, R.; van Leeuwen, W. M.; Varelas, N.; Varnes, E. W.; Vartapetian, A.; Vasilyev, I. A.; Vaupel, M.; Verdier, P.; Vertogradov, L. S.; Verzocchi, M.; Villeneuve-Seguier, F.; Vint, P.; Vlimant, J.-R.; von Toerne, E.; Voutilainen, M.; Vreeswijk, M.; Wahl, H. D.; Wang, L.; Wang, M. H. L. S.; Warchol, J.; Watts, G.; Wayne, M.; Weber, G.; Weber, M.; Weerts, H.; Wenger, A.; Wermes, N.; Wetstein, M.; White, A.; Wicke, D.; Wilson, G. W.; Wimpenny, S. J.; Wobisch, M.; Wood, D. R.; Wyatt, T. R.; Xie, Y.; Yacoob, S.; Yamada, R.; Yan, M.; Yasuda, T.; Yatsunenko, Y. A.; Yip, K.; Yoo, H. D.; Youn, S. W.; Yu, C.; Yu, J.; Yurkewicz, A.; Zatserklyaniy, A.; Zeitnitz, C.; Zhang, D.; Zhao, T.; Zhou, B.; Zhu, J.; Zielinski, M.; Zieminska, D.; Zieminski, A.; Zutshi, V.; Zverev, E. G.
2007-11-01
We search for the production of single top quarks via flavor-changing-neutral-current couplings of a gluon to the top quark and a charm (c) or up (u) quark. We analyze 230pb-1 of lepton+jets data from pp¯ collisions at a center-of-mass energy of 1.96 TeV collected by the D0 detector at the Fermilab Tevatron Collider. We observe no significant deviation from standard model predictions, and hence set upper limits on the anomalous coupling parameters κgc/Λ and κgu/Λ, where the κg parameters define the strengths of the tcg and tug couplings, and Λ defines the scale of new physics. The limits at 95% C.L. are κgc/Λ<0.15TeV-1 and κgu/Λ<0.037TeV-1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yueyong; Xu, Yanhui; Zhu, Jieqing
2005-09-01
Single crystals of the central structure domains from mumps virus F protein have been obtained by the hanging-drop vapour-diffusion method. A diffraction data set has been collected to 2.2 Å resolution. Fusion of members of the Paramyxoviridae family involves two glycoproteins: the attachment protein and the fusion protein. Changes in the fusion-protein conformation are caused by binding of the attachment protein to the cellular receptor. In the membrane-fusion process, two highly conserved heptad-repeat (HR) regions, HR1 and HR2, are believed to form a stable six-helix coiled-coil bundle. However, no crystal structure has yet been determined for this state in the mumps virus (MuV, a member of the Paramyxoviridae family). In this study, a single-chain protein consisting of the two HR regions connected by a flexible amino-acid linker (named 2-Helix) was expressed, purified and crystallized by the hanging-drop vapour-diffusion method. A complete X-ray data set was obtained in-house to 2.2 Å resolution from a single crystal. The crystal belongs to space group C2, with unit-cell parameters a = 161.2, b = 60.8, c = 40.1 Å, β = 98.4°. The crystal structure will help in understanding the molecular mechanism of Paramyxoviridae family membrane fusion.
Performance factors in associative learning: assessment of the sometimes competing retrieval model.
Witnauer, James E; Wojick, Brittany M; Polack, Cody W; Miller, Ralph R
2012-09-01
Previous simulations revealed that the sometimes competing retrieval model (SOCR; Stout & Miller, Psychological Review, 114, 759-783, 2007), which assumes local error reduction, can explain many cue interaction phenomena that elude traditional associative theories based on total error reduction. Here, we applied SOCR to a new set of Pavlovian phenomena. Simulations used a single set of fixed parameters to simulate each basic effect (e.g., blocking) and, for specific experiments using different procedures, used fitted parameters discovered through hill climbing. In simulation 1, SOCR was successfully applied to basic acquisition, including the overtraining effect, which is context dependent. In simulation 2, we applied SOCR to basic extinction and renewal. SOCR anticipated these effects with both fixed parameters and best-fitting parameters, although the renewal effects were weaker than those observed in some experiments. In simulation 3a, feature-negative training was simulated, including the often observed transition from second-order conditioning to conditioned inhibition. In simulation 3b, SOCR predicted the observation that conditioned inhibition after feature-negative and differential conditioning depends on intertrial interval. In simulation 3c, SOCR successfully predicted failure of conditioned inhibition to extinguish with presentations of the inhibitor alone under most circumstances. In simulation 4, cue competition, including blocking (4a), recovery from relative validity (4b), and unblocking (4c), was simulated. In simulation 5, SOCR correctly predicted that inhibitors gain more behavioral control than do excitors when they are trained in compound. Simulation 6 demonstrated that SOCR explains the slower acquisition observed following CS-weak shock pairings.
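The abstract mentions fitted parameters "discovered through hill climbing". As an illustration only (a generic improvement-only hill climber, not SOCR's actual fitting routine), the idea can be sketched as:

```python
import random

def hill_climb(loss, params, step=0.1, iters=500, seed=0):
    """Improvement-only hill climbing: perturb one parameter at a
    time and keep the move only if the loss decreases."""
    rng = random.Random(seed)
    p = list(params)
    best = loss(p)
    for _ in range(iters):
        i = rng.randrange(len(p))
        old = p[i]
        p[i] = old + rng.uniform(-step, step)
        trial = loss(p)
        if trial < best:
            best = trial
        else:
            p[i] = old  # revert the non-improving move
    return p, best
```

In model fitting, `loss` would be a discrepancy measure between simulated and observed behavior; here it is an arbitrary test function.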
A global analysis of Y-chromosomal haplotype diversity for 23 STR loci.
Purps, Josephine; Siegert, Sabine; Willuweit, Sascha; Nagy, Marion; Alves, Cíntia; Salazar, Renato; Angustia, Sheila M T; Santos, Lorna H; Anslinger, Katja; Bayer, Birgit; Ayub, Qasim; Wei, Wei; Xue, Yali; Tyler-Smith, Chris; Bafalluy, Miriam Baeta; Martínez-Jarreta, Begoña; Egyed, Balazs; Balitzki, Beate; Tschumi, Sibylle; Ballard, David; Court, Denise Syndercombe; Barrantes, Xinia; Bäßler, Gerhard; Wiest, Tina; Berger, Burkhard; Niederstätter, Harald; Parson, Walther; Davis, Carey; Budowle, Bruce; Burri, Helen; Borer, Urs; Koller, Christoph; Carvalho, Elizeu F; Domingues, Patricia M; Chamoun, Wafaa Takash; Coble, Michael D; Hill, Carolyn R; Corach, Daniel; Caputo, Mariela; D'Amato, Maria E; Davison, Sean; Decorte, Ronny; Larmuseau, Maarten H D; Ottoni, Claudio; Rickards, Olga; Lu, Di; Jiang, Chengtao; Dobosz, Tadeusz; Jonkisz, Anna; Frank, William E; Furac, Ivana; Gehrig, Christian; Castella, Vincent; Grskovic, Branka; Haas, Cordula; Wobst, Jana; Hadzic, Gavrilo; Drobnic, Katja; Honda, Katsuya; Hou, Yiping; Zhou, Di; Li, Yan; Hu, Shengping; Chen, Shenglan; Immel, Uta-Dorothee; Lessig, Rüdiger; Jakovski, Zlatko; Ilievska, Tanja; Klann, Anja E; García, Cristina Cano; de Knijff, Peter; Kraaijenbrink, Thirsa; Kondili, Aikaterini; Miniati, Penelope; Vouropoulou, Maria; Kovacevic, Lejla; Marjanovic, Damir; Lindner, Iris; Mansour, Issam; Al-Azem, Mouayyad; Andari, Ansar El; Marino, Miguel; Furfuro, Sandra; Locarno, Laura; Martín, Pablo; Luque, Gracia M; Alonso, Antonio; Miranda, Luís Souto; Moreira, Helena; Mizuno, Natsuko; Iwashima, Yasuki; Neto, Rodrigo S Moura; Nogueira, Tatiana L S; Silva, Rosane; Nastainczyk-Wulf, Marina; Edelmann, Jeanett; Kohl, Michael; Nie, Shengjie; Wang, Xianping; Cheng, Baowen; Núñez, Carolina; Pancorbo, Marian Martínez de; Olofsson, Jill K; Morling, Niels; Onofri, Valerio; Tagliabracci, Adriano; Pamjav, Horolma; Volgyi, Antonia; Barany, Gusztav; Pawlowski, Ryszard; Maciejewska, Agnieszka; Pelotti, Susi; Pepinski, Witold; Abreu-Glowacka, 
Monica; Phillips, Christopher; Cárdenas, Jorge; Rey-Gonzalez, Danel; Salas, Antonio; Brisighelli, Francesca; Capelli, Cristian; Toscanini, Ulises; Piccinini, Andrea; Piglionica, Marilidia; Baldassarra, Stefania L; Ploski, Rafal; Konarzewska, Magdalena; Jastrzebska, Emila; Robino, Carlo; Sajantila, Antti; Palo, Jukka U; Guevara, Evelyn; Salvador, Jazelyn; Ungria, Maria Corazon De; Rodriguez, Jae Joseph Russell; Schmidt, Ulrike; Schlauderer, Nicola; Saukko, Pekka; Schneider, Peter M; Sirker, Miriam; Shin, Kyoung-Jin; Oh, Yu Na; Skitsa, Iulia; Ampati, Alexandra; Smith, Tobi-Gail; Calvit, Lina Solis de; Stenzl, Vlastimil; Capal, Thomas; Tillmar, Andreas; Nilsson, Helena; Turrina, Stefania; De Leo, Domenico; Verzeletti, Andrea; Cortellini, Venusia; Wetton, Jon H; Gwynne, Gareth M; Jobling, Mark A; Whittle, Martin R; Sumita, Denilce R; Wolańska-Nowak, Paulina; Yong, Rita Y Y; Krawczak, Michael; Nothnagel, Michael; Roewer, Lutz
2014-09-01
In a worldwide collaborative effort, 19,630 Y-chromosomes were sampled from 129 different populations in 51 countries. These chromosomes were typed for 23 short-tandem repeat (STR) loci (DYS19, DYS389I, DYS389II, DYS390, DYS391, DYS392, DYS393, DYS385ab, DYS437, DYS438, DYS439, DYS448, DYS456, DYS458, DYS635, GATAH4, DYS481, DYS533, DYS549, DYS570, DYS576, and DYS643) using the PowerPlex Y23 System (PPY23, Promega Corporation, Madison, WI). Locus-specific allelic spectra of these markers were determined and a consistently high level of allelic diversity was observed. A considerable number of null, duplicate and off-ladder alleles were revealed. Standard single-locus and haplotype-based parameters were calculated and compared between subsets of Y-STR markers established for forensic casework. The PPY23 marker set provides substantially stronger discriminatory power than other available kits but at the same time reveals the same general patterns of population structure as other marker sets. A strong correlation was observed between the number of Y-STRs included in a marker set and some of the forensic parameters under study. Interestingly, a weak but consistent trend toward smaller genetic distances resulting from larger numbers of markers became apparent. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
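One of the standard haplotype-based forensic parameters referred to above is haplotype (gene) diversity, commonly computed with Nei's unbiased estimator D = n/(n-1) · (1 - Σ p_i²). A minimal sketch (illustrative, not code from the study):

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's unbiased haplotype diversity for a sample of n > 1
    haplotypes: D = n/(n-1) * (1 - sum of squared frequencies)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_p2)
```

A sample in which every haplotype is unique has diversity 1; a sample with a single shared haplotype has diversity 0.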
Quality and Control of Water Vapor Winds
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Atkinson, Robert J.
1996-01-01
Water vapor imagery from the geostationary satellites such as GOES, Meteosat, and GMS provides synoptic views of dynamical events on a continual basis. Because the imagery represents a non-linear combination of mid- and upper-tropospheric thermodynamic parameters (three-dimensional variations in temperature and humidity), video loops of these image products provide enlightening views of regional flow fields, the movement of tropical and extratropical storm systems, the transfer of moisture between hemispheres and from the tropics to the mid-latitudes, and the dominance of high pressure systems over particular regions of the Earth. Despite the obvious larger scale features, the water vapor imagery contains significant image variability down to the single 8 km GOES pixel. These features can be quantitatively identified and tracked from one time to the next using various image processing techniques. Merrill et al. (1991), Hayden and Schmidt (1992), and Laurent (1993) have documented the operational procedures and capabilities of NOAA and ESOC to produce cloud and water vapor winds. These techniques employ standard correlation and template matching approaches to wind tracking and use qualitative and quantitative procedures to eliminate bad wind vectors from the wind data set. Techniques have also been developed to improve the quality of the operational winds through robust editing procedures (Hayden and Veldon 1991). These quality and control approaches have limitations, are often subjective, and constrain wind variability to be consistent with model derived wind fields. This paper describes research focused on the refinement of objective quality and control parameters for water vapor wind vector data sets. New quality and control measures are developed and employed to provide a more robust wind data set for climate analysis, data assimilation studies, as well as operational weather forecasting.
The parameters are applicable to cloud-tracked winds as well with minor modifications. The improvement in winds through use of these new quality and control parameters is measured without the use of rawinsonde or modeled wind field data and compared with other approaches.
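The correlation and template-matching approach to wind tracking described above can be illustrated with a brute-force normalized cross-correlation search. This toy sketch (a hypothetical function, not operational NOAA/ESOC code) returns the pixel displacement of a feature between two successive images; a real system would convert displacement to a wind vector using pixel size and image interval:

```python
import numpy as np

def track_feature(img0, img1, row, col, win=8, search=6):
    """Locate the window of img0 centred at (row, col) in img1 by
    brute-force normalized cross-correlation over +/- search pixels."""
    t = img0[row - win:row + win, col - win:col + win].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-9)
    best = (-np.inf, 0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            c = img1[row + dr - win:row + dr + win,
                     col + dc - win:col + dc + win].astype(float)
            if c.shape != t.shape:
                continue  # candidate window fell off the image edge
            c = (c - c.mean()) / (c.std() + 1e-9)
            score = float((t * c).mean())
            if score > best[0]:
                best = (score, dr, dc)
    return best  # (correlation score, row shift, column shift)
```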
Knacker, T; Schallnaß, H J; Klaschka, U; Ahlers, J
1995-11-01
The criteria for classification and labelling of substances as "dangerous for the environment" agreed upon within the European Union (EU) were applied to two sets of existing chemicals. One set (sample A) consisted of 41 randomly selected compounds listed in the European Inventory of Existing Chemical Substances (EINECS). The other set (sample B) comprised 115 substances listed in Annex I of Directive 67/548/EEC which were classified by the EU Working Group on Classification and Labelling of Existing Chemicals. The aquatic toxicity (fish mortality, Daphnia immobilisation, algal growth inhibition), ready biodegradability and n-octanol/water partition coefficient were measured for sample A by one and the same laboratory. For sample B, the available ecotoxicological data originated from many different sources and therefore were rather heterogeneous. In both samples, algal toxicity was the most sensitive effect parameter for most substances. Furthermore, it was found that classification based on a single aquatic test result differs in many cases from classification based on a complete data set, although a correlation exists between the biological end-points of the aquatic toxicity test systems.
The collection of MicroED data for macromolecular crystallography.
Shi, Dan; Nannenga, Brent L; de la Cruz, M Jason; Liu, Jinyang; Sawtelle, Steven; Calero, Guillermo; Reyes, Francis E; Hattne, Johan; Gonen, Tamir
2016-05-01
The formation of large, well-ordered crystals for crystallographic experiments remains a crucial bottleneck to the structural understanding of many important biological systems. To help alleviate this problem in crystallography, we have developed the MicroED method for the collection of electron diffraction data from 3D microcrystals and nanocrystals of radiation-sensitive biological material. In this approach, liquid solutions containing protein microcrystals are deposited on carbon-coated electron microscopy grids and are vitrified by plunging them into liquid ethane. MicroED data are collected for each selected crystal using cryo-electron microscopy, in which the crystal is diffracted using very few electrons as the stage is continuously rotated. This protocol gives advice on how to identify microcrystals by light microscopy or by negative-stain electron microscopy in samples obtained from standard protein crystallization experiments. The protocol also includes information about custom-designed equipment for controlling crystal rotation and software for recording experimental parameters in diffraction image metadata. Identifying microcrystals, preparing samples and setting up the microscope for diffraction data collection take approximately half an hour for each step. Screening microcrystals for quality diffraction takes roughly an hour, and the collection of a single data set is ∼10 min in duration. Complete data sets and resulting high-resolution structures can be obtained from a single crystal or by merging data from multiple crystals.
Advanced active quenching circuit for ultra-fast quantum cryptography.
Stipčević, Mario; Christensen, Bradley G; Kwiat, Paul G; Gauthier, Daniel J
2017-09-04
Commercial photon-counting modules based on actively quenched solid-state avalanche photodiode sensors are used in a wide variety of applications. Manufacturers characterize their detectors by specifying a small set of parameters, such as detection efficiency, dead time, dark count rate, afterpulsing probability and single-photon arrival-time resolution (jitter). However, they usually do not specify the range of conditions over which these parameters are constant or present a sufficient description of the characterization process. In this work, we perform a few novel tests on two commercial detectors and identify an additional set of imperfections that must be specified to sufficiently characterize their behavior. These include rate-dependence of the dead time and jitter, detection delay shift, and "twilighting". We find that these additional non-ideal behaviors can lead to unexpected effects or strong deterioration of the performance of a system using these devices. We explain their origin by an in-depth analysis of the active quenching process. To mitigate the effects of these imperfections, a custom-built detection system is designed using a novel active quenching circuit. Its performance is compared against two commercial detectors in a fast quantum key distribution system with hyper-entangled photons and a random number generator.
3D registration of surfaces for change detection in medical images
NASA Astrophysics Data System (ADS)
Fisher, Elizabeth; van der Stelt, Paul F.; Dunn, Stanley M.
1997-04-01
Spatial registration of data sets is essential for quantifying changes that take place over time in cases where the position of a patient with respect to the sensor has been altered. Changes within the region of interest can be problematic for automatic methods of registration. This research addresses the problem of automatic 3D registration of surfaces derived from serial, single-modality images for the purpose of quantifying changes over time. The registration algorithm utilizes motion-invariant, curvature-based geometric properties to derive an approximation to an initial rigid transformation to align two image sets. Following the initial registration, changed portions of the surface are detected and excluded before refining the transformation parameters. The performance of the algorithm was tested using simulation experiments. To quantitatively assess the registration, random noise at various levels, known rigid motion transformations, and analytically-defined volume changes were applied to the initial surface data acquired from models of teeth. These simulation experiments demonstrated that the calculated transformation parameters were accurate to within 1.2 percent of the total applied rotation and 2.9 percent of the total applied translation, even at the highest applied noise levels and simulated wear values.
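The initial rigid transformation between two matched point sets can be estimated in closed form by the SVD-based Kabsch/Procrustes method; a minimal sketch (generic, assuming the curvature-based correspondence step has already produced matched point sets P and Q):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t
    (Kabsch method); P and Q are (n, 3) matched point sets."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```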
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
Experimental study and simulation of space charge stimulated discharge
NASA Astrophysics Data System (ADS)
Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.
2002-11-01
The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.
Pancoska, Petr; Moravek, Zdenek; Moll, Ute M
2004-01-01
Nucleic acids are molecules of choice for both established and emerging nanoscale technologies. These technologies benefit from large functional densities of 'DNA processing elements' that can be readily manufactured. To achieve the desired functionality, polynucleotide sequences are currently designed by a process that involves tedious and laborious filtering of potential candidates against a series of requirements and parameters. Here, we present a complete novel methodology for the rapid rational design of large sets of DNA sequences. This method allows for the direct implementation of very complex and detailed requirements for the generated sequences, thus avoiding 'brute force' filtering. At the same time, these sequences have narrow distributions of melting temperatures. The molecular part of the design process can be done without computer assistance, using an efficient 'human engineering' approach by drawing a single blueprint graph that represents all generated sequences. Moreover, the method eliminates the necessity for extensive thermodynamic calculations. Melting temperature can be calculated only once (or not at all). In addition, the isostability of the sequences is independent of the selection of a particular set of thermodynamic parameters. Applications are presented for DNA sequence designs for microarrays, universal microarray zip sequences and electron transfer experiments.
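For context on the melting-temperature calculations mentioned above, the simplest Tm estimator for short oligos is the Wallace rule, Tm ≈ 2(A+T) + 4(G+C) °C. This is a textbook approximation, not the isostability model of the paper:

```python
def wallace_tm(seq):
    """Wallace rule: Tm ~ 2*(A+T) + 4*(G+C) degrees C. Reasonable
    only for short (~14-20 nt) oligos; nearest-neighbor
    thermodynamic models are used for serious design work."""
    seq = seq.upper()
    return (2 * (seq.count("A") + seq.count("T"))
            + 4 * (seq.count("G") + seq.count("C")))
```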
A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.
Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar
2018-05-01
This paper presents a signal modeling-based new methodology of automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of discrete cosine transform. The proposed filterbank decomposes EEG signals into its respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely, fractional Brownian motion and fractional Gaussian noises. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the value of Hurst exponent and autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure free interval), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both data sets. Thus, this paper proposes a novel signal model for EEG data that best captures the attributes of these signals and hence helps boost the classification accuracy of seizure and seizure-free epochs.
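The Hurst exponent that parameterizes these self-similar models can be estimated from the scaling of lagged-difference variability; a minimal sketch of one common estimator (generic, not the authors' exact procedure):

```python
import numpy as np

def hurst_exponent(x, lags=range(2, 20)):
    """Estimate H from std(x[t+k] - x[t]) ~ k**H, the scaling law
    obeyed by fractional-Brownian-motion-like signals."""
    lags = list(lags)
    tau = [np.std(x[k:] - x[:-k]) for k in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return float(slope)
```

An ordinary random walk (cumulative sum of white noise) should give H near 0.5; persistent signals give H > 0.5 and anti-persistent ones H < 0.5.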
GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA
Zheng, Qi; Peng, Limin; He, Xuming
2015-01-01
Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424
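Quantile regression at level τ minimizes the pinball (check) loss; a minimal implementation of that loss (a standard definition, illustrative rather than the paper's penalized estimator):

```python
import numpy as np

def pinball_loss(y, yhat, tau):
    """Pinball (check) loss for quantile level tau in (0, 1); its
    minimizer over constant predictions is the tau-th sample
    quantile of y. At tau = 0.5 it is half the absolute error."""
    r = np.asarray(y, dtype=float) - np.asarray(yhat, dtype=float)
    return float(np.mean(np.maximum(tau * r, (tau - 1) * r)))
```

The asymmetry is the point: with τ = 0.9, under-prediction (positive residual) costs 0.9 per unit while over-prediction costs only 0.1.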
Consensus Classification Using Non-Optimized Classifiers.
Brownfield, Brett; Lemos, Tony; Kalivas, John H
2018-04-03
Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample are then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method, that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured at 13 unique chemical and physical variables. In all cases, fusion of nonoptimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
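The majority-vote fusion step mentioned above is simple to state in code; a minimal sketch (generic vote fusion, not the Article's full consensus scheme):

```python
from collections import Counter

def majority_vote(labels):
    """Fuse the labels assigned to one sample by several
    classifiers. Ties resolve to the label that first reached the
    maximum count, so a deliberate tie-break rule is advisable in
    real use."""
    return Counter(labels).most_common(1)[0][0]
```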
The study on mechanism of holographic recording in photopolymer with dual monomer
NASA Astrophysics Data System (ADS)
Zhai, Qianli; Tao, Shiquan; Wang, Dayong
2010-06-01
In this paper we study the dynamics of refractive index modulation in a dual-monomer photopolymer through grating growth at different experimental stages. By using different sets of parameters for vinyl monomers (NVC) and acrylate monomers (POEA) respectively, a composite dual-monomer model, extended from the uniform post-exposure (UPE) model for single-monomer photopolymer, is proposed and fits the experimental data very well. Further discussions indicate that the dominant contribution to the total index modulation is made by NVC monomers, and a brief explanation of the function of POEA monomers is given.
Spin and orbital exchange interactions from Dynamical Mean Field Theory
NASA Astrophysics Data System (ADS)
Secchi, A.; Lichtenstein, A. I.; Katsnelson, M. I.
2016-02-01
We derive a set of equations expressing the parameters of the magnetic interactions characterizing a strongly correlated electronic system in terms of single-electron Green's functions and self-energies. This allows us to establish a mapping between the initial electronic system and a spin model including up to quadratic interactions between the effective spins, with a general interaction (exchange) tensor that accounts for anisotropic exchange, Dzyaloshinskii-Moriya interaction and other symmetric terms such as dipole-dipole interaction. We present the formulas in a format that can be used for computations via Dynamical Mean Field Theory algorithms.
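The quadratic spin model with a general exchange tensor described above has the standard form (a generic decomposition, with symbols chosen here for illustration, not taken from the paper):

```latex
H = \sum_{ij} \mathbf{S}_i^{\top} \, \mathcal{J}_{ij} \, \mathbf{S}_j ,
\qquad
\mathcal{J}_{ij} = J_{ij}\,\mathbb{1}
  + \mathcal{J}_{ij}^{\mathrm{sym}}
  + \mathcal{J}_{ij}^{\mathrm{asym}} ,
```

where J_ij is the isotropic Heisenberg exchange, the antisymmetric part is equivalent to the Dzyaloshinskii-Moriya term D_ij · (S_i × S_j), and the traceless symmetric part collects anisotropic contributions such as the dipole-dipole interaction.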
Atomic Calculations with a One-Parameter, Single Integral Method.
ERIC Educational Resources Information Center
Baretty, Reinaldo; Garcia, Carmelo
1989-01-01
Presents an energy function E(p) containing a single integral and one variational parameter, alpha. Represents all two-electron integrals within the local density approximation as a single integral. Identifies this as a simple treatment for use in an introductory quantum mechanics course. (MVL)
Microcomputer-based classification of environmental data in municipal areas
NASA Astrophysics Data System (ADS)
Thiergärtner, H.
1995-10-01
Multivariate data-processing methods used in mineral resource identification can be used to classify urban regions. Using elements of expert systems, geographical information systems, as well as known classification and prognosis systems, it is possible to outline a single model that consists of resistant and temporary parts of a knowledge base, including graphical input and output handling, and of resistant and temporary elements of a bank of methods and algorithms. Whereas decision rules created by experts are stored in expert systems directly, powerful classification rules in the form of resistant but latent (implicit) decision algorithms may be implemented in the suggested model. The latent functions are transformed into temporary explicit decision rules by learning processes depending on the actual task(s), parameter set(s), pixel selection(s), and expert control(s). This takes place in both supervised and unsupervised classification of multivariately described pixel sets representing municipal subareas. The model is outlined briefly and illustrated by results obtained in a target area covering a part of the city of Berlin (Germany).
Algorithms and Complexity Results for Genome Mapping Problems.
Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric
2017-01-01
Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.
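The simplest feasibility condition behind the tractability result for adjacencies alone is a degree constraint: each marker extremity may participate in at most one adjacency if the map is to decompose into linear and/or circular chromosomal fragments. A sketch of that check (illustrative; extremity labels such as "1h"/"1t" for the head and tail of marker 1 are a hypothetical encoding, and repeat spanning intervals are not handled here):

```python
from collections import Counter

def admits_genome(adjacencies):
    """Adjacencies over marker extremities admit a decomposition
    into linear/circular fragments iff no extremity is used more
    than once (every vertex of the adjacency graph has degree <= 1
    within the adjacency set)."""
    usage = Counter()
    for a, b in adjacencies:
        usage[a] += 1
        usage[b] += 1
    return all(count <= 1 for count in usage.values())
```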
A Flexible Approach for the Statistical Visualization of Ensemble Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, K.; Wilson, A.; Bremer, P.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
Passive autonomous infrared sensor technology
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz
1987-10-01
This study was conducted in response to the DoD's need for establishing an understanding of algorithm modules for passive infrared sensors and seekers and establishing a standardized systematic procedure for applying this understanding to DoD applications. We quantified the performance of Honeywell's Background Adaptive Convexity Operator Region Extractor (BACORE) detection and segmentation modules, as functions of a set of image metrics for both single-frame and multiframe processing. We established an understanding of the behavior of BACORE's internal parameters. We characterized several sets of stationary and sequential imagery and extracted TIR squared, TBIR squared, ESR, and range for each target. We generated a set of performance models for multi-frame processing BACORE that could be used to predict the behavior of BACORE in image metric space. A similar study was conducted for another of Honeywell's segmentors, namely the Texture Boundary Locator (TBL), and its performance was quantified. Finally, a comparison of TBL and BACORE on the same database and same number of frames was made.
MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields
NASA Astrophysics Data System (ADS)
Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria
2015-08-01
We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling-law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to study real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source-distributions for a fractional degree are not confined in a bounded region, similarly to some integer-degree models, such as the infinite line mass and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales, by a simple search, at any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity-degrees and depth estimations, which we call a multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example.
These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity-degree, to obtain valid estimates of the source parameters in a consistent theoretical framework, so overcoming the limitations imposed by global-homogeneity to widespread methods, such as Euler deconvolution.
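The homogeneity law that the abstract generalizes can be stated compactly. As a reminder (standard potential-field theory, not notation taken from this paper), a field homogeneous of degree n about a source point (x0, y0, z0) satisfies the scaling relation below, and differentiating it at t = 1 gives Euler's equation, which Euler deconvolution inverts for source position and degree:

```latex
f\bigl(t(x-x_0),\, t(y-y_0),\, t(z-z_0)\bigr) = t^{-n}\, f\bigl(x-x_0,\, y-y_0,\, z-z_0\bigr)
% differentiate with respect to t and set t = 1:
(x-x_0)\frac{\partial f}{\partial x} + (y-y_0)\frac{\partial f}{\partial y} + (z-z_0)\frac{\partial f}{\partial z} = -n\, f
```

The multihomogeneous model of the paper amounts to letting the recovered degree n (possibly fractional) vary with the local window W.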
Tannenbaum, Dana P; Hoffman, Douglas; Lemij, Hans G; Garway-Heath, David F; Greenfield, David S; Caprioli, Joseph
2004-02-01
The presently available scanning laser polarimeter (SLP) has a fixed corneal compensator (FCC) that neutralizes corneal birefringence only in eyes with birefringence that matches the population mode. A prototype variable corneal compensator (VCC) provides neutralization of individual corneal birefringence based on individual macular retardation patterns. The aim of this study was to evaluate the relative ability of the SLP with the FCC and with the VCC to discriminate between normal and glaucomatous eyes. Prospective, nonrandomized, comparative case series. Algorithm-generating set consisting of 56 normal eyes and 55 glaucomatous eyes and an independent data set consisting of 83 normal eyes and 56 glaucomatous eyes. Sixteen retardation measurements were obtained with the SLP with the FCC and the VCC from all subjects. Dependency of parameters on age, gender, ethnic origin, and eye side was sought. Logistic regression was used to evaluate how well the various parameters could detect glaucoma. Discriminant functions were generated, and the area under the receiver operating characteristic (ROC) curve was determined. Discrimination between normal and glaucomatous eyes on the basis of single parameters was significantly better with the VCC than with the FCC for 6 retardation parameters: nasal average (P = 0.0003), superior maximum (P = 0.0003), ellipse average (P = 0.002), average thickness (P = 0.003), superior average (P = 0.010), and inferior average (P = 0.010). Discriminant analysis identified the optimal combination of parameters for the FCC and for the VCC. When the discriminant functions were applied to the independent data set, areas under the ROC curve were 0.84 for the FCC and 0.90 for the VCC (P<0.021). When the discriminant functions were applied to a subset of patients with early visual field loss, areas under the ROC curve were 0.82 for the FCC and 0.90 for the VCC (P<0.016). 
Individual correction for corneal birefringence with the VCC significantly improved the ability of the SLP to distinguish between normal and glaucomatous eyes and enabled detection of patients with early glaucoma.
NASA Technical Reports Server (NTRS)
Zugrav, M. Ittu; Carswell, William E.; Haulenbeek, Glen B.; Wessling, Francis C.
2001-01-01
This work is specifically focused on explaining previous results obtained for the crystal growth of an organic material in a reduced gravity environment. On STS-59, in April 1994, two experiments were conducted with N,N-dimethyl-p-(2,2-dicyanovinyl) aniline (DCVA), a promising nonlinear optical (NLO) material. The space experiments were set to reproduce laboratory experiments that yielded small, bulk crystals of DCVA. The results of the flight experiment, however, were surprising. Rather than producing a bulk single crystal, the experiment produced two high-quality, single-crystalline thin films. This result was even more intriguing considering that thin films are more desirable for NLO applications than bulk single crystals. Repeated attempts on the ground to reproduce these results were fruitless. A second set of flight experiments was conducted on STS-69 in September 1995. This time eight DCVA experiments were flown, with each of seven experiments containing a slight change from the reference experiment. The reference experiment was programmed with growth conditions identical to those of the STS-59 mission. The slight variations in each of the other seven were an attempt to understand what particular parameter was responsible for the preference for thin-film growth over bulk crystal growth in microgravity. Once again the results were surprising: in all eight cases thin films were grown again, albeit with varying quality. We were thus faced with a phenomenon that not only takes place in microgravity, but also is very robust, resisting all attempts to force the growth of bulk single crystals.
Khvostichenko, Daria; Choi, Andrew; Boulatov, Roman
2008-04-24
We investigated the effect of several computational variables, including the choice of the basis set, application of symmetry constraints, and zero-point energy (ZPE) corrections, on the structural parameters and predicted ground electronic state of model 5-coordinate hemes (iron(II) porphines axially coordinated by a single imidazole or 2-methylimidazole). We studied the performance of B3LYP and B3PW91 with eight Pople-style basis sets (up to 6-311+G*) and B97-1, OLYP, and TPSS functionals with 6-31G and 6-31G* basis sets. Only hybrid functionals B3LYP, B3PW91, and B97-1 reproduced the quintet ground state of the model hemes. With a given functional, the choice of the basis set caused up to 2.7 kcal/mol variation of the quintet-triplet electronic energy gap (ΔEel), in several cases resulting in the inversion of the sign of ΔEel. Single-point energy calculations with triple-zeta basis sets of the Pople (up to 6-311++G(2d,2p)), Ahlrichs (TZVP and TZVPP), and Dunning (cc-pVTZ) families showed the same trend. The zero-point energy of the quintet state was approximately 1 kcal/mol lower than that of the triplet, and accounting for ZPE corrections was crucial for establishing the ground state if the electronic energy of the triplet state was approximately 1 kcal/mol less than that of the quintet. Within a given model chemistry, effects of symmetry constraints and of a "tense" structure of the iron porphine fragment coordinated to 2-methylimidazole on ΔEel were limited to 0.3 kcal/mol. For both model hemes the best agreement with crystallographic structural data was achieved with small 6-31G and 6-31G* basis sets. Deviation of the computed frequency of the Fe-Im stretching mode from the experimental value decreased with basis set in the order: nonaugmented basis sets, basis sets with polarization functions, and basis sets with polarization and diffuse functions.
Contraction of Pople-style basis sets (double-zeta or triple-zeta) affected the results insignificantly for iron(II) porphyrin coordinated with imidazole. Poor performance of a "locally dense" basis set with a large number of basis functions on the Fe center was observed in calculation of quintet-triplet gaps. Our results lead to a series of suggestions for density functional theory calculations of quintet-triplet energy gaps in ferrohemes with a single axial imidazole; these suggestions are potentially applicable for other transition-metal complexes.
Simple Model for the Benzene-Hexafluorobenzene Interaction
Tillack, Andreas F.; Robinson, Bruce H.
2017-06-05
While the experimental intermolecular distance distribution functions of pure benzene and pure hexafluorobenzene are well described by transferable all-atom force fields, the interaction between the two molecules (in a 1:1 mixture) is not well simulated. We demonstrate that the parameters of the transferable force fields are adequate to describe the intermolecular distance distribution if the charges are replaced by a set of charges that are not located at the atoms. Here, the simplest model that well describes the experimental distance distribution, between benzene and hexafluorobenzene, is that of a single ellipsoid for each molecule, representing the van der Waals interactions, and a set of three point charges (on the axis perpendicular to the arene plane) which give the same quadrupole moment as do the all-atom charges from the transferable force fields.
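The three-point-charge construction can be checked with a few lines of arithmetic. The sketch below (our own illustrative charge and spacing values, not the paper's fitted parameters) places +q at ±d on the out-of-plane axis and -2q at the origin, which leaves the monopole and dipole zero while producing a nonzero quadrupole:

```python
def axial_three_charge_quadrupole(q, d):
    """Monopole, z-dipole, and traceless quadrupole Theta_zz of three point
    charges on the out-of-plane axis: +q at z=+d, -2q at z=0, +q at z=-d.
    (Illustrative construction; the paper fits charges to reproduce the
    all-atom quadrupole moment.)"""
    charges = [(q, d), (-2.0 * q, 0.0), (q, -d)]
    monopole = sum(c for c, _ in charges)        # net charge: 0 by design
    dipole = sum(c * z for c, z in charges)      # z-dipole: 0 by symmetry
    # Theta_zz = sum q_i (3 z_i^2 - r_i^2); on the axis r^2 = z^2
    theta_zz = sum(c * (3.0 * z * z - z * z) for c, z in charges)
    return monopole, dipole, theta_zz
```

A benzene-like (negative) quadrupole is obtained with q < 0, a hexafluorobenzene-like (positive) one with q > 0; only Theta_zz distinguishes the two charge sets.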
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
Chase, J Geoffrey; Lambermont, Bernard; Starfinger, Christina; Hann, Christopher E; Shaw, Geoffrey M; Ghuysen, Alexandre; Kolh, Philippe; Dauby, Pierre C; Desaive, Thomas
2011-01-01
A cardiovascular system (CVS) model and parameter identification method have previously been validated for identifying different cardiac and circulatory dysfunctions in simulation and using porcine models of pulmonary embolism, hypovolemia with PEEP titrations and induced endotoxic shock. However, these studies required both left and right heart catheters to collect the data required for subject-specific monitoring and diagnosis, a maximally invasive data set in a critical care setting, although it does occur in practice. Hence, use of this model-based diagnostic would require significant additional invasive sensors for some subjects, which is unacceptable in some, if not all, cases. The main goal of this study is to prove the concept of using only measurements from one side of the heart (right) in a 'minimal' data set to identify an effective patient-specific model that can capture key clinical trends in endotoxic shock. This research extends existing methods to a reduced and minimal data set requiring only a single catheter, reducing the risk of infection and other complications, a very common, typical situation in critical care patients, particularly after cardiac surgery. The extended methods, and the assumptions on which they are founded, are developed and presented in a case study on the identification of pig-specific parameters in an animal model of induced endotoxic shock. This case study is used to define the impact of this minimal data set on the quality and accuracy of the model application for monitoring, detecting and diagnosing septic shock. Six anesthetized healthy pigs weighing 20-30 kg received a 0.5 mg kg(-1) endotoxin infusion over a period of 30 min from T0 to T30. For this research, only right heart measurements were obtained.
Errors for the identified model are within 8% when the model is identified from data, re-simulated and then compared to the experimentally measured data, including measurements not used in the identification process for validation. Importantly, all identified parameter trends match physiologically, clinically and experimentally expected changes, indicating that no diagnostic power is lost. This work represents a further validation, in advance of testing with human subjects, of this model-based approach to cardiovascular diagnosis and therapy guidance in monitoring endotoxic disease states. The results and methods obtained can be readily extended from this case study to the other animal model results presented previously. Overall, these results provide further support for prospective, proof-of-concept clinical testing with humans.
Automated analysis of Physarum network structure and dynamics
NASA Astrophysics Data System (ADS)
Fricker, Mark D.; Akita, Dai; Heaton, Luke LM; Jones, Nick; Obara, Boguslaw; Nakagaki, Toshiyuki
2017-06-01
We evaluate different ridge-enhancement and segmentation methods to automatically extract the network architecture from time-series of Physarum plasmodia withdrawing from an arena via a single exit. Whilst all methods gave reasonable results, judged by precision-recall analysis against a ground-truth skeleton, the mean phase angle (Feature Type) from intensity-independent, phase-congruency edge enhancement and watershed segmentation was the most robust to variation in threshold parameters. The resultant single pixel-wide segmented skeleton was converted to a graph representation as a set of weighted adjacency matrices containing the physical dimensions of each vein, and the inter-vein regions. We encapsulate the complete image processing and network analysis pipeline in a downloadable software package, and provide an extensive set of metrics that characterise the network structure, including hierarchical loop decomposition to analyse the nested structure of the developing network. In addition, the change in volume for each vein and intervening plasmodial sheet was used to predict the net flow across the network. The scaling relationships between predicted current, speed and shear force with vein radius were consistent with predictions from Murray’s law. This work was presented at PhysNet 2015.
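Murray's law, against which the predicted flows were checked, balances metabolic and pumping costs. In its usual form (a standard result, not an equation taken from this paper), the volumetric flow through a vessel scales as the cube of its radius, which for Poiseuille flow makes the wall shear stress uniform across the network:

```latex
Q \propto r^{3}, \qquad \tau_{\mathrm{wall}} = \frac{4\,\mu\, Q}{\pi r^{3}} = \text{const.}
```

Consistency of the vein-by-vein predicted currents with this scaling is what the closing sentence of the abstract refers to.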
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
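A toy version conveys the mechanics of ensemble-based parameter estimation: an ensemble of guesses for a scalar parameter is repeatedly nudged toward observations through an ensemble Kalman gain. This is a minimal sketch with an invented scalar model and made-up numbers (the study's system is a full coupled GCM assimilating monthly SST/SSS), using the perturbed-observation form of the update:

```python
import numpy as np

def enkf_parameter_estimation(p_true=2.0, n_ens=50, n_steps=40,
                              obs_err=0.1, seed=0):
    """Estimate a scalar parameter (think: solar penetration depth) by
    assimilating noisy observations of a model output that depends on it."""
    rng = np.random.default_rng(seed)
    p = rng.normal(1.0, 0.5, n_ens)               # prior parameter ensemble
    for t in range(n_steps):
        forcing = 1.0 + 0.5 * np.sin(0.3 * t)     # known time-varying forcing
        h = p * forcing                           # ensemble of predicted obs
        y_obs = p_true * forcing + rng.normal(0.0, obs_err)
        # Kalman gain built purely from ensemble statistics
        gain = np.cov(p, h)[0, 1] / (np.var(h, ddof=1) + obs_err**2)
        # perturbed-observation analysis update of every member
        p = p + gain * (y_obs + rng.normal(0.0, obs_err, n_ens) - h)
    return p.mean(), p.std()
```

Over the assimilation window the ensemble mean converges toward the true parameter while the ensemble spread contracts, mirroring the >90% error reduction the abstract reports for the SPD.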
Köhn, Andreas
2010-11-07
The coupled-cluster singles and doubles method augmented with single Slater-type correlation factors (CCSD-F12) determined by the cusp conditions (also denoted as the SP ansatz) yields results close to the basis set limit with only small overhead compared to conventional CCSD. Quantitative calculations on many-electron systems, however, require inclusion of at least the effect of connected triple excitations. In this contribution, the recently proposed [A. Köhn, J. Chem. Phys. 130, 131101 (2009)] extended SP ansatz and its application to the noniterative triples correction CCSD(T) is reviewed. The approach allows explicit correlation to be included in connected triple excitations without introducing additional unknown parameters. The explicit expressions are presented and analyzed, and possible simplifications to arrive at a computationally efficient scheme are suggested. Numerical tests based on an implementation obtained by an automated approach are presented. Using a partial wave expansion for the neon atom, we can show that the proposed ansatz indeed leads to the expected (Lmax+1)^-7 convergence of the noniterative triples correction, where Lmax is the maximum angular momentum in the orbital expansion. Further results are reported for a test set of 29 molecules, employing Peterson's F12-optimized basis sets. We find that the customary approach of using the conventional noniterative triples correction on top of a CCSD-F12 calculation leads to significant basis set errors. This, however, is not always directly visible in total CCSD(T) energies due to fortuitous error compensation. The new approach offers a thoroughly explicitly correlated CCSD(T)-F12 method with improved basis set convergence of the triples contributions to both total and relative energies.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional inverse-modeling methods can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
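The inner step that the abstract accelerates is the solution of the damped normal equations for a sequence of damping parameters. The sketch below shows that inner loop on a dense toy problem (no Krylov projection or subspace recycling; those are precisely the paper's contribution and are omitted here), with function and parameter names of our own choosing:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0,
                        lambdas=(1e-4, 1e-2, 1.0, 1e2), n_iter=100):
    """Basic Levenberg-Marquardt: at each outer iteration, try several
    damping parameters and keep the best candidate. Each trial solves the
    damped normal equations (J^T J + lambda I) step = -J^T r directly;
    the paper instead reuses one Krylov subspace across all lambdas."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        best_p, best_cost = p, r @ r
        for lam in lambdas:
            A = J.T @ J + lam * np.eye(len(p))     # damped normal equations
            cand = p + np.linalg.solve(A, -J.T @ r)
            rc = residual(cand)
            if rc @ rc < best_cost:
                best_p, best_cost = cand, rc @ rc
        p = best_p
    return p
```

Because every damping parameter requires its own linear solve, the cost of this loop is what makes Krylov-subspace recycling attractive at scale.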
Zhu, Wuming; Trickey, S B
2017-12-28
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths of a millihartree to a few millihartrees for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
Constraints on texture zero and cofactor zero models for neutrino mass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whisnant, K.; Liao, Jiajun; Marfatia, D.
2014-06-24
Imposing a texture or cofactor zero on the neutrino mass matrix reduces the number of independent parameters from nine to seven. Since five parameters have been measured, only two independent parameters would remain in such models. We find the allowed regions for single texture zero and single cofactor zero models. We also find strong similarities between single texture zero models with one mass hierarchy and single cofactor zero models with the opposite mass hierarchy. We show that this correspondence can be generalized to texture-zero and cofactor-zero models with the same homogeneous constraints on the elements and cofactors.
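The parameter counting can be made explicit. The symmetric Majorana mass matrix has six independent complex entries; one vanishing entry (or cofactor) imposes two real conditions, reducing the nine physical parameters to seven. In standard notation (ours, not necessarily the paper's), a cofactor zero is equivalent to a texture zero of the inverse mass matrix:

```latex
(M_\nu)_{\alpha\beta} = 0 \quad \text{(texture zero)}, \qquad
C_{\alpha\beta} \equiv \operatorname{cof}(M_\nu)_{\alpha\beta} = 0
\;\Longleftrightarrow\; \bigl(M_\nu^{-1}\bigr)_{\alpha\beta} = 0 \quad \text{(cofactor zero)} .
```

This inverse-matrix view is one way to see why texture-zero and cofactor-zero models with opposite mass hierarchies can behave so similarly.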
Effective Fragment Potential Method for H-Bonding: How To Obtain Parameters for Nonrigid Fragments.
Dubinets, Nikita; Slipchenko, Lyudmila V
2017-07-20
Accuracy of the effective fragment potential (EFP) method was explored for describing intermolecular interaction energies in three strongly H-bonded dimers, the formic acid, formamide, and formamidine dimers, which are part of the HBC6 database of noncovalent interactions. Monomer geometries in these dimers change significantly as a function of intermonomer separation. Several EFP schemes were considered, in which fragment parameters were prepared for a fragment in its gas-phase geometry or recomputed for each unique fragment geometry. Additionally, a scheme in which gas-phase fragment parameters are shifted according to relaxed fragment geometries is introduced and tested. EFP data are compared against the coupled cluster with single, double, and perturbative triple excitations (CCSD(T)) method in a complete basis set (CBS) and the symmetry adapted perturbation theory (SAPT). All considered EFP schemes provide good agreement with CCSD(T)/CBS for binding energies at equilibrium separations, with discrepancies not exceeding 2 kcal/mol. However, only the schemes that utilize relaxed fragment geometries remain qualitatively correct at shorter than equilibrium intermolecular distances. The EFP scheme with shifted parameters behaves quantitatively similarly to the scheme in which parameters are recomputed for each monomer geometry and thus is recommended as a computationally efficient approach for large-scale EFP simulations of flexible systems.
Scaling of plasma-body interactions in low Earth orbit
NASA Astrophysics Data System (ADS)
Capon, C. J.; Brown, M.; Boyce, R. R.
2017-04-01
This paper derives the generalised set of dimensionless parameters that scale the interaction of an unmagnetised multi-species plasma with an arbitrarily charged object; the application in this work is the interaction of the ionosphere with Low Earth Orbiting (LEO) objects. We find that a plasma with K ion species can be described by 1 + 4K independent dimensionless parameters. These parameters govern the deflection and coupling of ion species k, the relative electrical shielding of the body, electron energy, and scaling of temporal effects. The general shielding length λϕ is introduced, which reduces to the Debye length in the high-temperature (weakly coupled) limit. The ability of the scaling parameters to predict the self-similar transformations of single and multi-species plasma interactions is demonstrated numerically using pdFOAM, an electrostatic Particle-in-Cell / Direct Simulation Monte Carlo code. The presented scaling relationships represent a significant generalisation of past work, linking low and high voltage plasma phenomena. Further, the presented parameters capture the scaling of multi-species plasmas with multiply charged ions, demonstrating previously unreported scaling relationship transformations. The implications of this work are not limited to LEO plasma-body interactions but apply to processes governed by the Vlasov-Maxwell equations and represent a framework upon which to incorporate the scaling of additional phenomena, e.g., magnetism and charging.
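For reference, the weakly coupled limit mentioned in the abstract is the familiar electron Debye length (standard definition; the exact form of the paper's general shielding length λϕ is not reproduced here):

```latex
\lambda_D = \sqrt{\frac{\varepsilon_0\, k_B\, T_e}{n_e\, e^{2}}}
```

The generalisation λϕ is what lets the same dimensionless framework cover both low-voltage (Debye-like) and high-voltage sheath regimes.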
Feathering instability of spiral arms. II. Parameter study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Wing-Kit, E-mail: wklee@asiaa.sinica.edu.tw; Institute of Astronomy and Astrophysics, Academia Sinica, Taipei 115, Taiwan
2014-09-10
We report the results of a parameter study of the feathering stability in the galactic spiral arms. A two-dimensional, razor-thin magnetized self-gravitating gas disk with an imposed two-armed stellar spiral structure is considered. Using the formulation developed previously by Lee and Shu, a linear stability analysis of the spiral shock is performed in a localized Cartesian geometry. Results of the parameter study of the base state with a spiral shock are also presented. The single-mode feathering instability that leads to growing perturbations may explain the feathering phenomenon found in nearby spiral galaxies. The self-gravity of the gas, characterized by its average surface density, is an important parameter that (1) shifts the spiral shock farther downstream and (2) increases the growth rate and decreases the characteristic spacing of the feathering structure due to the instability. On the other hand, while the magnetic field suppresses the velocity fluctuation associated with the feathers, it does not strongly affect their growth rate. Using a set of typical parameters of the grand-design spiral galaxy M51 at 2 kpc from the center, the spacing of the feathers with the maximum growth rate is found to be 530 pc, which agrees with previous observational studies.
Morphological estimators on Sunyaev-Zel'dovich maps of MUSIC clusters of galaxies
NASA Astrophysics Data System (ADS)
Cialone, Giammarco; De Petris, Marco; Sembolini, Federico; Yepes, Gustavo; Baldi, Anna Silvia; Rasia, Elena
2018-06-01
The determination of the morphology of galaxy clusters has important repercussions for cosmological and astrophysical studies of these objects. In this paper, we address the morphological characterization of synthetic maps of the Sunyaev-Zel'dovich (SZ) effect for a sample of 258 massive clusters (Mvir > 5 × 10^14 h^-1 M⊙ at z = 0), extracted from the MUSIC hydrodynamical simulations. Specifically, we use five known morphological parameters (which are already used in X-ray) and two newly introduced ones, and we combine them in a single parameter. We analyse two sets of simulations obtained with different prescriptions of the gas physics (non-radiative and with cooling, star formation and stellar feedback) at four redshifts between 0.43 and 0.82. For each parameter, we test its stability and efficiency in discriminating the true cluster dynamical state, measured by theoretical indicators. The combined parameter is more efficient at discriminating between relaxed and disturbed clusters. This parameter has a mild correlation with the hydrostatic mass (~0.3) and a strong correlation (~0.8) with the offset between the SZ centroid and the cluster centre of mass. The latter quantity is, thus, the most accessible and efficient indicator of the dynamical state for SZ studies.
NASA Astrophysics Data System (ADS)
Schutt, D.; Breidt, J.; Corbalan Castejon, A.; Witt, D. R.
2017-12-01
Shear wave splitting is a commonly used and powerful method for constraining such phenomena as lithospheric strain history or asthenospheric flow. However, a number of challenges with the statistics of shear wave splitting have been noted. This creates difficulties in assessing whether two separate measurements are statistically similar or are indicating real differences in anisotropic structure, as well as in creating proper station-averaged sets of parameters for more complex situations such as multiple or dipping layers of anisotropy. We present a new method for calculating the most likely splitting parameters using the Menke and Levin [2003] method of cross-convolution. The Menke and Levin method is used because it can more readily be applied to a wider range of anisotropic scenarios than the commonly used Silver and Chan [1991] technique. In our approach, we derive a formula for the spectral density of a function of the microseismic noise and the impulse response of the correct anisotropic model that holds for the true anisotropic model parameters. This is compared to the spectral density of the observed signal convolved with the impulse response for an estimated set of anisotropic parameters. The most likely parameters are found when the former and latter spectral densities are the same. By using the Whittle likelihood to compare the two spectral densities, a likelihood grid for all possible anisotropic parameter values is generated. Using bootstrapping, the uncertainty and covariance between the various anisotropic parameters can be evaluated. We will show that this works for a single layer of anisotropy and a vertically incident ray, and discuss its usefulness for more complex cases. The method shows great promise for calculating multiple-layer anisotropy parameters with proper assessment of uncertainty. References: Menke, W., and Levin, V. 2003.
The cross-convolution method for interpreting SKS splitting observations, with application to one and two-layer anisotropic earth models. Geophysical Journal International, 154: 379-392. doi:10.1046/j.1365-246X.2003.01937.x. Silver, P.G., and Chan, W.W. 1991. Shear Wave Splitting and Subcontinental Mantle Deformation. Journal of Geophysical Research, 96: 429-454. doi:10.1029/91JB00899.
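The core statistical device in the abstract above is the Whittle likelihood, which scores a candidate spectral density f(ω; θ) against the periodogram of the data. The sketch below illustrates only that scoring step on a toy problem (white noise, whose spectral density is flat), not the cross-convolution construction itself; the signal length, the parameter grid, and the white-noise model are illustrative assumptions.

```python
import cmath
import math
import random

def periodogram(x):
    """Naive DFT periodogram I(omega_k) = |X_k|^2 / n at the Fourier
    frequencies, skipping the zero frequency."""
    n = len(x)
    out = []
    for k in range(1, n // 2):
        Xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        out.append(abs(Xk) ** 2 / n)
    return out

def whittle_neg_loglik(I, spec):
    """Whittle approximation to the negative log-likelihood:
    sum_k [ log f(omega_k) + I(omega_k) / f(omega_k) ]."""
    return sum(math.log(f) + Ik / f for Ik, f in zip(I, spec))

random.seed(0)
n = 512
true_var = 4.0
x = [random.gauss(0.0, math.sqrt(true_var)) for _ in range(n)]
I = periodogram(x)

# Grid search over the single model parameter; for white noise the spectral
# density is flat, f(omega) = sigma^2, so the fit recovers the variance.
grid = [0.5 + 0.1 * g for g in range(100)]
best = min(grid, key=lambda v: whittle_neg_loglik(I, [v] * len(I)))
```

In the paper's setting the grid runs over anisotropic model parameters and f(ω; θ) comes from the cross-convolved signals, but the scoring and grid-minimization step has the same shape.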
Fast clustering using adaptive density peak detection.
Wang, Xiao-Feng; Xu, Yifan
2017-12-01
Common limitations of clustering methods include slow algorithm convergence, instability with respect to the pre-specification of a number of intrinsic parameters, and a lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameters can then be calculated from equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method by maximizing an average silhouette index. The advantages and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method runs in a single step without iteration and is thus fast, with great potential for application to big data analysis. A user-friendly R package, ADPclust, has been developed for public use.
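The density-peak idea with a kernel density estimate can be sketched as follows; this is a minimal pure-Python illustration of the general technique, not the ADPclust implementation (the bandwidth h, the rho·delta centre criterion, and the fixed cluster count are simplifying choices, whereas ADPclust selects bandwidth and centres adaptively via a silhouette index):

```python
import math

def _dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kde_density(points, h):
    """Local density at each point via a Gaussian kernel with bandwidth h
    (replacing the truncated counting measure of the original algorithm)."""
    n = len(points)
    return [sum(math.exp(-_dist(p, q) ** 2 / (2 * h * h)) for q in points) / n
            for p in points]

def density_peak_cluster(points, h, n_clusters):
    """Density-peak clustering: centres jointly maximise local density (rho)
    and separation from denser points (delta); every other point inherits
    the label of its nearest higher-density neighbour."""
    n = len(points)
    rho = kde_density(points, h)
    order = sorted(range(n), key=lambda i: -rho[i])   # decreasing density
    delta = [0.0] * n        # distance to the nearest higher-density point
    parent = [-1] * n        # index of that nearest higher-density point
    for rank, i in enumerate(order):
        if rank == 0:
            # Convention: the global density peak gets the largest distance
            delta[i] = max(_dist(points[i], q) for q in points)
            continue
        j = min(order[:rank], key=lambda k: _dist(points[i], points[k]))
        parent[i] = j
        delta[i] = _dist(points[i], points[j])
    centers = sorted(range(n), key=lambda i: -(rho[i] * delta[i]))[:n_clusters]
    labels = [-1] * n
    for cid, c in enumerate(centers):
        labels[c] = cid
    for i in order:          # decreasing density: parents are labelled first
        if labels[i] == -1:
            labels[i] = labels[parent[i]]
    return labels
```

For two well-separated blobs, e.g. `density_peak_cluster([(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)], 0.5, 2)`, the first three points receive one label and the last three the other. Note the single pass: no iterative reassignment as in k-means.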
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vera, D.R.; Woodle, E.S.; Stadalnik, R.C.
1989-09-01
Kinetic sensitivity is the ability of a physicochemical parameter to alter the time-activity curve of a radiotracer. The kinetic sensitivity of liver and blood time-activity data resulting from a single bolus injection of (99mTc)galactosyl-neoglycoalbumin ((Tc)NGA) into healthy pigs was examined. Three parameters, hepatic plasma flow scaled as flow per plasma volume, ligand-receptor affinity, and total receptor concentration, were tested using (Tc)NGA injections of various molar doses and affinities. Simultaneous measurements of plasma volume (iodine-125 human serum albumin dilution) and hepatic plasma flow (indocyanine green extraction) were performed during 12 (Tc)NGA studies. Paired data sets demonstrated differences (P(χν²) less than 0.01) in liver and blood time-activity curves in response to changes in each of the tested parameters. We conclude that the (Tc)NGA radiopharmacokinetic system is therefore sensitive to hepatic plasma flow, ligand-receptor affinity, and receptor concentration. In vivo demonstration of kinetic sensitivity permits delineation of the physiologic parameters that determine the biodistribution of a radiopharmaceutical. This delineation is a prerequisite to a valid analytic assessment of receptor biochemistry via kinetic modeling.
Bifurcation and Spike Adding Transition in Chay-Keizer Model
NASA Astrophysics Data System (ADS)
Lu, Bo; Liu, Shenquan; Liu, Xuanliang; Jiang, Xiaofang; Wang, Xiaohui
Electrical bursting is an activity universal in excitable cells such as neurons and various endocrine cells, and it encodes rich physiological information. Because the burst delay indicates that signal integration has reached the threshold for generating an action potential, the number of spikes in a burst may have essential physiological implications, and the transition of bursting in excitable cells is closely associated with bifurcation phenomena. In this paper, we focus on the transition of the spike count per burst of pancreatic β-cells within a mathematical model and on bifurcation phenomena in the Chay-Keizer model, which is used to simulate pancreatic β-cells. Through fast-slow dynamical bifurcation analysis and bi-parameter bifurcation analysis, the local dynamics of the Chay-Keizer system around the Bogdanov-Takens bifurcation are illustrated. The variation in the number of spikes per burst is then discussed under single-parameter and bi-parameter changes. Moreover, results on the number of spikes within a burst are summarized in ISI (interspike interval) sequence diagrams, maxima and minima, and the number of spikes under bi-parameter value changes.
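The ISI sequence diagrams and per-burst spike counts mentioned above are extracted from simulated voltage traces. The sketch below shows only that generic post-processing step (threshold-crossing spike detection and ISI extraction) on a synthetic periodic trace; it is not the Chay-Keizer model itself, and the threshold value and trace are illustrative assumptions.

```python
import math

def spike_times(v, t, threshold):
    """Detect spikes as upward crossings of a voltage threshold."""
    return [t[i] for i in range(1, len(v)) if v[i - 1] < threshold <= v[i]]

def interspike_intervals(times):
    """ISI sequence: successive differences of spike times."""
    return [b - a for a, b in zip(times, times[1:])]

# Synthetic periodic "voltage" trace standing in for a simulated burst
t = [i * 0.01 for i in range(1000)]           # 0 .. 9.99 time units
v = [math.sin(2 * math.pi * ti) for ti in t]  # period-1 oscillation

times = spike_times(v, t, 0.5)   # one upward crossing per period -> 10 spikes
isis = interspike_intervals(times)
```

Applied to a Chay-Keizer trace, the same two functions would yield the spike count per burst and the ISI sequence as a parameter is swept.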
Setting priorities in health care organizations: criteria, processes, and parameters of success.
Gibson, Jennifer L; Martin, Douglas K; Singer, Peter A
2004-09-08
Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
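The ensemble-based parameter estimation described above can be illustrated with a toy perturbed-observation ensemble update for a single scalar parameter. This is a schematic of the general technique, not the authors' coupled assimilation system; the linear observation model y = a·x, the ensemble size, and the error levels are illustrative assumptions.

```python
import random

random.seed(1)
a_true = 3.0          # "truth" used only to generate synthetic observations
obs_err = 0.1         # observation error standard deviation

# Ensemble of guesses for the single model parameter to be estimated
ens = [random.gauss(1.0, 1.0) for _ in range(200)]

for step in range(50):
    x = random.uniform(0.5, 1.5)                     # model state / forcing
    y_obs = a_true * x + random.gauss(0, obs_err)    # synthetic observation
    pred = [a * x for a in ens]                      # ensemble predicted obs
    a_bar = sum(ens) / len(ens)
    p_bar = sum(pred) / len(pred)
    m = len(ens) - 1
    cov_ap = sum((a - a_bar) * (p - p_bar) for a, p in zip(ens, pred)) / m
    var_p = sum((p - p_bar) ** 2 for p in pred) / m
    gain = cov_ap / (var_p + obs_err ** 2)           # Kalman gain
    # Perturbed-observation update pulls each member towards the data
    ens = [a + gain * (y_obs + random.gauss(0, obs_err) - p)
           for a, p in zip(ens, pred)]

a_est = sum(ens) / len(ens)   # ensemble mean converges towards a_true
```

The parameter carries no dynamics of its own; it is corrected purely through its ensemble covariance with the predicted observations, which is the mechanism the abstract's assimilation system exploits.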
Tepekule, Burcu; Uecker, Hildegard; Derungs, Isabel; Frenoy, Antoine; Bonhoeffer, Sebastian
2017-09-01
Multiple treatment strategies are available for empiric antibiotic therapy in hospitals, but neither clinical studies nor theoretical investigations have yielded a clear picture of when each strategy is optimal and why. Extending earlier work by others and ourselves, we present a mathematical model capturing treatment strategies using two drugs, i.e. the multi-drug therapies referred to as cycling, mixing, and combination therapy, as well as monotherapy with either drug. We randomly sample a large parameter space to determine the conditions determining success or failure of these strategies. We find that combination therapy tends to outperform the other treatment strategies. By using linear discriminant analysis and particle swarm optimization, we find that the most important parameters determining success or failure of combination therapy relative to the other treatment strategies are the de novo rate of emergence of double resistance in patients infected with sensitive bacteria and the fitness costs associated with double resistance. The rate at which double resistance is imported into the hospital via patients admitted from the outside community has little influence, as all treatment strategies are affected equally. The parameter sets for which combination therapy fails tend to fall into areas of low biological plausibility, as they are characterised by very high rates of de novo emergence of resistance to both drugs compared to a single drug, and a cost of double resistance considerably smaller than the sum of the costs of single resistance.
NASA Astrophysics Data System (ADS)
Sarac, Abdulhamit; Kysar, Jeffrey W.
2018-02-01
We present a new methodology for experimental validation of single crystal plasticity constitutive relationships based upon spatially resolved measurements of the direction of the Net Burgers Density Vector, which we refer to as the β-field. The β-variable contains information about the active slip systems as well as the ratios of the Geometrically Necessary Dislocation (GND) densities on the active slip systems. We demonstrate the methodology by comparing single crystal plasticity finite element simulations of plane strain wedge indentations into face-centered cubic nickel to detailed experimental measurements of the β-field. We employ the classical Peirce-Asaro-Needleman (PAN) hardening model in this study due to the straightforward physical interpretation of its constitutive parameters, which include the latent hardening ratio, the initial hardening modulus, and the saturation stress. The saturation stress and the initial hardening modulus have a relatively large influence on the β-variable compared to the latent hardening ratio. A change in the initial hardening modulus leads to a shift in the boundaries of plastic slip sectors within the plastically deforming region. As the saturation stress varies, both the magnitude of the β-variable and the boundaries of the plastic slip sectors change. We thus demonstrate that the β-variable is sensitive to changes in the constitutive parameters, making it suitable for validation purposes. We identify a set of constitutive parameters that are consistent with the β-field obtained from the experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otobe, Y.; Chikamatsu, M.
1988-03-08
A method of controlling the fuel supply to an internal combustion engine is described, wherein a quantity of fuel for supply to the engine is determined by correcting a basic value of the quantity of fuel determined as a function of at least one operating parameter of the engine by correction values dependent upon operating conditions of the engine and the determined quantity of fuel is supplied to the engine. The method comprises the steps of: (1) detecting a value of at least one predetermined operating parameter of the engine; (2) manually adjusting a single voltage creating means to set an output voltage therefrom to such a desired value as to compensate for deviation of the air/fuel ratio of a mixture supplied to the engine due to variations in operating characteristics of engines between different production lots or aging changes; (3) determining a value of the predetermined one correction value corresponding to the set desired value of output voltage of the single voltage creating means, and then modifying the thus determined value in response to the detected value of the predetermined at least one operating parameter of the engine during engine operation; and (4) correcting the basic value of the quantity of fuel by the value of the predetermined one correction value having the thus modified value, and the other correction values.
LCP crystallization and X-ray diffraction analysis of VcmN, a MATE transporter from Vibrio cholerae
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kusakizako, Tsukasa; Tanaka, Yoshiki; Hipolito, Christopher J.
A V. cholerae MATE transporter was crystallized using the lipidic cubic phase (LCP) method. X-ray diffraction data sets were collected from single crystals obtained in a sandwich plate and a sitting-drop plate to resolutions of 2.5 and 2.2 Å, respectively. Multidrug and toxic compound extrusion (MATE) transporters, one of the multidrug exporter families, efflux xenobiotics towards the extracellular side of the membrane. Since MATE transporters expressed in bacterial pathogens contribute to multidrug resistance, they are important therapeutic targets. Here, a MATE-transporter homologue from Vibrio cholerae, VcmN, was overexpressed in Escherichia coli, purified and crystallized in lipidic cubic phase (LCP). X-ray diffraction data were collected to 2.5 Å resolution from a single crystal obtained in a sandwich plate. The crystal belonged to space group P2₁2₁2₁, with unit-cell parameters a = 52.3, b = 93.7, c = 100.2 Å. As a result of further LCP crystallization trials, crystals of larger size were obtained using sitting-drop plates. X-ray diffraction data were collected to 2.2 Å resolution from a single crystal obtained in a sitting-drop plate. The crystal belonged to space group P2₁2₁2₁, with unit-cell parameters a = 61.9, b = 91.8, c = 100.9 Å. The present work provides valuable insights into the atomic resolution structure determination of membrane transporters.
Wittke, Andreas; von Stengel, Simon; Hettchen, Michael; Fröhlich, Michael; Giessing, Jürgen; Lell, Michael; Scharf, Michael; Bebenek, Michael; Kohl, Matthias; Kemmler, Wolfgang
2017-01-01
High intensity (resistance exercise) training (HIT), defined as a "single set resistance exercise to muscular failure", is an efficient exercise method that allows people with low time budgets to realize an adequate training stimulus. Although there is an ongoing discussion, recent meta-analyses suggest the significant superiority of multiple set (MST) methods for body composition and strength parameters. The aim of this study is to determine whether additional protein supplementation may increase the effect of a HIT-protocol on body composition and strength to an equal MST-level. One hundred and twenty untrained males 30-50 years old were randomly allocated to three groups: (a) HIT, (b) HIT and protein supplementation (HIT&P), and (c) waiting-control (CG) and (after cross-over) high volume/high-intensity-training (HVHIT). HIT was defined as a "single set to failure protocol" while HVHIT consistently applied two equal sets. Protein supplementation provided an overall intake of 1.5-1.7 g/kg body mass per day. The primary study endpoint was lean body mass (LBM). LBM significantly improved in all exercise groups (p ≤ 0.043); however, only HIT&P and HVHIT differed significantly from control (p ≤ 0.002). HIT diverged significantly from HIT&P (p = 0.017) and nonsignificantly from HVHIT (p = 0.059), while no differences were observed for HIT&P versus HVHIT (p = 0.691). In conclusion, moderate to high protein supplementation significantly increases the effects of a HIT-protocol on LBM in middle-aged untrained males.
NASA Astrophysics Data System (ADS)
Lilichenko, Mark; Kelley, Anne Myers
2001-04-01
A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.
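The hybrid global-plus-local strategy described above can be sketched on a synthetic single-band spectrum. Hedges: the Lorentzian band shape, the GA settings, and the use of coordinate descent in place of Levenberg-Marquardt are all simplifying assumptions, and the neural-network initial guess is omitted; this illustrates only the idea of a broad population search followed by local refinement.

```python
import math
import random

random.seed(2)

def spectrum(center, width, xs):
    """Single Lorentzian band, a stand-in for a one-mode absorption band."""
    return [1.0 / (1.0 + ((x - center) / width) ** 2) for x in xs]

xs = [i * 0.1 for i in range(200)]       # energy axis, 0 .. 19.9
target = spectrum(12.0, 1.5, xs)         # synthetic "experimental" spectrum

def sse(params):
    """Sum-of-squares error between model and target spectra."""
    center, width = params
    return sum((m - t) ** 2 for m, t in zip(spectrum(center, width, xs), target))

# --- genetic algorithm: broad search through parameter space ---
pop = [(random.uniform(0.0, 20.0), random.uniform(0.1, 5.0)) for _ in range(40)]
for gen in range(30):
    pop.sort(key=sse)
    survivors = pop[:10]                 # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(survivors, 2)
        children.append(((a[0] + b[0]) / 2 + random.gauss(0, 0.3),            # crossover
                         max(0.05, (a[1] + b[1]) / 2 + random.gauss(0, 0.1))))  # + mutation
    pop = survivors + children
best = min(pop, key=sse)

# --- local refinement (coordinate descent standing in for Levenberg-Marquardt) ---
step = 0.1
while step > 1e-4:
    improved = False
    for d in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
        cand = (best[0] + d[0], best[1] + d[1])
        if cand[1] > 0 and sse(cand) < sse(best):
            best, improved = cand, True
    if not improved:
        step /= 2
```

The population search keeps the fit from getting trapped far from the global minimum, and the local stage then converges quickly within the basin, which is the division of labor the abstract describes for the GA plus Levenberg-Marquardt combination.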
VizieR Online Data Catalog: A catalog of exoplanet physical parameters (Foreman-Mackey+, 2014)
NASA Astrophysics Data System (ADS)
Foreman-Mackey, D.; Hogg, D. W.; Morton, T. D.
2017-05-01
The first ingredient for any probabilistic inference is a likelihood function, a description of the probability of observing a specific data set given a set of model parameters. In this particular project, the data set is a catalog of exoplanet measurements and the model parameters are the values that set the shape and normalization of the occurrence rate density. (2 data files).
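A likelihood of the kind described can be sketched for a toy catalog with an inhomogeneous-Poisson form: the log-likelihood is the sum of log rate densities at the observed values minus the integral of the rate density over the observed domain. The power-law rate, the domain [1, 10], and the catalog values below are illustrative assumptions, not the paper's occurrence-rate parameterization.

```python
import math

def log_likelihood(log_norm, slope, catalog, grid):
    """Inhomogeneous-Poisson log-likelihood of a catalog under a toy
    power-law rate density rate(x) = exp(log_norm) * x**slope:
    ln L = sum_k ln rate(x_k) - integral of rate(x) dx over the domain."""
    ll = sum(log_norm + slope * math.log(x) for x in catalog)
    norm = math.exp(log_norm)
    dx = grid[1] - grid[0]
    rate = [norm * x ** slope for x in grid]
    integral = dx * (sum(rate) - 0.5 * (rate[0] + rate[-1]))  # trapezoid rule
    return ll - integral

grid = [1.0 + 9.0 * i / 400 for i in range(401)]   # domain [1, 10]
catalog = [1.5, 2.0, 3.0, 4.5, 7.0]                # toy "measurements"

# For slope = 0 the maximum-likelihood normalisation is n / |domain| = 5/9,
# so the likelihood should peak there along the log_norm axis.
mle = math.log(5.0 / 9.0)
```

Evaluating `log_likelihood` over a grid of (log_norm, slope) values gives exactly the shape-and-normalization inference the record describes, with the catalog playing the role of the exoplanet measurements.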