GEODYN programmers guide, volume 2, part 1
NASA Technical Reports Server (NTRS)
Mullins, N. E.; Goad, C. C.; Dao, N. C.; Martin, T. V.; Boulware, N. L.; Chin, M. M.
1972-01-01
A guide to the GEODYN Program is presented. The program estimates orbit and geodetic parameters. It can estimate the set of orbital elements, station positions, measurement biases, and force model parameters such that the entire set of estimated parameters best fits the orbital tracking data from multiple arcs of multiple satellites. GEODYN consists of 113 different program segments, including the main program, subroutines, functions, and block data routines. All are written in G- or H-level FORTRAN and are currently operational on GSFC's IBM 360/95 and IBM 360/91.
Speaker verification system using acoustic data and non-acoustic data
Gable, Todd J [Walnut Creek, CA; Ng, Lawrence C [Danville, CA; Holzrichter, John F [Berkeley, CA; Burnett, Greg C [Livermore, CA
2006-03-21
A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of "template" parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature and, where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET and runoff, and thereby identify a highly important source of DGVM uncertainty.
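A minimal sketch of the kind of bounded simulated-annealing search described above (not the BIOMAP code; the objective `score` standing in for map accuracy and the bounds standing in for the literature-derived limits are assumptions):

```python
import math
import random

def simulated_annealing(score, bounds, n_iter=5000, t0=1.0, cooling=0.999):
    """Search a box-constrained parameter space for high-scoring parameter sets.

    score(params) -> higher is better (e.g. agreement of the simulated vegetation
    map with observations); bounds is a list of (low, high) pairs, e.g. literature
    min/max values or +/-20% of the current value.
    """
    current = [random.uniform(lo, hi) for lo, hi in bounds]
    cur_s = score(current)
    best, best_s, t = list(current), cur_s, t0
    for _ in range(n_iter):
        # propose a small move, clipped to the box constraints
        cand = [min(hi, max(lo, p + random.gauss(0, 0.05 * (hi - lo))))
                for p, (lo, hi) in zip(current, bounds)]
        cand_s = score(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cand_s >= cur_s or random.random() < math.exp((cand_s - cur_s) / t):
            current, cur_s = cand, cand_s
            if cur_s > best_s:
                best, best_s = list(current), cur_s
        t *= cooling
    return best, best_s
```

Repeating such a search from many random starting points, and retaining every run whose final score matches the trial-and-error calibration, is one way to collect multiple acceptable parameter sets.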
Evolving Non-Dominated Parameter Sets for Computational Models from Multiple Experiments
NASA Astrophysics Data System (ADS)
Lane, Peter C. R.; Gobet, Fernand
2013-03-01
Creating robust, reproducible and optimal computational models is a key challenge for theorists in many sciences. Psychology and cognitive science face particular challenges as large amounts of data are collected and many models are not amenable to analytical techniques for calculating parameter sets. Particular problems are to locate the full range of acceptable model parameters for a given dataset, and to confirm the consistency of model parameters across different datasets. Resolving these problems will provide a better understanding of the behaviour of computational models, and so support the development of general and robust models. In this article, we address these problems using evolutionary algorithms to develop parameters for computational models against multiple sets of experimental data; in particular, we propose the `speciated non-dominated sorting genetic algorithm' for evolving models in several theories. We discuss the problem of developing a model of categorisation using twenty-nine sets of data and models drawn from four different theories. We find that the evolutionary algorithms generate high quality models, adapted to provide a good fit to all available data.
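The building block of any non-dominated sorting scheme is the dominance test over the per-dataset fit scores; a minimal sketch (not the speciated NSGA-II itself), assuming each candidate parameter set has already been scored against every dataset and lower error is better:

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """a dominates b if it is no worse on every dataset and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(errors: List[Sequence[float]]) -> List[int]:
    """Indices of parameter sets not dominated by any other set (the Pareto front)."""
    return [i for i, e in enumerate(errors)
            if not any(dominates(other, e) for j, other in enumerate(errors) if j != i)]
```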
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
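A schematic, single-data-set illustration of the mixed linear-non-linear idea (the real method marginalizes the linear parameters properly and also samples relative data weights and regularization hyperparameters): for every Monte Carlo proposal of the non-linear parameters, the linearly entering parameters are obtained by an analytical least-squares solve.

```python
import numpy as np

def sample_mixed_linear_nonlinear(d, G_of, theta0, n_samples=10000, step=0.05,
                                  sigma=1.0, rng=np.random.default_rng(0)):
    """Metropolis sampling over non-linear parameters theta; linear parameters m
    are solved analytically for each theta from G(theta) m ~ d.

    d: data vector; G_of(theta): design matrix for the linearly entering parameters.
    """
    def loglike(theta):
        G = G_of(theta)
        m, *_ = np.linalg.lstsq(G, d, rcond=None)      # analytical linear solve
        r = d - G @ m
        return -0.5 * np.sum(r ** 2) / sigma ** 2, m

    theta = np.asarray(theta0, float)
    ll, m = loglike(theta)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(theta.size)
        ll_p, m_p = loglike(prop)
        if np.log(rng.uniform()) < ll_p - ll:           # Metropolis acceptance
            theta, ll, m = prop, ll_p, m_p
        samples.append((theta.copy(), m.copy()))
    return samples
```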
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with an aim to improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated compared with Latin hypercube sampling (LHS) through analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. Results obtained demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto-optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). Flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
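For reference, the GLUE side of such an analysis reduces to keeping only "behavioural" parameter sets that satisfy every metric threshold; a minimal sketch, with two illustrative metrics standing in for the study's actual ones:

```python
import numpy as np

def glue_behavioural(param_sets, simulate, obs, thresholds=(0.7, 0.7)):
    """Keep parameter sets whose simulations satisfy every metric threshold.

    simulate(p) -> simulated hydrograph (numpy array aligned with obs).
    The two metrics below (Nash-Sutcliffe efficiency and a relative peak-flow
    score) are illustrative placeholders.
    """
    obs = np.asarray(obs, float)

    def nse(sim):
        return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def peak_score(sim):
        return 1 - abs(sim.max() - obs.max()) / obs.max()

    behavioural = []
    for p in param_sets:
        sim = np.asarray(simulate(p), float)
        scores = (nse(sim), peak_score(sim))
        if all(s >= t for s, t in zip(scores, thresholds)):
            behavioural.append((p, scores))
    return behavioural
```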
Nielsen, Henrik Bjørn; Wernersson, Rasmus; Knudsen, Steen
2003-07-01
Optimal design of oligonucleotides for microarrays involves tedious and laborious work evaluating potential oligonucleotides relative to a series of parameters. The currently available tools for this purpose are limited in their flexibility and do not present the oligonucleotide designer with an overview of these parameters. We present here a flexible tool named OligoWiz for designing oligonucleotides for multiple purposes. OligoWiz presents a set of parameter scores in a graphical interface to facilitate an overview for the user. Additional custom parameter scores can easily be added to the program to extend the default parameters: homology, DeltaTm, low-complexity, position and GATC-only. Furthermore we present an analysis of the limitations in designing oligonucleotide sets that can detect transcripts from multiple organisms. OligoWiz is available at www.cbs.dtu.dk/services/OligoWiz/.
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, preliminary optimal solutions for multiple instances were found efficiently.
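A minimal sketch of the simplest of these designs, a 2-level full factorial with a regression metamodel (the parameter levels and the `run_ga` objective are placeholders, not the paper's actual setup):

```python
import itertools
import numpy as np

def two_level_factorial(levels, run_ga):
    """Run a 2^k design, fit main effects + two-way interactions, return the best point.

    levels: dict mapping a GA parameter name to its (low, high) values.
    run_ga(setting) -> objective value (e.g. total weighted tardiness), lower is better.
    """
    names = list(levels)
    designs = list(itertools.product([-1, 1], repeat=len(names)))   # coded units
    settings = [{n: levels[n][(c + 1) // 2] for n, c in zip(names, row)}
                for row in designs]
    y = np.array([run_ga(s) for s in settings])

    # regression matrix: intercept + main effects + two-way interactions
    X = []
    for row in designs:
        terms = [1.0, *row]
        terms += [a * b for a, b in itertools.combinations(row, 2)]
        X.append(terms)
    coef, *_ = np.linalg.lstsq(np.array(X), y, rcond=None)

    best = settings[int(np.argmin(y))]          # best observed design point
    return coef, best
```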
Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment
2013-10-01
presented below. Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in... multiple parameters. Most samples have been mechanically tested and data extracted for multiple parameters. Histological evaluation of subset of... Sumner, D. R. Saline Irrigation Does Not Affect Bone Formation or Fixation Strength of Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Throughput and latency programmable optical transceiver by using DSP and FEC control.
Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki
2017-05-15
We propose and experimentally demonstrate a proof-of-concept of a programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) for satisfying throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom among the parameters often make it difficult to find the optimum combination due to an explosion of the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with the specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation formats, symbol rates, and power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine the constraints with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate the end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.
NASA Astrophysics Data System (ADS)
Li, Cheng; Tian, Jun-Long; Wang, Ning
2013-11-01
The nucleon-nucleon interaction is investigated by using the ImQMD model with three sets of parameters, IQ1, IQ2 and IQ3, in which the corresponding incompressibility coefficients of nuclear matter are different. The fusion excitation function and the charge distribution of fragments are calculated for the reaction system 40Ca+40Ca at different incident energies. It is found that obvious differences in the charge distribution are observed in the energy region 10-25A MeV when adopting the three sets of parameters, while the results are close to each other in the energy region of 30-45A MeV for this reaction system. This indicates that the Fermi energy region is a sensitive energy region for exploring the N-N interaction. The fragment multiplicity spectrum for 238U+197Au at 15A MeV is reproduced by the ImQMD model with the parameter set IQ3. It is concluded that the charge distribution of the fragments and the fragment multiplicity spectrum are good observables for studying the N-N interaction, and that IQ3 is a suitable set of parameters for the ImQMD model.
Reichardt, J; Hess, M; Macke, A
2000-04-20
Multiple-scattering correction factors for cirrus particle extinction coefficients measured with Raman and high spectral resolution lidars are calculated with a radiative-transfer model. Cirrus particle-ensemble phase functions are computed from single-crystal phase functions derived in a geometrical-optics approximation. Seven crystal types are considered. In cirrus clouds with height-independent particle extinction coefficients the general pattern of the multiple-scattering parameters has a steep onset at cloud base with values of 0.5-0.7 followed by a gradual and monotonic decrease to 0.1-0.2 at cloud top. The larger the scattering particles are, the more gradual is the rate of decrease. Multiple-scattering parameters of complex crystals and of imperfect hexagonal columns and plates can be well approximated by those of projected-area equivalent ice spheres, whereas perfect hexagonal crystals show values as much as 70% higher than those of spheres. The dependencies of the multiple-scattering parameters on cirrus particle spectrum, base height, and geometric depth, and on the lidar parameters laser wavelength and receiver field of view, are discussed, and a set of multiple-scattering parameter profiles for the correction of extinction measurements in homogeneous cirrus is provided.
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
2015-07-01
undergraduate student coauthors Aashish Jindia, Parag Srivastava, and Jay Jin for help with the research. In addition, thank you to the numerous... A.1.1 Sacramento Data Set; A.1.2 RadMap and SUNS Data Sets... parameters in a joint hypothesis space. We develop scalable branch and bound and pruning mechanisms for searching (at multiple resolutions) over source
drPACS: A Simple UNIX Execution Pipeline
NASA Astrophysics Data System (ADS)
Teuben, P.
2011-07-01
We describe a very simple yet flexible and effective pipeliner for UNIX commands. It creates a Makefile to define a set of serially dependent commands. The commands in the pipeline share a common set of parameters by which they can communicate. Commands must follow a simple convention to retrieve and store parameters. Pipeline parameters can optionally be made persistent across multiple runs of the pipeline. Tools were added to simplify running a large series of pipelines, which can then also be run in parallel.
Henzlova, Daniela; Menlove, Howard Olsen; Croft, Stephen; ...
2015-06-15
In the field of nuclear safeguards, passive neutron multiplicity counting (PNMC) is a method typically employed in non-destructive assay (NDA) of special nuclear material (SNM) for nonproliferation, verification and accountability purposes. PNMC is generally performed using a well-type thermal neutron counter and relies on the detection of correlated pairs or higher order multiplets of neutrons emitted by an assayed item. To assay SNM, a set of parameters for a given well-counter is required to link the measured multiplicity rates to the assayed item properties. Detection efficiency, die-away time, gate utilization factors (tightly connected to die-away time) as well as optimum gate width setting are among the key parameters. These parameters along with the underlying model assumptions directly affect the accuracy of the SNM assay. In this paper we examine the role of gate utilization factors and the single exponential die-away time assumption and their impact on the measurements for a range of plutonium materials. In addition, we examine the importance of item-optimized coincidence gate width setting as opposed to using a universal gate width value. Finally, the traditional PNMC based on multiplicity shift register electronics is extended to Feynman-type analysis and application of this approach to Pu mass assay is demonstrated.
Adaptive Local Realignment of Protein Sequences.
DeBlasio, Dan; Kececioglu, John
2018-06-11
While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
A Weight of Evidence Framework for Environmental Assessments: Inferring Quantities
Environmental assessments require the generation of quantitative parameters such as degradation rates and assessment products may be quantities such as criterion values or magnitudes of effects. When multiple data sets or outputs of multiple models are available, it may be appro...
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
NASA Technical Reports Server (NTRS)
Martin, T. V.; Mullins, N. E.
1972-01-01
The operating and set-up procedures for the multi-satellite, multi-arc GEODYN Orbit Determination Program are described. All system output is analyzed. The GEODYN Program is the nucleus of the entire GEODYN system. It is a definitive orbit and geodetic parameter estimation program capable of simultaneously processing observations from multiple arcs of multiple satellites. GEODYN has two modes of operation: (1) the data reduction mode and (2) the orbit generation mode.
Haroz, E E; Bolton, P; Gross, A; Chan, K S; Michalopoulos, L; Bass, J
2016-07-01
Prevalence estimates of depression vary between countries, possibly due to differential functioning of items between settings. This study compared the performance of the widely used Hopkins symptom checklist 15-item depression scale (HSCL-15) across multiple settings using item response theory analyses. Data came from adult populations in the low and middle income countries (LMIC) of Colombia, Indonesia, Kurdistan Iraq, Rwanda, Iraq, Thailand (Burmese refugees), and Uganda (N = 4732). Item parameters based on a graded response model were compared across LMIC settings. Differential item functioning (DIF) by setting was evaluated using multiple indicators multiple causes (MIMIC) models. Most items performed well across settings except items related to suicidal ideation and "loss of sexual interest or pleasure," which had low discrimination parameters (suicide: a = 0.31 in Thailand to a = 2.49 in Indonesia; sexual interest: a = 0.74 in Rwanda to a = 1.26 in one region of Kurdistan). Most items showed some degree of DIF, but DIF only impacted aggregate scale-level scores in Indonesia. Thirteen of the 15 HSCL depression items performed well across diverse settings, with most items showing a strong relationship to the underlying trait of depression. The results support the cross-cultural applicability of most of these depression symptoms across LMIC settings. DIF impacted aggregate depression scores in one setting, illustrating a possible source of measurement non-invariance in prevalence estimates.
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar, typically to a single molecule lead. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set by using several performance metrics including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule to a family by the maximum similarity, or minimum ranking, obtained across the family. One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel methods ETD and TPD, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
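The parameter-free MAX-SIM scheme described above is straightforward to sketch; assuming molecules are represented as sets of fingerprint bits (an illustrative choice of representation), it reduces to:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two molecules given as sets of fingerprint bits."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter) if inter else 0.0

def max_sim_score(candidate: set, family: list) -> float:
    """MAX-SIM: score a database molecule by its best similarity to any query molecule."""
    return max(tanimoto(candidate, q) for q in family)

def rank_database(database: list, family: list) -> list:
    """Rank database molecules (best first) for a multiple-molecule query."""
    return sorted(range(len(database)),
                  key=lambda i: max_sim_score(database[i], family), reverse=True)
```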
Assessing and Programming Generalized Behavioral Reduction across Multiple Stimulus Parameters.
ERIC Educational Resources Information Center
Shore, Bridget A.; And Others
1994-01-01
Generalization across three stimulus parameters (therapist, setting, and demands) was examined for five men with severe/profound mental retardation whose self-injurious behavior was maintained by escape from task demands. Variables were held constant during the escape extinction treatment. Full or partial generalization to novel situations was…
Effect of slice thickness on brain magnetic resonance image texture analysis
2010-01-01
Background The accuracy of texture analysis in clinical evaluation of magnetic resonance images depends considerably on imaging arrangements and various image quality parameters. In this paper, we study the effect of slice thickness on brain tissue texture analysis using a statistical approach and classification of T1-weighted images of clinically confirmed multiple sclerosis patients. Methods We averaged the intensities of three consecutive 1-mm slices to simulate 3-mm slices. Two hundred sixty-four texture parameters were calculated for both the original and the averaged slices. Wilcoxon's signed ranks test was used to find differences between the regions of interest representing white matter and multiple sclerosis plaques. Linear and nonlinear discriminant analyses were applied with several separate training and test sets to determine the actual classification accuracy. Results Only moderate differences in distributions of the texture parameter value for 1-mm and simulated 3-mm-thick slices were found. Our study also showed that white matter areas are well separable from multiple sclerosis plaques even if the slice thickness differs between training and test sets. Conclusions Three-millimeter-thick magnetic resonance image slices acquired with a 1.5 T clinical magnetic resonance scanner seem to be sufficient for texture analysis of multiple sclerosis plaques and white matter tissue. PMID:20955567
Assessment of Students with Emotional and Behavioral Disorders
ERIC Educational Resources Information Center
Plotts, Cynthia A.
2012-01-01
Assessment and identification of children with emotional and behavioral disorders (EBD) is complex and involves multiple techniques, levels, and participants. While federal law sets the general parameters for identification in school settings, these criteria are vague and may lead to inconsistencies in selection and interpretation of assessment…
Two methods for parameter estimation using multiple-trait models and beef cattle field data.
Bertrand, J K; Kriese, L A
1990-08-01
Two methods are presented for estimating variances and covariances from beef cattle field data using multiple-trait sire models. Both methods require that the first trait have no missing records and that the contemporary groups for the second trait be subsets of the contemporary groups for the first trait; however, the second trait may have missing records. One method uses pseudo expectations involving quadratics composed of the solutions and the right-hand sides of the mixed model equations. The other method is an extension of Henderson's Simple Method to the multiple trait case. Neither of these methods requires any inversions of large matrices in the computation of the parameters; therefore, both methods can handle very large sets of data. Four simulated data sets were generated to evaluate the methods. In general, both methods estimated genetic correlations and heritabilities that were close to the Restricted Maximum Likelihood estimates and the true data set values, even when selection within contemporary groups was practiced. The estimates of residual correlations by both methods, however, were biased by selection. These two methods can be useful in estimating variances and covariances from multiple-trait models in large populations that have undergone a minimal amount of selection within contemporary groups.
Accuracy Estimation and Parameter Advising for Protein Multiple Sequence Alignment
DeBlasio, Dan
2013-01-01
We develop a novel and general approach to estimating the accuracy of multiple sequence alignments without knowledge of a reference alignment, and use our approach to address a new task that we call parameter advising: the problem of choosing values for alignment scoring function parameters from a given set of choices to maximize the accuracy of a computed alignment. For protein alignments, we consider twelve independent features that contribute to a quality alignment. An accuracy estimator is learned that is a polynomial function of these features; its coefficients are determined by minimizing its error with respect to true accuracy using mathematical optimization. Compared to prior approaches for estimating accuracy, our new approach (a) introduces novel feature functions that measure nonlocal properties of an alignment yet are fast to evaluate, (b) considers more general classes of estimators beyond linear combinations of features, and (c) develops new regression formulations for learning an estimator from examples; in addition, for parameter advising, we (d) determine the optimal parameter set of a given cardinality, which specifies the best parameter values from which to choose. Our estimator, which we call Facet (for “feature-based accuracy estimator”), yields a parameter advisor that on the hardest benchmarks provides more than a 27% improvement in accuracy over the best default parameter choice, and for parameter advising significantly outperforms the best prior approaches to assessing alignment quality. PMID:23489379
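A reduced sketch of the estimator-plus-advisor idea (a plain linear-in-features estimator fit by least squares, whereas Facet uses richer feature functions, estimator classes and regression formulations):

```python
import numpy as np

def fit_accuracy_estimator(features, true_accuracy):
    """Fit coefficients of a linear-in-features accuracy estimator by least squares.

    features: (n_alignments, n_features) matrix of feature-function values;
    true_accuracy: accuracy of each training alignment against its reference.
    """
    X = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(X, true_accuracy, rcond=None)
    return coef

def advise(coef, candidate_features):
    """Parameter advising: pick the candidate alignment with the highest estimated accuracy."""
    X = np.column_stack([np.ones(len(candidate_features)), candidate_features])
    return int(np.argmax(X @ coef))
```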
Zemali, El-Amine; Boukra, Abdelmadjid
2015-08-01
The multiple sequence alignment (MSA) problem is one of the most challenging problems in bioinformatics; it involves discovering similarity between a set of protein or DNA sequences. This paper introduces a new method for the MSA problem called biogeography-based optimization with multiple populations (BBOMP). It is based on a recent metaheuristic inspired by the mathematics of biogeography, named biogeography-based optimization (BBO). To improve the exploration ability of BBO, we have introduced a new concept allowing better exploration of the search space. It consists of manipulating multiple populations, each having its own parameters. These parameters are used to build up progressive alignments allowing more diversity. At each iteration, the best found solution is injected into each population. Moreover, to improve solution quality, six operators are defined. These operators are selected with a dynamic probability which changes according to the operators' efficiency. In order to test the performance of the proposed approach, we have considered a set of datasets from Balibase 2.0 and compared it with many recent algorithms such as GAPAM, MSA-GA, QEAMSA and RBT-GA. The results show that the proposed approach achieves a better average score than the previously cited methods.
LMSS communication network design
NASA Technical Reports Server (NTRS)
1982-01-01
The architecture of the telecommunication network as the first step in the design of the LMSS system is described. A set of functional requirements including the total number of users to be served by the LMSS are hypothesized. The design parameters are then defined at length and are systematically selected such that the resultant system is capable of serving the hypothesized number of users. The design of the backhaul link is presented. The number of multiple backhaul beams required for communication to the base stations is determined. A conceptual procedure for call-routing and locating a mobile subscriber within the LMSS network is presented. The various steps in placing a call are explained, and the relationship between the two sets of UHF and S-band multiple beams is developed. A summary of the design parameters is presented.
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
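A schematic of the null-space Monte Carlo step described above, assuming a local Jacobian of the observations with respect to the parameters at the calibrated solution (real NSMC implementations additionally re-check or re-calibrate each realization):

```python
import numpy as np

def nsmc_fields(J, p_cal, n_real=100, scale=1.0, rng=np.random.default_rng(1)):
    """Generate calibration-constrained parameter sets from one calibrated model.

    J: Jacobian of observations w.r.t. parameters at the calibrated set p_cal.
    Singular directions with negligible singular values span the null space;
    perturbing p_cal only along those directions leaves the fit (to first order)
    unchanged, so the calibrated solution space is preserved.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=True)
    tol = max(J.shape) * np.finfo(float).eps * (s[0] if s.size else 0.0)
    rank = int(np.sum(s > tol))
    null_basis = Vt[rank:].T                      # columns span the null space
    realizations = []
    for _ in range(n_real):
        coeffs = scale * rng.standard_normal(null_basis.shape[1])
        realizations.append(p_cal + null_basis @ coeffs)
    return realizations
```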
NASA Astrophysics Data System (ADS)
García, Isaac A.; Llibre, Jaume; Maza, Susanna
2018-06-01
In this work we consider real analytic functions , where , Ω is a bounded open subset of , is an interval containing the origin, are parameters, and ε is a small parameter. We study the branching of the zero-set of at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations defined on , using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether the zeros belong or not to the analytic set defined by the real variety associated to the ideal generated by the averaged functions in the Noetherian ring of all the real analytic functions at . We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results and they are compared with the classical theory, branching theory and also under the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
Drake, Andrew W; Klakamp, Scott L
2007-01-10
A new 4-parameter nonlinear equation based on the standard multiple independent binding site model (MIBS) is presented for fitting cell-based ligand titration data in order to calculate the ligand/cell receptor equilibrium dissociation constant and the number of receptors/cell. The most commonly used linear (Scatchard Plot) or nonlinear 2-parameter model (a single binding site model found in commercial programs like Prism(R)) used for analysis of ligand/receptor binding data assumes only the K(D) influences the shape of the titration curve. We demonstrate using simulated data sets that, depending upon the cell surface receptor expression level, the number of cells titrated, and the magnitude of the K(D) being measured, this assumption of always being under K(D)-controlled conditions can be erroneous and can lead to unreliable estimates for the binding parameters. We also compare and contrast the fitting of simulated data sets to the commonly used cell-based binding equation versus our more rigorous 4-parameter nonlinear MIBS model. It is shown through these simulations that the new 4-parameter MIBS model, when used for cell-based titrations under optimal conditions, yields highly accurate estimates of all binding parameters and hence should be the preferred model to fit cell-based experimental nonlinear titration data.
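A sketch of the ligand-depletion (quadratic) form that underlies such a 4-parameter cell-based binding model; the parameter names and the span/background mapping onto the measured signal are illustrative and not necessarily the paper's exact equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def mibs_signal(L_total, K_d, R_total, span, background):
    """Ligand-depletion (quadratic) binding model for equivalent independent sites.

    L_total, R_total and K_d share concentration units; R_total absorbs
    (receptors per cell) x (cell concentration).  span and background map the
    bound fraction onto the measured signal (e.g. fluorescence intensity).
    """
    b = L_total + R_total + K_d
    bound = (b - np.sqrt(b ** 2 - 4.0 * L_total * R_total)) / 2.0   # bound complex
    return background + span * bound / R_total

# example fit to titration data (L: total ligand concentrations, y: measured signal)
# popt, pcov = curve_fit(mibs_signal, L, y, p0=[1.0, 0.5, 1000.0, 50.0])
```

Unlike the 2-parameter single-site model, this form lets the data themselves reveal whether the titration is K_d-controlled or receptor-depletion-controlled.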
An extended harmonic balance method based on incremental nonlinear control parameters
NASA Astrophysics Data System (ADS)
Khodaparast, Hamed Haddad; Madinei, Hadi; Friswell, Michael I.; Adhikari, Sondipon; Coggon, Simon; Cooper, Jonathan E.
2017-02-01
A new formulation for calculating the steady-state responses of multiple-degree-of-freedom (MDOF) non-linear dynamic systems due to harmonic excitation is developed. This is aimed at solving multi-dimensional nonlinear systems using linear equations. Nonlinearity is parameterised by a set of 'non-linear control parameters' such that the dynamic system is effectively linear for zero values of these parameters and nonlinearity increases with increasing values of these parameters. Two sets of linear equations which are formed from a first-order truncated Taylor series expansion are developed. The first set of linear equations provides the summation of sensitivities of linear system responses with respect to non-linear control parameters and the second set are recursive equations that use the previous responses to update the sensitivities. The obtained sensitivities of steady-state responses are then used to calculate the steady state responses of non-linear dynamic systems in an iterative process. The application and verification of the method are illustrated using a non-linear Micro-Electro-Mechanical System (MEMS) subject to a base harmonic excitation. The non-linear control parameters in these examples are the DC voltages that are applied to the electrodes of the MEMS devices.
FEAST: sensitive local alignment with multiple rates of evolution.
Hudek, Alexander K; Brown, Daniel G
2011-01-01
We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.
NASA Technical Reports Server (NTRS)
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating other unknown parameters appearing in their regression equation as if they were known perfectly, with said values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
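A minimal sketch of the partitioned identification idea, assuming two parameter groups that each enter the measurement linearly when the other group is held at its latest estimate (the regressor callbacks are placeholders, not the patented implementation):

```python
import numpy as np

class RLS:
    """Standard recursive least squares for y = phi^T theta + noise."""
    def __init__(self, n, lam=1.0):
        self.theta = np.zeros(n)
        self.P = 1e3 * np.eye(n)
        self.lam = lam

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain
        self.theta = self.theta + k * (y - phi @ self.theta)   # innovation correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

def identify(measurements, regress_a, regress_b, na, nb):
    """Run two concurrent RLS estimators, one per parameter group.

    regress_a(z, b_hat) -> (phi_a, y_a) and regress_b(z, a_hat) -> (phi_b, y_b)
    isolate each group linearly using the other group's latest estimate.
    """
    est_a, est_b = RLS(na), RLS(nb)
    for z in measurements:
        phi_a, y_a = regress_a(z, est_b.theta)
        est_a.update(phi_a, y_a)
        phi_b, y_b = regress_b(z, est_a.theta)
        est_b.update(phi_b, y_b)
    return est_a.theta, est_b.theta
```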
Using string invariants for prediction searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value both to existing data sets and to those to be produced by future astrophysics missions consisting of two or more detectors, by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectral information when the multiple data sets are used in concert, as compared to the statistical errors of the spectral information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
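A schematic of the joint-fit idea, assuming Poisson counting statistics and a shared spectral model purely for illustration (the paper's likelihood and instrument responses are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def joint_ml_spectrum(datasets, model, x0):
    """Maximum-likelihood fit of shared spectral parameters to several data sets.

    datasets: list of (counts, response) pairs, one per detector;
    model(params, response) -> expected counts in each bin for that detector.
    """
    def neg_log_like(params):
        nll = 0.0
        for counts, response in datasets:
            mu = model(params, response)
            # Poisson log-likelihood, dropping the params-independent factorial term
            nll -= np.sum(counts * np.log(mu) - mu)
        return nll

    res = minimize(neg_log_like, x0, method="Nelder-Mead")
    return res.x, res.fun

# Statistical errors can be estimated from the curvature (inverse Hessian) of the
# combined likelihood at its minimum; they shrink relative to fitting each data set alone.
```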
2012-03-22
shapes tested, when the objective parameter set was confined to a dictionary's defined parameter space. These physical characteristics included... 2.3 Hypothesis Testing and Detection Theory; 2.4 3-D SAR Scattering Models... basis pursuit de-noising (BPDN) algorithm is chosen to perform extraction due to inherent efficiency and error tolerance. Multiple shape dictionaries
Entangling measurements for multiparameter estimation with two qubits
NASA Astrophysics Data System (ADS)
Roccia, Emanuele; Gianani, Ilaria; Mancino, Luca; Sbroscia, Marco; Somma, Fabrizia; Genoni, Marco G.; Barbieri, Marco
2018-01-01
Carefully tailoring the quantum state of probes offers the capability of investigating matter at unprecedented precisions. Rarely, however, is the interaction with the sample fully encompassed by a single parameter, and the information contained in the probe needs to be partitioned among multiple parameters. There exist, then, practical bounds on the ultimate joint-estimation precision set by the unavailability of a single optimal measurement for all parameters. Here, we discuss how these considerations are modified for two-level quantum probes — qubits — by the use of two copies and entangling measurements. We find that the joint estimation of phase and phase diffusion benefits from such collective measurement, while for multiple phases no enhancement can be observed. We demonstrate this in a proof-of-principle photonics setup.
Zhang, Z; Jewett, D L
1994-01-01
Due to model misspecification, currently-used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously-active dipoles. The size of the MulGenErr is a function of both the model used, and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating-dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating-dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.
Verma, Manjusha; Chaudhry, Aneese F.; Fahrni, Christoph J.
2010-01-01
The photophysical properties of 1,3,5-triarylpyrazolines are strongly influenced by the nature and position of substituents attached to the aryl-rings, rendering this fluorophore platform well suited for the design of fluorescent probes utilizing a photoinduced electron transfer (PET) switching mechanism. To explore the tunability of two key parameters that govern the PET thermodynamics, the excited state energy ΔE00 and acceptor potential E(A/A−), a library of polyfluoro-substituted 1,3-diaryl-5-phenyl-pyrazolines was synthesized and characterized. The observed trends for the PET parameters were effectively captured through multiple Hammett linear free energy relationships (LFER) using a set of independent substituent constants for each of the two aryl rings. Given the lack of experimental Hammett constants for polyfluoro substituted aromatics, theoretically derived constants based on the electrostatic potential at the nucleus (EPN) of carbon atoms were employed as quantum chemical descriptors. The performance of the LFER was evaluated with a set of compounds that were not included in the training set, yielding a mean unsigned error of 0.05 eV for the prediction of the combined PET parameters. The outlined LFER approach should be well suited to design and optimize the performance of cation-responsive 1,3,5-triarylpyrazolines. PMID:19343239
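A minimal sketch of a dual-substituent Hammett LFER of the kind described above, with ordinary least squares standing in for the paper's fitting procedure and generic sigma descriptors standing in for the EPN-derived constants:

```python
import numpy as np

def fit_dual_hammett(sigma1, sigma2, y):
    """Fit y = rho1*sigma1 + rho2*sigma2 + c, a dual-substituent Hammett LFER.

    sigma1, sigma2: substituent-constant descriptors for the two aryl rings;
    y: a PET parameter such as the excited-state energy or acceptor potential.
    Returns the fitted coefficients and a predictor for new substituent patterns.
    """
    X = np.column_stack([sigma1, sigma2, np.ones(len(y))])
    (rho1, rho2, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    predict = lambda s1, s2: rho1 * s1 + rho2 * s2 + c
    return (rho1, rho2, c), predict
```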
KAMO: towards automated data processing for microcrystals.
Yamashita, Keitaro; Hirata, Kunio; Yamamoto, Masaki
2018-05-01
In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5-10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate whole data-processing tasks in the case of multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejections, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals.
Multiple angles on the sterile neutrino - a combined view of cosmological and oscillation limits
NASA Astrophysics Data System (ADS)
Guzowski, Pawel
2017-09-01
The possible existence of sterile neutrinos is an important unresolved question for both particle physics and cosmology. Data sensitive to sterile neutrinos come from both particle physics experiments and from astrophysical measurements of the Cosmic Microwave Background. In this study, we address the question of whether these two contrasting data sets provide complementary information about sterile neutrinos. We focus on the muon disappearance oscillation channel, taking data from the MINOS, IceCube and Planck experiments and converting the limits into particle physics and cosmological parameter spaces to illustrate the different regions of parameter space where the data sets have the best sensitivity. For the first time, we combine the data sets into a single analysis to illustrate how the limits on the parameters of the sterile-neutrino model are strengthened. We investigate how data from a future accelerator neutrino experiment (SBN) will be able to further constrain this picture.
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand-binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium-constant, mass-balance, and charge-balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple-binding-site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three-binding-site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
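For orientation, a much-simplified sketch of the non-linear regression step using a single-site model (the paper's MATLAB program handles up to four overlapping equilibria; the cell volume, concentrations and noise below are invented):

```python
# Minimal sketch (assumed simplification, not the authors' MATLAB program): fit a
# one-site binding model to cumulative ITC heats with non-linear regression.
import numpy as np
from scipy.optimize import curve_fit

M_tot, V_cell = 10e-6, 1.4e-3            # fixed cell concentration (M) and volume (L), assumed

def one_site_heat(L_tot, Kd, dH):
    """Cumulative heat for 1:1 binding; [ML] from the exact quadratic mass balance."""
    b = M_tot + L_tot + Kd
    ML = (b - np.sqrt(b**2 - 4.0 * M_tot * L_tot)) / 2.0
    return dH * V_cell * ML               # heat in J if dH is in J/mol

L_tot = np.linspace(1e-6, 40e-6, 25)      # total titrant after each injection
true = one_site_heat(L_tot, Kd=2e-6, dH=-45e3)
data = true + np.random.default_rng(1).normal(0, 2e-7, L_tot.size)

(Kd_fit, dH_fit), cov = curve_fit(one_site_heat, L_tot, data, p0=(1e-6, -30e3),
                                  bounds=([0.0, -np.inf], [np.inf, np.inf]))
print(f"Kd = {Kd_fit:.2e} M, dH = {dH_fit/1e3:.1f} kJ/mol, SEs = {np.sqrt(np.diag(cov))}")
```

The error analysis discussed in the paper corresponds to propagating the covariance matrix (here `cov`) through all fitted parameters, which becomes more delicate as the number of overlapping equilibria grows.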
Multi-year encoding of daily rainfall and streamflow via the fractal-multifractal method
NASA Astrophysics Data System (ADS)
Puente, C. E.; Maskey, M.; Sivakumar, B.
2017-12-01
A deterministic geometric approach, the fractal-multifractal (FM) method, which has proven faithful in encoding daily geophysical records over a year, is used to describe records over multiple years at a time. Looking for trends in the FM parameters over longer periods, the present study applies FM descriptions to daily rainfall and streamflow gathered over five consecutive years, optimizing deviations on the accumulated sets. The results for 100 and 60 five-year sets of rainfall and streamflow, respectively, near Sacramento, California, illustrate that: (a) encoding of both types of data sets may be accomplished with relatively small errors; and (b) predicting the geometry of both variables appears to be possible, even five years ahead, by training neural networks on the respective FM parameters. It is emphasized that the FM approach not only captures the accumulated sets over successive pentads but also preserves other statistical attributes, including the overall "texture" of the records.
On multiple solutions of non-Newtonian Carreau fluid flow over an inclined shrinking sheet
NASA Astrophysics Data System (ADS)
Khan, Masood; Sardar, Humara; Gulzar, M. Mudassar; Alshomrani, Ali Saleh
2018-03-01
This paper presents multiple solutions of a non-Newtonian Carreau fluid flow over a nonlinear inclined shrinking surface in the presence of infinite shear rate viscosity. The governing boundary layer equations are derived for the Carreau fluid with infinite shear rate viscosity. Suitable transformations are employed to reduce the governing partial differential equations to a set of ordinary differential equations. The resulting non-linear ODEs are solved numerically by an efficient numerical approach, namely the Runge-Kutta-Fehlberg fourth-fifth order method combined with a shooting technique. Multiple solutions are presented graphically, and results are shown for various physical parameters. It is important to note that the velocity and the momentum boundary layer thickness decrease with increasing viscosity ratio parameter in shear thickening fluid, while the opposite trend is observed for shear thinning fluid. Another important observation is that the wall shear stress is significantly decreased by the viscosity ratio parameter β∗ for the first solution, while the opposite trend is observed for the second solution.
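The shooting idea can be illustrated on a classical boundary-layer problem; the sketch below uses a Blasius-type equation as a stand-in for the full Carreau system, pairing an adaptive Runge-Kutta integrator with a root search on the guessed wall value:

```python
# Sketch of the shooting technique on a Blasius-type boundary-layer problem
# f''' + 0.5 f f'' = 0, f(0)=f'(0)=0, f'(inf)=1 (stand-in for the Carreau system).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

ETA_MAX = 10.0                        # numerical stand-in for eta -> infinity

def rhs(eta, y):                      # y = [f, f', f'']
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def shoot(s):
    """Integrate with guessed f''(0)=s and return the far-field mismatch f'(ETA_MAX) - 1."""
    sol = solve_ivp(rhs, (0, ETA_MAX), [0.0, 0.0, s], method="RK45", rtol=1e-8)
    return sol.y[1, -1] - 1.0

s_star = brentq(shoot, 0.1, 1.0)      # find the wall value that satisfies the outer condition
print(f"f''(0) = {s_star:.5f}")       # classical Blasius value, about 0.33206
# Multiple solutions of the shrinking-sheet problem are found the same way by
# bracketing different roots of the shooting residual.
```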
A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets
NASA Astrophysics Data System (ADS)
JafarGandomi, Arash; Binley, Andrew
2013-09-01
We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and on electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. The geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid biasing the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth models. Accordingly, the ERT data, which are collected using a two-dimensional acquisition geometry, are recast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together by introducing a joint likelihood function and/or by constraining the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and for mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey design tool to identify the appropriate combination of geophysical tools and to show whether application of an individual method for further investigation of a specific site is beneficial.
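A toy illustration of why fusing likelihoods helps (one scalar log-resistivity on a grid; the observation values and uncertainties are invented, and the real algorithm is trans-dimensional McMC rather than a grid search):

```python
# Toy sketch of the fusion idea: two methods constrain the same log-resistivity
# with different uncertainties; multiplying their likelihoods sharpens the posterior.
import numpy as np

rho = np.linspace(0.0, 3.0, 601)                   # log10 resistivity grid
prior = np.full_like(rho, 1.0 / (rho[-1] - rho[0]))  # uniform prior

def gaussian_like(obs, sigma):
    return np.exp(-0.5 * ((rho - obs) / sigma) ** 2)

def posterior(*likes):
    p = prior.copy()
    for L in likes:
        p *= L
    return p / np.trapz(p, rho)

def shannon_info(p):                               # information gained relative to the prior
    return np.trapz(p * np.log((p + 1e-300) / prior), rho)

p_ert  = posterior(gaussian_like(1.8, 0.30))       # ERT alone
p_join = posterior(gaussian_like(1.8, 0.30), gaussian_like(1.6, 0.15))  # ERT + EMI
print(f"info(ERT) = {shannon_info(p_ert):.2f} nats, info(joint) = {shannon_info(p_join):.2f} nats")
```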
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois, resulted in a plume in a confined aquifer that is more than 4 km long and has impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence of low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities, and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, including the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and the observational data due to errors, biases, and limitations. The concept of equifinality was adopted, and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo samples were found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
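The two key ingredients, a Cauchy-type M-estimator objective and Dynamically Dimensioned Search, can be sketched as follows (a generic toy problem, not the Lockformer transport model; the DDS neighbourhood parameter r, the bounds, and the example data are assumptions):

```python
# Hedged sketch of the two ingredients described above: a Cauchy-type M-estimator
# objective and a bare-bones Dynamically Dimensioned Search (DDS) loop.
import numpy as np

rng = np.random.default_rng(2)

def cauchy_objective(residuals, c=2.385):
    """Robust M-estimator loss: grows logarithmically, down-weighting outliers."""
    return np.sum(np.log1p((residuals / c) ** 2))

def dds(objective, lo, hi, n_iter=2000, r=0.2):
    x_best = lo + rng.uniform(size=lo.size) * (hi - lo)
    f_best = objective(x_best)
    for i in range(1, n_iter + 1):
        p_incl = 1.0 - np.log(i) / np.log(n_iter)          # fewer dimensions perturbed over time
        mask = rng.uniform(size=lo.size) < p_incl
        if not mask.any():
            mask[rng.integers(lo.size)] = True
        x_new = x_best.copy()
        x_new[mask] += r * (hi - lo)[mask] * rng.normal(size=mask.sum())
        x_new = np.clip(x_new, lo, hi)                     # simple boundary handling
        f_new = objective(x_new)
        if f_new < f_best:                                  # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Example: recover two parameters of a toy arrival-time model from data with an outlier.
t_obs = np.array([3.1, 4.0, 5.2, 30.0])                     # last value is an outlier
model = lambda x: x[0] + x[1] * np.arange(4)
x, f = dds(lambda x: cauchy_objective(t_obs - model(x)), np.array([0.0, 0.0]), np.array([10.0, 5.0]))
print(x, f)
```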
Model for predicting the injury severity score.
Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi
2015-07-01
To determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. To select objective variables, the stepwise method was used. A total of 122 patients were included in this study. The formula for predicting the injury severity score (ISS) was as follows: ISS = 13.252 − 0.078 × (mean blood pressure) + 0.12 × (fibrin degradation products). The P-value of this formula from analysis of variance was <0.001, and the multiple correlation coefficient (R) was 0.739 (R² = 0.546). The multiple correlation coefficient adjusted for the degrees of freedom was 0.538. The Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with routinely available parameters such as fibrin degradation products and mean blood pressure. This formula is useful because the injury severity score can be predicted easily in the emergency department.
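The published equation translates directly into code; the input units below (mmHg for mean blood pressure, µg/mL for fibrin degradation products) are assumed rather than stated in the abstract:

```python
# Direct implementation of the reported regression equation (coefficients as published).
def predict_iss(mean_bp_mmhg: float, fdp_ug_ml: float) -> float:
    """Predicted injury severity score from mean blood pressure and fibrin degradation products."""
    return 13.252 - 0.078 * mean_bp_mmhg + 0.12 * fdp_ug_ml

print(predict_iss(80.0, 50.0))   # e.g. MBP 80 mmHg, FDP 50 -> about 13.0
```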
Parameter redundancy in discrete state-space and integrated models.
Cole, Diana J; McCrea, Rachel S
2016-09-01
Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters such as the probability of survival being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy, and such a model is described as non-identifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy, a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
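The rank test that underlies this kind of parameter-redundancy check can be sketched symbolically; the two-parameter toy exhaustive summaries below are invented for illustration and are not the paper's state-space examples:

```python
# Hedged sketch of the symbolic rank test behind parameter-redundancy detection:
# form an exhaustive summary, differentiate with respect to the parameters, and
# compare the Jacobian rank with the number of parameters.
import sympy as sp

phi, p = sp.symbols("phi p", positive=True)           # e.g. survival and detection probabilities
# Toy exhaustive summary in which phi and p only ever appear as a product:
kappa = sp.Matrix([phi * p, (phi * p) ** 2])
D = kappa.jacobian([phi, p])
print(D.rank())        # 1 < 2 parameters -> parameter redundant (only phi*p is estimable)

# Adding a term that separates the parameters (as combining data sets can) restores full rank:
kappa2 = sp.Matrix([phi * p, phi])
print(kappa2.jacobian([phi, p]).rank())   # 2 -> all parameters estimable
```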
Consensus Classification Using Non-Optimized Classifiers.
Brownfield, Brett; Lemos, Tony; Kalivas, John H
2018-04-03
Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k-nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for the best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without explicitly classifying the sample with each individual method; that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured on 13 chemical and physical variables. In all cases, fusion of non-optimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
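A generic sketch of fusing several classifiers left at their default (non-optimized) settings, here by simple majority vote on the wine-cultivar data mentioned above; this mirrors the spirit of classifier fusion, not the authors' exact fusion rule:

```python
# Illustrative fusion of classifiers kept at default (non-optimized) settings via majority vote.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier

X, y = load_wine(return_X_y=True)               # three wine cultivars, 13 variables
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

fused = VotingClassifier([
    ("knn", KNeighborsClassifier()),            # default tuning parameters throughout
    ("lda", LinearDiscriminantAnalysis()),
    ("tree", DecisionTreeClassifier(random_state=0)),
], voting="hard")
fused.fit(X_tr, y_tr)

for name, clf in fused.named_estimators_.items():   # individual (non-optimized) accuracies
    print(name, f"{clf.score(X_te, y_te):.2f}")
print("fused", f"{fused.score(X_te, y_te):.2f}")    # fused accuracy
```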
Coexisting multiple attractors and riddled basins of a memristive system.
Wang, Guangyi; Yuan, Fang; Chen, Guanrong; Zhang, Yu
2018-01-01
In this paper, a new memristor-based chaotic system is designed, analyzed, and implemented. Multistability, multiple attractors, and complex riddled basins are observed from the system, which are investigated along with other dynamical behaviors such as equilibrium points and their stabilities, symmetrical bifurcation diagrams, and sustained chaotic states. With different sets of system parameters, the system can also generate various multi-scroll attractors. Finally, the system is realized by experimental circuits.
Heuristics for multiobjective multiple sequence alignment.
Abbasi, Maryam; Paquete, Luís; Pereira, Francisco B
2016-07-15
Aligning multiple sequences arises in many tasks in bioinformatics. However, the alignments produced by current software packages are highly dependent on the parameter settings, such as the relative importance of opening gaps with respect to the increase of similarity. Choosing only one parameter setting may introduce an undesirable bias in further steps of the analysis and lead to overly simplistic interpretations. In this work, we reformulate multiple sequence alignment from a multiobjective point of view. The goal is to generate several sequence alignments that represent a trade-off between maximizing the substitution score and minimizing the number of indels/gaps in the sum-of-pairs score function. This trade-off gives the practitioner further information about the similarity of the sequences, from which the most plausible alignment can be analysed and chosen. We introduce several heuristic approaches, based on local search procedures, that compute a set of sequence alignments representative of the trade-off between the two objectives (substitution score and indels). Several algorithm design options are discussed and analysed, with particular emphasis on the influence of the starting alignment and the neighborhood search definitions on the overall performance. A perturbation technique is proposed to improve the local search, which provides a wide range of high-quality alignments. The proposed approach is tested experimentally on a wide range of instances. We performed several experiments with sequences obtained from the benchmark database BAliBASE 3.0. To evaluate the quality of the results, we calculate the hypervolume indicator of the set of score vectors returned by the algorithms. The results obtained allow us to identify reasonably good choices of parameters for our approach. Further, we compared our method in terms of the correctly aligned pairs ratio and the correctly aligned columns ratio with respect to reference alignments. Experimental results show that our approaches can obtain better results than T-Coffee and Clustal Omega in terms of the first ratio.
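The hypervolume indicator used for evaluation is straightforward to compute in the bi-objective case; the score vectors below are invented, with the gap counts negated so that both objectives are maximized:

```python
# Small sketch of the 2D hypervolume indicator: the area dominated by a set of
# (substitution score, -gaps) points, measured against a fixed reference point.
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded by `ref` (maximization in both objectives)."""
    pts = np.asarray(sorted(points, key=lambda p: p[0], reverse=True), dtype=float)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:                       # only non-dominated height increments count
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

front = [(120, -30), (150, -45), (170, -60)]   # three trade-off alignments (invented)
print(hypervolume_2d(front, ref=(0, -100)))    # larger hypervolume = better approximation set
```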
Slide Set: Reproducible image analysis and batch processing with ImageJ.
Nanes, Benjamin A
2015-11-01
Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
Ruys, Andrew J.
2018-01-01
Electrospun fibres have gained broad interest in biomedical applications, including tissue engineering scaffolds, due to their potential in mimicking extracellular matrix and producing structures favourable for cell and tissue growth. The development of scaffolds often involves multivariate production parameters and multiple output characteristics to define product quality. In this study on electrospinning of polycaprolactone (PCL), response surface methodology (RSM) was applied to investigate the determining parameters and find optimal settings to achieve the desired properties of fibrous scaffold for acetabular labrum implant. The results showed that solution concentration influenced fibre diameter, while elastic modulus was determined by solution concentration, flow rate, temperature, collector rotation speed, and interaction between concentration and temperature. Relationships between these variables and outputs were modelled, followed by an optimization procedure. Using the optimized setting (solution concentration of 10% w/v, flow rate of 4.5 mL/h, temperature of 45 °C, and collector rotation speed of 1500 RPM), a target elastic modulus of 25 MPa could be achieved at a minimum possible fibre diameter (1.39 ± 0.20 µm). This work demonstrated that multivariate factors of production parameters and multiple responses can be investigated, modelled, and optimized using RSM. PMID:29562614
Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.
2010-01-01
Pilot points for parameter estimation were creatively used to address heterogeneity at both the well field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers, with one set of inner pilot points (totaling 158) having high spatial density to represent hydraulic conductivity at the site, while a second set of outer points (totaling 36) of lower spatial density was used to represent hydraulic conductivity further from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent in order to simulate proper boundary conditions that have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers of the conductivity field obtained by interpolation from the outer pilot points. The use of this dual inner-outer scale parameterization (with inner parameters constituting multipliers for outer parameters) allowed a smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale, where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept at a minimum to enable reasonable calibration run times.
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that, for a particular character, the font glyphs may have different shapes but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures: dominant point determination and data fitting. The first partitions the outlines into segments and the second fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points and, for each dominant point, the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected through a maximization process specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.
Finite Nuclei in the Quark-Meson Coupling Model.
Stone, J R; Guichon, P A M; Reinhard, P G; Thomas, A W
2016-03-04
We report the first use of the effective quark-meson coupling (QMC) energy density functional (EDF), derived from a quark model of hadron structure, to study a broad range of ground state properties of even-even nuclei across the periodic table in the nonrelativistic Hartree-Fock+BCS framework. The novelty of the QMC model is that the nuclear medium effects are treated through modification of the internal structure of the nucleon. The density dependence is microscopically derived and the spin-orbit term arises naturally. The QMC EDF depends on a single set of four adjustable parameters having a clear physics basis. When applied to diverse ground state data, the QMC EDF already produces, in its present simple form, overall agreement with experiment of a quality comparable to a representative Skyrme EDF. There exist, however, multiple Skyrme parameter sets, frequently tailored to describe selected nuclear phenomena. The QMC EDF set of fewer parameters, derived in this work, is not open to such variation, the chosen set being applied, without adjustment, to both the properties of finite nuclei and nuclear matter.
A deterministic (non-stochastic) low frequency method for geoacoustic inversion.
Tolstoy, A
2010-06-01
It is well known that multiple frequency sources are necessary for accurate geoacoustic inversion. This paper presents an inversion method which uses the low frequency (LF) spectrum only to estimate bottom properties even in the presence of expected errors in source location, phone depths, and ocean sound-speed profiles. Matched field processing (MFP) along a vertical array is used. The LF method first conducts an exhaustive search of the (five) parameter search space (sediment thickness, sound-speed at the top of the sediment layer, the sediment layer sound-speed gradient, the half-space sound-speed, and water depth) at 25 Hz and continues by retaining only the high MFP value parameter combinations. Next, frequency is slowly increased while again retaining only the high value combinations. At each stage of the process, only those parameter combinations which give high MFP values at all previous LF predictions are considered (an ever shrinking set). It is important to note that a complete search of each relevant parameter space seems to be necessary not only at multiple (sequential) frequencies but also at multiple ranges in order to eliminate sidelobes, i.e., false solutions. Even so, there are no mathematical guarantees that one final, unique "solution" will be found.
Iterative integral parameter identification of a respiratory mechanics model.
Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey
2012-07-18
Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
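A minimal sketch of the integral-based idea on a first-order single-compartment model P = E·V + R·Q + P0 (a simplification of the paper's second-order model, with an assumed square-wave flow profile): integrating both sides before the linear least-squares step damps the effect of measurement noise.

```python
# Integral-based identification of a single-compartment lung model (illustrative only).
# Integral formulation: ∫P dt = E ∫V dt + R ∫Q dt + P0*t  ->  linear least squares.
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(3)
t = np.linspace(0, 2.0, 400)                      # one breath, seconds
Q = np.where(t < 1.0, 0.5, -0.5)                  # square-wave flow (L/s), assumed profile
V = cumulative_trapezoid(Q, t, initial=0.0)       # volume from flow
E_true, R_true, P0 = 25.0, 10.0, 5.0              # cmH2O/L, cmH2O.s/L, cmH2O
P = E_true * V + R_true * Q + P0 + rng.normal(0, 0.5, t.size)   # noisy airway pressure

A = np.column_stack([cumulative_trapezoid(V, t, initial=0.0),
                     cumulative_trapezoid(Q, t, initial=0.0),
                     t])
b = cumulative_trapezoid(P, t, initial=0.0)
E_hat, R_hat, P0_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(f"E = {E_hat:.1f}, R = {R_hat:.1f}, P0 = {P0_hat:.1f}")
```

Because every column of the regression matrix is an integral of measured signals, no starting guesses are needed and high-frequency noise is largely averaged out, which is the operator-independence property highlighted in the abstract.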
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
Multiple-objective optimization in precision laser cutting of different thermoplastics
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.
2015-04-01
Thermoplastics are increasingly being used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because laser cutting is a localized and non-contact process, its use can produce precise cuts with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce errors and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The set of optimized processing parameters is determined based on the highest relational grade and corresponds to low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). This result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of the process parameters on the cutting characteristics. It was found that the laser power has the dominant effect on the HAZ for all thermoplastics.
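A small sketch of the grey relational grade calculation (the response values are illustrative, not the paper's measurements):

```python
# Grey relational analysis in brief: normalise each response, form relational
# coefficients against the ideal, and rank parameter settings by the mean coefficient.
import numpy as np

# rows = candidate parameter settings, cols = responses (e.g. HAZ, kerf width, roughness)
Y = np.array([[0.30, 0.42, 1.8],
              [0.25, 0.40, 1.5],
              [0.35, 0.55, 2.1]])

norm = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))   # smaller-is-better normalisation
delta = np.abs(1.0 - norm)                                      # deviation from the ideal (=1)
zeta = 0.5                                                      # distinguishing coefficient
coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coef.mean(axis=1)                                       # grey relational grade per setting
print("grey relational grades:", np.round(grade, 3), "-> best setting:", grade.argmax())
```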
Osche, G R
2000-08-20
Single- and multiple-pulse detection statistics are presented for aperture-averaged direct detection optical receivers operating against partially developed speckle fields. A partially developed speckle field arises when the probability density function of the received intensity does not follow negative exponential statistics. The case of interest here is the target surface that exhibits diffuse as well as specular components in the scattered radiation. An approximate expression is derived for the integrated intensity at the aperture, which leads to single- and multiple-pulse discrete probability density functions for the case of a Poisson signal in Poisson noise with an additive coherent component. In the absence of noise, the single-pulse discrete density function is shown to reduce to a generalized negative binomial distribution. The radar concept of integration loss is discussed in the context of direct detection optical systems where it is shown that, given an appropriate set of system parameters, multiple-pulse processing can be more efficient than single-pulse processing over a finite range of the integration parameter n.
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
Conditional High-Order Boltzmann Machines for Supervised Relation Learning.
Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu
2017-09-01
Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machine and its variants have shown their great potentials in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal to perform supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann Machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relation, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using joint likelihood to provide good parameter initialization, and then finetuning them using conditional likelihood to enhance the discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance.
Stroet, Martin; Koziara, Katarzyna B; Malde, Alpeshkumar K; Mark, Alan E
2017-12-12
A general method for parametrizing atomic interaction functions is presented. The method is based on an analysis of surfaces corresponding to the difference between calculated and target data as a function of alternative combinations of parameters (parameter space mapping). The consideration of surfaces in parameter space as opposed to local values or gradients leads to a better understanding of the relationships between the parameters being optimized and a given set of target data. This in turn enables for a range of target data from multiple molecules to be combined in a robust manner and for the optimal region of parameter space to be trivially identified. The effectiveness of the approach is illustrated by using the method to refine the chlorine 6-12 Lennard-Jones parameters against experimental solvation free enthalpies in water and hexane as well as the density and heat of vaporization of the liquid at atmospheric pressure for a set of 10 aromatic-chloro compounds simultaneously. Single-step perturbation is used to efficiently calculate solvation free enthalpies for a wide range of parameter combinations. The capacity of this approach to parametrize accurate and transferrable force fields is discussed.
Prediction and Computation of Corrosion Rates of A36 Mild Steel in Oilfield Seawater
NASA Astrophysics Data System (ADS)
Paul, Subir; Mondal, Rajdeep
2018-04-01
The parameters that primarily control the corrosion rate and the life of steel structures are numerous, and they vary across different oceans and seawaters as well as with depth. While the effect of a single parameter on corrosion behavior is known, the conjoint effects of multiple parameters and the interrelationships among the variables are complex. Millions of experiments would be required to understand the mechanism of corrosion failure. Statistical modeling such as an artificial neural network (ANN) is one solution that can reduce the amount of experimentation. An ANN model was developed using 170 sets of experimental data for A36 mild steel in simulated seawater, varying the corrosion-influencing parameters SO4^2-, Cl^-, HCO3^-, CO3^2-, CO2, O2, pH and temperature as inputs, with the corrosion current as output. About 60% of the experimental data were used to train the model, 20% for testing and 20% for validation. The model was developed by programming in Matlab. The model predicted the corrosion rate correctly for 80% of the validation data. Corrosion rates predicted by the ANN model are displayed in 3D graphics, which show many interesting phenomena arising from the conjoint effects of multiple variables and which might suggest new ideas for mitigating corrosion by simply modifying the chemistry of the constituents. The model could predict the corrosion rates of some real systems.
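A hedged stand-in for the described network (the 170 experimental records are not reproduced here, so synthetic inputs are used; the hidden-layer size and the collapse of the paper's 60/20/20 split into a simple train/test split are assumptions):

```python
# Illustrative ANN regression: 8 water-chemistry inputs -> corrosion current output.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.uniform(size=(170, 8))           # SO4, Cl, HCO3, CO3, CO2, O2, pH, T (scaled stand-ins)
y = 0.8 * X[:, 1] + 0.5 * X[:, 5] - 0.3 * X[:, 6] + 0.1 * rng.normal(size=170)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.2f}")
```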
Autoimmune control of lesion growth in CNS with minimal damage
NASA Astrophysics Data System (ADS)
Mathankumar, R.; Mohan, T. R. Krishna
2013-07-01
Lesions in the central nervous system (CNS) and their growth lead to debilitating diseases such as multiple sclerosis (MS) and Alzheimer's disease. We developed a model earlier [1, 2] which shows how lesion growth can be arrested through a beneficial auto-immune mechanism. We compared some of the dynamical patterns in the model with different facets of MS. The success of the approach depends on a set of control parameters, and their phase space was shown to have a smooth manifold separating the region of uncontrolled lesion growth from the controlled region. Here we show that an optimal set of parameter values exists in the model which minimizes system damage while, at once, achieving control of lesion growth.
Aptamer-conjugated nanoparticles for cancer cell detection.
Medley, Colin D; Bamrungsap, Suwussa; Tan, Weihong; Smith, Joshua E
2011-02-01
Aptamer-conjugated nanoparticles (ACNPs) have been used for a variety of applications, particularly dual nanoparticles for magnetic extraction and fluorescent labeling. In this type of assay, silica-coated magnetic and fluorophore-doped silica nanoparticles are conjugated to highly selective aptamers to detect and extract targeted cells in a variety of matrixes. However, considerable improvements are required in order to increase the selectivity and sensitivity of this two-particle assay to be useful in a clinical setting. To accomplish this, several parameters were investigated, including nanoparticle size, conjugation chemistry, use of multiple aptamer sequences on the nanoparticles, and use of multiple nanoparticles with different aptamer sequences. After identifying the best-performing elements, the improvements made to this assay's conditional parameters were combined to illustrate the overall enhanced sensitivity and selectivity of the two-particle assay using an innovative multiple aptamer approach, signifying a critical feature in the advancement of this technique.
Simultaneous fits in ISIS on the example of GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern
2015-04-01
Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly in the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow, in principle, fitting data from multiple data sets individually, the syntax used in these tools is often not well suited for this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
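Outside ISIS, the same parameter-tying idea can be sketched with a joint residual vector in which two data sets share one physical parameter (toy exponential spectra with invented values):

```python
# Toy illustration of tying a physically shared parameter across data sets:
# two spectra share the same e-folding energy but have independent normalisations.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
E = np.linspace(1.0, 20.0, 50)                    # energy grid (keV)
model = lambda norm, efold: norm * np.exp(-E / efold)
y1 = model(3.0, 7.0) + rng.normal(0, 0.02, E.size)   # data set 1
y2 = model(1.2, 7.0) + rng.normal(0, 0.02, E.size)   # data set 2, same E_fold

def residuals(p):                                  # p = [norm1, norm2, shared E_fold]
    n1, n2, efold = p
    return np.concatenate([y1 - model(n1, efold), y2 - model(n2, efold)])

fit = least_squares(residuals, x0=[1.0, 1.0, 5.0])
print("norm1, norm2, shared E_fold:", np.round(fit.x, 2))
```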
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
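A compact sketch of the multiple-imputation idea for a predictor censored below a detection limit, pooling a logistic-regression slope with Rubin's rules (synthetic data; the imputation model here assumes the predictor's distribution is known, which the paper's estimator does not require):

```python
# Hedged sketch (not the paper's estimator): impute below-detection-limit predictor
# values from a truncated normal, refit the model per imputation, pool with Rubin's rules.
import numpy as np
import statsmodels.api as sm
from scipy.stats import truncnorm

rng = np.random.default_rng(6)
n, dl = 500, -0.5                                   # sample size and detection limit (assumed)
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * x))))
censored = x < dl                                   # values below the limit are unobserved

M, betas, variances = 20, [], []
for _ in range(M):
    x_imp = x.copy()
    # draw censored values from a standard normal truncated above at the detection limit
    x_imp[censored] = truncnorm.rvs(-np.inf, dl, loc=0.0, scale=1.0,
                                    size=censored.sum(), random_state=rng)
    fit = sm.Logit(y, sm.add_constant(x_imp)).fit(disp=0)
    betas.append(fit.params[1])
    variances.append(fit.bse[1] ** 2)

q_bar = np.mean(betas)                              # Rubin's rules pooling
W, B = np.mean(variances), np.var(betas, ddof=1)
T = W + (1 + 1 / M) * B
print(f"pooled slope = {q_bar:.2f}  (SE = {np.sqrt(T):.2f})")
```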
Using multiple group modeling to test moderators in meta-analysis.
Schoemann, Alexander M
2016-12-01
Meta-analysis is a popular and flexible analysis that can be fit in many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis, researchers have access to powerful techniques associated with SEM and MLM. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis, a model is fit to each level of the moderator simultaneously. By constraining parameters across groups, any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis, where both the mean and the between-studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
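A bare-bones illustration of the multiple-group idea using a DerSimonian-Laird random-effects fit per moderator level (a stand-in for the SEM/MLM formulations discussed above; the effect sizes and variances are invented):

```python
# Fit a random-effects meta-analysis separately in each moderator group, then compare means.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pooled random-effects estimate, its variance, and the between-study variance tau^2."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(effects) - 1)) / c)
    w_star = 1.0 / (variances + tau2)
    mu = np.sum(w_star * effects) / np.sum(w_star)
    return mu, 1.0 / np.sum(w_star), tau2

group_A = dersimonian_laird(np.array([0.30, 0.45, 0.20]), np.array([0.02, 0.03, 0.02]))
group_B = dersimonian_laird(np.array([0.60, 0.75, 0.55]), np.array([0.02, 0.04, 0.03]))
z = (group_A[0] - group_B[0]) / np.sqrt(group_A[1] + group_B[1])   # Wald test of equal means
print(f"mu_A = {group_A[0]:.2f}, mu_B = {group_B[0]:.2f}, z = {z:.2f}")
```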
NASA Astrophysics Data System (ADS)
Jansen van Rensburg, Gerhardus J.; Kok, Schalk; Wilke, Daniel N.
2018-03-01
This paper presents the development and numerical implementation of a state variable based thermomechanical material model, intended for use within a fully implicit finite element formulation. Plastic hardening, thermal recovery and multiple cycles of recrystallisation can be tracked for single peak as well as multiple peak recrystallisation response. The numerical implementation of the state variable model extends on a J2 isotropic hypo-elastoplastic modelling framework. The complete numerical implementation is presented as an Abaqus UMAT and linked subroutines. Implementation is discussed with detailed explanation of the derivation and use of various sensitivities, internal state variable management and multiple recrystallisation cycle contributions. A flow chart explaining the proposed numerical implementation is provided as well as verification on the convergence of the material subroutine. The material model is characterised using two high temperature data sets for cobalt and copper. The results of finite element analyses using the material parameter values characterised on the copper data set are also presented.
Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M
2017-04-01
Parameter uncertainty in value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously. This false precision leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty of MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults ( n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of using MAUI (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring uncertainty of the predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
Improving automatic peptide mass fingerprint protein identification by combining many peak sets.
Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim
2004-08-05
An automated peak picking strategy is presented where several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry standard automated peak picking on a set of mass spectra obtained after tryptic in gel digestion of 2D-gel samples from human fetal fibroblasts. The set of spectra contain samples ranging from strong to weak spectra, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry standard method and a human operator, and equal in performance to these on strong and medium strong spectra. It is also demonstrated that peak sets selected by a human operator display a considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
The Stratway Program for Strategic Conflict Resolution: User's Guide
NASA Technical Reports Server (NTRS)
Hagen, George E.; Butler, Ricky W.; Maddalon, Jeffrey M.
2016-01-01
Stratway is a strategic conflict detection and resolution program. It provides both intent-based conflict detection and conflict resolution for a single ownship in the presence of multiple traffic aircraft and weather cells defined by moving polygons. It relies on a set of heuristic search strategies to solve conflicts. These strategies are user configurable through multiple parameters. The program can be called from other programs through an application program interface (API) and can also be executed from a command line.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
Bayesian LASSO, scale space and decision making in association genetics.
Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J
2015-01-01
LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
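The scale-space idea above, fitting the model over a whole grid of fixed tuning parameters rather than treating the tuning parameter as random, can be illustrated with a minimal frequentist sketch (not the authors' Bayesian LASSO): scan a range of penalties with scikit-learn and track how each coefficient enters or leaves the model. The simulated data and the 0.5 stability threshold are purely illustrative assumptions.

```python
# Minimal sketch (not the authors' Bayesian LASSO): scan a grid of fixed
# tuning parameters and track every coefficient, mimicking the scale-space
# view of shrinkage described above.  Data here are simulated for illustration.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 200                      # more predictors than observations
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[3, 17, 42]] = [2.0, -1.5, 1.0]   # only a few relevant effects
y = X @ beta_true + rng.normal(scale=0.5, size=n)

alphas = np.logspace(-3, 0, 30)      # the "scale space" of tuning parameters
paths = np.empty((len(alphas), p))
for i, a in enumerate(alphas):
    model = Lasso(alpha=a, max_iter=10000).fit(X, y)
    paths[i] = model.coef_

# Effects that stay non-zero over a wide range of tuning parameters are the
# most credible associations; a color map of `paths` gives a visualization
# analogous to the one described above.
stable = np.where((np.abs(paths) > 1e-6).mean(axis=0) > 0.5)[0]
print("predictors selected over most of the scale space:", stable)
```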
NASA Astrophysics Data System (ADS)
Dumon, M.; Van Ranst, E.
2016-01-01
This paper presents a free and open-source program called PyXRD (short for Python X-ray diffraction) to improve the quantification of complex, poly-phasic mixed-layer phyllosilicate assemblages. The validity of the program was checked by comparing its output with Sybilla v2.2.2, which shares the same mathematical formalism. The novelty of this program is the ab initio incorporation of the multi-specimen method, making it possible to share phases and (a selection of) their parameters across multiple specimens. PyXRD thus allows for modelling multiple specimens side by side, and this approach speeds up the manual refinement process significantly. To check the hypothesis that this multi-specimen set-up - as it effectively reduces the number of parameters and increases the number of observations - can also improve automatic parameter refinements, we calculated X-ray diffraction patterns for four theoretical mineral assemblages. These patterns were then used as input for one refinement employing the multi-specimen set-up and one employing the single-pattern set-ups. For all of the assemblages, PyXRD was able to reproduce or approximate the input parameters with the multi-specimen approach. Diverging solutions only occurred in single-pattern set-ups, which do not contain enough information to discern all minerals present (e.g. patterns of heated samples). Assuming a correct qualitative interpretation was made and a single pattern exists in which all phases are sufficiently discernible, the obtained results indicate a good quantification can often be obtained with just that pattern. However, these results from theoretical experiments cannot automatically be extrapolated to all real-life experiments. In any case, PyXRD has proven to be useful when X-ray diffraction patterns are modelled for complex mineral assemblages containing mixed-layer phyllosilicates with a multi-specimen approach.
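The benefit of the multi-specimen set-up, sharing phase parameters across several patterns so that the parameter-to-observation ratio drops, can be sketched generically with a joint least-squares refinement. The one-peak "pattern" below is a hypothetical stand-in and none of this uses PyXRD's actual API.

```python
# Illustrative sketch of the multi-specimen idea (not PyXRD's actual API):
# several specimens share the same phase parameters, and a joint refinement
# fits them side by side, reducing the parameter-to-observation ratio.
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(5.0, 35.0, 400)                     # hypothetical 2-theta axis

def pattern(x, center, width, scale):
    """Toy one-peak 'diffraction pattern' standing in for a full model."""
    return scale * np.exp(-0.5 * ((x - center) / width) ** 2)

# Shared (phase) parameters: peak center and width; per-specimen scale factors.
true_center, true_width = 12.3, 0.8
true_scales = [1.0, 0.6, 1.4]
rng = np.random.default_rng(1)
observed = [pattern(x, true_center, true_width, s) + rng.normal(0, 0.01, x.size)
            for s in true_scales]

def residuals(theta):
    center, width, *scales = theta
    res = [pattern(x, center, width, s) - obs for s, obs in zip(scales, observed)]
    return np.concatenate(res)

fit = least_squares(residuals, x0=[10.0, 1.0, 1.0, 1.0, 1.0])
print("refined shared parameters:", fit.x[:2])
print("refined per-specimen scales:", fit.x[2:])
```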
Some controversial multiple testing problems in regulatory applications.
Hung, H M James; Wang, Sue-Jane
2009-01-01
Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary endpoint and secondary endpoint needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gate keeping strategies often impose too many logical restrictions to make sense, particularly to deal with the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinical relevant orderings or sets can be a viable alternative to resolving the illogical problems requiring more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism as a result of this requirement is a group-sequential design strategy that starts with a conservative sample size planning and then utilizes an alpha spending function to possibly reach the conclusion early.
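A minimal sketch of one of the sequential gatekeeping strategies discussed above, a fixed-sequence test, illustrates the logical restriction the authors criticize: once the gate closes, later hypotheses are never tested, however small their p-values. The hypothesis ordering and p-values below are invented for illustration only.

```python
# Minimal sketch of a fixed-sequence (serial gatekeeping) strategy: hypotheses
# are tested in a pre-specified clinical order at the full alpha, and testing
# stops at the first non-significant result.  P-values below are illustrative.
def fixed_sequence_test(ordered_hypotheses, alpha=0.025):
    rejected = []
    for name, p in ordered_hypotheses:
        if p <= alpha:
            rejected.append(name)        # significant: pass the gate
        else:
            break                        # gate closes; later hypotheses not tested
    return rejected

hierarchy = [
    ("primary endpoint, high dose", 0.004),
    ("primary endpoint, low dose", 0.018),
    ("secondary endpoint, high dose", 0.060),   # gate closes here
    ("secondary endpoint, low dose", 0.001),    # never tested despite small p
]
print(fixed_sequence_test(hierarchy))
# -> ['primary endpoint, high dose', 'primary endpoint, low dose']
```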
Multi-scale modularity and motif distributional effect in metabolic networks.
Gao, Shang; Chen, Alan; Rahmani, Ali; Zeng, Jia; Tan, Mehmet; Alhajj, Reda; Rokne, Jon; Demetrick, Douglas; Wei, Xiaohui
2016-01-01
Metabolism is a set of fundamental processes that play important roles in a plethora of biological and medical contexts. It is understood that the topological information of reconstructed metabolic networks, such as modular organization, has crucial implications on biological functions. Recent interpretations of modularity in network settings provide a view of multiple network partitions induced by different resolution parameters. Here we ask the question: How do multiple network partitions affect the organization of metabolic networks? Since network motifs are often interpreted as the super families of evolved units, we further investigate their impact under multiple network partitions and investigate how the distribution of network motifs influences the organization of metabolic networks. We studied Homo sapiens, Saccharomyces cerevisiae and Escherichia coli metabolic networks; we analyzed the relationship between different community structures and motif distribution patterns. Further, we quantified the degree to which motifs participate in the modular organization of metabolic networks.
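The notion of multiple network partitions induced by a resolution parameter can be made concrete with generalized modularity Q(gamma): scanning gamma shifts the balance between coarse and fine modules. The tiny two-triangle graph below is a toy stand-in, not the metabolic networks studied above.

```python
# Toy sketch of resolution-dependent modularity Q(gamma) for candidate
# partitions of a small undirected graph (not the metabolic networks above).
# Scanning gamma shows how different resolutions favor coarser or finer modules.
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (0, 2),        # triangle A
         (3, 4), (4, 5), (3, 5),        # triangle B
         (2, 3)]                        # bridge between them
n, m = 6, len(edges)
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1
k = A.sum(axis=1)                        # node degrees

def modularity(partition, gamma):
    """Generalized Newman modularity with resolution parameter gamma."""
    q = 0.0
    for i, j in itertools.product(range(n), repeat=2):
        if partition[i] == partition[j]:
            q += A[i, j] - gamma * k[i] * k[j] / (2 * m)
    return q / (2 * m)

coarse = [0, 0, 0, 0, 0, 0]              # everything in one module
fine = [0, 0, 0, 1, 1, 1]                # the two triangles
for gamma in (0.5, 1.0, 2.0):
    print(f"gamma={gamma}: coarse Q={modularity(coarse, gamma):+.3f}, "
          f"fine Q={modularity(fine, gamma):+.3f}")
```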
Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment
2014-04-01
Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in the following... have been harvested. All harvested samples have been scanned by µCT and evaluated for multiple parameters. All samples have been mechanically...
Learning-based meta-algorithm for MRI brain extraction.
Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang
2011-01-01
The multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the test subject through exemplars. We further develop a level-set based fusion method to combine multiple candidate extractions together with a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method is able to produce more accurate and robust brain extraction results, with a Jaccard Index of 0.956 +/- 0.010 on a total of 340 subjects under 6-fold cross-validation, compared to those obtained by BET and BSE even using their best parameter combinations.
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually interference can be represented by an additive and a multiplicative factor. To eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra often mix physical light-scattering effects with chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range using the previously obtained parameters for the calibration set and test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes. Tested on two benchmark near-infrared data sets, the proposed method provided considerable improvement over full-spectrum estimation methods and was comparable with other state-of-the-art methods.
Red Cell Properties after Different Modes of Blood Transportation
Makhro, Asya; Huisjes, Rick; Verhagen, Liesbeth P.; Mañú-Pereira, María del Mar; Llaudet-Planas, Esther; Petkova-Kirova, Polina; Wang, Jue; Eichler, Hermann; Bogdanova, Anna; van Wijk, Richard; Vives-Corrons, Joan-Lluís; Kaestner, Lars
2016-01-01
Transportation of blood samples is unavoidable for assessment of specific parameters in blood of patients with rare anemias, blood doping testing, or for research purposes. Despite the awareness that shipment may substantially alter multiple parameters, no study of that extent has been performed to assess these changes and optimize shipment conditions to reduce transportation-related artifacts. Here we investigate the changes in multiple parameters in blood of healthy donors over 72 h of simulated shipment conditions. Three different anticoagulants (K3EDTA, Sodium Heparin, and citrate-based CPDA) for two temperatures (4°C and room temperature) were tested to define the optimal transportation conditions. Parameters measured cover common cytology and biochemistry parameters (complete blood count, hematocrit, morphological examination), red blood cell (RBC) volume, ion content and density, membrane properties and stability (hemolysis, osmotic fragility, membrane heat stability, patch-clamp investigations, and formation of microvesicles), Ca2+ handling, RBC metabolism, activity of numerous enzymes, and O2 transport capacity. Our findings indicate that individual sets of parameters may require different shipment settings (anticoagulants, temperature). Most of the parameters except for ion (Na+, K+, Ca2+) handling and, possibly, reticulocyte counts, tend to favor transportation at 4°C. Whereas plasma and intraerythrocytic Ca2+ cannot be accurately measured in the presence of chelators such as citrate and EDTA, the majority of Ca2+-dependent parameters are stabilized in CPDA samples. Even in blood samples from healthy donors transported using an optimized shipment protocol, the majority of parameters were stable within 24 h, a condition that may not hold for the samples of patients with rare anemias. This implies that shipping should be kept as short as possible, using fast courier services to the closest expert laboratory within reach. Mobile laboratories or the travel of the patients to the specialized laboratories may be the only option for some groups of patients with highly unstable RBCs. PMID:27471472
Wong, Wicger K H; Leung, Lucullus H T; Kwong, Dora L W
2016-01-01
To evaluate and optimize the parameters used in multiple-atlas-based segmentation of prostate cancers in radiation therapy. A retrospective study was conducted, and the accuracy of the multiple-atlas-based segmentation was tested on 30 patients. The effects of library size (LS), the number of atlases used for contour averaging, and the contour averaging strategy were also studied. The autogenerated contours were compared with the manually drawn contours. Dice similarity coefficient (DSC) and Hausdorff distance were used to evaluate the segmentation agreement. Mixed results were found between simultaneous truth and performance level estimation (STAPLE) and majority vote (MV) strategies. Multiple-atlas approaches were relatively insensitive to LS. An LS of ten was adequate, and further increase in the LS only showed insignificant gain. Multiple atlases performed better than a single atlas most of the time. Using more atlases did not guarantee better performance, with five atlases performing better than ten atlases. With our recommended setting, the median DSC for the bladder, rectum, prostate, seminal vesicle and femurs was 0.90, 0.77, 0.84, 0.56 and 0.95, respectively. Our study shows that multiple-atlas-based strategies have better accuracy than the single-atlas approach. STAPLE is preferred, and an LS of ten is adequate for prostate cases. Using five atlases for contour averaging is recommended. The contouring accuracy of the seminal vesicle still needs improvement, and manual editing is still required for the other structures. This article provides a better understanding of the influence of the parameters used in multiple-atlas-based segmentation of prostate cancers.
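The majority-vote (MV) contour averaging compared above can be sketched in a few lines of numpy: each atlas contributes a candidate label map, and voxels kept are those labeled by more than half of the atlases. STAPLE is a more involved EM procedure and is not shown; the binary masks below are synthetic stand-ins for propagated atlas contours.

```python
# Minimal sketch of majority-vote (MV) label fusion over candidate atlas
# segmentations, one of the two averaging strategies compared above.
# The binary masks here are synthetic stand-ins for propagated atlas contours.
import numpy as np

rng = np.random.default_rng(2)
truth = np.zeros((32, 32), dtype=int)
truth[10:22, 12:24] = 1                         # hypothetical "prostate" region

# Five atlas-propagated candidate segmentations = truth corrupted by noise.
candidates = [np.where(rng.random(truth.shape) < 0.1, 1 - truth, truth)
              for _ in range(5)]

votes = np.sum(candidates, axis=0)
fused = (votes > len(candidates) / 2).astype(int)   # majority vote

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

print("DSC of individual atlases:", [round(dice(c, truth), 3) for c in candidates])
print("DSC of majority-vote fusion:", round(dice(fused, truth), 3))
```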
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specht, W.L.
Macroinvertebrate sampling was performed at 16 locations in the Savannah River Site (SRS) streams using Hester-Dendy multiplate samplers and EPA Rapid Bioassessment Protocols (RBP). Some of the sampling locations were unimpacted, while other locations had been subject to various forms of perturbation by SRS activities. In general, the data from the Hester-Dendy multiplate samplers were more sensitive at detecting impacts than were the RBP data. We developed a Biotic Index for the Hester-Dendy data which incorporated eight community structure, function, and balance parameters. When tested using a data set that was unrelated to the data set that was used in developing the Biotic Index, the index was very successful at detecting impact.
Coexistence of multiple bifurcation modes in memristive diode-bridge-based canonical Chua's circuit
NASA Astrophysics Data System (ADS)
Bao, Bocheng; Xu, Li; Wu, Zhimin; Chen, Mo; Wu, Huagan
2018-07-01
Based on a memristive diode bridge cascaded with a series resistor and inductor filter, a modified memristive canonical Chua's circuit is presented in this paper. With the modelling of the memristive circuit, a normalised system model is built. Stability analyses of the equilibrium points are performed and bifurcation behaviours are investigated by numerical simulations and hardware experiments. Most extraordinarily, within a certain parameter region the memristive circuit exhibits the coexistence of multiple bifurcation modes under six sets of different initial values, resulting in the coexistence of four sets of topologically different and disconnected attractors. These coexisting attractors are easily captured by repeatedly switching the circuit power supplies on and off, which verifies the numerical simulations well.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using a 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes were selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
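Two of the building blocks described above, Latin hypercube sampling of a land-surface parameter space and a statistical emulator trained on ensemble output, can be sketched on a toy scale. The "climate model", parameter names, ranges, and the bias constraint below are hypothetical stand-ins; none of this uses HadRM3p, MOSES2, or TRIFFID.

```python
# Sketch of the perturbed-physics workflow described above, on a toy scale:
# (1) Latin hypercube sample of two hypothetical land-surface parameters,
# (2) a cheap stand-in for the regional climate model, (3) a Gaussian-process
# emulator of the output, (4) screening of parameter space against an
# observational constraint.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

bounds_low, bounds_high = [0.1, 0.5], [0.9, 2.0]     # hypothetical parameter ranges
sampler = qmc.LatinHypercube(d=2, seed=3)
theta = qmc.scale(sampler.random(n=40), bounds_low, bounds_high)

def toy_climate_model(p):
    """Stand-in for an expensive regional simulation: returns a summer bias (K)."""
    return 2.0 * (p[0] - 0.4) ** 2 + 0.8 * np.log(p[1])

bias = np.array([toy_climate_model(p) for p in theta])

emulator = GaussianProcessRegressor(normalize_y=True).fit(theta, bias)

# Dense prediction: keep only parameter sets whose emulated bias is plausible.
grid = qmc.scale(qmc.LatinHypercube(d=2, seed=5).random(n=2000), bounds_low, bounds_high)
pred = emulator.predict(grid)
plausible = grid[np.abs(pred) < 0.5]                  # constraint: |bias| < 0.5 K
print(f"{len(plausible)} of {len(grid)} candidate parameter sets remain plausible")
```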
Multi-Party Privacy-Preserving Set Intersection with Quasi-Linear Complexity
NASA Astrophysics Data System (ADS)
Cheon, Jung Hee; Jarecki, Stanislaw; Seo, Jae Hong
Secure computation of the set intersection functionality allows n parties to find the intersection between their datasets without revealing anything else about them. An efficient protocol for such a task could have multiple potential applications in commerce, health care, and security. However, all currently known secure set intersection protocols for n>2 parties have computational costs that are quadratic in the (maximum) number of entries in the dataset contributed by each party, making secure computation of the set intersection only practical for small datasets. In this paper, we describe the first multi-party protocol for securely computing the set intersection functionality with both the communication and the computation costs that are quasi-linear in the size of the datasets. For a fixed security parameter, our protocols require O(n²k) bits of communication and Õ(n²k) group multiplications per player in the malicious adversary setting, where k is the size of each dataset. Our protocol follows the basic idea of the protocol proposed by Kissner and Song, but we gain efficiency by using different representations of the polynomials associated with users' datasets and careful employment of algorithms that interpolate or evaluate polynomials on multiple points more efficiently. Moreover, the proposed protocol is robust. This means that the protocol outputs the desired result even if some corrupted players leave during the execution of the protocol.
Wang, Fei; Syeda-Mahmood, Tanveer; Vemuri, Baba C.; Beymer, David; Rangarajan, Anand
2010-01-01
In this paper, we propose a generalized group-wise non-rigid registration strategy for multiple unlabeled point-sets of unequal cardinality, with no bias toward any of the given point-sets. To quantify the divergence between the probability distributions – specifically Mixture of Gaussians – estimated from the given point sets, we use a recently developed information-theoretic measure called Jensen-Renyi (JR) divergence. We evaluate a closed-form JR divergence between multiple probabilistic representations for the general case where the mixture models differ in variance and the number of components. We derive the analytic gradient of the divergence measure with respect to the non-rigid registration parameters, and apply it to numerical optimization of the group-wise registration, leading to a computationally efficient and accurate algorithm. We validate our approach on synthetic data, and evaluate it on 3D cardiac shapes. PMID:20426043
Multiple steady states in atmospheric chemistry
NASA Technical Reports Server (NTRS)
Stewart, Richard W.
1993-01-01
The equations describing the distributions and concentrations of trace species are nonlinear and may thus possess more than one solution. This paper develops methods for searching for multiple physical solutions to chemical continuity equations and applies these to subsets of equations describing tropospheric chemistry. The calculations are carried out with a box model and use two basic strategies. The first strategy is a 'search' method. This involves fixing model parameters at specified values, choosing a wide range of initial guesses at a solution, and using a Newton-Raphson technique to determine if different initial points converge to different solutions. The second strategy involves a set of techniques known as homotopy methods. These do not require an initial guess, are globally convergent, and are guaranteed, in principle, to find all solutions of the continuity equations. The first method is efficient but essentially 'hit or miss' in the sense that it cannot guarantee that all solutions which may exist will be found. The second method is computationally burdensome but can, in principle, determine all the solutions of a photochemical system. Multiple solutions have been found for models that contain a basic complement of photochemical reactions involving O(x), HO(x), NO(x), and CH4. In the present calculations, transitions occur between stable branches of a multiple solution set as a control parameter is varied. These transitions are manifestations of hysteresis phenomena in the photochemical system and may be triggered by increasing the NO flux or decreasing the CH4 flux from current mean tropospheric levels.
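The first, 'search', strategy described above can be sketched directly: solve a small nonlinear steady-state system from a grid of initial guesses with a Newton-type solver and collect the distinct converged roots. The two-equation system below is a toy stand-in chosen to have three positive roots, not the paper's tropospheric chemistry, and the homotopy approach is not illustrated.

```python
# Sketch of the 'search' strategy: solve a small nonlinear steady-state system
# from many initial guesses and keep the distinct roots.
import numpy as np
from scipy.optimize import fsolve

def steady_state(c):
    x, y = c
    # toy "continuity equations": the cubic term admits three positive roots
    return [-(x - 1.0) * (x - 2.0) * (x - 3.0),
            y - x ** 2]

solutions = []
for x0 in np.linspace(0.2, 4.0, 12):
    for y0 in np.linspace(0.2, 12.0, 12):
        root, info, ok, _ = fsolve(steady_state, [x0, y0], full_output=True)
        if ok == 1 and not any(np.allclose(root, s, atol=1e-6) for s in solutions):
            solutions.append(root)

for s in sorted(solutions, key=lambda r: r[0]):
    print("steady state:", np.round(s, 4))
# expect three distinct solutions: (1, 1), (2, 4) and (3, 9)
```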
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because often, the untargeted acquisition is followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim of this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction in workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof-of-concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
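The screening step of a DoE approach, deciding which parameters are crucial, can be sketched with a two-level full factorial design and main-effect estimation. The parameter names and the hit-count response below are hypothetical stand-ins, not the vendor software settings tuned in the study.

```python
# Sketch of a two-level factorial screening step (a simple DoE) for ranking
# data-processing parameters by their main effect on the number of generated
# hits.  Parameter names and the hit-count function are hypothetical.
import itertools
import numpy as np

low_high = {
    "mass_tolerance_ppm": (2, 10),
    "min_peak_intensity": (500, 5000),
    "isotope_score_cutoff": (0.3, 0.8),
}
names = list(low_high)

def hit_count(settings):
    """Hypothetical response: number of hits produced by one processing run."""
    return (4000 / settings["min_peak_intensity"]
            + 300 * settings["mass_tolerance_ppm"]
            - 1000 * settings["isotope_score_cutoff"])

runs, responses = [], []
for levels in itertools.product([0, 1], repeat=len(names)):      # 2^3 design
    settings = {n: low_high[n][lvl] for n, lvl in zip(names, levels)}
    runs.append(levels)
    responses.append(hit_count(settings))

runs, responses = np.array(runs), np.array(responses)
for i, n in enumerate(names):
    effect = responses[runs[:, i] == 1].mean() - responses[runs[:, i] == 0].mean()
    print(f"main effect of {n}: {effect:+.1f} hits")
# parameters with large absolute effects are 'crucial' and go on to optimization
```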
Reconstituting protein interaction networks using parameter-dependent domain-domain interactions
2013-01-01
Background We can describe protein-protein interactions (PPIs) as sets of distinct domain-domain interactions (DDIs) that mediate the physical interactions between proteins. Experimental data confirm that DDIs are more consistent than their corresponding PPIs, lending support to the notion that analyses of DDIs may improve our understanding of PPIs and lead to further insights into cellular function, disease, and evolution. However, currently available experimental DDI data cover only a small fraction of all existing PPIs and, in the absence of structural data, determining which particular DDI mediates any given PPI is a challenge. Results We present two contributions to the field of domain interaction analysis. First, we introduce a novel computational strategy to merge domain annotation data from multiple databases. We show that when we merged yeast domain annotations from six annotation databases we increased the average number of domains per protein from 1.05 to 2.44, bringing it closer to the estimated average value of 3. Second, we introduce a novel computational method, parameter-dependent DDI selection (PADDS), which, given a set of PPIs, extracts a small set of domain pairs that can reconstruct the original set of protein interactions, while attempting to minimize false positives. Based on a set of PPIs from multiple organisms, our method extracted 27% more experimentally detected DDIs than existing computational approaches. Conclusions We have provided a method to merge domain annotation data from multiple sources, ensuring large and consistent domain annotation for any given organism. Moreover, we provided a method to extract a small set of DDIs from the underlying set of PPIs and we showed that, in contrast to existing approaches, our method was not biased towards DDIs with low or high occurrence counts. Finally, we used these two methods to highlight the influence of the underlying annotation density on the characteristics of extracted DDIs. Although increased annotations greatly expanded the possible DDIs, the lack of knowledge of the true biological false positive interactions still prevents an unambiguous assignment of domain interactions responsible for all protein network interactions. Executable files and examples are given at: http://www.bhsai.org/downloads/padds/ PMID:23651452
Gain-scheduling multivariable LPV control of an irrigation canal system.
Bolea, Yolanda; Puig, Vicenç
2016-07-01
The purpose of this paper is to present a multivariable linear parameter varying (LPV) controller with a gain scheduling Smith Predictor (SP) scheme applicable to open-flow canal systems. This LPV controller based on SP is designed taking into account the uncertainty in the estimation of delay and the variation of plant parameters according to the operating point. This new methodology can be applied to a class of delay systems that can be represented by a set of models that can be factorized into a rational multivariable model in series with left/right diagonal (multiple) delays, such as, the case of irrigation canals. A multiple pool canal system is used to test and validate the proposed control approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A model for incomplete longitudinal multivariate ordinal data.
Liu, Li C
2008-12-30
In studies where multiple outcome items are repeatedly measured over time, missing data often occur. A longitudinal item response theory model is proposed for analysis of multivariate ordinal outcomes that are repeatedly measured. Under the MAR assumption, this model accommodates missing data at any level (missing item at any time point and/or missing time point). It allows for multiple random subject effects and the estimation of item discrimination parameters for the multiple outcome items. The covariates in the model can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is described utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher-scoring solution, which provides standard errors for all model parameters, is used. A data set from a longitudinal prevention study is used to motivate the application of the proposed model. In this study, multiple ordinal items of health behavior are repeatedly measured over time. Because of a planned missing design, subjects answered only two-thirds of all items at any given time point. Copyright 2008 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low-order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.
Inverse estimation of parameters for an estuarine eutrophication model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model using the variational technique in conjunction with a vertical two-dimensional eutrophication model is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degrading of speed of convergence may occur. Two major factors which cause degradation of speed of convergence are cross effects among parameters and the multiple scales involved in the parameter system.
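The underlying idea, recovering model parameters by assimilating concentration data, can be sketched without the variational/adjoint machinery of the paper: calibrate a toy first-order decay model by minimizing the misfit to synthetic "observations". The decay model, noise level, and bounds are illustrative assumptions.

```python
# Compact sketch of parameter estimation by data assimilation, in the spirit
# of the inverse model described above but without the variational/adjoint
# machinery: a toy first-order nutrient-decay model is calibrated by
# minimizing the misfit to synthetic "observed" concentrations.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

t_obs = np.linspace(0, 10, 21)
true_decay = 0.35                                     # parameter to recover

def simulate(decay_rate):
    sol = solve_ivp(lambda t, c: -decay_rate * c, (0, 10), [5.0],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(6)
observed = simulate(true_decay) + rng.normal(0, 0.05, t_obs.size)

def misfit(decay_rate):
    return np.sum((simulate(decay_rate) - observed) ** 2)

result = minimize_scalar(misfit, bounds=(0.01, 1.0), method="bounded")
print(f"estimated decay rate: {result.x:.3f} (true value {true_decay})")
```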
Combining Costs and Benefits of Animal Activities to Assess Net Yield Outcomes in Apple Orchards.
Saunders, Manu E; Luck, Gary W
2016-01-01
Diverse animal communities influence ecosystem function in agroecosystems through positive and negative plant-animal interactions. Yet, past research has largely failed to examine multiple interactions that can have opposing impacts on agricultural production in a given context. We collected data on arthropod communities and yield quality and quantity parameters (fruit set, yield loss and net outcomes) in three major apple-growing regions in south-eastern Australia. We quantified the net yield outcome (accounting for positive and negative interactions) of multiple animal activities (pollination, fruit damage, biological control) across the entire growing season on netted branches, which excluded vertebrate predators of arthropods, and open branches. Net outcome was calculated as the number of undamaged fruit at harvest as a proportion of the number of blossoms (i.e., potential fruit yield). Vertebrate exclusion resulted in lower levels of fruit set and higher levels of arthropod damage to apples, but did not affect net outcomes. Yield quality and quantity parameters (fruit set, yield loss, net outcomes) were not directly associated with arthropod functional groups. Model variance and significant differences between the ratio of pest to beneficial arthropods between regions indicated that complex relationships between environmental factors and multiple animal interactions have a combined effect on yield. Our results show that focusing on a single crop stage, species group or ecosystem function/service can overlook important complexity in ecological processes within the system. Accounting for this complexity and quantifying the net outcome of ecological interactions within the system, is more informative for research and management of biodiversity and ecosystem services in agricultural landscapes.
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation addresses two aspects of parameter identification for estimating the discharges of multiple water level stations by multi-objective optimization. One is how to adjust the parameters to estimate the discharges accurately. The other is which optimization algorithms are suitable for the parameter identification. Among previous studies, one minimizes the weighted error of the discharges of multiple water level stations by single-objective optimization, while others minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. This presentation instead simultaneously minimizes the errors of the discharges of the multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0km2. There are thirteen rainfall stations and three water level stations. Nine flood events are investigated; they occurred from 2005 to 2012 and their maximum discharges exceed 1,000m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500m x 500m, and two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately: twelve are hydrological parameters and two are the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is a uniform sampling algorithm, and the discharges are calculated for parameter values sampled by a simplified version of it. The observed discharge is bracketed by the calculated discharges, which suggests that it might be possible to estimate the discharge accurately by adjusting the parameters. It is true that the discharge of a water level station can be accurately estimated by using the parameter values optimized for that station. However, there are cases where the discharge calculated with parameter values optimized for one water level station does not match the observed discharge at another station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select parameter values from the Pareto optimal solutions by the condition that all the errors, normalized by the minimum error of the corresponding water level station, are under 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin Hypercube sampling are compared. The five implementations are NSGA2 and PAES from the optimization software inspyred, and MCO_NSGA2R, MOPSOCD and NSGA2R_NSGA2R from the statistical software R. NSGA2, PAES and MOPSOCD are a genetic algorithm, an evolution strategy and a particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two NSGA2 implementations in R outperform the others and are promising for the parameter identification of the PWRI distributed hydrological model.
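The selection rule described above (keep Pareto-optimal parameter sets whose errors, normalized by the per-station minimum, are all below 3) can be sketched in numpy. The candidate error matrix below is random stand-in data, not output of the PWRI model or the R/inspyred optimizers.

```python
# Sketch of the selection step: given per-station errors for many candidate
# parameter sets, keep the Pareto-optimal sets and then those whose errors,
# normalized by the minimum error of the corresponding station, are below 3.
import numpy as np

rng = np.random.default_rng(7)
errors = rng.gamma(shape=2.0, scale=1.0, size=(200, 3))   # 200 candidates x 3 stations

def is_dominated(i, errs):
    others = np.delete(errs, i, axis=0)
    return np.any(np.all(others <= errs[i], axis=1) & np.any(others < errs[i], axis=1))

pareto = np.array([not is_dominated(i, errors) for i in range(len(errors))])
pareto_errors = errors[pareto]

normalized = pareto_errors / pareto_errors.min(axis=0)    # per-station normalization
acceptable = np.all(normalized < 3.0, axis=1)

print(f"{pareto.sum()} Pareto-optimal candidates, "
      f"{acceptable.sum()} of them acceptable at every station")
```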
Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.
Vilhelmsen, Troels N; Ferré, Ty P A
2018-05-01
Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in data used to derive model structure and used to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given our common financial restrictions, it is critical that we identify data with maximal information content with respect to forecast of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include: simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement election approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets. © 2017, National Ground Water Association.
Lee, Y; Tien, J M
2001-01-01
We present mathematical models that determine the optimal parameters for strategically routing multidestination traffic in an end-to-end network setting. Multidestination traffic refers to a traffic type that can be routed to any one of a multiple number of destinations. A growing number of communication services is based on multidestination routing. In this parameter-driven approach, a multidestination call is routed to one of the candidate destination nodes in accordance with predetermined decision parameters associated with each candidate node. We present three different approaches: (1) a link utilization (LU) approach, (2) a network cost (NC) approach, and (3) a combined parametric (CP) approach. The LU approach provides the solution that would result in an optimally balanced link utilization, whereas the NC approach provides the least expensive way to route traffic to destinations. The CP approach, on the other hand, provides multiple solutions that help leverage link utilization and cost. The LU approach has in fact been implemented by a long distance carrier resulting in a considerable efficiency improvement in its international direct services, as summarized.
NASA Astrophysics Data System (ADS)
Varun, Sajja; Reddy, Kalakada Bhargav Bal; Vardhan Reddy, R. R. Vishnu
2016-09-01
In this research work, a multi-response optimization technique has been developed using traditional desirability analysis and non-traditional particle swarm optimization techniques (for different customers' priorities) in wire electrical discharge machining (WEDM). Monel 400 was selected as the work material for experimentation. The effects of key process parameters such as pulse on time (TON), pulse off time (TOFF), peak current (IP) and wire feed (WF) on material removal rate (MRR) and surface roughness (SR) in the WEDM operation were investigated. Further, the responses MRR and SR were modelled empirically through regression analysis. The developed models can be used by machinists to predict MRR and SR over a wide range of input parameters. The optimization of multiple responses was carried out to satisfy the priorities of multiple users using the Taguchi-desirability function method and a particle swarm optimization technique. Analysis of variance (ANOVA) was also applied to investigate the effect of the influential parameters. Finally, confirmation experiments were conducted for the optimal set of machining parameters, and the improvement was verified.
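The desirability step of such a multi-response optimization can be sketched with Derringer-Suich desirability functions: MRR is larger-the-better, SR is smaller-the-better, and a weighted geometric mean gives a composite index reflecting a customer's priorities. The candidate settings, acceptable ranges, and weights below are illustrative assumptions, not the regression models fitted in the study.

```python
# Sketch of the desirability-function step for combining two responses into a
# composite index for each candidate parameter setting.
import numpy as np

# candidate settings -> (MRR in mm^3/min, SR in micrometres), hypothetical
candidates = {
    "TON=110, TOFF=50, IP=11": (9.2, 2.9),
    "TON=120, TOFF=45, IP=12": (12.5, 3.6),
    "TON=105, TOFF=55, IP=10": (7.4, 2.3),
}
mrr_lo, mrr_hi = 6.0, 14.0        # acceptable range for MRR
sr_lo, sr_hi = 2.0, 4.0           # acceptable range for SR
w_mrr, w_sr = 0.6, 0.4            # customer priority: productivity over finish

def d_larger(y, lo, hi):          # larger-the-better desirability
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def d_smaller(y, lo, hi):         # smaller-the-better desirability
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

for name, (mrr, sr) in candidates.items():
    d1, d2 = d_larger(mrr, mrr_lo, mrr_hi), d_smaller(sr, sr_lo, sr_hi)
    composite = d1 ** w_mrr * d2 ** w_sr          # weighted geometric mean
    print(f"{name}: D = {composite:.3f}")
```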
Generative Representations for Evolving Families of Designs
NASA Technical Reports Server (NTRS)
Hornby, Gregory S.
2003-01-01
Since typical evolutionary design systems encode only a single artifact with each individual, each time the objective changes a new set of individuals must be evolved. When this objective varies in a way that can be parameterized, a more general method is to use a representation in which a single individual encodes an entire class of artifacts. In addition to saving time by preventing the need for multiple evolutionary runs, the evolution of parameter-controlled designs can create families of artifacts with the same style and a reuse of parts between members of the family. In this paper an evolutionary design system is described which uses a generative representation to encode families of designs. Because a generative representation is an algorithmic encoding of a design, its input parameters are a way to control aspects of the design it generates. By evaluating individuals multiple times with different input parameters the evolutionary design system creates individuals in which the input parameter controls specific aspects of a design. This system is demonstrated on two design substrates: neural-networks which solve the 3/5/7-parity problem and three-dimensional tables of varying heights.
Multiparameter Estimation in Networked Quantum Sensors
NASA Astrophysics Data System (ADS)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-01
We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
NASA Astrophysics Data System (ADS)
Janardhanan, S.; Datta, B.
2011-12-01
Surrogate models are widely used to develop computationally efficient simulation-optimization models to solve complex groundwater management problems. Artificial intelligence based models are most often used for this purpose where they are trained using predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known. However, in most practical situations these values are uncertain. Under these circumstances the application of such approximation surrogates becomes limited. In our study we develop a surrogate model based coupled simulation optimization methodology for determining optimal pumping strategies for coastal aquifers considering parameter uncertainty. An ensemble surrogate modeling approach is used along with multiple realization optimization. The methodology is used to solve a multi-objective coastal aquifer management problem considering two conflicting objectives. Hydraulic conductivity and the aquifer recharge are considered as uncertain values. Three dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping and uncertain parameters to generate input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original data set is used to generate multiple data sets which belong to different regions in the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz, maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of saltwater intrusion are considered. The salinity levels resulting at strategic locations due to these pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate solution of pumping. Reliability concept is incorporated as the percent of the total number of surrogate models which satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints and as reducing reliability level decreases the constraint violation increases. Thus ensemble surrogate model based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
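The classical MMAE idea that GRAPE extends can be illustrated with a compact sketch: a bank of scalar Kalman filters, each assuming a different value of an unknown system parameter, with model probabilities updated from the innovation likelihoods. This is a generic textbook MMAE, not the GRAPE code itself; the system, noise levels, and parameter grid are invented for illustration.

```python
# Compact sketch of classical multiple-model adaptive estimation (MMAE):
# a bank of scalar Kalman filters, each assuming a different value of the
# unknown parameter 'a', with model probabilities updated from likelihoods.
import numpy as np

rng = np.random.default_rng(8)
a_true, q, r = 0.85, 0.05, 0.2              # true dynamics, process & measurement noise
a_grid = np.array([0.5, 0.7, 0.85, 0.95])   # parameter samples (one filter each)

# simulate the true system x_k = a*x_{k-1} + w,  z_k = x_k + v
x, measurements = 1.0, []
for _ in range(200):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    measurements.append(x + rng.normal(0, np.sqrt(r)))

x_hat = np.ones_like(a_grid)                # one state estimate per model
P = np.ones_like(a_grid)                    # one covariance per model
prob = np.full(a_grid.size, 1.0 / a_grid.size)

for z in measurements:
    x_pred = a_grid * x_hat
    P_pred = a_grid ** 2 * P + q
    S = P_pred + r                          # innovation covariance
    innov = z - x_pred
    K = P_pred / S                          # Kalman gain
    x_hat = x_pred + K * innov
    P = (1 - K) * P_pred
    likelihood = np.exp(-0.5 * innov ** 2 / S) / np.sqrt(2 * np.pi * S)
    prob = prob * likelihood
    prob /= prob.sum()

print("model probabilities:", np.round(prob, 3))
print("blended parameter estimate:", round(float(prob @ a_grid), 3))
```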
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S.; ...
2017-02-23
Here, a newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit.
Item Estimates under Low-Stakes Conditions: How Should Omits Be Treated?
ERIC Educational Resources Information Center
DeMars, Christine
Using data from a pilot test of science and math from students in 30 high schools, item difficulties were estimated with a one-parameter model (partial-credit model for the multi-point items). Some items were multiple-choice items, and others were constructed-response items (open-ended). Four sets of estimates were obtained: estimates for males…
Multivariate meta-analysis for non-linear and other multi-parameter associations
Gasparrini, A; Armstrong, B; Kenward, M G
2012-01-01
In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043
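The core pooling step of the second stage described above, combining per-study coefficient vectors with their covariance matrices, can be sketched as a fixed-effect multivariate (GLS) combination in numpy; the R package mvmeta adds random effects and meta-regression on top of this. The per-city spline coefficients and covariances below are synthetic stand-ins.

```python
# Small sketch of the fixed-effect multivariate pooling step underlying the
# two-stage analysis described above.  Per-city spline coefficients and
# within-city covariances are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(9)
true_beta = np.array([0.02, -0.01, 0.03])     # 3 spline coefficients describing a
                                              # non-linear temperature-mortality curve
studies = []
for _ in range(8):                            # 8 hypothetical cities
    S = np.diag(rng.uniform(0.0005, 0.002, 3))          # within-city covariance
    b = rng.multivariate_normal(true_beta, S)            # first-stage estimates
    studies.append((b, S))

precision_sum = np.zeros((3, 3))
weighted_sum = np.zeros(3)
for b, S in studies:
    S_inv = np.linalg.inv(S)
    precision_sum += S_inv
    weighted_sum += S_inv @ b

pooled_cov = np.linalg.inv(precision_sum)     # (sum of precisions)^-1
pooled_beta = pooled_cov @ weighted_sum       # GLS-weighted average of coefficients
print("pooled coefficients:", np.round(pooled_beta, 4))
print("pooled standard errors:", np.round(np.sqrt(np.diag(pooled_cov)), 4))
```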
Improving Upon an Empirical Procedure for Characterizing Magnetospheric States
NASA Astrophysics Data System (ADS)
Fung, S. F.; Neufeld, J.; Shao, X.
2012-12-01
Work is being performed to improve upon an empirical procedure for describing and predicting the states of the magnetosphere [Fung and Shao, 2008]. We showed in our previous paper that the state of the magnetosphere can be described by a quantity called the magnetospheric state vector (MS vector), consisting of a concatenation of a set of driver-state and a set of response-state parameters. The response-state parameters are time-shifted individually to account for their nominal response times, so that time does not appear as an explicit parameter in the MS prescription. The MS vector is thus conceptually analogous to the set of vital signs used to describe the state of health of a human body. In that previous study, we further demonstrated that, since response states are the results of driver states, there should be a correspondence between driver and response states. Such correspondence can be used to predict the subsequent response state from any known driver state with a few hours' lead time. In this paper, we investigate a few possible ways to improve the magnetospheric state descriptions and prediction efficiency by including additional driver state parameters, such as solar activity, IMF-Bx and -By, and by optimizing parameter bin sizes. Fung, S. F. and X. Shao, Specification of multiple geomagnetic responses to variable solar wind and IMF input, Ann. Geophys., 26, 639-652, 2008.
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Toledo, Fernando H.; Montesinos-López, José C.; Singh, Pawan; Juliana, Philomin; Salinas-Ruiz, Josafhat
2017-01-01
When a plant scientist wishes to make genomic-enabled predictions of multiple traits measured in multiple individuals in multiple environments, the most common strategy is to analyze a single trait at a time while taking into account genotype × environment interaction (G × E), because there is a lack of comprehensive models that simultaneously account for the correlated count traits and G × E. For this reason, in this study we propose a multiple-trait and multiple-environment model for count data. The proposed model was developed under the Bayesian paradigm, for which we developed a Markov chain Monte Carlo (MCMC) scheme with noninformative priors. This allows obtaining all the required full conditional distributions of the parameters, leading to an exact Gibbs sampler for the posterior distribution. Our model was tested with simulated data and a real data set. Results show that the proposed multi-trait, multi-environment model is an attractive alternative for modeling multiple count traits measured in multiple environments. PMID:28364037
AlignMe—a membrane protein sequence alignment web server
Stamm, Marcus; Staritzbichler, René; Khafizov, Kamil; Forrest, Lucy R.
2014-01-01
We present a web server for pair-wise alignment of membrane protein sequences, using the program AlignMe. The server makes available two operational modes of AlignMe: (i) sequence to sequence alignment, taking two sequences in fasta format as input, combining information about each sequence from multiple sources and producing a pair-wise alignment (PW mode); and (ii) alignment of two multiple sequence alignments to create family-averaged hydropathy profile alignments (HP mode). For the PW sequence alignment mode, four different optimized parameter sets are provided, each suited to pairs of sequences with a specific similarity level. These settings utilize different types of inputs: (position-specific) substitution matrices, secondary structure predictions and transmembrane propensities from transmembrane predictions or hydrophobicity scales. In the second (HP) mode, each input multiple sequence alignment is converted into a hydrophobicity profile averaged over the provided set of sequence homologs; the two profiles are then aligned. The HP mode enables qualitative comparison of transmembrane topologies (and therefore potentially of 3D folds) of two membrane proteins, which can be useful if the proteins have low sequence similarity. In summary, the AlignMe web server provides user-friendly access to a set of tools for analysis and comparison of membrane protein sequences. Access is available at http://www.bioinfo.mpg.de/AlignMe PMID:24753425
Multiple-Event Seismic Location Using the Markov-Chain Monte Carlo Technique
NASA Astrophysics Data System (ADS)
Myers, S. C.; Johannesson, G.; Hanley, W.
2005-12-01
We develop a new multiple-event location algorithm (MCMCloc) that utilizes the Markov-Chain Monte Carlo (MCMC) method. Unlike most inverse methods, the MCMC approach produces a suite of solutions, each of which is consistent with observations and prior estimates of data and model uncertainties. Model parameters in MCMCloc consist of event hypocenters and travel-time predictions. Data are arrival time measurements and phase assignments. Posterior estimates of event locations, path corrections, pick errors, and phase assignments are made through analysis of the posterior suite of acceptable solutions. Prior uncertainty estimates include correlations between travel-time predictions, correlations between measurement errors, the probability of misidentifying one phase for another, and the probability of spurious data. Inclusion of prior constraints on location accuracy allows direct utilization of ground-truth locations or well-constrained location parameters (e.g. from InSAR) that aid in the accuracy of the solution. Implementation of a correlation structure for travel-time predictions allows MCMCloc to operate over arbitrarily large geographic areas. Transition in behavior between a multiple-event locator for tightly clustered events and a single-event locator for solitary events is controlled by the spatial correlation of travel-time predictions. We test the MCMC locator on a regional data set of Nevada Test Site nuclear explosions. Event locations and origin times are known for these events, allowing us to test the features of MCMCloc using a high-quality ground truth data set. Preliminary tests suggest that MCMCloc provides excellent relative locations, often outperforming traditional multiple-event location algorithms, and excellent absolute locations are attained when constraints from one or more ground truth events are included. When phase assignments are switched, we find that MCMCloc properly corrects the error when predicted arrival times are separated by several seconds. In cases where the predicted arrival times are within the combined uncertainty of prediction and measurement errors, MCMCloc determines the probability of one or the other phase assignment and propagates this uncertainty into all model parameters. We find that MCMCloc is a promising method for simultaneously locating large, geographically distributed data sets. Because we incorporate prior knowledge on many parameters, MCMCloc is ideal for combining trusted data with data of unknown reliability. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-ABS-215048
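To convey the flavor of the approach (not the MCMCloc algorithm itself, which handles many events, phase assignments and correlated predictions), the sketch below runs a random-walk Metropolis sampler for a single epicentre and origin time given synthetic arrival times; the station geometry, velocity and noise level are invented:

import numpy as np

rng = np.random.default_rng(0)
stations = np.array([[0., 0.], [50., 10.], [20., 60.], [70., 70.]])   # km
v = 6.0                                                               # km/s, uniform velocity
true_xy, true_t0 = np.array([30., 25.]), 5.0
arrivals = true_t0 + np.linalg.norm(stations - true_xy, axis=1) / v
arrivals = arrivals + rng.normal(0, 0.1, size=len(stations))          # pick noise

def log_like(x, y, t0, sigma=0.1):
    pred = t0 + np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v
    return -0.5 * np.sum(((arrivals - pred) / sigma) ** 2)

state = np.array([10., 10., 0.])          # initial guess for (x, y, t0)
samples = []
for _ in range(20000):
    proposal = state + rng.normal(0, [1.0, 1.0, 0.1])
    if np.log(rng.random()) < log_like(*proposal) - log_like(*state):
        state = proposal
    samples.append(state.copy())
samples = np.array(samples[5000:])        # discard burn-in
print("posterior mean (x, y, t0):", samples.mean(axis=0))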
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: (i) free Brownian motion of the tracer, (ii) hop diffusion of the tracer in a periodic meshwork of squares, and (iii) transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
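A stripped-down sketch of the same idea for a simple free-diffusion model: step lengths are simulated over a grid of diffusion coefficients and compared with the 'measured' distribution by a two-sample Kolmogorov-Smirnov test, giving a p-value at each parameter setting (all values are illustrative):

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
dt = 0.01                                  # s, lag time between observations
D_true = 0.5                               # um^2/s, the "unknown" true value

def step_lengths(D, n=2000):
    # 2-D Brownian displacements over one lag time
    dx = rng.normal(0, np.sqrt(2 * D * dt), size=(n, 2))
    return np.linalg.norm(dx, axis=1)

measured = step_lengths(D_true)            # stands in for the experimental data

D_grid = np.linspace(0.1, 1.0, 19)
for D in D_grid:
    p = ks_2samp(measured, step_lengths(D)).pvalue
    print(f"D = {D:.2f} um^2/s  p = {p:.3f}")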
Identifiability and estimation of multiple transmission pathways in cholera and waterborne disease.
Eisenberg, Marisa C; Robertson, Suzanne L; Tien, Joseph H
2013-05-07
Cholera and many waterborne diseases exhibit multiple characteristic timescales or pathways of infection, which can be modeled as direct and indirect transmission. A major public health issue for waterborne diseases involves understanding the modes of transmission in order to improve control and prevention strategies. An important epidemiological question is: given data for an outbreak, can we determine the role and relative importance of direct vs. environmental/waterborne routes of transmission? We examine whether parameters for a differential equation model of waterborne disease transmission dynamics can be identified, both in the ideal setting of noise-free data (structural identifiability) and in the more realistic setting in the presence of noise (practical identifiability). We used a differential algebra approach together with several numerical approaches, with a particular emphasis on identifiability of the transmission rates. To examine these issues in a practical public health context, we apply the model to a recent cholera outbreak in Angola (2006). Our results show that the model parameters, including both water and person-to-person transmission routes, are globally structurally identifiable, although they become unidentifiable when the environmental transmission timescale is fast. Even for water dynamics within the identifiable range, when noisy data are considered, only a combination of the water transmission parameters can practically be estimated. This makes the waterborne transmission parameters difficult to estimate, leading to inaccurate estimates of important epidemiological parameters such as the basic reproduction number (R0). However, measurements of pathogen persistence time in environmental water sources or measurements of pathogen concentration in the water can improve model identifiability and allow for more accurate estimation of waterborne transmission pathway parameters as well as R0. Parameter estimates for the Angola outbreak suggest that both transmission pathways are needed to explain the observed cholera dynamics. These results highlight the importance of incorporating environmental data when examining waterborne disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
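For orientation, the sketch below integrates a simple SIWR-type model with both direct and waterborne transmission terms using scipy; the parameter values are invented for illustration and are not the estimates reported for the Angola outbreak:

import numpy as np
from scipy.integrate import solve_ivp

beta_I, beta_W = 0.25, 0.30        # person-to-person and water-to-person transmission rates
gamma, xi, delta = 0.2, 0.1, 0.05  # recovery, pathogen shedding, pathogen decay

def siwr(t, y):
    S, I, W, R = y
    new_infections = (beta_I * I + beta_W * W) * S
    return [-new_infections,
            new_infections - gamma * I,
            xi * I - delta * W,
            gamma * I]

y0 = [0.999, 0.001, 0.0, 0.0]      # susceptible, infectious, water pathogen, recovered
sol = solve_ivp(siwr, (0, 200), y0, max_step=1.0)
print("peak infectious fraction:", sol.y[1].max())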
Conditional probability of rainfall extremes across multiple durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2017-04-01
The conditional probability that extreme rainfall will occur at one location given that it is occurring at another location is critical in engineering design and management circumstances, including the planning of evacuation routes and the siting of emergency infrastructure. A challenge with this conditional simulation is that in many situations the interest is not so much the conditional distributions of rainfall of the same duration at two locations, but rather the conditional distribution of flooding in two neighbouring catchments, which may be influenced by rainfall of different critical durations. To deal with this challenge, a model that can consider both spatial and duration dependence of extremes is required. The aim of this research is to develop a model that takes both spatial dependence and duration dependence into account in the dependence structure of extreme rainfall. To achieve this aim, this study is a first attempt at combining extreme rainfall for multiple durations within a spatial extremes model framework based on max-stable process theory. Max-stable processes provide a general framework for modelling multivariate extremes with spatial dependence, but only for extreme rainfall of a single duration. To achieve dependence across multiple timescales, this study proposes a new approach that adds elements representing duration dependence of extremes to the covariance matrix of the max-stable model. To improve the efficiency of calculation, a re-parameterization proposed by Koutsoyiannis et al. (1998) is used to reduce the number of parameters that must be estimated. This re-parameterization enables the GEV parameters to be represented as a function of timescale. A stepwise framework has been adopted to achieve the overall aims of this research. Firstly, the re-parameterization is used to define a new set of common parameters for the marginal distributions across multiple durations. Secondly, spatial interpolation of the new parameter set is used to estimate marginal parameters across the full spatial domain. Finally, the spatial interpolation result is used as the initial condition to estimate dependence parameters via the likelihood function of the max-stable model for multiple durations. The Hawkesbury-Nepean catchment near Sydney in Australia was selected as the case study for this research. This catchment has 25 sub-daily rain gauges with a minimum record length of 24 years over a 300 km × 300 km region. The re-parameterization was applied to each station for durations from 1 hour to 24 hours and then evaluated by comparison with the at-site fitted GEV. The evaluation showed that the average R2 for all stations is around 0.80, with a range from 0.26 to 1.0. The output of the re-parameterization was then used to construct a spatial surface based on covariates including longitude, latitude, and elevation. The dependence model showed good agreement between the empirical and theoretical extremal coefficients for multiple durations. For the overall model, a leave-one-out cross-validation for all stations showed that it works well for 20 out of 25 stations. The potential application of this model framework was illustrated through a conditional map of return period and return level across multiple durations, both of which are important for engineering design and management.
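A rough sketch of the first step, assuming a simple power law in duration as a stand-in for the Koutsoyiannis-style re-parameterization: fit a GEV to synthetic annual maxima at each duration, then regress the fitted location and scale parameters on duration (the data and the functional form are illustrative only):

import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
durations = np.array([1, 2, 6, 12, 24])                      # hours
annual_maxima = {d: stats.genextreme.rvs(c=-0.1, loc=20 * d**0.3, scale=5 * d**0.3,
                                         size=30, random_state=rng)
                 for d in durations}

locs, scales = [], []
for d in durations:
    c, loc, scale = stats.genextreme.fit(annual_maxima[d])   # at-site GEV fit per duration
    locs.append(loc)
    scales.append(scale)

power = lambda d, a, b: a * d**b                             # assumed functional form
(mu_a, mu_b), _ = curve_fit(power, durations, locs)
(sig_a, sig_b), _ = curve_fit(power, durations, scales)
print(f"location(d) ~ {mu_a:.2f} * d^{mu_b:.2f}")
print(f"scale(d)    ~ {sig_a:.2f} * d^{sig_b:.2f}")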
Bayesian Regression of Thermodynamic Models of Redox Active Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Katherine
Finding a suitable functional redox material is a critical challenge to achieving scalable, economically viable technologies for storing concentrated solar energy in the form of a defected oxide. Demonstrating effectiveness for thermal storage or solar fuel is largely accomplished by using a thermodynamic model derived from experimental data. The purpose of this project is to test the accuracy of our regression model on representative data sets. Determining the accuracy of the model includes fitting the model parameters to the data, comparing models using different numbers of parameters, and analyzing the entropy and enthalpy calculated from the model. Three data sets were considered in this project: two demonstrating materials for solar fuels by water splitting and the other a material for thermal storage. Using Bayesian inference and Markov chain Monte Carlo (MCMC), parameter estimation was performed on the three data sets. Good results were achieved, although there were some deviations at the edges of the data input ranges. The evidence values were then calculated in a variety of ways and used to compare models with different numbers of parameters. It was believed that at least one of the parameters was unnecessary, and comparing evidence values demonstrated that the parameter was needed on one data set and not significantly helpful on another. The entropy was calculated by taking the derivative in one variable and integrating over another, and its uncertainty was also calculated by evaluating the entropy over multiple MCMC samples. Afterwards, all the parts were written up as a tutorial for the Uncertainty Quantification Toolkit (UQTk).
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background: Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data are generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates and demonstrated that audit data could be used to eliminate bias. Purpose: In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods: We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions, including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results: When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors, and in settings where treatment is not randomized, standard treatment effect estimates will be biased; and in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations: The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study. Conclusions: In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
A Necessary Condition for Coexistence of Autocatalytic Replicators in a Prebiotic Environment
Hernandez, Andres F.; Grover, Martha A.
2013-01-01
A necessary, but not sufficient, mathematical condition for the coexistence of short replicating species is presented here. The mathematical condition is obtained for a prebiotic environment, simulated as a fed-batch reactor, which combines monomer recycling, variable reaction order and a fixed monomer inlet flow with two replicator types and two monomer types. An extensive exploration of the parameter space in the model validates the robustness and efficiency of the mathematical condition, with nearly 1.7% of parameter sets meeting the condition and half of those exhibiting sustained coexistence. The results show that it is possible to generate a condition of coexistence, where two replicators sustain a linear growth simultaneously for a wide variety of chemistries, under an appropriate environment. The presence of multiple monomer types is critical to sustaining the coexistence of multiple replicator types. PMID:25369813
Induced subgraph searching for geometric model fitting
NASA Astrophysics Data System (ADS)
Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi
2017-11-01
In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs that includes the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.
a Web-Based Interactive Platform for Co-Clustering Spatio-Temporal Data
NASA Astrophysics Data System (ADS)
Wu, X.; Poorthuis, A.; Zurita-Milla, R.; Kraak, M.-J.
2017-09-01
Since current studies on clustering analysis mainly focus on exploring spatial or temporal patterns separately, a co-clustering algorithm is utilized in this study to enable the concurrent analysis of spatio-temporal patterns. To allow users to adopt and adapt the algorithm for their own analysis, it is integrated within the server side of an interactive web-based platform. The client side of the platform, running within any modern browser, is a graphical user interface (GUI) with multiple linked visualizations that facilitates the understanding, exploration and interpretation of the raw dataset and co-clustering results. Users can also upload their own datasets and adjust clustering parameters within the platform. To illustrate the use of this platform, an annual temperature dataset from 28 weather stations over 20 years in the Netherlands is used. After the dataset is loaded, it is visualized in a set of linked visualizations: a geographical map, a timeline and a heatmap. This aids the user in understanding the nature of their dataset and the appropriate selection of co-clustering parameters. Once the dataset is processed by the co-clustering algorithm, the results are visualized in small multiples, a heatmap and a timeline to provide various views for better understanding and further interpretation. Since the visualization and analysis are integrated in a seamless platform, the user can explore different sets of co-clustering parameters and instantly view the results in order to do iterative, exploratory data analysis. As such, this interactive web-based platform allows users to analyze spatio-temporal data using the co-clustering method and also helps the understanding of the results using multiple linked visualizations.
NASA Astrophysics Data System (ADS)
Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias
2015-07-01
Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation provides a feedback system and enables safe ablation near anatomical structures that would otherwise carry a high risk of damage. This study is based on the combined working volumes of optical coherence tomography (OCT) and an Er:YAG cutting laser. A high level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems, a cross-like incision is ablated with the Er:YAG laser and segmented with OCT at three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters, including discrete and specific ablation rates as the ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue, the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT-controlled laser bone ablation with minimal process time.
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
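A conceptual sketch of D-optimal time-point selection under an assumed bi-exponential decay model (the actual FLIM-FRET forward model and sensitivities are more involved): rows of a sensitivity matrix are added greedily so as to maximize the determinant of the resulting information matrix:

import numpy as np

t = np.linspace(0.1, 10.0, 90)        # 90 candidate time points (ns), illustrative
A1, tau1, tau2 = 0.6, 0.4, 2.5        # nominal parameter values, illustrative

# sensitivities of y = A1*exp(-t/tau1) + (1 - A1)*exp(-t/tau2) w.r.t. (A1, tau1, tau2)
J = np.column_stack([
    np.exp(-t / tau1) - np.exp(-t / tau2),
    A1 * t / tau1**2 * np.exp(-t / tau1),
    (1 - A1) * t / tau2**2 * np.exp(-t / tau2),
])

selected = []
for _ in range(10):                   # keep 10 of the 90 candidate points
    best_i, best_det = None, -np.inf
    for i in range(len(t)):
        if i in selected:
            continue
        Js = J[selected + [i]]
        # small ridge keeps the determinant non-degenerate for the first picks
        d = np.linalg.det(Js.T @ Js + 1e-9 * np.eye(3))
        if d > best_det:
            best_i, best_det = i, d
    selected.append(best_i)

print("selected time points:", np.sort(t[selected]))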
NASA Astrophysics Data System (ADS)
Padhi, Amit; Mallick, Subhashis
2014-03-01
Inversion of band- and offset-limited single-component (P wave) seismic data does not provide robust estimates of subsurface elastic parameters and density. Multicomponent seismic data can, in principle, circumvent this limitation but add to the complexity of the inversion algorithm because they require simultaneous optimization of multiple objective functions, one for each data component. In seismology, these multiple objectives are typically handled by constructing a single objective given as a weighted sum of the objectives of the individual data components, sometimes with additional regularization terms reflecting their interdependence, followed by a single-objective optimization. Multi-objective problems, including multicomponent seismic inversion, are however non-linear. They have non-unique solutions, known as the Pareto-optimal solutions. Therefore, casting such problems as a single-objective optimization provides one out of the entire set of Pareto-optimal solutions, which, in turn, may be biased by the choice of the weights. To handle multiple objectives, it is thus appropriate to treat the objective as a vector and simultaneously optimize each of its components so that the entire Pareto-optimal set of solutions can be estimated. This paper proposes such a novel multi-objective methodology using a non-dominated sorting genetic algorithm for waveform inversion of multicomponent seismic data. The applicability of the method is demonstrated using synthetic data generated from multilayer models based on a real well log. We document that the proposed method can reliably extract subsurface elastic parameters and density from multicomponent seismic data both when the subsurface is considered isotropic and when it is transversely isotropic with a vertical symmetry axis. We also compute approximate uncertainty values in the derived parameters. Although we restrict our inversion applications to horizontally stratified models, we outline a practical procedure for extending the method to approximately include local dips for each source-receiver offset pair. Finally, the applicability of the proposed method is not limited to seismic inversion; it could be used to invert different data types that require not only multiple objectives but also multiple physics to describe them.
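The non-dominated (Pareto) filtering at the core of such an algorithm can be sketched as follows; the two 'objectives' are random placeholders standing in for per-component waveform misfits:

import numpy as np

rng = np.random.default_rng(3)
misfits = rng.random((50, 2))            # e.g. misfit of two data components per candidate model

def pareto_front(F):
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        for j in range(len(F)):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False          # candidate i is dominated by candidate j
                break
    return np.where(keep)[0]

front = pareto_front(misfits)
print(f"{len(front)} non-dominated models out of {len(misfits)}")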
Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M
2015-10-01
New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17,479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
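The general flavor of gradient-threshold segmentation is sketched below; the 90th-percentile cut-off and the morphological clean-up are placeholder assumptions and not the empirically derived EGT rule itself:

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
image = rng.normal(100, 5, (256, 256))
image[80:180, 60:200] += 40                      # a bright "colony" on a flat background

gy, gx = np.gradient(image.astype(float))
grad_mag = np.hypot(gx, gy)

threshold = np.percentile(grad_mag, 90.0)        # placeholder for the EGT threshold rule
mask = grad_mag > threshold
mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))
mask = ndimage.binary_fill_holes(mask)
print("foreground fraction:", mask.mean())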
Least-Squares Self-Calibration of Imaging Array Data
NASA Technical Reports Server (NTRS)
Arendt, R. G.; Moseley, S. H.; Fixsen, D. J.
2004-01-01
When arrays are used to collect multiple appropriately-dithered images of the same region of sky, the resulting data set can be calibrated using a least-squares minimization procedure that determines the optimal fit between the data and a model of that data. The model parameters include the desired sky intensities as well as instrument parameters such as pixel-to-pixel gains and offsets. The least-squares solution simultaneously provides the formal error estimates for the model parameters. With a suitable observing strategy, the need for separate calibration observations is reduced or eliminated. We show examples of this calibration technique applied to HST NICMOS observations of the Hubble Deep Fields and simulated SIRTF IRAC observations.
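A toy one-dimensional version of this self-calibration, with per-pixel offsets only (gains are omitted to keep the model linear) and invented sizes and noise levels, can be written as a sparse least-squares problem:

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(5)
n_pix, n_dither = 8, 12
n_sky = n_pix + n_dither - 1                 # sky samples covered by all dithers
true_sky = rng.normal(10, 3, n_sky)
true_off = rng.normal(0, 1, n_pix)

A = lil_matrix((n_pix * n_dither, n_sky + n_pix))
y = np.zeros(n_pix * n_dither)
for d in range(n_dither):                    # each dither shifts the array by one sky sample
    for p in range(n_pix):
        r = d * n_pix + p
        A[r, d + p] = 1.0                    # sky sample seen by this pixel
        A[r, n_sky + p] = 1.0                # per-pixel offset
        y[r] = true_sky[d + p] + true_off[p] + rng.normal(0, 0.1)

solution = lsqr(A.tocsr(), y)[0]
sky_hat = solution[:n_sky]
resid = sky_hat - true_sky
resid -= resid.mean()                        # remove the overall sky/offset degeneracy
print("rms sky error:", np.sqrt((resid ** 2).mean()))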
An Alternative to the 3PL: Using Asymmetric Item Characteristic Curves to Address Guessing Effects
ERIC Educational Resources Information Center
Lee, Sora; Bolt, Daniel M.
2018-01-01
Both the statistical and interpretational shortcomings of the three-parameter logistic (3PL) model in accommodating guessing effects on multiple-choice items are well documented. We consider the use of a residual heteroscedasticity (RH) model as an alternative, and compare its performance to the 3PL with real test data sets and through simulation…
Sela, Itamar; Ashkenazy, Haim; Katoh, Kazutaka; Pupko, Tal
2015-07-01
Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Cosmological Parameters from the QUAD CMB Polarization Experiment
NASA Astrophysics Data System (ADS)
Castro, P. G.; Ade, P.; Bock, J.; Bowden, M.; Brown, M. L.; Cahill, G.; Church, S.; Culverhouse, T.; Friedman, R. B.; Ganga, K.; Gear, W. K.; Gupta, S.; Hinderks, J.; Kovac, J.; Lange, A. E.; Leitch, E.; Melhuish, S. J.; Memari, Y.; Murphy, J. A.; Orlando, A.; Pryke, C.; Schwarz, R.; O'Sullivan, C.; Piccirillo, L.; Rajguru, N.; Rusholme, B.; Taylor, A. N.; Thompson, K. L.; Turner, A. H.; Wu, E. Y. S.; Zemcov, M.; QUaD Collaboration
2009-08-01
In this paper, we present a parameter estimation analysis of the polarization and temperature power spectra from the second and third season of observations with the QUaD experiment. QUaD has for the first time detected multiple acoustic peaks in the E-mode polarization spectrum with high significance. Although QUaD-only parameter constraints are not competitive with previous results for the standard six-parameter ΛCDM cosmology, they do allow meaningful polarization-only parameter analyses for the first time. In a standard six-parameter ΛCDM analysis, we find the QUaD TT power spectrum to be in good agreement with previous results. However, the QUaD polarization data show some tension with ΛCDM. The origin of this 1σ-2σ tension remains unclear, and may point to new physics, residual systematics, or simple random chance. We also combine QUaD with the five-year WMAP data set and the SDSS luminous red galaxies 4th data release power spectrum, and extend our analysis to constrain individual isocurvature mode fractions, constraining the cold dark matter density, α_cdmi < 0.11 (95% confidence limit (CL)), neutrino density, α_ndi < 0.26 (95% CL), and neutrino velocity, α_nvi < 0.23 (95% CL), modes. Our analysis sets a benchmark for future polarization experiments.
A Flexible Approach for the Statistical Visualization of Ensemble Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, K.; Wilson, A.; Bremer, P.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
Probabilistic images (PBIS): A concise image representation technique for multiple parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, L.C.; Yeh, S.H.; Chen, Z.
1984-01-01
Based on m parametric images (PIs) derived from a dynamic series (DS), each pixel of DS is regarded as an m-dimensional vector. Given one set of normal samples (pixels) N and another of abnormal samples A, probability density functions (pdfs) of both sets are estimated. Any unknown sample is classified into N or A by calculating the probability of its being in the abnormal set using Bayes' theorem. Instead of estimating the multivariate pdfs, a distance ratio transformation is introduced to map the m-dimensional sample space to one-dimensional Euclidean space. Consequently, the image that localizes the regional abnormalities is characterized by the probability of being abnormal. This leads to the new representation scheme of PBIs. A Tc-99m HIDA study for detecting intrahepatic lithiasis (IL) was chosen as an example of constructing a PBI from 3 parameters derived from DS, and such a PBI was compared with those 3 PIs, namely, the retention ratio image (RRI), peak time image (TMAX) and excretion mean transit time image (EMTT). 32 normal subjects and 20 patients with proved IL were collected and analyzed. The resultant sensitivity and specificity of PBI were 97% and 98%, respectively. They were superior to those of any of the 3 PIs: RRI (94/97), TMAX (86/88) and EMTT (94/97). Furthermore, the contrast of PBI was much better than that of any other image. This new image formation technique, based on multiple parameters, shows the functional abnormalities in a structural way. Its good contrast makes the interpretation easy. This technique is powerful compared to the existing parametric image method.
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
An Integrated Framework for Parameter-based Optimization of Scientific Workflows.
Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel
2009-01-01
Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.
NASA Astrophysics Data System (ADS)
Krenn, Julia; Zangerl, Christian; Mergili, Martin
2017-04-01
r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to move away from discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist of (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
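A schematic of the nested subrange search, with a mock model standing in for r.randomwalk and a mock observation vector; only the loop over subrange combinations and the AUROC scoring skeleton carry over, everything else is a placeholder:

import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
observed = rng.integers(0, 2, 500)                    # observed impact (1) or no impact (0) per pixel

def mock_model_iii(mu_range, md_range):
    # placeholder returning an impact indicator index per pixel for one subrange pair
    centre = 0.5 * (mu_range[0] + mu_range[1]) + 0.005 * (md_range[0] + md_range[1])
    return np.clip(observed * centre + rng.normal(0, 0.2, observed.size), 0, 1)

mu_edges = np.linspace(0.05, 0.35, 4)                 # basal friction subrange edges (illustrative)
md_edges = np.linspace(20, 80, 4)                     # mass-to-drag subrange edges (illustrative)
mu_subranges = list(zip(mu_edges[:-1], mu_edges[1:]))
md_subranges = list(zip(md_edges[:-1], md_edges[1:]))

scores = {}
for mu_r, md_r in itertools.product(mu_subranges, md_subranges):
    iii = mock_model_iii(mu_r, md_r)
    scores[(mu_r, md_r)] = roc_auc_score(observed, iii)

best = max(scores, key=scores.get)
print("best subrange pair:", best, "AUROC =", round(scores[best], 3))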
Gas Emissions Acquired during the Aircraft Particle Emission Experiment (APEX) Series
NASA Technical Reports Server (NTRS)
Changlie, Wey; Chowen, Chou Wey
2007-01-01
NASA, in collaboration with other US federal agencies, engine/airframe manufacturers, airlines, and airport authorities, recently sponsored a series of 3 ground-based field investigations to examine the particle and gas emissions from a variety of in-use commercial aircraft. Emissions parameters were measured at multiple engine power settings, ranging from idle to maximum thrust, in samples collected at 3 different downstream locations in the exhaust. Sampling rakes at nominally 1 meter downstream contained multiple probes to facilitate a study of the spatial variation of emissions across the engine exhaust plane. Emission indices measured at 1 m were in good agreement with the engine certification data as well as predictions provided by the engine company. However, at low power settings, trace species emissions were observed to be highly dependent on ambient conditions and engine temperature.
NASA Astrophysics Data System (ADS)
Rudowicz, C.; Gnutek, P.
2010-01-01
Central quantities in the spectroscopy and magnetism of transition ions in crystals are the crystal (ligand) field parameters (CFPs). For orthorhombic, monoclinic, and triclinic site symmetry, CF analysis is prone to misinterpretations due to the large number of CFPs and the existence of correlated sets of alternative CFPs. In this review, we elucidate the intrinsic features of orthorhombic and lower symmetry CFPs and their implications. The alternative CFP sets, which yield identical energy levels, belong to different regions of CF parameter space and hence are intrinsically incompatible. Only their 'images', representing CFP sets expressed in the same region of CF parameter space, may be directly compared. Implications of these features for fitting procedures and for the meaning of fitted CFPs are categorized as negative (pitfalls) and positive (blessings). As a case study, the CFP sets for Tm3+ ions in KLu(WO4)2 are analysed and shown to be intrinsically incompatible. Inadvertent, and thus meaningless, comparisons of incompatible CFP sets result in various pitfalls, e.g., controversial claims about the values of CFPs obtained by other researchers, as well as incorrect structural conclusions or faulty systematics of CF parameters across the rare-earth ion series based on relative magnitudes of incompatible CFPs. Such pitfalls bear on the interpretation of, e.g., optical spectroscopy, inelastic neutron scattering, and magnetic susceptibility data. An extensive survey of the pertinent literature was carried out to assess recognition of compatibility problems. A great portion of the available orthorhombic and lower symmetry CFP sets are found to be intrinsically incompatible, yet these problems and their implications appear barely recognized. The considerable extent and consequences of the pitfalls revealed by our survey call for concerted remedial actions by researchers. A general approach based on the rhombicity ratio standardization may solve compatibility problems. Wider utilization of alternative CFP sets in multiple correlated fitting techniques may improve the reliability (blessing) of fitted CFPs. This review may be of interest to a broad range of researchers, from condensed matter physicists to physical chemists working on, e.g., high temperature superconductors, luminescent, optoelectronic, laser, and magnetic materials.
Hostettler, Isabel Charlotte; Muroi, Carl; Richter, Johannes Konstantin; Schmid, Josef; Neidert, Marian Christoph; Seule, Martin; Boss, Oliver; Pangalu, Athina; Germans, Menno Robbert; Keller, Emanuela
2018-01-19
OBJECTIVE: The aim of this study was to create prediction models for outcome parameters by decision tree analysis based on clinical and laboratory data in patients with aneurysmal subarachnoid hemorrhage (aSAH). METHODS: The database consisted of clinical and laboratory parameters of 548 patients with aSAH who were admitted to the Neurocritical Care Unit, University Hospital Zurich. To examine the model performance, the cohort was randomly divided into a derivation cohort (60% [n = 329]; training data set) and a validation cohort (40% [n = 219]; test data set). The classification and regression tree prediction algorithm was applied to predict death, functional outcome, and ventriculoperitoneal (VP) shunt dependency. Chi-square automatic interaction detection was applied to predict delayed cerebral infarction on days 1, 3, and 7. RESULTS: The overall mortality was 18.4%. The accuracy of the decision tree models was good for survival on day 1 and favorable functional outcome at all time points, with a difference between the training and test data sets of < 5%. Prediction accuracy for survival on day 1 was 75.2%. The most important differentiating factor was the interleukin-6 (IL-6) level on day 1. Favorable functional outcome, defined as Glasgow Outcome Scale scores of 4 and 5, was observed in 68.6% of patients. Favorable functional outcome at all time points had a prediction accuracy of 71.1% in the training data set, with procalcitonin on day 1 being the most important differentiating factor at all time points. A total of 148 patients (27%) developed VP shunt dependency. The most important differentiating factor was hyperglycemia on admission. CONCLUSIONS: The multiple variable analysis capability of decision trees enables exploration of dependent variables in the context of multiple changing influences over the course of an illness. The decision tree currently generated increases awareness of the early systemic stress response, which is seemingly pertinent for prognostication.
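A generic sketch of this kind of workflow (not the authors' model or data): a classification tree is trained on a 60/40 split of synthetic records with hypothetical feature names and its held-out accuracy is reported:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n = 548
X = np.column_stack([rng.lognormal(3, 1, n),      # hypothetical 'IL-6 on day 1'
                     rng.lognormal(-1, 1, n),     # hypothetical 'procalcitonin on day 1'
                     rng.normal(7, 2, n)])        # hypothetical 'glucose on admission'
y = (X[:, 0] + rng.normal(0, 20, n) > 25).astype(int)   # synthetic outcome label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, tree.predict(X_te)), 3))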
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-Ins of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
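The same Monte-Carlo idea expressed in Python rather than a spreadsheet (an illustrative analogue, not the authors' Excel workbook): fit a non-linear model, generate virtual data sets from the fitted curve plus residual-scale noise, refit each one, and read confidence intervals off the percentiles of the refitted parameters; the logistic growth model and all values are made up:

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def logistic(t, k, r, t_mid):
    return k / (1 + np.exp(-r * (t - t_mid)))

t = np.linspace(0, 48, 25)                                # hours
y = logistic(t, 9.0, 0.3, 18.0) + rng.normal(0, 0.2, t.size)

p_hat, _ = curve_fit(logistic, t, y, p0=[8, 0.2, 15])
sigma = np.std(y - logistic(t, *p_hat), ddof=len(p_hat))  # residual noise scale

draws = []
for _ in range(200):                                      # 200 virtual data sets
    y_sim = logistic(t, *p_hat) + rng.normal(0, sigma, t.size)
    draws.append(curve_fit(logistic, t, y_sim, p0=p_hat)[0])
draws = np.array(draws)

lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
for name, est, l, h in zip(["K", "r", "t_mid"], p_hat, lo, hi):
    print(f"{name}: {est:.3f}  95% CI [{l:.3f}, {h:.3f}]")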
Multiparameter Estimation in Networked Quantum Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.
2018-02-21
We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.
1988-01-01
A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
NASA Astrophysics Data System (ADS)
Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong
2014-03-01
A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome dynamic stiffening at high frequencies. A lumped parameter model of the MR engine mount in a single-degree-of-freedom system is further developed, based on the bond graph method, to accurately predict the performance of the MR engine mount. An optimization model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this model, the lumped parameters are treated as design variables, while the maximum force transmissibility and the corresponding frequency in the low frequency range, as well as the individual lumped parameters, are imposed as constraints. A multiple-interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. The synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated. A set of real design parameters is thus obtained from the internal relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved non-dominated sorting genetic algorithm (NSGA-II) is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the frequency ranges addressed.
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods depend on several input parameters, such as the embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physical Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of the selection of appropriate input parameters.
Choi, Jungyill; Harvey, Judson W.; Conklin, Martha H.
2000-01-01
The fate of contaminants in streams and rivers is affected by exchange and biogeochemical transformation in slowly moving or stagnant flow zones that interact with rapid flow in the main channel. In a typical stream, there are multiple types of slowly moving flow zones in which exchange and transformation occur, such as stagnant or recirculating surface water as well as subsurface hyporheic zones. However, most investigators use transport models with just a single storage zone in their modeling studies, which assumes that the effects of multiple storage zones can be lumped together. Our study addressed the following question: Can a single‐storage zone model reliably characterize the effects of physical retention and biogeochemical reactions in multiple storage zones? We extended an existing stream transport model with a single storage zone to include a second storage zone. With the extended model we generated 500 data sets representing transport of nonreactive and reactive solutes in stream systems that have two different types of storage zones with variable hydrologic conditions. The one storage zone model was tested by optimizing the lumped storage parameters to achieve a best fit for each of the generated data sets. Multiple storage processes were categorized as possessing I, additive; II, competitive; or III, dominant storage zone characteristics. The classification was based on the goodness of fit of generated data sets, the degree of similarity in mean retention time of the two storage zones, and the relative distributions of exchange flux and storage capacity between the two storage zones. For most cases (>90%) the one storage zone model described either the effect of the sum of multiple storage processes (category I) or the dominant storage process (category III). Failure of the one storage zone model occurred mainly for category II, that is, when one of the storage zones had a much longer mean retention time (ts ratio > 5.0) and when the dominance of storage capacity and exchange flux occurred in different storage zones. We also used the one storage zone model to estimate a “single” lumped rate constant representing the net removal of a solute by biogeochemical reactions in multiple storage zones. For most cases the lumped rate constant that was optimized by one storage zone modeling estimated the flux‐weighted rate constant for multiple storage zones. Our results explain how the relative hydrologic properties of multiple storage zones (retention time, storage capacity, exchange flux, and biogeochemical reaction rate constant) affect the reliability of lumped parameters determined by a one storage zone transport model. We conclude that stream transport models with a single storage compartment will in most cases reliably characterize the dominant physical processes of solute retention and biogeochemical reactions in streams with multiple storage zones.
A mixed-effects regression model for longitudinal multivariate ordinal data.
Liu, Li C; Hedeker, Donald
2006-03-01
A mixed-effects item response theory model that allows for three-level multivariate ordinal outcomes and accommodates multiple random subject effects is proposed for analysis of multivariate ordinal outcomes in longitudinal studies. This model allows for the estimation of different item factor loadings (item discrimination parameters) for the multiple outcomes. The covariates in the model do not have to follow the proportional odds assumption and can be at any level. Assuming either a probit or logistic response function, maximum marginal likelihood estimation is proposed utilizing multidimensional Gauss-Hermite quadrature for integration of the random effects. An iterative Fisher scoring solution, which provides standard errors for all model parameters, is used. An analysis of a longitudinal substance use data set, where four items of substance use behavior (cigarette use, alcohol use, marijuana use, and getting drunk or high) are repeatedly measured over time, is used to illustrate application of the proposed model.
Electron-impact Multiple-ionization Cross Sections for Atoms and Ions of Helium through Zinc
NASA Astrophysics Data System (ADS)
Hahn, M.; Müller, A.; Savin, D. W.
2017-12-01
We compiled a set of electron-impact multiple-ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature or if there is a non-thermal electron energy distribution, such as a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to the available experimental EIMI cross-section data. Based on this comparison, we interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for 3466 EIMI cross sections, along with the associated Maxwellian plasma rate coefficients. We also highlight some outstanding issues that remain to be resolved.
Passing in Command Line Arguments and Parallel Cluster/Multicore Batching in R with batch.
Hoffmann, Thomas J
2011-03-01
It is often useful to rerun a command line R script with some slight change in the parameters used to run it - a new set of parameters for a simulation, a different dataset to process, etc. The R package batch provides a means to pass multiple command line options, including vectors of values in the usual R format, easily into R. The same script can be set up to run in parallel via different command line arguments. The R package batch also simplifies this parallel batching by allowing one to use R and an R-like syntax for arguments to spread a script across a cluster or a local multicore/multiprocessor computer, with automated syntax for several popular cluster types. Finally, it provides a means to aggregate the results of multiple processes run on a cluster.
NASA Astrophysics Data System (ADS)
Schutt, D.; Breidt, J.; Corbalan Castejon, A.; Witt, D. R.
2017-12-01
Shear wave splitting is a commonly used and powerful method for constraining such phenomena as lithospheric strain history or asthenospheric flow. However, a number of challenges with the statistics of shear wave splitting have been noted. This creates difficulties in assessing whether two separate measurements are statistically similar or indicate real differences in anisotropic structure, as well as in creating proper station-averaged sets of parameters for more complex situations such as multiple or dipping layers of anisotropy. We present a new method for calculating the most likely splitting parameters using the Menke and Levin [2003] method of cross-convolution. The Menke and Levin method is used because it can more readily be applied to a wider range of anisotropic scenarios than the commonly used Silver and Chan [1991] technique. In our approach, we derive a formula for the spectral density of a function of the microseismic noise and the impulse response of the correct anisotropic model that holds for the true anisotropic model parameters. This is compared to the spectral density of the observed signal convolved with the impulse response for an estimated set of anisotropic parameters. The most likely parameters are found when the former and latter spectral densities are the same. By using the Whittle likelihood to compare the two spectral densities, a likelihood grid for all possible anisotropic parameter values is generated. Using bootstrapping, the uncertainty and covariance between the various anisotropic parameters can be evaluated. We show that this works for a single layer of anisotropy and a vertically incident ray, and discuss its usefulness for more complex cases. The method shows great promise for calculating multiple-layer anisotropy parameters with a proper assessment of uncertainty. References: Menke, W., and Levin, V. 2003. The cross-convolution method for interpreting SKS splitting observations, with application to one and two-layer anisotropic earth models. Geophysical Journal International, 154: 379-392. doi:10.1046/j.1365-246X.2003.01937.x. Silver, P.G., and Chan, W.W. 1991. Shear Wave Splitting and Subcontinental Mantle Deformation. Journal of Geophysical Research, 96: 429-454. doi:10.1029/91JB00899.
Multiplicative Multitask Feature Learning
Wang, Xin; Bi, Jinbo; Yu, Shipeng; Sun, Jiangwen; Song, Minghu
2016-01-01
We investigate a general framework of multiplicative multitask feature learning which decomposes individual task’s model parameters into a multiplication of two components. One of the components is used across all tasks and the other component is task-specific. Several previous methods can be proved to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm is developed suitable for solving the entire family of formulations with rigorous convergence analysis. Simulation studies have identified the statistical properties of data that would be in favor of the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations by comparing with the state of the art, which provides instructive insights into the feature learning problem with multiple tasks. PMID:28428735
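The multiplicative decomposition described above lends itself to a simple alternating-minimization sketch. The toy below uses a squared loss with ridge penalties on both components, so that fixing the shared component reduces each task to a ridge regression on rescaled features, and vice versa; the paper's more general regularizers and its blockwise coordinate descent algorithm are not reproduced here, and the data, penalty weights and iteration count are assumptions.

```python
# Minimal sketch of the multiplicative decomposition w_t = c * v_t (elementwise),
# fit by alternating ridge regressions. The paper's regularizers and blockwise
# coordinate descent are more general; data and penalties here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, tasks = 20, 4
X = [rng.normal(size=(50, d)) for _ in range(tasks)]
w_true = [rng.normal(size=d) * (rng.random(d) < 0.3) for _ in range(tasks)]
y = [X[t] @ w_true[t] + 0.1 * rng.normal(size=50) for t in range(tasks)]

lam_c, lam_v = 1.0, 1.0
c = np.ones(d)                       # component shared across tasks
v = [np.ones(d) for _ in range(tasks)]   # task-specific components

def ridge(A, b, lam):
    """Closed-form ridge regression solve."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

for _ in range(50):
    # Fix the shared component c, update each task-specific component v_t.
    for t in range(tasks):
        v[t] = ridge(X[t] * c, y[t], lam_v)
    # Fix all v_t, update the shared component c on the stacked problem.
    A = np.vstack([X[t] * v[t] for t in range(tasks)])
    b = np.concatenate(y)
    c = ridge(A, b, lam_c)

w_hat = [c * v[t] for t in range(tasks)]
print("fit error per task:",
      [round(float(np.linalg.norm(X[t] @ w_hat[t] - y[t])), 2) for t in range(tasks)])
```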
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Toledo, Fernando H; Montesinos-López, José C; Singh, Pawan; Juliana, Philomin; Salinas-Ruiz, Josafhat
2017-05-05
When a plant scientist wishes to make genomic-enabled predictions of multiple traits measured in multiple individuals in multiple environments, the most common strategy for performing the analysis is to use a single trait at a time taking into account genotype × environment interaction (G × E), because there is a lack of comprehensive models that simultaneously take into account the correlated counting traits and G × E. For this reason, in this study we propose a multiple-trait and multiple-environment model for count data. The proposed model was developed under the Bayesian paradigm for which we developed a Markov Chain Monte Carlo (MCMC) with noninformative priors. This allows obtaining all required full conditional distributions of the parameters leading to an exact Gibbs sampler for the posterior distribution. Our model was tested with simulated data and a real data set. Results show that the proposed multi-trait, multi-environment model is an attractive alternative for modeling multiple count traits measured in multiple environments. Copyright © 2017 Montesinos-López et al.
Java bioinformatics analysis web services for multiple sequence alignment--JABAWS:MSA.
Troshin, Peter V; Procter, James B; Barton, Geoffrey J
2011-07-15
JABAWS is a web services framework that simplifies the deployment of web services for bioinformatics. JABAWS:MSA provides services for five multiple sequence alignment (MSA) methods (Probcons, T-coffee, Muscle, Mafft and ClustalW), and is the system employed by the Jalview multiple sequence analysis workbench since version 2.6. A fully functional, easy to set up server is provided as a Virtual Appliance (VA), which can be run on most operating systems that support a virtualization environment such as VMware or Oracle VirtualBox. JABAWS is also distributed as a Web Application aRchive (WAR) and can be configured to run on a single computer and/or a cluster managed by Grid Engine, LSF or other queuing systems that support DRMAA. JABAWS:MSA provides clients full access to each application's parameters, allows administrators to specify named parameter preset combinations and execution limits for each application through simple configuration files. The JABAWS command-line client allows integration of JABAWS services into conventional scripts. JABAWS is made freely available under the Apache 2 license and can be obtained from: http://www.compbio.dundee.ac.uk/jabaws.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open source program available for free on GitHub ( https://github.com/fzahari/ParFit ).
Multiplatform Mission Planning and Operations Simulation Environment for Adaptive Remote Sensors
NASA Astrophysics Data System (ADS)
Smith, G.; Ball, C.; O'Brien, A.; Johnson, J. T.
2017-12-01
We report on the design and development of mission simulator libraries to support the emerging field of adaptive remote sensors. We will outline the current state of the art in adaptive sensing, provide analysis of how the current approach to performing observing system simulation experiments (OSSEs) must be changed to enable adaptive sensors for remote sensing, and present an architecture to enable their inclusion in future OSSEs. The growing potential of sensors capable of real-time adaptation of their operational parameters calls for a new class of mission planning and simulation tools. Existing simulation tools used in OSSEs assume a fixed set of sensor parameters in terms of observation geometry, frequencies used, resolution, or observation time, which allows simplifications to be made in the simulation and allows sensor observation errors to be characterized a priori. Adaptive sensors may vary these parameters depending on the details of the scene observed, so that sensor performance is not simple to model without conducting OSSE simulations that include sensor adaptation in response to the varying observational environment. Adaptive sensors are of significance to resource-constrained, small satellite platforms because they enable the management of power and data volumes while providing methods for multiple sensors to collaborate. The new class of OSSEs required to utilize adaptive sensors located on multiple platforms must answer the question: if the physical act of sensing has a cost, how does the system determine whether the science value of a measurement is worth the cost, and how should that cost be shared among the collaborating sensors? Here we propose to answer this question using an architecture structured around three modules: ADAPT, MANAGE and COLLABORATE. The ADAPT module is a set of routines to facilitate modeling of adaptive sensors, the MANAGE module will implement a set of routines to facilitate simulations of sensor resource management when power and data volume are constrained, and the COLLABORATE module will support simulations of coordination among multiple platforms with adaptive sensors. When used together, these modules will form a simulation framework for OSSEs that can enable both the design of adaptive algorithms to support remote sensing and the prediction of sensor performance.
Alghanem, Bandar; Nikitin, Frédéric; Stricker, Thomas; Duchoslav, Eva; Luban, Jeremy; Strambio-De-Castillia, Caterina; Muller, Markus; Lisacek, Frédérique; Varesio, Emmanuel; Hopfgartner, Gérard
2017-05-15
In peptide quantification by liquid chromatography/mass spectrometry (LC/MS), the optimization of multiple reaction monitoring (MRM) parameters is essential for sensitive detection. We have compared different approaches to build MRM assays, based either on flow injection analysis (FIA) of isotopically labelled peptides, or on the knowledge and the prediction of the best settings for MRM transitions and collision energies (CE). In this context, we introduce MRMOptimizer, an open-source software tool that processes spectra and assists the user in selecting transitions in the FIA workflow. MS/MS spectral libraries with CE voltages from 10 to 70 V are automatically acquired in FIA mode for isotopically labelled peptides. Then MRMOptimizer determines the optimal MRM settings for each peptide. To assess the quantitative performance of our approach, 155 peptides, representing 84 proteins, were analysed by LC/MRM-MS and the peak areas were compared between: (A) the MRMOptimizer-based workflow, (B1) the SRMAtlas transitions set used 'as-is'; (B2) the same SRMAtlas set with CE parameters optimized by Skyline. 51% of the three most intense transitions per peptide were shown to be common to both A and B1/B2 methods, and displayed similar sensitivity and peak area distributions. The peak areas obtained with MRMOptimizer for transitions sharing either the precursor ion charge state or the fragment ions with the SRMAtlas set at unique transitions were increased 1.8- to 2.3-fold. The gain in sensitivity using MRMOptimizer for transitions with different precursor ion charge state and fragment ions (8% of the total) reaches an ~11-fold increase. Isotopically labelled peptides can be used to optimize MRM transitions more efficiently in FIA than by searching databases. The MRMOptimizer software is MS independent and enables the post-acquisition selection of MRM parameters. Coefficients of variation for optimal CE values are lower than those obtained with the SRMAtlas approach (B2) and one additional peptide was detected. Copyright © 2017 John Wiley & Sons, Ltd.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
NASA Technical Reports Server (NTRS)
Huba, J. D.; Chen, J.; Anderson, R. R.
1992-01-01
Attention is given to a mechanism to generate a broad spectrum of electrostatic turbulence in the quiet time central plasma sheet (CPS) plasma. It is shown theoretically that multiple-ring ion distributions can generate short-wavelength (less than about 1) electrostatic turbulence with frequencies less than about kVj, where Vj is the velocity of the jth ring. On the basis of a set of parameters from measurements made in the CPS, it is found that electrostatic turbulence can be generated with wavenumbers in the range 0.02 to 1.0, real frequencies in the range 0 to 10, and linear growth rates greater than 0.01, over a broad range of angles relative to the magnetic field (5-90 deg). These theoretical results are compared with wave data from ISEE 1 using an ion distribution function exhibiting multiple-ring structures observed at the same time. The theoretical results in the linear regime are found to be consistent with the wave data.
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.
2015-12-01
Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models that are aimed at understanding the effects of land surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations, and to make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux tower sites. We show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to quantify how knowledge of the parameter values is constrained by the observations.
NASA Astrophysics Data System (ADS)
Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence
2017-06-01
We present a novel method for optimizing parameters of the Ignition and Growth (I&G) reactive flow model for high explosives. The I&G model can yield accurate predictions of experimental observations; however, calibrating the model is a time-consuming task, especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for the HMX-based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between calculated and experimental shock times of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC LLNL-ABS-724898.
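The calibration loop described above can be sketched generically: a global optimizer proposes reactive-flow parameters, a forward simulation returns the shock arrival times at the gauges, and the misfit to the measured times is minimized. The sketch below uses SciPy's differential evolution with a stand-in forward model; the actual study runs ALE3D hydrocode simulations, and the parameter names, bounds and gauge data here are illustrative assumptions.

```python
# Sketch of differential-evolution calibration of reactive-flow parameters against
# measured shock times of arrival at embedded gauges. The forward model is a
# stand-in (the study runs ALE3D); names, bounds and data are illustrative.
import numpy as np
from scipy.optimize import differential_evolution

gauge_depth = np.array([1.0, 2.0, 3.0, 4.0])         # mm (assumed)
toa_measured = np.array([0.21, 0.45, 0.72, 1.02])    # microseconds (assumed)

def simulate_toa(params):
    """Placeholder for a hydrocode run returning time of arrival at each gauge."""
    ignition_rate, growth_rate = params
    speed = 3.0 + 0.5 * np.log1p(growth_rate) - 0.1 * ignition_rate
    return gauge_depth / speed

def misfit(params):
    return float(np.sum((simulate_toa(params) - toa_measured) ** 2))

bounds = [(0.1, 10.0),     # ignition-rate coefficient (assumed range)
          (1.0, 500.0)]    # growth-rate coefficient (assumed range)
result = differential_evolution(misfit, bounds, maxiter=200, tol=1e-8, seed=1)
print("best-fit parameters:", result.x, "misfit:", result.fun)
```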
Optimal Experiment Design for Magnetic Resonance Fingerprinting
Zhao, Bo; Haldar, Justin P.; Setsompop, Kawin; Wald, Lawrence L.
2017-01-01
Magnetic resonance (MR) fingerprinting is an emerging quantitative MR imaging technique that simultaneously acquires multiple tissue parameters in an efficient experiment. In this work, we present an estimation-theoretic framework to evaluate and design MR fingerprinting experiments. More specifically, we derive the Cramér-Rao bound (CRB), a lower bound on the covariance of any unbiased estimator, to characterize parameter estimation for MR fingerprinting. We then formulate an optimal experiment design problem based on the CRB to choose a set of acquisition parameters (e.g., flip angles and/or repetition times) that maximizes the signal-to-noise ratio efficiency of the resulting experiment. The utility of the proposed approach is validated by numerical studies. Representative results demonstrate that the optimized experiments allow for substantial reduction in the length of an MR fingerprinting acquisition, and substantial improvement in parameter estimation performance. PMID:28268369
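The Cramér-Rao bound at the heart of this design approach is easy to sketch for a generic signal model with i.i.d. Gaussian noise: the Fisher information is built from the Jacobian of the signal with respect to the parameters, and its inverse lower-bounds the estimator covariance. The toy two-parameter decay model, time points and noise level below are assumptions; the paper instead uses Bloch-simulated MR fingerprinting signals parameterized by the acquisition schedule.

```python
# Minimal sketch of the Cramér-Rao bound: for a signal model s(theta) observed with
# i.i.d. Gaussian noise of variance sigma^2, J = (1/sigma^2) * S^T S with S the
# Jacobian of s, and CRB = inv(J). The toy decay model and settings are assumptions.
import numpy as np

t = np.linspace(0.01, 3.0, 40)          # acquisition time points (assumed)
sigma = 0.02                            # noise standard deviation (assumed)

def signal(theta):
    amplitude, rate = theta
    return amplitude * np.exp(-rate * t)

def jacobian(theta, eps=1e-6):
    """Central-difference Jacobian of the signal with respect to the parameters."""
    theta = np.asarray(theta, dtype=float)
    cols = []
    for i in range(theta.size):
        dp = np.zeros_like(theta); dp[i] = eps
        cols.append((signal(theta + dp) - signal(theta - dp)) / (2 * eps))
    return np.column_stack(cols)

theta0 = np.array([1.0, 0.8])
S = jacobian(theta0)
fisher = S.T @ S / sigma**2
crb = np.linalg.inv(fisher)             # lower bound on the estimator covariance
print("CRB standard deviations:", np.sqrt(np.diag(crb)))
```

Optimizing the acquisition then amounts to choosing the settings (here, the time points) that shrink this bound, which is the role the CRB plays in the experiment design problem above.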
Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key
Batchelder, William H.
2014-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812
Replicates in high dimensions, with applications to latent variable graphical models.
Tan, Kean Ming; Ning, Yang; Witten, Daniela M; Liu, Han
2016-12-01
In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.
The STAR Data Reporting Guidelines for Clinical High Altitude Research.
Brodmann Maeder, Monika; Brugger, Hermann; Pun, Matiram; Strapazzon, Giacomo; Dal Cappello, Tomas; Maggiorini, Marco; Hackett, Peter; Bärtsch, Peter; Swenson, Erik R; Zafren, Ken
2018-03-01
Brodmann Maeder, Monika, Hermann Brugger, Matiram Pun, Giacomo Strapazzon, Tomas Dal Cappello, Marco Maggiorini, Peter Hackett, Peter Bärtsch, Erik R. Swenson, Ken Zafren (STAR Core Group), and the STAR Delphi Expert Group. The STAR data reporting guidelines for clinical high altitude research. High Alt Med Biol. 19:7-14, 2018. The goal of the STAR (STrengthening Altitude Research) initiative was to produce a uniform set of key elements for research and reporting in clinical high-altitude (HA) medicine. The STAR initiative was inspired by research on treatment of cardiac arrest, in which the establishment of the Utstein Style, a uniform data reporting protocol, substantially contributed to improving data reporting and subsequently the quality of scientific evidence. The STAR core group used the Delphi method, in which a group of experts reaches a consensus over multiple rounds using a formal process. We selected experts in the field of clinical HA medicine based on their scientific credentials and identified an initial set of parameters for evaluation by the experts. Of 51 experts in HA research who were identified initially, 21 experts completed both rounds. The experts identified 42 key parameters in 5 categories (setting, individual factors, acute mountain sickness and HA cerebral edema, HA pulmonary edema, and treatment) that were considered essential for research and reporting in clinical HA research. An additional 47 supplemental parameters were identified that should be reported depending on the nature of the research. The STAR initiative, using the Delphi method, identified a set of key parameters essential for research and reporting in clinical HA medicine.
2011-01-01
In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
Kück, Patrick; Meusemann, Karen; Dambach, Johannes; Thormann, Birthe; von Reumont, Björn M; Wägele, Johann W; Misof, Bernhard
2010-03-31
Methods of alignment masking, which refers to the technique of excluding alignment blocks prior to tree reconstruction, have been successful in improving the signal-to-noise ratio in sequence alignments. However, the lack of formally well defined methods to identify randomness in sequence alignments has prevented a routine application of alignment masking. In this study, we compared the effects on tree reconstructions of the most commonly used profiling method (GBLOCKS), which uses a predefined set of rules in combination with alignment masking, with a new profiling approach (ALISCORE) based on Monte Carlo resampling within a sliding window, using different data sets and alignment methods. While the GBLOCKS approach excludes variable sections above a certain threshold, a choice that is left arbitrary, the ALISCORE algorithm is free of a priori rating of the parameter space and is therefore more objective. ALISCORE was successfully extended to amino acids using a proportional model and empirical substitution matrices to score randomness in multiple sequence alignments. A complex bootstrap resampling leads to an even distribution of scores of randomly similar sequences to assess randomness of the observed sequence similarity. Testing performance on real data, both masking methods, GBLOCKS and ALISCORE, helped to improve tree resolution. The sliding window approach was less sensitive to different alignments of identical data sets and performed equally well on all data sets. Concurrently, ALISCORE is capable of dealing with different substitution patterns and heterogeneous base composition. ALISCORE and the most relaxed GBLOCKS gap parameter setting performed best on all data sets. Correspondingly, Neighbor-Net analyses showed the greatest decrease in conflict. Alignment masking improves the signal-to-noise ratio in multiple sequence alignments prior to phylogenetic reconstruction. Given the robust performance of alignment profiling, alignment masking should routinely be used to improve tree reconstructions. Parametric methods of alignment profiling can be easily extended to more complex likelihood-based models of sequence evolution, which opens the possibility of further improvements.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes undergo tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology relies on a trial-and-error approach and depends heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in the setting of less-than-optimal values. Hence, there arises a need to determine the optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design-of-experiments (DoE) runs are conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis; the results proved to be in agreement with the main experimental findings, and the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
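The S/N ratio used to rank control-parameter settings in a Taguchi analysis of this kind is a one-line formula. The sketch below shows the larger-the-better variant, appropriate when the response (withstanding pressure) should be maximized; the replicate pressure values are invented for illustration, not taken from the study's L-array runs.

```python
# Sketch of the larger-the-better Taguchi signal-to-noise ratio used to rank
# control-parameter settings: S/N = -10 * log10( mean(1 / y_i^2) ).
# The replicate withstanding-pressure values below are invented for illustration.
import numpy as np

def sn_larger_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicate pressures (MPa) for three runs of an orthogonal array.
runs = {
    "run 1": [0.61, 0.58, 0.63],
    "run 2": [0.95, 0.99, 1.01],
    "run 3": [0.80, 0.78, 0.83],
}
for name, y in runs.items():
    print(name, "S/N =", round(sn_larger_is_better(y), 2), "dB")
```

Averaging these S/N values over the runs at each level of a control factor is what identifies the optimal level for that factor.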
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
Prediction of kinase-inhibitor binding affinity using energetic parameters
Usha, Singaravelu; Selvaraj, Samuel
2016-01-01
The combination of physicochemical properties and energetic parameters derived from protein-ligand complexes plays a vital role in determining the biological activity of a molecule. In the present work, protein-ligand interaction energy along with logP values was used to predict the experimental log(IC50) values of 25 different kinase inhibitors using multiple regression, which gave a correlation coefficient of 0.93. The regression equation obtained was tested on 93 kinase-inhibitor complexes and showed an average deviation of 0.92 from the experimental log(IC50) values. The same set of descriptors was used to predict binding affinities for a test set of five individual kinase families, with correlation values > 0.9. We show that the protein-ligand interaction energies and partition coefficient values form the major deterministic factors for the binding affinity of the ligand for its receptor. PMID:28149052
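The regression step itself is a standard two-descriptor linear fit. The sketch below regresses log(IC50) on interaction energy and logP with ordinary least squares; the five data points are invented placeholders, not the study's 25-complex training set, so the fitted coefficients have no chemical meaning.

```python
# Minimal sketch of the regression step: log(IC50) modelled as a linear function of
# protein-ligand interaction energy and logP. The toy values below are invented.
import numpy as np

interaction_energy = np.array([-45.2, -38.7, -52.1, -41.0, -48.3])  # kcal/mol (assumed)
logp = np.array([2.1, 3.4, 1.8, 2.9, 2.5])                          # assumed
log_ic50 = np.array([-7.1, -6.0, -7.9, -6.5, -7.3])                 # assumed

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones_like(logp), interaction_energy, logp])
coef, *_ = np.linalg.lstsq(X, log_ic50, rcond=None)

predicted = X @ coef
r = np.corrcoef(predicted, log_ic50)[0, 1]
print("intercept, energy and logP coefficients:", coef.round(3))
print("correlation between predicted and observed log(IC50):", round(r, 3))
```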
Zolg, Daniel Paul; Wilhelm, Mathias; Yu, Peng; Knaute, Tobias; Zerweck, Johannes; Wenschuh, Holger; Reimer, Ulf; Schnatbaum, Karsten; Kuster, Bernhard
2017-11-01
Beyond specific applications, such as the relative or absolute quantification of peptides in targeted proteomic experiments, synthetic spike-in peptides are not yet systematically used as internal standards in bottom-up proteomics. A number of retention time standards have been reported that enable chromatographic alignment of multiple LC-MS/MS experiments. However, only few peptides are typically included in such sets, limiting the analytical parameters that can be monitored. Here, we describe PROCAL (ProteomeTools Calibration Standard), a set of 40 synthetic peptides that span the entire hydrophobicity range of tryptic digests, enabling not only accurate determination of retention time indices but also monitoring of chromatographic separation performance over time. The fragmentation characteristics of the peptides can also be used to calibrate and compare collision energies between mass spectrometers. The sequences of all selected peptides do not occur in any natural protein, thus eliminating the need for stable isotope labeling. We anticipate that this set of peptides will be useful for multiple purposes in individual laboratories and will also aid the transfer of data acquisition and analysis methods between laboratories, notably the use of spectral libraries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimization of Gas Metal Arc Welding Process Parameters
NASA Astrophysics Data System (ADS)
Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.
2016-09-01
This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steel for multiple quality characteristics (bead width, bead height, weld penetration and heat-affected zone). An L9 orthogonal array was used for the fabrication of the joints. The experiments were conducted according to combinations of voltage (V), current (A) and welding speed (Ws). The results revealed that the welding speed is the most significant process parameter. By analyzing the grey relational grades, the optimal parameters are obtained, and the significant factors are identified using ANOVA. The welding parameters, namely speed, welding current and voltage, have been optimized for AISI 1020 using the GMAW process. To confirm the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter setting. Observations from this method may be useful to automotive sub-assembly, shipbuilding and vessel fabricators and operators for obtaining optimal welding conditions.
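Grey relational analysis converts the multiple responses of each run into a single grade that can be ranked. The sketch below normalizes each response, forms grey relational coefficients with the usual distinguishing coefficient of 0.5, and averages them into a grade per run; the response values and their larger/smaller-is-better directions are illustrative assumptions, not the study's measurements.

```python
# Sketch of the grey relational analysis step: normalize each response, convert to
# grey relational coefficients with distinguishing coefficient zeta = 0.5, and
# average into a per-run grade used for ranking. Values and directions are assumed.
import numpy as np

# Rows = experimental runs, columns = responses (e.g. bead width, penetration).
responses = np.array([[6.2, 2.1],
                      [5.8, 2.6],
                      [6.6, 2.4],
                      [5.5, 2.9]])
larger_is_better = [False, True]   # smaller bead width, deeper penetration (assumed)

def normalize(col, larger):
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if larger else (hi - col) / (hi - lo)

norm = np.column_stack([normalize(responses[:, j], larger_is_better[j])
                        for j in range(responses.shape[1])])

zeta = 0.5
delta = 1.0 - norm                          # deviation from the ideal sequence
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = coeff.mean(axis=1)                  # grey relational grade per run
print("grey relational grades:", grade.round(3), "best run:", int(grade.argmax()) + 1)
```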
VINE: A Variational Inference -Based Bayesian Neural Network Engine
2018-01-01
The networks are trained using the same dataset and hyperparameter settings as discussed (Table 1: performance evaluation of the proposed transfer learning). The underlying operations are multiplication, addition and subtraction; these can be implemented using nested loops in which the iterations of a loop are independent of each other. This introduces an opportunity for optimization, where a loop may be unrolled fully or partially to increase parallelism at the cost of
Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set
NASA Astrophysics Data System (ADS)
Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.
2017-12-01
In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by the P-velocity; however, in elastic media, the properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the impedances PI and SI; consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered-grid finite difference method was applied to simulate an OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate inversion result with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.
NASA Astrophysics Data System (ADS)
Chtourou, Rim; Haugou, Gregory; Leconte, Nicolas; Zouari, Bassem; Chaari, Fahmi; Markiewicz, Eric
2015-09-01
Resistance spot welding (RSW) of multiple sheets of multiple materials is increasingly used in the automotive industry. The mechanical strength of this new generation of spot-welded assemblies has received comparatively little attention. This is true in particular for experiments dedicated to investigating the mechanical strength of spot welds made from several sheets of different grades, and for their macro-modeling in structural computations. Indeed, most published studies are limited to two-sheet assemblies. Therefore, in the first part of this work, an advanced experimental set-up with a reduced mass is proposed to characterize the quasi-static and dynamic mechanical behavior and rupture of spot welds made from several sheets of different grades. The proposed device is based on the Arcan test; the plates' contribution to the global response is thus reduced. Loading modes I/II are therefore combined and well controlled. In the second part, a simplified spot weld connector element (macroscopic modeling) is proposed to describe the nonlinear response and rupture of this new generation of spot-welded assemblies. The weld connector model involves several parameters to be set. The remaining parameters are identified through a reverse-engineering approach using the mechanical responses of the experimental tests presented in the first part of this work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilchrist, Kristin H., E-mail: kgilchrist@rti.org; Lewis, Gregory F.; Gay, Elaine A.
Microelectrode arrays (MEAs) recording extracellular field potentials of human-induced pluripotent stem cell-derived cardiomyocytes (hiPS-CM) provide a rich data set for functional assessment of drug response. The aim of this work is the development of a method for a systematic analysis of arrhythmia using MEAs, with emphasis on the development of six parameters accounting for different types of cardiomyocyte signal irregularities. We describe a software approach to carry out such analysis automatically, including generation of a heat map that enables quick visualization of the arrhythmic liability of compounds. We also implemented signal processing techniques for reliable extraction of the repolarization peak for field potential duration (FPD) measurement, even from recordings with low signal-to-noise ratios. We measured hiPS-CMs on a 48-well MEA system with 5-minute recordings at multiple time points (0.5, 1, 2 and 4 h) after drug exposure. We evaluated concentration responses for seven compounds with a combination of hERG, QT and clinical proarrhythmia properties: Verapamil, Ranolazine, Flecainide, Amiodarone, Ouabain, Cisapride, and Terfenadine. The predictive utility of MEA parameters as surrogates of these clinical effects was examined. The beat rate and FPD results exhibited good correlations with previous MEA studies in stem cell derived cardiomyocytes and clinical data. The six-parameter arrhythmia assessment exhibited excellent predictive agreement with the known arrhythmogenic potential of the tested compounds, and holds promise as a new method to predict arrhythmic liability. - Highlights: • Six parameters describing arrhythmia were defined and measured for known compounds. • Software for efficient parameter extraction from large MEA data sets was developed. • The proposed cellular parameter set is predictive of clinical drug proarrhythmia.
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the parameters measured to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean squared deviation of the results for (i) the time-to-peak measurement for the whole kidney and (ii) the relative function measurement from the true value was 7.7 and 4.5%, respectively. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of audit data suggests a reasonable degree of accuracy in the quantification of renography function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
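The two audited quantities are simple functions of the background-subtracted renogram curves, and a minimal sketch makes the definitions concrete: relative function from the uptake-phase counts of each kidney, and time-to-peak from the curve maximum. The synthetic curves and the 1-2 minute uptake window below are illustrative assumptions, not a prescription of the audit's reference method.

```python
# Minimal sketch of the two audited renogram parameters: relative function from
# uptake-phase counts of each kidney and time-to-peak from the curve maximum.
# Curves and the 1-2 minute uptake window are illustrative assumptions.
import numpy as np

t = np.arange(0, 20, 1 / 3)                       # minutes, 20 s frames (assumed)
left = 900 * t * np.exp(-t / 4.0)                 # synthetic background-subtracted curves
right = 600 * t * np.exp(-t / 3.0)

def time_to_peak(curve, t):
    return t[int(np.argmax(curve))]

def relative_function(left, right, t, window=(1.0, 2.0)):
    """Percent relative function from uptake-phase counts of the two kidneys."""
    mask = (t >= window[0]) & (t <= window[1])
    l, r = left[mask].sum(), right[mask].sum()
    return 100.0 * l / (l + r)

print("left relative function (%):", round(relative_function(left, right, t), 1))
print("time-to-peak left/right (min):", time_to_peak(left, t), time_to_peak(right, t))
```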
Sánchez, Ariel G.; Grieb, Jan Niklas; Salazar-Albornoz, Salvador; ...
2016-09-30
The cosmological information contained in anisotropic galaxy clustering measurements can often be compressed into a small number of parameters whose posterior distribution is well described by a Gaussian. Here, we present a general methodology to combine these estimates into a single set of consensus constraints that encode the total information of the individual measurements, taking into account the full covariance between the different methods. We also illustrate this technique by applying it to combine the results obtained from different clustering analyses, including measurements of the signature of baryon acoustic oscillations and redshift-space distortions, based on a set of mock catalogues of the final SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). Our results show that the region of the parameter space allowed by the consensus constraints is smaller than that of the individual methods, highlighting the importance of performing multiple analyses on galaxy surveys even when the measurements are highly correlated. Our paper is part of a set that analyses the final galaxy clustering data set from BOSS. The methodology presented here is used in Alam et al. to produce the final cosmological constraints from BOSS.
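One standard way to combine correlated Gaussian parameter estimates into consensus values is generalized least squares (the best linear unbiased estimator): stack the per-method estimates, use their full joint covariance, and solve for the common parameters. The sketch below illustrates that combination for two methods and two parameters; it is offered as a generic example of the idea, not as the paper's specific compression scheme, and the estimates and covariance values are invented.

```python
# Sketch of combining correlated Gaussian estimates into consensus constraints via
# generalized least squares (BLUE). The two methods, two parameters, and covariance
# values below are invented placeholders.
import numpy as np

# Stacked estimates of the same two parameters from two analysis methods.
estimates = np.array([1.02, 0.48,      # method A: (p1, p2)
                      0.97, 0.52])     # method B: (p1, p2)
# Full joint covariance, including cross-method correlations (assumed values).
cov = np.array([[0.010, 0.002, 0.005, 0.001],
                [0.002, 0.008, 0.001, 0.004],
                [0.005, 0.001, 0.012, 0.002],
                [0.001, 0.004, 0.002, 0.009]])

# Design matrix mapping the two consensus parameters to the four stacked estimates.
U = np.vstack([np.eye(2), np.eye(2)])
w = np.linalg.inv(cov)
consensus_cov = np.linalg.inv(U.T @ w @ U)
consensus = consensus_cov @ U.T @ w @ estimates
print("consensus parameters:", consensus.round(3))
print("consensus covariance:\n", consensus_cov.round(4))
```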
Computational modeling of cardiovascular response to orthostatic stress
NASA Technical Reports Server (NTRS)
Heldt, Thomas; Shim, Eun B.; Kamm, Roger D.; Mark, Roger G.
2002-01-01
The objective of this study is to develop a model of the cardiovascular system capable of simulating the short-term (≤ 5 min) transient and steady-state hemodynamic responses to head-up tilt and lower body negative pressure. The model consists of a closed-loop lumped-parameter representation of the circulation connected to set-point models of the arterial and cardiopulmonary baroreflexes. Model parameters are largely based on literature values. Model verification was performed by comparing the simulation output under baseline conditions and at different levels of orthostatic stress to sets of population-averaged hemodynamic data reported in the literature. On the basis of experimental evidence, we adjusted some model parameters to simulate experimental data. Orthostatic stress simulations are not statistically different from experimental data (two-sided test of significance with Bonferroni adjustment for multiple comparisons). Transient response characteristics of heart rate to tilt also compare well with reported data. A case study is presented on how the model is intended to be used in the future to investigate the effects of post-spaceflight orthostatic intolerance.
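To make the lumped-parameter idea concrete, the toy below integrates a two-element windkessel, dP/dt = (Q_in(t) - P/R)/C, driven by a pulsatile inflow. This is far simpler than the paper's closed-loop model with arterial and cardiopulmonary baroreflex set-point control; the resistance, compliance, period and inflow waveform are assumed values chosen only to produce plausible arterial pressures.

```python
# Toy illustration of a lumped-parameter circulation element: a two-element
# windkessel, dP/dt = (Q_in(t) - P/R) / C, with an assumed pulsatile inflow.
# Far simpler than the paper's closed-loop baroreflex model; all values assumed.
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0     # peripheral resistance, mmHg*s/ml (assumed)
C = 1.5     # arterial compliance, ml/mmHg (assumed)
T = 0.8     # cardiac period, s (assumed)

def q_in(t):
    """Pulsatile inflow: half-sine ejection during the first third of each beat."""
    phase = t % T
    return 300.0 * np.sin(np.pi * phase / (T / 3)) if phase < T / 3 else 0.0

def windkessel(t, p):
    return [(q_in(t) - p[0] / R) / C]

sol = solve_ivp(windkessel, [0, 10 * T], [80.0], max_step=0.005)
last_beat = sol.y[0][sol.t > 9 * T]
print("pressure range over the last beat (mmHg):",
      round(last_beat.min(), 1), "-", round(last_beat.max(), 1))
```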
Antagonistic and synergistic interactions among predators.
Huxel, Gary R
2007-08-01
The structure and dynamics of food webs are largely dependent upon interactions among consumers and their resources. However, interspecific interactions such as intraguild predation and interference competition can also play a significant role in the stability of communities. The role of antagonistic/synergistic interactions among predators has been largely ignored in food web theory. These mechanisms influence predation rates, which is one of the key factors regulating food web structure and dynamics, so ignoring them can potentially limit understanding of food webs. Using nonlinear models, it is shown that antagonistic/synergistic interactions among predators are critical aspects of multiple-predator food web dynamics. The influence of antagonistic/synergistic interactions on the coexistence of predators depended largely upon the parameter set used and the degree of feeding niche differentiation. In all cases with no effect of antagonism or synergism (a_ij = 1.00), the predators coexisted. Using the stable parameter set, coexistence occurred across the range of antagonism/synergism used. However, using the chaotic parameter set, strong antagonism resulted in the extinction of one or both species, while strong synergism tended toward coexistence. Using the limit cycle parameter set, by contrast, coexistence was strongly dependent on the degree of feeding niche overlap. Additionally, increasing the degree of feeding specialization of the predators on the two prey species increased the amount of parameter space in which coexistence of the two predators occurred. Bifurcation analyses supported the general pattern of increased stability when the predator interaction was synergistic and decreased stability when it was antagonistic. Thus, synergistic interactions should be more common than antagonistic interactions in ecological systems.
Logistic Stick-Breaking Process
Ren, Lu; Du, Lan; Carin, Lawrence; Dunson, David B.
2013-01-01
A logistic stick-breaking process (LSBP) is proposed for non-parametric clustering of general spatially- or temporally-dependent data, imposing the belief that proximate data are more likely to be clustered together. The sticks in the LSBP are realized via multiple logistic regression functions, with shrinkage priors employed to favor contiguous and spatially localized segments. The LSBP is also extended for the simultaneous processing of multiple data sets, yielding a hierarchical logistic stick-breaking process (H-LSBP). The model parameters (atoms) within the H-LSBP are shared across the multiple learning tasks. Efficient variational Bayesian inference is derived, and comparisons are made to related techniques in the literature. Experimental analysis is performed for audio waveforms and images, and it is demonstrated that for segmentation applications the LSBP yields generally homogeneous segments with sharp boundaries. PMID:25258593
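As a rough illustration of the stick-breaking construction, here is a sketch of how logistic regression functions could turn covariates into mixture weights; the covariate form, shrinkage priors and kernels of the actual LSBP/H-LSBP are not reproduced, and the weight matrix W below is an arbitrary placeholder.

```python
import numpy as np

def lsbp_weights(X, W):
    """Mixture weight of segment k at covariate x:
    pi_k(x) = sigmoid(x.w_k) * prod_{j<k} (1 - sigmoid(x.w_j)),
    with the last segment taking whatever stick mass remains."""
    sig = 1.0 / (1.0 + np.exp(-X @ W.T))                               # (n, K-1) break probabilities
    leftover = np.cumprod(np.hstack([np.ones((len(X), 1)), 1.0 - sig]), axis=1)
    return np.hstack([sig, np.ones((len(X), 1))]) * leftover           # rows sum to 1

X = np.random.default_rng(0).normal(size=(5, 3))    # 5 covariate vectors (e.g. pixel locations)
W = np.random.default_rng(1).normal(size=(4, 3))    # K-1 = 4 logistic sticks
print(lsbp_weights(X, W).sum(axis=1))               # each row sums to ~1
```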
Multi- and hyperspectral scene modeling
NASA Astrophysics Data System (ADS)
Borel, Christoph C.; Tuttle, Ronald F.
2011-06-01
This paper shows how to use a public domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
Experimental studies of systematic multiple-energy operation at HIMAC synchrotron
NASA Astrophysics Data System (ADS)
Mizushima, K.; Katagiri, K.; Iwata, Y.; Furukawa, T.; Fujimoto, T.; Sato, S.; Hara, Y.; Shirai, T.; Noda, K.
2014-07-01
Multiple-energy synchrotron operation providing carbon-ion beams with various energies has been used for scanned particle therapy at NIRS. An energy range from 430 to 56 MeV/u and about 200 steps within this range are required to vary the Bragg peak position for effective treatment. The treatment also demands the slow extraction of beam with highly reliable properties, such as spill, position and size, for all energies. We propose an approach to generating multiple-energy operation meeting these requirements within a short time. In this approach, the device settings at most energy steps are determined without manual adjustments by using systematic parameter tuning depending on the beam energy. Experimental verification was carried out at the HIMAC synchrotron, and its results proved that this approach can greatly reduce the adjustment period.
General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...
2015-11-19
Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Rani, Raj
2015-10-01
The study addresses the identification of multiple point sources, emitting the same tracer, from their limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from an initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
Simultaneous Retrieval of Multiple Aerosol Parameters Using a Multi-Angular Approach
NASA Technical Reports Server (NTRS)
Kuo, K. S.; Weger, R. C.; Welch, R. M.
1997-01-01
Atmospheric aerosol particles, both natural and anthropogenic, are important to the earth's radiative balance through their direct and indirect effects. They scatter the incoming solar radiation (direct effect) and modify the shortwave reflective properties of clouds by acting as cloud condensation nuclei (indirect effect). Although it has been suggested that aerosols exert a net cooling influence on climate, this effect has received less attention than the radiative forcing due to clouds and greenhouse gases. In order to understand the role that aerosols play in a changing climate, detailed and accurate observations are a prerequisite. The retrieval of aerosol optical properties by satellite remote sensing has proven to be a difficult task. The difficulty results mainly from the tenuous nature and variable composition of aerosols. To date, with single-angle satellite observations, we can only retrieve reliably against dark backgrounds, such as over oceans and dense vegetation. Even then, assumptions must be made concerning the chemical composition of aerosols. The best hope we have for aerosol retrievals over bright backgrounds are observations from multiple angles, such as those provided by the MISR and POLDER instruments. In this investigation we examine the feasibility of simultaneous retrieval of multiple aerosol optical parameters using reflectances from a typical set of twelve angles observed by the French POLDER instrument. The retrieved aerosol optical parameters consist of asymmetry factor, single scattering albedo, surface albedo, and optical thickness.
Physical characteristics of experienced and junior open-wheel car drivers.
Raschner, Christian; Platzer, Hans-Peter; Patterson, Carson
2013-01-01
Despite the popularity of open-wheel car racing, scientific literature about the physical characteristics of competitive race car drivers is scarce. The purpose of this study was to compare selected fitness parameters of experienced and junior open-wheel race car drivers. The experienced drivers consisted of five Formula One, two GP2 and two Formula 3 drivers, and the nine junior drivers drove in the Formula Master, Koenig, BMW and Renault series. The following fitness parameters were tested: multiple reactions, multiple anticipation, postural stability, isometric upper body strength, isometric leg extension strength, isometric grip strength, cyclic foot speed and jump height. The group differences were calculated using the Mann-Whitney U-test. Because of the multiple testing strategy used, the statistical significance was Bonferroni corrected and set at P < 0.004. Significant differences between the experienced and junior drivers were found only for the jump height parameter (P = 0.002). The experienced drivers tended to perform better in leg strength (P = 0.009), cyclic foot speed (P = 0.024) and grip strength (P = 0.058). None of the other variables differed between the groups. The results suggested that the experienced drivers were significantly more powerful than the junior drivers: they tended to be quicker and stronger (18% to 25%) but without statistical significance. The experienced drivers demonstrated excellent strength and power compared with other high-performance athletes.
Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.
1991-01-01
The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/mT for trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
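For orientation, here is a brute-force sketch of the underlying quadrature, a trapezoidal-rule approximation of the Bromwich integral with step Delta(omega) = pi/(mT); the direct summation below is slow by design and omits the FFT/FHT acceleration and the specific inversion formula derived in the paper, and the parameter values are illustrative.

```python
import numpy as np

def bromwich_trapezoid(F, t, sigma=1.0, T=10.0, m=4, K=20000):
    """Approximate the inverse Laplace transform f(t) of F(s) by a
    trapezoidal sum along the line Re(s) = sigma with step pi/(m*T)."""
    d_omega = np.pi / (m * T)
    omega = d_omega * np.arange(K)
    weights = np.ones(K)
    weights[0] = 0.5                                    # trapezoidal end-point weight
    vals = np.real(F(sigma + 1j * omega) * np.exp(1j * omega * t))
    return (np.exp(sigma * t) / np.pi) * d_omega * np.sum(weights * vals)

# usage: F(s) = 1/(s + 1) has inverse f(t) = exp(-t)
F = lambda s: 1.0 / (s + 1.0)
print(bromwich_trapezoid(F, 1.0), np.exp(-1.0))
# the large K reflects the slow convergence of the direct sum, which is exactly
# what the FFT/FHT formulation is designed to avoid
```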
Hilbert-Carius, P; Hofmann, G O; Lefering, R; Stuttmann, R; Struck, M F
2016-04-01
Trauma-induced coagulopathy (TIC) in multiple trauma patients is a potentially lethal complication. Whether quickly available laboratory parameters using point-of-care (POC) blood gas analysis (BGA) may serve as surrogate parameters for standard coagulation parameters is unknown. The present study evaluated the TraumaRegister DGU® of the German Trauma Society for correlations between POC BGA parameters and standard coagulation parameters. In the setting of 197 trauma centres (172 in Germany), 86,442 patients were analysed between 2005 and 2012. Of these, 40,129 (72% men) with a mean age of 46 ± 21 years underwent further analysis, presenting with direct admission from the scene of the accident to a trauma centre, injury severity score (ISS) ≥ 9, complete data available for the calculation of the revised injury severity classification prognosis, and blood samples with valid haemoglobin (Hb) measurements taken immediately after emergency department (ED) admission. Correlations between standard coagulation parameters and POC BGA parameters (Hb, base excess [BE], lactate) were tested using Pearson's test with a two-tailed significance level of p < 0.05. A subgroup analysis including patients with ISS > 16, ISS > 25, ISS > 16 and shock at ED admission, and patients with massive transfusion was likewise carried out. Correlations were found between Hb and prothrombin time (r = 0.497; p < 0.01), Hb and activated partial thromboplastin time (aPTT; r = -0.414; p < 0.01), and Hb and platelet count (PLT; r = 0.301; p < 0.01). Patients presenting with ISS ≥ 16 and shock (systolic blood pressure < 90 mmHg) at ED admission (n = 4,329) revealed the strongest correlations between Hb and prothrombin time (r = 0.570; p < 0.01), Hb and aPTT (r = -0.457; p < 0.01), and Hb and PLT (r = 0.412; p < 0.01). Significant correlations were also found between BE and prothrombin time (r = -0.365; p < 0.01), and BE and aPTT (r = 0.327, p < 0.01). No relevant correlations were found between lactate and the standard coagulation parameters. The POC BGA parameters Hb and BE of multiple trauma patients correlated with standard coagulation parameters in this large database analysis. These correlations were particularly strong in multiple trauma patients presenting with ISS > 16 and shock at ED admission. This may be relevant for hospitals with delayed availability of coagulation studies and those without viscoelastic POC devices. Future studies may determine whether clinical presentation/BGA-oriented coagulation therapy is an appropriate tool for improving outcomes after major trauma.
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
Quantitative knowledge acquisition for expert systems
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a direct task. A statistical method is presented for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. The systematic development of a Navigation Sensor Management (NSM) Expert System from Kalman filter covariance data is described. The method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. Both are detailed.
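A toy sketch of the two statistical steps named above, run on synthetic data; scikit-learn's entropy-based DecisionTreeClassifier (CART) stands in for ID3 here, and the variable names ("geometry", "failures") and the decision rule are placeholders, not the NSM system's actual parameters.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# ANOVA step: do different levels of a problem parameter give statistically
# different covariance results? (synthetic covariance summaries per level)
levels = [rng.normal(1.0, 0.1, 30), rng.normal(1.4, 0.1, 30), rng.normal(1.5, 0.1, 30)]
print(f_oneway(*levels))

# ID3-style step: extract rules mapping problem parameters to a sensor-suite choice
X = rng.integers(0, 3, size=(200, 2))        # placeholder: geometry class, failure class
y = (X[:, 0] + X[:, 1] >= 3).astype(int)     # placeholder decision to be learned
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["geometry", "failures"]))
```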
Processing of meteorological data with ultrasonic thermoanemometers
NASA Astrophysics Data System (ADS)
Telminov, A. E.; Bogushevich, A. Ya.; Korolkov, V. A.; Botygin, I. A.
2017-11-01
The article describes a software system intended to support scientific research on the atmosphere by processing data gathered by multi-level ultrasonic complexes for automated monitoring of meteorological and turbulent parameters in the ground layer of the atmosphere. The system processes files containing data sets of instantaneous values of temperature, the three orthogonal components of wind speed, humidity and pressure. Processing is carried out in multiple stages. During the first stage, the system executes the researcher's query for meteorological parameters. At the second stage, the system computes a series of standard statistical properties of the meteorological fields, such as means, variance (dispersion), standard deviation, skewness (asymmetry) and kurtosis (excess) coefficients, correlations, etc. The third stage prepares for computing the parameters of atmospheric turbulence. The computation results are displayed to the user and stored on disk.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth's mantle temperature and elemental composition are inferred from seismic travel-time and geodetic data.
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analysis such as sensitivity and uncertainty analysis, which require running these computationally expensive models several times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space, while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called `slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over the one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy, and convergence rate).
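For a flavour of the sampling property described above, here is a compact sliced Latin hypercube construction in which every slice is itself a Latin hypercube and the pooled points form one too; the actual PLHS algorithm, which grows the design progressively and checks pre-defined requirements, differs in details, and the sizes below are arbitrary.

```python
import numpy as np

def sliced_lhs(n_slices, slice_size, dim, seed=None):
    """Return an array of shape (n_slices, slice_size, dim) in [0, 1)^dim:
    each slice is a Latin hypercube on `slice_size` strata, and the pooled
    n_slices*slice_size points form a Latin hypercube on the finer grid."""
    rng = np.random.default_rng(seed)
    n = n_slices * slice_size
    out = np.empty((n_slices, slice_size, dim))
    for d in range(dim):
        cells = np.empty((n_slices, slice_size), dtype=int)
        for j in range(slice_size):
            # fine cells of coarse stratum j, dealt out one per slice
            cells[:, j] = rng.permutation(np.arange(j * n_slices, (j + 1) * n_slices))
        for s in range(n_slices):
            order = rng.permutation(slice_size)          # shuffle within the slice
            jitter = rng.random(slice_size)              # position inside each cell
            out[s, :, d] = (cells[s, order] + jitter) / n
    return out

pts = sliced_lhs(n_slices=4, slice_size=5, dim=2, seed=0)   # 4 slices of 5 points in 2-D
```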
Reliable evaluation of the quantal determinants of synaptic efficacy using Bayesian analysis
Beato, M.
2013-01-01
Communication between neurones in the central nervous system depends on synaptic transmission. The efficacy of synapses is determined by pre- and postsynaptic factors that can be characterized using quantal parameters such as the probability of neurotransmitter release, number of release sites, and quantal size. Existing methods of estimating the quantal parameters based on multiple probability fluctuation analysis (MPFA) are limited by their requirement for long recordings to acquire substantial data sets. We therefore devised an algorithm, termed Bayesian Quantal Analysis (BQA), that can yield accurate estimates of the quantal parameters from data sets of as small a size as 60 observations for each of only 2 conditions of release probability. Computer simulations are used to compare its performance in accuracy with that of MPFA, while varying the number of observations and the simulated range in release probability. We challenge BQA with realistic complexities characteristic of complex synapses, such as increases in the intra- or intersite variances, and heterogeneity in release probabilities. Finally, we validate the method using experimental data obtained from electrophysiological recordings to show that the effect of an antagonist on postsynaptic receptors is correctly characterized by BQA by a specific reduction in the estimates of quantal size. Since BQA routinely yields reliable estimates of the quantal parameters from small data sets, it is ideally suited to identify the locus of synaptic plasticity for experiments in which repeated manipulations of the recording environment are unfeasible. PMID:23076101
Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.
Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A
2014-01-01
Multiple software programs are available for designing and running large scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools that could increase model development time, IT costs and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for the models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
NASA Astrophysics Data System (ADS)
Audebert, M.; Clément, R.; Touze-Foltz, N.; Günther, T.; Moreau, S.; Duquennoi, C.
2014-12-01
Leachate recirculation is a key process in municipal waste landfills functioning as bioreactors. To quantify the water content and to assess the leachate injection system, in-situ methods are required to obtain spatially distributed information, usually electrical resistivity tomography (ERT). This geophysical method is based on the inversion process, which presents two major problems in terms of delimiting the infiltration area. First, it is difficult for ERT users to choose an appropriate inversion parameter set. Indeed, it might not be sufficient to interpret only the optimum model (i.e. the model with the chosen regularisation strength) because it is not necessarily the model which best represents the physical process studied. Second, it is difficult to delineate the infiltration front based on resistivity models because of the smoothness of the inversion results. This paper proposes a new methodology called MICS (multiple inversions and clustering strategy), which allows ERT users to improve the delimitation of the infiltration area in leachate injection monitoring. The MICS methodology is based on (i) a multiple inversion step by varying the inversion parameter values to take a wide range of resistivity models into account and (ii) a clustering strategy to improve the delineation of the infiltration front. In this paper, MICS was assessed on two types of data. First, a numerical assessment allows us to optimise and test MICS for different infiltration area sizes, contrasts and shapes. Second, MICS was applied to a field data set gathered during leachate recirculation on a bioreactor.
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agriculture monitoring and crop production based on remote sensing observations, which are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variation of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, a Gaussian distribution model was used to correct the multiple surface reflectance datasets, based on the physical characteristics, mathematical distribution properties and spatial variations obtained above. The proposed method was verified with two sets of multiple satellite images, obtained in two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences among surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, providing a basis for further multi-source and multi-scale crop growth monitoring and yield prediction, and their corresponding consistency analysis and evaluation.
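A minimal sketch of the moment-matching step that such a Gaussian-based correction can reduce to, on the assumption (our reading of the abstract) that a coarser-scale reflectance set is rescaled so its mean and standard deviation match the fine-scale baseline; the full procedure in the paper also accounts for the spatial variation of these statistics.

```python
import numpy as np

def gaussian_match(target, baseline):
    """Rescale `target` reflectances so their mean and standard deviation
    match those of the fine-scale `baseline` set (Gaussian assumption)."""
    target = np.asarray(target, dtype=float)
    t_mu, t_sd = target.mean(), target.std()
    b_mu, b_sd = np.mean(baseline), np.std(baseline)
    return (target - t_mu) / t_sd * b_sd + b_mu

# toy usage with synthetic reflectance samples at two scales
rng = np.random.default_rng(0)
fine = rng.normal(0.25, 0.04, 5000)        # baseline (small spatial scale)
coarse = rng.normal(0.29, 0.06, 1000)      # biased, smoother coarse-scale product
corrected = gaussian_match(coarse, fine)
print(corrected.mean(), corrected.std())   # now close to the baseline statistics
```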
Local Variability of Parameters for Characterization of the Corneal Subbasal Nerve Plexus.
Winter, Karsten; Scheibe, Patrick; Köhler, Bernd; Allgeier, Stephan; Guthoff, Rudolf F; Stachs, Oliver
2016-01-01
The corneal subbasal nerve plexus (SNP) offers high potential for early diagnosis of diabetic peripheral neuropathy. Changes in subbasal nerve fibers can be assessed in vivo by confocal laser scanning microscopy (CLSM) and quantified using specific parameters. While current study results agree regarding parameter tendency, there are considerable differences in terms of absolute values. The present study set out to identify factors that might account for this high parameter variability. In three healthy subjects, we used a novel method of software-based large-scale reconstruction that provided SNP images of the central cornea, decomposed the image areas into all possible image sections corresponding to the size of a single conventional CLSM image (0.16 mm2), and calculated a set of parameters for each image section. In order to carry out a large number of virtual examinations within the reconstructed image areas, an extensive simulation procedure (10,000 runs per image) was implemented. The three analyzed images ranged in size from 3.75 mm2 to 4.27 mm2. The spatial configuration of the subbasal nerve fiber networks varied greatly across the cornea and thus caused heavily location-dependent results as well as wide value ranges for the parameters assessed. Distributions of SNP parameter values varied greatly between the three images and showed significant differences between all images for every parameter calculated (p < 0.001 in each case). The relatively small size of the conventionally evaluated SNP area is a contributory factor in high SNP parameter variability. Averaging of parameter values based on multiple CLSM frames does not necessarily result in good approximations of the respective reference values of the whole image area. This illustrates the potential for examiner bias when selecting SNP images in the central corneal area.
NASA Astrophysics Data System (ADS)
Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry
2013-04-01
An adequate description of soil hydraulic properties is essential for a good performance of hydrological forecasts. So far, several studies showed that data assimilation could reduce the parameter uncertainty by considering soil moisture observations. However, these observations and also the model forcings were recorded with a specific measurement error. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real world application, the experiment is conducted in a lysimeter environment.
Simms, Laura E.; Engebretson, Mark J.; Pilipenko, Viacheslav; ...
2016-04-07
The daily maximum relativistic electron flux at geostationary orbit can be predicted well with a set of daily averaged predictor variables including previous day's flux, seed electron flux, solar wind velocity and number density, AE index, IMF Bz, Dst, and ULF and VLF wave power. As predictor variables are intercorrelated, we used multiple regression analyses to determine which are the most predictive of flux when other variables are controlled. Empirical models produced from regressions of flux on measured predictors from 1 day previous were reasonably effective at predicting novel observations. Adding previous flux to the parameter set improves the prediction of the peak of the increases but delays its anticipation of an event. Previous day's solar wind number density and velocity, AE index, and ULF wave activity are the most significant explanatory variables; however, the AE index, measuring substorm processes, shows a negative correlation with flux when other parameters are controlled. This may be due to the triggering of electromagnetic ion cyclotron waves by substorms that cause electron precipitation. VLF waves show lower, but significant, influence. The combined effect of ULF and VLF waves shows a synergistic interaction, where each increases the influence of the other on flux enhancement. Correlations between observations and predictions for this 1 day lag model ranged from 0.71 to 0.89 (average: 0.78). Furthermore, a path analysis of correlations between predictors suggests that solar wind and IMF parameters affect flux through intermediate processes such as ring current (Dst), AE, and wave activity.
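As a minimal illustration of the kind of lagged multiple-regression setup described (synthetic, standardized predictors and made-up coefficients; not the study's data or variable list):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# placeholder daily predictors, e.g. solar wind speed, density, AE index, ULF power
X = rng.normal(size=(n, 4))
flux = 0.6 * X[:, 0] - 0.2 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(scale=0.5, size=n)

X_prev = sm.add_constant(X[:-1])           # predictors from the previous day
model = sm.OLS(flux[1:], X_prev).fit()     # regress next-day flux on lagged predictors
print(model.params)                        # partial effects with other variables controlled
print(model.rsquared)
```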
Miyabara, Renata; Berg, Karsten; Kraemer, Jan F; Baltatu, Ovidiu C; Wessel, Niels; Campos, Luciana A
2017-01-01
Objective: The aim of this study was to identify the most sensitive heart rate and blood pressure variability (HRV and BPV) parameters from a given set of well-known methods for the quantification of cardiovascular autonomic function after several autonomic blockades. Methods: Cardiovascular sympathetic and parasympathetic functions were studied in freely moving rats following peripheral muscarinic (methylatropine), β1-adrenergic (metoprolol), muscarinic + β1-adrenergic, α1-adrenergic (prazosin), and ganglionic (hexamethonium) blockades. Time domain, frequency domain and symbolic dynamics measures for each of HRV and BPV were classified through the paired Wilcoxon test for all autonomic drugs separately. In order to select those variables that have a high relevance to, and stable influence on, our target measurements (HRV, BPV), we used Fisher's method to combine the p-values of multiple tests. Results: This analysis led to the following best set of cardiovascular variability parameters: the mean normal beat-to-beat interval/value (HRV/BPV: meanNN), the coefficient of variation (cvNN = standard deviation over meanNN) and the root mean square of successive differences (RMSSD) from the time domain analysis. In frequency domain analysis the very-low-frequency (VLF) component was selected. From symbolic dynamics, Shannon entropy of the word distribution (FWSHANNON) as well as POLVAR3, the non-linear parameter to detect intermittently decreased variability, showed the best ability to discriminate between the different autonomic blockades. Conclusion: Through a complex comparative analysis of HRV and BPV measures altered by a set of autonomic drugs, we identified the most sensitive set of informative cardiovascular variability indexes able to pick up the modifications imposed by the autonomic challenges. These indexes may help to increase our understanding of cardiovascular sympathetic and parasympathetic functions in translational studies of experimental diseases.
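For reference, a small sketch of the winning time-domain indices named above, computed from a synthetic series of normal beat-to-beat intervals; the definitions follow standard HRV conventions, and this is not the study's analysis code.

```python
import numpy as np

def hrv_time_domain(nn_ms):
    """meanNN, cvNN and RMSSD from a series of normal-to-normal intervals in ms."""
    nn = np.asarray(nn_ms, dtype=float)
    mean_nn = nn.mean()
    cv_nn = nn.std(ddof=1) / mean_nn                  # coefficient of variation
    rmssd = np.sqrt(np.mean(np.diff(nn) ** 2))        # root mean square of successive differences
    return {"meanNN": mean_nn, "cvNN": cv_nn, "RMSSD": rmssd}

print(hrv_time_domain([812, 805, 790, 840, 825, 810]))
```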
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghafarian, M.; Ariaei, A., E-mail: ariaei@eng.ui.ac.ir
The free vibration analysis of a multiple rotating nanobeams' system applying the nonlocal Eringen elasticity theory is presented. Multiple nanobeams' systems are of great importance in nano-optomechanical applications. At nanoscale, the nonlocal effects become non-negligible. According to the nonlocal Euler-Bernoulli beam theory, the governing partial differential equations are derived by incorporating the nonlocal scale effects. Assuming a structure of n parallel nanobeams, the vibration of the system is described by a coupled set of n partial differential equations. The method involves a change of variables to uncouple the equations and the differential transform method as an efficient mathematical technique to solve the nonlocal governing differential equations. Then a number of parametric studies are conducted to assess the effect of the nonlocal scaling parameter, rotational speed, boundary conditions, hub radius, and the stiffness coefficients of the elastic interlayer media on the vibration behavior of the coupled rotating multiple-carbon-nanotube-beam system. It is revealed that the bending vibration of the system is significantly influenced by the rotational speed, elastic mediums, and the nonlocal scaling parameters. This model is validated by comparing the results with those available in the literature. The natural frequencies are in a reasonably good agreement with the reported results.
Lin, Ying Ling; Guerguerian, Anne-Marie; Tomasi, Jessica; Laussen, Peter; Trbovich, Patricia
2017-08-14
Intensive care clinicians use several sources of data in order to inform decision-making. We set out to evaluate a new interactive data integration platform called T3™ made available for pediatric intensive care. Three primary functions are supported: tracking of physiologic signals, displaying trajectory, and triggering decisions, by highlighting data or estimating risk of patient instability. We designed a human factors study to identify interface usability issues, to measure ease of use, and to describe interface features that may enable or hinder clinical tasks. Twenty-two participants, consisting of bedside intensive care physicians, nurses, and respiratory therapists, tested the T3™ interface in a simulation laboratory setting. Twenty tasks were performed with a true-to-setting, fully functional, prototype, populated with physiological and therapeutic intervention patient data. Primary data visualization was time series and secondary visualizations were: 1) shading out-of-target values, 2) mini-trends with exaggerated maxima and minima (sparklines), and 3) bar graph of a 16-parameter indicator. Task completion was video recorded and assessed using a use error rating scale. Usability issues were classified in the context of task and type of clinician. A severity rating scale was used to rate potential clinical impact of usability issues. Time series supported tracking a single parameter but partially supported determining patient trajectory using multiple parameters. Visual pattern overload was observed with multiple parameter data streams. Automated data processing using shading and sparklines was often ignored but the 16-parameter data reduction algorithm, displayed as a persistent bar graph, was visually intuitive. However, by selecting or automatically processing data, triggering aids distorted the raw data that clinicians use regularly. Consequently, clinicians could not rely on new data representations because they did not know how they were established or derived. Usability issues, observed through contextual use, provided directions for tangible design improvements of data integration software that may lessen use errors and promote safe use. Data-driven decision making can benefit from iterative interface redesign involving clinician-users in simulated environments. This study is a first step in understanding how software can support clinicians' decision making with integrated continuous monitoring data. Importantly, testing of similar platforms by all the different disciplines who may become clinician users is a fundamental step necessary to understand the impact on clinical outcomes of decision aids.
Kovács, A; Erős, I; Csóka, I
2016-04-01
The aim of our present work was to develop stable water-in-oil-in-water (w/o/w) cosmetic multiple emulsions that are proper for cosmetic use and can also be applied on the skin as pharmaceutical vehicles by means of the Quality by Design (QbD) concept. This product design concept consists of a risk assessment step and also the 'predetermination' of the critical material attributes and process parameters of a stable multiple emulsion system. We hypothesized that the stability of multiple emulsions can be improved through development based on such systematic planning, mapping the critical product parameters, so that their industrial usage can be increased. The risk assessment and the determination of critical physical-chemical stability parameters of w/o/w multiple emulsions to define critical control points were performed by means of quality tools and the leanqbd(™) (QbD Works LLC, Fremont, CA, U.S.A.) software. Critical materials and process parameters: Based on the results of the preformulation experiments, three factors, namely entrapped active agent, preparation methodology and shear rate, were found to be highly critical factors for critical quality attributes (CQAs) and for stability, whereas the nature of the oil was found to be a medium-level risk factor. The results of the risk assessment are the following: (i) droplet structure and size distribution should be evaluated together to be able to predict stability issues, (ii) the presence of entrapped active agents had a great impact on droplet structure, (iii) the viscosity curves represent the structural changes during storage; if the decrease in relative viscosity is >15%, the emulsion disintegrates, and (iv) it is enough to use a shear rate between 34 g and 116 g relative centrifugal force (RCF). CQAs: By risk assessment, we discovered that four factors should be considered high-risk variables compared to the others: droplet size, droplet structure, viscosity and multiple character were found to be highly critical attributes. The preformulation experiment is part of a development plan. On the basis of these results, the control strategy can be defined and a stable multiple emulsion can be ensured that meets the relevant stakeholders' quality expectations. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Electron Impact Multiple Ionization Cross Sections for Solar Physics
NASA Astrophysics Data System (ADS)
Hahn, M.; Savin, D. W.; Mueller, A.
2017-12-01
We have compiled a set of electron-impact multiple ionization (EIMI) cross sections for astrophysically relevant ions. EIMI can have a significant effect on the ionization balance of non-equilibrium plasmas. For example, it can be important if there is a rapid change in the electron temperature, as in solar flares or in nanoflare coronal heating. EIMI is also likely to be significant when the electron energy distribution is non-thermal, such as if the electrons follow a kappa distribution. Cross sections for EIMI are needed in order to account for these processes in plasma modeling and for spectroscopic interpretation. Here, we describe our comparison of proposed semiempirical formulae to the available experimental EIMI cross section data. Based on this comparison, we have interpolated and extrapolated fitting parameters to systems that have not yet been measured. A tabulation of the fit parameters is provided for thousands of EIMI cross sections. We also highlight some outstanding issues that remain to be resolved.
Design and optimization of an energy degrader with a multi-wedge scheme based on Geant4
NASA Astrophysics Data System (ADS)
Liang, Zhikai; Liu, Kaifeng; Qin, Bin; Chen, Wei; Liu, Xu; Li, Dong; Xiong, Yongqian
2018-05-01
A proton therapy facility based on an isochronous superconducting cyclotron is under construction at Huazhong University of Science and Technology (HUST). To meet the clinical requirements, an energy degrader is essential in the beamline to modulate the fixed beam energy extracted from the cyclotron. Because of multiple Coulomb scattering in the degrader, the beam emittance and the energy spread will be considerably increased during the energy degradation process. Therefore, a set of collimators is designed to restrict the increase in beam emittance after the energy degradation. The energy spread will be reduced in the following beamline, which is not discussed in this paper. In this paper, the design considerations of an energy degrader and collimators are introduced, and the properties of the degrader material, degrader structure and the initial beam parameters are discussed using the Geant4 Monte-Carlo toolkit, with the main purpose of improving the overall performance of the degrader by multiple parameter optimization.
On the correlation between phase-locking modes and Vibrational Resonance in a neuronal model
NASA Astrophysics Data System (ADS)
Morfu, S.; Bordet, M.
2018-02-01
We numerically and experimentally investigate the underlying mechanism leading to multiple resonances in the FitzHugh-Nagumo model driven by a bichromatic excitation. Using a FitzHugh-Nagumo circuit, we first analyze the number of spikes triggered by the system in response to a single sinusoidal wave forcing. We build an encoding diagram where different phase-locking modes are identified according to the amplitude and frequency of the sinusoidal excitation. Next, we consider the bichromatic driving, which consists of a low frequency sinusoidal wave perturbed by an additive high frequency signal. Besides the classical Vibrational Resonance phenomenon, we show in real experiments that multiple resonances can be reached by an appropriate setting of the perturbation parameters. We clearly establish a correlation between these resonances and the encoding diagram of the FitzHugh-Nagumo model without the low frequency signal. We show with realistic parameters that sharp transitions of the encoding diagram allow prediction of the main resonances. Our experiments are confirmed by numerical simulations of the system response.
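A brief numerical sketch of the bichromatic forcing setup on a FitzHugh-Nagumo model, with illustrative parameter values rather than those of the circuit experiment; counting upward threshold crossings of the fast variable is one simple way to map out phase-locking modes.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, a, b, eps, A, f, B, F):
    """FitzHugh-Nagumo driven by a low-frequency signal (A, f)
    plus a high-frequency perturbation (B, F)."""
    v, w = y
    drive = A * np.sin(2 * np.pi * f * t) + B * np.sin(2 * np.pi * F * t)
    return [v - v**3 / 3 - w + drive, eps * (v + a - b * w)]

t_eval = np.linspace(0, 200, 20000)
sol = solve_ivp(fhn, (0, 200), [-1.0, -0.5], t_eval=t_eval,
                args=(0.7, 0.8, 0.08, 0.3, 0.05, 0.1, 1.0), rtol=1e-8)
v = sol.y[0]
spikes = np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0))   # upward crossings of v = 1
print(spikes, "spikes over", 0.05 * 200, "low-frequency cycles")
```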
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods, such as maximum likelihood, subspace methods, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique which is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
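A deliberately simple scalar sketch of residual-based tuning running in parallel with the filter; the innovation-matching update used here is a generic heuristic assumed for illustration, not the flight algorithm, and the WIRE star tracker and gyro models are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
true_q, r = 0.05, 0.5                     # true process / measurement noise variances
x_true, zs = 0.0, []
for _ in range(2000):                     # simulate a scalar random walk with noisy measurements
    x_true += rng.normal(scale=np.sqrt(true_q))
    zs.append(x_true + rng.normal(scale=np.sqrt(r)))

x, p, q = 0.0, 1.0, 1e-4                  # filter starts with Q far too small
resid2 = []
for z in zs:
    p_pred = p + q                        # predict (random-walk model)
    nu = z - x                            # innovation (filter residual)
    k = p_pred / (p_pred + r)
    x, p = x + k * nu, (1 - k) * p_pred   # update
    resid2.append(nu * nu)
    if len(resid2) >= 200:                # tuning equations run alongside the filter
        resid2 = resid2[-200:]
        q_hat = max(1e-6, np.mean(resid2) - r - p)   # match predicted innovation variance
        q = 0.95 * q + 0.05 * q_hat                  # smooth the correction
print(q)                                  # drifts roughly toward the true value (~0.05)
```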
NASA Technical Reports Server (NTRS)
Hadass, Z.
1974-01-01
The design procedure for feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance comparing favorably with that of a much more complicated digital-aided tracker. The parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that arise from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.
Snyder, David A; Montelione, Gaetano T
2005-06-01
An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and of multiple "RMSD-stable domains" (locally well-defined regions that are not aligned in global superimpositions) complicates RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition, we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to partition it. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of the precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
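For context, a minimal optimal-superposition RMSD routine (the Kabsch algorithm) of the kind such calculations build on; the paper's actual contributions, the order-parameter-based core-atom selection, the domain partitioning and the kurtosis-based epsilon-value, are not reimplemented here.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (n_atoms, 3) coordinate sets after optimal superposition."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)       # centre both structures
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(np.mean(np.sum((Pc @ R.T - Qc) ** 2, axis=1)))

# toy usage: a rotated, slightly perturbed copy should give a small RMSD
rng = np.random.default_rng(0)
P = rng.normal(size=(30, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
Q = P @ Rz.T + rng.normal(scale=0.05, size=P.shape)
print(kabsch_rmsd(P, Q))   # on the order of the 0.05 noise level
```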
NASA Astrophysics Data System (ADS)
Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.
2016-11-01
Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make, including model and criteria selection, can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
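The variance decomposition itself can be sketched with a simple two-way ANOVA on a grid of simulations (forcing ensemble member x parameter set). The numbers below are synthetic stand-ins for model runs, not the Irish catchment results.

```python
import numpy as np

# Toy two-way ANOVA decomposition of simulated streamflow variance into
# contributions from forcing ensembles (rows), parameter sets (columns) and
# their interaction.  The "simulations" are random numbers; in a real study
# each cell would hold a summary of one model run.
rng = np.random.default_rng(0)
n_forcing, n_params = 20, 30
sims = (rng.normal(0, 1.0, (n_forcing, 1))            # forcing effect
        + rng.normal(0, 0.5, (1, n_params))           # parameter-identification effect
        + rng.normal(0, 0.2, (n_forcing, n_params)))  # interaction / residual

grand = sims.mean()
row_means = sims.mean(axis=1, keepdims=True)
col_means = sims.mean(axis=0, keepdims=True)

ss_forcing = n_params * np.sum((row_means - grand) ** 2)
ss_params = n_forcing * np.sum((col_means - grand) ** 2)
ss_inter = np.sum((sims - row_means - col_means + grand) ** 2)
ss_total = np.sum((sims - grand) ** 2)

for name, ss in [("forcing", ss_forcing), ("parameters", ss_params),
                 ("interaction", ss_inter)]:
    print(f"{name:12s}: {100 * ss / ss_total:5.1f}% of total variance")
```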
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
NASA Astrophysics Data System (ADS)
Hus, Jean-Christophe; Bruschweiler, Rafael
2002-07-01
A general method is presented for the reconstruction of interatomic vector orientations from nuclear magnetic resonance (NMR) spectroscopic data of tensor interactions of rank 2, such as dipolar coupling and chemical shielding anisotropy interactions, in solids and partially aligned liquid-state systems. The method, called PRIMA, is based on a principal component analysis of the covariance matrix of the NMR parameters collected for multiple alignments. The five nonzero eigenvalues and their eigenvectors efficiently allow the approximate reconstruction of the vector orientations of the underlying interactions. The method is demonstrated for an isotropic distribution of sample orientations as well as for finite sets of orientations and internuclear vectors encountered in protein systems.
NASA Astrophysics Data System (ADS)
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey
2015-05-01
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
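A minimal numerical sketch of the mechanism described here (backpropagation adjusting the parameters of each layer from the error signal in the layer above) is given below for a tiny two-layer network on the XOR task; the architecture and learning rate are arbitrary choices for the example.

```python
import numpy as np

# Tiny two-layer network trained with backpropagation on a toy task (XOR),
# illustrating how the gradient signal adjusts each layer's parameters from
# the representation error in the layer above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # backward pass (cross-entropy loss with a sigmoid output gives p - y)
    dlogits = (p - y) / len(X)
    dW2, db2 = h.T @ dlogits, dlogits.sum(0)
    dh = dlogits @ W2.T * (1.0 - h ** 2)      # propagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)
    # gradient step on every layer's parameters
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("predictions:", p.ravel().round(3))
```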
Digital Beamforming Synthetic Aperture Radar Developments at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung Kuk; Du Toit, Cornelis F.; Perrine, Martin; Ranson, K. Jon; Sun, Guoqing; Deshpande, Manohar; Beck, Jaclyn;
2016-01-01
Advanced Digital Beamforming (DBF) Synthetic Aperture Radar (SAR) technology is an area of research and development pursued at the NASA Goddard Space Flight Center (GSFC). Advanced SAR architectures enhance radar performance and open up a new set of capabilities in radar remote sensing. DBSAR-2 and EcoSAR are two state-of-the-art radar systems recently developed and tested. These new instruments employ multiple-input multiple-output (MIMO) architectures characterized by multi-mode operation, software-defined waveform generation, digital beamforming, and configurable radar parameters. The instruments have been developed to support several disciplines in Earth and planetary sciences. This paper describes the radars' advanced features and reports on the latest SAR processing and calibration efforts.
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant-parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple-objective decision-making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
NASA Astrophysics Data System (ADS)
Ji, H.; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S.; Drake, J.; Egedal, J.; Sarff, J.; Wallace, J.
2017-10-01
The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton with first plasmas expected in the fall of 2017, based on the design of the Magnetic Reconnection Experiment (MRX; mrx.pppl.gov) with much extended parameter ranges. Its main objective is to provide an experimental platform for the studies of magnetic reconnection and related phenomena in the multiple X-line regimes directly relevant to space, solar, astrophysical and fusion plasmas. The main diagnostic is an extensive set of magnetic probe arrays, simultaneously covering multiple scales from local electron scales (∼2 mm), to intermediate ion scales (∼10 cm), and global MHD scales (∼1 m). Specific example space physics topics which can be studied on FLARE will be discussed.
Steady axisymmetric vortex flows with swirl and shear
NASA Astrophysics Data System (ADS)
Elcrat, Alan R.; Fornberg, Bengt; Miller, Kenneth G.
A general procedure is presented for computing axisymmetric swirling vortices which are steady with respect to an inviscid flow that is either uniform at infinity or includes shear. We consider cases both with and without a spherical obstacle. Choices of numerical parameters are given which yield vortex rings with swirl, attached vortices with swirl analogous to spherical vortices found by Moffatt, tubes of vorticity extending to infinity and Beltrami flows. When there is a spherical obstacle we have found multiple solutions for each set of parameters. Flows are found by numerically solving the Bragg-Hawthorne equation using a non-Newton-based iterative procedure which is robust in its dependence on an initial guess.
Combining states without scale hierarchies with ordered parton showers
Fischer, Nadine; Prestel, Stefan
2017-09-12
Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.
Welch, Stephen M.; White, Jeffrey W.; Thorp, Kelly R.; Bello, Nora M.
2018-01-01
Ecophysiological crop models encode intra-species behaviors using parameters that are presumed to summarize genotypic properties of individual lines or cultivars. These genotype-specific parameters (GSPs) can be interpreted as quantitative traits that can be mapped or otherwise analyzed, as are more conventional traits. The goal of this study was to investigate the estimation of parameters controlling maize anthesis date with the CERES-Maize model, based on 5,266 maize lines from 11 plantings at locations across the eastern United States. High performance computing was used to develop a database of 356 million simulated anthesis dates in response to four CERES-Maize model parameters. Although the resulting estimates showed high predictive value (R^2 = 0.94), three issues presented serious challenges for the use of GSPs as traits. First (expressivity), the model was unable to express the observed data for 168 to 3,339 lines (depending on the combination of site-years), many of which ended up sharing the same parameter value irrespective of genetics. Second, for 2,254 lines, the model reproduced the data, but multiple parameter sets were equally effective (equifinality). Third, parameter values were highly dependent (p < 10^-6919) on the sets of environments used to estimate them (instability), calling into question the assumption that they represent fundamental genetic traits. The issues of expressivity, equifinality and instability must be addressed before the genetic mapping of GSPs becomes a robust means to help solve the genotype-to-phenotype problem in crops. PMID:29672629
NASA Astrophysics Data System (ADS)
Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.
2012-12-01
Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models and their large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, Support Vector Regression (SVR); Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and that the use of multiple satellite observations within this framework is an effective way of improving terrestrial biosphere models.
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed clustering of activities that is very suggestive of a forecasting lifecycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include the identification of a core set of well defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom
2018-01-09
We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
NASA Astrophysics Data System (ADS)
Rodriguez Lucatero, C.; Schaum, A.; Alarcon Ramos, L.; Bernal-Jaquez, R.
2014-07-01
In this study, the dynamics of decisions in complex networks subject to external fields are studied within a Markov process framework using nonlinear dynamical systems theory. A mathematical discrete-time model is derived using a set of basic assumptions regarding the convincement mechanisms associated with two competing opinions. The model is analyzed with respect to the multiplicity of critical points and the stability of extinction states. Sufficient conditions for extinction are derived in terms of the convincement probabilities and the maximum eigenvalues of the associated connectivity matrices. The influences of exogenous (e.g., mass media-based) effects on decision behavior are analyzed qualitatively. The current analysis predicts: (i) the presence of fixed-point multiplicity (with a maximum number of four different fixed points), multi-stability, and sensitivity with respect to the process parameters; and (ii) the bounded but significant impact of exogenous perturbations on the decision behavior. These predictions were verified using a set of numerical simulations based on a scale-free network topology.
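The flavour of the spectral extinction conditions mentioned above can be sketched with a standard SIS-like discrete-time approximation on a random network, where the heuristic threshold compares the convincement rate times the largest eigenvalue of the connectivity matrix with the forgetting rate. The model below is an illustrative stand-in, not the exact two-opinion model of the study, and the rates are invented for the example.

```python
import numpy as np

# Illustrative SIS-like opinion spread on a random undirected network, with the
# usual spectral extinction heuristic: beta * lambda_max(A) < delta  =>  die-out.
rng = np.random.default_rng(0)
n = 200
A = (rng.random((n, n)) < 0.03).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # undirected, no self-loops

lam_max = np.max(np.linalg.eigvalsh(A))
beta, delta = 0.02, 0.30                      # convincement / forgetting rates
print(f"lambda_max = {lam_max:.2f}, beta*lambda_max = {beta * lam_max:.2f}, delta = {delta}")

p = np.full(n, 0.2)                           # initial probability of holding the opinion
for _ in range(300):
    # probability of being convinced by at least one neighbour this step
    q = 1.0 - np.prod(1.0 - beta * A * p, axis=1)
    p = (1.0 - delta) * p + (1.0 - p) * q

print("mean final probability:", p.mean().round(4),
      "(extinction expected)" if beta * lam_max < delta else "(persistence expected)")
```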
Inference in a Synchronization Game with Social Interactions
de Paula, Áureo
2009-01-01
This paper studies inference in a continuous time game where an agent's decision to quit an activity depends on the participation of other players. In equilibrium, similar actions can be explained not only by direct influences but also by correlated factors. Our model can be seen as a simultaneous duration model with multiple decision makers and interdependent durations. We study the problem of determining the existence and uniqueness of equilibrium stopping strategies in this setting. This paper provides results and conditions for the detection of these endogenous effects. First, we show that the presence of such effects is a necessary and sufficient condition for simultaneous exits. This allows us to set up a nonparametric test for the presence of such influences which is robust to multiple equilibria. Second, we provide conditions under which parameters in the game are identified. Finally, we apply the model to data on desertion in the Union Army during the American Civil War and find evidence of endogenous influences. PMID:20046804
Interactive graphical system for small-angle scattering analysis of polydisperse systems
NASA Astrophysics Data System (ADS)
Konarev, P. V.; Volkov, V. V.; Svergun, D. I.
2016-09-01
A program suite for one-dimensional small-angle scattering analysis of polydisperse systems and multiple data sets is presented. The main program, POLYSAS, has a menu-driven graphical user interface calling computational modules from ATSAS package to perform data treatment and analysis. The graphical menu interface allows one to process multiple (time, concentration or temperature-dependent) data sets and interactively change the parameters for the data modelling using sliders. The graphical representation of the data is done via the Winteracter-based program SASPLOT. The package is designed for the analysis of polydisperse systems and mixtures, and permits one to obtain size distributions and evaluate the volume fractions of the components using linear and non-linear fitting algorithms as well as model-independent singular value decomposition. The use of the POLYSAS package is illustrated by the recent examples of its application to study concentration-dependent oligomeric states of proteins and time kinetics of polymer micelles for anticancer drug delivery.
Automating approximate Bayesian computation by local linear regression.
Thornton, Kevin R
2009-07-07
In several biological contexts, parameter inference often relies on computationally intensive techniques. "Approximate Bayesian Computation", or ABC, methods based on summary statistics have become increasingly popular. A particular flavor of ABC based on using a linear regression to approximate the posterior distribution of the parameters, conditional on the summary statistics, is computationally appealing, yet no standalone tool exists to automate the procedure. Here, I describe a program to implement the method. The software package ABCreg implements the local linear-regression approach to ABC. The advantages are: 1. The code is standalone and fully documented. 2. The program will automatically process multiple data sets, and create unique output files for each (which may be processed immediately in R), facilitating the testing of inference procedures on simulated data, or the analysis of multiple data sets. 3. The program implements two different transformation methods for the regression step. 4. Analysis options are controlled on the command line by the user, and the program is designed to output warnings for cases where the regression fails. 5. The program does not depend on any particular simulation machinery (coalescent, forward-time, etc.), and therefore is a general tool for processing the results from any simulation. 6. The code is open-source and modular. Examples of applying the software to empirical data from Drosophila melanogaster, and of testing the procedure on simulated data, are shown. In practice, ABCreg simplifies implementing ABC based on local linear regression.
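The local linear-regression adjustment that ABCreg implements (Beaumont-style ABC) can be sketched in a few lines for a toy problem of estimating a normal mean; the prior, tolerance quantile and summary statistic below are assumptions made for the example, not part of the package.

```python
import numpy as np

# Toy sketch of ABC with the local linear-regression adjustment: estimate the
# mean of a normal distribution using the sample mean as the summary statistic.
rng = np.random.default_rng(0)
n_obs = 50
observed = rng.normal(2.0, 1.0, n_obs)
s_obs = observed.mean()                            # observed summary statistic

# 1) simulate from the prior and record (parameter, summary) pairs
theta = rng.uniform(-5, 5, 100_000)
s_sim = rng.normal(theta, 1.0 / np.sqrt(n_obs))    # sampling dist. of the mean

# 2) rejection step: keep simulations whose summaries are closest to s_obs
dist = np.abs(s_sim - s_obs)
keep = dist <= np.quantile(dist, 0.01)
theta_k, s_k = theta[keep], s_sim[keep]

# 3) local linear regression of theta on the summary, then adjust the accepted
#    parameters to the observed summary value
X = np.column_stack([np.ones(s_k.size), s_k])
coef, *_ = np.linalg.lstsq(X, theta_k, rcond=None)
theta_adj = theta_k + coef[1] * (s_obs - s_k)

print(f"ABC posterior mean ~ {theta_adj.mean():.3f} (observed summary {s_obs:.3f})")
```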
Marschallinger, Robert; Golaszewski, Stefan M; Kunz, Alexander B; Kronbichler, Martin; Ladurner, Gunther; Hofmann, Peter; Trinka, Eugen; McCoy, Mark; Kraus, Jörg
2014-01-01
In multiple sclerosis (MS) the individual disease courses are very heterogeneous among patients, and biomarkers for setting the diagnosis and estimating the prognosis for individual patients would be very helpful. For this purpose, we are developing a multidisciplinary method and workflow for the quantitative, spatial, and spatiotemporal analysis and characterization of MS lesion patterns from MRI with geostatistics. We worked on a small data set involving three synthetic and three real-world MS lesion patterns, covering a wide range of possible MS lesion configurations. After brain normalization, MS lesions were extracted and the resulting binary 3-dimensional models of MS lesion patterns were subject to geostatistical indicator variography in three orthogonal directions. By applying geostatistical indicator variography, we were able to describe the 3-dimensional spatial structure of MS lesion patterns in a standardized manner. By fitting a model function to the empirical variograms, spatial characteristics of the MS lesion patterns could be expressed and quantified by two parameters. An orthogonal plot of these parameters enabled a well-arranged comparison of the involved MS lesion patterns. This method in development is a promising candidate to complement standard image-based statistics by incorporating spatial quantification. The workflow is generic, is not limited to analyzing MS lesion patterns, and can be completely automated for the screening of radiological archives.
NASA Astrophysics Data System (ADS)
Bukhari, Hassan J.
2017-12-01
In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to perturbations in the parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation of the parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other non-linear. The methodology is compared with a prior method that uses multiple Monte Carlo simulation runs, and the comparison shows that the approach presented in this paper gives better performance.
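The third approach (restricting parameter perturbations in the spirit of Tikhonov regularization) can be illustrated by comparing ordinary least squares with a ridge-regularized solve on an ill-conditioned, perturbed problem. The problem setup and regularization weights below are invented for the sketch and are not taken from the paper.

```python
import numpy as np

# Sketch comparing ordinary least squares with a Tikhonov-regularized solution
# when the data matrix is ill-conditioned and perturbed (illustrative only).
rng = np.random.default_rng(0)
n, p = 60, 5
A = rng.normal(size=(n, p))
A[:, 4] = A[:, 3] + 1e-3 * rng.normal(size=n)      # nearly collinear columns
x_true = np.array([1.0, -2.0, 0.5, 3.0, -3.0])
b = A @ x_true + 0.05 * rng.normal(size=n)

def solve(A, b, lam=0.0):
    """Least squares with optional Tikhonov (ridge) regularization."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

for lam in (0.0, 1e-2, 1e-1):
    errs = []
    for _ in range(200):                           # perturb the data matrix
        A_pert = A + 0.01 * rng.normal(size=A.shape)
        errs.append(np.linalg.norm(solve(A_pert, b, lam) - x_true))
    print(f"lambda={lam:5.2f}  mean error of estimate: {np.mean(errs):.2f}")
```

The regularized solutions trade a small bias for a much smaller sensitivity to the perturbations, which is the behaviour the abstract describes.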
Induced hypothermia does not impair coagulation system in a swine multiple trauma model.
Mohr, Juliane; Ruchholtz, Steffen; Hildebrand, Frank; Flohé, Sascha; Frink, Michael; Witte, Ingo; Weuster, Matthias; Fröhlich, Matthias; van Griensven, Martijn; Keibl, Claudia; Mommsen, Philipp
2013-04-01
Accidental hypothermia, acidosis, and coagulopathy represent the lethal triad in severely injured patients. Therapeutic hypothermia, however, is commonly used in transplantation, cardiac and neurosurgical surgery, or after cardiac arrest, and its effects on the coagulation system following multiple trauma need to be elucidated. In a porcine model of multiple trauma including blunt chest injury, liver laceration, and hemorrhagic shock followed by fluid resuscitation, the influence of therapeutic hypothermia on coagulation was evaluated. A total of 40 pigs were randomly assigned to sham (anesthesia only) or trauma groups receiving either hypothermia or normothermia. Each group consisted of 10 pigs. Analyzed parameters were cell count (red blood cells, platelets), pH, prothrombin time (PT), fibrinogen concentration, and analysis with ROTEM and Multiplate. Trauma and consecutive fluid resuscitation resulted in impaired coagulation parameters (cell count, pH, PT, fibrinogen, ROTEM, and platelet function). During hypothermia, coagulation parameters measured at 37°C, such as PT, fibrinogen, thrombelastometry measurements, and platelet function, showed no significant differences between normothermic and hypothermic animals in both trauma groups. Additional thrombelastometry analyses at 34°C during hypothermia showed significant differences for clotting time and clot formation time but not for maximum clot firmness. We were not able to detect macroscopic or petechial bleeding in either trauma group. Based on the results of the present study we suggest that mild hypothermia can be safely induced after stabilization following major trauma. Mild hypothermia affects the coagulation system but does not aggravate trauma-induced coagulopathy in our model. Before hypothermic treatment can be performed in the clinical setting, additional experiments with prolonged and deeper hypothermia are required to exclude detrimental effects.
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is the issue of parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons - are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing for multi-criteria model calibration to be undertaken, and by permitting a greater understanding of the processes operating in a catchment and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was firstly used to derive transit-time distributions and mean residence times of water for each of the catchments to gain an integrated overview of the types of processes that were operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple as opposed to single gauging stations in each catchment, in the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets, with respect to the likely costs that would be incurred in obtaining the data sets themselves.
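The generalised likelihood uncertainty estimation (GLUE) step can be sketched as follows: sample many parameter sets, score each against observed discharge with a likelihood measure such as Nash-Sutcliffe efficiency, and retain the behavioural ones. The one-bucket model and the 0.7 threshold below are toy stand-ins, not CRUM3 or the Eden catchment data.

```python
import numpy as np

# Toy GLUE sketch: sample parameter sets for a trivial one-bucket runoff model,
# score each against "observed" discharge with Nash-Sutcliffe efficiency (NSE),
# and keep the behavioural sets.  Model, data and threshold are stand-ins.
rng = np.random.default_rng(0)
rain = rng.gamma(0.4, 5.0, 365)

def bucket_model(rain, k, smax):
    """Very simple storage model: store rain up to smax, drain at rate k."""
    s, q = 0.0, []
    for r in rain:
        s = min(s + r, smax)
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

q_obs = bucket_model(rain, 0.3, 40.0) + rng.normal(0, 0.3, rain.size)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = np.column_stack([rng.uniform(0.05, 0.9, 2000),    # drainage rate k
                           rng.uniform(5.0, 100.0, 2000)])  # storage capacity smax
scores = np.array([nse(bucket_model(rain, k, smax), q_obs) for k, smax in samples])
behavioural = samples[scores > 0.7]

print(f"{behavioural.shape[0]} behavioural parameter sets out of {len(samples)}")
```

The spread of the behavioural sets is a direct picture of equifinality: clearly different parameter combinations reproduce the discharge record about equally well.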
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve on the time and space requirements of the classical single-population algorithm. Using the underlying random-graph model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate through extensive experiments that SimRA produces the ARG in compact form without compromising accuracy. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at https://github.com/ComputationalGenomics/SimRA. Contact: parida@us.ibm.com. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Siebenmorgen, R.; Voshchinnikov, N. V.; Bagnulo, S.; Cox, N. L. J.; Cami, J.; Peest, C.
2018-03-01
It is well known that the dust properties of the diffuse interstellar medium exhibit variations towards different sight-lines on a large scale. We have investigated the variability of the dust characteristics on a small scale, and from cloud to cloud. We use low-resolution spectro-polarimetric data obtained in the context of the Large Interstellar Polarisation Survey (LIPS) towards 59 sight-lines in the Southern Hemisphere, and we fit these data using a dust model composed of silicate and carbon particles with sizes from the molecular to the sub-micrometre domain. Large (≥6 nm) silicates of prolate shape account for the observed polarisation. For 32 sight-lines we complement our data set with UVES archive high-resolution spectra, which enable us to establish the presence of single or multiple clouds towards individual sight-lines. We find that the majority of these 35 sight-lines intersect two or more clouds, while eight of them are dominated by a single absorbing cloud. We confirm several correlations between extinction and parameters of the Serkowski law with dust parameters, but we also find previously undetected correlations between these parameters that are valid only in single-cloud sight-lines. We find that interstellar polarisation from multiple-cloud sight-lines is smaller than from single-cloud sight-lines, showing that the presence of a second or more clouds depolarises the incoming radiation. We find large variations of the dust characteristics from cloud to cloud. However, when we average a sufficiently large number of clouds in single-cloud or multiple-cloud sight-lines, we always retrieve similar mean dust parameters. The typical dust abundances of the single-cloud cases are [C]/[H] = 92 ppm and [Si]/[H] = 20 ppm.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system that measures their motion states. To this end, in this paper we build a vision system to detect unknown fast-moving objects within a given space and to calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, points which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
Mathematical modeling of tetrahydroimidazole benzodiazepine-1-one derivatives as an anti HIV agent
NASA Astrophysics Data System (ADS)
Ojha, Lokendra Kumar
2017-07-01
The goal of the present work is the study of drug-receptor interaction via QSAR (Quantitative Structure-Activity Relationship) analysis for a set of 89 TIBO (Tetrahydroimidazole Benzodiazepine-1-one) derivatives. The MLR (Multiple Linear Regression) method is used to generate predictive models of quantitative structure-activity relationships between a set of molecular descriptors and biological activity (IC50). The best QSAR model was selected, having a correlation coefficient (r) of 0.9299, a Standard Error of Estimation (SEE) of 0.5022, a Fisher Ratio (F) of 159.822 and a Quality factor (Q) of 1.852. This model is statistically significant and strongly favours substitution of a sulphur atom, captured by IS, the indicator parameter for the -Z position of the TIBO derivatives. Two other parameters, logP (octanol-water partition coefficient) and SAG (Surface Area Grid), also played a vital role in the generation of the best QSAR model. All three descriptors show very good stability towards data variation in leave-one-out (LOO) validation.
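A bare-bones version of such an MLR fit with leave-one-out validation is sketched below on a synthetic descriptor table (logP, SAG and an indicator for sulphur substitution). The data are generated at random for illustration; they are not the TIBO set, and the resulting statistics will not match those quoted above.

```python
import numpy as np

# Sketch of an MLR QSAR fit with leave-one-out (LOO) cross-validation on a
# synthetic descriptor table.  All numbers are illustrative assumptions.
rng = np.random.default_rng(0)
n = 89
logp = rng.normal(3.0, 1.0, n)
sag = rng.normal(450.0, 60.0, n)
i_s = rng.integers(0, 2, n).astype(float)          # indicator for S substitution
pic50 = 0.8 * logp + 0.005 * sag + 1.2 * i_s + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), logp, sag, i_s])

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = fit(X, pic50)
pred_loo = np.array([X[i] @ fit(np.delete(X, i, 0), np.delete(pic50, i))
                     for i in range(n)])

ss_res = np.sum((pic50 - X @ beta) ** 2)
ss_tot = np.sum((pic50 - pic50.mean()) ** 2)
print("r      =", round(float(np.sqrt(1 - ss_res / ss_tot)), 3))
print("q2_LOO =", round(float(1 - np.sum((pic50 - pred_loo) ** 2) / ss_tot), 3))
```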
Functional Multiple-Set Canonical Correlation Analysis
ERIC Educational Resources Information Center
Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.
2012-01-01
We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei
2017-09-11
Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
Lazaris, Charalampos; Kelly, Stephen; Ntziachristos, Panagiotis; Aifantis, Iannis; Tsirigos, Aristotelis
2017-01-05
Chromatin conformation capture techniques have evolved rapidly over the last few years and have provided new insights into genome organization at an unprecedented resolution. Analysis of Hi-C data is complex and computationally intensive, involving multiple tasks and requiring robust quality assessment. This has led to the development of several tools and methods for processing Hi-C data. However, most of the existing tools do not cover all aspects of the analysis and only offer few quality assessment options. Additionally, the availability of a multitude of tools makes scientists wonder how these tools and associated parameters can be optimally used, and how potential discrepancies can be interpreted and resolved. Most importantly, investigators need to be assured that slight changes in parameters and/or methods do not affect the conclusions of their studies. To address these issues (compare, explore and reproduce), we introduce HiC-bench, a configurable computational platform for comprehensive and reproducible analysis of Hi-C sequencing data. HiC-bench performs all common Hi-C analysis tasks, such as alignment, filtering, contact matrix generation and normalization, identification of topological domains, and scoring and annotation of specific interactions, using both published tools and our own. We have also embedded various tasks that perform quality assessment and visualization. HiC-bench is implemented as a data flow platform with an emphasis on analysis reproducibility. Additionally, the user can readily perform parameter exploration and comparison of different tools in a combinatorial manner that takes into account all desired parameter settings in each pipeline task. This unique feature facilitates the design and execution of complex benchmark studies that may involve combinations of multiple tool/parameter choices in each step of the analysis. To demonstrate the usefulness of our platform, we performed a comprehensive benchmark of existing and new TAD callers, exploring different matrix correction methods, parameter settings and sequencing depths. Users can extend our pipeline by adding more tools as they become available. HiC-bench is an easy-to-use and extensible platform for comprehensive analysis of Hi-C datasets. We expect that it will facilitate current analyses and help scientists formulate and test new hypotheses in the field of three-dimensional genome organization.
Mariel, Petr; Hoyos, David; Artabe, Alaitz; Guevara, C Angelo
2018-08-15
Endogeneity is an often neglected issue in empirical applications of discrete choice modelling despite its severe consequences in terms of inconsistent parameter estimation and biased welfare measures. This article analyses the performance of the multiple indicator solution method to deal with endogeneity arising from omitted explanatory variables in discrete choice models for environmental valuation. We also propose and illustrate a factor analysis procedure for the selection of the indicators in practice. Additionally, the performance of this method is compared with the recently proposed hybrid choice modelling framework. In an empirical application we find that the multiple indicator solution method and the hybrid model approach provide similar results in terms of welfare estimates, although the multiple indicator solution method is more parsimonious and notably easier to implement. The empirical results open a path to explore the performance of this method when endogeneity is thought to have a different cause or under a different set of indicators.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tests several simple and complex regression models and validation schemes, produces unified reports, and offers the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxide descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance as well as its adaptability in terms of parameter optimization could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for multiple regression models in R; it is a fully validated procedure with application to QSAR modelling.
Gamma-ray Output Spectra from 239Pu Fission
Ullmann, John
2015-05-25
The gamma-ray multiplicities, individual gamma-ray energy spectra, and total gamma energy spectra following neutron-induced fission of 239Pu were measured using the DANCE detector at Los Alamos. Corrections for detector response were made using a forward-modeling technique based on propagating sets of gamma rays generated from a parameterized model through a GEANT model of the DANCE array and adjusting the parameters for best fit to the measured spectra. The results for the gamma-ray spectrum and multiplicity are in general agreement with previous results, but the measured total gamma-ray energy is about 10% higher. A dependence of the gamma-ray spectrum on the gamma-ray multiplicity was also observed. Finally, global model calculations of the multiplicity and gamma energy distributions are in good agreement with the data, but predict a slightly softer total-energy distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faulds, James E.; Hinz, Nicholas H.; Coolbaugh, Mark F.
We have undertaken an integrated geologic, geochemical, and geophysical study of a broad 240-km-wide, 400-km-long transect stretching from west-central to eastern Nevada in the Great Basin region of the western USA. The main goal of this study is to produce a comprehensive geothermal potential map that incorporates up to 11 parameters and identifies geothermal play fairways that represent potential blind or hidden geothermal systems. Our new geothermal potential map incorporates: 1) heat flow; 2) geochemistry from springs and wells; 3) structural setting; 4) recency of faulting; 5) slip rates on Quaternary faults; 6) regional strain rate; 7) slip and dilation tendency on Quaternary faults; 8) seismologic data; 9) gravity data; 10) magnetotelluric data (where available); and 11) seismic reflection data (primarily from the Carson Sink and Steptoe basins). The transect is respectively anchored on its western and eastern ends by regional 3D modeling of the Carson Sink and Steptoe basins, which will provide more detailed geothermal potential maps of these two promising areas. To date, geological, geochemical, and geophysical data sets have been assembled into an ArcGIS platform and combined into a preliminary predictive geothermal play fairway model using various statistical techniques. The fairway model consists of the following components, each of which are represented in grid-cell format in ArcGIS and combined using specified weights and mathematical operators: 1) structural component of permeability; 2) regional-scale component of permeability; 3) combined permeability, and 4) heat source model. The preliminary model demonstrates that the multiple data sets can be successfully combined into a comprehensive favorability map. An initial evaluation using known geothermal systems as benchmarks to test interpretations indicates that the preliminary modeling has done a good job assigning relative ranks of geothermal potential. However, a major challenge is defining logical relative rankings of each parameter and how best to combine the multiple data sets into the geothermal potential/permeability map. Ongoing feedback and data analysis are in use to revise the grouping and weighting of some parameters in order to develop a more robust, optimized, final model. The final product will incorporate more parameters into a geothermal potential map than any previous effort in the region and may serve as a prototype to develop comprehensive geothermal potential maps for other regions.
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
NASA Astrophysics Data System (ADS)
Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.
2017-12-01
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin Hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin Hypercube approach in single-metric performance. However, it is also shown that the more comprehensive assessment has many merits, allowing for probabilistic model results, multi-objective optimisation, and better tailoring of the calibration to specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
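A stripped-down version of the Latin Hypercube experiment (sample parameter sets with a space-filling design, run the model once per set, and score each run against several error metrics) might look like the sketch below. The four-parameter "model" is a toy stand-in, not GR4J, and the bounds and metrics are assumptions made for the example.

```python
import numpy as np
from scipy.stats import qmc

# Sketch of a Latin Hypercube calibration experiment: sample parameter sets with
# a space-filling design, run a model once per set, and score each run against
# two error metrics (overall NSE and an NSE on log flows for low-flow emphasis).
rng = np.random.default_rng(0)
rain = rng.gamma(0.5, 4.0, 730)

def toy_model(rain, x1, x2, x3, x4):
    """Simple two-store stand-in with four tunable parameters (not GR4J)."""
    s, r, q = 0.0, 0.0, np.empty(rain.size)
    for t, p in enumerate(rain):
        s = min(s + p, x1)          # production store with capacity x1
        perc = x3 * s               # percolation to the routing store
        s -= perc
        r += perc
        q[t] = x2 * r + x4          # outflow plus a constant baseflow term
        r -= x2 * r
    return q

q_obs = np.maximum(toy_model(rain, 300.0, 0.4, 0.1, 0.2)
                   + rng.normal(0, 0.05, rain.size), 0.01)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

sampler = qmc.LatinHypercube(d=4, seed=1)
lower, upper = [50.0, 0.05, 0.01, 0.0], [600.0, 0.9, 0.5, 1.0]
params = qmc.scale(sampler.random(n=2000), lower, upper)

scores = []
for p in params:
    sim = toy_model(rain, *p)
    scores.append([nse(sim, q_obs), nse(np.log(sim + 0.01), np.log(q_obs + 0.01))])
scores = np.array(scores)

print("best parameter set by NSE:  ", np.round(params[scores[:, 0].argmax()], 3))
print("best parameter set by lnNSE:", np.round(params[scores[:, 1].argmax()], 3))
```

Keeping the full table of parameter sets and scores, rather than a single optimum, is what makes the probabilistic and multi-objective analyses described above possible.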
Genetic algorithms for the application of Activated Sludge Model No. 1.
Kim, S; Lee, H; Kim, J; Kim, C; Ko, J; Woo, H; Kim, S
2002-01-01
The genetic algorithm (GA) has been integrated into the IWA ASM No. 1 to calibrate important stoichiometric and kinetic parameters. The evolutionary features of the GA were used to locate multiple local optima as well as the global optimum. The objective function of the optimization was designed to minimize the difference between estimated and measured effluent concentrations at the activated sludge system. Both steady-state and dynamic data from the simulation benchmark were used for calibration with the denitrification layout. Depending upon the confidence intervals and objective functions, the proposed method provided distributions of the parameter space. Field data were collected and applied to validate the calibration capacity of the GA. Dynamic calibration was suggested to capture periodic variations of inflow concentrations. Also, in order to verify the proposed method in a real wastewater treatment plant, measured data sets for substrate concentrations were obtained from the Haeundae wastewater treatment plant and used to estimate parameters in the dynamic system. The simulation results with calibrated parameters matched the observed effluent COD concentrations well.
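A GA calibration of this kind reduces, in skeleton form, to evolving a population of parameter vectors against a misfit objective. The Python sketch below shows a minimal mutation-only evolutionary loop against a stand-in effluent model; the two parameters, their bounds, and the placeholder model are hypothetical and do not represent ASM No. 1 itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "model": effluent COD as a simple function of two parameters.
# This stands in for an ASM No. 1 simulation, which is far more complex.
def simulate_effluent(params, influent):
    mu_max, yield_coeff = params
    return influent * np.exp(-mu_max) / (1.0 + yield_coeff)

influent = rng.uniform(200, 400, size=24)            # hourly influent COD (mg/L)
measured = simulate_effluent([0.9, 0.6], influent)   # synthetic "observations"

def fitness(params):
    # Negative sum of squared errors: larger is better.
    return -np.sum((simulate_effluent(params, influent) - measured) ** 2)

bounds = np.array([[0.1, 2.0],      # mu_max
                   [0.3, 0.9]])     # yield coefficient

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(50, 2))
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-25:]]                  # selection
    children = parents[rng.integers(0, 25, 25)].copy()
    children += rng.normal(0, 0.05, children.shape)          # mutation
    children = np.clip(children, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("calibrated parameters:", best)
```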
NASA Astrophysics Data System (ADS)
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
Baseline predictors of persistence to first disease-modifying treatment in multiple sclerosis.
Zettl, U K; Schreiber, H; Bauer-Steinhusen, U; Glaser, T; Hechenbichler, K; Hecker, M
2017-08-01
Patients with multiple sclerosis (MS) require lifelong therapy. However, the success of disease-modifying therapies depends on patients' persistence and adherence to treatment schedules. In the setting of a large multicenter observational study, we aimed to assess multiple parameters for their predictive power with respect to discontinuation of therapy. We analyzed 13 parameters to predict discontinuation of interferon beta-1b treatment during a 2-year follow-up period based on data from 395 patients with MS who were treatment-naïve at study onset. Besides clinical characteristics, patient-related psychosocial outcomes were assessed as well. Among patients without clinically relevant fatigue, males showed a higher persistence rate than females (80.3% vs 64.7%). Clinically relevant fatigue scores decreased the persistence rate in men and especially in women (71.4% and 51.2%, respectively). Besides gender and fatigue, univariable and multivariable analyses revealed further factors associated with interferon beta-1b therapy discontinuation, namely lower quality of life, depressiveness, and a higher relapse rate before therapy initiation, while higher education, living without a partner, and higher age improved persistence. Patients with higher grades of fatigue and depressiveness are at higher risk of prematurely discontinuing MS treatment; in particular, women suffering from fatigue have an increased discontinuation rate. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A
2018-06-01
Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that makes it possible to omit the transmit phases. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images, and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom, and in vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and makes it possible to implement any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
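The compression step itself is a standard truncated SVD projection. The Python sketch below illustrates the idea on a random stand-in dictionary, keeping 6 temporal singular vectors as in the fully sampled case described above; the matrix sizes and data are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of SVD-based temporal compression of a fingerprinting
# dictionary; sizes and the rank (6 compressed frames) follow the abstract,
# everything else (random data) is illustrative.
rng = np.random.default_rng(0)
n_timeframes, n_atoms = 1000, 5000
dictionary = rng.standard_normal((n_timeframes, n_atoms))   # stand-in atoms

# Left singular vectors define the temporal subspace.
U, s, Vt = np.linalg.svd(dictionary, full_matrices=False)
rank = 6
compressor = U[:, :rank]                    # (n_timeframes, rank)

compressed_dictionary = compressor.T @ dictionary           # (rank, n_atoms)

# Phase-unwound, coil-combined time frames would be projected the same way
# before matching, reducing both memory use and reconstruction time.
time_series = rng.standard_normal((n_timeframes, 64 * 64))  # voxels as columns
compressed_frames = compressor.T @ time_series
print(compressed_dictionary.shape, compressed_frames.shape)
```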
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs for the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters (a 5-subpopulation mixed-Weibull distribution). Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should be started from several initial guesses of the parameter set.
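To make the EM structure concrete, the Python sketch below fits a two-subpopulation Weibull mixture to complete (uncensored) data: the E-step computes component responsibilities and the M-step re-estimates the mixing weights and each component's shape and scale by numerical maximization of the responsibility-weighted log-likelihood. The censored and postmortem cases treated in the paper require additional survival terms that are omitted here, and all data and starting values are illustrative.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import minimize

# Minimal EM sketch for a two-subpopulation Weibull mixture, uncensored data.
rng = np.random.default_rng(0)
data = np.concatenate([weibull_min.rvs(1.5, scale=100, size=300, random_state=1),
                       weibull_min.rvs(3.0, scale=400, size=200, random_state=2)])

weights = np.array([0.5, 0.5])
shapes = np.array([1.0, 2.0])                         # initial guesses
scales = np.array([np.median(data) / 2, np.median(data) * 2])

for _ in range(50):
    # E-step: responsibilities of each component for each observation.
    dens = np.stack([w * weibull_min.pdf(data, c, scale=s)
                     for w, c, s in zip(weights, shapes, scales)])
    resp = dens / dens.sum(axis=0)

    # M-step: update mixing weights, then each component's (shape, scale) by
    # maximizing the weighted log-likelihood (optimized in log-space to keep
    # both parameters positive).
    weights = resp.mean(axis=1)
    for j in range(2):
        def nll(log_theta, r=resp[j]):
            c, s = np.exp(log_theta)
            return -np.sum(r * weibull_min.logpdf(data, c, scale=s))
        res = minimize(nll, x0=np.log([shapes[j], scales[j]]),
                       method="Nelder-Mead")
        shapes[j], scales[j] = np.exp(res.x)

print("weights:", weights, "shapes:", shapes, "scales:", scales)
```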
Simultaneous Retrieval of Multiple Aerosol Parameters Using a Multi-Angular Approach
NASA Technical Reports Server (NTRS)
Kuo, K.-S.; Weger, R. C.; Welch, R. M.
1997-01-01
Atmospheric aerosol particles, both natural and anthropogenic, are important to the earth's radiative balance through their direct and indirect effects. They scatter the incoming solar radiation (direct effect) and modify the shortwave reflective properties of clouds by acting as cloud condensation nuclei (indirect effect). Although it has been suggested that aerosols exert a net cooling influence on climate, this effect has received less attention than the radiative forcing due to clouds and greenhouse gases. In order to understand the role that aerosols play in a changing climate, detailed and accurate observations are a prerequisite. The retrieval of aerosol optical properties by satellite remote sensing has proven to be a difficult task. The difficulty results mainly from the tenuous nature and variable composition of aerosols. To date, with single-angle satellite observations, aerosol properties can be retrieved reliably only against dark backgrounds, such as over oceans and dense vegetation. Even then, assumptions must be made concerning the chemical composition of aerosols. In this investigation we examine the feasibility of simultaneous retrieval of multiple aerosol optical parameters using reflectances from a typical set of twelve angles observed by the French POLDER instrument. The retrieved aerosol optical parameters consist of the asymmetry factor, single scattering albedo, surface albedo, and optical thickness.
Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field.
Dimmock, Stephen G; Kouwenberg, Roy; Mitchell, Olivia S; Peijnenburg, Kim
2015-12-01
We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model's estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences.
Autonomous Modal Identification of the Space Shuttle Tail Rudder
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; James, George H., III; Zimmerman, David C.
1997-01-01
Autonomous modal identification automates the calculation of natural vibration frequencies, damping, and mode shapes of a structure from experimental data. This technology complements damage detection techniques that use continuous or periodic monitoring of vibration characteristics. The approach shown in the paper incorporates the Eigensystem Realization Algorithm (ERA) as a data analysis engine and an autonomous supervisor to condense multiple estimates of modal parameters using ERA's Consistent-Mode Indicator and correlation of mode shapes. The procedure was applied to free-decay responses of a Space Shuttle tail rudder and successfully identified the seven modes of the structure below 250 Hz. The final modal parameters are a condensed set of results for 87 individual ERA cases requiring approximately five minutes of CPU time on a DEC Alpha computer.
Structural investigation of the Grenville Province by radar and other imaging and nonimaging sensors
NASA Technical Reports Server (NTRS)
Lowman, P. D., Jr.; Blodget, H. W.; Webster, W. J., Jr.; Paia, S.; Singhroy, V. H.; Slaney, V. R.
1984-01-01
The structural investigation of the Canadian Shield by orbital radar and LANDSAT is outlined. The area includes parts of the central metasedimentary belt and the Ontario gneiss belt, and the major structures are well expressed topographically. The primary objective is to apply SIR-B data to the mapping of this key part of the Grenville orogen, specifically ductile fold structures and associated features, and igneous, metamorphic, and sedimentary rock (including glacial and recent sediments). Secondary objectives are to support the Canadian RADARSAT project by evaluating the baseline parameters of a Canadian imaging radar satellite planned for late in the decade. The baseline parameters include optimum incidence and azimuth angles. The experiment is also intended to develop techniques for the use of multiple data sets.
Adali, Tülay; Levin-Schwartz, Yuri; Calhoun, Vince D.
2015-01-01
Fusion of information from multiple sets of data in order to extract a set of features that are most useful and relevant for the given task is inherent to many problems we deal with today. Since, usually, very little is known about the actual interaction among the datasets, it is highly desirable to minimize the underlying assumptions. This has been the main reason for the growing importance of data-driven methods, and in particular of independent component analysis (ICA), as it provides useful decompositions with a simple generative model and using only the assumption of statistical independence. A recent extension of ICA, independent vector analysis (IVA), generalizes ICA to multiple datasets by exploiting the statistical dependence across the datasets, and hence, as we discuss in this paper, provides an attractive solution to fusion of data from multiple datasets along with ICA. In this paper, we focus on two multivariate solutions for multi-modal data fusion that let multiple modalities fully interact for the estimation of underlying features that jointly report on all modalities. One solution is the Joint ICA model that has found wide application in medical imaging, and the second is the Transposed IVA model introduced here as a generalization of an approach based on multi-set canonical correlation analysis. In the discussion, we emphasize the role of diversity in the decompositions achieved by these two models, and present their properties and implementation details to enable the user to make informed decisions on the selection of a model along with its associated parameters. Discussions are supported by simulation results to help highlight the main issues in the implementation of these methods. PMID:26525830
Automatic parameter selection for feature-based multi-sensor image registration
NASA Astrophysics Data System (ADS)
DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan
2006-05-01
Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
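The final selection step of this approach can be illustrated compactly. The Python sketch below assumes that an ROC point (false-positive rate, true-positive rate) has already been computed for each parameter combination against the estimated ground truth, and simply selects the combination whose point lies closest to the ideal corner (0, 1); the parameter names and ROC values are made up, and the upstream Yitzhaky-Peli ground-truth estimation is not reproduced.

```python
import numpy as np

# Toy ROC points (false-positive rate, true-positive rate), one per parameter
# combination, as would come from comparing feature-detected images against
# the estimated ground truth; the values below are illustrative.
parameter_grid = [{"threshold": t, "sigma": s}
                  for t in (0.1, 0.2, 0.3) for s in (1.0, 2.0)]
roc_points = np.array([[0.30, 0.85], [0.20, 0.80], [0.10, 0.70],
                       [0.25, 0.95], [0.15, 0.88], [0.05, 0.60]])

# One simple optimality criterion: the ROC point closest to the ideal (0, 1).
distances = np.hypot(roc_points[:, 0], 1.0 - roc_points[:, 1])
best = int(np.argmin(distances))
print("selected parameters:", parameter_grid[best], "ROC:", roc_points[best])
```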
Query-Adaptive Reciprocal Hash Tables for Nearest Neighbor Search.
Liu, Xianglong; Deng, Cheng; Lang, Bo; Tao, Dacheng; Li, Xuelong
2016-02-01
Recent years have witnessed the success of binary hashing techniques in approximate nearest neighbor search. In practice, multiple hash tables are usually built to cover more desired results in the hit buckets of each table. However, little work has studied a unified approach to constructing multiple informative hash tables using any type of hashing algorithm. Meanwhile, multiple-table search also lacks a generic query-adaptive and fine-grained ranking scheme that can alleviate the binary quantization loss suffered in standard hashing techniques. To solve the above problems, in this paper we first regard table construction as a selection problem over a set of candidate hash functions. With the graph representation of the function set, we propose an efficient solution that sequentially applies the normalized dominant set to find the most informative and independent hash functions for each table. To further reduce the redundancy between tables, we explore reciprocal hash tables in a boosting manner, where the hash function graph is updated with high weights emphasized on the misclassified neighbor pairs of previous hash tables. To refine the ranking of the retrieved buckets within a certain Hamming radius from the query, we propose a query-adaptive bitwise weighting scheme to enable fine-grained bucket ranking in each hash table, exploiting the discriminative power of its hash functions and their complement for nearest neighbor search. Moreover, we integrate this scheme into multiple-table search using a fast, yet reciprocal, table lookup algorithm within the adaptive weighted Hamming radius. In this paper, both the construction method and the query-adaptive search method are general and compatible with different types of hashing algorithms using different feature spaces and/or parameter settings. Our extensive experiments on several large-scale benchmarks demonstrate that the proposed techniques can significantly outperform both naive construction methods and state-of-the-art hashing algorithms.
Naegle, Kristen M; Welsch, Roy E; Yaffe, Michael B; White, Forest M; Lauffenburger, Douglas A
2011-07-01
Advances in proteomic technologies continue to substantially accelerate capability for generating experimental data on protein levels, states, and activities in biological samples. For example, studies on receptor tyrosine kinase signaling networks can now capture the phosphorylation state of hundreds to thousands of proteins across multiple conditions. However, little is known about the function of many of these protein modifications, or the enzymes responsible for modifying them. To address this challenge, we have developed an approach that enhances the power of clustering techniques to infer functional and regulatory meaning of protein states in cell signaling networks. We have created a new computational framework for applying clustering to biological data in order to overcome the typical dependence on specific a priori assumptions and expert knowledge concerning the technical aspects of clustering. Multiple clustering analysis methodology ('MCAM') employs an array of diverse data transformations, distance metrics, set sizes, and clustering algorithms, in a combinatorial fashion, to create a suite of clustering sets. These sets are then evaluated based on their ability to produce biological insights through statistical enrichment of metadata relating to knowledge concerning protein functions, kinase substrates, and sequence motifs. We applied MCAM to a set of dynamic phosphorylation measurements of the ERBB network to explore the relationships between algorithmic parameters and the biological meaning that could be inferred, and report on interesting biological predictions. Further, we applied MCAM to multiple phosphoproteomic datasets for the ERBB network, which allowed us to compare independent and incompletely overlapping measurements of phosphorylation sites in the network. We report specific and global differences in the ERBB network stimulated with different ligands and with changes in HER2 expression. Overall, we offer MCAM as a broadly applicable approach for the analysis of proteomic data which may help increase the current understanding of molecular networks in a variety of biological problems. © 2011 Naegle et al.
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique which is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
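A residual-based adaptation can be sketched for a scalar filter. The Python example below adjusts its measurement-noise estimate from the sample covariance of recent innovations, using the relation E[nu^2] = P + R; this is a generic illustration of residual tuning with assumed noise values, not Jazwinski's specific sequential equations or the WIRE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar random-walk state observed directly. True noise levels are assumed
# for generating the data; the filter starts with a mis-specified R.
true_R, true_Q = 4.0, 0.01
x_true, measurements = 0.0, []
for _ in range(500):
    x_true += rng.normal(0, np.sqrt(true_Q))
    measurements.append(x_true + rng.normal(0, np.sqrt(true_R)))

x_hat, P = 0.0, 1.0
Q, R = 0.01, 1.0                      # R deliberately mis-specified
innovations = []
for z in measurements:
    # Predict (state transition is identity for a random walk).
    P += Q
    # Innovation (measurement residual).
    nu = z - x_hat
    innovations.append(nu)
    # Residual tuning: E[nu^2] = P + R, so R is re-estimated from the sample
    # covariance of recent innovations minus the predicted state covariance.
    window = innovations[-50:]
    R = max(np.mean(np.square(window)) - P, 1e-6)
    # Update with the adapted R.
    S = P + R
    K = P / S
    x_hat += K * nu
    P *= (1.0 - K)

print("adapted R:", round(float(R), 2), "(true R = 4.0)")
```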
Zhang, Wei; Shmuylovich, Leonid; Kovacs, Sandor J
2009-01-01
Using a simple harmonic oscillator model (PDF formalism), every early filling E-wave can be uniquely described by a set of parameters, (x(0), c, and k). Parameter c in the PDF formalism is a damping or relaxation parameter that measures the energy loss during the filling process. Based on Bernoulli's equation and kinematic modeling, we derived a causal correlation between the relaxation parameter c in the PDF formalism and a feature of the pressure contour during filling - the pressure recovery ratio defined by the left ventricular pressure difference between diastasis and minimum pressure, normalized to the pressure difference between a fiducial pressure and minimum pressure [PRR = (P(Diastasis)-P(Min))/(P(Fiducial)-P(Min))]. We analyzed multiple heart beats from one human subject to validate the correlation. Further validation among more patients is warranted. PRR is the invasive causal analogue of the noninvasive E-wave relaxation parameter c. PRR has the potential to be calculated using automated methodology in the catheterization lab in real time.
NASA Astrophysics Data System (ADS)
Sanford, Ward E.; Niel Plummer, L.; Casile, Gerolamo; Busenberg, Ed; Nelms, David L.; Schlosser, Peter
2017-06-01
Dual-domain transport is an alternative conceptual and mathematical paradigm to advection-dispersion for describing the movement of dissolved constituents in groundwater. Here we test the use of a dual-domain algorithm combined with advective pathline tracking to help reconcile environmental tracer concentrations measured in springs within the Shenandoah Valley, USA. The approach also allows for the estimation of the three dual-domain parameters: mobile porosity, immobile porosity, and a domain exchange rate constant. Concentrations of CFC-113, SF6, 3H, and 3He were measured at 28 springs emanating from carbonate rocks. The different tracers give three different mean composite piston-flow ages for all the springs that vary from 5 to 18 years. Here we compare four algorithms that interpret the tracer concentrations in terms of groundwater age: piston flow, old-fraction mixing, advective-flow path modeling, and dual-domain modeling. Whereas the second two algorithms made slight improvements over piston flow at reconciling the disparate piston-flow age estimates, the dual-domain algorithm gave a very marked improvement. Optimal values for the three transport parameters were also obtained, although the immobile porosity value was not well constrained. Parameter correlation and sensitivities were calculated to help quantify the uncertainty. Although some correlation exists between the three parameters being estimated, a watershed simulation of a pollutant breakthrough to a local stream illustrates that the estimated transport parameters can still substantially help to constrain and predict the nature and timing of solute transport. The combined use of multiple environmental tracers with this dual-domain approach could be applicable in a wide variety of fractured-rock settings.
NASA Astrophysics Data System (ADS)
Basu, Sumit; Nayak, Tapan K.; Datta, Kaustuv
2016-06-01
Heavy-ion collisions at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory and the Large Hadron Collider at CERN probe matter at extreme conditions of temperature and energy density. Most of the global properties of the collisions can be extracted from the measurements of charged-particle multiplicity and pseudorapidity (η) distributions. We have shown that the available experimental data on the beam energy and centrality dependence of η distributions in heavy-ion (Au+Au or Pb+Pb) collisions from √s_NN = 7.7 GeV to 2.76 TeV are reasonably well described by the AMPT model, which is used for further exploration. The nature of the η distributions has been described by a double Gaussian function using a set of fit parameters, which exhibit a regular pattern as a function of beam energy. By extrapolating the parameters to a higher energy of √s_NN = 5.02 TeV, we have obtained the charged-particle multiplicity densities, η distributions, and energy densities for various centralities. Incidentally, these results match well with some of the recently published data by the ALICE Collaboration.
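A double-Gaussian parameterization of dN_ch/dη can be fit in a few lines. The Python sketch below uses one common form, the difference of two Gaussians centred at η = 0, and fits it to synthetic data with scipy; the parameter values and this specific functional form are illustrative assumptions rather than the paper's exact fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# One common double-Gaussian parameterization of dN_ch/deta: the difference of
# two Gaussians centred at eta = 0. The synthetic "data" below are illustrative.
def double_gaussian(eta, a1, sigma1, a2, sigma2):
    return (a1 * np.exp(-eta**2 / (2 * sigma1**2))
            - a2 * np.exp(-eta**2 / (2 * sigma2**2)))

eta = np.linspace(-5, 5, 61)
truth = (1800.0, 3.4, 900.0, 1.8)                     # hypothetical parameters
data = double_gaussian(eta, *truth) + np.random.default_rng(0).normal(0, 20, eta.size)

popt, pcov = curve_fit(double_gaussian, eta, data, p0=(1500, 3, 700, 1.5))
print("fitted parameters:", np.round(popt, 2))

# Extrapolating such fit parameters as smooth functions of beam energy is what
# allows the distribution to be predicted at an unmeasured energy.
```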
NASA Astrophysics Data System (ADS)
Tang, J.; Gu, Y. J.; Chen, Q. F.; Li, Z. G.; Zheng, J.; Li, C. J.; Li, J. T.
2018-04-01
Multiple shock reverberation compression experiments are designed and performed to determine the equation of state of neon ranging from the initial dense gas up to the warm dense regime, where the pressure is from about 40 MPa to 120 GPa and the temperature is from about 297 K up to above 20 000 K. The wide-region experimental data are used to evaluate the available theoretical models. It is found that, for neon below 1.1 g/cm^3, within the framework of density functional theory molecular dynamics, a van der Waals correction is meaningful. Under high pressure and temperature, results from the self-consistent fluid variational theory model are sensitive to the potential parameter and could give successful predictions in the whole experimental regime if a set of proper parameters is employed. The new observations on neon under megabar (1 Mbar = 10^11 Pa) pressure and eV temperature (1 eV ≈ 10^4 K) enrich the understanding of the properties of warm dense matter and have potential applications in revealing the formation and evolution of gaseous giants or mega-Earths.
Kong, Steven H; Shore, Joel D
2007-03-01
We study the propagation of light through a medium containing isotropic scattering and absorption centers. With a Monte Carlo simulation serving as the benchmark solution to the radiative transfer problem of light propagating through a turbid slab, we compare the transmission and reflection density computed from the telegrapher's equation, the diffusion equation, and multiple-flux theories such as the Kubelka-Munk and four-flux theories. Results are presented for both normally incident light and diffusely incident light. We find that we can always obtain very good results from the telegrapher's equation provided that two parameters that appear in the solution are set appropriately. We also find an interesting connection between certain solutions of the telegrapher's equation and solutions of the Kubelka-Munk and four-flux theories with a small modification to how the phenomenological parameters in those theories are traditionally related to the optical scattering and absorption coefficients of the slab. Finally, we briefly explore how well the theories can be extended to the case of anisotropic scattering by multiplying the scattering coefficient by a simple correction factor.
NASA Astrophysics Data System (ADS)
Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi
2018-02-01
The conventional arrival pick-up algorithms cannot avoid manual modification of their parameters when simultaneously identifying multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, this study proposes an algorithm for picking up the arrivals of microseismic or acoustic emission events based on deep recurrent neural networks. The arrival identification is performed in two important steps, a training phase and a testing phase. The training process was mathematically modelled by deep recurrent neural networks using a Long Short-Term Memory architecture. During the testing phase, the learned weights were utilized to identify the arrivals in the microseismic/acoustic emission data sets. The data sets were obtained from rock physics acoustic emission experiments. In order to obtain data sets under different SNRs, random noise was added to the raw experimental data sets. The results showed that the proposed method attained an above 80 per cent hit-rate at an SNR of 0 dB, and an approximately 70 per cent hit-rate at an SNR of -5 dB, with an absolute error within 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.
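A per-sample LSTM labeller captures the essence of such a picker. The PyTorch sketch below trains a small LSTM to classify every sample of a synthetic trace as pre- or post-arrival and takes the first post-arrival sample as the pick; the network size, training data, and labelling scheme are illustrative assumptions, not the configuration used in the study.

```python
import torch
import torch.nn as nn

# Minimal sketch: an LSTM labels every sample of a waveform as pre-arrival (0)
# or post-arrival (1); the first predicted "1" serves as the arrival pick.
class ArrivalPicker(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                 # x: (batch, samples, 1)
        out, _ = self.lstm(x)
        return self.head(out)             # (batch, samples, 2) class logits

# Synthetic traces: noise before a random arrival index, noise plus signal after.
torch.manual_seed(0)
n_traces, n_samples = 64, 400
arrivals = torch.randint(100, 300, (n_traces,))
waves = 0.1 * torch.randn(n_traces, n_samples, 1)
labels = torch.zeros(n_traces, n_samples, dtype=torch.long)
for i, a in enumerate(arrivals.tolist()):
    waves[i, a:, 0] += torch.sin(0.3 * torch.arange(n_samples - a))
    labels[i, a:] = 1

model = ArrivalPicker()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(20):
    logits = model(waves)
    loss = loss_fn(logits.reshape(-1, 2), labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The predicted arrival is the first sample classified as post-arrival.
with torch.no_grad():
    pred = model(waves).argmax(dim=-1)
first_pick = (pred == 1).float().argmax(dim=1)
print(first_pick[:5], arrivals[:5])
```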
An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs
Mosaliganti, Kishore R.; Gelas, Arnaud; Megason, Sean G.
2013-01-01
In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradient and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo. PMID:24501592
Sunderland, Matthew; Batterham, Philip; Calear, Alison; Carragher, Natacha; Baillie, Andrew; Slade, Tim
2018-04-10
There is no standardized approach to the measurement of social anxiety. Researchers and clinicians are faced with numerous self-report scales with varying strengths, weaknesses, and psychometric properties. The lack of standardization makes it difficult to compare scores across populations that utilise different scales. Item response theory offers one solution to this problem by equating different scales using an anchor scale to set a standardized metric. This study is the first to equate several scales for social anxiety disorder. Data from two samples (n=3,175 and n=1,052), recruited from the Australian community using online advertisements, were utilised to equate a network of 11 self-report social anxiety scales via a fixed-parameter item calibration method. Comparisons between actual and equated scores for most of the scales indicated a high level of agreement, with mean differences <0.10 (equivalent to a mean difference of less than one point on the standardized metric). This study demonstrates that scores from multiple scales that measure social anxiety can be converted to a common scale. Re-scoring observed scores to a common scale provides opportunities to combine research from multiple studies and ultimately better assess social anxiety in treatment and research settings. Copyright © 2018. Published by Elsevier Inc.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and variations in the other properties are postulated, highlighting the need for improved metrology and awareness.
A Navigation Analysis Tool (NAT) to assess spatial behavior in open-field and structured mazes.
Jarlier, Frédéric; Arleo, Angelo; Petit, Géraldine H; Lefort, Julie M; Fouquet, Céline; Burguière, Eric; Rondi-Reig, Laure
2013-05-15
Spatial navigation calls upon mnemonic capabilities (e.g. remembering the location of a rewarding site) as well as adaptive motor control (e.g. fine tuning of the trajectory according to the ongoing sensory context). To study this complex process by means of behavioral measurements it is necessary to quantify a large set of meaningful parameters on multiple time scales (from milliseconds to several minutes), and to compare them across different paradigms. Moreover, the issue of automating the behavioral analysis is critical to cope with the consequent computational load and the sophistication of the measurements. We developed a general purpose Navigation Analysis Tool (NAT) that provides an integrated architecture consisting of a data management system (implemented in MySQL), a core analysis toolbox (in MATLAB), and a graphical user interface (in JAVA). Its extensive characterization of trajectories over time, from exploratory behavior to goal-oriented navigation with decision points using a wide range of parameters, makes NAT a powerful analysis tool. In particular, NAT supplies a new set of specific measurements assessing performances in multiple intersection mazes and allowing navigation strategies to be discriminated (e.g. in the starmaze). Its user interface enables easy use while its modular organization provides many opportunities of extension and customization. Importantly, the portability of NAT to any type of maze and environment extends its exploitation far beyond the field of spatial navigation. Copyright © 2013 Elsevier B.V. All rights reserved.
Aspiring to Spectral Ignorance in Earth Observation
NASA Astrophysics Data System (ADS)
Oliver, S. A.
2016-12-01
Enabling robust, defensible and integrated decision making in the Era of Big Earth Data requires the fusion of data from multiple and diverse sensor platforms and networks. While the application of standardised global grid systems provides a common spatial analytics framework that facilitates the computationally efficient and statistically valid integration and analysis of these various data sources across multiple scales, there remains the challenge of sensor equivalency, particularly when combining data from different earth observation satellite sensors (e.g. combining Landsat and Sentinel-2 observations). To realise the vision of a sensor-ignorant analytics platform for earth observation, we require automation of spectral matching across the available sensors. Ultimately, the aim is to remove the requirement for the user to possess any sensor knowledge in order to undertake analysis. This paper introduces the concept of spectral equivalence and proposes a methodology through which equivalent bands may be sourced from a set of potential target sensors through the application of equivalence metrics and thresholds. A number of parameters can be used to determine whether a pair of spectra are equivalent for the purposes of analysis. A baseline set of thresholds for these parameters, and a way to apply them systematically to relate spectral bands among numerous different sensors, is proposed. The base unit for comparison in this work is the relative spectral response. From this input, what constitutes equivalence can be specified by the user, based on their own conceptualisation of equivalence.
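One way such an equivalence test might look in practice is sketched below in Python: two bands' relative spectral responses are sampled onto a common wavelength grid and compared on centre wavelength, equivalent width, and normalized overlap against thresholds. The Gaussian RSR shapes, the specific metrics, and the threshold values are illustrative assumptions, not the baseline set proposed in the paper.

```python
import numpy as np

# Sketch of one possible band-equivalence test between two sensors.
wavelength = np.arange(600.0, 720.0, 1.0)            # nm, common 1-nm grid

def gaussian_rsr(centre, fwhm):
    sigma = fwhm / 2.3548
    return np.exp(-0.5 * ((wavelength - centre) / sigma) ** 2)

rsr_a = gaussian_rsr(665.0, 30.0)    # e.g. a red band on sensor A
rsr_b = gaussian_rsr(668.0, 28.0)    # candidate matching band on sensor B

def band_stats(rsr):
    centre = np.sum(wavelength * rsr) / np.sum(rsr)   # RSR-weighted centre
    width = np.sum(rsr) / rsr.max()                   # equivalent width (nm)
    return centre, width

def overlap(r1, r2):
    # Normalized overlap of the two response curves (1.0 = identical shapes).
    return np.sum(np.minimum(r1, r2)) / np.sum(np.maximum(r1, r2))

c_a, w_a = band_stats(rsr_a)
c_b, w_b = band_stats(rsr_b)
equivalent = (abs(c_a - c_b) < 5.0                 # centre shift below 5 nm
              and abs(w_a - w_b) / w_a < 0.2       # widths within 20 per cent
              and overlap(rsr_a, rsr_b) > 0.8)     # strong shape overlap
print("bands considered equivalent:", equivalent)
```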
Gemmell, Philip; Burrage, Kevin; Rodriguez, Blanca; Quinn, T Alexander
2014-01-01
Variability is observed at all levels of cardiac electrophysiology. Yet, the underlying causes and importance of this variability are generally unknown, and difficult to investigate with current experimental techniques. The aim of the present study was to generate populations of computational ventricular action potential models that reproduce experimentally observed intercellular variability of repolarisation (represented by action potential duration) and to identify its potential causes. A systematic exploration of the effects of simultaneously varying the magnitude of six transmembrane current conductances (transient outward, rapid and slow delayed rectifier K+, inward rectifying K+, L-type Ca2+, and Na+/K+ pump currents) in two rabbit-specific ventricular action potential models (Shannon et al. and Mahajan et al.) at multiple cycle lengths (400, 600, 1,000 ms) was performed. This was accomplished with distributed computing software specialised for multi-dimensional parameter sweeps and grid execution. An initial population of 15,625 parameter sets was generated for both models at each cycle length. Action potential durations of these populations were compared to experimentally derived ranges for rabbit ventricular myocytes. 1,352 parameter sets for the Shannon model and 779 parameter sets for the Mahajan model yielded action potential duration within the experimental range, demonstrating that a wide array of ionic conductance values can be used to simulate a physiological rabbit ventricular action potential. Furthermore, by using clutter-based dimension reordering, a technique that allows visualisation of multi-dimensional spaces in two dimensions, the interaction of current conductances and their relative importance to the ventricular action potential at different cycle lengths were revealed. Overall, this work represents an important step towards a better understanding of the role that variability in current conductances may play in experimentally observed intercellular variability of rabbit ventricular action potential repolarisation.
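The sweep itself is combinatorial: five scaling levels for each of six conductances give 5^6 = 15,625 parameter sets per model and cycle length. The Python sketch below generates such a sweep and filters it against an action potential duration range; the scaling levels, range, and the surrogate APD function are placeholders standing in for the full Shannon or Mahajan model runs.

```python
import itertools
import numpy as np

# Combinatorial sweep: 5 scaling levels for each of 6 conductance factors
# gives 5**6 = 15,625 parameter sets, matching the abstract's population size.
scaling_levels = np.linspace(0.5, 1.5, 5)
conductances = ["g_to", "g_Kr", "g_Ks", "g_K1", "g_CaL", "g_NaK"]
parameter_sets = [dict(zip(conductances, combo))
                  for combo in itertools.product(scaling_levels, repeat=6)]
print(len(parameter_sets))          # 15625

def simulated_apd90(p, baseline_ms=200.0):
    # Placeholder surrogate: repolarising currents shorten APD, the L-type
    # Ca current lengthens it. Stands in for a full ventricular model run.
    repol = p["g_to"] + p["g_Kr"] + p["g_Ks"] + p["g_K1"] + p["g_NaK"]
    return baseline_ms * (1.0 + 0.4 * (p["g_CaL"] - 1.0)) * (5.0 / repol)

apd_range_ms = (150.0, 250.0)       # illustrative experimental APD90 range
accepted = [p for p in parameter_sets
            if apd_range_ms[0] <= simulated_apd90(p) <= apd_range_ms[1]]
print(f"{len(accepted)} parameter sets fall within the experimental range")
```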
Displacement-based back-analysis of the model parameters of the Nuozhadu high earth-rockfill dam.
Wu, Yongkang; Yuan, Huina; Zhang, Bingyin; Zhang, Zongliang; Yu, Yuzhen
2014-01-01
The parameters of the constitutive model, the creep model, and the wetting model of materials of the Nuozhadu high earth-rockfill dam were back-analyzed together based on field monitoring displacement data by employing an intelligent back-analysis method. In this method, an artificial neural network is used as a substitute for time-consuming finite element analysis, and an evolutionary algorithm is applied for both network training and parameter optimization. To avoid simultaneous back-analysis of many parameters, the model parameters of the three main dam materials are decoupled and back-analyzed separately in a particular order. Displacement back-analyses were performed at different stages of the construction period, with and without considering the creep and wetting deformations. Good agreement between the numerical results and the monitoring data was obtained for most observation points, which implies that the back-analysis method and decoupling method are effective for solving complex problems with multiple models and parameters. The comparison of calculation results based on different sets of back-analyzed model parameters indicates the necessity of taking the effects of creep and wetting into consideration in the numerical analyses of high earth-rockfill dams. With the resulting model parameters, the stress and deformation distributions at completion are predicted and analyzed.
A Holistic approach to assess older adults' wellness using e-health technologies.
Thompson, Hilaire J; Demiris, George; Rue, Tessa; Shatil, Evelyn; Wilamowska, Katarzyna; Zaslavsky, Oleg; Reeder, Blaine
2011-12-01
To date, methodologies are lacking that address a holistic assessment of wellness in older adults. Technology applications may provide a platform for such an assessment, but have not been validated. We set out to demonstrate whether e-health applications could support the holistic assessment of wellness in community-dwelling older adults. Twenty-seven residents of an independent retirement community were followed over 8 weeks. Subjects engaged in the use of diverse technologies to assess cognitive performance, physiological and functional variables, as well as psychometric components of wellness. Data were integrated from various e-health sources into one study database. Correlations were assessed between different parameters, and hierarchical cluster analysis was used to explore the validity of the wellness model. We found strong associations across multiple parameters of wellness within the conceptual model, including cognitive, functional, and physical. However, spirituality did not correlate with any other parameter studied, in contrast to prior studies of older adults. Participants expressed overall positive attitudes toward the e-health tools and the holistic approach to the assessment of wellness, without expressing any privacy concerns. Parameters were highly correlated across multiple domains of wellness. Important clusters formed across the cognitive and physiological domains, giving further evidence of the need for an integrated approach to the assessment of wellness. This finding warrants replication in larger and more diverse samples of older adults to standardize and deploy these technologies across population groups.
Kamendi, Harriet; Barthlow, Herbert; Lengel, David; Beaudoin, Marie-Eve; Snow, Debra; Mettetal, Jerome T; Bialecki, Russell A
2016-10-01
While the molecular pathways of baclofen toxicity are understood, the relationships between baclofen-mediated perturbation of individual target organs and systems involved in cardiovascular regulation are not clear. Our aim was to use an integrative approach to measure multiple cardiovascular-relevant parameters [CV: mean arterial pressure (MAP), systolic BP, diastolic BP, pulse pressure, heart rate (HR); CNS: EEG; renal: chemistries and biomarkers of injury] in tandem with the pharmacokinetic properties of baclofen to better elucidate the site(s) of baclofen activity. Han-Wistar rats were administered vehicle or ascending doses of baclofen (3, 10 and 30 mg·kg(-1) , p.o.) at 4 h intervals and baclofen-mediated changes in parameters recorded. A pharmacokinetic-pharmacodynamic model was then built by implementing an existing mathematical model of BP in rats. Final model fits resulted in reasonable parameter estimates and showed that the drug acts on multiple homeostatic processes. In addition, the models testing a single effect on HR, total peripheral resistance or stroke volume alone did not describe the data. A final population model was constructed describing the magnitude and direction of the changes in MAP and HR. The systems pharmacology model developed fits baclofen-mediated changes in MAP and HR well. The findings correlate with known mechanisms of baclofen pharmacology and suggest that similar models using limited parameter sets may be useful to predict the cardiovascular effects of other pharmacologically active substances. © 2016 The British Pharmacological Society.
Artificial Intelligence in Mitral Valve Analysis
Jeganathan, Jelliffe; Knio, Ziyad; Amador, Yannis; Hai, Ting; Khamooshian, Arash; Matyal, Robina; Khabbaz, Kamal R; Mahmood, Feroze
2017-01-01
Background: Echocardiographic analysis of mitral valve (MV) has become essential for diagnosis and management of patients with MV disease. Currently, the various software used for MV analysis require manual input and are prone to interobserver variability in the measurements. Aim: The aim of this study is to determine the interobserver variability in an automated software that uses artificial intelligence for MV analysis. Settings and Design: Retrospective analysis of intraoperative three-dimensional transesophageal echocardiography data acquired from four patients with normal MV undergoing coronary artery bypass graft surgery in a tertiary hospital. Materials and Methods: Echocardiographic data were analyzed using the eSie Valve Software (Siemens Healthcare, Mountain View, CA, USA). Three examiners analyzed three end-systolic (ES) frames from each of the four patients. A total of 36 ES frames were analyzed and included in the study. Statistical Analysis: A multiple mixed-effects ANOVA model was constructed to determine if the examiner, the patient, and the loop had a significant effect on the average value of each parameter. A Bonferroni correction was used to correct for multiple comparisons, and P = 0.0083 was considered to be significant. Results: Examiners did not have an effect on any of the six parameters tested. Patient and loop had an effect on the average parameter value for each of the six parameters as expected (P < 0.0083 for both). Conclusion: We were able to conclude that using automated analysis, it is possible to obtain results with good reproducibility, which only requires minimal user intervention. PMID:28393769
Abu-Jamous, Basel; Fa, Rui; Roberts, David J; Nandi, Asoke K
2015-06-04
Collective analysis of the rapidly growing number of gene expression datasets is required. The recently proposed binarisation of consensus partition matrices (Bi-CoPaM) method can combine clustering results from multiple datasets to identify the subsets of genes which are consistently co-expressed in all of the provided datasets in a tuneable manner. However, results validation and parameter setting are issues that complicate the design of such methods. Moreover, although it is a common practice to test methods by application to synthetic datasets, the mathematical models used to synthesise such datasets are usually based on approximations which may not always be sufficiently representative of real datasets. Here, we propose an unsupervised method for the unification of clustering results from multiple datasets using external specifications (UNCLES). This method has the ability to identify the subsets of genes consistently co-expressed in a subset of datasets while being poorly co-expressed in another subset of datasets, and to identify the subsets of genes consistently co-expressed in all given datasets. We also propose the M-N scatter plots validation technique and adopt it to set the parameters of UNCLES, such as the number of clusters, automatically. Additionally, we propose an approach for the synthesis of gene expression datasets using real data profiles in a way which combines the ground-truth-knowledge of synthetic data and the realistic expression values of real data, and therefore overcomes the problem of faithfulness of synthetic expression data modelling. By application to those datasets, we validate UNCLES while comparing it with other conventional clustering methods, and of particular relevance, biclustering methods. We further validate UNCLES by application to a set of 14 real genome-wide yeast datasets as it produces focused clusters that conform well to known biological facts. Furthermore, in-silico-based hypotheses regarding the function of a few previously unknown genes in those focused clusters are drawn. The UNCLES method, the M-N scatter plots technique, and the expression data synthesis approach will have wide application for the comprehensive analysis of genomic and other sources of multiple complex biological datasets. Moreover, the derived in-silico-based biological hypotheses represent subjects for future functional studies.
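The core consensus idea can be sketched as follows: cluster each dataset separately, accumulate the per-dataset memberships into a consensus partition matrix, and then binarise it so that only genes assigned consistently across all (or a chosen subset of) datasets are retained. The sketch below is illustrative only and omits the tunable binarisation rules and cluster-alignment steps of the published Bi-CoPaM/UNCLES method:

```python
# Illustrative consensus-partition sketch (not the Bi-CoPaM/UNCLES implementation).
import numpy as np
from sklearn.cluster import KMeans

def consensus_membership(datasets, k, seed=0):
    """datasets: list of (genes x samples) arrays sharing the same gene order."""
    n_genes = datasets[0].shape[0]
    membership = np.zeros((n_genes, k))
    for d, X in enumerate(datasets):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed + d).fit_predict(X)
        # Naive alignment by cluster size; real methods align clusters by overlap.
        order = np.argsort([-np.sum(labels == c) for c in range(k)])
        remap = {c: i for i, c in enumerate(order)}
        for g, lab in enumerate(labels):
            membership[g, remap[lab]] += 1.0
    return membership / len(datasets)  # fuzzy consensus partition matrix

def binarise(membership, threshold=1.0):
    """Keep only genes assigned to the same cluster in (almost) every dataset."""
    return membership >= threshold
```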
Prediction of noise constrained optimum takeoff procedures
NASA Technical Reports Server (NTRS)
Padula, S. L.
1980-01-01
An optimization method is used to predict safe, maximum-performance takeoff procedures which satisfy noise constraints at multiple observer locations. The takeoff flight is represented by two-degree-of-freedom dynamical equations with aircraft angle-of-attack and engine power setting as control functions. The engine thrust, mass flow and noise source parameters are assumed to be given functions of the engine power setting and aircraft Mach number. Effective Perceived Noise Levels at the observers are treated as functionals of the control functions. The method is demonstrated by applying it to an Advanced Supersonic Transport aircraft design. The results indicate that automated takeoff procedures (continuously varying controls) can be used to significantly reduce community and certification noise without jeopardizing safety or degrading performance.
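For reference, the two-degree-of-freedom point-mass equations usually meant by such a description take the following standard form (the report's exact formulation and notation may differ), with angle of attack α and power setting δ as the control functions:

```latex
% Standard point-mass climb-out dynamics (illustrative; notation assumed)
\begin{aligned}
m\,\dot V        &= T(\delta, M)\cos\alpha \;-\; D \;-\; m g \sin\gamma,\\
m V\,\dot\gamma  &= T(\delta, M)\sin\alpha \;+\; L \;-\; m g \cos\gamma,\\
\dot h &= V \sin\gamma, \qquad \dot x = V \cos\gamma .
\end{aligned}
```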
Development of a Research Reactor Protocol for Neutron Multiplication Measurements
Arthur, Jennifer Ann; Bahran, Rian Mustafa; Hutchinson, Jesson D.; ...
2018-03-20
A new series of subcritical measurements has been conducted at the zero-power Walthousen Reactor Critical Facility (RCF) at Rensselaer Polytechnic Institute (RPI) using a 3He neutron multiplicity detector. The Critical and Subcritical 0-Power Experiment at Rensselaer (CaSPER) campaign establishes a protocol for advanced subcritical neutron multiplication measurements involving research reactors for validation of neutron multiplication inference techniques, Monte Carlo codes, and associated nuclear data. There has been increased attention and expanded efforts related to subcritical measurements and analyses, and this work provides yet another data set at known reactivity states that can be used in the validation of state-of-the-art Monte Carlo computer simulation tools. The diverse (mass, spatial, spectral) subcritical measurement configurations have been analyzed to produce parameters of interest such as singles rates, doubles rates, and leakage multiplication. MCNP®6.2 was used to simulate the experiment and the resulting simulated data has been compared to the measured results. Comparison of the simulated and measured observables (singles rates, doubles rates, and leakage multiplication) shows good agreement. This work builds upon the previous years of collaborative subcritical experiments and outlines a protocol for future subcritical neutron multiplication inference and subcriticality monitoring measurements on pool-type reactor systems.
NASA Astrophysics Data System (ADS)
O'Connell, Dylan; Thomas, David H.; Lamb, James M.; Lewis, John H.; Dou, Tai; Sieren, Jered P.; Saylor, Melissa; Hofmann, Christian; Hoffman, Eric A.; Lee, Percy P.; Low, Daniel A.
2018-02-01
To determine if the parameters relating lung tissue displacement to a breathing surrogate signal in a previously published respiratory motion model vary with the rate of breathing during image acquisition. An anesthetized pig was imaged using multiple fast helical scans to sample the breathing cycle with simultaneous surrogate monitoring. Three datasets were collected while the animal was mechanically ventilated with different respiratory rates: 12 bpm (breaths per minute), 17 bpm, and 24 bpm. Three sets of motion model parameters describing the correspondences between surrogate signals and tissue displacements were determined. The model error was calculated individually for each dataset, as well as for pairs of parameters and surrogate signals from different experiments. The values of one model parameter, a vector field denoted α which related tissue displacement to surrogate amplitude, determined for each experiment were compared. The mean model error of the three datasets was 1.00 ± 0.36 mm with a 95th percentile value of 1.69 mm. The mean error computed from all combinations of parameters and surrogate signals from different datasets was 1.14 ± 0.42 mm with a 95th percentile of 1.95 mm. The mean difference in α over all pairs of experiments was 4.7% ± 5.4%, and the 95th percentile was 16.8%. The mean angle between pairs of α was 5.0 ± 4.0 degrees, with a 95th percentile of 13.2 degrees. The motion model parameters were largely unaffected by changes in the breathing rate during image acquisition. The mean error associated with mismatched sets of parameters and surrogate signals was 0.14 mm greater than the error achieved when using parameters and surrogate signals acquired with the same breathing rate, while maximum respiratory motion was 23.23 mm on average.
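Surrogate-driven motion models of this kind are commonly written as a linear map from the surrogate signal to voxelwise displacement; the form below is an assumed general illustration (the abstract only describes the amplitude-related field α, so the rate term β is included purely for completeness):

```latex
% Assumed general form of a linear surrogate-driven motion model
\vec X(\vec r, t) \;=\; \vec X_0(\vec r) \;+\; \vec{\alpha}(\vec r)\, s(t) \;+\; \vec{\beta}(\vec r)\, \dot{s}(t)
```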
Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn
2015-01-01
Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body points' time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body points' time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body points' time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters walking speed, cadence, step length, stride length, step width, step time and stride time (all obtained for the intermediate 6 meters) and for the time to walk 10 meters, complemented by Bland-Altman bias and limits of agreement. Body points' time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body points' time series and gait parameters obtained with a multi-Kinect v2 set-up match well, in terms of 3D measurement accuracy, with those derived with a gold-standard system. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
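The Bland-Altman statistics mentioned above reduce to a bias (mean difference) and 95% limits of agreement; a minimal sketch for one gait parameter, with illustrative values, is:

```python
# Minimal Bland-Altman sketch for comparing one gait parameter (e.g., step
# length in meters) between the multi-Kinect set-up and the Optotrak system.
import numpy as np

def bland_altman(kinect, optotrak):
    diff = np.asarray(kinect, float) - np.asarray(optotrak, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

bias, (lo, hi) = bland_altman([0.62, 0.58, 0.65], [0.61, 0.59, 0.66])
print(f"bias = {bias:.3f} m, limits of agreement = [{lo:.3f}, {hi:.3f}] m")
```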
High-energy mode-locked fiber lasers using multiple transmission filters and a genetic algorithm.
Fu, Xing; Kutz, J Nathan
2013-03-11
We theoretically demonstrate that in a laser cavity mode-locked by nonlinear polarization rotation (NPR) using sets of waveplates and passive polarizer, the energy performance can be significantly increased by incorporating multiple NPR filters. The NPR filters are engineered so as to mitigate the multi-pulsing instability in the laser cavity which is responsible for limiting the single pulse per round trip energy in a myriad of mode-locked cavities. Engineering of the NPR filters for performance is accomplished by implementing a genetic algorithm that is capable of systematically identifying viable and optimal NPR settings in a vast parameter space. Our study shows that five NPR filters can increase the cavity energy by approximately a factor of five, with additional NPRs contributing little or no enhancements beyond this. With the advent and demonstration of electronic controls for waveplates and polarizers, the analysis suggests a general design and engineering principle that can potentially close the order of magnitude energy gap between fiber based mode-locked lasers and their solid state counterparts.
McKisson, John E.; Barbosa, Fernando
2015-09-01
A method for designing a completely passive bias compensation circuit to stabilize the gain of multiple pixel avalanche photo detector devices. The method includes determining circuitry design and component values to achieve a desired precision of gain stability. The method can be used with any temperature sensitive device with a nominally linear coefficient of voltage dependent parameter that must be stabilized. The circuitry design includes a negative temperature coefficient resistor in thermal contact with the photomultiplier device to provide a varying resistance and a second fixed resistor to form a voltage divider that can be chosen to set the desired slope and intercept for the characteristic with a specific voltage source value. The addition of a third resistor to the divider network provides a solution set for a set of SiPM devices that requires only a single stabilized voltage source value.
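The divider idea can be sketched numerically: an NTC thermistor (beta model) in series with a fixed resistor makes the voltage applied to the detector rise roughly linearly with temperature, which can be matched to the device's breakdown-voltage temperature coefficient. All component values below are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch of a passive NTC/fixed-resistor divider for SiPM bias
# compensation; component values are made up for illustration.
import numpy as np

def ntc_resistance(T_c, R25=10e3, beta=3435.0):
    """Beta-model NTC resistance at temperature T_c (deg C)."""
    T = T_c + 273.15
    return R25 * np.exp(beta * (1.0 / T - 1.0 / 298.15))

def bias_voltage(T_c, V_source=75.0, R_fixed=56e3):
    """Voltage across the fixed (lower) divider leg, applied to the detector."""
    return V_source * R_fixed / (ntc_resistance(T_c) + R_fixed)

temps = np.linspace(10.0, 40.0, 7)
slope = np.gradient(bias_voltage(temps), temps)   # compare with the device's dV/dT
print(slope)
```

Choosing R_fixed (and, as the abstract notes, an optional third resistor) sets the slope and intercept of this characteristic for a given stabilized source voltage.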
NASA Astrophysics Data System (ADS)
Vannametee, E.; Karssenberg, D.; Hendriks, M. R.; de Jong, S. M.; Bierkens, M. F. P.
2010-05-01
We propose a modelling framework for distributed hydrological modelling of 10³-10⁵ km² catchments by discretizing the catchment in geomorphologic units. Each of these units is modelled using a lumped model representative for the processes in the unit. Here, we focus on the development and parameterization of this lumped model as a component of our framework. The development of the lumped model requires rainfall-runoff data for an extensive set of geomorphological units. Because such large observational data sets do not exist, we create artificial data. With a high-resolution, physically-based, rainfall-runoff model, we create artificial rainfall events and resulting hydrographs for an extensive set of different geomorphological units. This data set is used to identify the lumped model of geomorphologic units. The advantage of this approach is that it results in a lumped model with a physical basis, with representative parameters that can be derived from point-scale measurable physical parameters. The approach starts with the development of the high-resolution rainfall-runoff model that generates an artificial discharge dataset from rainfall inputs as a surrogate of a real-world dataset. The model is run for approximately 10⁵ scenarios that describe different characteristics of rainfall, properties of the geomorphologic units (i.e. slope gradient, unit length and regolith properties), antecedent moisture conditions and flow patterns. For each scenario-run, the results of the high-resolution model (i.e. runoff and state variables) at selected simulation time steps are stored in a database. The second step is to develop the lumped model of a geomorphological unit. This forward model consists of a set of simple equations that calculate Hortonian runoff and state variables of the geomorphologic unit over time. The lumped model contains only three parameters: a ponding factor, a linear reservoir parameter, and a lag time. The model is capable of giving an appropriate representation of the transient rainfall-runoff relations that exist in the artificial data set generated with the high-resolution model. The third step is to find the values of empirical parameters in the lumped forward model using the artificial dataset. For each scenario of the high-resolution model run, a set of lumped model parameters is determined with a fitting method using the corresponding time series of state variables and outputs retrieved from the database. Thus, the parameters in the lumped model can be estimated by using the artificial data set. The fourth step is to develop an approach to assign lumped model parameters based upon the properties of the geomorphological unit. This is done by finding relationships between the measurable physical properties of geomorphologic units (i.e. slope gradient, unit length, and regolith properties) and the lumped forward model parameters using multiple regression techniques. In this way, a set of lumped forward model parameters can be estimated as a function of morphology and physical properties of the geomorphologic units. The lumped forward model can then be applied to different geomorphologic units. Finally, the performance of the lumped forward model is evaluated; the outputs of the lumped forward model are compared with the results of the high-resolution model.
Our results show that the lumped forward model gives the best estimates of total discharge volumes and peak discharges when rain intensities are not significantly larger than the infiltration capacities of the units and when the units are small with a flat gradient. Hydrograph shapes are fairly well reproduced for most cases except for flat and elongated units with large runoff volumes. The results of this study provide a first step towards developing low-dimensional models for large ungauged basins.
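A minimal sketch of such a three-parameter lumped unit (ponding factor, linear reservoir constant, lag time) applied to a rainfall series is given below; it illustrates the model structure described, not the actual equations of the published lumped model:

```python
# Illustrative three-parameter lumped Hortonian runoff unit.
import numpy as np

def lumped_runoff(rain, infil_cap, c_pond, k_res, lag_steps, dt=1.0):
    """rain, infil_cap in mm/step; c_pond: ponding factor [-];
    k_res: linear reservoir constant [1/step]; lag_steps: integer lag."""
    excess = np.maximum(rain - infil_cap, 0.0) * c_pond       # Hortonian excess
    storage, q = 0.0, np.zeros_like(excess)
    for t, inflow in enumerate(excess):
        storage += inflow * dt
        q[t] = k_res * storage                                 # reservoir outflow
        storage -= q[t] * dt
    return np.concatenate([np.zeros(lag_steps), q])[: q.size]  # apply lag time

hydrograph = lumped_runoff(rain=np.array([0.0, 5.0, 20.0, 10.0, 0.0, 0.0, 0.0]),
                           infil_cap=4.0, c_pond=0.8, k_res=0.3, lag_steps=1)
```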
Zhu, Huayang; Ricote, Sandrine; Coors, W Grover; Kee, Robert J
2015-01-01
A model-based interpretation of measured equilibrium conductivity and conductivity relaxation is developed to establish thermodynamic, transport, and kinetics parameters for multiple charged defect conducting (MCDC) ceramic materials. The present study focuses on 10% yttrium-doped barium zirconate (BZY10). In principle, using the Nernst-Einstein relationship, equilibrium conductivity measurements are sufficient to establish thermodynamic and transport properties. However, in practice it is difficult to establish unique sets of properties using equilibrium conductivity alone. Combining equilibrium and conductivity-relaxation measurements serves to significantly improve the quantitative fidelity of the derived material properties. The models are developed using a Nernst-Planck-Poisson (NPP) formulation, which enables the quantitative representation of conductivity relaxations caused by very large changes in oxygen partial pressure.
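The Nernst-Einstein relationship invoked above links the partial conductivity of each charged defect to its concentration and diffusivity; in standard form (symbols as usually defined, assumed here):

```latex
% Nernst-Einstein relation for defect species k (protons, oxygen vacancies, holes)
\sigma_k \;=\; \frac{z_k^{2} F^{2}}{R\,T}\, c_k\, D_k
```

where z_k is the defect charge, c_k its concentration, D_k its diffusivity, F the Faraday constant, R the gas constant and T the temperature.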
Introduction to the Neutrosophic Quantum Theory
NASA Astrophysics Data System (ADS)
Smarandache, Florentin
2014-10-01
Neutrosophic Quantum Theory (NQT) is the study of the principle that certain physical quantities can assume neutrosophic values, instead of discrete values as in quantum theory. These quantities are thus neutrosophically quantized. A neutrosophic value (neutrosophic amount) is expressed by a set (mostly an interval) that approximates (or includes) a discrete value. An oscillator can lose or gain energy by some neutrosophic amount (neither continuously nor discretely, but as a series of integral sets: S, 2S, 3S, ..., where S is a set). In the most general form, one has an ensemble of sets of sets, i.e. R1S1, R2S2, R3S3, ..., where all Rn and Sn are sets that may vary as functions of time and of other parameters. Several such sets may be equal, may be reduced to points, or may be empty. (The product of two sets A and B is classically defined as AB = {ab : a ∈ A and b ∈ B}, and similarly a number n times a set A is defined as nA = {na : a ∈ A}.) The unit of neutrosophic energy is Hν, where H is a set (in particular an interval) that includes the Planck constant h, and ν is the frequency. Therefore, an oscillator could change its energy by a neutrosophic number of quanta: Hν, 2Hν, 3Hν, etc. For example, when H is an interval [h1, h2], with 0 ≤ h1 ≤ h2, that contains the Planck constant h, one has [h1ν, h2ν], [2h1ν, 2h2ν], [3h1ν, 3h2ν], ..., as the series of intervals of energy change of the oscillator. The most general form of the units of neutrosophic energy is Hnνn, where all Hn and νn are sets that, as above, may vary as functions of time and of other oscillator and environment parameters. Neutrosophic quantum theory combines classical mechanics and quantum mechanics.
A framework for combining multiple soil moisture retrievals based on maximizing temporal correlation
NASA Astrophysics Data System (ADS)
Kim, Seokhyeon; Parinussa, Robert M.; Liu, Yi. Y.; Johnson, Fiona M.; Sharma, Ashish
2015-08-01
A method for combining two microwave satellite soil moisture products by maximizing the temporal correlation with a reference data set has been developed. The method was applied to two global soil moisture data sets, Japan Aerospace Exploration Agency (JAXA) and Land Parameter Retrieval Model (LPRM), retrieved from the Advanced Microwave Scanning Radiometer 2 observations for the period 2012-2014. A global comparison revealed superior results of the combined product compared to the individual products against the reference data set of ERA-Interim volumetric water content. The global mean temporal correlation coefficient of the combined product with this reference was 0.52 which outperforms the individual JAXA (0.35) as well as the LPRM (0.45) product. Additionally, the performance was evaluated against in situ observations from the International Soil Moisture Network. The combined data set showed a significant improvement in temporal correlation coefficients in the validation compared to JAXA and minor improvements for the LPRM product.
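A minimal sketch of the combination idea, choosing a single blending weight that maximizes Pearson correlation with the reference series, is shown below; the published scheme's normalization and weighting details may differ:

```python
# Illustrative sketch: blend two standardized soil-moisture retrievals with the
# weight that maximizes temporal correlation against a reference series.
import numpy as np

def combine_max_corr(x_jaxa, x_lprm, ref, weights=np.linspace(0.0, 1.0, 101)):
    def corr(a, b):
        m = np.isfinite(a) & np.isfinite(b)
        return np.corrcoef(a[m], b[m])[0, 1]
    z = lambda a: (a - np.nanmean(a)) / np.nanstd(a)   # standardize each product
    zj, zl, zr = z(np.asarray(x_jaxa)), z(np.asarray(x_lprm)), z(np.asarray(ref))
    scores = [corr(w * zj + (1.0 - w) * zl, zr) for w in weights]
    w_best = weights[int(np.nanargmax(scores))]
    return w_best, w_best * zj + (1.0 - w_best) * zl
```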
Van Schuerbeek, Peter; Baeken, Chris; De Mey, Johan
2016-01-01
Concerns are being raised about the large variability in reported correlations between gray matter morphology and affective personality traits such as ‘Harm Avoidance’ (HA). A recent review study (Mincic 2015) stipulated that this variability could come from methodological differences between studies. In order to achieve more robust results by standardizing the data processing procedure, as a first step, we repeatedly analyzed data from healthy females while changing the processing settings (voxel-based morphometry (VBM) or region-of-interest (ROI) labeling, smoothing filter width, nuisance parameters included in the regression model, brain atlas and multiple comparisons correction method). The heterogeneity in the obtained results clearly illustrates the dependence of the study outcome on the chosen analysis settings. Based on our results and the existing literature, we recommended the use of VBM over ROI labeling for whole brain analyses with a small or intermediate smoothing filter (5-8 mm) and a model variable selection step included in the processing procedure. Additionally, it is recommended that ROI labeling should only be used in combination with a clear hypothesis and that authors are encouraged to report their results uncorrected for multiple comparisons as supplementary material to aid review studies. PMID:27096608
Temporal variation and scaling of parameters for a monthly hydrologic model
NASA Astrophysics Data System (ADS)
Deng, Chao; Liu, Pan; Wang, Dingbao; Wang, Weiguang
2018-03-01
The temporal variation of model parameters is affected by the catchment conditions and has a significant impact on hydrological simulation. This study aims to evaluate the seasonality and downscaling of model parameters across time scales based on monthly and mean annual water balance models with a common model framework. Two parameters of the monthly model, i.e., k and m, are assumed to be time-variant, taking different values in different months. Based on the hydrological data set from 121 MOPEX catchments in the United States, we first analyzed the correlation between parameters (k and m) and catchment properties (NDVI and frequency of rainfall events, α). The results show that parameter k is positively correlated with NDVI or α, while the correlation is opposite for parameter m, indicating that precipitation and vegetation affect monthly water balance by controlling the temporal variation of parameters k and m. Multiple linear regression is then used to fit the relationship between the mean annual parameter ε and the means and coefficients of variation of parameters k and m. Based on the empirical equation and the correlations between the time-variant parameters and NDVI, the mean annual parameter ε is downscaled to monthly k and m. The results show that the downscaled model has lower NSEs than the model with time-variant k and m calibrated through SCE-UA, while for several study catchments it has higher NSEs than the model with constant parameters. The proposed method is feasible and provides a useful tool for temporal scaling of model parameters.
A phase I study to assess the single and multiple dose pharmacokinetics of THC/CBD oromucosal spray.
Stott, C G; White, L; Wright, S; Wilbraham, D; Guy, G W
2013-05-01
This was a Phase I study to assess the single- and multiple-dose pharmacokinetics (PK), safety and tolerability of oromucosally administered Δ(9)-tetrahydrocannabinol (THC)/cannabidiol (CBD) spray, an endocannabinoid system modulator, in healthy male subjects. Subjects received either single doses of THC/CBD spray as multiple sprays [2 (5.4 mg THC and 5.0 mg CBD), 4 (10.8 mg THC and 10.0 mg CBD) or 8 (21.6 mg THC and 20.0 mg CBD) daily sprays] or multiple doses of THC/CBD spray (2, 4 or 8 sprays once daily) for nine consecutive days, following fasting for a minimum of 10 h overnight prior to each dosing. Plasma samples were analyzed by gas chromatography-mass spectrometry for CBD, THC, and its primary metabolite 11-hydroxy-THC, and various PK parameters were investigated. Δ(9)-Tetrahydrocannabinol and CBD were rapidly absorbed following single-dose administration. With increasing single and multiple doses of THC/CBD spray, the mean peak plasma concentration (Cmax) increased for all analytes. There was evidence of dose-proportionality in the single but not the multiple dosing data sets. The bioavailability of THC was greater than that of CBD at single and multiple doses, and there was no evidence of accumulation for any analyte with multiple dosing. Inter-subject variability ranged from moderate to high for all PK parameters in this study. The time to peak plasma concentration (Tmax) was longest for all analytes in the eight spray group, but was similar in the two and four spray groups. THC/CBD spray was well-tolerated in this study and no serious adverse events were reported. The mean Cmax values (<12 ng/mL) recorded in this study were well below those reported in patients who smoked/inhaled cannabis, which is reassuring since elevated Cmax values are linked to significant psychoactivity. There was also no evidence of accumulation on repeated dosing.
Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem
2017-11-01
We propose a computational strategy that falls into the category of prediction/correction iterative-type approaches, for calibrating the hemodynamic model. The proposed method is used to estimate consecutively the values of the two sets of model parameters. Numerical results corresponding to both synthetic and real functional magnetic resonance imaging measurements for a single stimulus as well as for multiple stimuli are reported to highlight the capability of this computational methodology to fully calibrate the considered hemodynamic model. Copyright © 2017 John Wiley & Sons, Ltd.
An Analysis Method for Superconducting Resonator Parameter Extraction with Complex Baseline Removal
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe
2014-01-01
A new semi-empirical model is proposed for extracting the quality (Q) factors of arrays of superconducting microwave kinetic inductance detectors (MKIDs). The determination of the total internal and coupling Q factors enables the computation of the loss in the superconducting transmission lines. The method used allows the simultaneous analysis of multiple interacting discrete resonators with the presence of a complex spectral baseline arising from reflections in the system. The baseline removal allows an unbiased estimate of the device response as measured in a cryogenic instrumentation setting.
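A commonly used functional form for such fits combines a notch-type resonator response with a slowly varying complex baseline; the expression below is a generic illustration of that structure (the paper's semi-empirical model may be parameterized differently):

```latex
% Generic notch resonator with a complex (linear-in-frequency) baseline
S_{21}(f) \;=\; \bigl(a_0 + a_1 f\bigr)\, e^{\,i(\phi_0 + \phi_1 f)}
\left[\, 1 \;-\; \frac{(Q/Q_c)\, e^{i\theta}}{1 + 2\,i\,Q\,\dfrac{f - f_r}{f_r}} \,\right],
\qquad \frac{1}{Q} = \frac{1}{Q_i} + \frac{1}{Q_c}
```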
NASA Astrophysics Data System (ADS)
Parsons, Mark; Grindrod, Peter
2012-06-01
We introduce a model for a pair of nonlinear evolving networks, defined over a common set of vertices, subject to edgewise competition. Each network may grow new edges spontaneously or through triad closure. Both networks inhibit the other's growth and encourage the other's demise. These nonlinear stochastic competition equations yield to a mean field analysis resulting in a nonlinear deterministic system. There may be multiple equilibria; and bifurcations of different types are shown to occur within a reduced parameter space. This situation models competitive communication networks such as BlackBerry Messenger displacing SMS; or instant messaging displacing emails.
''Do-it-yourself'' software program calculates boiler efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-03-01
An easy-to-use software package is described which runs on the IBM Personal Computer. The package calculates boiler efficiency, an important parameter of operating costs and equipment wellbeing. The program stores inputs and calculated results for 20 sets of boiler operating data, called cases. Cases can be displayed and modified on the CRT screen through multiple display pages or copied to a printer. All intermediate calculations are performed by this package. They include: steam enthalpy; water enthalpy; air humidity; gas, oil, coal, and wood heat capacity; and radiation losses.
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
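The fusion step can be sketched as similarity-weighted voting over the ensemble of warped atlases; the code below is an illustrative, voxelwise simplification (the MUSE implementation uses local similarity ranking and a boundary-modulation term, and is available from the sources listed above):

```python
# Illustrative similarity-weighted label fusion over an ensemble of warped atlases.
import numpy as np

def fuse_labels(target, warped_imgs, warped_labels, n_labels, beta=1.0):
    """target: intensity volume; warped_imgs/warped_labels: lists of volumes in
    target space, one per (atlas, registration method, parameter set)."""
    votes = np.zeros(target.shape + (n_labels,))
    for img, lab in zip(warped_imgs, warped_labels):
        w = np.exp(-beta * (target - img) ** 2)   # crude voxelwise similarity weight
        for l in range(n_labels):
            votes[..., l] += w * (lab == l)
    return np.argmax(votes, axis=-1)              # consensus label per voxel
```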
Berniker, Max; Kording, Konrad P.
2011-01-01
Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574
Optimization of dose and image quality in adult and pediatric computed tomography scans
NASA Astrophysics Data System (ADS)
Chang, Kwo-Ping; Hsu, Tzu-Kun; Lin, Wei-Ting; Hsu, Wen-Lin
2017-11-01
Exploration to maximize CT image quality and reduce radiation dose was conducted while controlling for multiple factors. The kVp, mAs and iterative reconstruction (IR) settings affect the CT image quality and the radiation dose absorbed. The optimal protocols (kVp, mAs, IR) are derived by a figure of merit (FOM) based on CT image quality (CNR) and the CT dose index (CTDIvol). CT image quality metrics such as CT number accuracy, SNR, CNR of low-contrast materials and line-pair resolution were also analyzed as auxiliary assessments. CT protocols were carried out with an ACR accreditation phantom and a five-year-old pediatric head phantom. The threshold values of the adult CT scan parameters, 100 kVp and 150 mAs, were determined from the CT number test and line pairs in ACR phantom module 1 and module 4, respectively. The findings of this study suggest that the optimal scanning parameters for adults be set at 100 kVp and 150-250 mAs. However, for improved low-contrast resolution, 120 kVp and 150-250 mAs are optimal. Optimal settings for pediatric head CT scans were 80 kVp/50 mAs for the maxillary sinus and brain stem, and 80 kVp/300 mAs for the temporal bone. SNR is not reliable as an independent image-quality parameter nor as the metric for determining optimal CT scan parameters. The iterative reconstruction (IR) approach is strongly recommended for both adult and pediatric CT scanning as it markedly improves image quality without affecting radiation dose.
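A commonly used dose-normalized figure of merit of this kind is shown below; the exact definition used in the study is an assumption here:

```latex
% Dose-normalized image-quality figure of merit (commonly used form; assumed)
\mathrm{FOM} \;=\; \frac{\mathrm{CNR}^{2}}{\mathrm{CTDI_{vol}}}
```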
Sowa-Staszczak, Anna; Lenda-Tracz, Wioletta; Tomaszuk, Monika; Głowa, Bogusław; Hubalewska-Dydejczyk, Alicja
2013-01-01
Somatostatin receptor scintigraphy (SRS) is a useful tool in the assessment of GEP-NET (gastroenteropancreatic neuroendocrine tumor) patients. The choice of appropriate settings of image reconstruction parameters is crucial in interpretation of these images. The aim of the study was to investigate how the GEP-NET lesion signal-to-noise ratio (TCS/TCB) depends on different reconstruction settings for Flash 3D software (Siemens). SRS results of 76 randomly selected patients with confirmed GEP-NET were analyzed. For SPECT studies the data were acquired using standard clinical settings 3-4 h after the injection of 740 MBq 99mTc-[EDDA/HYNIC] octreotate. To obtain final images the OSEM 3D Flash reconstruction with different settings and FBP reconstruction were used. First, the TCS/TCB ratio in voxels was analyzed for different combinations of the number of subsets and the number of iterations of the OSEM 3D Flash reconstruction. Secondly, the same ratio was analyzed for different parameters of the Gaussian filter (with FWHM 2-4 times the pixel size). Also the influence of scatter correction on the TCS/TCB ratio was investigated. With an increasing number of subsets and iterations, an increase of the TCS/TCB ratio was observed. With increasing Gaussian filter width (FWHM), a decrease of the TCS/TCB ratio was observed. The use of scatter correction slightly decreases the values of this ratio. The OSEM algorithm provides a meaningfully better reconstruction of the SRS SPECT study as compared to the FBP technique. A high number of subsets improves image quality (images are smoother). An increasing number of iterations gives better contrast and the shapes of lesions and organs are sharper. The choice of reconstruction parameters is a compromise between image qualitative appearance and its quantitative accuracy and should not be modified when comparing multiple studies of the same patient.
Modifications of Ti-6Al-4V surfaces by direct-write laser machining of linear grooves
NASA Astrophysics Data System (ADS)
Ulerich, Joseph P.; Ionescu, Lara C.; Chen, Jianbo; Soboyejo, Winston O.; Arnold, Craig B.
2007-02-01
As patients who receive orthopedic implants live longer and opt for surgery at a younger age, the need to extend the in vivo lifetimes of these implants has grown. One approach is to pattern implant surfaces with linear grooves, which elicit a cellular response known as contact guidance. Lasers provide a unique method of generating these surface patterns because they are capable of modifying physical and chemical properties over multiple length scales. In this paper we explore the relationship between surface morphology and laser parameters such as fluence, pulse overlap (translation distance), number of passes, and machining environment. We find that using simple procedures involving multiple passes it is possible to manipulate groove properties such as depth, shape, sub-micron roughness, and chemical composition of the Ti-6Al-4V oxide layer. Finally, we demonstrate this procedure by machining several sets of grooves with the same primary groove parameters but varied secondary characteristics. The significance of the secondary groove characteristics is demonstrated by preliminary cell studies indicating that the grooves exhibit basic features of contact guidance and that the cell proliferation in these grooves are significantly altered despite their similar primary characteristics. With further study it will be possible to use specific laser parameters during groove formation to create optimal physical and chemical properties for improved osseointegration.
Yang, Hong; Xue, Xuejia; Li, Huan; Tay-Chan, Su Chin; Ong, Seng Poon; Tian, Edmund Feng
2017-08-15
In this work, we established a new methodology to simultaneously assess the relative reaction rates of multiple antioxidant compounds in one experimental set-up. This new methodology hypothesizes that the competition among antioxidant compounds towards a limiting amount of free radical (here, DPPH) would reflect their relative reaction rates. In contrast with the conventional detection of the DPPH decrease at 515 nm on a spectrophotometer, the depletion of antioxidant compounds treated with a series of DPPH concentrations was monitored instead using liquid chromatography coupled with quadrupole time-of-flight (LC-QTOF) mass spectrometry. A new parameter, namely relative antioxidant activity (RAA), has been proposed to rank these antioxidants according to their reaction rate constants. We have investigated the applicability of RAA using pre-mixed standard phenolic compounds, and also extended this application to two food products, i.e. red wine and green tea. It has been found that RAA correlates well with the reported k values. This new parameter, RAA, provides a new perspective in evaluating antioxidant compounds present in food and herbal matrices. It not only realistically reflects the antioxidant activity of compounds co-existing with competitive constituents, but could also speed up the discovery of potent yet rare antioxidants from herbs of food or medicinal origin. Copyright © 2017 Elsevier Ltd. All rights reserved.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.
2016-03-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling
NASA Astrophysics Data System (ADS)
Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.
2015-11-01
We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
Estimating ambiguity preferences and perceptions in multiple prior models: Evidence from the field
Dimmock, Stephen G.; Kouwenberg, Roy; Mitchell, Olivia S.; Peijnenburg, Kim
2016-01-01
We develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of moderate to high likelihood involving gains, but ambiguity seeking prevails for low likelihoods and for losses. We show that choices made under ambiguity in the gain domain are best explained by the α-MaxMin model, with one parameter measuring ambiguity aversion (ambiguity preferences) and a second parameter quantifying the perceived degree of ambiguity (perceptions about ambiguity). The ambiguity aversion parameter α is constant and prior probability sets are asymmetric for low and high likelihood events. The data reject several other models, such as MaxMin and MaxMax, as well as symmetric probability intervals. Ambiguity aversion and the perceived degree of ambiguity are both higher for men and for the college-educated. Ambiguity aversion (but not perceived ambiguity) is also positively related to risk aversion. In the loss domain, we find evidence of reflection, implying that ambiguity aversion for gains tends to reverse into ambiguity seeking for losses. Our model’s estimates for preferences and perceptions about ambiguity can be used to analyze the economic and financial implications of such preferences. PMID:26924890
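For reference, the α-MaxMin representation evaluates an uncertain act f over a set of priors C as (standard form; notation assumed):

```latex
% alpha-MaxMin expected utility over a prior set C
V(f) \;=\; \alpha \,\min_{p \in C} \mathbb{E}_p\!\left[u(f)\right]
      \;+\; (1-\alpha)\,\max_{p \in C} \mathbb{E}_p\!\left[u(f)\right]
```

Here α captures ambiguity aversion and the size of the prior set C captures the perceived degree of ambiguity.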
NASA Astrophysics Data System (ADS)
Yufeng, Wang; Qiang, Fu; Meina, Zhao; Fei, Gao; Huige, Di; Yuehui, Song; Dengxin, Hua
2018-01-01
To monitor the variability and the correlation of multiple atmospheric parameters in the whole troposphere and the lower stratosphere, a ground-based ultraviolet multifunctional Raman lidar system was established to simultaneously measure the atmospheric parameters in Xi'an (34.233°N, 108.911°E). A set of dichroic mirrors (DMs) and narrow-band interference filters (IFs) with narrow angles of incidence were utilized to construct a high-efficiency 5-channel polychromator. A series of high-quality data obtained from October 2013 to December 2015 under different weather conditions were used to investigate the functionality of the Raman lidar system and to study the variability of multiple atmospheric parameters throughout the troposphere and the lower stratosphere. Their transport characteristics are also investigated using back trajectories with a hybrid single-particle Lagrangian integrated trajectory model (HYSPLIT). The lidar system can be operated efficiently under weather conditions with a cloud backscattering ratio of less than 18 and an atmospheric visibility of 3 km. We observed an obvious temperature inversion phenomenon at the tropopause height of 17-18 km and occasional temperature inversion layers below the boundary layer. The rapidly changing atmospheric water vapor is mostly concentrated in the lower troposphere, below ∼4-5 km, accounting for ∼90% of the total water vapor content at 0.5-10 km. The back trajectory analysis shows that the air flow from the northwest and the west mainly contributes to the transport of aerosols and water vapor over Xi'an. The simultaneous continuous observational results demonstrate the variability and correlation among the multiple atmospheric parameters, and the accumulated water vapor density in the bottom layer causes an increase in the aerosol extinction coefficient and enhances the relative humidity in the early morning. The long-term observations provide a large amount of reliable atmospheric data below the lower stratosphere, and can be used to study their correlation and to improve local climate change research.
Optimal error functional for parameter identification in anisotropic finite strain elasto-plasticity
NASA Astrophysics Data System (ADS)
Shutov, A. V.; Kaygorodtseva, A. A.; Dranishnikov, N. S.
2017-10-01
A problem of parameter identification for a model of finite strain elasto-plasticity is discussed. The utilized phenomenological material model accounts for nonlinear isotropic and kinematic hardening; the model kinematics is described by a nested multiplicative split of the deformation gradient. A hierarchy of optimization problems is considered. First, following the standard procedure, the material parameters are identified through minimization of a certain least square error functional. Next, the focus is placed on finding optimal weighting coefficients which enter the error functional. Toward that end, a stochastic noise with systematic and non-systematic components is introduced to the available measurement results; a superordinate optimization problem seeks to minimize the sensitivity of the resulting material parameters to the introduced noise. The advantage of this approach is that no additional experiments are required; it also provides an insight into the robustness of the identification procedure. As an example, experimental data for the steel 42CrMo4 are considered and a set of weighting coefficients is found, which is optimal in a certain class.
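The first-level error functional referred to is, in generic weighted least-squares form (an illustrative statement; the paper's exact functional and normalization may differ):

```latex
% Generic weighted least-squares identification functional over N data points
\Phi(\mathbf{p}) \;=\; \sum_{i=1}^{N} w_i \left( y_i^{\mathrm{sim}}(\mathbf{p}) - y_i^{\mathrm{exp}} \right)^{2}
```

where p is the vector of material parameters, y_i the measured and simulated responses, and w_i the weighting coefficients that the superordinate optimization chooses so that the identified parameters are least sensitive to measurement noise.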
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraint within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only landfill would be expanded, but also CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their combinations; secondly, it has capability in addressing the temporal variations of the functional intervals; thirdly, it can facilitate dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period and multi-option context. Copyright 2009 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, O; Giem, J; Young, J
Purpose: To investigate the doses received by the hippocampus and normal brain tissue during a course of stereotactic radiotherapy utilizing a single isocenter (SI) versus multiple isocenters (MI) in patients with multiple intracranial metastases. Methods: Seven patients imaged with MRI including an SPGR sequence and diagnosed with 2–3 brain metastases were included in this retrospective study. Two sets of stereotactic IMRT treatment plans (MI vs SI) were generated. The hippocampus was contoured on SPGR sequences and doses received by the hippocampus and whole brain were calculated. The prescribed dose was 25 Gy in 5 fractions. The two groups were compared using t-test analysis. Results: There were 17 lesions in 7 patients. The median tumor, right hippocampus, left hippocampus and brain volumes were: 3.37 cc, 2.56 cc, 3.28 cc, and 1417 cc respectively. In comparing the two treatment plans, there was no difference in the PTV coverage except in the tail of the DVH curve. All tumors had V95 > 99.5%. The only statistically significant parameter was the V100 (72% vs 45%, p=0.002, favoring MI). All other evaluated parameters including the V95 and V98 did not reveal any statistically significant differences. None of the evaluated dosimetric parameters for the hippocampus (V100, V80, V60, V40, V20, V10, D100, D90, D70, D50, D30, D10) revealed any statistically significant differences (all p-values > 0.31) between MI and SI plans. The total brain dose was slightly higher in the SI plans, especially in the lower dose regions, although this difference was not statistically significant. Utilizing brain-sub-PTV volumes did not change these results. Conclusion: The use of SI treatment planning for patients with up to 3 brain metastases produces similar PTV coverage and similar normal tissue doses to the hippocampus and the brain compared to MI plans. SI treatment planning should be considered in patients with multiple brain metastases undergoing stereotactic treatment.
NASA Technical Reports Server (NTRS)
Schlegel, T. T.; Arenare, B.; Greco, E. C.; DePalma, J. L.; Starc, V.; Nunez, T.; Medina, R.; Jugo, D.; Rahman, M.A.; Delgado, R.
2007-01-01
We investigated the accuracy of several conventional and advanced resting ECG parameters for identifying obstructive coronary artery disease (CAD) and cardiomyopathy (CM). Advanced high-fidelity 12-lead ECG tests (approx. 5-min supine) were first performed on a "training set" of 99 individuals: 33 with ischemic or dilated CM and low ejection fraction (EF less than 40%); 33 with catheterization-proven obstructive CAD but normal EF; and 33 age-/gender-matched healthy controls. Multiple conventional and advanced ECG parameters were studied for their individual and combined retrospective accuracies in detecting underlying disease, the advanced parameters falling within the following categories: 1) Signal averaged ECG, including 12-lead high frequency QRS (150-250 Hz) plus multiple filtered and unfiltered parameters from the derived Frank leads; 2) 12-lead P, QRS and T-wave morphology via singular value decomposition (SVD) plus signal averaging; 3) Multichannel (12-lead, derived Frank lead, SVD lead) beat-to-beat QT interval variability; 4) Spatial ventricular gradient (and gradient component) variability; and 5) Heart rate variability. Several multiparameter ECG SuperScores were derivable, using stepwise and then generalized additive logistic modeling, that each had 100% retrospective accuracy in detecting underlying CM or CAD. The performance of these same SuperScores was then prospectively evaluated using a test set of another 120 individuals (40 new individuals in each of the CM, CAD and control groups, respectively). All 12-lead ECG SuperScores retrospectively generated for CM continued to perform well in prospectively identifying CM (i.e., areas under the ROC curve greater than 0.95), with one such score (containing just 4 components) maintaining 100% prospective accuracy. SuperScores retrospectively generated for CAD performed somewhat less accurately, with prospective areas under the ROC curve typically in the 0.90-0.95 range. We conclude that resting 12-lead high-fidelity ECG employing and combining the results of several advanced ECG software techniques shows great promise as a rapid and inexpensive tool for screening of heart disease.
Quantitative analysis of single- vs. multiple-set programs in resistance training.
Wolfe, Brian L; LeMura, Linda M; Cole, Phillip J
2004-02-01
The purpose of this study was to examine the existing research on single-set vs. multiple-set resistance training programs. Using the meta-analytic approach, we included studies that met the following criteria in our analysis: (a) at least 6 subjects per group; (b) subject groups consisting of single-set vs. multiple-set resistance training programs; (c) pretest and posttest strength measures; (d) training programs of 6 weeks or more; (e) apparently "healthy" individuals free from orthopedic limitations; and (f) published studies in English-language journals only. Sixteen studies generated 103 effect sizes (ESs) based on a total of 621 subjects, ranging in age from 15-71 years. Across all designs, intervention strategies, and categories, the pretest to posttest ES in muscular strength was 1.4 ± 1.4 (95% confidence interval, 0.41-3.8; p < 0.001). The results of a 2 × 2 analysis of variance revealed simple main effects for age, training status (trained vs. untrained), and research design (p < 0.001). No significant main effects were found for sex, program duration, and set end point. Significant interactions were found for training status and program duration (6-16 weeks vs. 17-40 weeks) and number of sets performed (single vs. multiple). The data indicated that trained individuals performing multiple sets generated significantly greater increases in strength (p < 0.001). For programs with an extended duration, multiple sets were superior to single sets (p < 0.05). This quantitative review indicates that single-set programs for an initial short training period in untrained individuals result in similar strength gains as multiple-set programs. However, as progression occurs and higher gains are desired, multiple-set programs are more effective.
Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas
2009-01-01
Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximate rate laws in step two, as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process with its numerous choices and the mutual influence between them makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: For first attempts, the settings-free Tribes algorithm yields valuable results. Particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
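For illustration of step (3), parameter calibration, the following minimal Python sketch fits the rate constants of a toy two-step mass-action chain to synthetic time-course data with differential evolution, one of the optimization strategies named above; the model, data, and bounds are placeholders rather than the valine/leucine network of the study.

```python
# Hedged sketch: calibrate rate constants k1, k2 of a toy chain A -> B -> C
# against synthetic "measurements" using differential evolution.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def mass_action(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

t_obs = np.linspace(0.0, 10.0, 21)
y_true = solve_ivp(mass_action, (0.0, 10.0), [1.0, 0.0, 0.0],
                   t_eval=t_obs, args=(0.8, 0.3)).y
y_obs = y_true + np.random.default_rng(0).normal(scale=0.02, size=y_true.shape)

def sse(params):
    # Sum of squared residuals between simulated and observed concentrations.
    sol = solve_ivp(mass_action, (0.0, 10.0), [1.0, 0.0, 0.0],
                    t_eval=t_obs, args=tuple(params))
    return float(np.sum((sol.y - y_obs) ** 2))

result = differential_evolution(sse, bounds=[(0.01, 5.0), (0.01, 5.0)], seed=1)
print("estimated rate constants:", result.x)
```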
NASA Astrophysics Data System (ADS)
Boudghene Stambouli, Ahmed; Zendagui, Djawad; Bard, Pierre-Yves; Derras, Boumédiène
2017-07-01
Most modern seismic codes account for site effects using an amplification factor (AF) that modifies the rock acceleration response spectra in relation to a "site condition proxy," i.e., a parameter related to the velocity profile at the site under consideration. Therefore, for practical purposes, it is interesting to identify the site parameters that best control the frequency-dependent shape of the AF. The goal of the present study is to provide a quantitative assessment of the performance of various site condition proxies to predict the main AF features, including the often used short- and mid-period amplification factors, Fa and Fv, proposed by Borcherdt (in Earthq Spectra 10:617-653, 1994). In this context, the linear, viscoelastic responses of a set of 858 actual soil columns from Japan, the USA, and Europe are computed for a set of 14 real accelerograms with varying frequency contents. The correlation between the corresponding site-specific average amplification factors and several site proxies (considered alone or as multiple combinations) is analyzed using the generalized regression neural network (GRNN). The performance of each site proxy combination is assessed through the variance reduction with respect to the initial amplification factor variability of the 858 profiles. Both the whole period range and specific short- and mid-period ranges associated with the Borcherdt factors Fa and Fv are considered. The actual amplification factor of an arbitrary soil profile is found to be satisfactorily approximated with a limited number of site proxies (4-6). As the usual code practice implies a lower number of site proxies (generally one, sometimes two), a sensitivity analysis is conducted to identify the "best performing" site parameters. The best one is the overall velocity contrast between underlying bedrock and minimum velocity in the soil column. Because this is among the most difficult and expensive parameters to measure, especially for thick deposits, other more convenient parameters are preferred, especially the pair (VS30, f0), which leads to a variance reduction of at least 60%. From a code perspective, equations and plots are provided describing the dependence of the short- and mid-period amplification factors Fa and Fv on these two parameters. The robustness of the results is analyzed by performing a similar analysis for two alternative sets of velocity profiles, for which the bedrock velocity is constrained to have the same value for all velocity profiles, which is not the case in the original set.
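As a rough illustration of the regression step, the sketch below implements a generalized regression neural network (i.e., Gaussian kernel regression) that predicts an amplification factor from two site proxies such as VS30 and f0; the training values and the smoothing width are invented placeholders, not the 858-profile data set of the study.

```python
# Hedged sketch: GRNN (Nadaraya-Watson kernel regression) mapping standardized
# site proxies to an average amplification factor.
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """Gaussian-kernel weighted average of training targets."""
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

X = np.array([[200.0, 1.2], [350.0, 2.5], [760.0, 5.0]])   # [VS30 m/s, f0 Hz], illustrative
y = np.array([2.4, 1.8, 1.1])                              # average AF, illustrative
Xq = np.array([[300.0, 2.0]])
mu, sd = X.mean(axis=0), X.std(axis=0)                     # standardize proxies before use
print(grnn_predict((X - mu) / sd, y, (Xq - mu) / sd))
```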
ERIC Educational Resources Information Center
Abad, Francisco J.; Olea, Julio; Ponsoda, Vicente
2009-01-01
This article deals with some of the problems that have hindered the application of Samejima's and Thissen and Steinberg's multiple-choice models: (a) parameter estimation difficulties owing to the large number of parameters involved, (b) parameter identifiability problems in the Thissen and Steinberg model, and (c) their treatment of omitted…
Empirical Assessment of the Mean Block Volume of Rock Masses Intersected by Four Joint Sets
NASA Astrophysics Data System (ADS)
Morelli, Gian Luca
2016-05-01
The estimation of a representative value for the rock block volume (Vb) is of great interest in rock engineering for rock mass characterization. However, while mathematical relationships to precisely estimate this parameter from the spacing of joints can be found in the literature for rock masses intersected by three dominant joint sets, corresponding relationships do not actually exist when more than three sets occur. In these cases, a consistent assessment of Vb can only be achieved by directly measuring the dimensions of several representative natural rock blocks in the field or by means of more sophisticated 3D numerical modeling approaches. However, Palmström's empirical relationship based on the volumetric joint count Jv and on a block shape factor β is commonly used in practice, although strictly valid only for rock masses intersected by three joint sets. Starting from these considerations, the present paper is primarily intended to investigate the reliability of a set of empirical relationships linking the block volume with the indexes most commonly used to characterize the degree of jointing in a rock mass (i.e. the Jv and the mean value of the joint set spacings) specifically applicable to rock masses intersected by four sets of persistent discontinuities. Based on the analysis of artificial 3D block assemblies generated using the software AutoCAD, the most accurate best-fit regression has been found between the mean block volume (Vbm) of tested rock mass samples and the geometric mean value of the spacings of the joint sets delimiting blocks, thus indicating this mean value as a promising parameter for the preliminary characterization of the block size. Tests on field outcrops have demonstrated that the proposed empirical methodology has the potential to predict the mean block volume of multiple-set jointed rock masses with an acceptable accuracy for common uses in most practical rock engineering applications.
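A minimal sketch of the block-size proxy highlighted in the abstract, the geometric mean of the four joint-set spacings, is given below; cubing it as a first-order volume indicator is an added assumption, since the paper's actual regression coefficients are not reproduced here.

```python
# Hedged sketch: geometric mean of joint-set spacings as a block-size proxy.
import numpy as np

spacings = np.array([0.4, 0.6, 0.9, 1.5])        # mean spacing of each of 4 joint sets, m (illustrative)
s_geo = float(np.prod(spacings) ** (1.0 / spacings.size))
print(f"geometric mean spacing: {s_geo:.2f} m")
print(f"first-order block volume indicator: {s_geo ** 3:.3f} m^3")   # assumption, not the paper's fit
```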
Raymond, G M; Bassingthwaighte, J B
This is a practical example of a powerful research strategy: putting together data from studies covering a diversity of conditions can yield a scientifically sound grasp of the phenomenon when the individual observations failed to provide definitive understanding. The rationale is that defining a realistic, quantitative, explanatory hypothesis for the whole set of studies brings about a "consilience" of the often competing hypotheses considered for individual data sets. An internally consistent conjecture linking multiple data sets simultaneously provides stronger evidence on the characteristics of a system than does analysis of individual data sets limited to narrow ranges of conditions. Our example examines three very different data sets on the clearance of salicylic acid from humans: a high concentration set from aspirin overdoses; a set with medium concentrations from a research study on the influences of the route of administration and of sex on the clearance kinetics; and a set on low dose aspirin for cardiovascular health. Three models were tested: (1) a first order reaction, (2) a Michaelis-Menten (M-M) approach, and (3) an enzyme kinetic model with forward and backward reactions. The reaction rates found from model 1 were distinctly different for the three data sets, having no commonality. The M-M model 2 fitted each of the three data sets but gave a reliable estimate of the Michaelis constant only for the medium level data (Km = 24±5.4 mg/L); analyzing the three data sets together with model 2 gave Km = 18±2.6 mg/L. (Estimating parameters using larger numbers of data points in an optimization increases the degrees of freedom, constraining the range of the estimates). Using the enzyme kinetic model (3) increased the number of free parameters but nevertheless improved the goodness of fit to the combined data sets, giving tighter constraints, and a lower estimated Km = 14.6±2.9 mg/L, demonstrating that fitting diverse data sets with a single model improves confidence in the results. This modeling effort is also an example of reproducible science available at html://www.physiome.org/jsim/models/webmodel/NSR/SalicylicAcidClearance.
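A minimal sketch of model (2), Michaelis-Menten elimination, fitted to concentration-time data is shown below; the synthetic data, initial concentration, and starting values are placeholders, not the pooled salicylate data sets analysed in the paper.

```python
# Hedged sketch: fit Vmax and Km of dC/dt = -Vmax*C/(Km + C) to synthetic data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def mm_concentration(t, vmax, km, c0=300.0):
    """Plasma concentration under Michaelis-Menten elimination."""
    sol = solve_ivp(lambda _t, c: -vmax * c / (km + c), (0.0, float(t.max())),
                    [c0], t_eval=t)
    return sol.y[0]

t_obs = np.linspace(0.0, 48.0, 13)                                   # hours
noise = 1.0 + 0.05 * np.random.default_rng(1).normal(size=t_obs.size)
c_obs = mm_concentration(t_obs, 20.0, 18.0) * noise                  # synthetic observations
(vmax_hat, km_hat), _ = curve_fit(mm_concentration, t_obs, c_obs,
                                  p0=(15.0, 25.0), bounds=(0.0, np.inf))
print(f"Vmax ~ {vmax_hat:.1f} mg/L/h, Km ~ {km_hat:.1f} mg/L")
```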
Hamahashi, Shugo; Onami, Shuichi; Kitano, Hiroaki
2005-01-01
Background The ability to detect nuclei in embryos is essential for studying the development of multicellular organisms. A system of automated nuclear detection has already been tested on a set of four-dimensional (4D) Nomarski differential interference contrast (DIC) microscope images of Caenorhabditis elegans embryos. However, the system needed laborious hand-tuning of its parameters every time a new image set was used. It could not detect nuclei in the process of cell division, and could detect nuclei only from the two- to eight-cell stages. Results We developed a system that automates the detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. Local image entropy is used to produce regions of the images that have the image texture of the nucleus. From these regions, those that actually detect nuclei are manually selected at the first and last time points of the image set, and an object-tracking algorithm then selects regions that detect nuclei in between the first and last time points. The use of local image entropy makes the system applicable to multiple image sets without the need to change its parameter values. The use of an object-tracking algorithm enables the system to detect nuclei in the process of cell division. The system detected nuclei with high sensitivity and specificity from the one- to 24-cell stages. Conclusion A combination of local image entropy and an object-tracking algorithm enabled highly objective and productive detection of nuclei in a set of 4D DIC microscope images of C. elegans embryos. The system will facilitate genomic and computational analyses of C. elegans embryos. PMID:15910690
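The sketch below illustrates the core idea of a local image-entropy map; the window size and test image are placeholders, the assumption that nuclei correspond to low-entropy (smooth-texture) regions is ours, and the thresholding and object-tracking stages of the published system are omitted.

```python
# Hedged sketch: Shannon entropy of the grey-level histogram in a sliding window.
import numpy as np

def local_entropy(img, win=9, bins=16):
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = float(-(p * np.log2(p)).sum())
    return out

img = np.random.default_rng(0).random((64, 64))            # stand-in for one DIC slice in [0, 1)
entropy_map = local_entropy(img)
candidates = entropy_map < np.percentile(entropy_map, 20)  # assumed: nuclei are smooth, low-entropy
```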
Simplex GPS and InSAR Inversion Software
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Pierce, Marlon E.
2012-01-01
Changes in the shape of the Earth's surface can be routinely measured with precisions better than centimeters. Processes below the surface often drive these changes and as a result, investigators require models with inversion methods to characterize the sources. Simplex inverts any combination of GPS (global positioning system), UAVSAR (uninhabited aerial vehicle synthetic aperture radar), and InSAR (interferometric synthetic aperture radar) data simultaneously for elastic response from fault and fluid motions. It can be used to solve for multiple faults and parameters, all of which can be specified or allowed to vary. The software can be used to study long-term tectonic motions and the faults responsible for those motions, or can be used to invert for co-seismic slip from earthquakes. Solutions involving estimation of fault motion and changes in fluid reservoirs such as magma or water are possible. Any arbitrary number of faults or parameters can be considered. Simplex specifically solves for any of location, geometry, fault slip, and expansion/contraction of a single or multiple faults. It inverts GPS and InSAR data for elastic dislocations in a half-space. Slip parameters include strike slip, dip slip, and tensile dislocations. It includes a map interface for both setting up the models and viewing the results. Results, including faults, and observed, computed, and residual displacements, are output in text format, a map interface, and can be exported to KML. The software interfaces with the QuakeTables database allowing a user to select existing fault parameters or data. Simplex can be accessed through the QuakeSim portal graphical user interface or run from a UNIX command line.
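The toy sketch below illustrates only the general inversion concept (it is not the Simplex code): find slip parameters that minimize the misfit between observed and predicted surface displacements, here with a downhill-simplex (Nelder-Mead) search. A real application would build the Green's-function matrix from elastic half-space dislocation solutions, whereas here it is a random placeholder so the example is self-contained.

```python
# Hedged sketch: invert synthetic GPS displacements for two slip parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
G = rng.normal(size=(30, 2))                 # placeholder Green's functions (30 data, 2 slip params)
true_slip = np.array([1.2, -0.4])            # strike slip, dip slip (m), illustrative
d_obs = G @ true_slip + rng.normal(scale=0.01, size=30)

misfit = lambda m: float(np.sum((G @ m - d_obs) ** 2))
result = minimize(misfit, x0=np.zeros(2), method="Nelder-Mead")
print("recovered slip:", result.x)
```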
Automated parameterization of intermolecular pair potentials using global optimization techniques
NASA Astrophysics Data System (ADS)
Krämer, Andreas; Hülsmann, Marco; Köddermann, Thorsten; Reith, Dirk
2014-12-01
In this work, different global optimization techniques are assessed for the automated development of molecular force fields, as used in molecular dynamics and Monte Carlo simulations. The quest of finding suitable force field parameters is treated as a mathematical minimization problem. Intricate problem characteristics such as extremely costly and even abortive simulations, noisy simulation results, and especially multiple local minima naturally lead to the use of sophisticated global optimization algorithms. Five diverse algorithms (pure random search, recursive random search, CMA-ES, differential evolution, and taboo search) are compared to our own tailor-made solution named CoSMoS. CoSMoS is an automated workflow. It models the parameters' influence on the simulation observables to detect a globally optimal set of parameters. It is shown how and why this approach is superior to other algorithms. Applied to suitable test functions and simulations for phosgene, CoSMoS effectively reduces the number of required simulations and real time for the optimization task.
Meng, Yilin; Roux, Benoît
2015-08-11
The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
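For reference, the sketch below shows the standard iterative WHAM solution that the regression approach replaces; the histograms, bias potentials, and window parameters are synthetic placeholders for a one-dimensional umbrella-sampling setup.

```python
# Hedged sketch: self-consistent WHAM iteration for a 1-D order parameter.
import numpy as np

kT = 0.596                                        # kcal/mol near 300 K
xi = np.linspace(0.0, 10.0, 200)                  # order-parameter grid
centers = np.linspace(1.0, 9.0, 9)                # umbrella window centers
bias = 0.5 * 2.0 * (xi[None, :] - centers[:, None]) ** 2     # harmonic biases U_j(xi)

rng = np.random.default_rng(3)
counts = rng.poisson(200.0 * np.exp(-bias / kT))  # fake per-window histograms n_j(xi)
N = counts.sum(axis=1)                            # samples per window

f = np.zeros(len(centers))                        # free-energy offsets F_j
for _ in range(1000):
    denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
    p = counts.sum(axis=0) / denom                # unbiased density of states
    f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
    if np.max(np.abs(f_new - f)) < 1e-7:
        break
    f = f_new
pmf = -kT * np.log(p / p.max())                   # potential of mean force
```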
Correcting Estimates of the Occurrence Rate of Earth-like Exoplanets for Stellar Multiplicity
NASA Astrophysics Data System (ADS)
Cantor, Elliot; Dressing, Courtney D.; Ciardi, David R.; Christiansen, Jessie
2018-06-01
One of the most prominent questions in the exoplanet field has been determining the true occurrence rate of potentially habitable Earth-like planets. NASA’s Kepler mission has been instrumental in answering this question by searching for transiting exoplanets, but follow-up observations of Kepler target stars are needed to determine whether or not the surveyed Kepler targets are in multi-star systems. While many researchers have searched for companions to Kepler planet host stars, few studies have investigated the larger target sample. Regardless of physical association, the presence of nearby stellar companions biases our measurements of a system’s planetary parameters and reduces our sensitivity to small planets. Assuming that all Kepler target stars are single (as is done in many occurrence rate calculations) would overestimate our search completeness and result in an underestimate of the frequency of potentially habitable Earth-like planets. We aim to correct for this bias by characterizing the set of targets for which Kepler could have detected Earth-like planets. We are using adaptive optics (AO) imaging to reveal potential stellar companions and near-infrared spectroscopy to refine stellar parameters for a subset of the Kepler targets that are most amenable to the detection of Earth-like planets. We will then derive correction factors to correct for the biases in the larger set of target stars and determine the true frequency of systems with Earth-like planets. Due to the prevalence of stellar multiples, we expect to calculate an occurrence rate for Earth-like exoplanets that is higher than current figures.
Chaos theory for clinical manifestations in multiple sclerosis.
Akaishi, Tetsuya; Takahashi, Toshiyuki; Nakashima, Ichiro
2018-06-01
Multiple sclerosis (MS) is a demyelinating disease that characteristically shows irregularly repeated relapses and remissions in the central nervous system. At present, the pathological mechanism of MS is unknown, and we have no theories or mathematical models to explain its disseminated patterns in time and space. In this paper, we present a new theoretical model, based on a complex-systems viewpoint and chaos theory, to reproduce and explain the non-linear clinical and pathological manifestations of MS. First, we adopted a discrete logistic equation with non-linear dynamics to represent the strength of a pathogenic factor at a specific location of the central nervous system at a specific time, reflecting the negative feedback in immunity. Then, we set distinct minimum thresholds on this scalar quantity for demyelination possibly causing clinical relapses and for cerebral atrophy. With this simple model, we could theoretically reproduce all the subtypes: relapsing-remitting MS, primary progressive MS, and secondary progressive MS. Exploiting the chaotic sensitivity to initial conditions and to minute changes in parameters, we could also reproduce the spatial dissemination. Such chaotic behavior could be reproduced with other similar upward-convex functions given appropriate sets of initial conditions and parameters. In conclusion, by applying chaos theory to the three-dimensional scalar field of the central nervous system, we can reproduce the non-linear clinical course and explain the unsolved disseminations in time and space of MS. Copyright © 2018 Elsevier Ltd. All rights reserved.
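A minimal sketch of the abstract's core ingredient, a discrete logistic map with event thresholds, follows; the growth parameter, thresholds, and initial condition are illustrative choices, not those of the paper.

```python
# Hedged sketch: logistic map x_{t+1} = r*x_t*(1 - x_t) with thresholds for
# "relapse" and "atrophy" level excursions of the pathogenic-factor strength.
def simulate(r=3.9, x0=0.31, n_steps=500, relapse_thr=0.90, atrophy_thr=0.96):
    x = x0
    relapses, atrophy_events = [], []
    for t in range(n_steps):
        x = r * x * (1.0 - x)          # negative feedback: growth saturates as x approaches 1
        if x > relapse_thr:
            relapses.append(t)
        if x > atrophy_thr:
            atrophy_events.append(t)
    return relapses, atrophy_events

relapses, atrophy = simulate()
print(len(relapses), "relapse-level excursions,", len(atrophy), "atrophy-level excursions")
```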
Application of Scan Statistics to Detect Suicide Clusters in Australia
Cheung, Yee Tak Derek; Spittal, Matthew J.; Williamson, Michelle Kate; Tung, Sui Jay; Pirkis, Jane
2013-01-01
Background Suicide clustering occurs when multiple suicide incidents take place in a small area or/and within a short period of time. In spite of the multi-national research attention and particular efforts in preparing guidelines for tackling suicide clusters, the broader picture of epidemiology of suicide clustering remains unclear. This study aimed to develop techniques in using scan statistics to detect clusters, with the detection of suicide clusters in Australia as example. Methods and Findings Scan statistics was applied to detect clusters among suicides occurring between 2004 and 2008. Manipulation of parameter settings and change of area for scan statistics were performed to remedy shortcomings in existing methods. In total, 243 suicides out of 10,176 (2.4%) were identified as belonging to 15 suicide clusters. These clusters were mainly located in the Northern Territory, the northern part of Western Australia, and the northern part of Queensland. Among the 15 clusters, 4 (26.7%) were detected by both national and state cluster detections, 8 (53.3%) were only detected by the state cluster detection, and 3 (20%) were only detected by the national cluster detection. Conclusions These findings illustrate that the majority of spatial-temporal clusters of suicide were located in the inland northern areas, with socio-economic deprivation and higher proportions of indigenous people. Discrepancies between national and state/territory cluster detection by scan statistics were due to the contrast of the underlying suicide rates across states/territories. Performing both small-area and large-area analyses, and applying multiple parameter settings may yield the maximum benefits for exploring clusters. PMID:23342098
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with number of concepts learned, and latent set-shifting ability was negatively correlated with number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
Performance of stochastic approaches for forecasting river water quality.
Ahmad, S; Khan, I H; Parida, B P
2001-12-01
This study analysed water quality data collected from the river Ganges in India from 1981 to 1990 for forecasting using stochastic models. Initially, box and whisker plots and Kendall's tau test were used to identify trends during the study period. Time series plots and cusum charts were used to detect possible interventions in the data. Three stochastic modelling approaches that account for the effect of seasonality in different ways, i.e. the multiplicative autoregressive integrated moving average (ARIMA) model, the deseasonalised model, and the Thomas-Fiering model, were used to model the observed pattern in water quality. Multiplicative ARIMA models having both nonseasonal and seasonal components were, in general, identified as appropriate. In the deseasonalised modelling approach, lower-order ARIMA models were found appropriate for the stochastic component. A set of Thomas-Fiering models was formed for each month for all water quality parameters. These models were then used to forecast future values. The error estimates of the forecasts from the three approaches were compared to identify the most suitable approach for reliable forecasting. The deseasonalised modelling approach was recommended for forecasting water quality parameters of a river.
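As an illustration of the third approach, the sketch below generates a monthly series with a Thomas-Fiering lag-one model; the monthly means, standard deviations, and correlations are placeholders, not statistics estimated from the Ganges record.

```python
# Hedged sketch: Thomas-Fiering monthly model
# x[m+1] = mean[m+1] + b[m]*(x[m] - mean[m]) + eps*std[m+1]*sqrt(1 - r[m]^2),  with b[m] = r[m]*std[m+1]/std[m]
import numpy as np

mean = np.array([7.2, 7.4, 7.6, 7.9, 8.1, 7.8, 7.3, 7.1, 7.0, 7.1, 7.2, 7.3])  # e.g. DO in mg/L (illustrative)
std = np.full(12, 0.4)
r = np.full(12, 0.6)                       # lag-one correlation from month m to m+1

def thomas_fiering(n_years=5, seed=0):
    rng = np.random.default_rng(seed)
    x, series = mean[0], []
    for _ in range(n_years):
        for m in range(12):
            nxt = (m + 1) % 12
            b = r[m] * std[nxt] / std[m]
            x = mean[nxt] + b * (x - mean[m]) + rng.normal() * std[nxt] * np.sqrt(1.0 - r[m] ** 2)
            series.append(x)
    return np.array(series)

forecast = thomas_fiering()
```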
Phylogenetic study of Class Armophorea (Alveolata, Ciliophora) based on 18S-rDNA data.
da Silva Paiva, Thiago; do Nascimento Borges, Bárbara; da Silva-Neto, Inácio Domingos
2013-12-01
The 18S rDNA phylogeny of Class Armophorea, a group of anaerobic ciliates, is proposed based on an analysis of 44 sequences (out of 195) retrieved from the NCBI/GenBank database. Emphasis was placed on the use of two nucleotide alignment criteria that involved variation in the gap-opening and gap-extension parameters and the use of rRNA secondary structure to orientate multiple-alignment. A sensitivity analysis of 76 data sets was run to assess the effect of variations in indel parameters on tree topologies. Bayesian inference, maximum likelihood and maximum parsimony phylogenetic analyses were used to explore how different analytic frameworks influenced the resulting hypotheses. A sensitivity analysis revealed that the relationships among higher taxa of the Intramacronucleata were dependent upon how indels were determined during multiple-alignment of nucleotides. The phylogenetic analyses rejected the monophyly of the Armophorea most of the time and consistently indicated that the Metopidae and Nyctotheridae were related to the Litostomatea. There was no consensus on the placement of the Caenomorphidae, which could be a sister group of the Metopidae + Nyctorheridae, or could have diverged at the base of the Spirotrichea branch or the Intramacronucleata tree.
Using multiple data sets to populate probabilistic volcanic event trees
Newhall, C.G.; Pallister, John S.
2014-01-01
The key parameters one needs to forecast outcomes of volcanic unrest are hidden kilometers beneath the Earth’s surface, and volcanic systems are so complex that there will invariably be stochastic elements in the evolution of any unrest. Fortunately, there is sufficient regularity in behaviour that some, perhaps many, eruptions can be forecast with enough certainty for populations to be evacuated and kept safe. Volcanologists charged with forecasting eruptions must try to understand each volcanic system well enough that unrest can be interpreted in terms of pre-eruptive process, but must simultaneously recognize and convey uncertainties in their assessment. We have found that use of event trees helps to focus discussion, integrate data from multiple sources, reach consensus among scientists about both pre-eruptive process and uncertainties and, in some cases, to explain all of this to officials. Figure 1 shows a generic volcanic event tree from Newhall and Hoblitt (2002) that can be modified as needed for each specific volcano. This paper reviews how we and our colleagues have used such trees during a number of volcanic crises worldwide, for rapid hazard assessments in situations in which more formal expert elicitations could not be conducted. We describe how Multiple Data Sets can be used to estimate probabilities at each node and branch. We also present case histories of probability estimation during crises, how the estimates were used by public officials, and some suggestions for future improvements.
Competitive STDP Learning of Overlapping Spatial Patterns.
Krunglevicius, Dalius
2015-08-01
Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude a trained neuron from responding to a new pattern and adjusting synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition with a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern-size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
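For context, the sketch below shows the generic pair-based STDP weight update on which such rules build; the amplitudes and time constants are illustrative, and the letter's competitive network, vertical inhibition, and distance-dependent factor are not reproduced.

```python
# Hedged sketch: pair-based STDP update; potentiate causal pre->post pairings,
# depress anti-causal ones, and keep the weight bounded in [0, 1].
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    dt = t_post - t_pre                   # ms; positive means the pre spike preceded the post spike
    if dt >= 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (70.0, 72.0)]:
    w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))
print(f"final weight: {w:.3f}")
```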
Emitter and absorber assembly for multiple self-dual operation and directional transparency
NASA Astrophysics Data System (ADS)
Kalozoumis, P. A.; Morfonios, C. V.; Kodaxis, G.; Diakonos, F. K.; Schmelcher, P.
2017-03-01
We demonstrate how to systematically design wave scattering systems with simultaneous coherent perfect absorbing and lasing operation at multiple and prescribed frequencies. The approach is based on the recursive assembly of non-Hermitian emitter and absorber units into self-dual emitter-absorber trimers at different composition levels, exploiting the simple structure of the corresponding transfer matrices. In particular, lifting the restriction to parity-time-symmetric setups enables the realization of emitter and absorber action at distinct frequencies and provides flexibility with respect to the choice of realistic parameters. We further show how the same assembled scatterers can be rearranged to produce unidirectional and bidirectional transparency at the selected frequencies. With the design procedure being generically applicable to wave scattering in single-channel settings, we demonstrate it with concrete examples of photonic multilayer setups.
Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.
Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta
2010-01-01
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.
A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.
Leung, Chi-Sing; Wan, Wai Yan; Feng, Ruibin
2017-06-01
Many existing results on fault-tolerant algorithms focus on the single fault source situation, where a trained network is affected by one kind of weight failure. In fact, a trained network may be affected by multiple kinds of weight failure. This paper first studies how the open weight fault and the multiplicative weight noise degrade the performance of radial basis function (RBF) networks. Afterward, we define the objective function for training fault-tolerant RBF networks. Based on the objective function, we then develop two learning algorithms, one batch mode and one online mode. In addition, the convergence conditions of our online algorithm are investigated. Finally, we develop a formula to estimate the test set error of faulty networks trained with our approach. This formula helps us to optimize some tuning parameters, such as RBF width.
Off-line tracking of series parameters in distribution systems using AMI data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Tess L.; Sun, Yannan; Schneider, Kevin
2016-05-01
Electric distribution systems have historically lacked measurement points, and equipment is often operated to its failure point, resulting in customer outages. The widespread deployment of sensors at the distribution level is enabling observability. This paper presents an off-line parameter value tracking procedure that takes advantage of the increasing number of measurement devices being deployed at the distribution level to estimate changes in series impedance parameter values over time. The tracking of parameter values enables non-diurnal and non-seasonal change to be flagged for investigation. The presented method uses an unbalanced Distribution System State Estimation (DSSE) and a measurement residual-based parameter estimation procedure. Measurement residuals from multiple measurement snapshots are combined in order to increase the effective local redundancy and improve the robustness of the calculations in the presence of measurement noise. Data from devices on the primary distribution system and from customer meters, via an AMI system, form the input data set. Results of simulations on the IEEE 13-Node Test Feeder are presented to illustrate the proposed approach applied to changes in series impedance parameters. A 5% change in series resistance elements can be detected in the presence of 2% measurement error when combining less than 1 day of measurement snapshots into a single estimate.
Scheiderer, Rachel; Belden, Courtney; Schwab, Darla; Haney, Casey; Paz, Jaime
2013-06-01
For patients with end-stage heart failure awaiting transplantation, lack of donor organs has created an increased need for alternatives such as left ventricular assist device (LVAD) implantation. The purpose of this study is to determine safe and effective exercise parameters for physical therapy in the acute care setting. A systematic literature review was conducted according to PRISMA guidelines using Sackett's Levels of Evidence to rate the evidence. Multiple databases were searched with inclusion criteria of: available in English, inpatient care up to 6 months postoperatively, and description of intervention type and exercise parameters. Exclusion criteria were: no defined exercise parameters, outpatient treatment, infection post VAD, or palliative or hospice care post VAD. Six studies out of 1,291 articles met inclusion criteria. Common exercise parameters used were the Borg Rating of Perceived Exertion scale 11-13 (6-20 scale) or > 4 (0-10 scale), Dyspnea scale > 2 (0-4 scale) and > 5 (0-10 scale), mean arterial pressure (MAP) 70-95 mmHg, and LVAD flow > 3 L/min. Levels of evidence ranged from case controlled to expert opinion. Current evidence on inpatient exercise parameters for patients status post LVAD implantation is not sufficient to suggest definitive guidelines; however, these exercise parameters provide a reference for patient care.
Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul
2016-01-01
Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “Cross-validation and cross-testing” improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
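For orientation, the sketch below shows the standard workflow that the proposed method seeks to improve: parameter selection by cross-validation on a training split, followed by evaluation on a held-out test split. The re-use of test data in "cross-validation and cross-testing" itself is not reproduced here, and the data set and classifier are placeholders.

```python
# Hedged sketch of the baseline: cross-validated parameter selection plus a
# separate held-out test set for estimating generalization performance.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=5)   # parameter selection by cross-validation
search.fit(X_tr, y_tr)
print("selected C:", search.best_params_["C"])
print("held-out test accuracy:", search.score(X_te, y_te))
```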
Goldman, Johnathan M; More, Haresh T; Yee, Olga; Borgeson, Elizabeth; Remy, Brenda; Rowe, Jasmine; Sadineni, Vikram
2018-06-08
Development of optimal drug product lyophilization cycles is typically accomplished via multiple engineering runs to determine appropriate process parameters. These runs require significant time and product investments, which are especially costly during early phase development when the drug product formulation and lyophilization process are often defined simultaneously. Even small changes in the formulation may require a new set of engineering runs to define lyophilization process parameters. In order to overcome these development difficulties, an eight-factor definitive screening design (DSD), including both formulation and process parameters, was executed on a fully human monoclonal antibody (mAb) drug product. The DSD enables evaluation of several interdependent factors to define critical parameters that affect primary drying time and product temperature. From these parameters, a lyophilization development model is defined where near optimal process parameters can be derived for many different drug product formulations. This concept is demonstrated on a mAb drug product where statistically predicted cycle responses agree well with those measured experimentally. This design of experiments (DoE) approach for early phase lyophilization cycle development offers a workflow that significantly decreases the development time of clinically and potentially commercially viable lyophilization cycles for a platform formulation that still has a variable range of compositions. Copyright © 2018. Published by Elsevier Inc.
Comprehensive derivation of bond-valence parameters for ion pairs involving oxygen
Gagné, Olivier Charles; Hawthorne, Frank Christopher
2015-01-01
Published two-body bond-valence parameters for cation–oxygen bonds have been evaluated via the root mean-square deviation (RMSD) from the valence-sum rule for 128 cations, using 180 194 filtered bond lengths from 31 489 coordination polyhedra. Values of the RMSD range from 0.033 to 2.451 v.u. (1.1–40.9% per unit of charge) with a weighted mean of 0.174 v.u. (7.34% per unit of charge). The set of best published parameters has been determined for 128 ions and used as a benchmark for the determination of new bond-valence parameters in this paper. Two common methods for the derivation of bond-valence parameters have been evaluated: (1) fixing B and solving for Ro; (2) the graphical method. On a subset of 90 ions observed in more than one coordination, fixing B at 0.37 Å leads to a mean weighted-RMSD of 0.139 v.u. (6.7% per unit of charge), while graphical derivation gives 0.161 v.u. (8.0% per unit of charge). The advantages and disadvantages of these (and other) methods of derivation have been considered, leading to the conclusion that current methods of derivation of bond-valence parameters are not satisfactory. A new method of derivation is introduced, the GRG (generalized reduced gradient) method, which leads to a mean weighted-RMSD of 0.128 v.u. (6.1% per unit of charge) over the same sample of 90 multiple-coordination ions. The evaluation of 19 two-parameter equations and 7 three-parameter equations to model the bond-valence–bond-length relation indicates that: (1) many equations can adequately describe the relation; (2) a plateau has been reached in the fit for two-parameter equations; (3) the equation of Brown & Altermatt (1985) is sufficiently good that use of any of the other equations tested is not warranted. Improved bond-valence parameters have been derived for 135 ions for the equation of Brown & Altermatt (1985) in terms of both the cation and anion bond-valence sums using the GRG method and our complete data set. PMID:26428406
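The bookkeeping being evaluated can be sketched as follows: bond valences from the Brown & Altermatt (1985) expression s = exp((Ro - R)/B) are summed over each coordination polyhedron and compared with the cation's formal valence. The Ro, B, and bond lengths below are generic illustrative values, not the new GRG-derived parameters.

```python
# Hedged sketch: bond-valence sums and their RMSD from the valence-sum rule.
import numpy as np

def bond_valence_sum(bond_lengths, ro, b=0.37):
    """Sum of s = exp((Ro - R)/B) over one cation-O coordination polyhedron (R in angstroms)."""
    return float(np.exp((ro - np.asarray(bond_lengths)) / b).sum())

polyhedra = [[2.05, 2.07, 2.10, 2.11, 2.13, 2.16],      # illustrative octahedral M(2+)-O bond lengths
             [2.02, 2.04, 2.08, 2.12, 2.15, 2.20]]
formal_valence = 2.0
sums = np.array([bond_valence_sum(p, ro=1.70) for p in polyhedra])
rmsd = float(np.sqrt(np.mean((sums - formal_valence) ** 2)))   # deviation from the valence-sum rule
print(sums, rmsd)
```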
NASA Astrophysics Data System (ADS)
Caminha, G. B.; Grillo, C.; Rosati, P.; Balestra, I.; Karman, W.; Lombardi, M.; Mercurio, A.; Nonino, M.; Tozzi, P.; Zitrin, A.; Biviano, A.; Girardi, M.; Koekemoer, A. M.; Melchior, P.; Meneghetti, M.; Munari, E.; Suyu, S. H.; Umetsu, K.; Annunziatella, M.; Borgani, S.; Broadhurst, T.; Caputi, K. I.; Coe, D.; Delgado-Correal, C.; Ettori, S.; Fritz, A.; Frye, B.; Gobat, R.; Maier, C.; Monna, A.; Postman, M.; Sartoris, B.; Seitz, S.; Vanzella, E.; Ziegler, B.
2016-03-01
Aims: We perform a comprehensive study of the total mass distribution of the galaxy cluster RXC J2248.7-4431 (z = 0.348) with a set of high-precision strong lensing models, which take advantage of extensive spectroscopic information on many multiply lensed systems. In the effort to understand and quantify inherent systematics in parametric strong lensing modelling, we explore a collection of 22 models in which we use different samples of multiple image families, different parametrizations of the mass distribution and cosmological parameters. Methods: As input information for the strong lensing models, we use the Cluster Lensing And Supernova survey with Hubble (CLASH) imaging data and spectroscopic follow-up observations, with the VIsible Multi-Object Spectrograph (VIMOS) and Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT), to identify and characterize bona fide multiple image families and measure their redshifts down to mF814W ≃ 26. A total of 16 background sources, over the redshift range 1.0-6.1, are multiply lensed into 47 images, 24 of which are spectroscopically confirmed and belong to ten individual sources. These also include a multiply lensed Lyman-α blob at z = 3.118. The cluster total mass distribution and underlying cosmology in the models are optimized by matching the observed positions of the multiple images on the lens plane. Bayesian Markov chain Monte Carlo techniques are used to quantify errors and covariances of the best-fit parameters. Results: We show that with a careful selection of a large sample of spectroscopically confirmed multiple images, the best-fit model can reproduce their observed positions with an rms scatter of 0.3″ in a fixed flat ΛCDM cosmology, whereas the lack of spectroscopic information or the use of inaccurate photometric redshifts can lead to biases in the values of the model parameters. We find that the best-fit parametrization for the cluster total mass distribution is composed of an elliptical pseudo-isothermal mass distribution with a significant core for the overall cluster halo and truncated pseudo-isothermal mass profiles for the cluster galaxies. We show that by adding bona fide photometric-selected multiple images to the sample of spectroscopic families, one can slightly improve constraints on the model parameters. In particular, we find that the degeneracy between the lens total mass distribution and the underlying geometry of the Universe, which is probed via angular diameter distance ratios between the lens and sources and the observer and sources, can be partially removed. Allowing cosmological parameters to vary together with the cluster parameters, we find (at 68% confidence level) Ωm = 0.25 (+0.13, -0.16) and w = -1.07 (+0.16, -0.42) for a flat ΛCDM model, and Ωm = 0.31 (+0.12, -0.13) and ΩΛ = 0.38 (+0.38, -0.27) for a Universe with w = -1 and free curvature. Finally, using toy models mimicking the overall configuration of multiple images and cluster total mass distribution, we estimate the impact of the line-of-sight mass structure on the positional rms to be 0.3″ ± 0. We argue that the apparent sensitivity of our lensing model to cosmography is due to the combination of the regular potential shape of RXC J2248, a large number of bona fide multiple images out to z = 6.1, and a relatively modest presence of intervening large-scale structure, as revealed by our spectroscopic survey.
He, Fuyuan; Deng, Kaiwen; Zou, Huan; Qiu, Yun; Chen, Feng; Zhou, Honghao
2011-01-01
The aim was to study the differences between chromatopharmacokinetics (pharmacokinetics with fingerprint chromatography) and chromatopharmacodynamics (pharmacodynamics with fingerprint chromatography) of Chinese materia medica formulae, and to answer the question of whether the pharmacokinetic parameters of multiple composites can be used to guide medication with multiple composites. On the basis of the four established chromatopharmacology approaches (pharmacology with chromatographic fingerprints), the pharmacokinetics and pharmacodynamics were compared with respect to their mathematical models and parameter definitions. On the basis of quantitative pharmacology, the function expressions and total statistical parameters, such as the total zero moment, total first moment, and total second moment of the pharmacokinetics and pharmacodynamics, were reduced to common expressions, and the results were elucidated for single and multiple components in Chinese materia medica formulae. The total quantitative pharmacokinetic, i.e., chromatopharmacokinetic, parameters were determined by the pharmacokinetic parameters of each component, whereas the total quantitative pharmacodynamic, i.e., chromatopharmacodynamic, parameters were determined by both the pharmacokinetic and the pharmacodynamic parameters of each component. The pharmacokinetic parameters corresponded to the pharmacodynamic parameters with a stable effective coefficient when the constitutive ratio of each composite was constant. The effects of Chinese materia medica were all controlled by the pharmacokinetic and pharmacodynamic coefficients. It is only in the special case of a single component that the pharmacokinetic parameters can independently guide clinical medication; the chromatopharmacokinetic parameters do not apply to multiple-drug combination systems and cannot be used to solve the chromatopharmacokinetic problems of Chinese materia medica formulae.
An Integrated Approach to Parameter Learning in Infinite-Dimensional Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyd, Zachary M.; Wendelberger, Joanne Roth
The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated phenomena, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, being able to navigate parameter-space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations; high-dimensional, derivative-free optimization; and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as view the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the desired parameter set.
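A minimal sketch of a batch-parallelizable random-walk search of the kind described is given below; the quadratic objective stands in for the expensive physics simulation, and the batch size, step size, and parameter dimension are invented.

```python
# Hedged sketch: propose a batch of perturbed parameter vectors (evaluable in
# parallel), keep the best one only if it improves the misfit.
import numpy as np

def objective(theta):
    return float(np.sum((theta - 0.3) ** 2))       # placeholder misfit to observed data

rng = np.random.default_rng(0)
theta = np.zeros(8)                                # e.g. coefficients of a functional control parameter
for step in range(200):
    batch = theta + rng.normal(scale=0.05, size=(16, theta.size))   # 16 parallel proposals
    scores = np.array([objective(t) for t in batch])
    if scores.min() < objective(theta):
        theta = batch[scores.argmin()]             # accept only improving moves
print("best misfit:", objective(theta))
```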
NASA Technical Reports Server (NTRS)
Macdonald, H.; Waite, W. P.; Kaupp, V. H.; Bridges, L. C.; Storm, M.
1983-01-01
Comparisons between LANDSAT MSS imagery, and aircraft and space radar imagery from different geologic environments in the United States, Panama, Colombia, and New Guinea demonstrate the interdependence of radar system geometry and terrain configuration for optimum retrieval of geologic information. Illustrations suggest that in the case of space radars (SIR-A in particular), the ability to acquire multiple look-angle/look-direction radar images of a given area is more valuable for landform mapping than further improvements in spatial resolution. Radar look-angle is concluded to be one of the most important system parameters of a space radar designed to be used for geologic reconnaissance mapping. The optimum set of system parameters must be determined for imaging different classes of landform features and tailoring the look-angle to local topography.
An Optimization-based Framework to Learn Conditional Random Fields for Multi-label Classification
Naeini, Mahdi Pakdaman; Batal, Iyad; Liu, Zitao; Hong, CharmGil; Hauskrecht, Milos
2015-01-01
This paper studies the multi-label classification problem in which data instances are associated with multiple, possibly high-dimensional, label vectors. This problem is especially challenging when labels are dependent and one cannot decompose the problem into a set of independent classification problems. To address the problem and properly represent label dependencies, we propose and study a pairwise conditional random field (CRF) model. We develop a new approach for learning the structure and parameters of the CRF from data. The approach maximizes the pseudo likelihood of observed labels and relies on fast proximal gradient descent for learning the structure and limited-memory BFGS for learning the parameters of the model. Empirical results on several datasets show that our approach outperforms several multi-label classification baselines, including recently published state-of-the-art methods. PMID:25927015
Mapping of chlorophyll a distributions in coastal zones
NASA Technical Reports Server (NTRS)
Johnson, R. W.
1978-01-01
It is pointed out that chlorophyll a is an important environmental parameter for monitoring water quality, nutrient loads, and pollution effects in coastal zones. High chlorophyll a concentrations occur in areas which have high nutrient inflows from sources such as sewage treatment plants and industrial wastes. Low chlorophyll a concentrations may be due to the addition of toxic substances from industrial wastes or other sources. Remote sensing provides an opportunity to assess distributions of water quality parameters, such as chlorophyll a. A description is presented of the chlorophyll a analysis and a quantitative mapping of the James River, Virginia. An approach considered by Johnson (1977) was used in the analysis. An application of the multiple regression analysis technique to a data set collected over the New York Bight, an environmentally different area of the coastal zone, is also discussed.
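The mapping idea can be sketched as a multiple regression of station-measured chlorophyll a on remotely sensed radiances in several bands, with the fitted coefficients then applied pixel by pixel; all values below are synthetic placeholders rather than the James River or New York Bight data.

```python
# Hedged sketch: least-squares regression of chlorophyll a on band radiances,
# then application of the coefficients to a whole scene.
import numpy as np

rng = np.random.default_rng(0)
radiance = rng.random((40, 3))                        # 40 stations x 3 spectral bands (synthetic)
chl = 5.0 + 12.0 * radiance[:, 1] - 6.0 * radiance[:, 2] + rng.normal(0.0, 0.3, 40)

X = np.column_stack([np.ones(len(chl)), radiance])    # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, chl, rcond=None)        # regression coefficients

scene = rng.random((100, 100, 3))                     # stand-in for a full image
chl_map = coef[0] + scene @ coef[1:]                  # predicted chlorophyll a per pixel
```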
Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection
Wei, Pan; Anderson, Derek T.
2018-01-01
A significant challenge in object detection is accurate identification of an object’s position in image space; one algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameters can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online, rather than training-only, image augmentation strategy. Experiments comparing the results both with and without fusion are presented. We demonstrate that the augmented and fused combination results are the best, with respect to higher accuracy rates and reduction of outlier influences. The approach is demonstrated in the context of cone, pedestrian and box detection for Advanced Driver Assistance Systems (ADAS) applications. PMID:29562609
iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems
NASA Astrophysics Data System (ADS)
Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.
2017-11-01
iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred
Enhanced Geothermal Systems (EGS) could potentially use technological advancements in coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques in tight oil and shale gas reservoirs along with improvements in reservoir simulation techniques to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with largely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters in creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart from each other. A matrix of simulations for this study was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated during the matrix simulations include the in-situ stress state and properties of the natural fracture set such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated during the simulations were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, large pump volume and pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics such as large stress anisotropy, natural fractures set parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing also promote the creation of large planar fractures while minimizing fracture complexity.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information together to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search for this hyper-volume is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
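A minimal sketch of recasting interval ("limits of acceptability") constraints as objectives, as described above: each metric contributes the distance by which the simulation falls outside its acceptable interval, and a parameter set is behavioural when every exceedance is zero. The metric names and intervals are illustrative assumptions standing in for the PIHM metrics; the Borg MOEA itself is not reproduced here.

```python
import numpy as np

LIMITS = {                      # illustrative metric intervals (lower, upper)
    "outlet_nse":     (0.6, 1.0),
    "gw_level_rank":  (-0.5, 0.5),
    "baseflow_index": (0.3, 0.6),
}

def exceedances(metrics):
    """Per-metric distance outside the acceptable interval (0 if inside)."""
    out = []
    for name, (lo, hi) in LIMITS.items():
        v = metrics[name]
        out.append(max(lo - v, 0.0, v - hi))
    return np.array(out)

def is_behavioural(metrics):
    """A parameter set is behavioural when all exceedance objectives are zero."""
    return np.all(exceedances(metrics) == 0.0)

# toy usage: a simulation that misses the baseflow target
print(exceedances({"outlet_nse": 0.72, "gw_level_rank": 0.1, "baseflow_index": 0.75}))
```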
Finite frequency shear wave splitting tomography: a model space search approach
NASA Astrophysics Data System (ADS)
Mondal, P.; Long, M. D.
2017-12-01
Observations of seismic anisotropy provide key constraints on past and present mantle deformation. A common method for characterizing upper mantle anisotropy is to measure shear wave splitting parameters (delay time and fast direction). However, the interpretation is not straightforward, because splitting measurements represent an integration of structure along the ray path. A tomographic approach that allows for localization of anisotropy is desirable; however, tomographic inversion for anisotropic structure is a daunting task, since 21 parameters are needed to describe general anisotropy. Such a large parameter space does not allow a straightforward application of tomographic inversion. Building on previous work on finite frequency shear wave splitting tomography, this study aims to develop a framework for SKS splitting tomography with a new parameterization of anisotropy and a model space search approach. We reparameterize the full elastic tensor, reducing the number of parameters to three (a measure of strength based on symmetry considerations for olivine, plus the dip and azimuth of the fast symmetry axis). We compute Born-approximation finite frequency sensitivity kernels relating model perturbations to splitting intensity observations. The strong dependence of the sensitivity kernels on the starting anisotropic model, and thus the strong non-linearity of the inverse problem, makes a linearized inversion infeasible. Therefore, we implement a Markov Chain Monte Carlo technique in the inversion procedure. We have performed tests with synthetic data sets to evaluate computational costs and infer the resolving power of our algorithm for synthetic models with multiple anisotropic layers. Our technique can resolve anisotropic parameters on length scales of ~50 km for realistic station and event configurations for dense broadband experiments. We are proceeding towards applications to real data sets, with an initial focus on the High Lava Plains of Oregon.
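A minimal sketch of the model-space-search step described above: a Metropolis-Hastings sampler over the three anisotropy parameters (strength, dip, and azimuth of the fast axis) given splitting-intensity data. The forward operator is a placeholder for the finite-frequency kernel prediction; priors, step sizes, and starting values are illustrative assumptions.

```python
import numpy as np

def log_likelihood(params, forward, data, sigma):
    pred = forward(params)                       # predicted splitting intensities
    return -0.5 * np.sum(((data - pred) / sigma) ** 2)

def metropolis(forward, data, sigma, n_iter=10000, step=(0.02, 5.0, 5.0)):
    rng = np.random.default_rng(0)
    x = np.array([0.05, 45.0, 0.0])              # strength, dip (deg), azimuth (deg)
    ll = log_likelihood(x, forward, data, sigma)
    chain = []
    for _ in range(n_iter):
        prop = x + rng.normal(0, step)
        prop[0] = abs(prop[0])                   # keep the strength parameter non-negative
        ll_prop = log_likelihood(prop, forward, data, sigma)
        if np.log(rng.uniform()) < ll_prop - ll:
            x, ll = prop, ll_prop
        chain.append(x.copy())
    return np.array(chain)                       # posterior samples of the three parameters
```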
NASA Astrophysics Data System (ADS)
Shibata, Kenichiro; Adhiperdana, Billy G.; Ito, Makoto
2018-01-01
Reconstructions of the dimensions and hydrological features of ancient fluvial channels, such as bankfull depth, bankfull width, and water discharges, have used empirical equations developed from compiled data-sets, mainly from modern meandering rivers, in various tectonic and climatic settings. However, the application of the proposed empirical equations to an ancient fluvial succession should be carefully examined with respect to the tectonic and climatic settings of the objective deposits. In this study, we developed empirical relationships among the mean bankfull channel depth, bankfull channel depth, drainage area, bankfull channel width, mean discharge, and bankfull discharge using data from 24 observation sites of modern gravelly rivers in the Kanto region, central Japan. Some of the equations among these parameters are different from those proposed by previous studies. The discrepancies are considered to reflect tectonic and climatic settings of the present river systems, which are characterized by relatively steeper valley slope, active supply of volcaniclastic sediments, and seasonal precipitation in the Kanto region. The empirical relationships derived from the present study can be applied to modern and ancient gravelly fluvial channels with multiple and alternate bars, developed in convergent margin settings under a temperate climatic condition. The developed empirical equations were applied to a transgressive gravelly fluvial succession of the Paleogene Iwaki Formation, Northeast Japan as a case study. Stratigraphic thicknesses of bar deposits were used for estimation of the bankfull channel depth. In addition, some other geomorphological and hydrological parameters were calculated using the empirical equations developed by the present study. The results indicate that the Iwaki Formation fluvial deposits were formed by a fluvial system that was represented by the dimensions and discharges of channels similar to those of the middle to lower reaches of the modern Kuji River, northern Kanto region. In addition, no distinct temporal changes in paleochannel dimensions and discharges were observed in an overall transgressive Iwaki Formation fluvial system. This implies that a rise in relative sea level did not affect the paleochannel dimensions within a sequence stratigraphic framework.
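A minimal sketch of deriving an empirical relationship of the kind described above (for example, bankfull width as a power-law function of bankfull depth) by linear regression in log-log space. The coefficients and sample values are purely illustrative, not the relationships obtained for the Kanto rivers.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = a * x**b by least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

# toy usage: hypothetical bankfull depths (m) and widths (m)
depth = np.array([0.8, 1.2, 1.5, 2.1, 2.8, 3.4])
width = np.array([12., 21., 28., 44., 65., 80.])
a, b = fit_power_law(depth, width)
estimated_width = a * 1.8 ** b   # applied to a channel depth estimated from bar thickness
```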
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating a sub-set of, for example, only soil parameters thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
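A minimal sketch of a Sobol' global sensitivity analysis over a mix of standard and hard-coded parameters, assuming the SALib package is available; the parameter names, bounds, and the toy flux model are illustrative stand-ins for Noah-MP and its actual parameters.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["soil_surface_resistance", "snow_albedo_decay", "leaf_area_scale"],
    "bounds": [[10.0, 500.0], [0.1, 1.0], [0.5, 2.0]],
}

def toy_flux_model(x):
    r, a, lai = x
    return lai / (1.0 + r / 100.0) + 0.3 * a    # placeholder for a latent heat flux

X = saltelli.sample(problem, 1024)              # Saltelli sampling of the parameter space
Y = np.apply_along_axis(toy_flux_model, 1, X)   # one model evaluation per parameter set
Si = sobol.analyze(problem, Y)                  # Si["S1"] first-order, Si["ST"] total indices
```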
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of the parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude some of the parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity for a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are useful to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
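A minimal sketch of the regionalization selection step described above: each parameter carries a weight reflecting its expected similarity across basins (informed by the a priori, physically derived set), and the Pareto member chosen is the one minimizing a weighted distance to a neighboring basin's parameters. Weights, scaling, and names are illustrative assumptions.

```python
import numpy as np

def closeness(candidate, neighbor, weights, scale):
    """Weighted, scaled distance between two parameter vectors."""
    return np.sum(weights * np.abs(candidate - neighbor) / scale)

def select_from_pareto(pareto_sets, neighbor_params, weights, scale):
    """Pick the Pareto member closest to the neighboring basin's parameters."""
    scores = [closeness(p, neighbor_params, weights, scale) for p in pareto_sets]
    return pareto_sets[int(np.argmin(scores))]

# toy usage with three Pareto members and three parameters
pareto = [np.array([0.2, 1.5, 40.0]), np.array([0.4, 1.1, 55.0]), np.array([0.3, 1.3, 45.0])]
chosen = select_from_pareto(pareto, np.array([0.35, 1.2, 50.0]),
                            weights=np.array([1.0, 0.5, 0.8]),
                            scale=np.array([0.5, 1.0, 30.0]))
```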
NASA Astrophysics Data System (ADS)
Mirfenderesgi, G.; Bohrer, G.; Matheny, A. M.; Fatichi, S.; Frasson, R. P. M.; Schafer, K. V.
2015-12-01
The Finite-Elements Tree-Crown Hydrodynamics model version 2 (FETCH2) simulates water flow through the tree using the porous media analogy. Empirical equations relate water potential within the stem to stomatal conductance at the leaf level. Leaves are connected to the stem at each height. While still simplified, this approach brings realism to the simulation of transpiration compared with models where stomatal conductance is directly linked to soil moisture. The FETCH2 model accounts for plant hydraulic traits such as xylem conductivity, area of hydro-active xylem, vertical distribution of leaf area, and maximal and minimal xylem water content, and their effect on the dynamics of water flow in the tree system. Such a modeling tool enhances our understanding of the role of hydraulic limitations and allows us to incorporate the effects of short-term water stresses on transpiration. Here, we use FETCH2 parameterized and evaluated with a large sap-flow observation data set, collected from 21 trees of two genera (oak/pine) at Silas Little Experimental Forest, NJ. The well-drained deep sandy soil leads to water stress during many days throughout the growing season. We conduct a set of tree-level transpiration simulations, and use the results to evaluate the effects of different hydraulic strategies on daily transpiration and water use efficiency. We define these "hydraulic strategies" through combinations of multiple sets of parameters in the model that describe the root, stem and leaf hydraulics. After evaluating the performance of the model, we use the results to shed light on the future trajectory of the forest in terms of species-specific transpiration responses. Application of the model on the two co-occurring oak species (Quercus prinus L. and Quercus velutina Lam.) shows that the applied modeling approach successfully captures the differences in water-use strategy through optimizing multiple physiological and hydraulic parameters.
Probing Quark-Gluon-Plasma properties with a Bayesian model-to-data comparison
NASA Astrophysics Data System (ADS)
Cai, Tianji; Bernhard, Jonah; Ke, Weiyao; Bass, Steffen; Duke QCD Group Team
2016-09-01
Experiments at RHIC and LHC study a special state of matter called the Quark Gluon Plasma (QGP), where quarks and gluons roam freely, by colliding relativistic heavy ions. Given the transitory nature of the QGP, its properties can only be explored by comparing computational models of its formation and evolution to experimental data. The models fall, roughly speaking, into two categories: those solely using relativistic viscous hydrodynamics (pure hydro model) and those that additionally couple to a microscopic Boltzmann transport for the later evolution of the hadronic decay products (hybrid model). Each of these models has multiple parameters that encode the physical properties we want to probe and that need to be calibrated to experimental data, a task which is computationally expensive, but necessary for the knowledge extraction and determination of the models' quality. Our group has developed an analysis technique based on Bayesian Statistics to perform the model calibration and to extract probability distributions for each model parameter. Following the previous work that applies the technique to the hybrid model, we now perform a similar analysis on a pure-hydro model and display the posterior distributions for the same set of model parameters. We also develop a set of criteria to assess the quality of the two models with respect to their ability to describe current experimental data. Funded by Duke University Goldman Sachs Research Fellowship.
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.
Climbing fibers predict movement kinematics and performance errors.
Streng, Martha L; Popa, Laurentiu S; Ebner, Timothy J
2017-09-01
Requisite for understanding cerebellar function is a complete characterization of the signals provided by complex spike (CS) discharge of Purkinje cells, the output neurons of the cerebellar cortex. Numerous studies have provided insights into CS function, with the most predominant view being that they are evoked by error events. However, several reports suggest that CSs encode other aspects of movements and do not always respond to errors or unexpected perturbations. Here, we evaluated CS firing during a pseudo-random manual tracking task in the monkey (Macaca mulatta). This task provides extensive coverage of the work space and relative independence of movement parameters, delivering a robust data set to assess the signals that activate climbing fibers. Using reverse correlation, we determined feedforward and feedback CS firing probability maps with position, velocity, and acceleration, as well as position error, a measure of tracking performance. The direction and magnitude of the CS modulation were quantified using linear regression analysis. The major findings are that CSs significantly encode all three kinematic parameters and position error, with acceleration modulation particularly common. The modulation is not related to "events," either for position error or kinematics. Instead, CSs are spatially tuned and provide a linear representation of each parameter evaluated. The CS modulation is largely predictive. Similar analyses show that the simple spike firing is modulated by the same parameters as the CSs. Therefore, CSs carry a broader array of signals than previously described and argue for climbing fiber input having a prominent role in online motor control. NEW & NOTEWORTHY This article demonstrates that complex spike (CS) discharge of cerebellar Purkinje cells encodes multiple parameters of movement, including motor errors and kinematics. The CS firing is not driven by error or kinematic events; instead it provides a linear representation of each parameter. In contrast with the view that CSs carry feedback signals, the CSs are predominantly predictive of upcoming position errors and kinematics. Therefore, climbing fibers carry multiple and predictive signals for online motor control. Copyright © 2017 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
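A minimal sketch of the histogram-based optimal estimator described above: the optimal estimator of a quantity q given input parameters phi is the conditional mean E[q | phi], estimated here by binning; the irreducible error is the remaining mean-square deviation about that conditional mean. Bin counts and data shapes are illustrative, and the growth of the bin count with the number of input parameters illustrates why histograms become impractical for multi-parameter models.

```python
import numpy as np

def irreducible_error_histogram(phi, q, bins=32):
    """phi: (n, d) input parameters, q: (n,) exact unclosed term from DNS data."""
    d = phi.shape[1]
    edges = [np.linspace(phi[:, j].min(), phi[:, j].max(), bins + 1) for j in range(d)]
    idx = np.stack([np.clip(np.digitize(phi[:, j], edges[j]) - 1, 0, bins - 1)
                    for j in range(d)], axis=1)
    flat = np.ravel_multi_index(idx.T, (bins,) * d)       # flattened multi-dimensional bin index
    counts = np.bincount(flat, minlength=bins ** d)
    sums = np.bincount(flat, weights=q, minlength=bins ** d)
    cond_mean = np.zeros(bins ** d)
    np.divide(sums, counts, out=cond_mean, where=counts > 0)  # E[q | phi] per bin
    return np.mean((q - cond_mean[flat]) ** 2)                # irreducible error estimate
```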
Allowable carbon emissions lowered by multiple climate targets.
Steinacher, Marco; Joos, Fortunat; Stocker, Thomas F
2013-07-11
Climate targets are designed to inform policies that would limit the magnitude and impacts of climate change caused by anthropogenic emissions of greenhouse gases and other substances. The target that is currently recognized by most world governments places a limit of two degrees Celsius on the global mean warming since preindustrial times. This would require large sustained reductions in carbon dioxide emissions during the twenty-first century and beyond. Such a global temperature target, however, is not sufficient to control many other quantities, such as transient sea level rise, ocean acidification and net primary production on land. Here, using an Earth system model of intermediate complexity (EMIC) in an observation-informed Bayesian approach, we show that allowable carbon emissions are substantially reduced when multiple climate targets are set. We take into account uncertainties in physical and carbon cycle model parameters, radiative efficiencies, climate sensitivity and carbon cycle feedbacks along with a large set of observational constraints. Within this framework, we explore a broad range of economically feasible greenhouse gas scenarios from the integrated assessment community to determine the likelihood of meeting a combination of specific global and regional targets under various assumptions. For any given likelihood of meeting a set of such targets, the allowable cumulative emissions are greatly reduced from those inferred from the temperature target alone. Therefore, temperature targets alone are unable to comprehensively limit the risks from anthropogenic emissions.
NASA Astrophysics Data System (ADS)
Klein, Ole; Cirpka, Olaf A.; Bastian, Peter; Ippisch, Olaf
2017-04-01
In the geostatistical inverse problem of subsurface hydrology, continuous hydraulic parameter fields, in most cases hydraulic conductivity, are estimated from measurements of dependent variables, such as hydraulic heads, under the assumption that the parameter fields are autocorrelated random space functions. Upon discretization, the continuous fields become large parameter vectors with O(10^4-10^7) elements. While cokriging-like inversion methods have been shown to be efficient for highly resolved parameter fields when the number of measurements is small, they require the calculation of the sensitivity of each measurement with respect to all parameters, which may become prohibitive with large sets of measured data such as those arising from transient groundwater flow. We present a Preconditioned Conjugate Gradient method for the geostatistical inverse problem, in which a single adjoint equation needs to be solved to obtain the gradient of the objective function. Using the autocovariance matrix of the parameters as preconditioning matrix, expensive multiplications with its inverse can be avoided, and the number of iterations is significantly reduced. We use a randomized spectral decomposition of the posterior covariance matrix of the parameters to perform a linearized uncertainty quantification of the parameter estimate. The feasibility of the method is tested by virtual examples of head observations in steady-state and transient groundwater flow. These synthetic tests demonstrate that transient data can reduce both parameter uncertainty and time spent conducting experiments, while the presented methods are able to handle the resulting large number of measurements.
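A minimal sketch of a preconditioned conjugate gradient iteration in which the preconditioner is applied as a multiplication with the prior autocovariance matrix Q, so its inverse is never needed, in the spirit of the method described above. The linear operator A stands for the (linearized) Hessian of the geostatistical objective; both operators are passed as callables and all sizes are illustrative assumptions.

```python
import numpy as np

def pcg(apply_A, apply_Q, b, n_iter=200, tol=1e-8):
    """Solve A x = b with preconditioning by multiplication with Q."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_Q(r)                 # preconditioning step: multiply by Q, no inverse needed
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Q(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```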
Tao, Fulu; Rötter, Reimund P; Palosuo, Taru; Gregorio Hernández Díaz-Ambrona, Carlos; Mínguez, M Inés; Semenov, Mikhail A; Kersebaum, Kurt Christian; Nendel, Claas; Specka, Xenia; Hoffmann, Holger; Ewert, Frank; Dambreville, Anaelle; Martre, Pierre; Rodríguez, Lucía; Ruiz-Ramos, Margarita; Gaiser, Thomas; Höhn, Jukka G; Salo, Tapio; Ferrise, Roberto; Bindi, Marco; Cammarano, Davide; Schulman, Alan H
2018-03-01
Climate change impact assessments are plagued with uncertainties from many sources, such as climate projections or the inadequacies in structure and parameters of the impact model. Previous studies tried to account for the uncertainty from one or two of these. Here, we developed a triple-ensemble probabilistic assessment using seven crop models, multiple sets of model parameters and eight contrasting climate projections together to comprehensively account for uncertainties from these three important sources. We demonstrated the approach in assessing climate change impact on barley growth and yield at Jokioinen, Finland in the Boreal climatic zone and Lleida, Spain in the Mediterranean climatic zone, for the 2050s. We further quantified and compared the contribution of crop model structure, crop model parameters and climate projections to the total variance of ensemble output using Analysis of Variance (ANOVA). Based on the triple-ensemble probabilistic assessment, the median of simulated yield change was -4% and +16%, and the probability of decreasing yield was 63% and 31% in the 2050s, at Jokioinen and Lleida, respectively, relative to 1981-2010. The contribution of crop model structure to the total variance of ensemble output was larger than that from downscaled climate projections and model parameters. The relative contribution of crop model parameters and downscaled climate projections to the total variance of ensemble output varied greatly among the seven crop models and between the two sites. The contribution of downscaled climate projections was on average larger than that of crop model parameters. This information on the uncertainty from different sources can be quite useful for model users to decide where to put the most effort when preparing or choosing models or parameters for impact analyses. We concluded that the triple-ensemble probabilistic approach that accounts for the uncertainties from multiple important sources provides more comprehensive information for quantifying uncertainties in climate change impact assessments as compared to the conventional approaches that are deterministic or only account for the uncertainties from one or two of the uncertainty sources. © 2017 John Wiley & Sons Ltd.
COACH: profile-profile alignment of protein families using hidden Markov models.
Edgar, Robert C; Sjölander, Kimmen
2004-05-22
Alignments of two multiple-sequence alignments, or statistical models of such alignments (profiles), have important applications in computational biology. The increased amount of information in a profile versus a single sequence can lead to more accurate alignments and more sensitive homolog detection in database searches. Several profile-profile alignment methods have been proposed and have been shown to improve sensitivity and alignment quality compared with sequence-sequence methods (such as BLAST) and profile-sequence methods (e.g. PSI-BLAST). Here we present a new approach to profile-profile alignment we call Comparison of Alignments by Constructing Hidden Markov Models (HMMs) (COACH). COACH aligns two multiple sequence alignments by constructing a profile HMM from one alignment and aligning the other to that HMM. We compare the alignment accuracy of COACH with two recently published methods: Yona and Levitt's prof_sim and Sadreyev and Grishin's COMPASS. On two sets of reference alignments selected from the FSSP database, we find that COACH is able, on average, to produce alignments giving the best coverage or the fewest errors, depending on the chosen parameter settings. COACH is freely available from www.drive5.com/lobster
NASA Astrophysics Data System (ADS)
Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.
2017-08-01
Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.
Improved classification accuracy by feature extraction using genetic algorithms
NASA Astrophysics Data System (ADS)
Patriarche, Julia; Manduca, Armando; Erickson, Bradley J.
2003-05-01
A feature extraction algorithm has been developed for the purposes of improving classification accuracy. The algorithm uses a genetic algorithm / hill-climber hybrid to generate a set of linearly recombined features, which may be of reduced dimensionality compared with the original set. The genetic algorithm performs the global exploration, and a hill climber explores local neighborhoods. Hybridizing the genetic algorithm with a hill climber improves both the rate of convergence, and the final overall cost function value; it also reduces the sensitivity of the genetic algorithm to parameter selection. The genetic algorithm includes the operators: crossover, mutation, and deletion / reactivation - the last of these effects dimensionality reduction. The feature extractor is supervised, and is capable of deriving a separate feature space for each tissue (which are reintegrated during classification). A non-anatomical digital phantom was developed as a gold standard for testing purposes. In tests with the phantom, and with images of multiple sclerosis patients, classification with feature extractor derived features yielded lower error rates than using standard pulse sequences, and with features derived using principal components analysis. Using the multiple sclerosis patient data, the algorithm resulted in a mean 31% reduction in classification error of pure tissues.
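A minimal sketch of the local-search half of the hybrid described above: a hill climber that perturbs a linear feature-recombination matrix and keeps changes that improve a class-separability cost (here a simple Fisher-style ratio). The genetic-algorithm global search, the deletion/reactivation operator, and the per-tissue feature spaces are omitted; the cost function and step sizes are illustrative assumptions.

```python
import numpy as np

def fisher_cost(W, X, y):
    """Negative between-class to within-class scatter ratio of the recombined features."""
    Z = X @ W
    classes = np.unique(y)
    overall = Z.mean(axis=0)
    between = sum(np.sum((Z[y == c].mean(axis=0) - overall) ** 2) for c in classes)
    within = sum(np.sum((Z[y == c] - Z[y == c].mean(axis=0)) ** 2) for c in classes)
    return -between / (within + 1e-12)          # lower is better

def hill_climb(X, y, n_out=3, n_iter=500, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_out))    # linear recombination matrix
    cost = fisher_cost(W, X, y)
    for _ in range(n_iter):
        W_new = W + step * rng.normal(size=W.shape)
        c_new = fisher_cost(W_new, X, y)
        if c_new < cost:                        # keep only improving perturbations
            W, cost = W_new, c_new
    return W, cost
```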
Direct system parameter identification of mechanical structures with application to modal analysis
NASA Technical Reports Server (NTRS)
Leuridan, J. M.; Brown, D. L.; Allemang, R. J.
1982-01-01
In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.
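A minimal sketch of the direct identification idea described above: with measured force inputs and the corresponding acceleration, velocity, and displacement responses, the mass, damping, and stiffness matrices are estimated jointly by least squares from M*a + C*v + K*x = f. Data shapes are illustrative assumptions, and numerical differentiation or integration needed to obtain consistent response histories is not shown.

```python
import numpy as np

def identify_mck(acc, vel, disp, force):
    """acc, vel, disp, force: (n_samples, n_dof) measured time histories."""
    Z = np.hstack([acc, vel, disp])                  # (n_samples, 3 * n_dof) regressor matrix
    Theta, *_ = np.linalg.lstsq(Z, force, rcond=None)
    n = acc.shape[1]
    M, C, K = Theta[:n].T, Theta[n:2 * n].T, Theta[2 * n:].T
    return M, C, K                                   # estimated mass, damping, stiffness matrices
```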
High dynamic range coding imaging system
NASA Astrophysics Data System (ADS)
Wu, Renfan; Huang, Yifan; Hou, Guangqi
2014-10-01
We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images and display an HDR image. We build an optical simulation model and obtain simulation images to verify the novel system.
NASA Astrophysics Data System (ADS)
Wang, Zu-liang; Zhang, Ting; Xie, Shi-yang
2017-01-01
In order to improve agricultural tracing efficiency and reduce tracking and monitoring costs, quality tracking and tracing of agricultural products based on Radio-Frequency Identification (RFID) technology is studied, and a tracing and tracking model is set up. A three-layer structure model is established to realize high-quality traceability and tracking of agricultural products. To solve collision problems among multiple RFID tags and improve identification efficiency, a new reservation slot allocation mechanism is proposed, and its parameter is then analyzed and optimized by numerical simulation.
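A minimal sketch of the kind of numerical simulation used to tune an anti-collision parameter: tags pick slots at random within a frame, and the fraction of singly occupied slots (successful identifications per slot offered) is estimated over a range of frame sizes. This is generic framed slotted ALOHA; the specific reservation mechanism of the paper is not reproduced, and all sizes are illustrative assumptions.

```python
import numpy as np

def throughput(n_tags, frame_size, n_trials=2000, seed=0):
    """Fraction of slots holding exactly one tag, averaged over random trials."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(n_trials):
        slots = rng.integers(0, frame_size, size=n_tags)
        ok += np.sum(np.bincount(slots, minlength=frame_size) == 1)
    return ok / (n_trials * frame_size)

# pick the frame size maximizing throughput for 64 tags (peaks near frame_size ~ n_tags)
best_frame = max(range(8, 257, 8), key=lambda f: throughput(64, f))
```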
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Maclean, A.; Tolson, B. A.; Burn, D. H.
2009-05-01
Hydrologic model calibration aims to find a set of parameters that adequately simulates observations of watershed behavior, such as streamflow, or a state variable, such as snow water equivalent (SWE). There are different metrics for evaluating calibration effectiveness that involve quantifying prediction errors, such as the Nash-Sutcliffe (NS) coefficient and bias evaluated for the entire calibration period, on a seasonal basis, for low flows, or for high flows. Many of these metrics are conflicting such that the set of parameters that maximizes the high flow NS differs from the set of parameters that maximizes the low flow NS. Conflicting objectives are very likely when different calibration objectives are based on different fluxes and/or state variables (e.g., NS based on streamflow versus SWE). One of the most popular ways to balance different metrics is to aggregate them based on their importance and find the set of parameters that optimizes a weighted sum of the efficiency metrics. Comparing alternative hydrologic models (e.g., assessing model improvement when a process or more detail is added to the model) based on the aggregated objective might be misleading since it represents one point on the tradeoff of desired error metrics. To derive a more comprehensive model comparison, we solved a bi-objective calibration problem to estimate the tradeoff between two error metrics for each model. Although this approach is computationally more expensive than the aggregation approach, it results in a better understanding of the effectiveness of selected models at each level of every error metric and therefore provides a better rationale for judging relative model quality. The two alternative models used in this study are two MESH hydrologic models (version 1.2) of the Wolf Creek Research basin that differ in their watershed spatial discretization (a single Grouped Response Unit, GRU, versus multiple GRUs). The MESH model, currently under development by Environment Canada, is a coupled land-surface and hydrologic model. Results will demonstrate the conclusions a modeller might make regarding the value of additional watershed spatial discretization under both an aggregated (single-objective) and multi-objective model comparison framework.
Geerse, Daphne J.; Coolen, Bert H.; Roerdink, Melvyn
2015-01-01
Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect’s 3D body point’s time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point’s time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point’s time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters’ walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman’s bias and limits of agreement. Body point’s time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point’s time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner. PMID:26461498
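A minimal sketch of the between-systems agreement statistics mentioned above: Bland-Altman bias and limits of agreement for a gait parameter measured by the two systems. The ICC computation (two-way model) is left to a statistics package; the arrays are illustrative paired measurements, not the study data.

```python
import numpy as np

def bland_altman(a, b):
    """a, b: paired measurements of the same gait parameter from two systems."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)   # bias and limits of agreement

# toy usage: step length (m) from the multi-Kinect set-up versus the gold standard
kinect = np.array([0.62, 0.58, 0.71, 0.66, 0.60])
optotrak = np.array([0.63, 0.57, 0.72, 0.65, 0.61])
bias, loa = bland_altman(kinect, optotrak)
```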
Clustering network layers with the strata multilayer stochastic block model
Stanley, Natalie; Shai, Saray; Taylor, Dane; Mucha, Peter J.
2016-01-01
Multilayer networks are a useful data structure for simultaneously capturing multiple types of relationships between a set of nodes. In such networks, each relational definition gives rise to a layer. While each layer provides its own set of information, community structure across layers can be collectively utilized to discover and quantify underlying relational patterns between nodes. To concisely extract information from a multilayer network, we propose to identify and combine sets of layers with meaningful similarities in community structure. In this paper, we describe the “strata multilayer stochastic block model” (sMLSBM), a probabilistic model for multilayer community structure. The central extension of the model is that there exist groups of layers, called “strata”, which are defined such that all layers in a given stratum have community structure described by a common stochastic block model (SBM). That is, layers in a stratum exhibit similar node-to-community assignments and SBM probability parameters. Fitting the sMLSBM to a multilayer network provides a joint clustering that yields node-to-community and layer-to-stratum assignments, which cooperatively aid one another during inference. We describe an algorithm for separating layers into their appropriate strata and an inference technique for estimating the SBM parameters for each stratum. We demonstrate our method using synthetic networks and a multilayer network inferred from data collected in the Human Microbiome Project. PMID:28435844
Complex Spiral Structure in the HD 100546 Transitional Disk as Revealed by GPI and MagAO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follette, Katherine B.; Macintosh, Bruce; Mullen, Wyatt
We present optical and near-infrared high-contrast images of the transitional disk HD 100546 taken with the Magellan Adaptive Optics system (MagAO) and the Gemini Planet Imager (GPI). GPI data include both polarized intensity and total intensity imagery, and MagAO data are taken in Simultaneous Differential Imaging mode at Hα. The new GPI H-band total intensity data represent a significant enhancement in sensitivity and field rotation compared to previous data sets and enable a detailed exploration of substructure in the disk. The data are processed with a variety of differential imaging techniques (polarized, angular, reference, and simultaneous differential imaging) in an attempt to identify the disk structures that are most consistent across wavelengths, processing techniques, and algorithmic parameters. The inner disk cavity at 15 au is clearly resolved in multiple data sets, as are a variety of spiral features. While the cavity and spiral structures are identified at levels significantly distinct from the neighboring regions of the disk under several algorithms and with a range of algorithmic parameters, emission at the location of HD 100546 “c” varies from point-like under aggressive algorithmic parameters to a smooth continuous structure with conservative parameters, and is consistent with disk emission. Features identified in the HD 100546 disk bear qualitative similarity to computational models of a moderately inclined two-armed spiral disk, where projection effects and wrapping of the spiral arms around the star result in a number of truncated spiral features in forward-modeled images.
NASA Astrophysics Data System (ADS)
Cheng, Y.; Ogden, F. L.; Zhu, J.
2017-12-01
The hydrologic behavior of steep catchments with saprolitic soils in the humid seasonal tropics varies with land use and cover, even when they have identical topographic index and slope distributions, underlying geology and soils textures. Forested catchments can produce more baseflow during the dry season compared to catchments containing substantial amount of pasture, the so-called "sponge effect". During rainfall events, forested catchments can also exhibit lower peak runoff rates and runoff efficiencies compared to pasture catchments. We hypothesize that hydrologic effects of land use arise from differences in preferential flow paths (PFPs) formed by biotic and abiotic factors in the upper one to two meters of soil and that land use effects on hydrological response are described by the relative amounts of forest and pasture within a catchment. Furthermore, we hypothesize that infiltration measurements at different scales allow estimation of PFP-related parameters. These hypotheses are tested by a model that explicitly simulates PFPs using distinct input parameter sets for forest and pasture. Runoff observations from three catchments with pasture, forest, and a mosaic of subsistence agricultural land covers allow model evaluation. Multiple objective criteria indicate that field measurements of infiltration enable PFP-relevant parameter identification and that pasture and forest end member parameter sets describe much of the observed difference. Analysis of water balance components and comparison between average transient water table depth and vertical PFP flow capacity demonstrate that the interplay of lateral and vertical PFPs contribute to the sponge-effect and can explain differences in peak runoff and runoff efficiency.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using the probability ratio to the current state, so that the chain history will give a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets, and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
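A minimal sketch of the Monte Carlo scheme described above: candidate parameter sets (mass, drag area, rolling resistance) are proposed, fed to a road load equation, and accepted or rejected from the likelihood ratio against the measured load, so the chain history approximates the parameter distribution. The road load form, priors, step sizes, and noise level are illustrative assumptions, not the report's exact variant.

```python
import numpy as np

RHO_AIR, G = 1.2, 9.81

def road_load(params, speed, accel):
    mass, cda, crr = params
    return mass * accel + mass * G * crr + 0.5 * RHO_AIR * cda * speed ** 2

def log_prob(params, speed, accel, measured_load, sigma=200.0):
    if np.any(np.asarray(params) <= 0):
        return -np.inf                      # simple positivity prior
    resid = measured_load - road_load(params, speed, accel)
    return -0.5 * np.sum((resid / sigma) ** 2)

def mcmc(speed, accel, measured_load, n_iter=20000):
    rng = np.random.default_rng(1)
    x = np.array([15000.0, 6.0, 0.007])     # initial guess: mass (kg), CdA (m^2), Crr (-)
    lp = log_prob(x, speed, accel, measured_load)
    chain = []
    for _ in range(n_iter):
        prop = x + rng.normal(0, [100.0, 0.1, 2e-4])
        lp_prop = log_prob(prop, speed, accel, measured_load)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept via probability ratio
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)                  # distribution of accepted parameter sets
```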
Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group
NASA Astrophysics Data System (ADS)
Ardentov, Andrei A.; Sachkov, Yuri L.
2017-12-01
We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity; they consist of conjugate points as well. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i.e., the set of minimizers for each terminal point in the Engel group.
Weight Status in Persons with Multiple Sclerosis: Implications for Mobility Outcomes
Pilutti, Lara A.; Dlugonski, Deirdre; Pula, John H.; Motl, Robert W.
2012-01-01
The accumulation of excess body weight may have important health and disease consequences for persons with multiple sclerosis (MS). This study examined the effect of weight status on mobility using a comprehensive set of mobility outcomes including ambulatory performance (timed 25-foot walk, T25FW; 6-minute walk, 6MW; oxygen cost of walking, Cw; spatiotemporal parameters of gait; self-reported walking impairment, Multiple Sclerosis Walking Scale-12 (MSWS-12); and free-living activity, accelerometry) in 168 ambulatory persons with MS. Mean (SD) BMI was 27.7 (5.1) kg/m2. Of the 168 participants, 31.0% were classified as normal weight (BMI = 18.5–24.9 kg/m2), 36.3% were classified as overweight (BMI = 25.0–29.9 kg/m2), and 32.7% were classified as obese, classes I and II (BMI = 30–39.9 kg/m2). There were no significant differences among BMI groups on T25FW and 6MW, Cw, spatiotemporal gait parameters, MSWS-12, or daily step and movement counts. The prevalence of overweight and obesity in this sample was almost 70%, but there was not a consistent nor significant impact of BMI on outcomes of mobility. The lack of an effect of weight status on mobility emphasizes the need to focus on and identify other factors which may be important targets of ambulatory performance in persons with MS. PMID:23050129
On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs
NASA Astrophysics Data System (ADS)
Jiang, Minghui
We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time max{t^{O(k)}, 2^{O(k log k)}} · poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.
Using Registered Dental Hygienists to Promote a School-Based Approach to Dental Public Health
Wellever, Anthony; Kelly, Patricia
2017-01-01
We examine a strategy for improving oral health in the United States by focusing on low-income children in school-based settings. Vulnerable children often experience cultural, social, economic, structural, and geographic barriers when trying to access dental services in traditional dental office settings. These disparities have been discussed for more than a decade in multiple US Department of Health and Human Services publications. One solution is to revise dental practice acts to allow registered dental hygienists increased scope of services, expanded public health delivery opportunities, and decreased dentist supervision. We provide examples of how federally qualified health centers have implemented successful school-based dental models within the parameters of two state policies that allow registered dental hygienists varying levels of dentist supervision. Changes to dental practice acts at the state level allowing registered dental hygienists to practice with limited supervision in community settings, such as schools, may provide vulnerable populations greater access to screening and preventive services. We derive our recommendations from expert opinion. PMID:28661808
Greenlees, Janet
2013-01-01
This article examines the position of the working environment within public health priorities and as a contributor to the health of a community. Using two Lancashire textile towns (Burnley and Blackburn) as case studies and drawing on a variety of sources, it highlights how, while legislation set the industry parameters for legal enforcement of working conditions, local public health priorities were pivotal in setting codes of practice. The complexities entwined with identifying the working environment as a cause of ill health and with improving it were entangled within the local community health context. In addition, the multiple understandings of Medical Officers of Health surrounding the remit of their responsibilities impacted the local health context. These did not always parallel national regulations. Indeed, it was these local, community specific forces that set the public health agenda, determined its path and the place of the working environment within this. PMID:24771979
Determination of laser cutting process conditions using the preference selection index method
NASA Astrophysics Data System (ADS)
Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan
2017-03-01
Determination of adequate parameter settings for simultaneous improvement of multiple quality and productivity characteristics is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for applying the PSI method is that it is an almost unexplored multi-criteria decision making (MCDM) method and, moreover, it does not require assessment of the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determining laser cutting parameter settings, the application of the PSI method is explained in detail. The experiment was realized using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it is of limited use in situations where a large number of alternatives have attribute values (performances) very close to those which are preferred.
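For readers unfamiliar with the PSI calculation, the following is a minimal sketch of its steps (normalization, preference variation, overall preference value, and the final index), assuming a small decision matrix with illustrative values rather than the paper's experimental data; the criteria follow the abstract, but the numbers and the benefit/cost assignment are hypothetical.

```python
import numpy as np

# Minimal PSI sketch: rows = candidate parameter settings (alternatives),
# columns = criteria (surface roughness, HAZ, kerf width, MRR).
# The values below are illustrative only, not the paper's data.
X = np.array([
    [2.1, 0.12, 0.35, 40.0],
    [1.8, 0.10, 0.40, 35.0],
    [2.5, 0.15, 0.30, 50.0],
])
benefit = np.array([False, False, False, True])  # only MRR is larger-the-better

# 1) Normalize: benefit criteria x/max, cost criteria min/x
R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)

# 2) Preference variation value of each criterion
PV = ((R - R.mean(axis=0)) ** 2).sum(axis=0)

# 3) Overall preference value (criterion weights without user-assigned importances)
phi = 1.0 - PV
psi = phi / phi.sum()

# 4) Preference selection index per alternative; the highest score is preferred
I = (R * psi).sum(axis=1)
print("PSI scores:", I, "| best setting:", I.argmax())
```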
A holistic approach to ZigBee performance enhancement for home automation networks.
Betzler, August; Gomez, Carles; Demirkol, Ilker; Paradells, Josep
2014-08-14
Wireless home automation networks are gaining importance for smart homes. In this ambit, ZigBee networks play an important role. The ZigBee specification defines a default set of protocol stack parameters and mechanisms that is further refined by the ZigBee Home Automation application profile. In a holistic approach, we analyze how the network performance is affected with the tuning of parameters and mechanisms across multiple layers of the ZigBee protocol stack and investigate possible performance gains by implementing and testing alternative settings. The evaluations are carried out in a testbed of 57 TelosB motes. The results show that considerable performance improvements can be achieved by using alternative protocol stack configurations. From these results, we derive two improved protocol stack configurations for ZigBee wireless home automation networks that are validated in various network scenarios. In our experiments, these improved configurations yield a relative packet delivery ratio increase of up to 33.6%, a delay decrease of up to 66.6% and an improvement of the energy efficiency for battery powered devices of up to 48.7%, obtainable without incurring any overhead to the network.
A simulation study on Bayesian Ridge regression models for several collinearity levels
NASA Astrophysics Data System (ADS)
Efendi, Achmad; Effrihan
2017-12-01
When analyzing data with a multiple regression model, if collinearity is present, one or several predictor variables are usually omitted from the model. Sometimes, however, there are reasons, for instance medical or economic ones, why all predictors are important and should be included in the model. Ridge regression is commonly used to cope with collinearity in such cases. In this modeling approach, weights for the predictor variables are used in estimating the parameters, and estimation can follow the likelihood principle. A Bayesian version of the estimation is an alternative. The Bayesian approach has not matched likelihood-based estimation in popularity because of some difficulties, notably computation, but with the recent improvement of computational methodology this should no longer be a problem. This paper discusses a simulation study for evaluating the characteristics of Bayesian Ridge regression parameter estimates. Several simulation settings are considered, based on a variety of collinearity levels and sample sizes. The results show that the Bayesian method gives better performance for relatively small sample sizes, while for the other settings it performs similarly to the likelihood method.
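As a rough illustration of a single simulation replicate, the sketch below generates collinear predictors at a chosen correlation level and small sample size, then compares Bayesian ridge estimates with an ordinary likelihood-based (least squares) fit; the correlation level, sample size, and coefficients are arbitrary stand-ins, and scikit-learn's BayesianRidge is used as a generic Bayesian ridge estimator, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(0)
n, rho = 30, 0.95                       # small sample, high collinearity level
beta = np.array([1.0, 2.0, -1.5])       # true coefficients (illustrative)

# Generate predictors with pairwise correlation rho
cov = np.full((3, 3), rho)
np.fill_diagonal(cov, 1.0)
X = rng.multivariate_normal(np.zeros(3), cov, size=n)
y = X @ beta + rng.normal(scale=1.0, size=n)

bayes = BayesianRidge().fit(X, y)       # Gaussian prior shrinks the coefficients
ols = LinearRegression().fit(X, y)      # likelihood-based counterpart

print("Bayesian ridge:", np.round(bayes.coef_, 3))
print("OLS:           ", np.round(ols.coef_, 3))
```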
GLOBALLY ADAPTIVE QUANTILE REGRESSION WITH ULTRA-HIGH DIMENSIONAL DATA
Zheng, Qi; Peng, Limin; He, Xuming
2015-01-01
Quantile regression has become a valuable tool to analyze heterogeneous covariate-response associations that are often encountered in practice. The development of quantile regression methodology for high dimensional covariates primarily focuses on examination of model sparsity at a single or multiple quantile levels, which are typically prespecified ad hoc by the users. The resulting models may be sensitive to the specific choices of the quantile levels, leading to difficulties in interpretation and erosion of confidence in the results. In this article, we propose a new penalization framework for quantile regression in the high dimensional setting. We employ adaptive L1 penalties, and more importantly, propose a uniform selector of the tuning parameter for a set of quantile levels to avoid some of the potential problems with model selection at individual quantile levels. Our proposed approach achieves consistent shrinkage of regression quantile estimates across a continuous range of quantile levels, enhancing the flexibility and robustness of the existing penalized quantile regression methods. Our theoretical results include the oracle rate of uniform convergence and weak convergence of the parameter estimators. We also use numerical studies to confirm our theoretical findings and illustrate the practical utility of our proposal. PMID:26604424
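A rough sketch of the idea of sharing one tuning parameter across a set of quantile levels is given below; it uses scikit-learn's L1-penalized QuantileRegressor as a simple stand-in for the paper's adaptive-L1 estimator (which it is not), and the data, quantile grid, and penalty value are illustrative.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_t(df=3, size=n)   # heavy-tailed noise

taus = np.linspace(0.1, 0.9, 9)   # a set of quantile levels
alpha = 0.05                      # one tuning parameter shared across all levels

coefs = np.array([
    QuantileRegressor(quantile=t, alpha=alpha, solver="highs").fit(X, y).coef_
    for t in taus
])

# Predictors retained uniformly across the whole range of quantile levels
selected = np.where((np.abs(coefs) > 1e-8).all(axis=0))[0]
print("uniformly selected predictors:", selected)
```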
Electric Propulsion System Selection Process for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Landau, Damon; Chase, James; Kowalkowski, Theresa; Oh, David; Randolph, Thomas; Sims, Jon; Timmerman, Paul
2008-01-01
The disparate design problems of selecting an electric propulsion system, launch vehicle, and flight time all have a significant impact on the cost and robustness of a mission. The effects of these system choices combine into a single optimization of the total mission cost, where the design constraint is a required spacecraft neutral (non-electric propulsion) mass. Cost-optimal systems are designed for a range of mass margins to examine how the optimal design varies with mass growth. The resulting cost-optimal designs are compared with results generated via mass optimization methods. Additional optimizations with continuous system parameters address the impact on mission cost due to discrete sets of launch vehicle, power, and specific impulse. The examined mission set comprises a near-Earth asteroid sample return, multiple main belt asteroid rendezvous, comet rendezvous, comet sample return, and a mission to Saturn.
Flexible Method for Inter-object Communication in C++
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Gould, Jack J.
1994-01-01
A method has been developed for organizing and sharing large amounts of information between objects in C++ code. This method uses a set of object classes to define variables and group them into tables. The variable tables presented here provide a convenient way of defining and cataloging data, as well as a user-friendly input/output system, a standardized set of access functions, mechanisms for ensuring data integrity, methods for interprocessor data transfer, and an interpretive language for programming relationships between parameters. The object-oriented nature of these variable tables enables the use of multiple data types, each with unique attributes and behavior. Because each variable provides its own access methods, redundant table lookup functions can be bypassed, thus decreasing access times while maintaining data integrity. In addition, a method for automatic reference counting was developed to manage memory safely.
NASA Astrophysics Data System (ADS)
Monterial, Mateusz; Marleau, Peter; Paff, Marc; Clarke, Shaun; Pozzi, Sara
2017-04-01
We present the results from the first measurements of the Time-Correlated Pulse-Height (TCPH) distributions from a 4.5 kg sphere of α-phase weapons-grade plutonium metal in five configurations: bare, reflected by 1.27 cm and 2.54 cm of tungsten, and 2.54 cm and 7.62 cm of polyethylene. A new method for characterizing source multiplication and shielding configuration is also demonstrated. The method relies on solving for the underlying fission chain timing distribution that drives the spreading of the measured TCPH distribution. We found that a gamma distribution fits the fission chain timing distribution well and that the fit parameters correlate with both multiplication (rate parameter) and shielding material type (shape parameter). The source-to-detector distance was another free parameter that we were able to optimize, and it proved to be the best constrained parameter. MCNPX-PoliMi simulations were used to complement the measurements and help illustrate trends in these parameters and their relation to multiplication and the amount and type of material coupled to the subcritical assembly.
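A minimal sketch of the gamma-distribution fit described above is shown using SciPy; the timing sample is synthetic, and the comments tying the shape parameter to shielding material and the rate parameter to multiplication simply restate the correlations reported in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for a fission-chain timing sample (e.g. nanoseconds)
times = rng.gamma(shape=2.5, scale=8.0, size=5000)

# Fit a gamma distribution with the location fixed at zero
shape, loc, scale = stats.gamma.fit(times, floc=0.0)
rate = 1.0 / scale

print(f"shape = {shape:.2f}  (reported to track shielding material type)")
print(f"rate  = {rate:.3f}   (reported to track multiplication)")
```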
Multiple Climate States of Habitable Exoplanets: The Role of Obliquity and Irradiance
NASA Astrophysics Data System (ADS)
Kilic, C.; Raible, C. C.; Stocker, T. F.
2017-08-01
Stable, steady climate states on an Earth-size planet with no continents are determined as a function of the tilt of the planet’s rotation axis (obliquity) and stellar irradiance. Using a general circulation model of the atmosphere coupled to a slab ocean and a thermodynamic sea ice model, two states, the Aquaplanet and the Cryoplanet, are found for high and low stellar irradiance, respectively. In addition, four stable states with seasonally and perennially open water are discovered if comprehensively exploring a parameter space of obliquity from 0° to 90° and stellar irradiance from 70% to 135% of the present-day solar constant. Within 11% of today’s solar irradiance, we find a rich structure of stable states that extends the area of habitability considerably. For the same set of parameters, different stable states result if simulations are initialized from an aquaplanet or a cryoplanet state. This demonstrates the possibility of multiple equilibria, hysteresis, and potentially rapid climate change in response to small changes in the orbital parameters. The dynamics of the atmosphere of an aquaplanet or a cryoplanet state is investigated for similar values of obliquity and stellar irradiance. The atmospheric circulation substantially differs in the two states owing to the relative strength of the primary drivers of the meridional transport of heat and momentum. At 90° obliquity and present-day solar constant, the atmospheric dynamics of an Aquaplanet state and one with an equatorial ice cover is analyzed.
Automated Glacier Surface Velocity using Multi-Image/Multi-Chip (MIMC) Feature Tracking
NASA Astrophysics Data System (ADS)
Ahn, Y.; Howat, I. M.
2009-12-01
Remote sensing from space has enabled effective monitoring of remote and inhospitable polar regions. Glacier velocity, and its variation in time, is one of the most important parameters needed to understand glacier dynamics, glacier mass balance and contribution to sea level rise. Regular measurements of ice velocity are possible from large and accessible satellite data archives, such as ASTER and LANDSAT-7. Among satellite imagery, optical imagery (i.e. passive, visible to near-infrared band sensors) provides abundant data with optimal spatial resolution and repeat interval for tracking glacier motion at high temporal resolution. Because of the massive amounts of data, computation of ice velocity from feature tracking requires (1) a user-friendly interface, (2) a minimum of local/user parameter inputs and (3) results that need minimal editing. We focus on robust feature tracking applicable to all currently available optical satellite imagery, i.e. ASTER, SPOT, LANDSAT, etc. We introduce the MIMC (multiple images/multiple chip sizes) matching approach, which does not involve any user-defined local/empirical parameters except the approximate average glacier speed. We also introduce a method for extracting velocity from LANDSAT-7 SLC-off data, in which 22 percent of the scene data are missing in slanted strips due to failure of the scan line corrector. We apply our approach to major outlet glaciers in west/east Greenland and assess our MIMC feature tracking technique by comparison with conventional correlation matching and other methods (e.g. InSAR).
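The following is a rough sketch of the underlying matching idea: normalized cross-correlation of an image chip between two acquisitions, repeated over several chip sizes and reduced to a median offset. It is only a generic illustration, not the MIMC implementation, and the images, chip sizes, and search radius are invented for the example.

```python
import numpy as np
from skimage.feature import match_template

def track_offset(img0, img1, row, col, chip_sizes=(16, 32, 64), search=24):
    """Median (dy, dx) offset of the feature at (row, col) across several chip sizes."""
    offsets = []
    for c in chip_sizes:
        h = c // 2
        chip = img0[row - h:row + h, col - h:col + h]
        win = img1[row - h - search:row + h + search, col - h - search:col + h + search]
        corr = match_template(win, chip)          # normalized cross-correlation surface
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        offsets.append((peak[0] - search, peak[1] - search))
    return np.median(np.array(offsets), axis=0)

rng = np.random.default_rng(0)
img0 = rng.normal(size=(300, 300))
img1 = np.roll(img0, shift=(3, 5), axis=(0, 1))   # simulate a 3 px / 5 px displacement
print(track_offset(img0, img1, 150, 150))          # approximately [3. 5.]
```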
Inflammation, homocysteine and carotid intima-media thickness.
Baptista, Alexandre P; Cacdocar, Sanjiva; Palmeiro, Hugo; Faísca, Marília; Carrasqueira, Herménio; Morgado, Elsa; Sampaio, Sandra; Cabrita, Ana; Silva, Ana Paula; Bernardo, Idalécio; Gome, Veloso; Neves, Pedro L
2008-01-01
Cardiovascular disease is the main cause of morbidity and mortality in chronic renal patients. Carotid intima-media thickness (CIMT) is one of the most accurate markers of atherosclerosis risk. In this study, the authors set out to evaluate a population of chronic renal patients to determine which factors are associated with an increase in intima-media thickness. We included 56 patients (F=22, M=34), with a mean age of 68.6 years, and an estimated glomerular filtration rate of 15.8 ml/min (calculated by the MDRD equation). Various laboratory and inflammatory parameters (hsCRP, IL-6 and TNF-alpha) were evaluated. All subjects underwent measurement of internal carotid artery intima-media thickness by high-resolution real-time B-mode ultrasonography using a 10 MHz linear transducer. Intima-media thickness was used as a dependent variable in a simple linear regression model, with the various laboratory parameters as independent variables. Only parameters showing a significant correlation with CIMT were evaluated in a multiple regression model: age (p=0.001), hemoglobin (p=00.3), logCRP (p=0.042), logIL-6 (p=0.004) and homocysteine (p=0.002). In the multiple regression model we found that age (p=0.001) and homocysteine (p=0.027) were independently correlated with CIMT. LogIL-6 did not reach statistical significance (p=0.057), probably due to the small population size. The authors conclude that age and homocysteine correlate with carotid intima-media thickness, and thus can be considered as markers/risk factors in chronic renal patients.
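The two-step regression strategy described above (screening candidates by simple linear regression, then a multiple regression on the retained variables) can be sketched as follows; the synthetic data, the candidate variable list, and the 0.05 screening threshold are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative synthetic data standing in for a cohort of 56 patients
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "age": rng.normal(68.6, 8, 56),
    "homocysteine": rng.normal(20, 5, 56),
    "logIL6": rng.normal(1.0, 0.4, 56),
})
df["CIMT"] = 0.5 + 0.004 * df["age"] + 0.006 * df["homocysteine"] + rng.normal(0, 0.05, 56)

# Step 1: screen candidates with simple linear regressions
keep = [c for c in ["age", "homocysteine", "logIL6"]
        if sm.OLS(df["CIMT"], sm.add_constant(df[c])).fit().pvalues[c] < 0.05]

# Step 2: multiple regression on the screened variables
model = sm.OLS(df["CIMT"], sm.add_constant(df[keep])).fit()
print(model.summary())
```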
Cannabis use by individuals with multiple sclerosis: effects on specific immune parameters.
Sexton, Michelle; Cudaback, Eiron; Abdullah, Rehab A; Finnell, John; Mischley, Laurie K; Rozga, Mary; Lichtman, Aron H; Stella, Nephi
2014-10-01
Cannabinoids affect immune responses in ways that may be beneficial for autoimmune diseases. We sought to determine whether chronic Cannabis use differentially modulates a select number of immune parameters in healthy controls and individuals with multiple sclerosis (MS cases). Subjects were enrolled and consented to a single blood draw, matched for age and BMI. We measured monocyte migration isolated from each subject, as well as plasma levels of endocannabinoids and cytokines. Cases met definition of MS by international diagnostic criteria. Monocyte cell migration measured in control subjects and individuals with MS was similarly inhibited by a set ratio of phytocannabinoids. The plasma levels of CCL2 and IL17 were reduced in non-naïve cannabis users irrespective of the cohorts. We detected a significant increase in the endocannabinoid arachidonoylethanolamine (AEA) in serum from individuals with MS compared to control subjects, and no significant difference in levels of other endocannabinoids and signaling lipids irrespective of Cannabis use. Chronic Cannabis use may affect the immune response to similar extent in individuals with MS and control subjects through the ability of phytocannabinoids to reduce both monocyte migration and cytokine levels in serum. From a panel of signaling lipids, only the levels of AEA are increased in individuals with MS, irrespective of Cannabis use or not. Our results suggest that both MS cases and controls respond similarly to chronic Cannabis use with respect to the immune parameters measured in this study.
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definition of probability weight and FRPN and results in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
Optimal correction and design parameter search by modern methods of rigorous global optimization
NASA Astrophysics Data System (ADS)
Makino, K.; Berz, M.
2011-07-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle optics for the computation of aberrations allow the determination of particularly sharp underestimators for large regions. As a consequence, the subsequent progressive pruning of the allowed search space as part of the optimization progresses is carried out particularly effectively. The end result is the rigorous determination of the single or multiple optimal solutions of the parameter optimization, regardless of their location, their number, and the starting values of optimization. The methods are particularly powerful if executed in interplay with genetic optimizers generating their new populations within the currently active unpruned space. Their current best guess provides rigorous upper bounds of the minima, which can then beneficially be used for better pruning. Examples of the method and its performance will be presented, including the determination of all operating points of desired tunes or chromaticities, etc. in storage ring lattices.
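To make the branch-and-bound idea concrete, here is a toy one-dimensional sketch that prunes boxes whose rigorous lower bound exceeds the best known upper bound; the crude interval underestimator used here only stands in for the much sharper Differential-Algebra-based underestimators discussed in the abstract, and the objective function is invented.

```python
import numpy as np

def f(x):
    return np.sin(3 * x) + 0.3 * x**2            # objective with several local minima

def lower_bound(a, b):
    # Crude interval enclosure: sin(.) >= -1, and 0.3*x^2 >= 0.3*min(x^2) on [a, b]
    x2_min = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return -1.0 + 0.3 * x2_min

def branch_and_bound(a, b, tol=1e-3):
    best = min(f(a), f(b), f(0.5 * (a + b)))     # upper bound from sampled points
    boxes, minima = [(a, b)], []
    while boxes:
        lo, hi = boxes.pop()
        if lower_bound(lo, hi) > best:           # prune: box cannot contain the optimum
            continue
        mid = 0.5 * (lo + hi)
        best = min(best, f(mid))
        if hi - lo < tol:
            minima.append((lo, hi))              # surviving candidate region
        else:
            boxes += [(lo, mid), (mid, hi)]
    return best, minima

best, regions = branch_and_bound(-3.0, 3.0)
print("upper bound on global minimum:", best, "| surviving boxes:", len(regions))
```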
Low Cost and Efficient 3d Indoor Mapping Using Multiple Consumer Rgb-D Cameras
NASA Astrophysics Data System (ADS)
Chen, C.; Yang, B. S.; Song, S.
2016-06-01
Driven by the miniaturization and lightweight design of positioning and remote sensing sensors, as well as the urgent needs for fusing indoor and outdoor maps for next generation navigation, 3D indoor mapping from mobile scanning is a hot research and application topic. The point clouds with auxiliary data such as colour and infrared images derived from a 3D indoor mobile mapping suite can be used in a variety of novel applications, including indoor scene visualization, automated floorplan generation, gaming, reverse engineering, navigation, simulation and so on. State-of-the-art 3D indoor mapping systems equipped with multiple laser scanners produce accurate point clouds of building interiors containing billions of points. However, these laser scanner based systems are mostly expensive and not portable. Low cost consumer RGB-D cameras provide an alternative way to solve the core challenge of indoor mapping, that is, capturing the detailed underlying geometry of building interiors. Nevertheless, RGB-D cameras have a very limited field of view, resulting in low efficiency in the data collecting stage and incomplete datasets that miss major building structures (e.g. ceilings, walls). Attempting to collect a complete scene without data gaps using a single RGB-D camera is not technically sound because of the large amount of human labour and the number of position parameters that need to be solved. To find an efficient and low cost way to solve the 3D indoor mapping problem, in this paper we present an indoor mapping suite prototype that is built upon a novel calibration method which calibrates the internal and external parameters of multiple RGB-D cameras. Three Kinect sensors are mounted on a rig with different view directions to form a large field of view. The calibration procedure is threefold: (1) the internal parameters of the colour and infrared camera inside each Kinect are calibrated using a chessboard pattern; (2) the external parameters between the colour and infrared camera inside each Kinect are calibrated using a chessboard pattern; (3) the external parameters between the Kinects are first calculated using a pre-set calibration field and further refined by an iterative closest point algorithm. Experiments are carried out to validate the proposed method on RGB-D datasets collected by the indoor mapping suite prototype. The effectiveness and accuracy of the proposed method are evaluated by comparing the point clouds derived from the prototype with ground truth data collected by a commercial terrestrial laser scanner at ultra-high density. The overall analysis of the results shows that the proposed method achieves seamless integration of multiple point clouds from different RGB-D cameras collected at 30 frames per second.
Next-Generation Lightweight Mirror Modeling Software
NASA Technical Reports Server (NTRS)
Arnold, William R., Sr.; Fitzgerald, Mathew; Rosa, Rubin Jaca; Stahl, Phil
2013-01-01
The advances in manufacturing techniques for lightweight mirrors, such as EXELSIS deep core low temperature fusion, Corning's continued improvements in the Frit bonding process and the ability to cast large complex designs, combined with water-jet and conventional diamond machining of glasses and ceramics, have created the need for more efficient means of generating finite element models of these structures. Traditional methods of assembling 400,000+ element models can take weeks of effort, severely limiting the range of possible optimization variables. This paper will introduce model generation software developed under NASA sponsorship for the design of both terrestrial and space based mirrors. The software deals with any current mirror manufacturing technique, single substrates, multiple arrays of substrates, as well as the ability to merge submodels into a single large model. The modeler generates both mirror and suspension system elements; suspensions can be created either for each individual petal or for the whole mirror. A typical model generation of 250,000 nodes and 450,000 elements takes only 5-10 minutes, much of that time being variable input time. The program can create input decks for ANSYS, ABAQUS and NASTRAN. An archive/retrieval system permits creation of complete trade studies, varying cell size, depth, petal size and suspension geometry, with the ability to recall a particular set of parameters and make small or large changes with ease. The input decks created by the modeler are text files which can be modified by any editor; all the key shell thickness parameters are accessible, and comments in the deck identify which groups of elements are associated with these parameters. This again makes optimization easier. With ANSYS decks, the nodes representing support attachments are grouped into components; in ABAQUS these are SETS and in NASTRAN GRIDPOINT SETS, which makes integration of these models into large telescope or satellite models possible.
Development of reference equations for spirometry in Japanese children aged 6-18 years.
Takase, Masato; Sakata, Hiroshi; Shikada, Masahiro; Tatara, Katsuyoshi; Fukushima, Takayoshi; Miyakawa, Tomoo
2013-01-01
Spirometry is the most widely used pulmonary function test and the measured values of spirometric parameters need to be evaluated using reference values predicted for the corresponding race, sex, age, and height. However, none of the existing reference equations for Japanese children covers the entire age range of 6-18 years. The Japanese Society of Pediatric Pulmonology had organized a working group in 2006, in order to develop a new set of national standard reference equations for commonly used spirometric parameters that are applicable through the age range of 6-18 years. Quality assured spirometric data were collected through 2006-2008, from 14 institutions in Japan. We applied multiple regression analysis, using age in years (A), square of age (A(2)), height in meters (H), square of height (H(2)), and the product of age and height (AH) as explanatory variables to predict forced vital capacity (FVC), forced expiratory volume in 1 sec (FEV(1)), peak expiratory flow (PEF), forced expiratory flow between 25% and 75% of the FVC (FEF(25-75%)), instantaneous forced expiratory flow when 50% (FEF(50%)) or 75% (FEF(75%)) of the FVC have been expired. Finally, 1,296 tests (674 boys, 622 girls) formed the reference data set. Distributions of the percent predicted values did not differ by ages, confirming excellent fit of the prediction equations throughout the entire age range from 6 to 18 years. Cut-off values (around 5 percentile points) for the parameters were also determined. We recommend the use of this new set of prediction equations together with suggested cut-off values, for assessment of spirometry in Japanese children and adolescents. Copyright © 2012 Wiley Periodicals, Inc.
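A minimal sketch of fitting one such reference equation by ordinary least squares, using the explanatory variables named above (A, A², H, H², AH), is given below; the sample, the coefficients used to simulate FVC, and the age/height ranges are placeholders, not the study data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
A = rng.uniform(6, 18, n)                       # age in years
H = rng.uniform(1.1, 1.9, n)                    # height in metres
FVC = -4.0 + 0.05 * A + 3.5 * H**2 + 0.08 * A * H + rng.normal(0, 0.2, n)  # synthetic

# Design matrix with intercept, A, A^2, H, H^2 and A*H
X = np.column_stack([np.ones(n), A, A**2, H, H**2, A * H])
coef, *_ = np.linalg.lstsq(X, FVC, rcond=None)

predicted = X @ coef
percent_predicted = 100 * FVC / predicted       # the quantity whose distribution is checked by age
print("coefficients:", np.round(coef, 3))
```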
Sjölin, Maria; Edmund, Jens Morgenthaler
2016-07-01
Dynamic treatment planning algorithms use a dosimetric leaf separation (DLS) parameter to model the multi-leaf collimator (MLC) characteristics. Here, we quantify the dosimetric impact of an incorrect DLS parameter and investigate whether common pretreatment quality assurance (QA) methods can detect this effect. Sixteen treatment plans with intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT) technique for multiple treatment sites were calculated with a correct and an incorrect setting of the DLS, corresponding to an MLC gap difference of 0.5 mm. Pretreatment verification QA was performed with a bi-planar diode array phantom and the electronic portal imaging device (EPID). Measurements were compared to the correct and incorrect planned doses using gamma evaluation with both global (G) and local (L) normalization. Correlation, specificity and sensitivity between the dose volume histogram (DVH) points for the planning target volume (PTV) and the gamma passing rates were calculated. The changes in PTV and organ-at-risk DVH parameters were 0.4-4.1%. Good correlation (>0.83) between the PTV mean dose deviation and measured gamma passing rates was observed. Optimal gamma settings of 3%L/3mm (per beam and composite plan) and 3%G/2mm (composite plan) for the diode array phantom and 2%G/2mm (composite plan) for the EPID system were found. Global normalization and per-beam ROC analysis of the diode array phantom showed an area under the curve <0.6. A DLS error degrades the pretreatment QA gamma results with reasonable credibility for the composite plan, while a low detectability was demonstrated for a 3%G/3mm per-beam gamma setting. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
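For orientation, a rough one-dimensional sketch of the gamma evaluation used to compare measured and planned dose profiles follows; clinical QA systems work on 2D/3D dose grids with vendor implementations, so this is illustrative only, and the profiles, criteria, and the simulated shift mimicking a leaf-gap error are assumptions.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_crit=0.03, dist_crit=2.0, local=False):
    """Per-point gamma index for two dose profiles sampled on the same axis x (mm)."""
    gammas = np.empty_like(ref_dose)
    for i, (xr, dr) in enumerate(zip(x, ref_dose)):
        denom = dr * dose_crit if local else ref_dose.max() * dose_crit
        dose_term = ((eval_dose - dr) / denom) ** 2
        dist_term = ((x - xr) / dist_crit) ** 2
        gammas[i] = np.sqrt(np.min(dose_term + dist_term))
    return gammas

x = np.linspace(-50, 50, 201)
planned = np.exp(-(x / 30.0) ** 2)
measured = np.exp(-((x - 0.5) / 30.5) ** 2)      # small shift mimicking a leaf-gap error

g = gamma_1d(planned, measured, x, dose_crit=0.03, dist_crit=2.0, local=False)
print("passing rate (3%G/2mm):", 100 * np.mean(g <= 1.0), "%")
```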
Reconstructing Folding Energy Landscapes by Single-Molecule Force Spectroscopy
Woodside, Michael T.; Block, Steven M.
2015-01-01
Folding may be described conceptually in terms of trajectories over a landscape of free energies corresponding to different molecular configurations. In practice, energy landscapes can be difficult to measure. Single-molecule force spectroscopy (SMFS), whereby structural changes are monitored in molecules subjected to controlled forces, has emerged as a powerful tool for probing energy landscapes. We summarize methods for reconstructing landscapes from force spectroscopy measurements under both equilibrium and nonequilibrium conditions. Other complementary, but technically less demanding, methods provide a model-dependent characterization of key features of the landscape. Once reconstructed, energy landscapes can be used to study critical folding parameters, such as the characteristic transition times required for structural changes and the effective diffusion coefficient setting the timescale for motions over the landscape. We also discuss issues that complicate measurement and interpretation, including the possibility of multiple states or pathways and the effects of projecting multiple dimensions onto a single coordinate. PMID:24895850
Falk, Carl F; Cai, Li
2016-06-01
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
2D data-space cross-gradient joint inversion of MT, gravity and magnetic data
NASA Astrophysics Data System (ADS)
Pak, Yong-Chol; Li, Tonglin; Kim, Gang-Sop
2017-08-01
We have developed a data-space multiple cross-gradient joint inversion algorithm, validated it through synthetic tests, and applied it to magnetotelluric (MT), gravity and magnetic datasets acquired along a 95 km profile in the Benxi-Ji'an area of northeastern China. To begin, we discuss a generalized cross-gradient joint inversion for multiple datasets and sets of model parameters, and formulate it in data space. The Lagrange multiplier required for the structural coupling in the data-space method is determined using an iterative solver to avoid computing an inverse matrix when solving the large system of equations. Next, using the model-space and data-space methods, we inverted the synthetic data and the field data. Based on our results, the joint inversion in data space not only delineates geological bodies more clearly than separate inversion, but also yields results nearly equal to those of the model-space formulation while consuming much less memory.
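On a 2D (x, z) section the cross-gradient constraint reduces to the scalar t = ∂m1/∂x ∂m2/∂z − ∂m1/∂z ∂m2/∂x, which vanishes wherever the two models share structure. A minimal sketch of evaluating this quantity on a grid is given below, with synthetic model sections standing in for, say, log-resistivity and density; the grid, anomaly, and values are invented.

```python
import numpy as np

def cross_gradient_2d(m1, m2, dx=1.0, dz=1.0):
    """Scalar cross-gradient t = dm1/dx * dm2/dz - dm1/dz * dm2/dx on a 2D grid."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

# Two synthetic model sections sharing a single anomalous body
z, x = np.mgrid[0:60, 0:100]
body = ((x - 50) ** 2 / 400 + (z - 30) ** 2 / 100) < 1.0
m_res = 2.0 + 1.0 * body          # stand-in for log-resistivity
m_den = 2.67 + 0.15 * body        # stand-in for density

t = cross_gradient_2d(m_res, m_den)
print("max |t| (zero means structurally consistent):", np.abs(t).max())
```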
Peng, Ran; Li, Dongqing
2016-10-07
The ability to create reproducible and inexpensive nanofluidic chips is essential to the fundamental research and applications of nanofluidics. This paper presents a novel and cost-effective method for fabricating a single nanochannel or multiple nanochannels in PDMS chips with controllable channel size and spacing. Single nanocracks or nanocrack arrays, positioned by artificial defects, are first generated on a polystyrene surface with controllable size and spacing by a solvent-induced method. Two sets of optimal working parameters are developed to replicate the nanocracks onto the polymer layers to form the nanochannel molds. The nanochannel molds are used to make the bi-layer PDMS microchannel-nanochannel chips by simple soft lithography. An alignment system is developed for bonding the nanofluidic chips under an optical microscope. Using this method, high quality PDMS nanofluidic chips with a single nanochannel or multiple nanochannels of sub-100 nm width and height and centimeter length can be obtained with high repeatability.
Methods for consistent forewarning of critical events across multiple data channels
Hively, Lee M.
2006-11-21
This invention teaches further method improvements to forewarn of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves conversion of time-serial data into equiprobable symbols. A second improvement is a method to maximize the channel-consistent total-true rate of forewarning from a plurality of data channels over multiple data sets from the same patient or process. This total-true rate requires resolution of the forewarning indications into true positives, true negatives, false positives and false negatives. A third improvement is the use of various objective functions, as derived from the phase-space dissimilarity measures, to give the best forewarning indication. A fourth improvement uses various search strategies over the phase-space analysis parameters to maximize said objective functions. A fifth improvement shows the usefulness of the method for various biomedical and machine applications.
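The first improvement, conversion of time-serial data into equiprobable symbols, can be sketched as quantile binning; the symbol count and the synthetic channel below are illustrative choices, not values from the patent.

```python
import numpy as np

def equiprobable_symbols(x, n_symbols=5):
    """Map each sample to a symbol 0..n_symbols-1 so that every symbol is (about) equally likely."""
    # Bin edges placed at the empirical quantiles of the data itself
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)

rng = np.random.default_rng(0)
signal = rng.normal(size=10_000)             # stand-in for one biomedical/machine data channel
symbols = equiprobable_symbols(signal, n_symbols=5)
print(np.bincount(symbols) / symbols.size)   # roughly 0.2 per symbol
```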
Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy
2014-01-01
Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564
Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.
2005-01-01
The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.
DES Y1 Results: Validating Cosmological Parameter Estimation Using Simulated Dark Energy Surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacCrann, N.; et al.
We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realisations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the Ω_m-σ_8 plane. For one of the suites, we are able to show with high confidence that any biases in the inferred S_8 = σ_8(Ω_m/0.3)^0.5 and Ω_m are smaller than the DES Y1 1σ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered Ω_m and S_8 are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly more precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.
Arce, Pedro; Lagares, Juan Ignacio
2018-01-25
We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.
Camargo, Marcela R; Barela, José A; Nozabieli, Andréa J L; Mantovani, Alessandra M; Martinelli, Alessandra R; Fregonesi, Cristina E P T
2015-01-01
The aims of this study were to evaluate aspects of balance, ankle muscle strength and spatiotemporal gait parameters in individuals with diabetic peripheral neuropathy (DPN) and to verify whether deficits in spatiotemporal gait parameters were associated with ankle muscle strength and balance performance. Thirty individuals with DPN and 30 control individuals participated. Spatiotemporal gait parameters were evaluated by measuring the time to walk a set distance at self-selected and maximal walking speeds. Functional mobility and balance performance were assessed using the Functional Reach and the Timed Up and Go tests. Ankle isometric muscle strength was assessed with a handheld digital dynamometer. Analyses of variance were employed to verify possible differences between groups and conditions. Multiple linear regression analysis was employed to uncover possible predictors of gait deficits. Spatiotemporal gait parameters, functional mobility, balance performance and ankle muscle strength were affected in individuals with DPN. The Timed Up and Go test performance and ankle isometric muscle strength were associated with spatiotemporal gait changes, especially during the maximal walking speed condition. Functional mobility and balance performance are impaired in DPN, and balance performance and ankle muscle strength can be used to predict spatiotemporal gait parameters in individuals with DPN. Copyright © 2015 Diabetes India. Published by Elsevier Ltd. All rights reserved.
Khanna, Swati; Goyal, Arun; Moholkar, Vijayanand S
2013-01-01
This article addresses the issue of effect of fermentation parameters for conversion of glycerol (in both pure and crude form) into three value-added products, namely, ethanol, butanol, and 1,3-propanediol (1,3-PDO), by immobilized Clostridium pasteurianum and thereby addresses the statistical optimization of this process. The analysis of effect of different process parameters such as agitation rate, fermentation temperature, medium pH, and initial glycerol concentration indicated that medium pH was the most critical factor for total alcohols production in case of pure glycerol as fermentation substrate. On the other hand, initial glycerol concentration was the most significant factor for fermentation with crude glycerol. An interesting observation was that the optimized set of fermentation parameters was found to be independent of the type of glycerol (either pure or crude) used. At optimum conditions of agitation rate (200 rpm), initial glycerol concentration (25 g/L), fermentation temperature (30°C), and medium pH (7.0), the total alcohols production was almost equal in anaerobic shake flasks and 2-L bioreactor. This essentially means that at optimum process parameters, the scale of operation does not affect the output of the process. The immobilized cells could be reused for multiple cycles for both pure and crude glycerol fermentation.
NASA Astrophysics Data System (ADS)
Luna, Aderval S.; da Silva, Arnaldo P.; Ferré, Joan; Boqué, Ricard
This research work describes two studies for the classification and characterization of edible oils and their quality parameters through Fourier transform mid-infrared spectroscopy (FT-mid-IR) together with chemometric methods. The discrimination of canola, sunflower, corn and soybean oils was investigated using SVM-DA, SIMCA and PLS-DA. Using FT-mid-IR, DPLS was able to classify 100% of the samples from the validation set, but SIMCA and SVM-DA were not. The quality parameters refraction index and relative density of edible oils were obtained from reference methods. Prediction models for FT-mid-IR spectra were calculated for these quality parameters using partial least squares (PLS) and support vector machines (SVM). Several preprocessing alternatives (first derivative, multiplicative scatter correction, mean centering, and standard normal variate) were investigated. The best result for the refraction index was achieved with SVM, as was the case for the relative density except when the preprocessing combination of mean centering and first derivative was used. For both quality parameters, the best results obtained for the figures of merit expressed by the root mean square error of cross validation (RMSECV) and prediction (RMSEP) were equal to 0.0001.
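The figures of merit reported above can be illustrated with a minimal PLS sketch (synthetic spectra and a synthetic quality parameter; the array names and sizes are assumptions, not the study's data): RMSECV from cross-validation on the calibration set and RMSEP on a held-out prediction set.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                         # synthetic FT-mid-IR spectra (60 samples x 200 variables)
y = X[:, :5].sum(axis=1) + 0.01 * rng.normal(size=60)  # synthetic quality parameter (e.g. refraction index)
X_cal, y_cal, X_test, y_test = X[:40], y[:40], X[40:], y[40:]

pls = PLSRegression(n_components=5)                    # mean centering is applied internally
y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y_cal - y_cv) ** 2))

pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_test).ravel()
rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
print(f"RMSECV = {rmsecv:.4f}, RMSEP = {rmsep:.4f}")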
López, Iván; Borzacconi, Liliana
2010-10-01
A model based on the work of Angelidaki et al. (1993) was applied to simulate the anaerobic biodegradation of ruminal contents. In this study, two fractions of solids with different biodegradation rates were considered. First-order kinetics were used for the easily biodegradable fraction, and a kinetic expression that is a function of the extracellular enzyme concentration was used for the slowly biodegradable fraction. Batch experiments were performed to obtain an accumulated methane curve that was then used to obtain the model parameters. For this determination, a methodology derived from the "multiple-shooting" method was successfully used. Monte Carlo simulations allowed a confidence range to be obtained for each parameter. Simulations of a continuous reactor were performed using the optimal set of model parameters. The final steady states were determined as functions of the operational conditions (solids load and residence time). The simulations showed that methane flow peaked at 0.5-0.8 Nm³/d per m³ of reactor at a residence time of 10-20 days. Simulations allow the adequate selection of operating conditions of a continuous reactor. (c) 2010 Elsevier Ltd. All rights reserved.
Spectral gap optimization of order parameters for sampling complex molecular systems
Tiwary, Pratyush; Berne, B. J.
2016-01-01
In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
Biostatistics Series Module 3: Comparing Groups: Numerical Variables.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Numerical data that are normally distributed can be analyzed with parametric tests, that is, tests which are based on the parameters that define a normal distribution curve. If the distribution is uncertain, the data can be plotted as a normal probability plot and visually inspected, or tested for normality using one of a number of goodness of fit tests, such as the Kolmogorov-Smirnov test. The widely used Student's t-test has three variants. The one-sample t-test is used to assess if a sample mean (as an estimate of the population mean) differs significantly from a given population mean. The means of two independent samples may be compared for a statistically significant difference by the unpaired or independent samples t-test. If the data sets are related in some way, their means may be compared by the paired or dependent samples t-test. The t-test should not be used to compare the means of more than two groups. Although it is possible to compare groups in pairs, when there are more than two groups, this will increase the probability of a Type I error. The one-way analysis of variance (ANOVA) is employed to compare the means of three or more independent data sets that are normally distributed. Multiple measurements from the same set of subjects cannot be treated as separate, unrelated data sets. Comparison of means in such a situation requires repeated measures ANOVA. It is to be noted that while a multiple group comparison test such as ANOVA can point to a significant difference, it does not identify exactly between which two groups the difference lies. To do this, multiple group comparison needs to be followed up by an appropriate post hoc test. An example is the Tukey's honestly significant difference test following ANOVA. If the assumptions for parametric tests are not met, there are nonparametric alternatives for comparing data sets. These include Mann-Whitney U-test as the nonparametric counterpart of the unpaired Student's t-test, Wilcoxon signed-rank test as the counterpart of the paired Student's t-test, Kruskal-Wallis test as the nonparametric equivalent of ANOVA and the Friedman's test as the counterpart of repeated measures ANOVA.
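A compact SciPy sketch of the tests listed above, applied to hypothetical data (group sizes and means are made up); each line maps one statement of the module to a library call:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b, c = (rng.normal(loc=m, scale=1.0, size=30) for m in (5.0, 5.5, 6.0))  # three hypothetical groups

stats.kstest((a - a.mean()) / a.std(ddof=1), "norm")  # Kolmogorov-Smirnov goodness-of-fit test for normality
stats.ttest_1samp(a, popmean=5.0)                     # one-sample t-test against a given population mean
stats.ttest_ind(a, b)                                 # unpaired / independent-samples t-test
stats.ttest_rel(a, b)                                 # paired / dependent-samples t-test
stats.f_oneway(a, b, c)                               # one-way ANOVA for three or more groups
stats.tukey_hsd(a, b, c)                              # post hoc Tukey HSD (available in recent SciPy versions)
stats.mannwhitneyu(a, b)                              # nonparametric counterpart of the unpaired t-test
stats.wilcoxon(a, b)                                  # nonparametric counterpart of the paired t-test
stats.kruskal(a, b, c)                                # nonparametric equivalent of one-way ANOVA
stats.friedmanchisquare(a, b, c)                      # counterpart of repeated measures ANOVA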
Riley, Richard D; Ensor, Joie; Jackson, Dan; Burke, Danielle L
2017-01-01
Many meta-analysis models contain multiple parameters, for example due to multiple outcomes, multiple treatments or multiple regression coefficients. In particular, meta-regression models may contain multiple study-level covariates, and one-stage individual participant data meta-analysis models may contain multiple patient-level covariates and interactions. Here, we propose how to derive percentage study weights for such situations, in order to reveal the (otherwise hidden) contribution of each study toward the parameter estimates of interest. We assume that studies are independent, and utilise a decomposition of Fisher's information matrix to decompose the total variance matrix of parameter estimates into study-specific contributions, from which percentage weights are derived. This approach generalises how percentage weights are calculated in a traditional, single parameter meta-analysis model. Application is made to one- and two-stage individual participant data meta-analyses, meta-regression and network (multivariate) meta-analysis of multiple treatments. These reveal percentage study weights toward clinically important estimates, such as summary treatment effects and treatment-covariate interactions, and are especially useful when some studies are potential outliers or at high risk of bias. We also derive percentage study weights toward methodologically interesting measures, such as the magnitude of ecological bias (difference between within-study and across-study associations) and the amount of inconsistency (difference between direct and indirect evidence in a network meta-analysis).
NASA Astrophysics Data System (ADS)
Paasche, H.; Tronicke, J.
2012-04-01
In many near-surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the model currently found by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
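For orientation, here is a bare single-objective PSO sketch (toy quadratic misfit, illustrative constants); the maximin-based leader selection for multiple objectives described above is not reproduced here:

import numpy as np

def misfit(m):
    # Toy data-misfit surrogate; a real application would forward-model the tomographic data.
    return np.sum((m - np.array([1.5, -0.5])) ** 2, axis=-1)

rng = np.random.default_rng(42)
n_particles, n_dim, n_iter = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5                                # inertia and acceleration constants

x = rng.uniform(-5, 5, size=(n_particles, n_dim))        # particle positions = candidate models
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), misfit(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, n_dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = misfit(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]                    # swarm leader = currently most successful particle

print("best model:", gbest, "misfit:", float(misfit(gbest)))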
Map based navigation for autonomous underwater vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuohy, S.T.; Leonard, J.J.; Bellingham, J.G.
1995-12-31
In this work, a map based navigation algorithm is developed wherein measured geophysical properties are matched to a priori maps. The objective is a complete algorithm applicable to a small, power-limited AUV which performs in real time to a required resolution with bounded position error. Interval B-Splines are introduced for the non-linear representation of two-dimensional geophysical parameters that have measurement uncertainty. Fine-scale position determination involves the solution of a system of nonlinear polynomial equations with interval coefficients. This system represents the complete set of possible vehicle locations and is formulated as the intersection of contours established on each map from the simultaneous measurement of associated geophysical parameters. A standard filter mechanism, based on a bounded interval error model, predicts the position of the vehicle and, therefore, screens extraneous solutions. When multiple solutions are found, a tracking mechanism is applied until a unique vehicle location is determined.
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.
Adaptive method for electron bunch profile prediction
Scheinker, Alexander; Gessner, Spencer
2015-10-15
We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. Thus, the simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. Finally, the main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET.
Di Francesco, Fabrizio; De Marco, Gennaro; Scognamiglio, Fabio; Aruta, Valeria; Itro, Angelo
2017-01-01
Complex periprosthetic cases are considered challenges by clinicians. Clinical and radiographic parameters should be considered separately to make the right choice between an endodontically or periodontally compromised treated tooth and an implant. Therefore, in order to decide whether the tooth is safe or not, the data that have to be collected are specific parameters of both the patient and the clinician. In addition, the presence of periodontal, prosthetic, and orthodontic diseases requires that patients be managed with a multidisciplinary approach. The aim of this case report is to describe how the multidisciplinary approach could be the best way to manage difficult cases of implant-prosthetic rehabilitation. We also describe how to rehabilitate, with a fixed prosthesis on natural teeth and dental implants, a smoker patient who presents with active periodontitis, multiple edentulous areas, dental malocclusion, and severe aesthetic problems. PMID:28421148
NASA Technical Reports Server (NTRS)
Poosti, Sassaneh; Akopyan, Sirvard; Sakurai, Regina; Yun, Hyejung; Saha, Pranjit; Strickland, Irina; Croft, Kevin; Smith, Weldon; Hoffman, Rodney; Koffend, John;
2006-01-01
TES Level 2 Subsystem is a set of computer programs that performs functions complementary to those of the program summarized in the immediately preceding article. TES Level-2 data pertain to retrieved species (or temperature) profiles, and errors thereof. Geolocation, quality, and other data (e.g., surface characteristics for nadir observations) are also included. The subsystem processes gridded meteorological information and extracts parameters that can be interpolated to the appropriate latitude, longitude, and pressure level based on the date and time. Radiances are simulated using the aforementioned meteorological information for initial guesses, and spectroscopic-parameter tables are generated. At each step of the retrieval, a nonlinear-least-squares-solving routine is run over multiple iterations, retrieving a subset of atmospheric constituents, and error analysis is performed. Scientific TES Level-2 data products are written in a format known as Hierarchical Data Format Earth Observing System 5 (HDF-EOS 5) for public distribution.
NASA Technical Reports Server (NTRS)
Drusano, George L.
1991-01-01
The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
Adaptive method for electron bunch profile prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheinker, Alexander; Gessner, Spencer
2015-10-01
We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
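A minimal control-variates sketch (one uncertain parameter, synthetic stand-ins for the PQI and the control variate; not the repository model described above):

import numpy as np

rng = np.random.default_rng(3)
n = 2000
k = rng.lognormal(mean=0.0, sigma=0.5, size=n)       # uncertain parameter, e.g. a conductivity

pqi = np.exp(-1.0 / k)                               # "expensive" quantity of interest (stand-in)
cv = 1.0 / k                                         # control variate: cheap quantity with known mean
cv_mean = np.exp(0.5 ** 2 / 2.0)                     # E[1/k] = exp(sigma^2/2) for k ~ lognormal(0, 0.5)

beta = np.cov(pqi, cv)[0, 1] / np.var(cv, ddof=1)    # near-optimal control-variate coefficient
est_plain = pqi.mean()
est_cv = pqi.mean() - beta * (cv.mean() - cv_mean)   # variance-reduced estimate of E(PQI)
print(f"plain MC: {est_plain:.5f}   control variates: {est_cv:.5f}")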
Szczecinski, Nicholas S.; Hunt, Alexander J.; Quinn, Roger D.
2017-01-01
A dynamical model of an animal’s nervous system, or synthetic nervous system (SNS), is a potentially transformational control method. Due to increasingly detailed data on the connectivity and dynamics of both mammalian and insect nervous systems, controlling a legged robot with an SNS is largely a problem of parameter tuning. Our approach to this problem is to design functional subnetworks that perform specific operations, and then assemble them into larger models of the nervous system. In this paper, we present networks that perform addition, subtraction, multiplication, division, differentiation, and integration of incoming signals. Parameters are set within each subnetwork to produce the desired output by utilizing the operating range of neural activity, R, the gain of the operation, k, and bounds based on biological values. The assembly of large networks from functional subnetworks underpins our recent results with MantisBot. PMID:28848419
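A deliberately simplified sketch of the subnetwork idea (first-order activation dynamics driven toward a clipped weighted sum of inputs, with made-up constants; the paper's conductance-based neuron and synapse models are not reproduced): with gain k and operating range R, two excitatory inputs approximate addition and an excitatory/inhibitory pair approximates subtraction.

import numpy as np

R, k, dt, tau = 20.0, 1.0, 1e-3, 5e-3          # operating range, gain, time step, time constant (illustrative)

def step(u, inputs, weights):
    # One Euler step of a first-order neuron relaxing toward a weighted sum of its inputs.
    u_inf = np.clip(np.dot(weights, inputs), 0.0, R)   # steady-state target, clipped to [0, R]
    return u + dt * (u_inf - u) / tau

u_add, u_sub = 0.0, 0.0
a1, a2 = 8.0, 5.0                              # presynaptic activations within [0, R]
for _ in range(100):
    u_add = step(u_add, [a1, a2], [k, k])      # addition subnetwork: two excitatory inputs
    u_sub = step(u_sub, [a1, a2], [k, -k])     # subtraction subnetwork: excitatory plus inhibitory input
print(round(u_add, 2), round(u_sub, 2))        # approaches ~13.0 and ~3.0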
Analysis of rocket flight stability based on optical image measurement
NASA Astrophysics Data System (ADS)
Cui, Shuhua; Liu, Junhu; Shen, Si; Wang, Min; Liu, Jun
2018-02-01
Based on the abundant image data available from optical measurement systems, this paper puts forward a method of evaluating rocket flight stability using measurements of the carrier rocket's imaged characteristics. Building on this measurement method, the attitude parameters of the rocket body in the coordinate system are calculated from the measurement data of multiple high-speed television cameras; these parameters are then converted to the rocket body's angle of attack, and it is assessed whether the rocket has good flight stability by checking that it flies with a small angle of attack. The measurement method and the mathematical algorithm were verified through data processing tests, in which the rocket's flight stability state can be observed intuitively and guidance system faults can be identified visually.
Organic ferroelectric evaporator with substrate cooling and in situ transport capabilities.
Foreman, K; Labedz, C; Shearer, M; Adenwalla, S
2014-04-01
We report on the design, operation, and performance of a thermal evaporation chamber capable of evaporating organic thin films. Organic thin films are employed in a diverse range of devices and can provide insight into fundamental physical phenomena. However, growing organic thin films is often challenging and requires very specific deposition parameters. The chamber presented here is capable of cooling sample substrates to temperatures below 130 K and allows for the detachment of the sample from the cooling stage and in situ transport. This permits the use of multiple deposition techniques in separate, but connected, deposition chambers without breaking vacuum and therefore provides clean, well characterized interfaces between the organic thin film and any adjoining layers. We also demonstrate a successful thin film deposition of an organic material with a demanding set of deposition parameters, showcasing the success of this design.
Optimization principles and the figure of merit for triboelectric generators.
Peng, Jun; Kang, Stephen Dongmin; Snyder, G Jeffrey
2017-12-01
Energy harvesting with triboelectric nanogenerators is a burgeoning field, with a growing portfolio of creative application schemes attracting much interest. Although power generation capabilities and its optimization are one of the most important subjects, a satisfactory elemental model that illustrates the basic principles and sets the optimization guideline remains elusive. We use a simple model to clarify how the energy generation mechanism is electrostatic induction but with a time-varying character that makes the optimal matching for power generation more restrictive. By combining multiple parameters into dimensionless variables, we pinpoint the optimum condition with only two independent parameters, leading to predictions of the maximum limit of power density, which allows us to derive the triboelectric material and device figure of merit. We reveal the importance of optimizing device capacitance, not only load resistance, and minimizing the impact of parasitic capacitance. Optimized capacitances can lead to an overall increase in power density of more than 10 times.
End-of-winter snow depth variability on glaciers in Alaska
NASA Astrophysics Data System (ADS)
McGrath, Daniel; Sass, Louis; O'Neel, Shad; Arendt, Anthony; Wolken, Gabriel; Gusmeroli, Alessio; Kienholz, Christian; McNeil, Christopher
2015-08-01
A quantitative understanding of snow thickness and snow water equivalent (SWE) on glaciers is essential to a wide range of scientific and resource management topics. However, robust SWE estimates are observationally challenging, in part because SWE can vary abruptly over short distances in complex terrain due to interactions between topography and meteorological processes. In spring 2013, we measured snow accumulation on several glaciers around the Gulf of Alaska using both ground- and helicopter-based ground-penetrating radar surveys, complemented by extensive ground truth observations. We found that SWE can be highly variable (40% difference) over short spatial scales (tens to hundreds of meters), especially in the ablation zone where the underlying ice surfaces are typically rough. Elevation provides the dominant basin-scale influence on SWE, with gradients ranging from 115 to 400 mm/100 m. Regionally, total accumulation and the accumulation gradient are strongly controlled by a glacier's distance from the coastal moisture source. Multiple linear regressions, used to calculate distributed SWE fields, show that robust results require adequate sampling of the true distribution of multiple terrain parameters. Final SWE estimates (comparable to winter balances) show reasonable agreement with both the Parameter-elevation Relationships on Independent Slopes Model climate data set (9-36% difference) and the U.S. Geological Survey Alaska Benchmark Glaciers (6-36% difference). All the glaciers in our study exhibit substantial sensitivity to changing snow-rain fractions, regardless of their location in a coastal or continental climate. While process-based SWE projections remain elusive, the collection of ground-penetrating radar (GPR)-derived data sets provides a greatly enhanced perspective on the spatial distribution of SWE and will pave the way for future work that may eventually allow such projections.
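A small sketch of the regression step described above (synthetic terrain and SWE values; a real workflow would use the GPR-derived SWE observations and gridded terrain parameters):

import numpy as np

rng = np.random.default_rng(7)
n = 500
elev = rng.uniform(200, 2000, n)                   # elevation (m)
dist_coast = rng.uniform(5, 300, n)                # distance from the coastal moisture source (km)
# Synthetic "observed" SWE (mm): ~250 mm per 100 m of elevation gain, drying inland, plus noise.
swe = 500.0 + 2.5 * elev - 1.5 * dist_coast + rng.normal(0, 150, n)

X = np.column_stack([np.ones(n), elev, dist_coast])
coef, *_ = np.linalg.lstsq(X, swe, rcond=None)     # multiple linear regression
print("intercept, dSWE/d(elev) [mm/m], dSWE/d(dist) [mm/km]:", np.round(coef, 2))
swe_distributed = X @ coef                         # distributed SWE estimate at the sampled points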
NASA Astrophysics Data System (ADS)
Moin, Paymann; Ma, Kevin; Amezcua, Lilyana; Gertych, Arkadiusz; Liu, Brent
2009-02-01
Multiple sclerosis (MS) is a demyelinating disease of the central nervous system that affects approximately 2.5 million people worldwide. Magnetic resonance imaging (MRI) is an established tool for the assessment of disease activity, progression and response to treatment. The progression of the disease is variable and requires routine follow-up imaging studies. Currently, MRI quantification of multiple sclerosis requires a manual approach to lesion measurement and yields an estimate of lesion volume and interval change. In the setting of several prior studies and a long treatment history, trends related to treatment change quickly become difficult to extrapolate. Our efforts seek to develop an imaging-informatics-based MS lesion computer-aided detection (CAD) package to quantify and track MS lesions, including lesion load, volume, and location. Together with select clinical parameters, these data will be incorporated into an MS-specific e-Folder to provide decision support to evaluate and assess treatment options for MS in a manner tailored to an individual based on trends in MS presentation and progression.
NASA Astrophysics Data System (ADS)
Dong, Shuai; Yu, Shanshan; Huang, Zheng; Song, Shoutan; Shao, Xinxing; Kang, Xin; He, Xiaoyuan
2017-12-01
Multiple digital image correlation (DIC) systems can enlarge the measurement field without losing effective resolution in the area of interest (AOI). However, the results calculated in substereo DIC systems are located in its local coordinate system in most cases. To stitch the data obtained by each individual system, a data merging algorithm is presented in this paper for global measurement of multiple stereo DIC systems. A set of encoded targets is employed to assist the extrinsic calibration, of which the three-dimensional (3-D) coordinates are reconstructed via digital close range photogrammetry. Combining the 3-D targets with precalibrated intrinsic parameters of all cameras, the extrinsic calibration is significantly simplified. After calculating in substereo DIC systems, all data can be merged into a universal coordinate system based on the extrinsic calibration. Four stereo DIC systems are applied to a four point bending experiment of a steel reinforced concrete beam structure. Results demonstrate high accuracy for the displacement data merging in the overlapping field of views (FOVs) and show feasibility for the distributed FOVs measurement.
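Once a sub-system's extrinsic calibration is expressed as a rotation R and translation t into the universal frame, the merging step is a rigid transform of every locally reconstructed point; a minimal sketch with hypothetical extrinsics:

import numpy as np

def to_global(points_local, R, t):
    # Map Nx3 points from a sub-stereo-DIC system's local frame into the universal coordinate system.
    return points_local @ R.T + t

theta = np.deg2rad(15.0)                                  # hypothetical extrinsic calibration for one sub-system
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([120.0, -35.0, 10.0])                        # translation (mm)

pts_local = np.array([[0.0, 0.0, 0.0],
                      [12.5, 3.0, 0.2]])                  # points reconstructed by one local DIC analysis
print(to_global(pts_local, R, t))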
Mouradi, Rand; Desai, Nisarg; Erdemir, Ahmet; Agarwal, Ashok
2012-01-01
Recent studies have shown that exposing human semen samples to cell phone radiation leads to a significant decline in sperm parameters. In daily living, a cell phone is usually kept in proximity to the groin, such as in a trouser pocket, separated from the testes by multiple layers of tissue. The aim of this study was to calculate the distance between cell phone and semen sample to set up an in vitro experiment that can mimic real life conditions (cell phone in trouser pocket separated by multiple tissue layers). For this reason, a computational model of scrotal tissues was designed by considering these separating layers, the results of which were used in a series of simulations using the Finite Difference Time Domain (FDTD) method. To provide an equivalent effect of multiple tissue layers, these results showed that the distance between a cell phone and semen sample should be 0.8 cm to 1.8 cm greater than the anticipated distance between a cell phone and the testes.
NASA Astrophysics Data System (ADS)
Mahamood, Rasheedat M.; Akinlabi, Esther T.
2016-03-01
Ti6Al4V is an important titanium alloy used in many applications such as aerospace, petrochemical and medicine. Its excellent corrosion resistance, high strength-to-weight ratio and retention of properties at high temperature make it favoured in most applications. The high cost of titanium and its alloys, however, makes their use prohibitive in some applications. Ti6Al4V can be cladded on a less expensive material such as steel, thereby reducing cost while providing excellent properties. The Laser Metal Deposition (LMD) process, an additive manufacturing process, is capable of producing complex parts directly from the 3-D CAD model of the part and also has the capability of handling multiple materials. Processing parameters play an important role in the LMD process, and in order to achieve the desired results at minimum cost, the processing parameters need to be properly controlled. This paper investigates the role of the processing parameters laser power, scanning speed, powder flow rate and gas flow rate on the material utilization efficiency in laser metal deposited Ti6Al4V. A two-level full factorial design of experiments was used in this investigation to identify the processing parameters that are most significant as well as the interactions among them. Four process parameters were used, each with upper and lower settings, resulting in a combination of sixteen experiments. The laser power settings used were 1.8 and 3 kW, the scanning speed was 0.05 and 0.1 m/s, the powder flow rate was 2 and 4 g/min and the gas flow rate was 2 and 4 l/min. The experiments were designed and analyzed using the Design Expert 8 software. The software was used to generate the optimized process parameters, which were found to be a laser power of 3.2 kW, a scanning speed of 0.06 m/s, a powder flow rate of 2 g/min and a gas flow rate of 3 l/min.
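The two-level full factorial design described above (four factors at the stated low/high settings, 2^4 = 16 runs) can be enumerated directly; a short sketch:

from itertools import product

factors = {
    "laser_power_kW":    (1.8, 3.0),
    "scan_speed_m_s":    (0.05, 0.1),
    "powder_rate_g_min": (2.0, 4.0),
    "gas_rate_l_min":    (2.0, 4.0),
}

runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))   # 16 experimental runs
print(runs[0])     # e.g. all four factors at their low settings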
Migliore, Alberto; Integlia, Davide; Bizzi, Emanuele; Piaggio, Tomaso
2015-10-01
There are many different clinical, organizational and economic parameters to consider in order to have a complete assessment of the total impact of a pharmaceutical treatment. To follow a holistic approach aimed at providing an evaluation that embraces all clinical parameters in order to choose the best treatment, it is necessary to compare and weight multiple criteria. Therefore, a change is required: we need to move from a decision-making context based on the assessment of a single criterion towards a transparent and systematic framework enabling decision makers to assess all relevant parameters simultaneously in order to choose the best treatment to use. To apply the MCDA methodology to the clinical decision of which pharmaceutical treatment (or medical device) is best to treat a specific pathology, we suggest a specific application of Multiple Criteria Decision Analysis for the purpose: a Clinical Multi-criteria Decision Assessment (CMDA). In CMDA, results from both meta-analyses and observational studies are used by a clinical consensus after attributing weights to specific domains and related parameters. The decision results from a comparison of all consequences (i.e., efficacy, safety, adherence, administration route) behind the choice to use a specific pharmacological treatment. The match yields a score (in absolute value) that links each parameter with a specific intervention, and then a final score for each treatment. The higher the final score, the more appropriate the intervention is to treat the disease, considering all criteria (domains and parameters). The results allow the physician to evaluate the best clinical treatment for his patients considering at the same time all relevant criteria, such as clinical effectiveness for all parameters and administration route. The use of the CMDA model yields a clear and complete indication of the best pharmaceutical treatment to use for patients, helping physicians to choose drugs with a complete set of information input into the model. Copyright © 2015 Elsevier Ltd. All rights reserved.
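A toy weighted-sum sketch of the CMDA scoring idea (criteria, weights, and treatment scores are entirely hypothetical):

import numpy as np

criteria = ["efficacy", "safety", "adherence", "administration_route"]
weights = np.array([0.4, 0.3, 0.2, 0.1])           # consensus weights summing to 1 (hypothetical)

# Rows: candidate treatments; columns: normalized 0-10 scores per criterion (hypothetical).
scores = np.array([
    [8.0, 6.0, 7.0, 9.0],    # treatment A
    [6.5, 8.5, 8.0, 5.0],    # treatment B
])

total = scores @ weights                           # final score per treatment
best = ["A", "B"][int(np.argmax(total))]
print(dict(zip(["A", "B"], np.round(total, 2))), "-> highest score:", best)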
NASA Astrophysics Data System (ADS)
Knapp, Julia L. A.; Cirpka, Olaf A.
2017-06-01
The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, which are hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
Novel methods for parameter-based analysis of myocardial tissue in MR images
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.
2007-03-01
The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and how knowledge of parameter values is constrained by observations.
A Multiplicative Cascade Model for High-Resolution Space-Time Downscaling of Rainfall
NASA Astrophysics Data System (ADS)
Raut, Bhupendra A.; Seed, Alan W.; Reeder, Michael J.; Jakob, Christian
2018-02-01
Distributions of rainfall with the time and space resolutions of minutes and kilometers, respectively, are often needed to drive the hydrological models used in a range of engineering, environmental, and urban design applications. The work described here is the first step in constructing a model capable of downscaling rainfall to scales of minutes and kilometers from time and space resolutions of several hours and a hundred kilometers. A multiplicative random cascade model known as the Short-Term Ensemble Prediction System is run with parameters from the radar observations at Melbourne (Australia). The orographic effects are added through a multiplicative correction factor after the model is run. In the first set of model calculations, 112 significant rain events over Melbourne are simulated 100 times. Because of the stochastic nature of the cascade model, the simulations represent 100 possible realizations of the same rain event. The cascade model produces realistic spatial and temporal patterns of rainfall at 6 min and 1 km resolution (the resolution of the radar data), the statistical properties of which are in close agreement with observation. In the second set of calculations, the cascade model is run continuously for all days from January 2008 to August 2015 and the rainfall accumulations are compared at 12 locations in the greater Melbourne area. The statistical properties of the observations lie within the envelope of the 100 ensemble members. The model successfully reproduces the frequency distribution of the 6 min rainfall intensities, storm durations, interarrival times, and autocorrelation function.
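A bare-bones multiplicative cascade sketch (not the STEPS parameterization used in the study): a coarse rainfall amount is disaggregated over a 2-D grid by repeatedly splitting each cell into four children and applying random, mean-one multiplicative weights.

import numpy as np

def cascade(coarse_rain, levels=4, sigma=0.4, rng=None):
    # Disaggregate one coarse-cell rainfall amount onto a (2**levels x 2**levels) grid.
    rng = rng or np.random.default_rng()
    field = np.array([[coarse_rain]], dtype=float)
    for _ in range(levels):
        field = np.kron(field, np.ones((2, 2)))                                   # split each cell into 2x2 children
        w = rng.lognormal(mean=-sigma ** 2 / 2.0, sigma=sigma, size=field.shape)  # mean-one lognormal weights
        field = field * w / 4.0                                                   # share the mass among the children
    return field

fine = cascade(coarse_rain=100.0, levels=4, rng=np.random.default_rng(0))
print(fine.shape, round(float(fine.sum()), 1))   # (16, 16); the total is ~100 on average (not exactly conserved)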
Systems and methods for optimal power flow on a radial network
Low, Steven H.; Peng, Qiuyu
2018-04-24
Node controllers and power distribution networks in accordance with embodiments of the invention enable distributed power control. One embodiment includes a node controller comprising a distributed power control application and a plurality of node operating parameters describing the operating parameters of a node and of a set of at least one node selected from the group consisting of an ancestor node and at least one child node; wherein the node controller is configured to: send node operating parameters to the nodes in the set of at least one node; receive operating parameters from the nodes in the set of at least one node; calculate a plurality of updated node operating parameters using an iterative process to determine the updated node operating parameters from the node operating parameters that describe the operating parameters of the node and the set of at least one node, where the iterative process involves evaluation of a closed form solution; and adjust the node operating parameters.
Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki
2011-04-01
In this study we sought to evaluate and accent the importance of radiobiological parameter selection and implementation in the normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the published sets applied in the literature, and a theoretical mean parameter set was computed. In order to investigate the potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and the LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models over their ranges, a large area of coincidence was revealed. If the parameter uncertainties (standard deviation) are included in the models, their area of coincidence might be enlarged, constraining their predictive ability even further. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definite but should be accompanied by uncertainties, and one should be very careful when applying them to the NTCP models. Correct selection and proper implementation of published parameters provides a quite accurate fit of the NTCP models to the considered endpoint.
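For orientation, a short sketch of the LKB NTCP calculation from a differential DVH, with illustrative lung parameter values (TD50, m, n) rather than the specific published sets compared in the study:

import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_gy, frac_volume, td50=24.5, m=0.18, n=0.87):
    # LKB model: EUD = (sum v_i * d_i**(1/n))**n, NTCP = Phi((EUD - TD50) / (m * TD50)).
    v = np.asarray(frac_volume) / np.sum(frac_volume)      # normalize the differential DVH volumes
    eud = np.sum(v * np.asarray(dose_gy) ** (1.0 / n)) ** n
    return norm.cdf((eud - td50) / (m * td50))

# Hypothetical lung differential DVH: dose bins (Gy) and the fraction of organ volume in each bin.
dose = np.array([2.0, 8.0, 14.0, 20.0, 26.0])
vol = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
print(f"NTCP (radiation pneumonitis) = {lkb_ntcp(dose, vol):.3f}")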
Behavioral Health and Performance Laboratory Standard Measures (BHP-SM)
NASA Technical Reports Server (NTRS)
Williams, Thomas J.; Cromwell, Ronita
2017-01-01
The Spaceflight Standard Measures is a NASA Johnson Space Center Human Research Program (HRP) project that proposes to collect a set of core measurements, representative of many of the human spaceflight risks, from astronauts before, during and after long-duration International Space Station (ISS) missions. The term "standard measures" is defined as a set of core measurements, including physiological, biochemical, psychosocial, cognitive, and functional, that are reliable, valid, and accepted in terrestrial science, are associated with a specific and measurable outcome known to occur as a consequence of spaceflight, and that will be collected in a standardized fashion from all (or most) crewmembers. While such measures might be used to define standards of health and performance or readiness for flight, the prime intent in their collection is to allow longitudinal analysis of multiple parameters in order to answer a variety of operational, occupational, and research-based questions. These questions are generally at a high level, and the approach for this project is to populate the standard measures database with the smallest set of data necessary to indicate that further detailed research is required. Also included as standard measures are parameters that are not outcome-based in and of themselves, but provide ancillary information that supports interpretation of the outcome measures, e.g., nutritional assessment, vehicle environmental parameters, crew debriefs, etc. The project's main aim is to ensure that an optimized minimal set of measures is consistently captured from all ISS crewmembers until the end of Station in order to characterize the human in space. This allows the HRP to identify, establish, and evaluate a common set of measures for use in spaceflight and analog research to develop baselines, systematically characterize risk likelihood and consequences, and assess the effectiveness of countermeasures for behavioral health and performance risk factors. By standardizing the battery of measures on all crewmembers, it will allow the HRP to evaluate countermeasures that work for one physiological system and ensure another system is not negatively affected. These measures, named "Standard Measures," will serve as a data repository and be available to other studies under data sharing agreements.
Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries
NASA Astrophysics Data System (ADS)
Koehl, M.; Delacourt, T.; Boutry, C.
2016-06-01
This paper presents a project for recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels and the diameter of gyratories (traffic circles), and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. In that way, various configurations have been tested using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the process of creating accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses have been estimated. Reference measures were also realized using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.
Bayesian correlated clustering to integrate multiple datasets
Kirk, Paul; Griffin, Jim E.; Savage, Richard S.; Ghahramani, Zoubin; Wild, David L.
2012-01-01
Motivation: The integration of multiple datasets remains a key challenge in systems biology and genomic medicine. Modern high-throughput technologies generate a broad array of different data types, providing distinct—but often complementary—information. We present a Bayesian method for the unsupervised integrative modelling of multiple datasets, which we refer to as MDI (Multiple Dataset Integration). MDI can integrate information from a wide range of different datasets and data types simultaneously (including the ability to model time series data explicitly using Gaussian processes). Each dataset is modelled using a Dirichlet-multinomial allocation (DMA) mixture model, with dependencies between these models captured through parameters that describe the agreement among the datasets. Results: Using a set of six artificially constructed time series datasets, we show that MDI is able to integrate a significant number of datasets simultaneously, and that it successfully captures the underlying structural similarity between the datasets. We also analyse a variety of real Saccharomyces cerevisiae datasets. In the two-dataset case, we show that MDI’s performance is comparable with the present state-of-the-art. We then move beyond the capabilities of current approaches and integrate gene expression, chromatin immunoprecipitation–chip and protein–protein interaction data, to identify a set of protein complexes for which genes are co-regulated during the cell cycle. Comparisons to other unsupervised data integration techniques—as well as to non-integrative approaches—demonstrate that MDI is competitive, while also providing information that would be difficult or impossible to extract using other methods. Availability: A Matlab implementation of MDI is available from http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/. Contact: D.L.Wild@warwick.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23047558
Uppal, Karan; Soltow, Quinlyn A; Strobel, Frederick H; Pittard, W Stephen; Gernert, Kim M; Yu, Tianwei; Jones, Dean P
2013-01-16
Detection of low abundance metabolites is important for de novo mapping of metabolic pathways related to diet, microbiome or environmental exposures. Multiple algorithms are available to extract m/z features from liquid chromatography-mass spectral data in a conservative manner, which tends to preclude detection of low abundance chemicals and chemicals found in small subsets of samples. The present study provides software to enhance such algorithms for feature detection, quality assessment, and annotation. xMSanalyzer is a set of utilities for automated processing of metabolomics data. The utilities can be classified into four main modules to: 1) improve feature detection for replicate analyses by systematic re-extraction with multiple parameter settings and data merger to optimize the balance between sensitivity and reliability, 2) evaluate sample quality and feature consistency, 3) detect feature overlap between datasets, and 4) characterize high-resolution m/z matches to small molecule metabolites and biological pathways using multiple chemical databases. The package was tested with plasma samples and shown to more than double the number of features extracted while improving quantitative reliability of detection. MS/MS analysis of a random subset of peaks that were exclusively detected using xMSanalyzer confirmed that the optimization scheme improves detection of real metabolites. xMSanalyzer is a package of utilities for data extraction, quality control assessment, detection of overlapping and unique metabolites in multiple datasets, and batch annotation of metabolites. The program was designed to integrate with existing packages such as apLCMS and XCMS, but the framework can also be used to enhance data extraction for other LC/MS data software.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, Ozer, E-mail: oalgan@ouhsc.edu; Giem, Jared; Young, Julie
To investigate the doses received by the hippocampus and normal brain tissue during a course of stereotactic radiation therapy using a single isocenter (SI)-based or multiple isocenter (MI)-based treatment planning in patients with less than 4 brain metastases. In total, 10 patients with magnetic resonance imaging (MRI) demonstrating 2-3 brain metastases were included in this retrospective study, and 2 sets of stereotactic intensity-modulated radiation therapy (IMRT) treatment plans (SI vs MI) were generated. The hippocampus was contoured on SPGR sequences, and doses received by the hippocampus and the brain were calculated and compared between the 2 treatment techniques. A total of 23 lesions in 10 patients were evaluated. The median tumor volume, the right hippocampus volume, and the left hippocampus volume were 3.15, 3.24, and 2.63 mL, respectively. In comparing the 2 treatment plans, there was no difference in the planning target volume (PTV) coverage except in the tail of the dose-volume histogram (DVH) curve. The only statistically significant dosimetric parameter was the V100. All of the other measured dosimetric parameters, including the V95, V99, and D100, were not significantly different between the 2 treatment planning techniques. None of the dosimetric parameters evaluated for the hippocampus revealed any statistically significant difference between the MI and SI plans. The total brain doses were slightly higher in the SI plans, especially in the lower dose region, although this difference was not statistically different. The use of an SI-based treatment plan resulted in a 35% reduction in beam-on time. The use of SI treatments for patients with up to 3 brain metastases produces similar PTV coverage and similar normal tissue doses to the hippocampus and the brain when compared with MI plans. SI treatment planning should be considered in patients with multiple brain metastases undergoing stereotactic treatment.
Algan, Ozer; Giem, Jared; Young, Julie; Ali, Imad; Ahmad, Salahuddin; Hossain, Sabbir
2015-01-01
To investigate the doses received by the hippocampus and normal brain tissue during a course of stereotactic radiation therapy using a single isocenter (SI)-based or multiple isocenter (MI)-based treatment planning in patients with less than 4 brain metastases. In total, 10 patients with magnetic resonance imaging (MRI) demonstrating 2-3 brain metastases were included in this retrospective study, and 2 sets of stereotactic intensity-modulated radiation therapy (IMRT) treatment plans (SI vs MI) were generated. The hippocampus was contoured on SPGR sequences, and doses received by the hippocampus and the brain were calculated and compared between the 2 treatment techniques. A total of 23 lesions in 10 patients were evaluated. The median tumor volume, the right hippocampus volume, and the left hippocampus volume were 3.15, 3.24, and 2.63mL, respectively. In comparing the 2 treatment plans, there was no difference in the planning target volume (PTV) coverage except in the tail for the dose-volume histogram (DVH) curve. The only statistically significant dosimetric parameter was the V100. All of the other measured dosimetric parameters including the V95, V99, and D100 were not significantly different between the 2 treatment planning techniques. None of the dosimetric parameters evaluated for the hippocampus revealed any statistically significant difference between the MI and SI plans. The total brain doses were slightly higher in the SI plans, especially in the lower dose region, although this difference was not statistically different. The use of SI-based treatment plan resulted in a 35% reduction in beam-on time. The use of SI treatments for patients with up to 3 brain metastases produces similar PTV coverage and similar normal tissue doses to the hippocampus and the brain when compared with MI plans. SI treatment planning should be considered in patients with multiple brain metastases undergoing stereotactic treatment. Copyright © 2015 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
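A short sketch of how dose-volume metrics of the kind reported above can be computed from a structure's voxel doses (synthetic values; Vx is taken here as the fraction of the volume receiving at least x% of the prescription dose, and D100 as the minimum dose covering the whole volume):

import numpy as np

def v_x(doses_gy, prescription_gy, x_percent):
    # Fraction of the structure's voxels receiving at least x% of the prescription dose.
    return float(np.mean(doses_gy >= prescription_gy * x_percent / 100.0))

rng = np.random.default_rng(5)
ptv_dose = rng.normal(loc=21.0, scale=0.6, size=5000)    # synthetic voxel doses (Gy) in a 20 Gy PTV

prescription = 20.0
print("V100 =", round(v_x(ptv_dose, prescription, 100), 3))
print("V99  =", round(v_x(ptv_dose, prescription, 99), 3))
print("V95  =", round(v_x(ptv_dose, prescription, 95), 3))
print("D100 =", round(float(ptv_dose.min()), 2), "Gy")   # minimum dose received by the target volume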
ERIC Educational Resources Information Center
Hamadneh, Iyad Mohammed
2015-01-01
This study aimed at investigating the impact of changing the position of the escape alternative in a multiple-choice test on the psychometric properties of the test and its item parameters (difficulty, discrimination & guessing), and on the estimation of examinee ability. To achieve the study objectives, a 4-alternative multiple choice type achievement test…
Ferreira da Costa, Joana; Silva, David; Caamaño, Olga; Brea, José M; Loza, Maria Isabel; Munteanu, Cristian R; Pazos, Alejandro; García-Mera, Xerardo; González-Díaz, Humbert
2018-06-25
Predicting drug-protein interactions (DPIs) for target proteins involved in dopamine pathways is a very important goal in medicinal chemistry. We can tackle this problem using Molecular Docking or Machine Learning (ML) models for one specific protein. Unfortunately, these models fail to account for large and complex big data sets of preclinical assays reported in public databases. This includes multiple conditions of assays, such as different experimental parameters, biological assays, target proteins, cell lines, organism of the target, or organism of assay. On the other hand, perturbation theory (PT) models allow us to predict the properties of a query compound or molecular system in experimental assays with multiple boundary conditions based on a previously known case of reference. In this work, we report the first PTML (PT + ML) study of a large ChEMBL data set of preclinical assays of compounds targeting dopamine pathway proteins. The best PTML model found predicts 50000 cases with accuracy of 70-91% in training and external validation series. We also compared the linear PTML model with alternative PTML models trained with multiple nonlinear methods (artificial neural network (ANN), Random Forest, Deep Learning, etc.). Some of the nonlinear methods outperform the linear model but at the cost of a notable increment of the complexity of the model. We illustrated the practical use of the new model with a proof-of-concept theoretical-experimental study. We reported for the first time the organic synthesis, chemical characterization, and pharmacological assay of a new series of l-prolyl-l-leucyl-glycinamide (PLG) peptidomimetic compounds. In addition, we performed a molecular docking study for some of these compounds with the software Vina AutoDock. The work ends with a PTML model predictive study of the outcomes of the new compounds in a large number of assays. Therefore, this study offers a new computational methodology for predicting the outcome for any compound in new assays. This PTML method focuses on the prediction with a simple linear model of multiple pharmacological parameters (IC50, EC50, Ki, etc.) for compounds in assays involving different cell lines used, organisms of the protein target, or organism of assay for proteins in the dopamine pathway.
Multiple Wheel Throwing: And Chess Sets.
ERIC Educational Resources Information Center
Sapiro, Maurice
1978-01-01
A chess set project is suggested to teach multiple throwing, the creation on a potter's wheel of several pieces of similar configuration. Processes and finished sets are illustrated with photographs. (SJL)
Compacton solutions in a class of generalized fifth-order Korteweg-de Vries equations.
Cooper, F; Hyman, J M; Khare, A
2001-08-01
Solitons play a fundamental role in the evolution of general initial data for quasilinear dispersive partial differential equations, such as the Korteweg-de Vries (KdV), nonlinear Schrödinger, and the Kadomtsev-Petviashvili equations. These integrable equations have linear dispersion and the solitons have infinite support. We have derived and investigated a new KdV-like Hamiltonian partial differential equation from a four-parameter Lagrangian in which the nonlinear dispersion gives rise to solitons with compact support (compactons). The new equation does not seem to be integrable and only mass, momentum, and energy seem to be conserved; yet, the solitons display almost the same modal decompositions and structural stability observed in integrable partial differential equations. The compactons formed from arbitrary initial data are nonlinearly self-stabilizing and maintain their coherence after multiple collisions. The robustness of these compactons and the inapplicability of the inverse scattering tools that worked so well for the KdV equation make it clear that there is a fundamental mechanism underlying these processes beyond integrability. We have found explicit formulas for multiple classes of compact traveling wave solutions. When there is more than one compacton solution for a particular set of parameters, the wider compacton is the minimum of a reduced Hamiltonian and is the only one that is stable.
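The abstract does not reproduce the four-parameter Lagrangian or the resulting fifth-order equation, so the following is only a reference point, not the equation studied here: the well-known Rosenau-Hyman K(2,2) equation illustrates how nonlinear dispersion produces solitary waves of compact support,

$$ u_t + (u^2)_x + (u^2)_{xxx} = 0, \qquad
u(x,t) =
\begin{cases}
\dfrac{4c}{3}\cos^{2}\!\left(\dfrac{x-ct}{4}\right), & |x-ct| \le 2\pi,\\[4pt]
0, & \text{otherwise},
\end{cases} $$

where c is the compacton speed. Unlike KdV solitons, which have exponential tails, the solution vanishes identically outside a finite interval.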
Effect of processor temperature on film dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, Shiv P.; Das, Indra J., E-mail: idas@iupui.edu
2012-07-01
Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, and it depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air-film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing films. The processor temperature was adjusted manually in increasing steps. At each temperature, a set of films was processed to evaluate OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4-40.6 °C (85-105 °F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the change in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
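The abstract reports that OD varies linearly with processor temperature over 29.4-40.6 °C; a minimal sketch of how such a linear correction could be applied before converting OD to dose is shown below. The slope and reference temperature are placeholder assumptions, not values from the study.

```python
# Minimal sketch: correct a measured optical density (OD) for processor temperature
# before converting it to dose. Coefficients below are illustrative placeholders,
# not values reported in the study; they would come from a calibration fit.

REFERENCE_TEMP_C = 35.0   # processor temperature at calibration (assumed)
OD_SLOPE_PER_C = 0.01     # change in OD per degree Celsius (assumed)

def correct_od(measured_od: float, processor_temp_c: float) -> float:
    """Refer an OD measured at processor_temp_c back to the reference temperature,
    using the linear OD-vs-temperature behaviour reported for XV and EDR films."""
    return measured_od - OD_SLOPE_PER_C * (processor_temp_c - REFERENCE_TEMP_C)

if __name__ == "__main__":
    print(correct_od(1.25, processor_temp_c=38.0))  # OD referred back to 35 °C
```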
Matter effects on binary neutron star waveforms
NASA Astrophysics Data System (ADS)
Read, Jocelyn S.; Baiotti, Luca; Creighton, Jolien D. E.; Friedman, John L.; Giacomazzo, Bruno; Kyutoku, Koutarou; Markakis, Charalampos; Rezzolla, Luciano; Shibata, Masaru; Taniguchi, Keisuke
2013-08-01
Using an extended set of equations of state and a multiple-group multiple-code collaborative effort to generate waveforms, we improve numerical-relativity-based data-analysis estimates of the measurability of matter effects in neutron-star binaries. We vary two parameters of a parametrized piecewise-polytropic equation of state (EOS) to analyze the measurability of EOS properties, via a parameter Λ that characterizes the quadrupole deformability of an isolated neutron star. We find that, to within the accuracy of the simulations, the departure of the waveform from point-particle (or spinless double black-hole binary) inspiral increases monotonically with Λ and changes in the EOS that did not change Λ are not measurable. We estimate with two methods the minimal and expected measurability of Λ in second- and third-generation gravitational-wave detectors. The first estimate using numerical waveforms alone shows that two EOSs which vary in radius by 1.3 km are distinguishable in mergers at 100 Mpc. The second estimate relies on the construction of hybrid waveforms by matching to post-Newtonian inspiral and estimates that the same EOSs are distinguishable in mergers at 300 Mpc. We calculate systematic errors arising from numerical uncertainties and hybrid construction, and we estimate the frequency at which such effects would interfere with template-based searches.
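For readers outside the field, the dimensionless quadrupole tidal deformability used in this kind of analysis is conventionally defined in terms of the tidal Love number k_2 and the neutron-star mass M and radius R (a standard definition, not spelled out in the abstract):

$$ \Lambda = \frac{2}{3}\,k_2 \left(\frac{c^{2} R}{G M}\right)^{5}, $$

so stiffer equations of state, which yield larger radii at fixed mass, give larger Λ and a larger departure of the waveform from point-particle inspiral.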
Automatic Multi-sensor Data Quality Checking and Event Detection for Environmental Sensing
NASA Astrophysics Data System (ADS)
LIU, Q.; Zhang, Y.; Zhao, Y.; Gao, D.; Gallaher, D. W.; Lv, Q.; Shang, L.
2017-12-01
With the advances in sensing technologies, large-scale environmental sensing infrastructures are pervasively deployed to continuously collect data for various research and application fields, such as air quality study and weather condition monitoring. In such infrastructures, many sensor nodes are distributed in a specific area and each individual sensor node is capable of measuring several parameters (e.g., humidity, temperature, and pressure), providing massive data for natural event detection and analysis. However, due to the dynamics of the ambient environment, sensor data can be contaminated by errors or noise. Thus, data quality is still a primary concern for scientists before drawing any reliable scientific conclusions. To help researchers identify potential data quality issues and detect meaningful natural events, this work proposes a novel algorithm to automatically identify and rank anomalous time windows from multiple sensor data streams. More specifically, (1) the algorithm adaptively learns the characteristics of normal evolving time series and (2) models the spatial-temporal relationship among multiple sensor nodes to infer the anomaly likelihood of a time series window for a particular parameter in a sensor node. Case studies using different data sets are presented and the experimental results demonstrate that the proposed algorithm can effectively identify anomalous time windows, which may result from data quality issues or natural events.
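As a purely illustrative sketch of window-level anomaly ranking across several sensor streams (not the authors' spatial-temporal model; the window length and z-score criterion are assumptions):

```python
import numpy as np

def rank_anomalous_windows(streams, window=60):
    """Toy anomaly ranking: score non-overlapping windows of each stream by the
    z-score of the window mean against the whole stream, then rank all windows
    across sensors. Illustrative only; it ignores spatial relationships."""
    scores = []
    for name, series in streams.items():
        mu, sigma = series.mean(), series.std() + 1e-9
        for start in range(0, len(series) - window + 1, window):
            w = series[start:start + window]
            scores.append((abs(w.mean() - mu) / sigma, name, start))
    return sorted(scores, reverse=True)  # highest anomaly score first

rng = np.random.default_rng(0)
data = {"temperature": rng.normal(20, 1, 600), "humidity": rng.normal(50, 5, 600)}
data["temperature"][300:360] += 8.0  # inject an anomalous hour
print(rank_anomalous_windows(data)[:3])
```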
Effective Tree Scattering and Opacity at L-Band
NASA Technical Reports Server (NTRS)
Kurum, Mehmet; O'Neill, Peggy E.; Lang, Roger H.; Joseph, Alicia T.; Cosh, Michael H.; Jackson, Thomas J.
2011-01-01
This paper investigates vegetation effects at L-band by using a first-order radiative transfer (RT) model and truck-based microwave measurements over natural conifer stands to assess the applicability of the tau-omega model over trees. The tau-omega model is a zero-order RT solution that accounts for vegetation effects with effective vegetation parameters (vegetation opacity and single-scattering albedo), which represent the canopy as a whole. This approach inherently ignores multiple-scattering effects and, therefore, has a limited validity depending on the level of scattering within the canopy. The fact that the scattering from large forest components such as branches and trunks is significant at L-band requires that zero-order vegetation parameters be evaluated (compared) along with their theoretical definitions to provide a better understanding of these parameters in the retrieval algorithms as applied to trees. This paper compares the effective vegetation opacities, computed from multi-angular pine tree brightness temperature data, against the results of two independent approaches that provide theoretical and measured optical depths. These two techniques are based on forward scattering theory and radar corner reflector measurements, respectively. The results indicate that the effective vegetation opacity values are smaller than but of similar magnitude to both radar and theoretical estimates. The effective opacity of the zero-order model is thus set equal to the theoretical opacity, and an explicit expression for the effective albedo is then obtained from the zero- and first-order RT model comparison. The resultant albedo is found to have a magnitude similar to the effective albedo value obtained from brightness temperature measurements. However, it is less than half of that estimated using the theoretical calculations (0.5-0.6 for tree canopies at L-band). This lower observed albedo balances the scattering darkening effect of the large theoretical albedo with a first-order multiple-scattering contribution. The retrieved effective albedo differs from the theoretical definitions and is no longer the albedo of single forest elements; rather, it becomes a global parameter that depends on all the processes taking place within the canopy, including multiple scattering.
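For context, the zero-order tau-omega model referenced above is usually written in the standard form (the symbols are not defined in this abstract) as

$$ T_B^{\,p} = T_s\, e_p\, \gamma + T_c\,(1-\omega)\,(1-\gamma)\,(1 + r_p\,\gamma), \qquad \gamma = \exp(-\tau/\cos\theta), $$

where $T_s$ and $T_c$ are the soil and canopy temperatures, $e_p$ and $r_p$ the rough-soil emissivity and reflectivity at polarization $p$, $\tau$ the vegetation opacity, $\omega$ the single-scattering albedo, and $\theta$ the incidence angle.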
NASA Astrophysics Data System (ADS)
Bharti, P. K.; Khan, M. I.; Singh, Harbinder
2010-10-01
Off-line quality control is considered to be an effective approach to improve product quality at a relatively low cost. The Taguchi method is one of the conventional approaches for this purpose. Through this approach, engineers can determine a feasible combination of design parameters such that the variability of a product's response can be reduced and the mean is close to the desired target. The traditional Taguchi method focused on ensuring good performance at the parameter design stage with one quality characteristic, but most products and processes have multiple quality characteristics. The optimal parameter design minimizes the total quality loss for multiple quality characteristics. Several studies have presented approaches addressing multiple quality characteristics. Most of these papers were concerned with finding the parameter combination that maximizes the signal-to-noise (SN) ratios. The results reveal the advantages of this approach: the optimal parameter design is the same as that of the traditional Taguchi method for a single quality characteristic, and the optimal design maximizes the reduction of total quality loss for multiple quality characteristics. This paper presents a literature review on solving multi-response problems in the Taguchi method and its successful implementation in various industries.
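As a concrete illustration of the quantities involved (these are the generic Taguchi definitions, not tied to any particular study reviewed here), the smaller-the-better and larger-the-better signal-to-noise ratios over replicated measurements can be computed as:

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi SN ratio when lower response values are preferred."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    """Taguchi SN ratio when higher response values are preferred."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Replicated measurements of one quality characteristic at one parameter setting
print(sn_smaller_the_better([0.12, 0.15, 0.11]))
print(sn_larger_the_better([98.0, 101.5, 97.2]))
```

The parameter combination with the highest SN ratio (per characteristic, or after combining characteristics into a total quality loss) is taken as the robust design point.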
NASA Astrophysics Data System (ADS)
Jensen, Kristoffer
2002-11-01
A timbre model is proposed for use in multiple applications. This model, which encompasses all voiced isolated musical instruments, has an intuitive parameter set, fixed size, and separates the sounds in dimensions akin to the timbre dimensions as proposed in timbre research. The analysis of the model parameters is fully documented, and it proposes, in particular, a method for the estimation of the difficult decay/release split-point. The main parameters of the model are the spectral envelope, the attack/release durations and relative amplitudes, and the inharmonicity and the shimmer and jitter (which provide both for the slow random variations of the frequencies and amplitudes, and also for additive noises). Some of the applications include synthesis, where a real-time application is being developed with an intuitive GUI; content-based classification and search of sounds; and a further understanding of acoustic musical instrument behavior. In order to present the background of the model, this presentation will start with sinusoidal A/S and some timbre perception research, then present the timbre model, show its validity for individual musical instrument sounds, and finally introduce some expression additions to the model.
Influence of fusion dynamics on fission observables: A multidimensional analysis
NASA Astrophysics Data System (ADS)
Schmitt, C.; Mazurek, K.; Nadtochy, P. N.
2018-01-01
An attempt to unfold the respective influence of the fusion and fission stages on typical fission observables, namely the neutron prescission multiplicity, is proposed. A four-dimensional dynamical stochastic Langevin model is used to calculate the decay by fission of excited compound nuclei produced in a wide set of heavy-ion collisions. The comparison of the results from such a calculation and experimental data is discussed, guided by predictions of the dynamical deterministic HICOL code for the compound-nucleus formation time. While the dependence of the latter on the entrance-channel properties can straightforwardly explain some observations, a complex interplay between the various parameters of the reaction is found to occur in other cases. A multidimensional analysis of the respective role of these parameters, including entrance-channel asymmetry, bombarding energy, compound-nucleus fissility, angular momentum, and excitation energy, is proposed. It is shown that, depending on the size of the system, apparent inconsistencies may be deduced when projecting onto specific ordering parameters. The work suggests the possibility of delicate compensation effects in governing the measured fission observables, thereby highlighting the necessity of a multidimensional discussion.
NASA Astrophysics Data System (ADS)
Babakhanian, Meghedi; Fan, Richard E.; Mulgaonkar, Amit P.; Singh, Rahul; Culjat, Martin O.; Danesh, Shahab M.; Toro, Ligia; Grundfest, Warren; Melega, William P.
2012-03-01
Low intensity focused ultrasound (LIFU) is now being considered as a noninvasive brain therapy for clinical applications. We maintain that LIFU can efficiently deliver energy from outside the skull to target specific brain regions, effecting localized neuromodulation. However, the underlying molecular mechanisms that drive this LIFU-induced neuromodulation are not well-defined due, in part, to our lack of understanding of how particular sets of LIFU delivery parameters affect the outcome. To efficiently conduct multiple sweeps of different parameters and determine their effects, we have developed an in-vitro system to study the effects of LIFU on different types of cells grown in culture. Presently, we are evaluating how LIFU affects the ionic flux that may underlie the neuronal excitation and inhibition observed in-vivo. The results of our in-vitro studies will provide a rationale for the selection of optimal LIFU parameters to be used in subsequent in-vivo applications. Thus, a prototype ultrasound cell assay system has been developed to conduct these studies, and is described in this work.
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some kind of parameter optimization is mostly performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to be visualised. Furthermore, an objective function and a time period of interest need to be selected. Based on this information, a two-dimensional parameter response surface is created, which shows a scatter plot of the parameter combinations with a color scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the color mapping of the points. The slider provides a threshold to exclude non-behavioural parameter sets, and the color scale is attributed only to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the available interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
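A minimal sketch of the kind of notebook interaction described above, using synthetic data; the RMSE objective, variable names, and widget layout are illustrative assumptions rather than the tool itself:

```python
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, FloatSlider

# Synthetic stand-ins: 500 parameter sets (2 parameters), their simulated series, and observations.
rng = np.random.default_rng(1)
param_sets = rng.uniform(0, 1, size=(500, 2))
observed = np.sin(np.linspace(0, 6, 100))
simulations = observed + rng.normal(0, param_sets[:, :1], size=(500, 100))

rmse = np.sqrt(np.mean((simulations - observed) ** 2, axis=1))  # chosen objective function

def response_surface(threshold=1.0):
    """Scatter the two parameters; colour behavioural runs by RMSE, grey out the rest."""
    behavioural = rmse <= threshold
    plt.scatter(param_sets[~behavioural, 0], param_sets[~behavioural, 1], c="lightgrey", s=10)
    plt.scatter(param_sets[behavioural, 0], param_sets[behavioural, 1], c=rmse[behavioural], s=15)
    plt.colorbar(label="RMSE")
    plt.xlabel("parameter 1")
    plt.ylabel("parameter 2")

# The slider plays the role of the behavioural threshold described in the abstract.
interact(response_surface, threshold=FloatSlider(min=0.1, max=2.0, step=0.1, value=1.0))
```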
Pinker, Katja; Andrzejewski, Piotr; Baltzer, Pascal; Polanec, Stephan H; Sturdza, Alina; Georg, Dietmar; Helbich, Thomas H; Karanikas, Georgios; Grimm, Christoph; Polterauer, Stephan; Poetter, Richard; Wadsak, Wolfgang; Mitterhauser, Markus; Georg, Petra
2016-01-01
To investigate fused multiparametric positron emission tomography/magnetic resonance imaging (MP PET/MRI) at 3T in patients with locally advanced cervical cancer, using high-resolution T2-weighted, contrast-enhanced MRI (CE-MRI), diffusion-weighted imaging (DWI), and the radiotracers [18F]fluorodeoxyglucose ([18F]FDG) and [18F]fluoromisonidazole ([18F]FMISO) for the non-invasive detection of tumor heterogeneity for an improved planning of chemo-radiation therapy (CRT). Sixteen patients with locally advanced cervical cancer were enrolled in this IRB-approved study and were examined with fused MP [18F]FDG/[18F]FMISO PET/MRI, and in eleven patients complete data sets were acquired. MP PET/MRI was assessed for tumor volume, enhancement (EH)-kinetics, diffusivity, and [18F]FDG/[18F]FMISO-avidity. Descriptive statistics and voxel-by-voxel analysis of MRI and PET parameters were performed. Correlations were assessed using multiple correlation analysis. All tumors displayed imaging parameters concordant with cervix cancer, i.e., type II/III EH-kinetics, restricted diffusivity (median ADC 0.80×10⁻³ mm²/s), [18F]FDG-avidity (median SUVmax 16.2), and [18F]FMISO-avidity (median SUVmax 3.1). In all patients, [18F]FMISO PET identified the hypoxic tumor subvolume, which was independent of tumor volume. A voxel-by-voxel analysis revealed only weak correlations between the MRI and PET parameters (0.05-0.22), indicating that each individual parameter yields independent information and the presence of tumor heterogeneity. MP [18F]FDG/[18F]FMISO PET/MRI in patients with cervical cancer facilitates the acquisition of independent predictive and prognostic imaging parameters. MP [18F]FDG/[18F]FMISO PET/MRI enables insights into tumor biology on multiple levels and provides information on tumor heterogeneity, which has the potential to improve the planning of CRT.
NASA Astrophysics Data System (ADS)
Ratib, Osman; Rosset, Antoine; Dahlbom, Magnus; Czernin, Johannes
2005-04-01
Display and interpretation of multidimensional data obtained from the combination of 3D data acquired from different modalities (such as PET-CT) require complex software tools allowing the user to navigate and modify the different image parameters. With faster scanners it is now possible to acquire dynamic images of a beating heart or the transit of a contrast agent, adding a fifth dimension to the data. We developed DICOM-compliant software for real-time navigation in very large sets of 5-dimensional data based on an intuitive multidimensional jog-wheel widely used in the video-editing industry. The software, provided under open-source licensing, allows interactive, single-handed navigation through 3D images while adjusting blending of image modalities, image contrast and intensity, and the rate of cine display of dynamic images. In this study we focused our effort on the user interface and means for interactively navigating in these large data sets while easily and rapidly changing multiple parameters such as image position, contrast, intensity, blending of colors, magnification, etc. Conventional mouse-driven user interfaces requiring the user to manipulate cursors and sliders on the screen are too cumbersome and slow. We evaluated several hardware devices and identified a category of multipurpose jog-wheel device used in the video-editing industry that is particularly suitable for rapidly navigating in five dimensions while adjusting several display parameters interactively. The application of this tool will be demonstrated in cardiac PET-CT imaging and functional cardiac MRI studies.
Using the NEMA NU 4 PET image quality phantom in multipinhole small-animal SPECT.
Harteveld, Anita A; Meeuwis, Antoi P W; Disselhorst, Jonathan A; Slump, Cornelis H; Oyen, Wim J G; Boerman, Otto C; Visser, Eric P
2011-10-01
Several commercial small-animal SPECT scanners using multipinhole collimation are presently available. However, generally accepted standards to characterize the performance of these scanners do not exist. Whereas for small-animal PET, the National Electrical Manufacturers Association (NEMA) NU 4 standards have been defined in 2008, such standards are still lacking for small-animal SPECT. In this study, the image quality parameters associated with the NEMA NU 4 image quality phantom were determined for a small-animal multipinhole SPECT scanner. Multiple whole-body scans of the NEMA NU 4 image quality phantom of 1-h duration were performed in a U-SPECT-II scanner using (99m)Tc with activities ranging between 8.4 and 78.2 MBq. The collimator contained 75 pinholes of 1.0-mm diameter and had a bore diameter of 98 mm. Image quality parameters were determined as a function of average phantom activity, number of iterations, postreconstruction spatial filter, and scatter correction. In addition, a mouse was injected with (99m)Tc-hydroxymethylene diphosphonate and was euthanized 6.5 h after injection. Multiple whole-body scans of this mouse of 1-h duration were acquired for activities ranging between 3.29 and 52.7 MBq. An increase in the number of iterations was accompanied by an increase in the recovery coefficients for the small rods (RC(rod)), an increase in the noise in the uniform phantom region, and a decrease in spillover ratios for the cold-air- and water-filled scatter compartments (SOR(air) and SOR(wat)). Application of spatial filtering reduced image noise but lowered RC(rod). Filtering did not influence SOR(air) and SOR(wat). Scatter correction reduced SOR(air) and SOR(wat). The effect of total phantom activity was primarily seen in a reduction of image noise with increasing activity. RC(rod), SOR(air), and SOR(wat) were more or less constant as a function of phantom activity. The relation between acquisition and reconstruction settings and image quality was confirmed in the (99m)Tc-hydroxymethylene diphosphonate mouse scans. Although developed for small-animal PET, the NEMA NU 4 image quality phantom was found to be useful for small-animal SPECT as well, allowing for objective determination of image quality parameters and showing the trade-offs between several of these parameters on variation of acquisition and reconstruction settings.
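For readers unfamiliar with the NEMA NU 4 quantities discussed above, a much-simplified sketch of how a rod recovery coefficient and a spillover ratio are formed from region-of-interest (ROI) values is given below; the full standard prescribes specific ROI definitions and also reports noise via standard deviations, which are omitted here.

```python
import numpy as np

def recovery_coefficient(hot_rod_roi, uniform_region_roi):
    """RC: activity recovered in a hot rod relative to the uniform-region mean.
    Simplified relative to the full NEMA NU 4 prescription."""
    return np.mean(hot_rod_roi) / np.mean(uniform_region_roi)

def spillover_ratio(cold_insert_roi, uniform_region_roi):
    """SOR: mean counts in a cold (air- or water-filled) insert divided by the
    uniform-region mean; lower values indicate less spillover."""
    return np.mean(cold_insert_roi) / np.mean(uniform_region_roi)
```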
Maltreatment in multiple-birth children.
Lang, Cathleen A; Cox, Matthew J; Flores, Glenn
2013-12-01
The rate of multiple births has increased over the last two decades. In 1982, an increased frequency of injuries among this patient population was noted, but few studies have evaluated the increased incidence of maltreatment in twins. The study aim was to evaluate the features of all multiple-birth children with substantiated physical abuse and/or neglect over a four-year period at a major children's hospital. A retrospective chart review was conducted of multiple-gestation children in which at least one child in the multiple set experienced child maltreatment from January 2006 to December 2009. Data regarding the child, injuries, family, and perpetrators were abstracted. We evaluated whether family and child characteristics were associated with maltreatment, and whether types of injuries were similar within multiple sets. For comparison, data from the same time period for single-birth maltreated children also were abstracted, including child age, gestational age at birth, and injury type. There were 19 sets of multiple births in which at least one child had abusive injuries and/or neglect. In 10 of 19 sets (53%), all multiples were found to have a form of maltreatment, and all children in these multiple sets shared at least one injury type. Parents lived together in 63% of cases. Fathers and mothers were the alleged perpetrators in 42% of the cases. Multiple-gestation-birth maltreated children were significantly more likely than single-birth maltreated children to have abdominal trauma (13% vs. 1%, respectively; p<.01), fractures (83% vs. 39%; p<.01), and to be injured at a younger mean age (12.8 months vs. 34.8 months; p<.01). Siblings of maltreated, multiple-gestation children often, but not always, were abused. In sets with two maltreated children, children usually shared the same modes of maltreatment. Multiples are significantly more likely than singletons to be younger and experience fractures and abdominal trauma. The findings support the current standard practice of evaluating all children in a multiple set when one is found to be abused or neglected. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel
CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.
Visual attention is required for multiple object tracking.
Tran, Annie; Hoffman, James E
2016-12-01
In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Novel layered clustering-based approach for generating ensemble of classifiers.
Rahman, Ashfaqur; Verma, Brijesh
2011-05-01
This paper introduces a novel concept for creating an ensemble of classifiers. The concept is based on generating an ensemble of classifiers through clustering of data at multiple layers. The ensemble classifier model generates a set of alternative clusterings of a dataset at different layers by randomly initializing the clustering parameters and trains a set of base classifiers on the patterns in different clusters in different layers. A test pattern is classified by first finding the appropriate cluster at each layer and then using the corresponding base classifier. The decisions obtained at different layers are fused into a final verdict using majority voting. As the base classifiers are trained on overlapping patterns at different layers, the proposed approach achieves diversity among the individual classifiers. Identification of difficult-to-classify patterns through clustering, as well as achievement of diversity through layering, leads to better classification results, as evidenced by the experimental results.
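A minimal sketch of the layered clustering-and-classification idea described above, reconstructed from the abstract rather than taken from the authors' implementation; k-means, decision trees, and the layer/cluster counts are arbitrary choices:

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

class LayeredClusterEnsemble:
    """Cluster the data several times (layers) with different random initializations,
    train one base classifier per cluster, and fuse per-layer decisions by majority vote."""

    def __init__(self, n_layers=5, n_clusters=4, seed=0):
        self.n_layers, self.n_clusters, self.seed = n_layers, n_clusters, seed
        self.layers = []  # list of (kmeans, {cluster_id: classifier})

    def fit(self, X, y):
        for layer in range(self.n_layers):
            km = KMeans(n_clusters=self.n_clusters, n_init=10,
                        random_state=self.seed + layer).fit(X)
            clfs = {c: DecisionTreeClassifier(random_state=self.seed)
                        .fit(X[km.labels_ == c], y[km.labels_ == c])
                    for c in range(self.n_clusters)}
            self.layers.append((km, clfs))
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X):
            votes = [clfs[km.predict(x[None])[0]].predict(x[None])[0]
                     for km, clfs in self.layers]
            preds.append(Counter(votes).most_common(1)[0][0])
        return np.array(preds)
```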
A Voxel-Based Filtering Algorithm for Mobile LiDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points, in the xy-plane, are first partitioned into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. This voxel-based filtering algorithm is comprehensively discussed through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
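A minimal sketch of the first stage only (block partitioning and a crude upward-growing decision from the lowest occupied voxel of each block); block size, voxel size, and the height tolerance are illustrative placeholders rather than the thresholds analysed in the paper:

```python
import numpy as np
from collections import defaultdict

def rough_terrain_mask(points, block_size=10.0, voxel_size=0.5, height_tol=0.3):
    """Label a point as terrain if it lies within height_tol of the lowest occupied
    voxel of its xy block. A much-simplified stand-in for the paper's voxel-based
    upward-growing step; the curvature-based refinement is omitted."""
    points = np.asarray(points)
    block_idx = np.floor(points[:, :2] / block_size).astype(int)
    voxel_z = np.floor(points[:, 2] / voxel_size) * voxel_size
    lowest = defaultdict(lambda: np.inf)
    for key, z in zip(map(tuple, block_idx), voxel_z):
        lowest[key] = min(lowest[key], z)
    ground_z = np.array([lowest[tuple(b)] for b in block_idx])
    return points[:, 2] <= ground_z + height_tol

pts = np.random.default_rng(2).uniform([0.0, 0.0, 0.0], [50.0, 50.0, 5.0], size=(1000, 3))
print(rough_terrain_mask(pts).sum(), "points labelled terrain")
```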
Systematic study of rapidity dispersion parameter in high energy nucleus-nucleus interactions
NASA Astrophysics Data System (ADS)
Bhattacharyya, Swarnapratim; Haiduc, Maria; Neagu, Alina Tania; Firu, Elena
2014-03-01
A systematic study of rapidity dispersion parameter as a quantitative measure of clustering of particles has been carried out in the interactions of 16O, 28Si and 32S projectiles at 4.5 A GeV/c with heavy (AgBr) and light (CNO) groups of targets present in the nuclear emulsion. For all the interactions, the total ensemble of events has been divided into four overlapping multiplicity classes depending on the number of shower particles. For all the interactions and for each multiplicity class, the rapidity dispersion parameter values indicate the occurrence of clusterization during the multiparticle production at Dubna energy. The measured rapidity dispersion parameter values are found to decrease with the increase of average multiplicity for all the interactions. The dependence of rapidity dispersion parameter on the average multiplicity can be successfully described by a relation D(η) = a + b
NASA Technical Reports Server (NTRS)
Smith, D. R.; Leslie, F. W.
1984-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
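As background on successive-correction analysis in general (a generic Cressman-style pass, not the PROAM scheme itself; the weight function and nearest-neighbour background interpolation are simplifying assumptions), each pass nudges every grid value by a distance-weighted average of observation-minus-background increments, and multiple passes with a shrinking influence radius resolve progressively smaller scales:

```python
import numpy as np

def cressman_pass(grid_xy, grid_vals, obs_xy, obs_vals, radius):
    """One successive-correction pass over a surface analysis grid.
    Generic illustration only; PROAM's exact weighting is not reproduced here."""
    # Background value at each observation point (nearest grid point, for brevity).
    d2_all = np.sum((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2, axis=2)
    increments = obs_vals - grid_vals[np.argmin(d2_all, axis=0)]
    updated = grid_vals.copy()
    for i, gxy in enumerate(grid_xy):
        d2 = np.sum((obs_xy - gxy) ** 2, axis=1)
        w = np.where(d2 < radius ** 2, (radius ** 2 - d2) / (radius ** 2 + d2), 0.0)
        if w.sum() > 0:
            updated[i] += np.sum(w * increments) / w.sum()
    return updated
```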
NASA Astrophysics Data System (ADS)
Cheong, Chin Wen
2008-02-01
This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.
Multi-wavelength and multiband RE-doped optical fiber source array for WDM-GPON applications
NASA Astrophysics Data System (ADS)
Perez-Sanchez, G. G.; Bertoldi-Martins, I.; Gallion, P.; Gosset, C.; Álvarez-Chávez, J. A.
2013-12-01
In this paper, a multiband, multi-wavelength, all-fibre source array consisting of an 810 nm pump laser diode, fiber splitters, and three segments of Er-, Tm- and Nd-doped fiber is proposed for PON applications. In the set-up, cascaded pairs of standard fiber gratings are used for extracting the required multiple wavelengths within their corresponding bands. A thorough design parameter description, optical array details, and full simulation results, such as the full multi-wavelength spectrum, peak and average powers for each generated wavelength, linewidth at FWHM for each generated signal, and individual and overall conversion efficiency, will be included in the manuscript.
NASA Technical Reports Server (NTRS)
Mcdougal, David S. (Editor)
1990-01-01
FIRE (First ISCCP Regional Experiment) is a U.S. cloud-radiation research program formed in 1984 to increase the basic understanding of cirrus and marine stratocumulus cloud systems, to develop realistic parameterizations for these systems, and to validate and improve ISCCP cloud product retrievals. Presentations of results culminating the first 5 years of FIRE research activities were highlighted. The 1986 Cirrus Intensive Field Observations (IFO), the 1987 Marine Stratocumulus IFO, the Extended Time Observations (ETO), and modeling activities are described. Collaborative efforts involving the comparison of multiple data sets, incorporation of data measurements into modeling activities, validation of ISCCP cloud parameters, and development of parameterization schemes for General Circulation Models (GCMs) are described.
Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise
2015-04-06
Land surface emissivity is a crucial parameter in surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could represent directional emissivity well, with an error of less than 0.002, and it was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside the canopy, which improved the performance of the gap-frequency-based models.
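For context, the kernel-driven BRDF form mentioned above (standard Ross-Li-type structure; the abstract does not spell out which kernels were used) writes the directional quantity as a linear combination of an isotropic term and volumetric and geometric-optical kernels,

$$ R(\theta_s,\theta_v,\varphi) \;=\; f_{\mathrm{iso}} \;+\; f_{\mathrm{vol}}\,K_{\mathrm{vol}}(\theta_s,\theta_v,\varphi) \;+\; f_{\mathrm{geo}}\,K_{\mathrm{geo}}(\theta_s,\theta_v,\varphi), $$

and the same three-coefficient linear structure can be fitted to directional emissivity, which is what makes retrieval from multi-angular thermal infrared observations tractable.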
Diffraction-geometry refinement in the DIALS framework
Waterman, David G.; Winter, Graeme; Gildea, Richard J.; ...
2016-03-30
Rapid data collection and modern computing resources provide the opportunity to revisit the task of optimizing the model of diffraction geometry prior to integration. A comprehensive description is given of new software that builds upon established methods by performing a single global refinement procedure, utilizing a smoothly varying model of the crystal lattice where appropriate. This global refinement technique extends to multiple data sets, providing useful constraints to handle the problem of correlated parameters, particularly for small wedges of data. Examples of advanced uses of the software are given and the design is explained in detail, with particular emphasis on the flexibility and extensibility it entails.
Acoustic characteristics of the medium with gradient change of impedance
NASA Astrophysics Data System (ADS)
Hu, Bo; Yang, Desen; Sun, Yu; Shi, Jie; Shi, Shengguo; Zhang, Haoyang
2015-10-01
The medium with gradient change of acoustic impedance is a new acoustic structure developed from multiple-layer structures. In this paper, the inclusion is introduced and a new set of equations is developed. Better acoustic properties can be obtained with a medium whose acoustic impedance changes gradually. A theoretical formulation is systematically developed to demonstrate how this method is utilized. The sound reflection and absorption coefficients were obtained. Finally, the validity and correctness of this method are assessed by simulations. The results show that an appropriate design of the parameters of the medium can improve underwater acoustic properties.
One-loop β-function for an infinite-parameter family of gauge theories
NASA Astrophysics Data System (ADS)
Krasnov, Kirill
2015-03-01
We continue to study an infinite-parametric family of gauge theories with an arbitrary function of the self-dual part of the field strength as the Lagrangian. The arising one-loop divergences are computed using the background field method. We show that they can all be absorbed by a local redefinition of the gauge field, as well as multiplicative renormalisations of the couplings. Thus, this family of theories is one-loop renormalisable. The infinite set of β-functions for the couplings is compactly stored in a renormalisation group flow for a single function of the curvature. The flow is obtained explicitly.