Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
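As an illustrative aside, the key advantage of IPD noted above is that sensitivity and specificity can be computed at every possible threshold rather than only at published cutoffs. A minimal numpy sketch with hypothetical PHQ-9-like data (all parameters illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical IPD: questionnaire scores (0-27) and a reference diagnosis.
n = 500
disease = rng.random(n) < 0.2
score = np.where(disease,
                 rng.binomial(27, 0.5, n),   # cases tend to score higher
                 rng.binomial(27, 0.2, n))

# With IPD, sensitivity and specificity are available at every threshold,
# here for the conventional "score >= t is positive" rule.
def sens_spec(score, disease, t):
    positive = score >= t
    sens = positive[disease].mean()        # true positive rate
    spec = (~positive[~disease]).mean()    # true negative rate
    return sens, spec

pairs = {t: sens_spec(score, disease, t) for t in range(1, 28)}
```

In a meta-analysis these per-threshold pairs would then be pooled across studies, e.g. with a bivariate random-effects model applied separately at each threshold.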
Mathematical Formulation of Multivariate Euclidean Models for Discrimination Methods.
ERIC Educational Resources Information Center
Mullen, Kenneth; Ennis, Daniel M.
1987-01-01
Multivariate models for the triangular and duo-trio methods are described, and theoretical methods are compared to a Monte Carlo simulation. Implications are discussed for a new theory of multidimensional scaling which challenges the traditional assumption that proximity measures and perceptual distances are monotonically related. (Author/GDC)
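A Monte Carlo comparison of the kind described can be sketched for the triangular method under a unidimensional Thurstonian model: two samples from product A, one from product B shifted by a sensory distance delta, with the assessor choosing the sample most distant from the other two. The decision rule and parameters below are illustrative assumptions, not the paper's multivariate formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def triangle_pc(delta, trials=200_000):
    """Monte Carlo proportion correct for the triangular method:
    two draws from A ~ N(0,1), one from B ~ N(delta,1); the assessor
    picks the stimulus farthest from the other two."""
    a1, a2 = rng.normal(0, 1, (2, trials))
    b = rng.normal(delta, 1, trials)
    # B is correctly identified when it is farther from each A sample
    # than the two A samples are from each other.
    correct = (np.abs(b - a1) > np.abs(a1 - a2)) & \
              (np.abs(b - a2) > np.abs(a1 - a2))
    return correct.mean()

pc_null = triangle_pc(0.0)   # no perceptual difference: chance level 1/3
pc_big = triangle_pc(3.0)    # large difference: well above chance
```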
A new subgrid-scale representation of hydrometeor fields using a multivariate PDF
Griffin, Brian M.; Larson, Vincent E.
2016-06-03
The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate probability density function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, hydrometeor fields were assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from large-eddy simulations (LESs) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. In conclusion, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.
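The delta-lognormal shape mentioned above is a mixture of a point mass at zero (no hydrometeor present) and a lognormal for within-precipitation values. A minimal sampling sketch, with all parameter values purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_delta_lognormal(n, f_precip, mu, sigma):
    """Draw from a delta-lognormal mixture: with probability 1 - f_precip
    the hydrometeor value is exactly zero; otherwise it is lognormal."""
    present = rng.random(n) < f_precip
    return np.where(present, rng.lognormal(mu, sigma, n), 0.0)

# Illustrative parameters: 30% precipitating fraction, lognormal in-cloud values.
q = sample_delta_lognormal(100_000, f_precip=0.3, mu=-7.0, sigma=1.0)
frac_zero = (q == 0).mean()
```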
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...
Williams, L. Keoki; Buu, Anne
2017-01-01
We propose a multivariate genome-wide association test for mixed continuous, binary, and ordinal phenotypes. A latent response model is used to estimate the correlation between phenotypes with different measurement scales so that the empirical distribution of the Fisher's combination statistic under the null hypothesis is estimated efficiently. The simulation study shows that our proposed correlation estimation methods have high levels of accuracy. More importantly, our approach conservatively estimates the variance of the test statistic so that the type I error rate is controlled. The simulation also shows that the proposed test maintains power at a level very close to that of the ideal analysis based on known latent phenotypes while controlling the type I error. In contrast, conventional approaches, such as dichotomizing all observed phenotypes or treating them as continuous variables, could either reduce the power or employ a linear regression model unfit for the data. Furthermore, the statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that conducting a multivariate test on multiple phenotypes can increase the power of identifying markers that may not otherwise be chosen using marginal tests. The proposed method also offers a new approach to analyzing the Fagerström Test for Nicotine Dependence as multivariate phenotypes in genome-wide association studies. PMID:28081206
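For reference, the Fisher's combination statistic at the heart of this approach is T = -2 Σ log pᵢ, which is chi-square with 2k degrees of freedom when the k tests are independent (with correlated phenotypes its null distribution must be estimated, as the abstract describes). A stdlib-only sketch using the closed-form chi-square survival function for even degrees of freedom:

```python
from math import exp, log

def fisher_combination(pvalues):
    """Fisher's combination statistic T = -2 * sum(log p_i).
    For independent tests T ~ chi-square(2k) under the global null;
    for even df = 2k the survival function has the closed form
    exp(-T/2) * sum_{i<k} (T/2)^i / i!."""
    k = len(pvalues)
    T = -2.0 * sum(log(p) for p in pvalues)
    half = T / 2.0
    term, sf = 1.0, 0.0
    for i in range(k):         # accumulate (T/2)^i / i! for i = 0..k-1
        sf += term
        term *= half / (i + 1)
    return T, sf * exp(-half)

T, p_comb = fisher_combination([0.04, 0.10, 0.30])
```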
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
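The moment-constrained linear program can be sketched in a much simplified univariate form: find nonnegative weights on a fixed size grid that exactly reproduce a few moments of a particle population. The grid, the lognormal test population, and the plain linear cost (standing in for the paper's entropy-inspired cost) are all illustrative assumptions; LP vertex solutions are automatically sparse, with at most as many nonzero weights as moment constraints:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# Hypothetical aerosol diameters (lognormal population) and their raw moments.
d = rng.lognormal(mean=np.log(0.1), sigma=0.6, size=50_000)
moments = np.array([(d ** k).mean() for k in range(4)])   # m0..m3, m0 = 1

# Candidate quadrature abscissas on a fixed grid spanning the population.
grid = np.geomspace(d.min(), d.max(), 60)
A_eq = np.vander(grid, N=4, increasing=True).T            # rows: grid^0..grid^3

# Simple linear cost as a stand-in for the entropy-inspired objective.
res = linprog(c=np.ones_like(grid), A_eq=A_eq, b_eq=moments,
              bounds=(0, None), method="highs")
weights = res.x
```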
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. High-resolution simulations are often computationally expensive and become impractical for parametric studies at different input values. To overcome these difficulties, we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
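The data-augmentation step described above, sampling liabilities from a truncated normal on the side dictated by the observed category, can be sketched for a single binary trait. The threshold at zero and the mean are illustrative; a simple rejection sampler stands in for the multivariate truncated-normal draw used in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical binary trait with underlying Gaussian liability l = mu + e.
mu = 0.4
y = (mu + rng.normal(size=2000)) > 0        # observed 0/1 categories

# One augmentation draw: liability | (y, mu) is N(mu, 1) truncated at 0,
# positive when y = 1 and non-positive when y = 0 (rejection sampling).
liab = np.empty(y.size)
todo = np.ones(y.size, dtype=bool)
while todo.any():
    draw = mu + rng.normal(size=todo.sum())
    ok = (draw > 0) == y[todo]              # keep draws on the correct side
    idx = np.flatnonzero(todo)[ok]
    liab[idx] = draw[ok]
    todo[idx] = False
```

Within a full Gibbs sampler this draw would alternate with updates of location parameters and the residual covariance matrix.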
An imputed forest composition map for New England screened by species range boundaries
Matthew J. Duveneck; Jonathan R. Thompson; B. Tyler Wilson
2015-01-01
Initializing forest landscape models (FLMs) to simulate changes in tree species composition requires accurate fine-scale forest attribute information mapped continuously over large areas. Nearest-neighbor imputation maps, maps developed from multivariate imputation of field plots, have high potential for use as the initial condition within FLMs, but the tendency for...
Exploring connectivity with large-scale Granger causality on resting-state functional MRI.
DSouza, Adora M; Abidin, Anas Z; Leistritz, Lutz; Wismüller, Axel
2017-08-01
Large-scale Granger causality (lsGC) is a recently developed, resting-state functional MRI (fMRI) connectivity analysis approach that estimates multivariate voxel-resolution connectivity. Unlike most commonly used multivariate approaches, which establish coarse-resolution connectivity by aggregating voxel time series to avoid an underdetermined problem, lsGC estimates voxel-resolution, fine-grained connectivity by incorporating an embedded dimension reduction. We investigate application of lsGC on realistic fMRI simulations, modeling smoothing of neuronal activity by the hemodynamic response function and repetition time (TR), and on empirical resting-state fMRI data. Subsequently, functional subnetworks are extracted from lsGC connectivity measures for both datasets and validated quantitatively. We also provide guidelines for selecting the free parameters of lsGC. Results indicate that lsGC reliably recovers the underlying network structure, with an area under the receiver operating characteristic curve (AUC) of 0.93 at TR=1.5 s for a 10-min session of fMRI simulations. Furthermore, subnetworks of closely interacting modules are recovered from the aforementioned lsGC networks. Results on empirical resting-state fMRI data demonstrate recovery of visual and motor cortex in close agreement with spatial maps obtained from (i) a visuo-motor fMRI stimulation task sequence (Accuracy=0.76) and (ii) independent component analysis (ICA) of resting-state fMRI (Accuracy=0.86). Compared with the conventional Granger causality approach (AUC=0.75), lsGC produces better network recovery on fMRI simulations. Furthermore, the conventional approach cannot recover functional subnetworks from empirical fMRI data, since quantifying voxel-resolution connectivity with it is not possible as a consequence of the underdetermined problem. Functional network recovery from fMRI data suggests that lsGC gives useful insight into connectivity patterns from resting-state fMRI at voxel resolution. Copyright © 2017 Elsevier B.V. All rights reserved.
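The underlying Granger principle, that x causes y if lags of x improve the prediction of y beyond y's own lags, can be sketched in a bivariate toy setting. The series, coupling strength, and VAR order are illustrative; lsGC itself additionally embeds a dimension reduction that this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two hypothetical series in which x drives y with a one-sample lag.
n, p = 2000, 2                              # series length and VAR order
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.3 * y[t - 1] + 0.5 * rng.normal()

def rss(target, regressors):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return ((target - regressors @ beta) ** 2).sum()

lag = lambda s, k: s[p - k:n - k]           # lag-k regressor aligned with s[p:]

# Granger causality x -> y: do lags of x improve the prediction of y?
Y = y[p:]
restricted = np.column_stack([np.ones(n - p)] + [lag(y, k) for k in (1, 2)])
full = np.column_stack([restricted] + [lag(x, k) for k in (1, 2)])
gc_xy = np.log(rss(Y, restricted) / rss(Y, full))   # Geweke-style measure

# Reverse direction y -> x (should be near zero for this system).
X = x[p:]
restricted_x = np.column_stack([np.ones(n - p)] + [lag(x, k) for k in (1, 2)])
full_x = np.column_stack([restricted_x] + [lag(y, k) for k in (1, 2)])
gc_yx = np.log(rss(X, restricted_x) / rss(X, full_x))
```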
NASA Astrophysics Data System (ADS)
Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko
2015-04-01
Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods (i.e. particle tracking random walk (PTRW) or continuous time random walk (CTRW)). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to get a better understanding of the processes at the pore scale. For our analysis we track the positions of 10,000 particles migrating through the pore space over time. The data we use come from micro-CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale based on the Navier-Stokes equation. This flow field realistically describes flow inside the pore space and we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk and simulate pore-scale transport. Finally, we use the obtained particle trajectories to do a multivariate statistical analysis of the particle motion at the pore scale. Our analysis is based on copulas. Every multivariate joint distribution is a combination of its univariate marginal distributions. The copula represents the dependence structure of those univariate marginals and is therefore useful to observe correlation and non-Gaussian interactions (i.e. non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We are investigating three different transport distances: 1) The distance where the statistical dependence between particle increments can be modelled as an order-one Markov process.
This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks starts. 2) The distance where bivariate statistical dependence simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW). 3) The distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal the characteristic dependencies that influence transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the dependence found is non-linear (i.e. beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.
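The copula idea used above, separating the dependence structure from the marginals via rank transforms, can be illustrated with a pair of synthetic "increments" that are strongly dependent yet uncorrelated (the data-generating model below is an illustrative assumption, not the pore-scale data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical increment pairs with a nonlinear, non-Gaussian dependence.
n = 20_000
z = rng.normal(size=n)
dx1 = z + 0.1 * rng.normal(size=n)          # "longitudinal" increment
dx2 = z ** 2 + 0.1 * rng.normal(size=n)     # dependent but uncorrelated

def to_copula_scale(v):
    """Rank-transform a sample to (0, 1); the joint law of the transformed
    pairs is the empirical copula, isolating dependence from marginals."""
    return (np.argsort(np.argsort(v)) + 1) / (len(v) + 1)

u1, u2 = to_copula_scale(dx1), to_copula_scale(dx2)
pearson = np.corrcoef(dx1, dx2)[0, 1]       # near zero: misses the dependence
spearman = np.corrcoef(u1, u2)[0, 1]        # rank correlation, also near zero
# A copula-level statistic still detects it: u2 grows with |z|, as does the
# distance of u1 from its median.
nonlinear_dep = np.corrcoef(np.abs(u1 - 0.5), u2)[0, 1]
```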
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
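The standard way to draw correlated (rather than independent) inputs from a subjective multivariate normal is via the Cholesky factor of the covariance matrix. A minimal sketch with illustrative means, standard deviations, and correlations:

```python
import numpy as np

rng = np.random.default_rng(8)

# Subjective means, standard deviations, and correlations for three
# hypothetical inputs (e.g. price, growth, cost in a forestry simulation).
mean = np.array([100.0, 3.0, 40.0])
sd = np.array([15.0, 0.5, 5.0])
corr = np.array([[ 1.0, 0.6, -0.3],
                 [ 0.6, 1.0,  0.2],
                 [-0.3, 0.2,  1.0]])
cov = np.outer(sd, sd) * corr

# Correlated draws: independent standard normals mapped through the
# Cholesky factor L, so that Cov(L z) = L L^T = cov.
L = np.linalg.cholesky(cov)
z = rng.normal(size=(100_000, 3))
draws = mean + z @ L.T
```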
Multiscale analysis of information dynamics for linear multivariate processes.
Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele
2016-08-01
In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.
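The multiscale "rescaling" referred to above is the standard coarse-graining step: averaging non-overlapping windows, i.e. a moving-average filter followed by downsampling, which is exactly what injects the MA component into a VAR process. A toy numpy sketch showing how coarse-graining reshapes the autocorrelation of an AR(1) process (the coefficient 0.9 and scale 5 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`
    (moving-average filter + downsampling)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

# An AR(1) process with lag-1 autocorrelation 0.9.
x = np.zeros(50_000)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal()

def lag1_corr(v):
    return np.corrcoef(v[:-1], v[1:])[0, 1]

r1_scale1 = lag1_corr(x)                    # ~0.9 at the original scale
r1_scale5 = lag1_corr(coarse_grain(x, 5))   # reduced by temporal averaging
```

The coarse-grained series is no longer a pure AR process (an aggregated AR(1) is ARMA(1,1)), which is why the SS/VARMA representation above is needed for exact analytical measures.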
Exploring image data assimilation in the prospect of high-resolution satellite oceanic observations
NASA Astrophysics Data System (ADS)
Durán Moro, Marina; Brankart, Jean-Michel; Brasseur, Pierre; Verron, Jacques
2017-07-01
Satellite sensors increasingly provide high-resolution (HR) observations of the ocean. They supply observations of sea surface height (SSH) and of tracers of the dynamics such as sea surface salinity (SSS) and sea surface temperature (SST). In particular, the Surface Water Ocean Topography (SWOT) mission will provide measurements of the surface ocean topography at very high resolution, delivering unprecedented information on the meso-scale and submeso-scale dynamics. This study investigates the feasibility of using these measurements to reconstruct meso-scale features simulated by numerical models, in particular in the vertical dimension. A methodology to reconstruct three-dimensional (3D) multivariate meso-scale scenes is developed using a HR numerical model of the Solomon Sea region. An inverse problem is defined in the framework of a twin experiment in which synthetic observations are used. A true state is chosen among the 3D multivariate states and is considered the reference state. In order to correct a first guess of this true state, a two-step analysis is carried out. A probability distribution of the first guess is defined and updated at each step of the analysis: (i) the first step applies the analysis scheme of a reduced-order Kalman filter to update the first-guess probability distribution using the SSH observation; (ii) the second step minimizes a cost function using observations of HR image structure, and a new probability distribution is estimated. The analysis is extended to the vertical dimension using 3D multivariate empirical orthogonal functions (EOFs), and the probabilistic approach allows the probability distribution to be updated through the two-step analysis. Experiments show that the proposed technique succeeds in correcting a multivariate state using meso-scale and submeso-scale information contained in HR SSH and image-structure observations.
It also demonstrates how the surface information can be used to reconstruct the ocean state below the surface.
Spatiotemporal multivariate mixture models for Bayesian model selection in disease mapping.
Lawson, A B; Carroll, R; Faes, C; Kirby, R S; Aregay, M; Watjou, K
2017-12-01
It is often the case that researchers wish to simultaneously explore the behavior of and estimate overall risk for multiple, related diseases with varying rarity while accounting for potential spatial and/or temporal correlation. In this paper, we propose a flexible class of multivariate spatio-temporal mixture models to fill this role. Further, these models offer flexibility with the potential for model selection as well as the ability to accommodate lifestyle, socio-economic, and physical environmental variables with spatial, temporal, or both structures. Here, we explore the capability of this approach via a large-scale simulation study and examine a motivating data example involving three cancers in South Carolina. The results, which are focused on four model variants, suggest that all models possess the ability to recover the simulation ground truth and display improved model fit over two baseline Knorr-Held spatio-temporal interaction model variants in a real data application.
NASA Astrophysics Data System (ADS)
Azami, Hamed; Escudero, Javier
2017-01-01
Multiscale entropy (MSE) is an appealing tool to characterize the complexity of time series over multiple temporal scales. Recent developments in the field have tried to extend the MSE technique in different ways. Building on these trends, we propose the so-called refined composite multivariate multiscale fuzzy entropy (RCmvMFE) whose coarse-graining step uses variance (RCmvMFEσ2) or mean (RCmvMFEμ). We investigate the behavior of these multivariate methods on multichannel white Gaussian and 1/f noise signals, and two publicly available biomedical recordings. Our simulations demonstrate that RCmvMFEσ2 and RCmvMFEμ lead to more stable results and are less sensitive to the signals' length in comparison with the other existing multivariate multiscale entropy-based methods. The classification results also show that using both the variance and mean in the coarse-graining step offers complexity profiles with complementary information for biomedical signal analysis. We also made freely available all the Matlab codes used in this paper.
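The two coarse-graining variants mentioned above differ only in which window statistic survives the downsampling: the mean (classical MSE-style graining) or the variance (capturing scale-wise volatility). A minimal sketch on white Gaussian noise, with window length 10 as an illustrative scale:

```python
import numpy as np

rng = np.random.default_rng(11)

def coarse_grain(x, scale, stat):
    """Coarse-grain with either the window mean or the window variance,
    as in the RCmvMFE-mu and RCmvMFE-sigma^2 variants respectively."""
    n = len(x) // scale
    w = x[:n * scale].reshape(n, scale)
    return w.mean(axis=1) if stat == "mean" else w.var(axis=1)

x = rng.normal(size=30_000)                 # white Gaussian noise
mu_cg = coarse_grain(x, 10, "mean")         # variance shrinks as 1/scale
var_cg = coarse_grain(x, 10, "var")         # tracks local volatility
```

Entropy estimates (fuzzy entropy in the paper) are then computed on each coarse-grained series, yielding complementary complexity profiles.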
Visualization of the Eastern Renewable Generation Integration Study: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron
The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.
Fontes, Cristiano Hora; Budman, Hector
2017-11-01
A clustering problem involving multivariate time series (MTS) requires the selection of similarity metrics. This paper shows the limitations of the PCA similarity factor (SPCA) as a single metric in nonlinear problems where there are differences in the magnitude of the same process variables due to expected changes in operating conditions. A novel method for clustering MTS based on a combination of SPCA and the average-based Euclidean distance (AED) within a fuzzy clustering approach is proposed. Case studies involving either simulated or real industrial data collected from a large-scale gas turbine are used to illustrate that the hybrid approach enhances the ability to recognize normal and fault operating patterns. This paper also proposes an oversampling procedure to create synthetic multivariate time series that can be useful in commonly occurring situations involving unbalanced data sets. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
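The PCA similarity factor compares two MTS through the alignment of their leading principal-component subspaces: the average squared cosine of the principal angles between them, which is 1 for identical subspaces and small for unrelated ones. A minimal sketch assuming samples-by-variables matrices and an illustrative subspace dimension k=2:

```python
import numpy as np

rng = np.random.default_rng(12)

def spca(X, Y, k=2):
    """PCA similarity factor between two multivariate time series
    (samples x variables): ||U^T V||_F^2 / k for the leading k
    principal-component loading matrices U, V of each series."""
    def components(Z, k):
        Zc = Z - Z.mean(axis=0)
        _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
        return Vt[:k].T                     # variables x k loading matrix
    U, V = components(X, k), components(Y, k)
    return (np.linalg.norm(U.T @ V) ** 2) / k

A = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
similar = spca(A, A + 0.01 * rng.normal(size=A.shape))   # near 1
different = spca(A, rng.normal(size=(500, 4)))           # much lower
```

The paper's hybrid metric additionally mixes in the average-based Euclidean distance, precisely because SPCA is blind to magnitude differences between series sharing the same subspace.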
Large-scale derived flood frequency analysis based on continuous simulation
NASA Astrophysics Data System (ADS)
Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno
2016-04-01
There is an increasing need for spatially consistent flood risk assessments at the regional scale (several hundred thousand km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries.
This continuous large-scale approach overcomes several drawbacks reported for traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
Jiang, Xuejun; Guo, Xu; Zhang, Ning; Wang, Bo
2018-01-01
This article presents and investigates performance of a series of robust multivariate nonparametric tests for detection of location shift between two multivariate samples in randomized controlled trials. The tests are built upon robust estimators of distribution locations (medians, Hodges-Lehmann estimators, and an extended U statistic) with both unscaled and scaled versions. The nonparametric tests are robust to outliers and do not assume that the two samples are drawn from multivariate normal distributions. Bootstrap and permutation approaches are introduced for determining the p-values of the proposed test statistics. Simulation studies are conducted and numerical results are reported to examine performance of the proposed statistical tests. The numerical results demonstrate that the robust multivariate nonparametric tests constructed from the Hodges-Lehmann estimators are more efficient than those based on medians and the extended U statistic. The permutation approach can provide a more stringent control of Type I error and is generally more powerful than the bootstrap procedure. The proposed robust nonparametric tests are applied to detect multivariate distributional difference between the intervention and control groups in the Thai Healthy Choices study and examine the intervention effect of a four-session motivational interviewing-based intervention developed in the study to reduce risk behaviors among youth living with HIV. PMID:29672555
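The building blocks above, a Hodges-Lehmann location estimate and a permutation p-value, can be sketched in a univariate toy form (the paper's tests are multivariate; the data, shift, and permutation count here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(13)

def hodges_lehmann_shift(x, y):
    """Two-sample Hodges-Lehmann estimator of location shift:
    the median of all pairwise differences y_j - x_i."""
    return np.median(np.subtract.outer(y, x))

def permutation_pvalue(x, y, n_perm=2000, rng=rng):
    """Permutation p-value using the absolute HL shift as the statistic:
    relabel the pooled sample and count shifts at least as extreme."""
    observed = abs(hodges_lehmann_shift(x, y))
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        stat = abs(hodges_lehmann_shift(perm[:len(x)], perm[len(x):]))
        count += stat >= observed
    return (count + 1) / (n_perm + 1)       # add-one correction

x = rng.normal(0.0, 1, 60)
y = rng.normal(1.0, 1, 60)                  # true shift of 1
shift = hodges_lehmann_shift(x, y)
pval = permutation_pvalue(x, y)
```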
A path-level exact parallelization strategy for sequential simulation
NASA Astrophysics Data System (ADS)
Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.
2018-01-01
Sequential Simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize the SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution on a single machine.
Simulating Multivariate Nonnormal Data Using an Iterative Algorithm
ERIC Educational Resources Information Center
Ruscio, John; Kaczetow, Walter
2008-01-01
Simulating multivariate nonnormal data with specified correlation matrices is difficult. One especially popular method is Vale and Maurelli's (1983) extension of Fleishman's (1978) polynomial transformation technique to multivariate applications. This requires the specification of distributional moments and the calculation of an intermediate…
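Although the abstract is truncated, the algorithm it introduces rests on a rank-reordering idea that can be sketched briefly. The Python toy below is a simplified, single-pass version for two variables (Ruscio and Kaczetow additionally iterate on the intermediate correlation matrix to sharpen the match); all distributions and parameter choices are illustrative.

```python
import math

def correlated_nonnormal(xs, ys, rho, rng):
    """Reorder two independently drawn nonnormal samples according to
    the ranks of a pair of normal deviates with correlation rho. Each
    marginal distribution is preserved exactly (the output is a
    permutation of the input), while the induced correlation
    approximates the target rho."""
    n = len(xs)
    z1 = [rng.gauss(0, 1) for _ in range(n)]
    z2 = [rho * a + math.sqrt(1 - rho * rho) * rng.gauss(0, 1) for a in z1]

    def reorder(sample, z):
        # Give the i-th smallest z the i-th smallest sample value.
        order = sorted(range(n), key=lambda i: z[i])
        ranked = sorted(sample)
        out = [0.0] * n
        for rank, i in enumerate(order):
            out[i] = ranked[rank]
        return out

    return reorder(xs, z1), reorder(ys, z2)
```

Because rank matching only approximates the target Pearson correlation for nonnormal marginals, the full iterative algorithm adjusts the intermediate correlation until the realized correlation converges to the specification.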
NASA Astrophysics Data System (ADS)
Bonne, F.; Alamir, M.; Bonnay, P.
2017-02-01
This paper deals with multivariable constrained model predictive control for Warm Compression Stations (WCS). WCSs are subject to numerous constraints (limits on pressures, actuators) that need to be satisfied using appropriate algorithms. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to achieve precise control of pressures in normal operation or to avoid reaching stopping criteria (such as excessive pressures) under high disturbances (such as the pulsed heat loads expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA)). The paper details the simulator used to validate this new control scheme and the associated simulation results on the SBTs WCS. This work is partially supported by the French National Research Agency (ANR), task agreement ANR-13-SEED-0005.
Multivariate and Multiscale Data Assimilation in Terrestrial Systems: A Review
Montzka, Carsten; Pauwels, Valentijn R. N.; Franssen, Harrie-Jan Hendricks; Han, Xujun; Vereecken, Harry
2012-01-01
More and more terrestrial observational networks are being established to monitor climatic, hydrological and land-use changes in different regions of the world. In these networks, time series of states and fluxes are recorded in an automated manner, often with a high temporal resolution. These data are important for the understanding of water, energy, and/or matter fluxes, as well as their biological and physical drivers and interactions with and within the terrestrial system. Similarly, the number and accuracy of variables, which can be observed by spaceborne sensors, are increasing. Data assimilation (DA) methods utilize these observations in terrestrial models in order to increase process knowledge as well as to improve forecasts for the system being studied. The widely implemented automation in observing environmental states and fluxes makes an operational computation more and more feasible, and it opens the perspective of short-time forecasts of the state of terrestrial systems. In this paper, we review the state of the art with respect to DA, focusing on the joint assimilation of observational data from different spatial scales and different data types. An introduction is given to different DA methods, such as the Ensemble Kalman Filter (EnKF), Particle Filter (PF) and variational methods (3/4D-VAR). In this review, we distinguish between four major DA approaches: (1) univariate single-scale DA (UVSS), which is the approach used in the majority of published DA applications, (2) univariate multiscale DA (UVMS), referring to a methodology which acknowledges that at least some of the assimilated data are measured at a different scale than the computational grid scale, (3) multivariate single-scale DA (MVSS), dealing with the assimilation of at least two different data types, and (4) combined multivariate multiscale DA (MVMS).
Finally, we conclude with a discussion on the advantages and disadvantages of the assimilation of multiple data types in a simulation model. Existing approaches can be used to simultaneously update several model states and model parameters if applicable. In other words, the basic principles for multivariate data assimilation are already available. We argue that a better understanding of the measurement errors for different observation types, improved estimates of observation bias and improved multiscale assimilation methods for data which scale nonlinearly is important to properly weight them in multiscale multivariate data assimilation. In this context, improved cross-validation of different data types, and increased ground truth verification of remote sensing products are required. PMID:23443380
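As a minimal illustration of the analysis step shared by the ensemble methods reviewed above, the following Python sketch implements a scalar (univariate single-scale, i.e. UVSS) EnKF update with perturbed observations; the state, observation model and noise levels are purely illustrative.

```python
def enkf_update(ensemble, obs, obs_var, rng):
    """Scalar Ensemble Kalman Filter analysis step: each member is
    nudged toward a perturbed copy of the observation, weighted by the
    Kalman gain built from the ensemble variance. The identity
    observation operator is assumed."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)  # Kalman gain for H = 1
    return [x + gain * (obs + rng.gauss(0, obs_var ** 0.5) - x)
            for x in ensemble]
```

Each member is pulled toward the observation in proportion to the gain, moving the ensemble mean toward the data and shrinking the ensemble spread, which is exactly the behaviour a multivariate or multiscale scheme generalizes to vectors of states and observations.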
Ng, Danny Siu-Chun; Sun, Zihan; Young, Alvin Lerrmann; Ko, Simon Tak-Chuen; Lok, Jerry Ka-Hing; Lai, Timothy Yuk-Yau; Sikder, Shameema; Tham, Clement C
2018-01-01
To identify residents' perceived barriers to learning phacoemulsification surgical procedures and to evaluate whether virtual reality simulation training changed these perceptions. The ophthalmology residents undertook a simulation phacoemulsification course and proficiency assessment on the Eyesi system using the previously validated training modules of intracapsular navigation, anti-tremor, capsulorrhexis, and cracking/chopping. A cross-sectional, multicenter survey on the perceived difficulties in performing phacoemulsification tasks on patients, based on the validated International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubric (ICO-OSCAR), using a 5-point Likert scale (1 = least and 5 = most difficulty), was conducted among residents with or without prior simulation training. Mann-Whitney U tests were carried out to compare the mean scores, and multivariate regression analyses were performed to evaluate the association of lower scores with the following potential predictors: 1) higher level trainee, 2) can complete phacoemulsification most of the time (>90%) without supervisor's intervention, and 3) prior simulation training. The study was conducted in ophthalmology residency training programs in five regional hospitals in Hong Kong. Of the 22 residents, 19 responded (86.3%), of whom 13 (68.4%) had completed simulation training. Nucleus cracking/chopping was ranked highest in difficulty by all respondents, followed by capsulorrhexis completion and nucleus rotation/manipulation. Respondents with prior simulation training had significantly lower difficulty scores on these three tasks (nucleus cracking/chopping 3.85 vs 4.75, P = 0.03; capsulorrhexis completion 3.31 vs 4.40, P = 0.02; and nucleus rotation/manipulation 3.00 vs 4.75, P = 0.01). In multivariate analyses, simulation training was significantly associated with lower difficulty scores on these three tasks.
Residents who had completed Eyesi simulation training had higher confidence in performing the most difficult tasks perceived during phacoemulsification.
Measuring multiple spike train synchrony.
Kreuz, Thomas; Chicharro, Daniel; Andrzejak, Ralph G; Haas, Julie S; Abarbanel, Henry D I
2009-10-15
Measures of multiple spike train synchrony are essential in order to study issues such as spike timing reliability, network synchronization, and neuronal coding. These measures can broadly be divided into multivariate measures and averages over bivariate measures. One of the most recent bivariate approaches, the ISI-distance, employs the ratio of instantaneous interspike intervals (ISIs). In this study we propose two extensions of the ISI-distance, the straightforward averaged bivariate ISI-distance and the multivariate ISI-diversity based on the coefficient of variation. Like the original measure, these extensions combine many properties desirable in applications to real data. In particular, they are parameter-free, time scale independent, and easy to visualize in a time-resolved manner, as we illustrate with in vitro recordings from a cortical neuron. Using a simulated network of Hindmarsh-Rose neurons as a controlled configuration, we compare the performance of our methods in distinguishing different levels of multi-neuron spike train synchrony to the performance of six other previously published measures. We show and explain why the averaged bivariate measures perform better than the multivariate ones and why the multivariate ISI-diversity is the best performer among the multivariate methods. Finally, in a comparison against standard methods that rely on moving window estimates, we use single-unit monkey data to demonstrate the advantages of the instantaneous nature of our methods.
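The bivariate ISI-distance underlying these extensions can be sketched in a few lines. The Python toy below is a simplified, grid-sampled version (the published measure is time-resolved and treats the edges of the recording more carefully): at each sampled time it forms a normalized ratio of the two trains' current interspike intervals and averages its absolute value, so identical trains score 0 and the score is bounded by 1.

```python
def isi_distance(spikes_x, spikes_y, t0, t1, steps=1000):
    """Time-averaged bivariate ISI-distance (sketch). At each sampled
    time t, compare the interspike intervals containing t via a
    normalized ratio in (-1, 1) and average its absolute value."""
    def current_isi(spikes, t):
        prev = max((s for s in spikes if s <= t), default=t0)
        nxt = min((s for s in spikes if s > t), default=t1)
        return nxt - prev

    total = 0.0
    for k in range(steps):
        t = t0 + (t1 - t0) * (k + 0.5) / steps
        ix, iy = current_isi(spikes_x, t), current_isi(spikes_y, t)
        ratio = ix / iy - 1.0 if ix <= iy else -(iy / ix - 1.0)
        total += abs(ratio)
    return total / steps
```

Averaging this quantity over all train pairs gives the averaged bivariate ISI-distance; the multivariate ISI-diversity instead summarizes the spread of the instantaneous ISIs across trains via the coefficient of variation.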
Multivariate analysis of scale-dependent associations between bats and landscape structure
Gorresen, P.M.; Willig, M.R.; Strauss, R.E.
2005-01-01
The assessment of biotic responses to habitat disturbance and fragmentation generally has been limited to analyses at a single spatial scale. Furthermore, methods to compare responses between scales have lacked the ability to discriminate among patterns related to the identity, strength, or direction of associations of biotic variables with landscape attributes. We present an examination of the relationship of population- and community-level characteristics of phyllostomid bats with habitat features that were measured at multiple spatial scales in Atlantic rain forest of eastern Paraguay. We used a matrix of partial correlations between each biotic response variable (i.e., species abundance, species richness, and evenness) and a suite of landscape characteristics to represent the multifaceted associations of bats with spatial structure. Correlation matrices can correspond based on either the strength (i.e., magnitude) or direction (i.e., sign) of association. Therefore, a simulation model independently evaluated correspondence in the magnitude and sign of correlations among scales, and results were combined via a meta-analysis to provide an overall test of significance. Our approach detected both species-specific differences in response to landscape structure and scale dependence in those responses. This matrix-simulation approach has broad applicability to ecological situations in which multiple intercorrelated factors contribute to patterns in space or time. © 2005 by the Ecological Society of America.
Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Schröter, Kai; Merz, Bruno
2016-05-01
Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like the BT-FLEMO model used in this study, which inherently provide uncertainty information, is the way forward.
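The contrast between a uni-variable stage-damage function and a multi-variable loss model can be made concrete with a toy example. The functional forms, coefficients, and predictor names below are purely illustrative (they are not the calibrated FLEMO/BT-FLEMO values); the point is only that additional predictors such as exposure and precaution modify the depth-only estimate.

```python
def stage_damage(depth):
    """Uni-variable stage-damage function: relative loss grows with
    inundation depth and saturates at total loss. Coefficients are
    illustrative, not calibrated values."""
    return min(1.0, 0.27 * max(0.0, depth) ** 0.5)

def multi_variable_loss(depth, floor_area, precaution):
    """Multi-variable sketch: the depth term is scaled by exposure
    (floor area relative to a hypothetical 100 m2 reference) and
    reduced when private precautionary measures are in place."""
    exposure = min(1.5, floor_area / 100.0)
    mitigation = 0.7 if precaution else 1.0
    return min(1.0, stage_damage(depth) * exposure * mitigation)
```

Up-scaling such a model to land-use units then requires area-wide estimates of the extra predictors, which is precisely the source of the additional uncertainty discussed above.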
Field applications of stand-off sensing using visible/NIR multivariate optical computing
NASA Astrophysics Data System (ADS)
Eastwood, DeLyle; Soyemi, Olusola O.; Karunamuni, Jeevanandra; Zhang, Lixia; Li, Hongli; Myrick, Michael L.
2001-02-01
A novel multivariate visible/NIR optical computing approach applicable to standoff sensing will be demonstrated with porphyrin mixtures as examples. The ultimate goal is to develop environmental or counter-terrorism sensors for chemicals such as organophosphorus (OP) pesticides or chemical warfare simulants in the near infrared spectral region. The mathematical operation that characterizes prediction of properties via regression from optical spectra is a calculation of inner products between the spectrum and the pre-determined regression vector. The result is scaled appropriately and offset to correspond to the basis from which the regression vector is derived. The process involves collecting spectroscopic data and synthesizing a multivariate vector using a pattern recognition method. Then, an interference coating is designed that reproduces the pattern of the multivariate vector in its transmission or reflection spectrum, and appropriate interference filters are fabricated. High and low refractive index materials such as Nb2O5 and SiO2 are excellent choices for the visible and near infrared regions. The proof of concept has now been established for this system in the visible and will later be extended to chemicals such as OP compounds in the near and mid-infrared.
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Soeder, J. F.; Seldner, K.; Cwynar, D. S.
1977-01-01
The design, evaluation, and testing of a practical, multivariable, linear quadratic regulator control for the F100 turbofan engine were accomplished. NASA evaluation of the multivariable control logic and implementation are covered. The evaluation utilized a real time, hybrid computer simulation of the engine. Results of the evaluation are presented, and recommendations concerning future engine testing of the control are made. Results indicated that the engine testing of the control should be conducted as planned.
Analyzing Multiple Outcomes in Clinical Research Using Multivariate Multilevel Models
Baldwin, Scott A.; Imel, Zac E.; Braithwaite, Scott R.; Atkins, David C.
2014-01-01
Objective Multilevel models have become a standard data analysis approach in intervention research. Although the vast majority of intervention studies involve multiple outcome measures, few studies use multivariate analysis methods. The authors discuss multivariate extensions to the multilevel model that can be used by psychotherapy researchers. Method and Results Using simulated longitudinal treatment data, the authors show how multivariate models extend common univariate growth models and how the multivariate model can be used to examine multivariate hypotheses involving fixed effects (e.g., does the size of the treatment effect differ across outcomes?) and random effects (e.g., is change in one outcome related to change in the other?). An online supplemental appendix provides annotated computer code and simulated example data for implementing a multivariate model. Conclusions Multivariate multilevel models are flexible, powerful models that can enhance clinical research. PMID:24491071
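The random-effects question mentioned above (is change in one outcome related to change in the other?) can be mimicked with a small simulation. The Python sketch below generates longitudinal data for two outcomes whose person-specific slopes are correlated; all parameter values are illustrative, and a real analysis would fit a multivariate multilevel model rather than the per-person regressions used in the check.

```python
def simulate_bivariate_growth(n_people, n_times, slope_corr, rng):
    """Simulate longitudinal data with two outcomes per person whose
    person-specific growth slopes share correlation slope_corr.
    Returns, per person, a list of (time, outcome1, outcome2) rows.
    Means, variances and noise levels are illustrative."""
    people = []
    for _ in range(n_people):
        u1 = rng.gauss(0, 1)
        u2 = slope_corr * u1 + (1 - slope_corr ** 2) ** 0.5 * rng.gauss(0, 1)
        s1, s2 = 0.5 + 0.3 * u1, 0.5 + 0.3 * u2  # correlated random slopes
        rows = [(t, s1 * t + rng.gauss(0, 0.1), s2 * t + rng.gauss(0, 0.1))
                for t in range(n_times)]
        people.append(rows)
    return people
```

Estimating a per-person OLS slope for each outcome and correlating the slopes across people recovers approximately slope_corr, which is the random-effect association a multivariate multilevel model estimates directly, with proper shrinkage and standard errors.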
Terascale direct numerical simulations of turbulent combustion using S3D
NASA Astrophysics Data System (ADS)
Chen, J. H.; Choudhary, A.; de Supinski, B.; DeVries, M.; Hawkes, E. R.; Klasky, S.; Liao, W. K.; Ma, K. L.; Mellor-Crummey, J.; Podhorszki, N.; Sankaran, R.; Shende, S.; Yoo, C. S.
2009-01-01
Computational science is paramount to the understanding of underlying processes in internal combustion engines of the future that will utilize non-petroleum-based alternative fuels, including carbon-neutral biofuels, and burn in new combustion regimes that will attain high efficiency while minimizing emissions of particulates and nitrogen oxides. Next-generation engines will likely operate at higher pressures, with greater amounts of dilution and utilize alternative fuels that exhibit a wide range of chemical and physical properties. Therefore, there is a significant role for high-fidelity simulations, direct numerical simulations (DNS), specifically designed to capture key turbulence-chemistry interactions in these relatively uncharted combustion regimes, and in particular, that can discriminate the effects of differences in fuel properties. In DNS, all of the relevant turbulence and flame scales are resolved numerically using high-order accurate numerical algorithms. As a consequence terascale DNS are computationally intensive, require massive amounts of computing power and generate tens of terabytes of data. Recent results from terascale DNS of turbulent flames are presented here, illustrating its role in elucidating flame stabilization mechanisms in a lifted turbulent hydrogen/air jet flame in a hot air coflow, and the flame structure of a fuel-lean turbulent premixed jet flame. Computing at this scale requires close collaborations between computer and combustion scientists to provide optimized scaleable algorithms and software for terascale simulations, efficient collective parallel I/O, tools for volume visualization of multiscale, multivariate data and automating the combustion workflow. The enabling computer science, applied to combustion science, is also required in many other terascale physics and engineering simulations. 
In particular, performance monitoring is used to identify the performance of key kernels in the DNS code, S3D and especially memory intensive loops in the code. Through the careful application of loop transformations, data reuse in cache is exploited thereby reducing memory bandwidth needs, and hence, improving S3D's nodal performance. To enhance collective parallel I/O in S3D, an MPI-I/O caching design is used to construct a two-stage write-behind method for improving the performance of write-only operations. The simulations generate tens of terabytes of data requiring analysis. Interactive exploration of the simulation data is enabled by multivariate time-varying volume visualization. The visualization highlights spatial and temporal correlations between multiple reactive scalar fields using an intuitive user interface based on parallel coordinates and time histogram. Finally, an automated combustion workflow is designed using Kepler to manage large-scale data movement, data morphing, and archival and to provide a graphical display of run-time diagnostics.
A multiple-fan active control wind tunnel for outdoor wind speed and direction simulation
NASA Astrophysics Data System (ADS)
Wang, Jia-Ying; Meng, Qing-Hao; Luo, Bing; Zeng, Ming
2018-03-01
This article presents a new type of actively controlled multiple-fan wind tunnel. The wind tunnel consists of swivel plates and arrays of direct current fans, and the rotation speed of each fan and the shaft angle of each swivel plate can be controlled independently for simulating different kinds of outdoor wind fields. To measure the similarity between the simulated wind field and the outdoor wind field, wind speed and direction time series of the two kinds of wind fields are recorded by nine two-dimensional ultrasonic anemometers, and then statistical properties of the wind signals at different time scales are analyzed based on empirical mode decomposition. In addition, the complexity of the wind speed and direction time series is also investigated using multiscale entropy and multivariate multiscale entropy. Results suggest that the simulated wind field in the multiple-fan wind tunnel has a high degree of similarity with the outdoor wind field.
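The entropy measures used for this comparison can be sketched compactly. The Python toy below implements a simplified sample entropy (its template counting differs slightly from the canonical definition, which excludes the last template for the length-m count) together with the coarse-graining step that turns it into a multiscale analysis; irregular signals score higher than strongly regular ones.

```python
import math

def coarse_grain(series, scale):
    """Multiscale-entropy coarse-graining: means of consecutive
    non-overlapping windows of the given scale."""
    return [sum(series[i:i + scale]) / scale
            for i in range(0, len(series) - scale + 1, scale)]

def sample_entropy(series, m, r):
    """Simplified sample entropy: -log of the ratio between the number
    of template pairs matching for m+1 points and those matching for m
    points (tolerance r, Chebyshev distance)."""
    def matches(mm):
        templates = [series[i:i + mm] for i in range(len(series) - mm + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in
                       zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)
```

Evaluating `sample_entropy` on `coarse_grain(series, scale)` for a range of scales gives a multiscale entropy curve; the multivariate variant applies the same idea jointly to several channels, e.g. speed and direction.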
Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.
Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O
2015-10-01
Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of the inclusion and exclusion criteria in historical protocols. Both methods, discrete re-sampling and multivariate distributions, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
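The multivariate-distribution method can be sketched for two covariates. The Python toy below draws virtual-patient covariates from a bivariate normal via a Cholesky-style construction; the covariate names (age, baseline FEV1) and all moments are illustrative assumptions, and a full implementation would cover the whole covariate vector, possibly with transformed marginals.

```python
def virtual_patients_mvn(age_mean, age_sd, fev1_mean, fev1_sd, rho, n, rng):
    """Draw n virtual-patient covariate pairs from a bivariate normal
    with correlation rho. Unlike discrete re-sampling from a pool,
    this can produce covariate combinations absent from the historical
    data while preserving the overall correlation structure."""
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        age = age_mean + age_sd * z1
        fev1 = fev1_mean + fev1_sd * (rho * z1 + (1 - rho ** 2) ** 0.5 * z2)
        out.append((age, fev1))
    return out
```

Discrete re-sampling, by contrast, simply draws rows from the observed patient pool with replacement, which reproduces the empirical joint distribution exactly but cannot extrapolate beyond it.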
Evaluation of an F100 multivariable control using a real-time engine simulation
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Skira, C.; Soeder, J. F.
1977-01-01
A multivariable control design for the F100 turbofan engine was evaluated, as part of the F100 multivariable control synthesis (MVCS) program. The evaluation utilized a real-time, hybrid computer simulation of the engine and a digital computer implementation of the control. Significant results of the evaluation are presented and recommendations concerning future engine testing of the control are made.
Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa
2015-01-01
Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study where a cell culture unit operation in bioreactors using one-sided pH control and their satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least square, orthogonal partial least square, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that observed similarities between 15,000-L and shake flask runs, and differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at 3-L scale. By reducing the initial sparge rate in 3-L bioreactor, process performance and product quality data moved closer to that of large scale. © 2015 American Institute of Chemical Engineers.
Gene set analysis using variance component tests.
Huang, Yen-Tsung; Lin, Xihong
2013-06-28
Gene set analyses have become increasingly important in genomic research, as many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). In summary, we develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and the diabetes microarray data.
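The permutation component of such a test can be sketched as follows. This Python toy replaces TEGS's variance component score statistic with a simpler sum of squared per-gene group mean differences, so it illustrates the permutation machinery rather than TEGS itself; the data in the check are synthetic.

```python
def gene_set_permutation_test(expr, labels, n_perm, rng):
    """Permutation test for a gene-set effect. expr is a list of
    samples, each a list of per-gene expression values; labels are
    0/1 group indicators. The statistic (sum of squared per-gene
    group mean differences) stands in for a variance component score
    statistic; its null distribution comes from label shuffling."""
    def stat(lab):
        s = 0.0
        for g in range(len(expr[0])):
            g1 = [expr[i][g] for i in range(len(lab)) if lab[i] == 1]
            g0 = [expr[i][g] for i in range(len(lab)) if lab[i] == 0]
            s += (sum(g1) / len(g1) - sum(g0) / len(g0)) ** 2
        return s
    observed = stat(labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if stat(shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # permutation p-value
```

Because the statistic aggregates over all genes in the set, a coordinated shift across many correlated genes can reach significance even when no single gene would.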
Multivariate Non-Symmetric Stochastic Models for Spatial Dependence Models
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Bárdossy, A.
2017-12-01
A copula-based multivariate framework allows more flexibility to describe different kinds of dependence than models relying on the confining assumption of symmetric Gaussian dependence: different quantiles can be modelled with different degrees of dependence, and it will be demonstrated how this can be expected given process understanding. Maximum-likelihood-based multivariate parameter estimation yields stable and reliable results; not only are improved cross-validation-based measures of uncertainty obtained, but also a more realistic spatial structure of uncertainty compared to second-order models of dependence. As much information as is available is included in the parameter estimation: incorporating censored measurements (e.g., those below the detection limit, or above the sensitive range of the measurement device) yields more realistic spatial models; the proportion of true zeros can be estimated jointly with, and distinguished from, censored measurements, which allows inferences about the age of a contaminant in the system; and secondary information (categorical and on the ratio scale) has been used to improve the estimation of the primary variable. These copula-based multivariate statistical techniques are demonstrated on hydraulic conductivity observations at the Borden site (Canada), the MADE site (USA), and a large regional groundwater-quality data set in south-west Germany. Fields of spatially distributed K were simulated with identical marginal distributions and identical second-order spatial moments, yet showed substantially different solute transport characteristics when numerical tracer tests were performed. A statistical methodology is shown that allows the delineation of a boundary layer separating homogeneous parts of a spatial data set. The effects of this boundary layer (macro structure) and of the spatial dependence of K (micro structure) on solute transport behaviour are shown.
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose a zero-phase filter bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for the MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend, and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out, and comparisons are made with empirical mode decomposition algorithms.
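The core idea of decomposing a signal into band-limited components via a zero-phase Fourier filter bank can be sketched as below. This is a minimal fixed-band illustration; the FDM's adaptive selection of band edges (and hence the actual FIBFs) is not reproduced here.

```python
import numpy as np

def band_decompose(x, n_bands):
    """Split a real signal into band-limited components whose sum
    reconstructs the original: a zero-phase FFT filter bank with
    uniform (non-adaptive) band edges."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Xb = np.zeros_like(X)
        Xb[lo:hi] = X[lo:hi]          # keep only this band's bins
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return bands

t = np.linspace(0, 1, 256, endpoint=False)
sig = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
parts = band_decompose(sig, 4)
```

Because the bands partition the spectrum exactly, the components sum back to the original signal with no phase distortion.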
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao
2017-01-01
The optimization of large-scale reservoir system is time-consuming due to its intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm (NSGAII), WNSGAII, and WMO-ASMO is conducted in the large-scale reservoir system of the Xijiang river basin in China. Results indicate that: (1) WNSGAII surpasses NSGAII, increasing the median annual power generation by 1.03% (from 523.29 to 528.67 billion kW h) and improving the median ecological index by 3.87% (from 1.879 to 1.809) with 500 simulations, because of the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII, finding better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, with computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is shown to be more efficient and provides a better Pareto frontier.
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
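The "parent-Gaussian" strategy described above can be sketched in a few lines: simulate a correlated Gaussian process, then back-transform each value through the target marginal's inverse CDF. The AR(1) parent and Exponential target marginal below are illustrative choices, not the paper's parameterization, and no correlation transformation function is fitted here.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def ar1_gaussian(n, rho):
    """Stationary AR(1) 'parent' Gaussian process with lag-1 correlation rho."""
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):
        z[t] = rho * z[t - 1] + math.sqrt(1 - rho ** 2) * rng.standard_normal()
    return z

def to_exponential(z, lam=1.0):
    """Back-transform through the target marginal: u = Phi(z),
    then x = F^-1(u) for an Exponential(lam) marginal."""
    u = np.array([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])
    u = np.clip(u, 1e-12, 1 - 1e-12)
    return -np.log(1.0 - u) / lam

z = ar1_gaussian(20000, 0.8)
x = to_exponential(z)
```

The result is a strictly positive, exponentially distributed series that inherits (an attenuated version of) the parent's autocorrelation, which is exactly why the paper estimates the parent correlation that yields the *target* correlation after back-transformation.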
Strategies for Interactive Visualization of Large Scale Climate Simulations
NASA Astrophysics Data System (ADS)
Xie, J.; Chen, C.; Ma, K.; Parvis
2011-12-01
With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric; the data may contain thousands of time steps, with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connections and correlations in the data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice, and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on temporal curves of data samples. A temporal curve can be treated as a two-dimensional function whose two dimensions are time and data value. It can also be treated as a point in a high-dimensional space. In this case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provide a way to categorize and visualize data of different patterns, which reveals connections or correlations among different variables or at different spatial locations.
We have employed the power of the GPU to enable interactive correlation visualization for studying the variability and correlations of a single variable or a pair of variables. It is desirable to create a succinct volume classification that summarizes the connections among all correlation volumes with respect to various reference locations. Because a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to the distances between their correlation volumes. For large-scale time-varying multivariate data, calculating all these correlation volumes on the fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach for volume classification in order to reduce the cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of the correlation relationships; i.e., for all voxels in the same cluster, their corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of the correlation relations in a cost-effective manner, thus leading to a scalable solution for investigating large-scale data sets. These techniques empower climate scientists to study large data from their simulations.
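A single "correlation volume" as described above is just the Pearson correlation between a reference voxel's temporal curve and every other voxel's curve. A minimal vectorized sketch (toy data, not the GPU implementation):

```python
import numpy as np

def correlation_volume(data, ref_idx):
    """Pearson correlation between the temporal curve at ref_idx and
    every voxel's curve.  data has shape (time, n_voxels)."""
    d = data - data.mean(axis=0)        # center each curve
    d /= np.linalg.norm(d, axis=0)      # normalize each curve
    return d.T @ d[:, ref_idx]          # inner products = correlations

rng = np.random.default_rng(1)
base = rng.standard_normal((50, 1))
# Four voxels sharing the reference signal, four independent ones.
data = np.hstack([base + 0.1 * rng.standard_normal((50, 4)),
                  rng.standard_normal((50, 4))])
corr = correlation_volume(data, 0)
```

Computing one such vector per reference voxel is what makes the brute-force approach infeasible at scale, motivating the sampling-based classification.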
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
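The building block CRISP relies on, expressing interdependencies among degrees of freedom as correlations in a multivariate Gaussian, amounts to drawing correlated proposals via a Cholesky factor. A generic sketch (the chain-closure construction of the actual covariance is not reproduced here):

```python
import numpy as np

def sample_mvn(mean, cov, n, rng):
    """Draw n samples from N(mean, cov) via the Cholesky factor, the
    standard way to impose correlations among degrees of freedom."""
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n, len(mean)))
    return mean + z @ L.T

rng = np.random.default_rng(7)
cov = np.array([[1.0, 0.6],
                [0.6, 1.0]])
draws = sample_mvn(np.zeros(2), cov, 50000, rng)
```

In an MC move, correlated perturbations of neighboring dihedral angles keep the chain closed to first order, which is what makes such updates "subtle" in dense systems.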
Implementation Challenges for Multivariable Control: What You Did Not Learn in School
NASA Technical Reports Server (NTRS)
Garg, Sanjay
2008-01-01
Multivariable control allows controller designs that can provide decoupled command tracking and robust performance in the presence of modeling uncertainties. Although the last two decades have seen extensive development of multivariable control theory and example applications to complex systems in software/hardware simulations, there are no production flying systems, aircraft or spacecraft, that use multivariable control. This is because of the tremendous challenges associated with implementation of such multivariable control designs. Unfortunately, school curricula do not provide sufficient time to expose students to these implementation challenges. The objective of this paper is to share the lessons learned by a practitioner of multivariable control in the process of applying modern control theory to the Integrated Flight Propulsion Control (IFPC) design for an advanced Short Take-Off Vertical Landing (STOVL) aircraft simulation.
Wherry, Susan A.; Wood, Tamara M.
2018-04-27
A whole lake eutrophication (WLE) model approach for phosphorus and cyanobacterial biomass in Upper Klamath Lake, south-central Oregon, is presented here. The model is a successor to a previous model developed to inform a Total Maximum Daily Load (TMDL) for phosphorus in the lake, but is based on net primary production (NPP), which can be calculated from dissolved oxygen, rather than scaling up a small-scale description of cyanobacterial growth and respiration rates. This phase 3 WLE model is a refinement of the proof-of-concept developed in phase 2, which was the first attempt to use NPP to simulate cyanobacteria in the TMDL model. The calibration of the calculated NPP WLE model was successful, with performance metrics indicating a good fit to calibration data, and the calculated NPP WLE model was able to simulate mid-season bloom decreases, a feature that previous models could not reproduce. In order to use the model to simulate future scenarios based on phosphorus load reduction, a multivariate regression model was created to simulate NPP as a function of the model state variables (phosphorus and chlorophyll a) and measured meteorological and temperature model inputs. The NPP time series was split into a low- and high-frequency component using wavelet analysis, and regression models were fit to the components separately, with moderate success. The regression models for NPP were incorporated in the WLE model, referred to as the "scenario" WLE (SWLE), and the fit statistics for phosphorus during the calibration period were mostly unchanged. The fit statistics for chlorophyll a, however, were degraded.
These statistics are still an improvement over prior models, and indicate that the SWLE is appropriate for long-term predictions even though it misses some of the seasonal variations in chlorophyll a. The complete whole lake SWLE model, with multivariate regression to predict NPP, was used to make long-term simulations of the response to 10-, 20-, and 40-percent reductions in tributary nutrient loads. The long-term mean water column concentration of total phosphorus was reduced by 9, 18, and 36 percent, respectively, in response to these load reductions. The long-term water column chlorophyll a concentration was reduced by 4, 13, and 44 percent, respectively. The adjustment to a new equilibrium between the water column and sediments occurred over about 30 years.
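The low-/high-frequency split used for the NPP series can be illustrated with a crude spectral cut. The abstract uses wavelet analysis; the FFT split below is a stand-in for the same idea (synthetic data, hypothetical cutoff):

```python
import numpy as np

def frequency_split(x, cutoff_bin):
    """Split a series into complementary low- and high-frequency
    components via a hard cut in the real FFT spectrum."""
    X = np.fft.rfft(x)
    low = X.copy()
    low[cutoff_bin:] = 0
    high = X - low
    return np.fft.irfft(low, n=len(x)), np.fft.irfft(high, n=len(x))

# Toy daily series: annual cycle plus weekly variability.
t = np.arange(365)
npp = 10 + 5 * np.sin(2 * np.pi * t / 365) + np.sin(2 * np.pi * t / 7)
low, high = frequency_split(npp, 5)
```

Separate regression models are then fit to `low` and `high`, and their predictions recombined by simple addition, since the two components sum exactly to the original series.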
A Simplified, General Approach to Simulating from Multivariate Copula Functions
Barry Goodwin
2012-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses "probability...
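For the Gaussian copula, one common parametric case, simulation is straightforward: correlate standard normals, then push each margin through the normal CDF. This sketch illustrates copula simulation in general; it does not reproduce the paper's simplified alternative approach, whose description is truncated above.

```python
import math
import numpy as np

def gaussian_copula_sample(rho, n, rng):
    """Draw uniform pairs whose dependence follows a Gaussian copula:
    correlate standard normals via Cholesky, then map each margin
    through the normal CDF so margins become Uniform(0, 1)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(cov).T
    cdf = np.vectorize(lambda v: 0.5 * (1 + math.erf(v / math.sqrt(2))))
    return cdf(z)

rng = np.random.default_rng(3)
u = gaussian_copula_sample(0.7, 5000, rng)
```

Arbitrary marginals are then obtained by applying each variable's inverse CDF to the corresponding column of `u`; the step that is copula-specific is only the generation of the dependent uniforms.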
A New Framework for Characterising Simulated Droughts for Future Climates
NASA Astrophysics Data System (ADS)
Sharma, A.; Rashid, M.; Johnson, F.
2017-12-01
Significant attention has been focussed on metrics for quantifying drought. Less attention has been given to the unsuitability of current metrics for quantifying drought in a changing climate, due to the clear non-stationarity in potential and actual evapotranspiration well into the future (Asadi-Zarch et al., 2015). This talk presents a new basis for simulating drought designed specifically for use with climate model simulations. Given the known uncertainty of climate model rainfall simulations, along with their inability to represent low-frequency variability attributes, the approach here adopts a predictive model for drought using selected atmospheric indicators. This model is based on a wavelet decomposition of relevant atmospheric predictors to filter out less relevant frequencies and formulate a better characterisation of the drought metric chosen as the response. Once ascertained using observed precipitation and associated atmospheric variables, these can be formulated from GCM simulations using a multivariate bias correction tool (Mehrotra and Sharma, 2016) that accounts for low-frequency variability, and a regression tool that accounts for nonlinear dependence (Sharma and Mehrotra, 2014). Use of only the relevant frequencies, as well as the corrected representation of cross-variable dependence, allows greater accuracy in characterising observed drought from GCM simulations. Using simulations from a range of GCMs across Australia, we show here that this new method offers considerable advantages in representing drought compared to traditionally followed alternatives that rely on modelled rainfall instead. References: Asadi Zarch, M. A., B. Sivakumar, and A. Sharma (2015), Droughts in a warming climate: A global assessment of Standardized Precipitation Index (SPI) and Reconnaissance Drought Index (RDI), Journal of Hydrology, 526, 183-195. Mehrotra, R., and A.
Sharma (2016), A Multivariate Quantile-Matching Bias Correction Approach with Auto- and Cross-Dependence across Multiple Time Scales: Implications for Downscaling, Journal of Climate, 29(10), 3519-3539. Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 50, 650-660, doi:10.1002/2013WR013845.
Chang, Yi-Ting; Tam, Wai-Cheong C; Shiah, Yung-Jong; Chiang, Shih-Kuang
2017-09-01
The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) is often used in forensic psychological/psychiatric assessment. This was a pilot study on the utility of the Chinese MMPI-2 in detecting feigned mental disorders. The sample consisted of 194 university students who were either simulators (informed or uninformed) or controls. All the participants were administered the Chinese MMPI-2 and the Structured Interview of Reported Symptoms-2 (SIRS-2). The results of the SIRS-2 were utilized to classify the participants into the feigning or control groups. The effectiveness of eight detection indices was investigated by using item analysis, multivariate analysis of covariance (MANCOVA), and receiver operating characteristic (ROC) analysis. Results indicated that informed-simulating participants with prior knowledge of mental disorders did not perform better in avoiding feigning detection than uninformed-simulating participants. In addition, the eight detection indices of the Chinese MMPI-2 were effective in discriminating participants in the feigning and control groups, and the best cut-off scores of three of the indices were higher than those obtained from the studies using the English MMPI-2. Thus, in this sample of university students, the utility of the Chinese MMPI-2 in detecting feigned mental disorders was tentatively supported, and the Chinese Infrequency Scale (ICH), a scale developed specifically for the Chinese MMPI-2, was also supported as a valid scale for validity checking. © 2017 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
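The "best cut-off score" of a detection index is commonly chosen from the ROC curve by maximizing Youden's J (sensitivity + specificity - 1). A minimal sketch on toy scores; whether the authors used Youden's J specifically is not stated in the abstract.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the threshold maximizing sensitivity + specificity - 1
    (Youden's J) over all observed score values; labels are 0/1."""
    best_j, best_c = -1.0, None
    for c in np.unique(scores):
        pred = scores >= c
        sens = np.mean(pred[labels == 1])
        spec = np.mean(~pred[labels == 0])
        j = sens + spec - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Toy index scores: controls score 1-4, feigners score 5-8.
scores = np.array([1, 2, 3, 4, 5, 6, 7, 8])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
cut, j = youden_cutoff(scores, labels)
```

With perfectly separated toy groups the optimal cutoff sits at the lowest feigning score and J reaches its maximum of 1; real scale data yield J < 1 and a cutoff balancing the two error rates.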
A Network-Based Algorithm for Clustering Multivariate Repeated Measures Data
NASA Technical Reports Server (NTRS)
Koslovsky, Matthew; Arellano, John; Schaefer, Caroline; Feiveson, Alan; Young, Millennia; Lee, Stuart
2017-01-01
The National Aeronautics and Space Administration (NASA) Astronaut Corps is a unique occupational cohort for which vast amounts of measures data have been collected repeatedly in research or operational studies pre-, in-, and post-flight, as well as during multiple clinical care visits. In exploratory analyses aimed at generating hypotheses regarding physiological changes associated with spaceflight exposure, such as impaired vision, it is of interest to identify anomalies and trends across these expansive datasets. Multivariate clustering algorithms for repeated measures data may help parse the data to identify homogeneous groups of astronauts that have higher risks for a particular physiological change. However, available clustering methods may not be able to accommodate the complex data structures found in NASA data, since the methods often rely on strict model assumptions, require equally-spaced and balanced assessment times, cannot accommodate missing data or differing time scales across variables, and cannot process continuous and discrete data simultaneously. To fill this gap, we propose a network-based, multivariate clustering algorithm for repeated measures data that can be tailored to fit various research settings. Using simulated data, we demonstrate how our method can be used to identify patterns in complex data structures found in practice.
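One generic realization of network-based clustering, not necessarily the algorithm proposed in the abstract, is to link pairs of objects whose similarity exceeds a threshold and take connected components of the resulting graph:

```python
import numpy as np

def network_clusters(sim, threshold):
    """Cluster objects by linking pairs with similarity >= threshold
    and returning connected-component labels of the network."""
    n = len(sim)
    adj = sim >= threshold
    labels = [-1] * n
    comp = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]          # depth-first search from an unlabeled node
        labels[i] = comp
        while stack:
            v = stack.pop()
            for w in range(n):
                if adj[v, w] and labels[w] == -1:
                    labels[w] = comp
                    stack.append(w)
        comp += 1
    return labels

sim = np.array([[1.0, 0.9, 0.1, 0.0],
                [0.9, 1.0, 0.2, 0.1],
                [0.1, 0.2, 1.0, 0.8],
                [0.0, 0.1, 0.8, 1.0]])
groups = network_clusters(sim, 0.5)
```

The appeal for repeated measures data is that the pairwise similarity can be defined however the data allow (different time scales, missing visits, mixed variable types) without the distributional or balance assumptions of model-based clustering.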
Massive Halos in Millennium Gas Simulations: Multivariate Scaling Relations
NASA Astrophysics Data System (ADS)
Stanek, R.; Rasia, E.; Evrard, A. E.; Pearce, F.; Gazzola, L.
2010-06-01
The joint likelihood of observable cluster signals reflects the astrophysical evolution of the coupled baryonic and dark matter components in massive halos, and its knowledge will enhance cosmological parameter constraints in the coming era of large, multiwavelength cluster surveys. We present a computational study of intrinsic covariance in cluster properties using halo populations derived from Millennium Gas Simulations (MGS). The MGS are re-simulations of the original 500 h^-1 Mpc Millennium Simulation performed with gas dynamics under two different physical treatments: shock heating driven by gravity only (GO) and a second treatment with cooling and preheating (PH). We examine relationships among structural properties and observable X-ray and Sunyaev-Zel'dovich (SZ) signals for samples of thousands of halos with M_200 >= 5 × 10^13 h^-1 M_sun and z < 2. While the X-ray scaling behavior of PH model halos at low redshift offers a good match to local clusters, the model exhibits non-standard features testable with larger surveys, including weakly running slopes in hot gas observable-mass relations and ~10% departures from self-similar redshift evolution for 10^14 h^-1 M_sun halos at redshift z ~ 1. We find that the form of the joint likelihood of signal pairs is generally well described by a multivariate, log-normal distribution, especially in the PH case, which exhibits less halo substructure than the GO model. At fixed mass and epoch, joint deviations of signal pairs display mainly positive correlations, especially the thermal SZ effect paired with either hot gas fraction (r = 0.88/0.69 for PH/GO at z = 0) or X-ray temperature (r = 0.62/0.83). The levels of variance in X-ray luminosity, temperature, and gas mass fraction are sensitive to the physical treatment, but offsetting shifts in the latter two measures maintain a fixed 12% scatter in the integrated SZ signal under both gas treatments.
We discuss halo mass selection by signal pairs, and find a minimum mass scatter of 4% in the PH model by combining thermal SZ and gas fraction measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonne, François; Bonnay, Patrick; Alamir, Mazen
2014-01-29
In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
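A Linear Quadratic Optimal feedback gain of the kind synthesized above can be computed, in discrete time, by iterating the Riccati difference equation to its fixed point. A numpy sketch on a toy double-integrator model (illustrative only, not the WCS model):

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Steady-state LQ-optimal feedback gain u = -K x, obtained by
    iterating the discrete-time Riccati difference equation:
    P <- Q + A' P (A - B K),  K = (R + B' P B)^-1 B' P A."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy plant: discretized double integrator (position, velocity).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)      # state cost
R = np.array([[1.0]])  # control cost
K = lqr_gain(A, B, Q, R)
```

With (A, B) stabilizable and Q positive definite, the resulting closed loop A - BK has all eigenvalues inside the unit circle, which is the stability property the abstract relies on.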
Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F
2017-04-01
Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.
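The UPGMA step evaluated above can be sketched directly from its definition: repeatedly merge the two closest clusters, updating inter-cluster distances as size-weighted averages (the Lance-Williams update for average linkage). A naive illustration, not the DISPROF test itself:

```python
import numpy as np

def upgma(dist):
    """Naive UPGMA (average-linkage) agglomeration on a full distance
    matrix; returns the merge order as (cluster_a, cluster_b, height)."""
    d = dist.astype(float).copy()
    clusters = {i: [i] for i in range(len(d))}
    active = list(clusters)
    merges = []
    while len(active) > 1:
        # Find the closest pair of active clusters.
        h, a, b = min((d[a, b], a, b) for i, a in enumerate(active)
                      for b in active[i + 1:])
        merges.append((a, b, h))
        na, nb = len(clusters[a]), len(clusters[b])
        for c in active:
            if c not in (a, b):
                # Size-weighted average distance to the merged cluster.
                d[a, c] = d[c, a] = (na * d[a, c] + nb * d[b, c]) / (na + nb)
        clusters[a] += clusters[b]
        active.remove(b)
    return merges

dist = np.array([[0, 1, 10, 10],
                 [1, 0, 10, 10],
                 [10, 10, 0, 2],
                 [10, 10, 2, 0]], dtype=float)
merges = upgma(dist)
```

In the DISPROF-based workflow, each candidate merge (or split) in such a dendrogram is retained only if the dissimilarity-profile test rejects homogeneity, which is the decision criterion the simulations evaluate.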
Factors Controlling Sediment Load in The Central Anatolia Region of Turkey: Ankara River Basin.
Duru, Umit; Wohl, Ellen; Ahmadi, Mehdi
2017-05-01
Better understanding of the factors controlling sediment load at a catchment scale can facilitate estimation of soil erosion and sediment transport rates. The research summarized here enhances understanding of correlations between potential control variables on suspended sediment loads. The Soil and Water Assessment Tool was used to simulate flow and sediment at the Ankara River basin. Multivariable regression analysis and principal component analysis were then performed between sediment load and controlling variables. The physical variables were either directly derived from a Digital Elevation Model or from field maps or computed using established equations. Mean observed sediment rate is 6697 ton/year and mean sediment yield is 21 ton/y/km² from the gage. The Soil and Water Assessment Tool satisfactorily simulated observed sediment load, with Nash-Sutcliffe efficiency, relative error, and coefficient of determination (R²) values of 0.81, -1.55, and 0.93, respectively, in the catchment. Therefore, parameter values from the physically based model were applied to the multivariable regression analysis as well as the principal component analysis. The results indicate that stream flow, drainage area, and channel width explain most of the variability in sediment load among the catchments. The implication of these results is that efficient siltation-management practices in the catchment should focus on stream flow, drainage area, and channel width.
Multivariate frequency domain analysis of protein dynamics
NASA Astrophysics Data System (ADS)
Matsunaga, Yasuhiro; Fuchigami, Sotaro; Kidera, Akinori
2009-03-01
Multivariate frequency domain analysis (MFDA) is proposed to characterize the collective vibrational dynamics of a protein obtained by molecular dynamics (MD) simulation. MFDA performs principal component analysis (PCA) on a bandpass-filtered multivariate time series using the multitaper method of spectral estimation. By applying MFDA to MD trajectories of bovine pancreatic trypsin inhibitor, we determined the collective vibrational modes in the frequency domain, which were identified by their vibrational frequencies and eigenvectors. At near zero temperature, the vibrational modes determined by MFDA agreed well with those calculated by normal mode analysis. At 300 K, the vibrational modes exhibited characteristic features that were considerably different from the principal modes of the static distribution given by standard PCA. The influences of aqueous environments were discussed based on two different sets of vibrational modes, one derived from an MD simulation in water and the other from a simulation in vacuum. Using the varimax rotation, an algorithm of multivariate statistical analysis, the representative orthogonal set of eigenmodes was determined at each vibrational frequency.
A note on a simplified and general approach to simulating from multivariate copula functions
Barry K. Goodwin
2013-01-01
Copulas have become an important analytic tool for characterizing multivariate distributions and dependence. One is often interested in simulating data from copula estimates. The process can be analytically and computationally complex and usually involves steps that are unique to a given parametric copula. We describe an alternative approach that uses Probability-...
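The note's own simplified procedure is truncated above, so as a hedged illustration only, the sketch below shows the standard (not the author's) route to simulating from a bivariate Gaussian copula in pure Python: draw correlated normals, map them to uniforms with the normal CDF, then apply inverse-CDF transforms for the desired marginals. The exponential/uniform marginals and `rho = 0.8` are arbitrary demo choices.

```python
import math
import random

def sample_gaussian_copula(rho, n, inv_cdf_x, inv_cdf_y, seed=0):
    """Draw n pairs whose dependence follows a Gaussian copula with
    correlation rho; marginals are set by the inverse-CDF callbacks."""
    rng = random.Random(seed)

    def std_normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    pairs = []
    for _ in range(n):
        # correlated standard normals via a 2x2 Cholesky factor
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        u, v = std_normal_cdf(z1), std_normal_cdf(z2)  # correlated uniforms
        pairs.append((inv_cdf_x(u), inv_cdf_y(v)))
    return pairs

# Example: exponential(1) and uniform(0,1) marginals, rho = 0.8
draws = sample_gaussian_copula(0.8, 5000,
                               inv_cdf_x=lambda u: -math.log(1.0 - u),
                               inv_cdf_y=lambda v: v)
```

Swapping in a different parametric copula would change only how the correlated uniforms are produced; the inverse-CDF step is unchanged.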
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1986-01-01
A hypothetical turbofan engine simplified simulation with a multivariable control and sensor failure detection, isolation, and accommodation logic (HYTESS II) is presented. The digital program, written in FORTRAN, is self-contained, efficient, realistic, and easily used. Simulated engine dynamics were developed from linearized operating point models; however, essential nonlinear effects are retained. The simulation is representative of a hypothetical, low-bypass-ratio turbofan engine with advanced control and failure detection logic. Included is a description of the engine dynamics, the control algorithm, and the sensor failure detection logic. Details of the simulation, including block diagrams, variable descriptions, common block definitions, subroutine descriptions, and input requirements, are given. Example simulation results are also presented.
A climate-based multivariate extreme emulator of met-ocean-hydrological events for coastal flooding
NASA Astrophysics Data System (ADS)
Camus, Paula; Rueda, Ana; Mendez, Fernando J.; Tomas, Antonio; Del Jesus, Manuel; Losada, Iñigo J.
2015-04-01
Atmosphere-ocean general circulation models (AOGCMs) are useful for analyzing large-scale climate variability (long-term historical periods, future climate projections). However, applications such as coastal flood modeling require climate information at a finer scale. Moreover, flooding events depend on multiple climate conditions: waves, surge levels from the open ocean, and river discharge caused by precipitation. A multivariate statistical downscaling approach is therefore adopted, both to reproduce the relationships between variables and for its low computational cost. The proposed method can be considered a hybrid approach that combines a probabilistic weather-type downscaling model with a stochastic weather generator component. Predictand distributions are reproduced by modeling the relationship with AOGCM predictors based on a physical division into weather types (Camus et al., 2012). The multivariate dependence structure of the predictand (extreme events) is introduced by linking the independent marginal distributions of the variables through a probabilistic copula regression (Ben Ayala et al., 2014). This hybrid approach is applied to the downscaling of AOGCM data to daily precipitation, maximum significant wave height, and storm surge at different locations along the Spanish coast. Reanalysis data are used to assess the proposed method. A common predictor for the three variables involved is classified using a regression-guided clustering algorithm. The most appropriate statistical model (generalized extreme value distribution, Pareto distribution) for daily conditions is fitted. Stochastic simulation of the present climate is performed, obtaining the set of hydraulic boundary conditions needed for high-resolution coastal flood modeling. References: Camus, P., Menéndez, M., Méndez, F.J., Izaguirre, C., Espejo, A., Cánovas, V., Pérez, J., Rueda, A., Losada, I.J., Medina, R. (2014b). A weather-type statistical downscaling framework for ocean wave climate. 
Journal of Geophysical Research, doi: 10.1002/2014JC010141. Ben Ayala, M.A., Chebana, F., Ouarda, T.B.M.J. (2014). Probabilistic Gaussian Copula Regression Model for Multisite and Multivariable Downscaling, Journal of Climate, 27, 3331-3347.
Data-driven Climate Modeling and Prediction
NASA Astrophysics Data System (ADS)
Kondrashov, D. A.; Chekroun, M.
2016-12-01
Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016]. 
In particular, new results by DAH-MSM modeling and prediction of Arctic Sea Ice, as well as decadal predictions of near-surface Earth temperatures will be presented.
REGIONAL-SCALE WIND FIELD CLASSIFICATION EMPLOYING CLUSTER ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glascoe, L G; Glaser, R E; Chin, H S
2004-06-17
The classification of time-varying multivariate regional-scale wind fields at a specific location can assist event planning as well as consequence and risk analysis. Further, wind field classification involves data transformation and inference techniques that effectively characterize stochastic wind field variation. Such a classification scheme is potentially useful for addressing overall atmospheric transport uncertainty and meteorological parameter sensitivity issues. Different methods to classify wind fields over a location include the principal component analysis of wind data (e.g., Hardy and Walton, 1978) and the use of cluster analysis for wind data (e.g., Green et al., 1992; Kaufmann and Weber, 1996). The goal of this study is to use a clustering method to classify the winds of a gridded data set, i.e., from meteorological simulations generated by a forecast model.
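The abstract does not name the specific clustering method, so the sketch below uses plain k-means on synthetic two-component "wind field" vectors purely as a stand-in; the two regimes (westerly vs. southerly mean flow) and all numbers are invented for illustration.

```python
import random

def kmeans(vectors, k, iters=25):
    """Plain k-means: classify equal-length feature vectors (e.g. flattened
    wind-field grids) into k clusters. Returns (centroids, labels)."""
    # deterministic init: spread the starting centroids across the data
    centroids = [list(vectors[i * len(vectors) // k]) for i in range(k)]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        for i, v in enumerate(vectors):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centroids[c])))
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, labels

# Two synthetic "wind regimes": westerly (u~5, v~0) vs. southerly (u~0, v~5)
rng = random.Random(2)
fields = ([[5 + rng.gauss(0, 1), 0 + rng.gauss(0, 1)] for _ in range(40)]
          + [[0 + rng.gauss(0, 1), 5 + rng.gauss(0, 1)] for _ in range(40)])
cents, labs = kmeans(fields, k=2)
```

For real gridded wind data each vector would be a flattened u/v field at one time step, and the resulting labels would index the wind-field classes.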
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo
2018-03-01
In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing a multivariate stationary stochastic process with a few elementary random variables, bypassing the challenge of high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the horizontal wind velocity field along the deck of a large-span bridge is simulated using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
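As background for the SRM mentioned above, a minimal single-variate sketch (not the paper's dimension-reduction variant) sums cosines with random phases whose amplitudes follow a target one-sided spectrum; the flat spectrum and grid sizes are arbitrary demo choices.

```python
import math
import random

def srm_sample(spectrum, w_max, n_freq, t_grid, seed=0):
    """One realization of a zero-mean stationary process by the classical
    spectral representation method: X(t) = sum_k a_k cos(w_k t + phi_k),
    with a_k = sqrt(2 S(w_k) dw) and phi_k uniform on [0, 2*pi)."""
    rng = random.Random(seed)
    dw = w_max / n_freq
    freqs = [(k + 0.5) * dw for k in range(n_freq)]  # midpoint frequencies
    amps = [math.sqrt(2.0 * spectrum(w) * dw) for w in freqs]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in freqs]
    return [sum(a * math.cos(w * t + p)
                for a, w, p in zip(amps, freqs, phases))
            for t in t_grid]

def S(w):
    return 0.5  # flat one-sided spectrum on [0, 2] rad/s

# target variance = integral of S over [0, 2] = 1.0
x = srm_sample(S, w_max=2.0, n_freq=32, t_grid=[0.05 * i for i in range(40000)])
```

The dimension-reduction schemes in the paper replace the many independent phases with a few constrained elementary random variables; this sketch keeps the classical fully random phases.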
FGWAS: Functional genome wide association analysis.
Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-10-01
Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
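A minimal sketch of iterative proportional fitting on a 2x2 probability table, assuming the standard alternating rescaling of rows and columns toward imposed margins (the facies proportions shown are invented, and the article's sparse-matrix machinery for high-dimensional tables is omitted):

```python
def ipf(joint, row_target, col_target, iters=100):
    """Iterative proportional fitting: rescale an initial 2-D joint
    probability table until its margins match the imposed ones."""
    p = [row[:] for row in joint]
    for _ in range(iters):
        # fit row margins
        for i, row in enumerate(p):
            s = sum(row)
            if s > 0:
                p[i] = [v * row_target[i] / s for v in row]
        # fit column margins
        for j in range(len(p[0])):
            s = sum(row[j] for row in p)
            if s > 0:
                for i in range(len(p)):
                    p[i][j] *= col_target[j] / s
    return p

# start from independence, then impose hypothetical facies margins
init = [[0.25, 0.25], [0.25, 0.25]]
fitted = ipf(init, row_target=[0.7, 0.3], col_target=[0.6, 0.4])
```

The fitted table preserves the interaction structure of the starting table while matching both sets of margins, which is exactly the property the article exploits with bivariate constraints.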
Chow, Alexander K; Sherer, Benjamin A; Yura, Emily; Kielb, Stephanie; Kocjancic, Ervin; Eggener, Scott; Turk, Thomas; Park, Sangtae; Psutka, Sarah; Abern, Michael; Latchamsetty, Kalyan C; Coogan, Christopher L
2017-11-01
To evaluate urology residents' attitudes toward and experience with surgical simulation in residency education using a multi-institutional, multi-modality model. Residents from 6 area urology training programs rotated through simulation stations in 4 consecutive sessions from 2014 to 2017. Workshops included GreenLight photovaporization of the prostate, ureteroscopic stone extraction, laparoscopic peg transfer, 3-dimensional laparoscopy rope pass, transobturator sling placement, intravesical injection, high definition video system trainer, vasectomy, and Urolift. Faculty members provided teaching assistance, objective scoring, and verbal feedback. Participants completed a nonvalidated questionnaire evaluating the utility of the workshop and soliciting suggestions for improvement. Sixty-three of 75 participants (84%) (postgraduate years 1-6) completed the exit questionnaire. Median rating of exercise usefulness on a scale of 1-10 ranged from 7.5 to 9. On a scale of 0-10, cumulative median scores of the course remained high over 4 years: time limit per station (9; interquartile range [IQR] 2), faculty instruction (9, IQR 2), ease of use (9, IQR 2), face validity (8, IQR 3), and overall course (9, IQR 2). On multivariate analysis, there was no difference in rating of domains between postgraduate years. Sixty-seven percent (42/63) believe that simulation training should be a requirement of urology residency. Ninety-seven percent (63/65) viewed the laboratory as beneficial to their education. This workshop model is a valuable training experience for residents. Most participants believe that surgical simulation is beneficial and should be a requirement for urology residency. High ratings of usefulness for each exercise demonstrated excellent face validity provided by the course. Copyright © 2017 Elsevier Inc. All rights reserved.
Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye
2016-01-13
A framework for establishing a standard reference scale (texture) is proposed by multivariate statistical analysis of instrumental measurements and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework method is verified by establishing a standard reference scale for a texture attribute (hardness) with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression coefficient between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. 
However, fully conditional specification is not without its shortcomings, so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
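The study targets adjusted relative risks from log binomial models; as a much simpler grounding example of the estimand itself, the unadjusted relative risk and its usual log-scale 95% confidence interval from a 2x2 table can be computed as follows (the counts are hypothetical, and this is not the article's imputation-based analysis):

```python
import math

def relative_risk(a, b, c, d):
    """Unadjusted relative risk from a 2x2 table:
         exposed:   a events among a + b subjects
         unexposed: c events among c + d subjects
    Returns (rr, (lo, hi)) with a 95% CI computed on the log scale."""
    risk_exp = a / (a + b)
    risk_unexp = c / (c + d)
    rr = risk_exp / risk_unexp
    # standard error of log(RR)
    se_log = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# hypothetical cohort: 30/100 events among exposed, 15/100 among unexposed
rr, ci = relative_risk(30, 70, 15, 85)
```

In the article's setting this quantity is estimated by a multivariable log binomial model after multiple imputation, with Rubin's rules pooling estimates across imputed datasets.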
Martini, A.; Lomachenko, K. A.; Pankin, I. A.; Negri, C.; Berlier, G.; Beato, P.; Falsig, H.; Bordiga, S.; Lamberti, C.
2017-01-01
The small pore Cu-CHA zeolite is attracting increasing attention as a versatile platform to design novel single-site catalysts for deNOx applications and for the direct conversion of methane to methanol. Understanding at the atomic scale how the catalyst composition influences the Cu-species formed during thermal activation is a key step to unveil the relevant composition–activity relationships. Herein, we explore by in situ XAS the impact of Cu-CHA catalyst composition on temperature-dependent Cu-speciation and reducibility. Advanced multivariate analysis of in situ XANES in combination with DFT-assisted simulation of XANES spectra and multi-component EXAFS fits as well as in situ FTIR spectroscopy of adsorbed N2 allow us to obtain unprecedented quantitative structural information on the complex dynamics during the speciation of Cu-sites inside the framework of the CHA zeolite. PMID:29147509
Perturbative Gaussianizing transforms for cosmological fields
NASA Astrophysics Data System (ADS)
Hall, Alex; Mead, Alexander
2018-01-01
Constraints on cosmological parameters from large-scale structure have traditionally been obtained from two-point statistics. However, non-linear structure formation renders these statistics insufficient in capturing the full information content available, necessitating the measurement of higher order moments to recover information which would otherwise be lost. We construct quantities based on non-linear and non-local transformations of weakly non-Gaussian fields that Gaussianize the full multivariate distribution at a given order in perturbation theory. Our approach does not require a model of the fields themselves and takes as input only the first few polyspectra, which could be modelled or measured from simulations or data, making our method particularly suited to observables lacking a robust perturbative description such as the weak-lensing shear. We apply our method to simulated density fields, finding a significantly reduced bispectrum and an enhanced correlation with the initial field. We demonstrate that our method reconstructs a large proportion of the linear baryon acoustic oscillations, improving the information content over the raw field by 35 per cent. We apply the transform to toy 21 cm intensity maps, showing that our method still performs well in the presence of complications such as redshift-space distortions, beam smoothing, pixel noise and foreground subtraction. We discuss how this method might provide a route to constructing a perturbative model of the fully non-Gaussian multivariate likelihood function.
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multivariable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and to evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed, and selected results from high- and low-order models are compared. The LQR control algorithms can be programmed on a digital computer, which will control the engine simulation over the desired flight envelope.
Simulation Exploration through Immersive Parallel Planes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunhart-Lupo, Nicholas J; Bush, Brian W; Gruchalla, Kenny M
We present a visualization-driven simulation system that tightly couples systems dynamics simulations with an immersive virtual environment to allow analysts to rapidly develop and test hypotheses in a high-dimensional parameter space. To accomplish this, we generalize the two-dimensional parallel-coordinates statistical graphic as an immersive 'parallel-planes' visualization for multivariate time series emitted by simulations running in parallel with the visualization. In contrast to traditional parallel coordinates, which map the multivariate dimensions onto coordinate axes represented by a series of parallel lines, we map pairs of the multivariate dimensions onto a series of parallel rectangles. As in the case of parallel coordinates, each individual observation in the dataset is mapped to a polyline whose vertices coincide with its coordinate values. Regions of the rectangles can be 'brushed' to highlight and select observations of interest; a 'slider' control allows the user to filter the observations by their time coordinate. In an immersive virtual environment, users interact with the parallel planes using a joystick that can select regions on the planes, manipulate selections, and filter time. The brushing and selection actions are used both to explore existing data and to launch additional simulations corresponding to the visually selected portions of the input parameter space. As soon as the new simulations complete, their resulting observations are displayed in the virtual environment. This tight feedback loop between simulation and immersive analytics accelerates users' realization of insights about the simulation and its output.
A stream temperature model for the Peace-Athabasca River basin
NASA Astrophysics Data System (ADS)
Morales-Marin, L. A.; Rokaya, P.; Wheater, H. S.; Lindenschmidt, K. E.
2017-12-01
Water temperature plays a fundamental role in water ecosystem functioning. Because it regulates flow energy and metabolic rates in organism productivity over a broad spectrum of space and time scales, water temperature constitutes an important indicator of aquatic ecosystem health. In cold-region basins, stream water temperature modelling is also fundamental for predicting ice freeze-up and break-up events in order to improve flood management. Multiple modelling approaches, such as linear and multivariable regression methods, neural networks, and thermal energy budget models, have been developed and implemented to simulate stream water temperature. Most of these models have been applied to specific stream reaches and trained using observed data, but very little has been done to simulate water temperature in large catchment river networks. We present the coupling of RBM, a semi-Lagrangian water temperature model for advection-dominated river systems, with MESH, a semi-distributed hydrological model, to simulate stream water temperature in river catchments. The coupled models are implemented in the Peace-Athabasca River basin in order to analyze the variation in stream temperature regimes under changing hydrological and meteorological conditions. Uncertainty of the stream temperature simulations is also assessed in order to determine the degree of reliability of the estimates.
Rotorcraft flying qualities improvement using advanced control
NASA Technical Reports Server (NTRS)
Walker, D.; Postlethwaite, I.; Howitt, J.; Foster, N.
1993-01-01
We report on recent experience gained when a multivariable helicopter flight control law was tested on the Large Motion Simulator (LMS) at DRA Bedford. This was part of a study into the application of multivariable control theory to the design of full-authority flight control systems for high-performance helicopters. In this paper, we present some of the results that were obtained during the piloted simulation trial and from subsequent off-line simulation and analysis. The performance provided by the control law led to level 1 handling quality ratings for almost all of the mission task elements assessed, both during the real-time and off-line analysis.
NASA Astrophysics Data System (ADS)
Lei, Meizhen; Wang, Liqiang
2018-01-01
The Halbach-type linear oscillatory motor (HT-LOM) is multivariable, highly coupled, nonlinear, and uncertain, making it difficult to obtain satisfactory results with conventional PID control. An incremental adaptive fuzzy controller (IAFC) for stroke tracking is presented, which combines the merits of PID control, the fuzzy inference mechanism, and the adaptive algorithm. An integral operation is added to the conventional fuzzy control algorithm. The fuzzy scale factor can be tuned online according to the load force and stroke command. The simulation results indicate that the proposed control scheme can achieve satisfactory stroke-tracking performance and is robust with respect to parameter variations and external disturbance.
NASA Astrophysics Data System (ADS)
Ronayne, Michael J.; Gorelick, Steven M.; Zheng, Chunmiao
2010-10-01
We developed a new model of aquifer heterogeneity to analyze data from a single-well injection-withdrawal tracer test conducted at the Macrodispersion Experiment (MADE) site on the Columbus Air Force Base in Mississippi (USA). The physical heterogeneity model is a hybrid that combines 3-D lithofacies to represent submeter scale, highly connected channels within a background matrix based on a correlated multivariate Gaussian hydraulic conductivity field. The modeled aquifer architecture is informed by a variety of field data, including geologic core sampling. Geostatistical properties of this hybrid heterogeneity model are consistent with the statistics of the hydraulic conductivity data set based on extensive borehole flowmeter testing at the MADE site. The representation of detailed, small-scale geologic heterogeneity allows for explicit simulation of local preferential flow and slow advection, processes that explain the complex tracer response from the injection-withdrawal test. Based on the new heterogeneity model, advective-dispersive transport reproduces key characteristics of the observed tracer recovery curve, including a delayed concentration peak and a low-concentration tail. Importantly, our results suggest that intrafacies heterogeneity is responsible for local-scale mass transfer.
NASA Astrophysics Data System (ADS)
Rueda, A.; Alvarez Antolinez, J. A.; Hegermiller, C.; Serafin, K.; Anderson, D. L.; Ruggiero, P.; Barnard, P.; Erikson, L. H.; Vitousek, S.; Camus, P.; Tomas, A.; Gonzalez, M.; Mendez, F. J.
2016-02-01
Long-term coastal evolution and coastal flooding hazards are the result of the non-linear interaction of multiple oceanographic, hydrological, geological and meteorological forcings (e.g., astronomical tide, monthly mean sea level, large-scale storm surge, dynamic wave set-up, shoreline evolution, backshore erosion). Additionally, interannual variability and trends in storminess and sea level rise are climate drivers that must be considered. Moreover, the chronology of the hydraulic boundary conditions plays an important role, since a collection of consecutive minor storm events can have more impact than the 100-yr return level event. Therefore, proper modeling of shoreline erosion, beach recovery and coastal flooding should consider the sequence of storms, the multivariate nature of the hydrodynamic forcings, and the different time scales of interest (seasonality, interannual and decadal variability). To address this 'beautiful problem', we propose a hybrid approach that combines: (a) numerical hydrodynamic and morphodynamic models (SWAN for wave transformation, a shoreline change model, X-Beach for modeling infragravity waves and erosion of the backshore during extreme events, and RFSM-EDA (Jamieson et al., 2012) for high resolution flooding of the coastal hinterland); (b) long-term databases (observational and hindcast) of sea state parameters, astronomical tides and non-tidal residuals; and (c) statistical downscaling techniques, non-linear data mining, and extreme value models. The statistical downscaling approaches for multivariate variables are based on circulation patterns (Espejo et al., 2014), the chronology of the circulation patterns (Guanche et al., 2013) and the event hydrographs of multivariate extremes, resulting in a time-dependent climate emulator of hydraulic boundary conditions for coupled simulations of the coastal change and flooding models. 
References
Espejo et al. (2014) Spectral ocean wave climate variability based on circulation patterns, J Phys Oc, doi: 10.1175/JPO-D-13-0276.1
Guanche et al. (2013) Autoregressive logistic regression applied to atmospheric circulation patterns, Clim Dyn, doi: 10.1007/s00382-013-1690-3
Jamieson et al. (2012) A highly efficient 2D flood model with sub-element topography, Proc. of the Inst. Civil Eng., 165(10), 581-595
Generating Nonnormal Multivariate Data Using Copulas: Applications to SEM.
Mair, Patrick; Satorra, Albert; Bentler, Peter M
2012-07-01
This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo evaluation of structural equation models within the context of nonnormal data. The new procedure for nonnormal data simulation is theoretically described and also implemented in the widely used R environment. The quality of the method is assessed by Monte Carlo simulations. A 1-sample test on the observed covariance matrix based on the copula methodology is proposed. This new test for evaluating the quality of a simulation is defined through a particular structural model specification and is robust against normality violations.
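The copula construction described above can be sketched compactly. The following pure-Python example is an illustrative sketch, not the authors' R implementation (all function names are hypothetical): a Gaussian copula induces correlation between two nonnormal (here exponential) marginals via the inverse-CDF transform.

```python
import math
import random

def std_normal_cdf(z):
    """Standard normal CDF Phi(z), written via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_exponential(n, rho, rate1=1.0, rate2=1.0, seed=0):
    """Draw n pairs with exponential marginals whose dependence is
    induced by a Gaussian copula with correlation rho."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        # correlated standard normals via a Cholesky-style construction
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)  # uniform marginals
        # inverse-CDF transform to the target (nonnormal) marginals
        x1 = -math.log(1.0 - u1) / rate1
        x2 = -math.log(1.0 - u2) / rate2
        pairs.append((x1, x2))
    return pairs
```

Replacing the exponential inverse CDF with any other quantile function yields the same dependence structure with different marginals, which is the flexibility the copula approach exploits.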
Multivariate Meta-Analysis of Genetic Association Studies: A Simulation Study
Neupane, Binod; Beyene, Joseph
2015-01-01
In a meta-analysis with multiple end points of interest that are correlated between or within studies, a multivariate approach to meta-analysis has the potential to produce more precise estimates of effects by exploiting the correlation structure between end points. However, under the random-effects assumption, multivariate estimation is more complex (as it involves estimating more parameters simultaneously) than univariate estimation and can sometimes produce unrealistic parameter estimates. The usefulness of a multivariate approach to meta-analysis of the effects of a genetic variant on two or more correlated traits is not well understood in the area of genetic association studies. In such studies, genetic variants are expected to roughly maintain Hardy-Weinberg equilibrium within studies, and their effects on complex traits are generally very small to modest and may be heterogeneous across studies for genuine reasons. We carried out extensive simulations to compare the performance of the multivariate approach with the most commonly used univariate inverse-variance weighted approach under the random-effects assumption in various realistic meta-analytic scenarios of genetic association studies of correlated end points. We evaluated performance with respect to relative mean bias percentage, root mean square error (RMSE) of the estimate, and coverage probability of the corresponding 95% confidence interval of the effect for each end point. Our simulation results suggest that the multivariate approach performs similarly to or better than the univariate method when correlations between end points within or between studies are at least moderate and between-study variation is similar to or larger than average within-study variation for meta-analyses of 10 or more genetic studies.
The multivariate approach produces estimates with smaller bias and RMSE, especially for an end point that has randomly or informatively missing summary data in some individual studies, when the missing data in that end point are imputed with null effects and quite large variance. PMID:26196398
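The evaluation criteria named in this abstract (relative mean bias percentage, RMSE, and 95% CI coverage) can all be computed from one Monte Carlo loop. A minimal univariate sketch, assuming a normal outcome with known variance, and not the authors' meta-analytic simulation:

```python
import math
import random

def evaluate_estimator(true_effect, n_reps=2000, n_obs=50, sigma=1.0, seed=1):
    """Monte Carlo relative mean bias (%), RMSE, and 95% CI coverage for
    the sample mean as an estimator of a true effect (illustrative setup)."""
    rng = random.Random(seed)
    estimates, covered = [], 0
    for _ in range(n_reps):
        xs = [rng.gauss(true_effect, sigma) for _ in range(n_obs)]
        est = sum(xs) / n_obs
        se = sigma / math.sqrt(n_obs)  # known-variance standard error
        if est - 1.96 * se <= true_effect <= est + 1.96 * se:
            covered += 1
        estimates.append(est)
    mean_est = sum(estimates) / n_reps
    rel_bias_pct = 100.0 * (mean_est - true_effect) / true_effect
    rmse = math.sqrt(sum((e - true_effect) ** 2 for e in estimates) / n_reps)
    return rel_bias_pct, rmse, covered / n_reps
```

For an unbiased estimator with a correct CI, relative bias should hover near zero and coverage near the nominal 95%; deviations in a meta-analytic setting signal the estimation problems the abstract discusses.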
NASA Astrophysics Data System (ADS)
Bates, P. D.; Quinn, N.; Sampson, C. C.; Smith, A.; Wing, O.; Neal, J. C.
2017-12-01
Remotely sensed data have transformed the field of large scale hydraulic modelling. New digital elevation, hydrography and river width data have allowed such models to be created for the first time, and remotely sensed observations of water height, slope and water extent have allowed them to be calibrated and tested. As a result, we are now able to conduct flood risk analyses at national, continental or even global scales. However, continental scale analyses have significant additional complexity compared to typical flood risk modelling approaches. Traditional flood risk assessment uses frequency curves to define the magnitude of extreme flows at gauging stations. The flow values for given design events, such as the 1 in 100 year return period flow, are then used to drive hydraulic models in order to produce maps of flood hazard. Such an approach works well for single gauge locations and local models because over relatively short river reaches (say 10-60 km) one can assume that the return period of an event does not vary. At regional to national scales and across multiple river catchments this assumption breaks down, and for a given flood event the return period will be different at different gauging stations, a pattern known as the event `footprint'. Despite this, many national scale risk analyses still use `constant in space' return period hazard layers (e.g. the FEMA Special Flood Hazard Areas) in their calculations. Such an approach can estimate potential exposure, but will over-estimate risk and cannot determine likely flood losses over a whole region or country. We address this problem by using a stochastic model to simulate many realistic extreme event footprints based on observed gauged flows and the statistics of gauge to gauge correlations.
We take the entire USGS gauge data catalogue for sites with > 45 years of record and use a conditional approach for multivariate extreme values to generate sets of flood events with realistic return period variation in space. We undertake a number of quality checks of the stochastic model and compare real and simulated footprints to show that the method is able to re-create realistic patterns even at continental scales where there is large variation in flood generating mechanisms. We then show how these patterns can be used to drive a large scale 2D hydraulic model to predict regional scale flooding.
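The return-period bookkeeping behind the footprint argument can be illustrated with the standard Weibull plotting-position estimate from an annual-maximum flow series. This is a generic sketch, not the conditional multivariate extremes model used in the study:

```python
def empirical_return_periods(annual_maxima):
    """Weibull plotting-position return periods T = (n + 1) / rank for an
    annual-maximum series; rank 1 is the largest observed value."""
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(value, (n + 1) / rank) for rank, value in enumerate(ranked, start=1)]
```

Applying this independently at two gauges generally assigns the same physical event different ranks, hence different estimated return periods in space, which is exactly why a single `constant in space' return period layer misrepresents event footprints.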
Griffin, Brian M.; Larson, Vincent E.
2016-11-25
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
Multivariate flood risk assessment: reinsurance perspective
NASA Astrophysics Data System (ADS)
Ghizzoni, Tatiana; Ellenrieder, Tobias
2013-04-01
For insurance and re-insurance purposes the knowledge of the spatial characteristics of fluvial flooding is fundamental. The probability of simultaneous flooding at different locations during one event and the associated severity and losses have to be estimated in order to assess premiums and for accumulation control (Probable Maximum Losses calculation). Therefore, the identification of a statistical model able to describe the multivariate joint distribution of flood events in multiple locations is necessary. In this context, copulas can be viewed as alternative tools for dealing with multivariate simulations as they allow one to formalize dependence structures of random vectors. An application of copula functions for flood scenario generation is presented for Australia (Queensland, New South Wales and Victoria), where 100,000 possible flood scenarios covering approximately 15,000 years were simulated.
Chen, Yong; Luo, Sheng; Chu, Haitao; Wei, Peng
2013-05-01
Multivariate meta-analysis is useful in combining evidence from independent studies which involve several comparisons among groups based on a single outcome. For binary outcomes, the commonly used statistical models for multivariate meta-analysis are multivariate generalized linear mixed effects models, which assume risks, after some transformation, follow a multivariate normal distribution with possible correlations. In this article, we consider an alternative model for multivariate meta-analysis where the risks are modeled by the multivariate beta distribution proposed by Sarmanov (1966). This model has several attractive features compared to the conventional multivariate generalized linear mixed effects models, including simplicity of the likelihood function, no need to specify a link function, and a closed-form expression for the distribution functions of study-specific risk differences. We investigate the finite sample performance of this model by simulation studies and illustrate its use with an application to multivariate meta-analysis of adverse events of tricyclic antidepressant treatment in clinical trials.
Calibration of a distributed hydrologic model for six European catchments using remote sensing data
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, M. C.; Mendiguren González, G.; Kumar, R.; Rakovec, O.; Samaniego, L. E.
2017-12-01
While observed streamflow has been the single reference for most conventional hydrologic model calibration exercises, the availability of spatially distributed remote sensing observations provides new possibilities for multi-variable calibration assessing both spatial and temporal variability of different hydrologic processes. In this study, we first identify the key transfer parameters of the mesoscale Hydrologic Model (mHM) controlling both the discharge and the spatial distribution of actual evapotranspiration (AET) across six central European catchments (Elbe, Main, Meuse, Moselle, Neckar and Vienne). These catchments are selected based on their limited topographical and climatic variability, which makes it possible to evaluate the effect of spatial parameterization on the simulated evapotranspiration patterns. We develop a European scale remote sensing based actual evapotranspiration dataset at a 1 km grid scale driven primarily by land surface temperature observations from MODIS using the TSEB approach. Using the observed AET maps, we analyze the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mHM model. This model allows calibrating one basin at a time or all basins together using its unique structure and multi-parameter regionalization approach. Results will indicate any tradeoffs between spatial pattern and discharge simulation during model calibration and through validation against independent internal discharge locations. Moreover, added value for internal water balances will be analyzed.
A non-iterative extension of the multivariate random effects meta-analysis.
Makambi, Kepher H; Seung, Hyunuk
2015-01-01
Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.
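For context, the univariate DerSimonian-Laird moment estimator that these non-iterative multivariate methods generalize can be written in a few lines. This is a standard textbook sketch, not the multivariate extension proposed in the paper:

```python
def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird random-effects pooling.
    Returns (pooled effect, tau-squared moment estimate)."""
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)        # truncated moment estimate
    # random-effects weights and pooled estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2
```

The non-iterative character the abstract emphasizes is visible here: tau-squared comes from a closed-form moment equation rather than from iterating a likelihood.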
Robust Nonlinear Feedback Control of Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, William L.; Balas, Gary J.; Litt, Jonathan (Technical Monitor)
2001-01-01
This is the final report on the research performed under NASA Glenn grant NASA/NAG-3-1975 concerning feedback control of the Pratt & Whitney (PW) STF 952, a twin spool, mixed flow, afterburning turbofan engine. The research focused on the design of linear and gain-scheduled, multivariable inner-loop controllers for the PW turbofan engine using H-infinity and linear parameter-varying (LPV) control techniques. The nonlinear turbofan engine simulation was provided by PW within the NASA Rocket Engine Transient Simulator (ROCETS) simulation software environment. ROCETS was used to generate linearized models of the turbofan engine for control design and analysis, as well as the simulation environment to evaluate the performance and robustness of the controllers. Comparisons are made between the H-infinity and LPV controllers and the baseline multivariable controller developed by Pratt & Whitney engineers and included in the ROCETS simulation. Simulation results indicate that H-infinity and LPV techniques effectively achieve desired response characteristics with minimal cross coupling between commanded values and are very robust to unmodeled dynamics and sensor noise.
NASA Astrophysics Data System (ADS)
Farahi, Arya; Evrard, August E.; McCarthy, Ian; Barnes, David J.; Kay, Scott T.
2018-05-01
Using tens of thousands of halos realized in the BAHAMAS and MACSIS simulations produced with a consistent astrophysics treatment that includes AGN feedback, we validate a multi-property statistical model for the stellar and hot gas mass behavior in halos hosting groups and clusters of galaxies. The large sample size allows us to extract fine-scale mass-property relations (MPRs) by performing local linear regression (LLR) on individual halo stellar mass (Mstar) and hot gas mass (Mgas) as a function of total halo mass (Mhalo). We find that: 1) both the local slope and variance of the MPRs run with mass (primarily) and redshift (secondarily); 2) the conditional likelihood, p(Mstar, Mgas | Mhalo, z), is accurately described by a multivariate log-normal distribution; and 3) the covariance of Mstar and Mgas at fixed Mhalo is generally negative, reflecting a partially closed baryon box model for high mass halos. We validate the analytical population model of Evrard et al. (2014), finding sub-percent accuracy in the log-mean halo mass selected at fixed property, ⟨ln Mhalo|Mgas⟩ or ⟨ln Mhalo|Mstar⟩, when scale-dependent MPR parameters are employed. This work highlights the potential importance of allowing for running in the slope and scatter of MPRs when modeling cluster counts for cosmological studies. We tabulate LLR fit parameters as a function of halo mass at z = 0, 0.5 and 1 for two popular mass conventions.
Jia, Erik; Chen, Tianlu
2018-01-01
Left-censored missing values commonly exist in targeted metabolomics datasets and can be considered missing not at random (MNAR). Improper data processing procedures for missing values will cause adverse impacts on subsequent statistical analyses. However, few imputation methods have been developed and applied to the situation of MNAR in the field of metabolomics. Thus, a practical left-censored missing value imputation method is urgently needed. We developed an iterative Gibbs sampler based left-censored missing value imputation approach (GSimp). We compared GSimp with three other imputation methods on two real-world targeted metabolomics datasets and one simulation dataset using our imputation evaluation pipeline. The results show that GSimp outperforms the other imputation methods in terms of imputation accuracy, observation distribution, univariate and multivariate analyses, and statistical sensitivity. Additionally, a parallel version of GSimp was developed for dealing with large scale metabolomics datasets. The R code for GSimp, the evaluation pipeline, a tutorial, and the real-world and simulated targeted metabolomics datasets are available at: https://github.com/WandeRum/GSimp. PMID:29385130
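GSimp itself iterates a Gibbs sampler over a prediction model; a much simpler stand-in for left-censored (below limit-of-detection) values draws each missing entry from a normal distribution truncated above at the detection limit, via rejection sampling. This is an illustrative sketch under an assumed normal marginal, not the GSimp algorithm, and the function name is hypothetical:

```python
import random

def impute_left_censored(values, lod, mu, sigma, seed=0):
    """Replace None entries (readings below the limit of detection, lod)
    with draws from N(mu, sigma) truncated to (-inf, lod], using
    rejection sampling. Observed values pass through unchanged."""
    rng = random.Random(seed)
    out = []
    for v in values:
        if v is not None:
            out.append(v)
            continue
        while True:  # rejection sampling from the truncated normal
            draw = rng.gauss(mu, sigma)
            if draw <= lod:
                out.append(draw)
                break
    return out
```

Unlike constant substitution (e.g., LOD/2), a draw-based scheme preserves some spread below the detection limit, which is one reason distribution-aware imputation outperforms naive fills in the comparisons the abstract reports.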
NASA Astrophysics Data System (ADS)
Leung, Juliana Y.; Srinivasan, Sanjay
2016-09-01
Modeling transport processes at large scale requires proper scale-up of subsurface heterogeneity and an understanding of its interaction with the underlying transport mechanisms. A technique based on volume averaging is applied to quantitatively assess the scaling characteristics of the effective mass transfer coefficient in heterogeneous reservoir models. The effective mass transfer coefficient represents the combined contribution from diffusion and dispersion to the transport of non-reactive solute particles within a fluid phase. Although treatment of transport problems with the volume averaging technique has been published in the past, application to geological systems exhibiting realistic spatial variability remains a challenge. Previously, the authors developed a new procedure where results from a fine-scale numerical flow simulation reflecting the full physics of the transport process, albeit over a sub-volume of the reservoir, are integrated with the volume averaging technique to provide an effective description of transport properties. The procedure is extended such that spatial averaging is performed at the local-heterogeneity scale. In this paper, the transport of a passive (non-reactive) solute is simulated on multiple reservoir models exhibiting different patterns of heterogeneities, and the scaling behavior of the effective mass transfer coefficient (Keff) is examined and compared. One such set of models exhibits power-law (fractal) characteristics, and the variability of dispersion and Keff with scale is in good agreement with analytical expressions described in the literature. This work offers insight into the impacts of heterogeneity on the scaling of effective transport parameters. A key finding is that spatial heterogeneity models with similar univariate and bivariate statistics may exhibit different scaling characteristics because of the influence of higher order statistics. More mixing is observed in the channelized models with higher-order continuity.
It reinforces the notion that the flow response is influenced by the higher-order statistical description of heterogeneity. An important implication is that when scaling-up transport response from lab-scale results to the field scale, it is necessary to account for the scale-up of heterogeneity. Since the characteristics of higher-order multivariate distributions and large-scale heterogeneity are typically not captured in small-scale experiments, a reservoir modeling framework that captures the uncertainty in heterogeneity description should be adopted.
A Bayesian approach to multivariate measurement system assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamada, Michael Scott
2016-07-01
This article considers system assessment for multivariate measurements and presents a Bayesian approach to analyzing gauge R&R study data. The evaluation of variances for univariate measurement becomes the evaluation of covariance matrices for multivariate measurements. The Bayesian approach ensures positive definite estimates of the covariance matrices and easily provides their uncertainty. Furthermore, various measurement system assessment criteria are easily evaluated. The approach is illustrated with data from a real gauge R&R study as well as simulated data.
Evaluation of a pilot workload metric for simulated VTOL landing tasks
NASA Technical Reports Server (NTRS)
North, R. A.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Multivariate discriminant functions were formed from conventional flight performance and/or visual response variables to maximize detection of experimental differences. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition/trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus, represented higher workload levels.
Detecting significant change in stream benthic macroinvertebrate communities in wilderness areas
Milner, Alexander M.; Woodward, Andrea; Freilich, Jerome E.; Black, Robert W.; Resh, Vincent H.
2016-01-01
Within a region, both MDS analyses typically identified similar years as exceeding reference condition variation, illustrating the utility of the approach for identifying wider spatial scale effects that influence more than one stream. MDS responded to both simulated water temperature stress and a pollutant event, and generally outlying years on MDS plots could be explained by environmental variables, particularly higher precipitation. Multivariate control charts successfully identified whether shifts in community structure identified by MDS were significant and whether the shift represented a press disturbance (long-term change) or a pulse disturbance. We consider a combination of TD and MDS with control charts to be a potentially powerful tool for determining years significantly outside of a reference condition variation.
A robust bayesian estimate of the concordance correlation coefficient.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
A need for assessment of agreement arises in many situations including statistical biomarker qualification or assay or method validation. Concordance correlation coefficient (CCC) is one of the most popular scaled indices reported in evaluation of agreement. Robust methods for CCC estimation currently present an important statistical challenge. Here, we propose a novel Bayesian method of robust estimation of CCC based on multivariate Student's t-distribution and compare it with its alternatives. Furthermore, we extend the method to practically relevant settings, enabling incorporation of confounding covariates and replications. The superiority of the new approach is demonstrated using simulation as well as real datasets from biomarker application in electroencephalography (EEG). This biomarker is relevant in neuroscience for development of treatments for insomnia.
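Lin's concordance correlation coefficient at the core of this work is itself a simple moment quantity; the robust Bayesian machinery is built around it. A plain plug-in estimate, shown only for illustration:

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2),
    using population (divide-by-n) moments."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sxx + syy + (mx - my) ** 2)
```

The location-shift penalty in the denominator is what distinguishes CCC from the Pearson correlation: two perfectly correlated but offset raters get a CCC below 1.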
Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.
NASA Astrophysics Data System (ADS)
Elliott, William Dewey
1995-01-01
A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions is converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction.
Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over several simulation timesteps. One MD application described here highlights the utility of including long range contributions to Lennard-Jones potential in constant pressure simulations. Another application shows the time dependence of long range forces in a multiple time step MD simulation.
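The series-expansion idea at the heart of multipole algorithms can be seen in one dimension: the far-field potential of a charge cluster is approximated by the monopole and dipole terms of a Taylor expansion about the cluster center. This is a pedagogical sketch, not the MDMA expansion itself:

```python
def direct_potential(charges, positions, x_eval):
    """Exact 1/r potential at x_eval from point charges on a line."""
    return sum(q / abs(x_eval - x) for q, x in zip(charges, positions))

def multipole_potential(charges, positions, x_eval):
    """Two-term (monopole + dipole) Taylor expansion about the cluster
    center; accurate when the evaluation point is far from the cluster."""
    center = sum(positions) / len(positions)
    q_total = sum(charges)                                   # monopole moment
    dipole = sum(q * (x - center) for q, x in zip(charges, positions))
    r = x_eval - center
    return q_total / abs(r) + dipole / (r * abs(r))
```

The neglected quadrupole term scales as (cluster size)^2 / r^3, so the approximation improves rapidly with distance; grouping far-away particles into such expansions is what reduces the pairwise O(N^2) cost.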
A dual theory of price and value in a meso-scale economic model with stochastic profit rate
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2014-12-01
The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
Temporal Variability of Observed and Simulated Hyperspectral Earth Reflectance
NASA Technical Reports Server (NTRS)
Roberts, Yolanda; Pilewskie, Peter; Kindel, Bruce; Feldman, Daniel; Collins, William D.
2012-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is a climate observation system designed to study Earth's climate variability with unprecedented absolute radiometric accuracy and SI traceability. Observation System Simulation Experiments (OSSEs) were developed using GCM output and MODTRAN to simulate CLARREO reflectance measurements during the 21st century as a design tool for the CLARREO hyperspectral shortwave imager. With OSSE simulations of hyperspectral reflectance, Feldman et al. [2011a,b] found that shortwave reflectance is able to detect changes in climate variables during the 21st century and improve time-to-detection compared to broadband measurements. The OSSE has been a powerful tool in the design of the CLARREO imager and for understanding the effect of climate change on the spectral variability of reflectance, but it is important to evaluate how well the OSSE simulates the Earth's present-day spectral variability. For this evaluation we have used hyperspectral reflectance measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY), a shortwave spectrometer that was operational between March 2002 and April 2012. To study the spectral variability of SCIAMACHY-measured and OSSE-simulated reflectance, we used principal component analysis (PCA), a spectral decomposition technique that identifies dominant modes of variability in a multivariate data set. Using quantitative comparisons of the OSSE and SCIAMACHY PCs, we have quantified how well the OSSE captures the spectral variability of Earth's climate system at the beginning of the 21st century relative to SCIAMACHY measurements. These results showed that the OSSE and SCIAMACHY data sets share over 99% of their total variance in 2004. Using the PCs and the temporally distributed reflectance spectra projected onto the PCs (PC scores), we can study the temporal variability of the observed and simulated reflectance spectra.
Multivariate time series analysis of the PC scores using techniques such as Singular Spectrum Analysis (SSA) and Multichannel SSA will provide information about the temporal variability of the dominant variables. Quantitative comparison techniques can evaluate how well the OSSE reproduces the temporal variability observed by SCIAMACHY spectral reflectance measurements during the first decade of the 21st century. PCA of OSSE-simulated reflectance can also be used to study how the dominant spectral variables change on centennial scales for forced and unforced climate change scenarios. To have confidence in OSSE predictions of the spectral variability of hyperspectral reflectance, it is first necessary for us to evaluate the degree to which the OSSE simulations are able to reproduce the Earth's present-day spectral variability.
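The spectral decomposition described above can be sketched in a few lines of numpy. This is an illustrative PCA of simulated "spectra" via SVD, not the CLARREO/SCIAMACHY pipeline; the channel count, number of latent modes, and noise level are invented for the example.

```python
# Illustrative PCA of simulated spectra: find dominant modes of variability
# in a multivariate data set via SVD of the centered data matrix.
import numpy as np

rng = np.random.default_rng(0)
# 200 observations of a 50-channel "spectrum" driven by 2 latent modes plus noise
modes = rng.normal(size=(2, 50))
scores_true = rng.normal(size=(200, 2))
X = scores_true @ modes + 0.01 * rng.normal(size=(200, 50))

Xc = X - X.mean(axis=0)                      # center each channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # fraction of variance per PC
pcs = Vt                                     # principal components (spectral modes)
pc_scores = Xc @ Vt.T                        # projections of spectra onto the PCs
```

With only two latent modes and little noise, the first two PCs capture essentially all of the total variance, mirroring the ">99% shared variance" style of comparison used above; the PC scores are the time-distributed projections one would then pass to SSA.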
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Because of the non-parametric nature of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficient estimators are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
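A minimal numpy sketch of the two-stage idea described above: an OLS fit, a kernel-weighted local-linear (degree-1 polynomial) estimate of the variance function from squared residuals, then weighted/generalized least squares. The data-generating model, Gaussian kernel, and bandwidth are illustrative assumptions, not the authors' settings.

```python
# Two-stage heteroscedastic regression sketch: OLS, then a local-linear
# estimate of the variance function, then inverse-variance weighted GLS.
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = np.sort(rng.uniform(0, 1, n))
sigma = 0.2 + 0.8 * x                       # heteroscedastic noise level
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = (y - X @ beta_ols) ** 2                # squared residuals

def loclin(x0, xs, ys, h=0.1):
    """Local-linear (degree-1 polynomial) fit of ys on xs, evaluated at x0."""
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)        # Gaussian kernel weights
    Z = np.column_stack([np.ones_like(xs), xs - x0])
    coef = np.linalg.solve(Z.T @ (w[:, None] * Z), Z.T @ (w * ys))
    return coef[0]

var_hat = np.maximum([loclin(xi, x, r2) for xi in x], 1e-6)
W = 1.0 / var_hat                           # GLS weights = inverse variance
beta_gls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
```

No parametric form for the variance function is assumed at any point, which is exactly the advantage the abstract emphasizes.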
Multivariate Cryptography Based on Clipped Hopfield Neural Network.
Wang, Jia; Cheng, Lee-Ming; Su, Tong
2018-02-01
Designing secure and efficient multivariate public key cryptosystems [multivariate cryptography (MVC)] to strengthen the security of RSA and ECC in both conventional and quantum computational environments has remained a challenging research area in recent years. In this paper, we describe multivariate public key cryptosystems based on an extended Clipped Hopfield Neural Network (CHNN) and implement them using the MVC (CHNN-MVC) framework operated in space. The Diffie-Hellman key exchange algorithm is extended into the matrix field, which illustrates the feasibility of new applications in both classic and post-quantum cryptography. The efficiency and security of the proposed public key cryptosystem CHNN-MVC are evaluated by simulation, and its underlying problem is shown to be NP-hard. The proposed algorithm strengthens multivariate public key cryptosystems and is practical for hardware realization.
EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis
NASA Astrophysics Data System (ADS)
Žvokelj, Matej; Zupan, Samo; Prebil, Ivan
2016-05-01
A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, which was named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism of multivariate signal denoising and, in combination with the Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method convenient to cope with data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions in the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained through conducting an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to effectively extract it, thereby representing a reliable bearing fault detection and diagnosis strategy.
Kische, Hanna; Ewert, Ralf; Fietze, Ingo; Gross, Stefan; Wallaschofski, Henri; Völzke, Henry; Dörr, Marcus; Nauck, Matthias; Obst, Anne; Stubbe, Beate; Penzel, Thomas; Haring, Robin
2016-11-01
Evidence on associations between sex hormones and sleep habits originates mainly from small, selected patient-based samples. We examined data from a population-based sample with various sleep characteristics and most sex hormones measured by mass spectrometry. We used data from 204 men and 213 women of the cross-sectional Study of Health in Pomerania-TREND. Associations of total T (TT) and free T, androstenedione (ASD), estrone, estradiol (E2), dehydroepiandrosterone-sulphate, SHBG, and E2 to TT ratio with sleep measures (including total sleep time, sleep efficiency, wake after sleep onset, apnea-hypopnea index [AHI], Insomnia Severity Index, Epworth Sleepiness Scale, and Pittsburgh Sleep Quality Index) were assessed by sex-specific multivariable regression models. In men, age-adjusted associations of TT (odds ratio 0.62; 95% confidence interval (CI) 0.46-0.83), free T, and SHBG with AHI were rendered nonsignificant after multivariable adjustment. In multivariable analyses, ASD was associated with Epworth Sleepiness Scale (β-coefficient per SD increase in ASD: -0.71; 95% CI: -1.18 to -0.25). In women, multivariable analyses showed positive associations of dehydroepiandrosterone-sulphate with wake after sleep onset (β-coefficient: .16; 95% CI 0.03-0.28) and of E2 and E2 to TT ratio with Epworth Sleepiness Scale. Additionally, free T and SHBG were associated with AHI in multivariable models among premenopausal women. The present cross-sectional, population-based study observed sex-specific associations of androgens, E2, and SHBG with sleep apnea and daytime sleepiness. However, multivariable-adjusted analyses confirmed the impact of body composition and health-related lifestyle on the association between sex hormones and sleep.
Power of Models in Longitudinal Study: Findings from a Full-Crossed Simulation Design
ERIC Educational Resources Information Center
Fang, Hua; Brooks, Gordon P.; Rizzo, Maria L.; Espy, Kimberly Andrews; Barcikowski, Robert S.
2009-01-01
Because the power properties of traditional repeated measures and hierarchical multivariate linear models have not been clearly determined in the balanced design for longitudinal studies in the literature, the authors present a power comparison study of traditional repeated measures and hierarchical multivariate linear models under 3…
ERIC Educational Resources Information Center
Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.
2010-01-01
The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality…
Exploring the Dynamics of Dyadic Interactions via Hierarchical Segmentation
ERIC Educational Resources Information Center
Hsieh, Fushing; Ferrer, Emilio; Chen, Shu-Chun; Chow, Sy-Miin
2010-01-01
In this article we present an exploratory tool for extracting systematic patterns from multivariate data. The technique, hierarchical segmentation (HS), can be used to group multivariate time series into segments with similar discrete-state recurrence patterns and it is not restricted by the stationarity assumption. We use a simulation study to…
Generating Nonnormal Multivariate Data Using Copulas: Applications to SEM
ERIC Educational Resources Information Center
Mair, Patrick; Satorra, Albert; Bentler, Peter M.
2012-01-01
This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo…
Heggeseth, Brianna C; Jewell, Nicholas P
2013-07-20
Multivariate Gaussian mixtures are a class of models that provide a flexible parametric approach for the representation of heterogeneous multivariate outcomes. When the outcome is a vector of repeated measurements taken on the same subject, there is often inherent dependence between observations. However, a common covariance assumption is conditional independence; that is, given the mixture component label, the outcomes for subjects are independent. In this paper, we study, through asymptotic bias calculations and simulation, the impact of covariance misspecification in multivariate Gaussian mixtures. Although maximum likelihood estimators of regression and mixing probability parameters are not consistent under misspecification, they have little asymptotic bias when mixture components are well separated or if the assumed correlation is close to the truth even when the covariance is misspecified. We also present a robust standard error estimator and show that it outperforms conventional estimators in simulations and can indicate that the model is misspecified. Body mass index data from a national longitudinal study are used to demonstrate the effects of misspecification on potential inferences made in practice. Copyright © 2013 John Wiley & Sons, Ltd.
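A minimal EM fit of a two-component multivariate Gaussian mixture (numpy only) illustrates the class of models studied above. The simulated data, starting values, and iteration count are assumptions for the sketch, not the paper's setup, and no covariance misspecification is introduced here.

```python
# EM for a two-component bivariate Gaussian mixture with full covariances.
import numpy as np

rng = np.random.default_rng(2)
n = 500
z = rng.random(n) < 0.5                      # true component labels
X = np.where(z[:, None],
             rng.normal([0.0, 0.0], 1.0, (n, 2)),
             rng.normal([4.0, 4.0], 1.0, (n, 2)))

def logpdf(X, mu, cov):
    d = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet + 2 * np.log(2 * np.pi))

mu = np.array([[1.0, 1.0], [3.0, 3.0]])      # crude starting values
cov = np.array([np.eye(2), np.eye(2)])
pi = np.array([0.5, 0.5])
for _ in range(50):                          # EM iterations
    logr = np.stack([np.log(pi[k]) + logpdf(X, mu[k], cov[k]) for k in range(2)], 1)
    logr -= logr.max(1, keepdims=True)
    r = np.exp(logr); r /= r.sum(1, keepdims=True)   # E-step: responsibilities
    nk = r.sum(0)
    pi = nk / n                                       # M-step: weights, means, covs
    mu = (r.T @ X) / nk[:, None]
    for k in range(2):
        d = X - mu[k]
        cov[k] = (r[:, k, None] * d).T @ d / nk[k]
```

With components this well separated, the fitted means land near the true centers, consistent with the paper's finding that separation mitigates the effect of covariance misspecification.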
Evaluation of an F100 multivariable control using a real-time engine simulation
NASA Technical Reports Server (NTRS)
Szuch, J. R.; Soeder, J. F.; Skira, C.
1977-01-01
The control evaluated has been designed for the F100-PW-100 turbofan engine. The F100 engine represents the current state-of-the-art in aircraft gas turbine technology. The control makes use of a multivariable, linear quadratic regulator. The evaluation employed a real-time hybrid computer simulation of the F100 engine and an implementation of the control logic on the NASA LeRC digital computer/controller. The results of the evaluation indicated that the control logic and its implementation will be capable of controlling the engine throughout its operating range.
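The regulator type named above can be illustrated with a toy discrete-time LQR: the feedback gain comes from iterating the Riccati recursion to convergence. The plant matrices below are invented for illustration and have nothing to do with the F100 engine model.

```python
# Discrete-time LQR sketch: iterate the Riccati recursion for the cost-to-go
# matrix P, then form the state-feedback gain K so that u = -K x.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])      # toy linearized plant dynamics
B = np.array([[0.0], [0.1]])                # single control input
Q = np.eye(2)                               # state cost
R = np.array([[1.0]])                       # control cost

P = Q.copy()
for _ in range(500):                        # value iteration on the Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

closed_loop = A - B @ K                     # regulated dynamics x_{t+1} = (A-BK) x_t
```

The closed-loop spectral radius drops below one, i.e., the regulator stabilizes the toy plant throughout its (linearized) operating range.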
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2003-01-01
A wind tunnel experiment for characterizing the aerodynamic and propulsion forces and moments acting on a research model airplane is described. The model airplane, called the Free-flying Airplane for Sub-scale Experimental Research (FASER), is a modified off-the-shelf radio-controlled model airplane with a 7 ft wingspan, a tractor propeller driven by an electric motor, and aerobatic capability. FASER was tested in the NASA Langley 12-foot Low-Speed Wind Tunnel, using a combination of traditional sweeps and modern experiment design. Power level was included as an independent variable in the wind tunnel test to allow characterization of power effects on aerodynamic forces and moments. A modeling technique that employs multivariate orthogonal functions was used to develop accurate analytic models for the aerodynamic and propulsion force and moment coefficient dependencies from the wind tunnel data. Efficient methods for generating orthogonal modeling functions, for expanding the orthogonal modeling functions in terms of ordinary polynomial functions, and for performing analytical orthogonal blocking are developed and discussed. The resulting models comprise a set of smooth, differentiable functions for the non-dimensional aerodynamic force and moment coefficients in terms of ordinary polynomials in the independent variables, suitable for nonlinear aircraft simulation.
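The orthogonal-function idea can be sketched as follows: build mutually orthogonal polynomial regressors by modified Gram-Schmidt, so each coefficient estimate is decoupled from the others. The single independent variable and quadratic response below are illustrative assumptions; the method above handles multivariate expansions.

```python
# Orthogonal polynomial modeling sketch: orthogonalize raw polynomial terms,
# then estimate each coefficient by a simple (decoupled) projection.
import numpy as np

rng = np.random.default_rng(3)
alpha = rng.uniform(-10, 10, 300)                 # e.g. angle-of-attack samples
y = 0.1 + 0.05 * alpha - 0.002 * alpha**2 + 0.01 * rng.normal(size=300)

raw = np.column_stack([alpha**0, alpha, alpha**2])  # ordinary polynomial terms
ortho = np.zeros_like(raw)
for j in range(raw.shape[1]):                     # modified Gram-Schmidt
    v = raw[:, j].copy()
    for k in range(j):
        v -= (ortho[:, k] @ v) / (ortho[:, k] @ ortho[:, k]) * ortho[:, k]
    ortho[:, j] = v

# Orthogonality makes each coefficient a simple independent projection of y
coefs = ortho.T @ y / np.einsum('ij,ij->j', ortho, ortho)
y_hat = ortho @ coefs                             # smooth analytic model of the data
```

Because the columns are orthogonal, adding or dropping a modeling function never changes the other coefficients, which is what makes term selection efficient in this framework.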
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants
Broadaway, K. Alaine; Cutler, David J.; Duncan, Richard; Moore, Jacob L.; Ware, Erin B.; Jhun, Min A.; Bielak, Lawrence F.; Zhao, Wei; Smith, Jennifer A.; Peyser, Patricia A.; Kardia, Sharon L.R.; Ghosh, Debashis; Epstein, Michael P.
2016-01-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
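A rough sketch of the similarity-comparison idea behind such tests: compare a centered phenotype similarity (Gram) matrix with a centered genotype similarity matrix via a trace statistic. Significance is assessed here by permutation rather than the analytic closed form described above; the linear kernels and simulated data are invented for illustration and do not reproduce GAMuT.

```python
# Cross-phenotype association sketch: similarity in multivariate phenotypes
# versus similarity in rare-variant genotypes, compared with a trace statistic.
import numpy as np

rng = np.random.default_rng(4)
n = 100
G = rng.binomial(2, 0.05, (n, 10)).astype(float)   # rare-variant genotypes
Y = rng.normal(size=(n, 3))                        # multivariate phenotypes
Y[:, 0] += G.sum(1)                                # variant burden affects trait 1

def centered_gram(M):
    K = M @ M.T                                    # linear-kernel similarity
    H = np.eye(len(K)) - np.ones_like(K) / len(K)  # centering projection
    return H @ K @ H

Ky, Kg = centered_gram(Y), centered_gram(G)
stat = np.trace(Ky @ Kg) / n                       # observed association statistic

# Null distribution: permute phenotype rows relative to genotypes
null = np.array([np.trace(centered_gram(Y[rng.permutation(n)]) @ Kg) / n
                 for _ in range(200)])
pval = (1 + np.sum(null >= stat)) / 201
```

The permutation loop is what the closed-form p-value of the paper's test avoids, which is precisely why the analytic version scales genome-wide.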
Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C
2018-06-29
A new multivariate regression model, named Error Covariance Penalized Regression (ECPR), is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid- and near-infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
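The general shape of such penalized estimators can be illustrated with a generalized-ridge regression in which a matrix penalty replaces the usual scalar-times-identity. This is only a sketch of the strategy: the actual ECPR estimator's use of the error covariance matrix may differ, and the identity penalty chosen below simply recovers ordinary ridge regression (the iid case).

```python
# Generalized-ridge sketch: beta = (X'X + lam * Omega)^{-1} X'y, where Omega
# is a penalty matrix (here the identity, i.e. plain ridge, as a placeholder).
import numpy as np

rng = np.random.default_rng(5)
n, p = 80, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -2.0, 1.5]                 # a few informative channels
y = X @ beta_true + 0.1 * rng.normal(size=n)

Omega = np.eye(p)    # stand-in penalty matrix; ECPR would build this from the ECM
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * Omega, X.T @ y)
```

Swapping Omega for a matrix derived from replicate-measurement error structure is the step that distinguishes an error-covariance penalty from plain ridge.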
NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES
He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.
2017-01-01
Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225
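One simple complete-data baseline for a multivariate Kruskal-Wallis statistic: compute the univariate KW statistic on the ranks of each outcome and sum across outcomes. This is only an illustrative sketch of the rank-based idea; the extension above, which retains partially observed cases, is not reproduced here.

```python
# Rank-based between-group comparison sketch: Kruskal-Wallis per outcome on
# complete data, summed across outcomes.
import numpy as np

rng = np.random.default_rng(6)
g = np.repeat([0, 1, 2], 30)                       # three groups, 30 subjects each
Y = rng.normal(size=(90, 2))                       # two correlated-style outcomes
Y[g == 2] += 1.5                                   # third group shifted on both

def kruskal_wallis(y, g):
    n = len(y)
    ranks = np.argsort(np.argsort(y)) + 1.0        # mid-ranks not needed (no ties)
    h = 0.0
    for grp in np.unique(g):
        r = ranks[g == grp]
        h += len(r) * (r.mean() - (n + 1) / 2) ** 2
    return 12.0 / (n * (n + 1)) * h

H = sum(kruskal_wallis(Y[:, j], g) for j in range(Y.shape[1]))
```

With a 1.5-SD shift in one group, the summed statistic is far above the chi-square critical values, illustrating the power such rank tests retain without any normality assumption.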
Effect of noise in principal component analysis with an application to ozone pollution
NASA Astrophysics Data System (ADS)
Tsakiri, Katerina G.
This thesis analyzes the effect of independent noise on the principal components of k normally distributed random variables defined by a covariance matrix. We prove that the principal components, as well as the canonical variate pairs, determined from the joint distribution of the original sample affected by noise can be essentially different from those determined from the original sample. However, when the differences between the eigenvalues of the original covariance matrix are sufficiently large compared to the level of the noise, the effect of noise on the principal components and canonical variate pairs proves to be negligible. The theoretical results are supported by a simulation study and examples. Moreover, we compare our results about the eigenvalues and eigenvectors in the two-dimensional case with other models examined before. This theory can be applied in any field that uses the decomposition of components in multivariate analysis. One application is the detection and prediction of the main atmospheric factor of ozone concentrations, using the example of Albany, New York. Using daily ozone, solar radiation, temperature, wind speed and precipitation data, we determine the main atmospheric factor for the explanation and prediction of ozone concentrations. A methodology is described for the decomposition of the time series of ozone and other atmospheric variables into a global term component, which describes the long-term trend and the seasonal variations, and a synoptic scale component, which describes the short-term variations. Using Canonical Correlation Analysis, we show that solar radiation is the only main factor among the atmospheric variables considered here for the explanation and prediction of the global and synoptic scale components of ozone. The global term components are modeled by a linear regression model, and the synoptic scale components by a vector autoregressive model and the Kalman filter.
The coefficient of determination, R2, for the prediction of the synoptic scale ozone component was found to be the highest when we consider the synoptic scale component of the time series for solar radiation and temperature. KEY WORDS: multivariate analysis; principal component; canonical variate pairs; eigenvalue; eigenvector; ozone; solar radiation; spectral decomposition; Kalman filter; time series prediction
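The canonical correlation step described above can be sketched with numpy: whiten each variable block and take the singular values of the cross-covariance. The two blocks below are simulated stand-ins (sharing one common driver, in the spirit of "radiation drives ozone"), not the Albany data.

```python
# Canonical correlation analysis sketch: leading canonical correlations
# between two multivariate blocks X and Y.
import numpy as np

rng = np.random.default_rng(7)
n = 300
shared = rng.normal(size=n)                       # common driver of both blocks
X = np.column_stack([shared + 0.3 * rng.normal(size=n) for _ in range(3)])
Y = np.column_stack([shared + 0.3 * rng.normal(size=n) for _ in range(2)])

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx, Syy = Xc.T @ Xc / n, Yc.T @ Yc / n
Sxy = Xc.T @ Yc / n

def inv_sqrt(S):
    w, V = np.linalg.eigh(S)                      # S is symmetric positive definite
    return V @ np.diag(w ** -0.5) @ V.T

M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
rho = np.linalg.svd(M, compute_uv=False)          # canonical correlations, descending
```

A single dominant canonical correlation near 1, with the rest small, is the CCA signature of one shared factor linking the two blocks.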
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
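Time-domain Granger causality at the native scale can be sketched directly: compare the residual variance of predicting y from its own past against predicting from the joint past of y and x. The state-space multiscale machinery described above is not reproduced here; this is only the baseline GC measure being extended, on an invented bivariate AR(1) example.

```python
# Granger causality sketch: GC from x to y is the log ratio of restricted
# (y past only) to full (y past + x past) residual variances.
import numpy as np

rng = np.random.default_rng(8)
n = 2000
x = np.zeros(n); y = np.zeros(n)
for t in range(1, n):                             # x drives y with one lag
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def resid_var(target, regressors):
    Z = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return np.var(target - Z @ beta)

full = resid_var(y[1:], [y[:-1], x[:-1]])         # predict y from y past + x past
restricted = resid_var(y[1:], [y[:-1]])           # predict y from y past only
gc_x_to_y = np.log(restricted / full)

full_r = resid_var(x[1:], [x[:-1], y[:-1]])       # and the reverse direction
restricted_r = resid_var(x[1:], [x[:-1]])
gc_y_to_x = np.log(restricted_r / full_r)
```

The directed coupling shows up as a clearly positive GC from x to y and a near-zero GC in the reverse direction; the paper's contribution is computing such quantities exactly after filtering and downsampling to coarser time scales.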
Application of multivariable statistical techniques in plant-wide WWTP control strategies analysis.
Flores, X; Comas, J; Roda, I R; Jiménez, L; Gernaey, K V
2007-01-01
The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques make it possible i) to determine natural groups or clusters of control strategies with similar behaviour, ii) to find and interpret hidden, complex and causal relation features in the data set and iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation of complex multicriteria data sets and allows improved use of the information for effective evaluation of control strategies.
Classical nucleation theory in the phase-field crystal model
NASA Astrophysics Data System (ADS)
Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas
2018-04-01
A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation take place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.
Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM
NASA Technical Reports Server (NTRS)
Crane, Robert G.; Hewitson, B. C.
1991-01-01
A new diagnostic tool is developed for examining relationships between the synoptic scale circulation and regional temperature distributions in GCMs. The 4 x 5 deg GISS GCM is shown to produce accurate simulations of the variance in the synoptic scale sea level pressure distribution over the U.S. An analysis of the observational data set from the National Meteorological Center (NMC) also shows a strong relationship between the synoptic circulation and grid point temperatures. This relationship is demonstrated by deriving transfer functions between a time-series of circulation parameters and temperatures at individual grid points. The circulation parameters are derived using rotated principal components analysis, and the temperature transfer functions are based on multivariate polynomial regression models. The application of these transfer functions to the GCM circulation indicates that there is considerable spatial bias present in the GCM temperature distributions. The transfer functions are also used to indicate the possible changes in U.S. regional temperatures that could result from differences in synoptic scale circulation between a 1XCO2 and a 2xCO2 climate, using a doubled CO2 version of the same GISS GCM.
Nagraj, Nandini; Slocik, Joseph M; Phillips, David M; Kelley-Loughnane, Nancy; Naik, Rajesh R; Potyrailo, Radislav A
2013-08-07
Peptide-capped AYSSGAPPMPPF gold nanoparticles were demonstrated for highly selective chemical vapor sensing using individual multivariable inductor-capacitor-resistor (LCR) resonators. Their multivariable response was achieved by measuring their resonance impedance spectra followed by multivariate spectral analysis. Detection of model toxic vapors and chemical agent simulants, such as acetonitrile, dichloromethane and methyl salicylate, was performed. Dichloromethane (dielectric constant εr = 9.1) and methyl salicylate (εr = 9.0) were discriminated using a single sensor. These sensing materials coupled to multivariable transducers can provide numerous opportunities for tailoring the vapor response selectivity based on the diversity of the amino acid composition of the peptides, and by the modulation of the nature of peptide-nanoparticle interactions through designed combinations of hydrophobic and hydrophilic amino acids.
Jackson, Dan; White, Ian R; Riley, Richard D
2013-01-01
Multivariate meta-analysis is becoming more commonly used. Methods for fitting the multivariate random effects model include maximum likelihood, restricted maximum likelihood, Bayesian estimation and multivariate generalisations of the standard univariate method of moments. Here, we provide a new multivariate method of moments for estimating the between-study covariance matrix with the properties that (1) it allows for either complete or incomplete outcomes and (2) it allows for covariates through meta-regression. Further, for complete data, it is invariant to linear transformations. Our method reduces to the usual univariate method of moments, proposed by DerSimonian and Laird, in a single dimension. We illustrate our method and compare it with some of the alternatives using a simulation study and a real example. PMID:23401213
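For orientation, the univariate DerSimonian and Laird moment estimator that the multivariate method above generalizes can be written in a few lines; the effect sizes and within-study variances below are made-up numbers for illustration.

```python
# DerSimonian-Laird random-effects meta-analysis: moment estimator of the
# between-study variance tau^2, then inverse-variance pooling.
import numpy as np

yi = np.array([0.30, 0.10, 0.45, 0.25, 0.05])    # study effect estimates (invented)
vi = np.array([0.01, 0.02, 0.015, 0.01, 0.03])   # within-study variances (invented)

w = 1.0 / vi                                     # fixed-effect weights
mu_fe = np.sum(w * yi) / np.sum(w)               # fixed-effect pooled estimate
Q = np.sum(w * (yi - mu_fe) ** 2)                # Cochran's Q heterogeneity statistic
k = len(yi)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

wr = 1.0 / (vi + tau2)                           # random-effects weights
mu_re = np.sum(wr * yi) / np.sum(wr)             # random-effects pooled estimate
```

The multivariate method of the paper replaces the scalar tau^2 with a between-study covariance matrix while keeping this moment-matching logic, and reduces exactly to the computation above in one dimension.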
Detecting event-related changes of multivariate phase coupling in dynamic brain networks.
Canolty, Ryan T; Cadieu, Charles F; Koepsell, Kilian; Ganguly, Karunesh; Knight, Robert T; Carmena, Jose M
2012-04-01
Oscillatory phase coupling within large-scale brain networks is a topic of increasing interest within systems, cognitive, and theoretical neuroscience. Evidence shows that brain rhythms play a role in controlling neuronal excitability and response modulation (Haider B, McCormick D. Neuron 62: 171-189, 2009) and regulate the efficacy of communication between cortical regions (Fries P. Trends Cogn Sci 9: 474-480, 2005) and distinct spatiotemporal scales (Canolty RT, Knight RT. Trends Cogn Sci 14: 506-515, 2010). In this view, anatomically connected brain areas form the scaffolding upon which neuronal oscillations rapidly create and dissolve transient functional networks (Lakatos P, Karmos G, Mehta A, Ulbert I, Schroeder C. Science 320: 110-113, 2008). Importantly, testing these hypotheses requires methods designed to accurately reflect dynamic changes in multivariate phase coupling within brain networks. Unfortunately, phase coupling between neurophysiological signals is commonly investigated using suboptimal techniques. Here we describe how a recently developed probabilistic model, phase coupling estimation (PCE; Cadieu C, Koepsell K Neural Comput 44: 3107-3126, 2010), can be used to investigate changes in multivariate phase coupling, and we detail the advantages of this model over the commonly employed phase-locking value (PLV; Lachaux JP, Rodriguez E, Martinerie J, Varela F. Human Brain Map 8: 194-208, 1999). We show that the N-dimensional PCE is a natural generalization of the inherently bivariate PLV. Using simulations, we show that PCE accurately captures both direct and indirect (network mediated) coupling between network elements in situations where PLV produces erroneous results. We present empirical results on recordings from humans and nonhuman primates and show that the PCE-estimated coupling values are different from those using the bivariate PLV. 
Critically, on these empirical recordings, PCE output tends to be sparser than the PLVs, indicating fewer significant interactions and perhaps a more parsimonious description of the data. Finally, the physical interpretation of PCE parameters is straightforward: the PCE parameters correspond to interaction terms in a network of coupled oscillators. Forward modeling of a network of coupled oscillators with parameters estimated by PCE generates synthetic data with statistical characteristics identical to empirical signals. Given these advantages over the PLV, PCE is a useful tool for investigating multivariate phase coupling in distributed brain networks.
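The bivariate PLV that PCE generalizes is simple to compute: the modulus of the mean resultant vector of the phase differences. The synthetic phases below are constructed directly for clarity (in practice one would extract them with, e.g., a Hilbert transform); the coupling parameters are invented.

```python
# Phase-locking value (PLV) sketch on synthetic instantaneous phases.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(5000) * 0.001                       # 5 s at 1 kHz
phase1 = 2 * np.pi * 10 * t + 0.1 * np.cumsum(rng.normal(size=t.size))
phase2 = phase1 + 0.5 + 0.2 * rng.normal(size=t.size)     # coupled, noisy offset
phase3 = 2 * np.pi * 10 * t + 0.1 * np.cumsum(rng.normal(size=t.size))  # independent

def plv(p, q):
    return np.abs(np.mean(np.exp(1j * (p - q))))  # length of mean resultant vector

plv_coupled = plv(phase1, phase2)
plv_uncoupled = plv(phase1, phase3)
```

PLV is inherently pairwise, which is exactly the limitation discussed above: it cannot distinguish direct coupling from coupling mediated through a third node, whereas the multivariate PCE model can.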
Multivariate Longitudinal Analysis with Bivariate Correlation Test
Adjakossa, Eric Houngla; Sadissou, Ibrahim; Hounkonnou, Mahouton Norbert; Nuel, Gregory
2016-01-01
In the context of multivariate multilevel data analysis, this paper focuses on the multivariate linear mixed-effects model, including all the correlations between the random effects when the dimensional residual terms are assumed uncorrelated. Using the EM algorithm, we suggest more general expressions of the model's parameter estimators. These estimators can be used in the framework of multivariate longitudinal data analysis as well as in the more general context of the analysis of multivariate multilevel data. By using a likelihood ratio test, we test the significance of the correlations between the random effects of two dependent variables of the model, in order to investigate whether or not it is useful to model these dependent variables jointly. Simulation studies are done to assess both the parameter recovery performance of the EM estimators and the power of the test. The usefulness of the test is illustrated using two empirical data sets, of longitudinal multivariate and multivariate multilevel type, respectively. PMID:27537692
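The likelihood ratio test of the random-effect correlations compares twice the log-likelihood gain of the joint model against a chi-square quantile. A minimal sketch with hypothetical log-likelihood values (critical values are hard-coded to keep the example dependency-free):

```python
def lr_test(loglik_joint, loglik_separate, df):
    """Likelihood ratio statistic and a 5%-level decision for small df."""
    crit = {1: 3.841, 2: 5.991, 3: 7.815}  # chi-square 95% quantiles
    lr = 2.0 * (loglik_joint - loglik_separate)
    return lr, lr > crit[df]

# Hypothetical fits: joint bivariate mixed model vs. two univariate models.
lr, reject = lr_test(loglik_joint=-1520.3, loglik_separate=-1525.8, df=1)
print(round(lr, 1), reject)  # 11.0 True
```

Rejecting suggests the cross-variable random-effect correlation is non-zero, i.e. that modelling the two dependent variables jointly is worthwhile.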
A Multivariate Descriptive Model of Motivation for Orthodontic Treatment.
ERIC Educational Resources Information Center
Hackett, Paul M. W.; And Others
1993-01-01
Motivation for receiving orthodontic treatment was studied among 109 young adults, and a multivariate model of the process is proposed. The combination of smallest space analysis and Partial Order Scalogram Analysis by base Coordinates (POSAC) illustrates an interesting methodology for health treatment studies and explores motivation for dental…
Multivariate Relationships between Statistics Anxiety and Motivational Beliefs
ERIC Educational Resources Information Center
Baloglu, Mustafa; Abbassi, Amir; Kesici, Sahin
2017-01-01
In general, anxiety has been found to be associated with motivational beliefs, and the current study investigated multivariate relationships between statistics anxiety and motivational beliefs among 305 college students (60.0% women). The Statistical Anxiety Rating Scale, the Motivated Strategies for Learning Questionnaire, and a set of demographic…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.
A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne platforms, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
Smith, Zachary J; Strombom, Sven; Wachsmann-Hogiu, Sebastian
2011-08-29
A multivariate optical computer has been constructed consisting of a spectrograph, digital micromirror device, and photomultiplier tube that is capable of determining absolute concentrations of individual components of a multivariate spectral model. We present experimental results on ternary mixtures, showing accurate quantification of chemical concentrations based on integrated intensities of fluorescence and Raman spectra measured with a single point detector. We additionally show in simulation that point measurements based on principal component spectra retain the ability to classify cancerous from noncancerous T cells.
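The quantitation step, recovering concentrations by least squares against known component spectra, can be sketched for a hypothetical two-component case (the paper treats ternary mixtures measured optically; the wavelengths and spectra below are invented, and the normal equations are solved in closed form to avoid any linear-algebra dependency):

```python
def unmix_two(spectrum, pure_a, pure_b):
    """Least-squares concentrations of two components whose pure spectra
    are known, via the 2x2 normal equations solved in closed form."""
    aa = sum(x * x for x in pure_a)
    bb = sum(x * x for x in pure_b)
    ab = sum(x * y for x, y in zip(pure_a, pure_b))
    ay = sum(x * y for x, y in zip(pure_a, spectrum))
    by = sum(x * y for x, y in zip(pure_b, spectrum))
    det = aa * bb - ab * ab
    return (bb * ay - ab * by) / det, (aa * by - ab * ay) / det

# Hypothetical pure-component spectra sampled at four wavelengths.
pure_a = [1.0, 2.0, 0.5, 0.0]
pure_b = [0.0, 1.0, 2.0, 1.0]
mix = [0.3 * a + 0.7 * b for a, b in zip(pure_a, pure_b)]
ca, cb = unmix_two(mix, pure_a, pure_b)
print(round(ca, 6), round(cb, 6))  # 0.3 0.7
```

In a multivariate optical computer, the analogous inner products are formed optically: the micromirror device weights the dispersed spectrum and the photomultiplier integrates it, so the point detector reports the projection directly.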
Prentice, Ross L; Zhao, Shanshan
2018-01-01
The Dabrowska (Ann Stat 16:1475-1489, 1988) product integral representation of the multivariate survivor function is extended, leading to a nonparametric survivor function estimator for an arbitrary number of failure time variates that has a simple recursive formula for its calculation. Empirical process methods are used to sketch proofs for this estimator's strong consistency and weak convergence properties. Summary measures of pairwise and higher-order dependencies are also defined and nonparametrically estimated. Simulation evaluation is given for the special case of three failure time variates.
NASA Technical Reports Server (NTRS)
Soeder, J. F.
1983-01-01
As turbofan engines become more complex, the development of controls necessitates the use of multivariable control techniques. A control developed for the F100-PW-100(3) turbofan engine by using linear quadratic regulator theory and other modern multivariable control synthesis techniques is described. The assembly language implementation of this control on an SEL 810B minicomputer is described. This implementation was then evaluated by using a real-time hybrid simulation of the engine. The control software was modified to run with a real engine. These modifications, in the form of sensor and actuator failure checks and control executive sequencing, are discussed. Finally, recommendations for control software implementations are presented.
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
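In the univariate case, Inverse Transform Sampling evaluates the inverse CDF at a uniform random number; the report's multi-variate extension is not reproduced here. A minimal sketch for an exponential distribution, where the inverse CDF has a closed form:

```python
import math
import random

def sample_exponential(n, rate, rng):
    """Inverse transform sampling: F(x) = 1 - exp(-rate * x), so
    F^-1(u) = -ln(1 - u) / rate for u ~ Uniform[0, 1)."""
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

rng = random.Random(42)
samples = sample_exponential(100_000, rate=2.0, rng=rng)
mean = sum(samples) / len(samples)
print(abs(mean - 0.5) < 0.01)  # True: Exponential(2) has mean 1/2
```

One common multi-variate extension samples the first coordinate this way and each remaining coordinate from its conditional distribution given the coordinates already drawn.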
Multivariate multiscale entropy of financial markets
NASA Astrophysics Data System (ADS)
Lu, Yunfan; Wang, Jun
2017-11-01
In quantifying the dynamical properties of complex phenomena in financial market systems, multivariate financial time series are of wide concern. In this work, considering the shortcomings and limitations of univariate multiscale entropy in analyzing multivariate time series, the multivariate multiscale sample entropy (MMSE), which can evaluate the complexity in multiple data channels over different timescales, is applied to quantify the complexity of financial markets. Its effectiveness and advantages are demonstrated in numerical simulations with two well-known synthetic noise signals. For the first time, the complexity of four generated trivariate return series for each stock trading hour in China stock markets is quantified through the interdisciplinary application of this method. We find that the complexity of trivariate return series in each hour shows a significant decreasing trend as stock trading time progresses. Further, the shuffled multivariate return series and the absolute multivariate return series are also analyzed. As another new attempt, the complexity of global stock markets (Asia, Europe and America) is quantified by analyzing the multivariate returns from them. Finally, we utilize the multivariate multiscale entropy to assess the relative complexity of normalized multivariate return volatility series with different degrees.
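The univariate building blocks of MMSE, coarse-graining over a timescale followed by sample entropy, can be sketched as follows. The genuinely multivariate step (forming composite delay vectors across channels) is omitted, and the tolerance r is fixed in absolute terms here, whereas it is usually set to a fraction of the series' standard deviation:

```python
import math
import random

def coarse_grain(series, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(series, m=2, r=0.5):
    """-ln(A/B): B counts template pairs of length m within Chebyshev
    distance r, and A counts the same for templates of length m + 1."""
    def matches(length):
        t = [series[i:i + length] for i in range(len(series) - length)]
        return sum(1 for i in range(len(t)) for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) < r)
    return -math.log(matches(m + 1) / matches(m))

rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(300)]
# White noise is irregular, so its entropy at timescale 2 is positive.
print(sample_entropy(coarse_grain(noise, 2)) > 0)  # True
```

Evaluating the entropy across several scales yields the multiscale profile; MMSE repeats this per-scale computation on vectors that span all data channels at once.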
NASA Astrophysics Data System (ADS)
Talbot, C.; McClure, J. E.; Armstrong, R. T.; Mostaghimi, P.; Hu, Y.; Miller, C. T.
2017-12-01
Microscale experimental and computational methods can be used to evaluate fundamental microscale mechanisms and deduce macroscale constitutive relationships and parameter values. The link between the microscale and the macroscale is especially demanding, because technical issues arise regarding the necessary scale of the system needed for a meaningful set of macroscale measures to be insensitive to the size of the system, which is known as a representative elementary volume (REV). While the REV scale is routinely determined for single-phase flow in porous media, no systematic study of the scale of a REV for the comprehensive set of macroscale measures considered here has been reported in the literature. A comprehensive set of measures of the macroscale state is developed. We further develop and apply methods to predict the REV scale and quantify the uncertainty of the estimate for this set of macroscale quantities. We model the system state in terms of standard errors of macroscale quantities as a multivariate Gaussian process dependent on the size of the domain simulated. We determine predictive distributions of function values and posterior distributions of weights using standard kernels, as well as a kernel constructed using relationships between physical quantities. For each kernel, we discuss the decay of the mean and covariance with increasing domain size, and use cross-validation to facilitate model selection. The procedure yields a model of the domain size needed to achieve a REV with quantifiable uncertainty. We present results in the context of multiphase fluid flow through a highly resolved realization of sandstone imaged using micro-CT. A 1440x1440x4320 section of the full 2520x2520x5280 imaged medium is simulated using the lattice-Boltzmann method. We compare the fidelity of the predictive model with results obtained by an analogous approach using polynomial regression.
NASA Astrophysics Data System (ADS)
Veiga, P.; Rubal, M.; Vieira, R.; Arenas, F.; Sousa-Pinto, I.
2013-03-01
Natural assemblages are variable in space and time; therefore, quantification of their variability is imperative to identify relevant scales for investigating natural or anthropogenic processes shaping these assemblages. We studied the variability of intertidal macroalgal assemblages on the North Portuguese coast, considering three spatial scales (from metres to 10 s of kilometres) following a hierarchical design. We tested the hypotheses that (1) spatial pattern will be invariant at all the studied scales and (2) spatial variability of macroalgal assemblages obtained by using species will be consistent with that obtained using functional groups. The univariate variables considered were total biomass and number of taxa, as well as the biomass of the most important species and functional groups; the multivariate variables were the structure of macroalgal assemblages, considering both species and functional groups. Most of the univariate results confirmed the first hypothesis except for the total number of taxa and foliose macroalgae that showed significant variability at the scale of site and area, respectively. In contrast, when multivariate patterns were examined, the first hypothesis was rejected except at the scale of 10 s of kilometres. Both uni- and multivariate results indicated that variation was larger at the smallest scale, and thus, small-scale processes seem to have more effect on spatial variability patterns. Macroalgal assemblages, both considering species and functional groups as surrogate, showed consistent spatial patterns, and therefore, the second hypothesis was confirmed. Consequently, functional groups may be considered a reliable biological surrogate to study changes on macroalgal assemblages at least along the investigated Portuguese coastline.
Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L
2015-12-30
Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
Self-similarity and flow characteristics of vertical-axis wind turbine wakes: an LES study
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Dabiri, John O.
2017-04-01
Large eddy simulation (LES) is coupled with a turbine model to study the structure of the wake behind a vertical-axis wind turbine (VAWT). In the simulations, a tuning-free anisotropic minimum dissipation model is used to parameterise the subfilter stress tensor, while the turbine-induced forces are modelled with an actuator line technique. The LES framework is first validated in the simulation of the wake behind a model straight-bladed VAWT placed in the water channel and then used to study the wake structure downwind of a full-scale VAWT sited in the atmospheric boundary layer. In particular, the self-similarity of the wake is examined, and it is found that the wake velocity deficit can be well characterised by a two-dimensional multivariate Gaussian distribution. By assuming a self-similar Gaussian distribution of the velocity deficit, and applying mass and momentum conservation, an analytical model is developed and tested to predict the maximum velocity deficit downwind of the turbine. Also, a simple parameterisation of VAWTs for LES with very coarse grid resolutions is proposed, in which the turbine is modelled as a rectangular porous plate with the same thrust coefficient. The simulation results show that, after some downwind distance (x/D ≈ 6), both actuator line and rectangular porous plate models have similar predictions for the mean velocity deficit. These results are of particular importance in simulations of large wind farms where, due to the coarse spatial resolution, the flow around individual VAWTs is not resolved.
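The self-similar velocity-deficit shape used in such analytical wake models can be written down directly. A sketch with made-up parameter values (in the actual model the amplitude and spreads follow from the thrust coefficient together with mass and momentum conservation):

```python
import math

def wake_deficit(y, z, du_max, sigma_y, sigma_z):
    """Self-similar 2-D Gaussian velocity deficit:
    du(y, z) = du_max * exp(-y^2 / (2 sy^2) - z^2 / (2 sz^2))."""
    return du_max * math.exp(-y**2 / (2 * sigma_y**2) - z**2 / (2 * sigma_z**2))

# Hypothetical wake parameters at one downwind station (normalized units).
du_max, sy, sz = 0.3, 0.5, 0.8
print(wake_deficit(0.0, 0.0, du_max, sy, sz))          # maximum deficit at the centre
print(wake_deficit(sy, 0.0, du_max, sy, sz) / du_max)  # exp(-0.5) one sigma off-centre
```

Allowing sigma_y and sigma_z to grow with downwind distance while conserving momentum is what lets the model predict the decay of the maximum deficit behind the turbine.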
Adams, Dean C
2014-09-01
Phylogenetic signal is the tendency for closely related species to display similar trait values due to their common ancestry. Several methods have been developed for quantifying phylogenetic signal in univariate traits and for sets of traits treated simultaneously, and the statistical properties of these approaches have been extensively studied. However, methods for assessing phylogenetic signal in high-dimensional multivariate traits like shape are less well developed, and their statistical performance is not well characterized. In this article, I describe a generalization of the K statistic of Blomberg et al. that is useful for quantifying and evaluating phylogenetic signal in high-dimensional multivariate data. The method (K(mult)) is found from the equivalency between statistical methods based on covariance matrices and those based on distance matrices. Using computer simulations based on Brownian motion, I demonstrate that the expected value of K(mult) remains at 1.0 as trait variation among species is increased or decreased, and as the number of trait dimensions is increased. By contrast, estimates of phylogenetic signal found with a squared-change parsimony procedure for multivariate data change with increasing trait variation among species and with increasing numbers of trait dimensions, confounding biological interpretations. I also evaluate the statistical performance of hypothesis testing procedures based on K(mult) and find that the method displays appropriate Type I error and high statistical power for detecting phylogenetic signal in high-dimensional data. Statistical properties of K(mult) were consistent for simulations using bifurcating and random phylogenies, for simulations using different numbers of species, for simulations that varied the number of trait dimensions, and for different underlying models of trait covariance structure.
Overall these findings demonstrate that K(mult) provides a useful means of evaluating phylogenetic signal in high-dimensional multivariate traits. Finally, I illustrate the utility of the new approach by evaluating the strength of phylogenetic signal for head shape in a lineage of Plethodon salamanders. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
MGAS: a powerful tool for multivariate gene-based genome-wide association analysis.
Van der Sluis, Sophie; Dolan, Conor V; Li, Jiang; Song, Youqiang; Sham, Pak; Posthuma, Danielle; Li, Miao-Xin
2015-04-01
Standard genome-wide association studies, testing the association between one phenotype and a large number of single nucleotide polymorphisms (SNPs), are limited in two ways: (i) traits are often multivariate, and analysis of composite scores entails loss in statistical power and (ii) gene-based analyses may be preferred, e.g. to decrease the multiple testing problem. Here we present a new method, multivariate gene-based association test by extended Simes procedure (MGAS), that allows gene-based testing of multivariate phenotypes in unrelated individuals. Through extensive simulation, we show that under most trait-generating genotype-phenotype models MGAS has superior statistical power to detect associated genes compared with gene-based analyses of univariate phenotypic composite scores (i.e. GATES, multiple regression), and multivariate analysis of variance (MANOVA). Re-analysis of metabolic data revealed 32 False Discovery Rate controlled genome-wide significant genes, and 12 regions harboring multiple genes; of these 44 regions, 30 were not reported in the original analysis. MGAS allows researchers to conduct their multivariate gene-based analyses efficiently, and without the loss of power that is often associated with an incorrectly specified genotype-phenotype model. MGAS is freely available in KGG v3.0 (http://statgenpro.psychiatry.hku.hk/limx/kgg/download.php). Access to the metabolic dataset can be requested at dbGaP (https://dbgap.ncbi.nlm.nih.gov/). The R-simulation code is available from http://ctglab.nl/people/sophie_van_der_sluis. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
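MGAS builds on an extended Simes procedure. The plain Simes combination of SNP-level p-values within a gene can be sketched as follows (the p-values are hypothetical; the extension used by GATES/MGAS replaces the raw ranks and count with effective numbers of independent tests to account for LD, which is omitted here):

```python
def simes(p_values):
    """Simes gene-level p-value: min over ordered p-values of m * p_(j) / j."""
    m = len(p_values)
    ordered = sorted(p_values)
    return min(m * p / (j + 1) for j, p in enumerate(ordered))

# Hypothetical SNP-level p-values within one gene.
snp_p = [0.002, 0.04, 0.3, 0.7, 0.9]
print(round(simes(snp_p), 4))  # 0.01
```

Unlike taking the minimum p-value with a Bonferroni correction, the Simes combination also lets several moderately small p-values jointly produce a significant gene-level result.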
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in a traditional two-part MC-Gamma Distribution modelling structure, but with a new parameterization technique. We used two parameters of the first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of the Gamma distribution (mean and standard deviation of wet day rainfall) to simulate wet day rainfall depths. We found that use of deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multi-variate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sampled these two Gamma parameters from the multi-variate normal distribution for each month of each year and used them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation. However, the MC parameters were modified using an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over a pre-specified past period (e.g. the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet day rainfall depths.
In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we have used ground-based data from 15 raingauge stations around Australia, which cover a wide range of climate zones including coastal, monsoonal, and arid climate characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will discuss further the relative merits of both models for hydrological simulation in the presentation.
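The two-part occurrence/depth structure shared by both models can be sketched in a few lines. A minimal stationary version with made-up parameters (the hierarchical model described above additionally re-draws the Gamma parameters each month from a fitted multi-variate normal, which is omitted here):

```python
import random

def simulate_daily_rain(n_days, p_ww, p_dd, shape, scale, rng):
    """Two-part model: first-order Markov chain for wet/dry occurrence,
    Gamma-distributed rainfall depths on wet days (zero on dry days)."""
    series, wet = [], False
    for _ in range(n_days):
        # Stay wet with probability p_ww; stay dry with probability p_dd.
        wet = (rng.random() < p_ww) if wet else (rng.random() >= p_dd)
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rng = random.Random(7)
rain = simulate_daily_rain(10_000, p_ww=0.6, p_dd=0.8, shape=0.8, scale=6.0, rng=rng)
wet_frac = sum(1 for d in rain if d > 0) / len(rain)
# Stationary wet-day probability: (1 - p_dd) / ((1 - p_dd) + (1 - p_ww)) = 1/3
print(abs(wet_frac - 1.0 / 3.0) < 0.05)  # True
```

With fixed Gamma parameters this chain reproduces daily statistics but underestimates multi-year variability, which is exactly the deficiency the hierarchical re-parameterization targets.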
Quantifying and mapping spatial variability in simulated forest plots
Gavin R. Corral; Harold E. Burkhart
2016-01-01
We used computer simulations to test the efficacy of multivariate statistical methods to detect, quantify, and map spatial variability of forest stands. Simulated stands were developed of regularly-spaced plantations of loblolly pine (Pinus taeda L.). We assumed no effects of competition or mortality, but random variability was added to individual tree characteristics...
ERIC Educational Resources Information Center
Pant, Mohan Dev
2011-01-01
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
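The stratification step of LHS is easy to sketch: each dimension of the unit cube is split into n equal strata, every stratum is hit exactly once, and the strata are permuted independently per dimension. A minimal unconditional sketch (the LU-decomposition step that imposes spatial correlation on the resulting field is omitted):

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on [0, 1)^d: one point per stratum per
    dimension, with an independent random permutation in each dimension."""
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [list(point) for point in zip(*columns)]

rng = random.Random(0)
pts = latin_hypercube(10, 2, rng)
# Each of the 10 strata in each dimension contains exactly one point.
for d in range(2):
    print(sorted(int(p[d] * 10) for p in pts) == list(range(10)))  # True
```

Compared with plain Monte Carlo, this stratification guarantees marginal coverage of every decile with only n samples, which is why fewer realizations are needed for a given statistical accuracy.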
Hou, Deyi; O'Connor, David; Nathanail, Paul; Tian, Li; Ma, Yan
2017-12-01
Heavy metal soil contamination is associated with potential toxicity to humans or ecotoxicity. Scholars have increasingly used a combination of geographical information science (GIS) with geostatistical and multivariate statistical analysis techniques to examine the spatial distribution of heavy metals in soils at a regional scale. A review of such studies showed that most soil sampling programs were based on grid patterns and composite sampling methodologies. Many programs intended to characterize various soil types and land use types. The most often used sampling depth intervals were 0-0.10 m, or 0-0.20 m, below surface; and the sampling densities used ranged from 0.0004 to 6.1 samples per km², with a median of 0.4 samples per km². The most widely used spatial interpolators were inverse distance weighted interpolation and ordinary kriging; and the most often used multivariate statistical analysis techniques were principal component analysis and cluster analysis. The review also identified several determining and correlating factors in heavy metal distribution in soils, including soil type, soil pH, soil organic matter, land use type, Fe, Al, and heavy metal concentrations. The major natural and anthropogenic sources of heavy metals were found to derive from lithogenic origin, roadway and transportation, atmospheric deposition, wastewater and runoff from industrial and mining facilities, fertilizer application, livestock manure, and sewage sludge. This review argues that the full potential of integrated GIS and multivariate statistical analysis for assessing heavy metal distribution in soils on a regional scale has not yet been fully realized.
It is proposed that future research be conducted to map multivariate results in GIS to pinpoint specific anthropogenic sources, to analyze temporal trends in addition to spatial patterns, to optimize modeling parameters, and to expand the use of different multivariate analysis tools beyond principal component analysis (PCA) and cluster analysis (CA). Copyright © 2017 Elsevier Ltd. All rights reserved.
Drunk driving detection based on classification of multivariate time series.
Li, Zhenlong; Jin, Xue; Zhao, Xiaohua
2015-09-01
This paper addresses the problem of detecting drunk driving based on classification of multivariate time series. First, driving performance measures were collected from a test in a driving simulator located in the Traffic Research Center, Beijing University of Technology. Lateral position and steering angle were used to detect drunk driving. Second, multivariate time series analysis was performed to extract the features. A piecewise linear representation was used to represent multivariate time series. A bottom-up algorithm was then employed to separate multivariate time series. The slope and time interval of each segment were extracted as the features for classification. Third, a support vector machine classifier was used to classify the driver's state into two classes (normal or drunk) according to the extracted features. The proposed approach achieved an accuracy of 80.0%. Drunk driving detection based on the analysis of multivariate time series is feasible and effective. The approach has implications for drunk driving detection. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
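The segmentation-and-feature step can be sketched with a simple bottom-up merge: start from two-point segments and repeatedly merge the adjacent pair whose joint least-squares line has the smallest residual, then keep each segment's slope and length as features (the trace below is made up and assumed even-length; the SVM classification stage is omitted):

```python
def fit_line(points):
    """Least-squares line through (t, x) points: returns (SSE, slope)."""
    n = len(points)
    mt = sum(t for t, _ in points) / n
    mx = sum(x for _, x in points) / n
    var = sum((t - mt) ** 2 for t, _ in points)
    cov = sum((t - mt) * (x - mx) for t, x in points)
    b = cov / var
    a = mx - b * mt
    return sum((x - (a + b * t)) ** 2 for t, x in points), b

def bottom_up(series, k):
    """Merge adjacent segments with the cheapest merge cost until k remain."""
    segs = [[(i, series[i]), (i + 1, series[i + 1])]
            for i in range(0, len(series) - 1, 2)]
    while len(segs) > k:
        costs = [fit_line(segs[i] + segs[i + 1])[0] for i in range(len(segs) - 1)]
        i = costs.index(min(costs))
        segs[i:i + 2] = [segs[i] + segs[i + 1]]
    return segs

# A lateral-position-like trace: steady drift out, then a sharp correction.
trace = [0, 1, 2, 3, 4, 5, 5, 4, 3, 2, 1, 0]
features = [(round(fit_line(s)[1], 6), len(s)) for s in bottom_up(trace, 2)]
print(features)  # [(1.0, 6), (-1.0, 6)]
```

The (slope, duration) pairs recovered per segment are the kind of features the classifier would then consume.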
A new test of multivariate nonlinear causality
Bai, Zhidong; Hui, Yongchang; Jiang, Dandan; Lv, Zhihui; Wong, Wing-Keung; Zheng, Shurong
2018-01-01
The multivariate nonlinear Granger causality developed by Bai et al. (2010) (Mathematics and Computers in Simulation. 2010; 81: 5-17) plays an important role in detecting the dynamic interrelationships between two groups of variables. Following the idea of the Hiemstra-Jones (HJ) test proposed by Hiemstra and Jones (1994) (Journal of Finance. 1994; 49(5): 1639-1664), they attempt to establish a central limit theorem (CLT) of their test statistic by applying the asymptotical property of the multivariate U-statistic. However, Bai et al. (2016) (2016; arXiv: 1701.03992) revisit the HJ test and find that the test statistic given by HJ is NOT a function of U-statistics, which implies that neither the CLT proposed by Hiemstra and Jones (1994) nor the one extended by Bai et al. (2010) is valid for statistical inference. In this paper, we re-estimate the probabilities and re-establish the CLT of the new test statistic. Numerical simulation shows that our new estimates are consistent and our new test exhibits decent size and power. PMID:29304085
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation-constrained multivariate curve resolution-alternating least squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex, richer higher-order instrumental data. The proposed method is tested using a simulated data set and two experimental data systems: one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
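The alternating least-squares core of multivariate curve resolution can be sketched on synthetic bilinear data. The example below is a hypothetical illustration with a plain nonnegativity constraint (clipping); it omits the correlation constraint that is the subject of the abstract, and all data, band shapes, and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samp, n_wl = 40, 120
# Two hypothetical spectral profiles (Gaussian bands) and random
# positive concentrations: D = C S + noise (bilinear model).
wl = np.linspace(0, 1, n_wl)
S_true = np.stack([np.exp(-(wl - 0.3) ** 2 / 0.01),
                   np.exp(-(wl - 0.7) ** 2 / 0.02)])
C_true = rng.uniform(0.1, 1.0, size=(n_samp, 2))
D = C_true @ S_true + 0.01 * rng.normal(size=(n_samp, n_wl))

# MCR-ALS loop: alternate least-squares solves for spectra S and
# concentrations C, enforcing nonnegativity by clipping each update.
C = rng.uniform(size=(n_samp, 2))
for _ in range(200):
    S, *_ = np.linalg.lstsq(C, D, rcond=None)
    S = np.clip(S, 0, None)
    Ct, *_ = np.linalg.lstsq(S.T, D.T, rcond=None)
    C = np.clip(Ct.T, 0, None)

rel_err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

With only a nonnegativity constraint the solution is subject to rotational ambiguity; the correlation constraint described in the abstract is one way to pin down the analyte profile.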
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis, as a multivariate statistical technique, has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and clear advantages over the traditional vector autoregressive method, due to its simplified procedure for detecting causal interactions between multiple time series and its avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inferences about directed interaction. Its feasibility and effectiveness are verified on simulated data.
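The linear version of this idea can be sketched briefly. The numpy example below is a hypothetical illustration, not the authors' kernel implementation: each block is conditioned on the target's own past, and the first canonical correlation of the residuals serves as a measure of directed influence. The toy bivariate system (x drives y) and all parameters are assumptions.

```python
import numpy as np

def residualize(Z, C):
    # Remove the best linear prediction of Z from the conditioning block C.
    C1 = np.column_stack([np.ones(len(C)), C])
    beta, *_ = np.linalg.lstsq(C1, Z, rcond=None)
    return Z - C1 @ beta

def first_canonical_corr(A, B, ridge=1e-8):
    # First canonical correlation = leading singular value of the
    # whitened cross-covariance between the two (centered) blocks.
    A = A - A.mean(0); B = B - B.mean(0)
    Caa = A.T @ A / len(A) + ridge * np.eye(A.shape[1])
    Cbb = B.T @ B / len(B) + ridge * np.eye(B.shape[1])
    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    M = inv_sqrt(Caa) @ (A.T @ B / len(A)) @ inv_sqrt(Cbb)
    return np.linalg.svd(M, compute_uv=False)[0]

# Toy system in which x drives y but not vice versa.
rng = np.random.default_rng(0)
n, lag = 2000, 2
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal()
    y[t] = 0.8 * x[t - 1] + 0.2 * y[t - 1] + rng.normal()

def lagged(s):
    # Rows t = lag..n-1; columns s[t-1], ..., s[t-lag].
    return np.column_stack([s[lag - k - 1:len(s) - k - 1] for k in range(lag)])

Xp, Yp, xf, yf = lagged(x), lagged(y), x[lag:], y[lag:]
# Condition on the target's own past, then correlate the residuals.
cc_x_to_y = first_canonical_corr(residualize(Xp, Yp), residualize(yf[:, None], Yp))
cc_y_to_x = first_canonical_corr(residualize(Yp, Xp), residualize(xf[:, None], Xp))
```

On this toy system cc_x_to_y is large while cc_y_to_x stays near chance level, recovering the built-in direction of influence.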
NASA Technical Reports Server (NTRS)
Leininger, G. G.
1981-01-01
Using nonlinear digital simulation as a representative model of the dynamic operation of the QCSEE turbofan engine, a feedback control system is designed by variable frequency design techniques. Transfer functions are generated for each of five power level settings covering the range of operation from approach power to full throttle (62.5% to 100% full power). These transfer functions are then used by an interactive control system design synthesis program to provide a closed loop feedback control using the multivariable Nyquist array and extensions to multivariable Bode diagrams and Nichols charts.
Kahlert, Daniela; Schlicht, Wolfgang
2015-08-21
Traffic safety and pedestrian friendliness are considered important conditions for older people's motivation to walk through their environment. This study uses an experimental design with computer-simulated living environments to investigate the effect of micro-scale environmental factors (parking spaces and green verges with trees) on older people's perceptions of both motivational antecedents (dependent variables). Seventy-four consecutively recruited older people were randomly assigned to watch one of two scenarios (independent variable) on a computer screen. The scenarios simulated a stroll on a sidewalk typical of a German city. In version 'A', the subjects take a fictive walk on a sidewalk on which a number of cars are partially parked. In version 'B', cars are in parking spaces separated from the sidewalk by grass verges and trees. Subjects assessed their impressions of both dependent variables. A multivariate analysis of covariance showed that subjects' ratings of perceived traffic safety and pedestrian friendliness were higher for version 'B' than for version 'A'. Cohen's d indicates medium (d = 0.73) and large (d = 1.23) effect sizes for traffic safety and pedestrian friendliness, respectively. The study suggests that elements of the built environment might affect motivational antecedents of older people's walking behavior.
Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moon, Jae; Manuel, Lance; Churchfield, Matthew; ...
2017-12-28
Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines considers only a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study's overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.
Compassion Fatigue and Psychological Distress Among Social Workers: A Validation Study
Adams, Richard E.; Boscarino, Joseph A.; Figley, Charles R.
2009-01-01
Few studies have focused on caring professionals and their emotional exhaustion from working with traumatized clients, referred to as compassion fatigue (CF). The present study had 2 goals: (a) to assess the psychometric properties of a CF scale, and (b) to examine the scale's predictive validity in a multivariate model. The data came from a survey of social workers living in New York City following the September 11, 2001, terrorist attacks on the World Trade Center. Factor analyses indicated that the CF scale measured multiple dimensions. After overlapping items were eliminated, the scale measured 2 key underlying dimensions—secondary trauma and job burnout. In a multivariate model, these dimensions were related to psychological distress, even after other risk factors were controlled. The authors discuss the results in light of increasing the ability of professional caregivers to meet the emotional needs of their clients within a stressful environment without experiencing CF. PMID:16569133
Multivariate Boosting for Integrative Analysis of High-Dimensional Cancer Genomic Data
Xiong, Lie; Kuan, Pei-Fen; Tian, Jianan; Keles, Sunduz; Wang, Sijian
2015-01-01
In this paper, we propose a novel multivariate component-wise boosting method for fitting multivariate response regression models under the high-dimension, low-sample-size setting. Our method is motivated by modeling the association among different biological molecules based on multiple types of high-dimensional genomic data. In particular, we are interested in two applications: studying the influence of DNA copy number alterations on RNA transcript levels and investigating the association between DNA methylation and gene expression. For this purpose, we model the dependence of RNA expression levels on DNA copy number alterations and the dependence of gene expression on DNA methylation through multivariate regression models, and utilize a boosting-type method to handle the high dimensionality as well as to model the possible nonlinear associations. The performance of the proposed method is demonstrated through simulation studies. Finally, our multivariate boosting method is applied to two breast cancer studies. PMID:26609213
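As a rough illustration of component-wise boosting with a multivariate response, here is a minimal sketch on synthetic data. It is not the authors' implementation: the base learners are linear (the abstract also covers nonlinear associations), and the sparse coefficient matrix, sizes, shrinkage, and step count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 200, 50, 3               # samples, predictors, responses
X = rng.normal(size=(n, p))
B_true = np.zeros((p, q))
B_true[0, 0], B_true[1, 1], B_true[2, 2] = 2.0, -1.5, 1.0   # sparse truth
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

B = np.zeros((p, q))
R = Y.copy()                       # current residual matrix
nu, steps = 0.1, 300               # shrinkage and boosting iterations
col_ss = (X ** 2).sum(0)
for _ in range(steps):
    # For every (predictor, response) pair, the univariate least-squares
    # coefficient on the current residual; pick the pair whose update
    # gives the largest residual sum-of-squares reduction.
    G = X.T @ R / col_ss[:, None]          # p x q candidate coefficients
    score = G ** 2 * col_ss[:, None]       # SS reduction per pair
    j, k = np.unravel_index(np.argmax(score), score.shape)
    B[j, k] += nu * G[j, k]                # shrunken update of one entry
    R[:, k] -= nu * G[j, k] * X[:, j]
```

The greedy pair selection is what makes the procedure component-wise: only one coefficient of the p x q matrix moves per iteration, which yields sparse fits in the p >> n regime.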
3D visualization of ultra-fine ICON climate simulation data
NASA Astrophysics Data System (ADS)
Röber, Niklas; Spickermann, Dela; Böttinger, Michael
2016-04-01
Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, from global coupled climate simulations down to very detailed, high-resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments at ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high-resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large, time-dependent, multivariate 3D data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization software ParaView and Vapor, which allow us to read and handle such large data volumes. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore, we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.
Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs
NASA Astrophysics Data System (ADS)
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.
2013-07-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc⁻¹, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc⁻¹.
De Francesco, Davide; Leech, Robert; Sabin, Caroline A.; Winston, Alan
2018-01-01
Objective The reported prevalence of cognitive impairment remains similar to that reported in the pre-antiretroviral therapy era. This may be partially artefactual due to the methods used to diagnose impairment. In this study, we evaluated the diagnostic performance of the HIV-associated neurocognitive disorder (Frascati criteria) and global deficit score (GDS) methods in comparison to a new, multivariate method of diagnosis. Methods Using a simulated 'normative' dataset informed by real-world cognitive data from the observational Pharmacokinetic and Clinical Observations in PeoPle Over fiftY (POPPY) cohort study, we evaluated the apparent prevalence of cognitive impairment using the Frascati and GDS definitions, as well as a novel multivariate method based on the Mahalanobis distance. We then quantified the diagnostic properties (including positive and negative predictive values and accuracy) of each method, using bootstrapping with 10,000 replicates, with a separate 'test' dataset to which a pre-defined proportion of 'impaired' individuals had been added. Results The simulated normative dataset demonstrated that up to ~26% of a normative control population would be diagnosed with cognitive impairment under the Frascati criteria and ~20% under the GDS. In contrast, the multivariate Mahalanobis distance method identified impairment in ~5%. Using the test dataset, diagnostic accuracy [95% confidence intervals] and positive predictive value (PPV) were best for the multivariate method vs. Frascati and GDS (accuracy: 92.8% [90.3–95.2%] vs. 76.1% [72.1–80.0%] and 80.6% [76.6–84.5%], respectively; PPV: 61.2% [48.3–72.2%] vs. 29.4% [22.2–36.8%] and 33.9% [25.6–42.3%], respectively). Increasing the a priori false positive rate for the multivariate Mahalanobis distance method from 5% to 15% increased sensitivity from 77.4% (64.5–89.4%) to 92.2% (83.3–100%), at the cost of specificity falling from 94.5% (92.8–95.2%) to 85.0% (81.2–88.5%).
Conclusion Our simulations suggest that the commonly used diagnostic criteria of HIV-associated cognitive impairment label a significant proportion of a normative reference population as cognitively impaired, which will likely lead to a substantial over-estimate of the true proportion in a study population, due to their lower than expected specificity. These findings have important implications for clinical research regarding cognitive health in people living with HIV. More accurate methods of diagnosis should be implemented, with multivariate techniques offering a promising solution. PMID:29641619
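The multivariate idea can be sketched with synthetic data. The example below is a hypothetical illustration, not the POPPY analysis: it fixes a decision threshold on the squared Mahalanobis distance at an a priori 5% false-positive rate and contrasts it with a simple per-test deficit count. The number of tests, covariance, and cut-offs are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
k = 8                                     # number of cognitive test scores
# Hypothetical normative covariance: standardized, correlated tests.
Sigma = 0.6 * np.ones((k, k)) + 0.4 * np.eye(k)
norm = rng.multivariate_normal(np.zeros(k), Sigma, size=5000)

mu = norm.mean(0)
S_inv = np.linalg.inv(np.cov(norm, rowvar=False))

def mahal2(scores):
    # Squared Mahalanobis distance to the normative centroid.
    d = scores - mu
    return np.einsum('...i,ij,...j->...', d, S_inv, d)

# Fix the decision threshold at an a-priori 5% false-positive rate,
# here taken empirically from the normative sample (the chi-square(k)
# quantile would be the parametric alternative).
thresh = np.quantile(mahal2(norm), 0.95)

controls = rng.multivariate_normal(np.zeros(k), Sigma, size=5000)
multivar_rate = (mahal2(controls) > thresh).mean()   # ~5% by construction

# A simple per-test rule (>= 2 of 8 scores more than 1 SD below the
# normative mean), loosely in the spirit of deficit-count criteria,
# labels a much larger share of healthy controls as impaired.
deficit_rate = ((controls < -1).sum(1) >= 2).mean()
```

Because the Mahalanobis rule accounts for the joint distribution of the correlated test scores, its false-positive rate stays at the chosen level, whereas the per-test count inflates with the number of tests.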
A General Approach for Estimating Scale Score Reliability for Panel Survey Data
ERIC Educational Resources Information Center
Biemer, Paul P.; Christ, Sharon L.; Wiesen, Christopher A.
2009-01-01
Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. Reliability estimation is usually an integral step to assess…
Evaluation of a Multivariate Syndromic Surveillance System for West Nile Virus.
Faverjon, Céline; Andersson, M Gunnar; Decors, Anouk; Tapprest, Jackie; Tritz, Pierre; Sandoz, Alain; Kutasi, Orsolya; Sala, Carole; Leblond, Agnès
2016-06-01
Various methods are currently used for the early detection of West Nile virus (WNV) but their outputs are not quantitative and/or do not take into account all available information. Our study aimed to test a multivariate syndromic surveillance system to evaluate if the sensitivity and the specificity of detection of WNV could be improved. Weekly time series data on nervous syndromes in horses and mortality in both horses and wild birds were used. Baselines were fitted to the three time series and used to simulate 100 years of surveillance data. WNV outbreaks were simulated and inserted into the baselines based on historical data and expert opinion. Univariate and multivariate syndromic surveillance systems were tested to gauge how well they detected the outbreaks; detection was based on an empirical Bayesian approach. The systems' performances were compared using measures of sensitivity, specificity, and area under receiver operating characteristic curve (AUC). When data sources were considered separately (i.e., univariate systems), the best detection performance was obtained using the data set of nervous symptoms in horses compared to those of bird and horse mortality (AUCs equal to 0.80, 0.75, and 0.50, respectively). A multivariate outbreak detection system that used nervous symptoms in horses and bird mortality generated the best performance (AUC = 0.87). The proposed approach is suitable for performing multivariate syndromic surveillance of WNV outbreaks. This is particularly relevant, given that a multivariate surveillance system performed better than a univariate approach. Such a surveillance system could be especially useful in serving as an alert for the possibility of human viral infections. This approach can be also used for other diseases for which multiple sources of evidence are available.
NASA Astrophysics Data System (ADS)
Dee, S. G.; Parsons, L. A.; Loope, G. R.; Overpeck, J. T.; Ault, T. R.; Emile-Geay, J.
2017-10-01
The spectral characteristics of paleoclimate observations spanning the last millennium suggest the presence of significant low-frequency (multi-decadal to centennial scale) variability in the climate system. Since this low-frequency climate variability is critical for climate predictions on societally-relevant scales, it is essential to establish whether General Circulation Models (GCMs) are able to simulate it faithfully. Recent studies find large discrepancies between models and paleoclimate data at low frequencies, prompting concerns surrounding the ability of GCMs to predict long-term, high-magnitude variability under greenhouse forcing (Laepple and Huybers, 2014a, 2014b). However, efforts to ground climate model simulations directly in paleoclimate observations are impeded by fundamental differences between models and the proxy data: proxy systems often record a multivariate and/or nonlinear response to climate, precluding a direct comparison to GCM output. In this paper we bridge this gap via a forward proxy modeling approach, coupled to an isotope-enabled GCM. This allows us to disentangle the various contributions to signals embedded in ice cores, speleothem calcite, coral aragonite, tree-ring width, and tree-ring cellulose. The paper addresses the following questions: (1) do forward-modeled "pseudoproxies" exhibit variability comparable to proxy data? (2) if not, which processes alter the shape of the spectrum of simulated climate variability, and are these processes broadly distinguishable from climate? We apply our method to representative case studies, and broaden these insights with an analysis of the PAGES2k database (PAGES2K Consortium, 2013). We find that current proxy system models (PSMs) can help resolve model-data discrepancies on interannual to decadal timescales, but cannot account for the mismatch in variance on multi-decadal to centennial timescales.
We conclude that, specific to this set of PSMs and isotope-enabled model, the paleoclimate record may exhibit larger low-frequency variability than GCMs currently simulate, indicative of incomplete physics and/or forcings.
Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi
2015-01-01
The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of the four new approaches, namely zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), with respect to estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information needed for a complete covariance matrix of the coefficients.
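The MGLS pooling step itself can be sketched in a few lines. The setup below (two coefficients per study, known within-study covariances, a true correlation of 0.5) is invented for illustration; the "ZC" variant simply discards the off-diagonal covariance terms, mirroring the simplest of the four approximation approaches.

```python
import numpy as np

rng = np.random.default_rng(7)
beta_true = np.array([0.5, -0.3])       # two coefficients of interest
n_studies = 40

ests, covs = [], []
for _ in range(n_studies):
    se = rng.uniform(0.1, 0.3, size=2)
    rho = 0.5                            # true within-study correlation
    V = np.array([[se[0] ** 2, rho * se[0] * se[1]],
                  [rho * se[0] * se[1], se[1] ** 2]])
    ests.append(rng.multivariate_normal(beta_true, V))
    covs.append(V)

def mgls_pool(ests, covs):
    # Multivariate GLS: precision-weighted average of coefficient vectors,
    # beta_hat = (sum W_i)^-1 sum W_i b_i with W_i = V_i^-1.
    W = [np.linalg.inv(V) for V in covs]
    A = np.sum(W, axis=0)
    b = np.sum([Wi @ e for Wi, e in zip(W, ests)], axis=0)
    return np.linalg.solve(A, b), np.linalg.inv(A)

pooled, pooled_cov = mgls_pool(ests, covs)
# "Zero correlation" (ZC) variant: keep only the diagonal of each V_i.
pooled_zc, _ = mgls_pool(ests, [np.diag(np.diag(V)) for V in covs])
```

Both variants are unbiased here; the point of the richer correlation approximations (CC, EC, MMC) is efficiency and correct interval coverage, not the point estimate alone.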
Orientation decoding: Sense in spirals?
Clifford, Colin W G; Mannion, Damien J
2015-04-15
The orientation of a visual stimulus can be successfully decoded from the multivariate pattern of fMRI activity in human visual cortex. Whether this capacity requires coarse-scale orientation biases is controversial. We and others have advocated the use of spiral stimuli to eliminate a potential coarse-scale bias-the radial bias toward local orientations that are collinear with the centre of gaze-and hence narrow down the potential coarse-scale biases that could contribute to orientation decoding. The usefulness of this strategy is challenged by the computational simulations of Carlson (2014), who reported the ability to successfully decode spirals of opposite sense (opening clockwise or counter-clockwise) from the pooled output of purportedly unbiased orientation filters. Here, we elaborate the mathematical relationship between spirals of opposite sense to confirm that they cannot be discriminated on the basis of the pooled output of unbiased or radially biased orientation filters. We then demonstrate that Carlson's (2014) reported decoding ability is consistent with the presence of inadvertent biases in the set of orientation filters; biases introduced by their digital implementation and unrelated to the brain's processing of orientation. These analyses demonstrate that spirals must be processed with an orientation bias other than the radial bias for successful decoding of spiral sense. Copyright © 2014 Elsevier Inc. All rights reserved.
Multi-criteria evaluation of CMIP5 GCMs for climate change impact analysis
NASA Astrophysics Data System (ADS)
Ahmadalipour, Ali; Rana, Arun; Moradkhani, Hamid; Sharma, Ashish
2017-04-01
Climate change is expected to have severe impacts on the global hydrological cycle along with the food-water-energy nexus. Currently, there are many climate models used in predicting important climatic variables. Though there have been advances in the field, there are still many problems to be resolved related to reliability, uncertainty, and computing needs, among many others. In the present work, we have analyzed the performance of 20 different global climate models (GCMs) from the Climate Model Intercomparison Project Phase 5 (CMIP5) dataset over the Columbia River Basin (CRB) in the Pacific Northwest USA. We demonstrate a statistical multicriteria approach, using univariate and multivariate techniques, for selecting suitable GCMs to be used for climate change impact analysis in the region. Univariate methods include the mean, standard deviation, coefficient of variation, relative change (variability), Mann-Kendall test, and Kolmogorov-Smirnov test (KS test); the multivariate methods used were principal component analysis (PCA), singular value decomposition (SVD), canonical correlation analysis (CCA), and cluster analysis. The analysis is performed on raw GCM data, i.e., before bias correction, for the precipitation and temperature climatic variables for all 20 models to capture the reliability and nature of each model at the regional scale. The analysis is based on spatially averaged datasets of GCMs and observations for the period 1970 to 2000. A ranking is provided for each GCM based on performance evaluated against gridded observational data at various temporal scales (daily, monthly, and seasonal). The results provide insight into each of the methods employed in ranking GCMs and the statistical properties they address. Further, an evaluation was also performed for raw GCM simulations against different sets of gridded observational data in the area.
Alpine Ecohydrology Across Scales: Propagating Fine-scale Heterogeneity to the Catchment and Beyond
NASA Astrophysics Data System (ADS)
Mastrotheodoros, T.; Pappas, C.; Molnar, P.; Burlando, P.; Hadjidoukas, P.; Fatichi, S.
2017-12-01
In mountainous ecosystems, complex topography and landscape heterogeneity govern ecohydrological states and fluxes. Here, we investigate topographic controls on water, energy and carbon fluxes across different climatic regimes and vegetation types representative of the European Alps. We use an ecohydrological model to perform fine-scale numerical experiments on a synthetic domain that comprises a symmetric mountain with eight catchments draining along the cardinal and intercardinal directions. Distributed meteorological model input variables are generated using observations from Switzerland. The model computes the incoming solar radiation based on the local topography. We implement a multivariate statistical framework to disentangle the impact of landscape heterogeneity (i.e., elevation, aspect, flow contributing area, vegetation type) on the simulated water, carbon, and energy dynamics. This allows us to identify the sensitivities of several ecohydrological variables (including leaf area index, evapotranspiration, snow cover and net primary productivity) to topographic and meteorological inputs at different spatial and temporal scales. We also use an alpine catchment as a real case study to investigate how the natural variability of soil and land cover affects the idealized relationships that arise from the synthetic domain. In accordance with previous studies, our analysis shows a complex pattern of vegetation response to radiation. We also find different patterns of ecosystem sensitivity to topography-driven heterogeneity depending on the hydrological regime (i.e., wet vs. dry conditions). Our results suggest that topography-driven variability in ecohydrological variables (e.g., transpiration) at the fine spatial scale can exceed 50%, but is substantially reduced (~5%) when integrated at the catchment scale.
Defining critical habitats of threatened and endemic reef fishes with a multivariate approach.
Purcell, Steven W; Clarke, K Robert; Rushworth, Kelvin; Dalton, Steven J
2014-12-01
Understanding critical habitats of threatened and endemic animals is essential for mitigating extinction risks, developing recovery plans, and siting reserves, but assessment methods are generally lacking. We evaluated critical habitats of 8 threatened or endemic fish species on coral and rocky reefs of subtropical eastern Australia, by measuring physical and substratum-type variables of habitats at fish sightings. We used nonmetric and metric multidimensional scaling (nMDS, mMDS), Analysis of similarities (ANOSIM), similarity percentages analysis (SIMPER), permutational analysis of multivariate dispersions (PERMDISP), and other multivariate tools to distinguish critical habitats. Niche breadth was widest for 2 endemic wrasses, and reef inclination was important for several species, often found in relatively deep microhabitats. Critical habitats of mainland reef species included small caves or habitat-forming hosts such as gorgonian corals and black coral trees. Hard corals appeared important for reef fishes at Lord Howe Island, and red algae for mainland reef fishes. A wide range of habitat variables are required to assess critical habitats owing to varied affinities of species to different habitat features. We advocate assessments of critical habitats matched to the spatial scale used by the animals and a combination of multivariate methods. Our multivariate approach furnishes a general template for assessing the critical habitats of species, understanding how these vary among species, and determining differences in the degree of habitat specificity. © 2014 Society for Conservation Biology.
Multivariate regression model for predicting lumber grade volumes of northern red oak sawlogs
Daniel A. Yaussy; Robert L. Brisbin
1983-01-01
A multivariate regression model was developed to predict green board-foot yields for the seven common factory lumber grades processed from northern red oak (Quercus rubra L.) factory grade logs. The model uses the standard log measurements of grade, scaling diameter, length, and percent defect. It was validated with an independent data set. The model...
2017-09-01
This dissertation explores the efficacy of statistical post-processing methods downstream of these dynamical model components with a hierarchical multivariate Bayesian approach. Subject terms: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.
Multivariate regression model for predicting yields of grade lumber from yellow birch sawlogs
Andrew F. Howard; Daniel A. Yaussy
1986-01-01
A multivariate regression model was developed to predict green board-foot yields for the common grades of factory lumber processed from yellow birch factory-grade logs. The model incorporates the standard log measurements of scaling diameter, length, proportion of scalable defects, and the assigned USDA Forest Service log grade. Differences in yields between band and...
Keenan, Michael R; Smentkowski, Vincent S; Ulfig, Robert M; Oltman, Edward; Larson, David J; Kelly, Thomas F
2011-06-01
We demonstrate for the first time that multivariate statistical analysis techniques can be applied to atom probe tomography data to estimate the chemical composition of a sample at the full spatial resolution of the atom probe in three dimensions. Whereas the raw atom probe data provide the specific identity of an atom at a precise location, the multivariate results can be interpreted in terms of the probabilities that an atom representing a particular chemical phase is situated there. When aggregated to the size scale of a single atom (∼0.2 nm), atom probe spectral-image datasets are huge and extremely sparse. In fact, the average spectrum will have somewhat less than one total count per spectrum due to imperfect detection efficiency. These conditions, under which the variance in the data is completely dominated by counting noise, test the limits of multivariate analysis, and an extensive discussion of how to extract the chemical information is presented. Efficient numerical approaches to performing principal component analysis (PCA) on these datasets, which may number hundreds of millions of individual spectra, are put forward, and it is shown that PCA can be computed in a few seconds on a typical laptop computer.
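The sparsity regime described above (well under one count per spectrum, variance dominated by counting noise) can be illustrated with a toy sketch. The example below is hypothetical, not atom probe data, and deliberately omits the careful noise weighting the abstract alludes to: plain PCA on Poisson counts from two invented phases.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_chan = 100000, 60            # voxels x spectrum channels
# Two hypothetical phases with disjoint spectral support.
s1 = np.r_[np.full(30, 1 / 30), np.zeros(30)]
s2 = np.r_[np.zeros(30), np.full(30, 1 / 30)]
frac = rng.uniform(size=(n_vox, 1))   # phase-1 fraction per voxel
rate = 0.8 * (frac * s1 + (1 - frac) * s2)   # <1 expected count/spectrum
counts = rng.poisson(rate).astype(float)

# PCA via SVD of the mean-centered data. (In practice, Poisson-aware
# scaling matters greatly for such sparse counts; omitted for brevity.)
Xc = counts - counts.mean(0)
U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
evr = sv ** 2 / (sv ** 2).sum()       # explained-variance ratios

contrast = s1 - s2                    # direction separating the phases
align = np.corrcoef(Vt[0], contrast)[0, 1]
```

Even at an average of 0.8 counts per 60-channel spectrum, the leading principal component aligns with the phase contrast, because the chemical signal lifts one eigenvalue above the counting-noise floor.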
Sornborger, Andrew T; Lauderdale, James D
2016-11-01
Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, C(τ), as opposed to standard methods that decompose the time series, X(t), using only information at zero lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.
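The zero-lag versus all-lags distinction can be illustrated with a toy cross-covariance computation. This is not the authors' multitaper decomposition, only a stdlib sketch of the statistic it operates on, using synthetic data:

```python
import random

def cross_cov(x, y, lag):
    """Sample cross-covariance C_xy(lag): mean of (x_t - mx) * (y_{t+lag} - my).
    Decomposing C(tau) over many lags, rather than X(t) at lag zero only,
    is the contrast drawn in the abstract above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((x[t] - mx) * (y[t + lag] - my) for t in range(n - lag)) / (n - lag)

# Synthetic example: y is x delayed by two steps, so the dependence between
# the two series is visible only at lag 2, and a zero-lag analysis misses it.
rng = random.Random(3)
x = [rng.gauss(0, 1) for _ in range(500)]
y = [0.0, 0.0] + x[:-2]
print(cross_cov(x, y, 2), cross_cov(x, y, 0))
```

The lag-2 covariance is close to the variance of x (about 1), while the zero-lag covariance is near 0.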
Non-fragile multivariable PID controller design via system augmentation
NASA Astrophysics Data System (ADS)
Liu, Jinrong; Lam, James; Shen, Mouquan; Shu, Zhan
2017-07-01
In this paper, the issue of designing non-fragile H∞ multivariable proportional-integral-derivative (PID) controllers with derivative filters is investigated. In order to obtain the controller gains, the original system is associated with an extended system such that the PID controller design can be formulated as a static output-feedback control problem. By taking the system augmentation approach, conditions with slack matrices for solving the non-fragile H∞ multivariable PID controller gains are established. Based on these results, linear matrix inequality (LMI)-based iterative algorithms are provided to compute the controller gains. Simulations are conducted to verify the effectiveness of the proposed approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loveday, D.L.; Craggs, C.
Box-Jenkins-based multivariate stochastic modeling is carried out using data recorded from a domestic heating system. The system comprises an air-source heat pump sited in the roof space of a house, solar assistance being provided by the conventional tile roof acting as a radiation absorber. Multivariate models are presented which illustrate the time-dependent relationships between three air temperatures: at external ambient, at entry to, and at exit from, the heat pump evaporator. Using a deterministic modeling approach, physical interpretations are placed on the results of the multivariate technique. It is concluded that the multivariate Box-Jenkins approach is a suitable technique for building thermal analysis. Application to multivariate model-based control is discussed, with particular reference to building energy management systems. It is further concluded that stochastic modeling of data drawn from a short monitoring period offers a means of retrofitting an advanced model-based control system in existing buildings, which could be used to optimize energy savings. An approach to system simulation is suggested.
ERIC Educational Resources Information Center
McFarland, Dennis J.
2014-01-01
Purpose: Factor analysis is a useful technique to aid in organizing multivariate data characterizing speech, language, and auditory abilities. However, knowledge of the limitations of factor analysis is essential for proper interpretation of results. The present study used simulated test scores to illustrate some characteristics of factor…
Multivariate normative comparisons using an aggregated database
Murre, Jaap M. J.; Huizenga, Hilde M.
2017-01-01
In multivariate normative comparisons, a patient’s profile of test scores is compared to those in a normative sample. Recently, it has been shown that these multivariate normative comparisons enhance the sensitivity of neuropsychological assessment. However, multivariate normative comparisons require multivariate normative data, which are often unavailable. In this paper, we show how a multivariate normative database can be constructed by combining healthy control group data from published neuropsychological studies. We show that three issues should be addressed to construct a multivariate normative database. First, the database may have a multilevel structure, with participants nested within studies. Second, not all tests are administered in every study, so many data may be missing. Third, a patient should be compared to controls of similar age, gender and educational background rather than to the entire normative sample. To address these issues, we propose a multilevel approach for multivariate normative comparisons that accounts for missing data and includes covariates for age, gender and educational background. Simulations show that this approach controls the number of false positives and has high sensitivity to detect genuine deviations from the norm. An empirical example is provided. Implications for other domains than neuropsychology are also discussed. To facilitate broader adoption of these methods, we provide code implementing the entire analysis in the open source software package R. PMID:28267796
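The statistic at the heart of a multivariate normative comparison is the squared Mahalanobis distance of a patient's profile from the normative mean. A two-test sketch with made-up numbers (not the authors' multilevel R implementation) shows why the multivariate view can flag a profile that univariate cutoffs would miss:

```python
def mahalanobis2_2d(x, mean, cov):
    """Squared Mahalanobis distance for a two-test profile: the core statistic
    of a multivariate normative comparison (large values flag deviation)."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))

# Two positively correlated test scores (hypothetical normative parameters).
mean, cov = [0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]]
print(mahalanobis2_2d([1.5, -1.5], mean, cov))  # discordant profile: ~22.5
print(mahalanobis2_2d([1.5, 1.5], mean, cov))   # concordant profile: ~2.5
```

A discordant profile (high on one test, low on a positively correlated other) yields a much larger distance than a concordant one, even though each score is only 1.5 SD from the mean — the gain in sensitivity the abstract describes.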
ERIC Educational Resources Information Center
Chen, Dezhi; Hu, Bi Ying; Fan, Xitao; Li, Kejian
2014-01-01
Adapted from the Early Childhood Environment Rating Scale-Revised, the Chinese Early Childhood Program Rating Scale (CECPRS) is a culturally comparable measure for assessing the quality of early childhood education and care programs in the Chinese cultural/social contexts. In this study, 176 kindergarten classrooms were rated with CECPRS on eight…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, Amanda M.; Nelson, Gilbert L.; Casella, Amanda J.
Microfluidic devices are a growing field with significant potential for application to small-scale processing of solutions. Much like large-scale processing, fast, reliable, and cost-effective means of monitoring the streams during processing are needed. Here we apply a novel micro-Raman probe to the on-line monitoring of streams within a microfluidic device. For either macro- or micro-scale process monitoring via spectroscopic response, there is the danger of interfering or confounded bands obfuscating results. By utilizing chemometric analysis, a form of multivariate analysis, species can be accurately quantified in solution despite the presence of overlapping or confounded spectroscopic bands. This is demonstrated on solutions of HNO3 and NaNO3 within micro-flow and microfluidic devices.
Multivariable speed synchronisation for a parallel hybrid electric vehicle drivetrain
NASA Astrophysics Data System (ADS)
Alt, B.; Antritter, F.; Svaricek, F.; Schultalbers, M.
2013-03-01
In this article, a new drivetrain configuration of a parallel hybrid electric vehicle is considered and a novel model-based control design strategy is given. In particular, the control design covers the speed synchronisation task during a restart of the internal combustion engine. The proposed multivariable synchronisation strategy is based on feedforward and decoupled feedback controllers. The performance and the robustness properties of the closed-loop system are illustrated by nonlinear simulation results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleese van Dam, Kerstin; Lansing, Carina S.; Elsethagen, Todd O.
2014-01-28
Modern workflow systems enable scientists to run ensemble simulations at unprecedented scales and levels of complexity, allowing them to study system sizes previously impossible to achieve, due to the inherent resource requirements needed for the modeling work. However, as a result of these new capabilities, the science teams suddenly also face unprecedented data volumes that they are unable to analyze with their existing tools and methodologies in a timely fashion. In this paper we will describe the ongoing development work to create an integrated data-intensive scientific workflow and analysis environment that offers researchers the ability to easily create and execute complex simulation studies and provides them with different scalable methods to analyze the resulting data volumes. The integration of simulation and analysis environments is not only a question of ease of use; it also supports fundamental functions in the correlated analysis of simulation input, execution details and derived results for multi-variant, complex studies. To this end the team extended and integrated the existing capabilities of the Velo data management and analysis infrastructure, the MeDICi data-intensive workflow system and RHIPE (the R for Hadoop version of the well-known statistics package), as well as developing a new visual analytics interface for result exploitation by multi-domain users. The capabilities of the new environment are demonstrated on a use case that focuses on the Pacific Northwest National Laboratory (PNNL) building energy team, showing how they were able to take their previously local-scale simulations to a nationwide level by utilizing data-intensive computing techniques not only for their modeling work, but also for the subsequent analysis of their modeling results.
As part of the PNNL research initiative PRIMA (Platform for Regional Integrated Modeling and Analysis) the team performed an initial 3-year study of building energy demands for the US Eastern Interconnect domain, which they are now planning to extend to predict the demand for the complete century. The initial study raised their data demands from a few GB to 400 GB for the 3-year study, with tens of TB expected for the full century.
Application of the new Cross Recurrence Plots to multivariate data
NASA Astrophysics Data System (ADS)
Thiel, M.; Romano, C.; Kurths, J.
2003-04-01
We extend and then apply the method of the new Cross Recurrence Plots (XRPs) to multivariate data. After introducing the new method, we carry out an analysis of spatiotemporal ecological data. We compute not only the Rényi entropies and cross entropies via XRPs, which allow conclusions to be drawn about the coupling of the systems, but also find a prediction horizon for intermediate time scales.
Irvine, Karen-Amanda; Ferguson, Adam R.; Mitchell, Kathleen D.; Beattie, Stephanie B.; Lin, Amity; Stuck, Ellen D.; Huie, J. Russell; Nielson, Jessica L.; Talbott, Jason F.; Inoue, Tomoo; Beattie, Michael S.; Bresnahan, Jacqueline C.
2014-01-01
The IBB scale is a recently developed forelimb scale for the assessment of fine control of the forelimb and digits after cervical spinal cord injury [SCI; (1)]. The present paper describes the assessment of inter-rater reliability and face, concurrent and construct validity of this scale following SCI. It demonstrates that the IBB is a reliable and valid scale that is sensitive to severity of SCI and to recovery over time. In addition, the IBB correlates with other outcome measures and is highly predictive of biological measures of tissue pathology. Multivariate analysis using principal component analysis (PCA) demonstrates that the IBB is highly predictive of the syndromic outcome after SCI (2), and is among the best predictors of bio-behavioral function, based on strong construct validity. Altogether, the data suggest that the IBB, especially in concert with other measures, is a reliable and valid tool for assessing neurological deficits in fine motor control of the distal forelimb, and represents a powerful addition to multivariate outcome batteries aimed at documenting recovery of function after cervical SCI in rats. PMID:25071704
de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas
2018-01-23
High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
Montalto, Alessandro; Faes, Luca; Marinazzo, Daniele
2014-01-01
A challenge for physiologists and neuroscientists is to map information transfer between components of the systems that they study at different scales, in order to derive important knowledge on structure and function from the analysis of the recorded dynamics. The components of physiological networks often interact in a nonlinear way and through mechanisms which are in general not completely known. It is then safer that the method of choice for analyzing these interactions does not rely on any model or assumption on the nature of the data and their interactions. Transfer entropy has emerged as a powerful tool to quantify directed dynamical interactions. In this paper we compare different approaches to evaluate transfer entropy, some of them already proposed, some novel, and present their implementation in a freeware MATLAB toolbox. Applications to simulated and real data are presented. PMID:25314003
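As a rough illustration of the quantity involved (not the toolbox's MATLAB code), a plug-in transfer entropy estimator for discrete series with history length 1 can be sketched as follows; all series here are synthetic:

```python
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Plug-in transfer entropy src -> dst (in bits) with history length 1:
    TE = sum over (d', d, s) of p(d', d, s) * log2[ p(d'|d, s) / p(d'|d) ]."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    pairs_ds = Counter(zip(dst[:-1], src[:-1]))
    pairs_dd = Counter(zip(dst[1:], dst[:-1]))
    singles_d = Counter(dst[:-1])
    n = len(dst) - 1
    te = 0.0
    for (d1, d0, s0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_ds[(d0, s0)]      # p(d'|d, s)
        p_cond_hist = pairs_dd[(d1, d0)] / singles_d[d0]  # p(d'|d)
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te

# dst copies src with a one-step delay, so information flows src -> dst;
# the estimator should find more transfer in that direction than the reverse.
src = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0] * 20
dst = [0] + src[:-1]
print(transfer_entropy(src, dst) > transfer_entropy(dst, src))  # True
```

Real implementations add longer histories, bias corrections, and continuous-data estimators, which is precisely the design space the paper compares.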
Electron and photon energy calibration with the ATLAS detector using LHC Run 1 data
Aad, G.; Abbott, B.; Abdallah, J.; ...
2014-10-01
This paper presents the electron and photon energy calibration achieved with the ATLAS detector using about 25 fb⁻¹ of LHC proton–proton collision data taken at centre-of-mass energies of √s=7 and 8 TeV. The reconstruction of electron and photon energies is optimised using multivariate algorithms. The response of the calorimeter layers is equalised in data and simulation, and the longitudinal profile of the electromagnetic showers is exploited to estimate the passive material in front of the calorimeter and reoptimise the detector simulation. After all corrections, the Z resonance is used to set the absolute energy scale. For electrons from Z decays, the achieved calibration is typically accurate to 0.05 % in most of the detector acceptance, rising to 0.2 % in regions with large amounts of passive material. The remaining inaccuracy is less than 0.2–1 % for electrons with a transverse energy of 10 GeV, and is on average 0.3 % for photons. The detector resolution is determined with a relative inaccuracy of less than 10 % for electrons and photons up to 60 GeV transverse energy, rising to 40 % for transverse energies above 500 GeV.
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
An Analysis of Methods Used to Examine Gender Differences in Computer-Related Behavior.
ERIC Educational Resources Information Center
Kay, Robin
1992-01-01
Review of research investigating gender differences in computer-related behavior examines statistical and methodological flaws. Issues addressed include sample selection, sample size, scale development, scale quality, the use of univariate and multivariate analyses, regressional analysis, construct definition, construct testing, and the…
Kahlert, Daniela; Schlicht, Wolfgang
2015-01-01
Traffic safety and pedestrian friendliness are considered to be important conditions for older people’s motivation to walk through their environment. This study uses an experimental study design with computer-simulated living environments to investigate the effect of micro-scale environmental factors (parking spaces and green verges with trees) on older people’s perceptions of both motivational antecedents (dependent variables). Seventy-four consecutively recruited older people were randomly assigned to watch one of two scenarios (independent variable) on a computer screen. The scenarios simulated a stroll on a sidewalk, as is ‘typical’ of a German city. In version ‘A,’ the subjects take a fictive walk on a sidewalk where a number of cars are parked partially on it. In version ‘B’, cars are in parking spaces separated from the sidewalk by grass verges and trees. Subjects assessed their impressions of both dependent variables. A multivariate analysis of covariance showed that subjects’ ratings on perceived traffic safety and pedestrian friendliness were higher for version ‘B’ compared to version ‘A’. Cohen’s d indicates medium (d = 0.73) and large (d = 1.23) effect sizes for traffic safety and pedestrian friendliness, respectively. The study suggests that elements of the built environment might affect motivational antecedents of older people’s walking behavior. PMID:26308026
Effects of land cover change on the tropical circulation in a GCM
NASA Astrophysics Data System (ADS)
Jonko, Alexandra Karolina; Hense, Andreas; Feddema, Johannes Jan
2010-09-01
Multivariate statistics are used to investigate sensitivity of the tropical atmospheric circulation to scenario-based global land cover change (LCC), with the largest changes occurring in the tropics. Three simulations performed with the fully coupled Parallel Climate Model (PCM) are compared: (1) a present day control run; (2) a simulation with present day land cover and Intergovernmental Panel on Climate Change (IPCC) Special Report on Emission Scenarios (SRES) A2 greenhouse gas (GHG) projections; and (3) a simulation with SRES A2 land cover and GHG projections. Dimensionality of PCM data is reduced by projection onto a priori specified eigenvectors, consisting of Rossby and Kelvin waves produced by a linearized, reduced gravity model of the tropical circulation. A Hotelling T² test is performed on projection amplitudes. Effects of LCC evaluated by this method are limited to diabatic heating. A statistically significant and recurrent signal is detected for 33% of all tests performed for various combinations of parameters. Taking into account uncertainties and limitations of the present methodology, this signal can be interpreted as a Rossby wave response to prescribed LCC. The Rossby waves are shallow, large-scale motions, trapped at the equator and most pronounced in boreal summer. Differences in mass and flow fields indicate a shift of the tropical Walker circulation patterns with an anomalous subsidence over tropical South America.
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.
A Brief Critique of the TATES Procedure.
Aliev, Fazil; Salvatore, Jessica E; Agrawal, Arpana; Almasy, Laura; Chan, Grace; Edenberg, Howard J; Hesselbrock, Victor; Kuperman, Samuel; Meyers, Jacquelyn; Dick, Danielle M
2018-03-01
The Trait-based test that uses the Extended Simes procedure (TATES) was developed as a method for conducting multivariate GWAS for correlated phenotypes whose underlying genetic architecture is complex. In this paper, we provide a brief methodological critique of the TATES method using simulated examples and a mathematical proof. Our simulated examples using correlated phenotypes show that the Type I error rate is higher than expected, and that more TATES p values fall outside of the confidence interval relative to expectation. Thus the method may result in systematic inflation when used with correlated phenotypes. In a mathematical proof we further demonstrate that the distribution of TATES p values deviates from expectation in a manner indicative of inflation. Our findings indicate the need for caution when using TATES for multivariate GWAS of correlated phenotypes.
The basis of orientation decoding in human primary visual cortex: fine- or coarse-scale biases?
Maloney, Ryan T
2015-01-01
Orientation signals in human primary visual cortex (V1) can be reliably decoded from the multivariate pattern of activity as measured with functional magnetic resonance imaging (fMRI). The precise underlying source of these decoded signals (whether by orientation biases at a fine or coarse scale in cortex) remains a matter of some controversy, however. Freeman and colleagues (J Neurosci 33: 19695-19703, 2013) recently showed that the accuracy of decoding of spiral patterns in V1 can be predicted by a voxel's preferred spatial position (the population receptive field) and its coarse orientation preference, suggesting that coarse-scale biases are sufficient for orientation decoding. Whether they are also necessary for decoding remains an open question, and one with implications for the broader interpretation of multivariate decoding results in fMRI studies. Copyright © 2015 the American Physiological Society.
Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai
2017-10-01
Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known of the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74 % of univariate distributions and 68 % of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17 % in a t-test and 30 % in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R and a newly developed Web application.
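The univariate skewness and kurtosis referred to above are the third and fourth standardized moments; a stdlib-only sketch (the paper's own tutorial covers SAS, SPSS, R, and a web application; this version is merely illustrative) might look like:

```python
def skewness(xs):
    """Sample skewness: third standardized moment (0 for symmetric data)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

def kurtosis(xs):
    """Excess kurtosis: fourth standardized moment minus 3 (0 for a normal)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s4 = sum((x - m) ** 4 for x in xs) / n
    return s4 / s2 ** 2 - 3.0

# A symmetric sample has zero skewness; a flat one has negative excess kurtosis.
sym = [-2, -1, 0, 1, 2]
print(skewness(sym))  # 0.0
print(kurtosis(sym))  # -1.3 (flatter than a normal)
```

The multivariate (Mardia) versions aggregate these moments over the inverse covariance matrix; the same logic applies.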
NASA Astrophysics Data System (ADS)
Guzmán, Gema; Laguna, Ana; Cañasveras, Juan Carlos; Boulal, Hakim; Barrón, Vidal; Gómez-Macpherson, Helena; Giráldez, Juan Vicente; Gómez, José Alfonso
2015-05-01
Although soil erosion is one of the main threats to agriculture sustainability in many areas of the world, its processes are difficult to measure and still need a better characterization. The use of iron oxides as sediment tracers, combined with erosion and mixing models opens up a pathway for improving the knowledge of the erosion and redistribution of soil, determining sediment sources and sinks. In this study, magnetite and a multivariate mixing model were used in rainfall simulations at the micro-plot scale to determine the source of the sediment at different stages of a furrow-ridge system both with (+T) and without (-T) wheel tracks. At a plot scale, magnetite, hematite and goethite combined with two soil erosion models based on the kinematic wave approach were used in a sprinkler irrigation test to study trends in sediment transport and tracer dynamics along furrow lengths under a wide range of scenarios. In the absence of any stubble cover, sediment contribution from the ridges was larger than the furrow bed one, almost 90%, while an opposite trend was observed with stubble, with a smaller contribution from the ridge (32%) than that of the bed, at the micro-plot trials. Furthermore, at a plot scale, the tracer concentration analysis showed an exponentially decreasing trend with the downstream distance both for sediment detachment along furrows and soil source contribution from tagged segments. The parameters of the distributed model KINEROS2 have been estimated using the PEST Model to obtain a more accurate evaluation. Afterwards, this model was used to simulate a broad range of common scenarios of topography and rainfall from commercial farms in southern Spain. Higher slopes had a significant influence on sediment yields while long furrow distances allowed a more efficient water use. 
For the control of runoff, and therefore of soil loss, a balance between irrigation design (intensity, duration, water pattern) and the water requirements of the crops should be defined in order to establish a sustainable management strategy.
Wang, Dandan; Zong, Qun; Tian, Bailing; Shao, Shikai; Zhang, Xiuyun; Zhao, Xinyi
2018-02-01
The distributed finite-time formation tracking control problem for multiple unmanned helicopters is investigated in this paper. The control objective is to maintain the positions of follower helicopters in formation in the presence of external disturbances. The helicopter model is divided into a second order outer-loop subsystem and a second order inner-loop subsystem based on multiple-time scale features. Using radial basis function neural network (RBFNN) technique, we first propose a novel finite-time multivariable neural network disturbance observer (FMNNDO) to estimate the external disturbance and model uncertainty, where the neural network (NN) approximation errors can be dynamically compensated by adaptive law. Next, based on FMNNDO, a distributed finite-time formation tracking controller and a finite-time attitude tracking controller are designed using the nonsingular fast terminal sliding mode (NFTSM) method. In order to estimate the second derivative of the virtual desired attitude signal, a novel finite-time sliding mode integral filter is designed. Finally, Lyapunov analysis and multiple-time scale principle ensure the realization of control goal in finite-time. The effectiveness of the proposed FMNNDO and controllers are then verified by numerical simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Vrac, Mathieu
2018-06-01
Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs into impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, and stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied on temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period.
The results indicate that the 1d-BC basically reproduces the climate model's multivariate properties, 2d-R2D2 is satisfactory only in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties, and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The prospects for improvement are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and adjusting temporal properties while including more physics in the adjustment procedures.
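The marginal-adjustment building block that R2D2 extends with rank resampling is univariate quantile mapping; a minimal sketch with synthetic numbers (not the authors' implementation):

```python
def quantile_map(model, obs):
    """Univariate quantile mapping: replace each model value by the observed
    value of the same rank (empirical CDF matching)."""
    order = sorted(range(len(model)), key=lambda i: model[i])
    obs_sorted = sorted(obs)
    corrected = [0.0] * len(model)
    for rank, i in enumerate(order):
        corrected[i] = obs_sorted[rank]
    return corrected

model = [10.0, 14.0, 12.0, 18.0, 16.0]   # biased simulated series
obs = [0.0, 2.0, 4.0, 6.0, 8.0]          # reference distribution
print(quantile_map(model, obs))  # [0.0, 4.0, 2.0, 8.0, 6.0]
```

The corrected series inherits the observed marginal distribution while preserving the rank order, and hence the temporal sequencing, of the model series; R2D2 generalizes this rank-based idea to inter-site and inter-variable dependence.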
Probabilistic, meso-scale flood loss modelling
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2016-04-01
Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention during the last years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models, and on the other hand against official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
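The probabilistic flavor of bagging can be conveyed with a bootstrap-of-means toy (hypothetical loss figures; BT-FLEMO itself bags decision trees over multiple predictors):

```python
import random

def bootstrap_loss_distribution(losses, n_boot=1000, seed=1):
    """Bootstrap aggregation in miniature: each resample yields one mean-loss
    estimate, so the ensemble delivers a predictive distribution rather than
    a single deterministic value."""
    rng = random.Random(seed)
    n = len(losses)
    return [sum(rng.choice(losses) for _ in range(n)) / n for _ in range(n_boot)]

observed = [120, 80, 150, 60, 200, 90, 110]   # hypothetical municipal losses
dist = bootstrap_loss_distribution(observed)
dist.sort()
print(dist[len(dist) // 2])   # central loss estimate
print(dist[25], dist[-26])    # endpoints of an approximate 95% interval
```

The spread of the resulting distribution is exactly the "quantitative information about the uncertainty of the prediction" that deterministic stage-damage functions cannot provide.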
Neurodevelopmental Status and Adaptive Behaviors in Preschool Children with Chronic Kidney Disease
ERIC Educational Resources Information Center
Duquette, Peter J.; Hooper, Stephen R.; Icard, Phil F.; Hower, Sarah J.; Mamak, Eva G.; Wetherington, Crista E.; Gipson, Debbie S.
2009-01-01
This study examines the early neurodevelopmental function of infants and preschool children who have chronic kidney disease (CKD). Fifteen patients with CKD are compared to a healthy control group using the "Mullen Scales of Early Learning" (MSEL) and the "Vineland Adaptive Behavior Scale" (VABS). Multivariate analysis reveals…
Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains
Krumin, Michael; Shoham, Shy
2010-01-01
Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis which have proven to be very powerful in the modeling and analysis of continuous neural signals like EEG signals. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting ‘‘hidden” Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method. PMID:20454705
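The Granger-causality logic the abstract applies to spike trains is easiest to see in its standard linear, continuous-signal form: fit the target series with and without the lagged candidate driver and compare residual variances. A minimal sketch using ordinary least-squares AR fitting on simulated data (this is the classical linear version, not the authors' point-process framework):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4000

# Simulate a bivariate system in which x drives y but not vice versa.
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def resid_var(target, predictors):
    """Residual variance of a least-squares autoregression."""
    A = np.column_stack(predictors)
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    return (target - A @ beta).var()

# Granger causality x -> y: does adding lagged x improve prediction of y?
gc_x_to_y = np.log(resid_var(y[1:], [y[:-1]]) /
                   resid_var(y[1:], [y[:-1], x[:-1]]))
# Reverse direction y -> x (should be near zero by construction).
gc_y_to_x = np.log(resid_var(x[1:], [x[:-1]]) /
                   resid_var(x[1:], [x[:-1], y[:-1]]))
print(gc_x_to_y, gc_y_to_x)
```

A positive log-variance ratio indicates directed predictive influence; here the x-to-y measure is clearly positive while the reverse is near zero, recovering the designed information flow.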
A Study of Effects of MultiCollinearity in the Multivariable Analysis
Yoo, Wonsuk; Mayberry, Robert; Bae, Sejong; Singh, Karan; (Peter) He, Qinghua; Lillard, James W.
2015-01-01
A multivariable analysis is the most popular approach when investigating associations between risk factors and disease. However, the efficiency of multivariable analysis depends strongly on the correlation structure among predictive variables. When the covariates in the model are not independent of one another, collinearity/multicollinearity problems arise in the analysis, leading to biased estimation. This work performs a simulation study with various scenarios of different collinearity structures to investigate the effects of collinearity under various correlation structures amongst predictive and explanatory variables, and compares the results with existing guidelines for deciding when collinearity is harmful. Three correlation scenarios among predictor variables are considered: (1) a bivariate collinear structure, the simplest collinearity case; (2) a multivariate collinear structure, in which an explanatory variable is correlated with two other covariates; (3) a more realistic scenario, in which an independent variable can be expressed through various functions involving the other variables. PMID:25664257
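For the bivariate collinear structure of scenario (1), the severity of collinearity is commonly summarized by the variance inflation factor (VIF). A sketch of that diagnostic on simulated data (the VIF > 5 or > 10 rules of thumb are conventions, and the 0.95 correlation is illustrative, not a value from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Bivariate collinear structure: x2 is x1 plus noise (corr ~ 0.95).
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + np.sqrt(1.0 - 0.95**2) * rng.normal(size=n)
x3 = rng.normal(size=n)  # an independent covariate for contrast
X = np.column_stack([x1, x2, x3])

def vif(X):
    """Variance inflation factor per column: 1 / (1 - R^2_j), where R^2_j
    comes from regressing column j on all the other columns."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1.0 - (y - A @ beta).var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

v = vif(X)
print(v)  # x1 and x2 strongly inflated; x3 near 1
```

With a 0.95 correlation the inflated VIFs land near 10, the point at which most guidelines flag collinearity as harmful.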
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
Zhao, Ruiying; Biswas, Asim; Zhou, Yin; Zhou, Yue; Shi, Zhou; Li, Hongyi
2018-06-23
Environmental factors have shown localized and scale-dependent controls over soil organic matter (SOM) distribution in the landscape. Previous studies have explored the relationships between SOM and individual controlling factors; however, few studies have addressed the combined control exerted by multiple environmental factors. In this study, we compared the localized and scale-dependent univariate and multivariate controls of SOM along two long transects (northeast, NE transect and north, N transect) in China. Bivariate wavelet coherence (BWC) between SOM and individual factors and multiple wavelet coherence (MWC) between SOM and factor combinations were calculated. Average wavelet coherence (AWC) and percent area of significant coherence (PASC) were used to assess the relative dominance of individual factors and factor combinations in explaining SOM variations at different scales and locations. The BWC analysis showed that mean annual temperature (MAT), with the largest AWC (0.39) and PASC (16.23%), was the dominant factor in explaining SOM variations along the NE transect. The topographic wetness index (TWI) was the dominant factor (AWC = 0.39 and PASC = 20.80%) along the N transect. MWC identified the combination of slope, net primary production (NPP) and mean annual precipitation (MAP) as the most important combination in explaining SOM variations along the NE transect, with a significant increase in AWC and PASC at different scales and locations (e.g. AWC = 0.91 and PASC = 58.03% at all scales). The combination of TWI, NPP and the normalized difference vegetation index (NDVI) was the most influential along the N transect (AWC = 0.83 and PASC = 32.68% at all scales). The results indicated that the combined controls of environmental factors on SOM variations at different scales and locations in a large area can be identified by MWC. This is promising for a better understanding of the multivariate controls on SOM variations at larger spatial scales and may improve the capability of digital soil mapping. Copyright © 2018 Elsevier B.V. All rights reserved.
Power and sample size for multivariate logistic modeling of unmatched case-control studies.
Gail, Mitchell H; Haneuse, Sebastien
2017-01-01
Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.
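A minimal Monte Carlo sketch of the simulation route the abstract mentions: repeatedly generate unmatched case-control data, fit an unconditional logistic model, and count Wald-test rejections. All design values here (sample sizes, effect size, the normal exposure model) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_logistic(X, y, iters=25):
    """Newton-Raphson fit of unconditional logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])                 # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta, np.linalg.inv(H)

def power_sim(n_cases, n_controls, delta, reps=300):
    """Fraction of simulated unmatched case-control studies in which the
    Wald test for the exposure log-odds-ratio rejects at the 5% level.
    With a unit-variance normal exposure shifted by `delta` in cases,
    the retrospective logistic slope equals `delta`."""
    rejections = 0
    for _ in range(reps):
        x = np.concatenate([rng.normal(delta, 1.0, n_cases),
                            rng.normal(0.0, 1.0, n_controls)])
        y = np.concatenate([np.ones(n_cases), np.zeros(n_controls)])
        X = np.column_stack([np.ones(x.size), x])
        b, cov = fit_logistic(X, y)
        rejections += abs(b[1] / np.sqrt(cov[1, 1])) > 1.96
    return rejections / reps

print(power_sim(200, 200, 0.5))  # power for a log-odds-ratio of 0.5
```

Adding further covariate columns to `X` extends the same recipe to power for an exposure effect adjusted for confounders.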
Simulation techniques for estimating error in the classification of normal patterns
NASA Technical Reports Server (NTRS)
Whitsitt, S. J.; Landgrebe, D. A.
1974-01-01
Methods of efficiently generating and classifying samples with specified multivariate normal distributions were discussed. Conservative confidence tables for sample sizes are given for selective sampling. Simulation results are compared with classified training data. Techniques for comparing error and separability measure for two normal patterns are investigated and used to display the relationship between the error and the Chernoff bound.
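The first ingredient the abstract describes, efficiently generating samples with a specified multivariate normal distribution, is conventionally done through a Cholesky factor of the covariance matrix (a generic sketch, not the paper's specific routine):

```python
import numpy as np

rng = np.random.default_rng(2)

def mvn_samples(mean, cov, n, rng):
    """Draw n samples from N(mean, cov): transform i.i.d. standard
    normals z by the Cholesky factor L, so cov(L z) = L L^T = cov."""
    L = np.linalg.cholesky(np.asarray(cov, float))
    z = rng.normal(size=(n, len(mean)))
    return np.asarray(mean, float) + z @ L.T

cov = [[1.0, 0.6], [0.6, 1.0]]
s = mvn_samples([0.0, 0.0], cov, 50000, rng)
print(np.round(np.cov(s.T), 2))  # close to the requested covariance
```

Classifying such samples with the competing decision rules and tallying misclassifications then yields the empirical error rates compared against the Chernoff bound.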
ERIC Educational Resources Information Center
Thissen, David; Wainer, Howard
Simulation studies of the performance of (potentially) robust statistical estimation produce large quantities of numbers in the form of performance indices of the various estimators under various conditions. This report presents a multivariate graphical display used to aid in the digestion of the plentiful results in a current study of Item…
Lie, Octavian V; van Mierlo, Pieter
2017-01-01
The visual interpretation of intracranial EEG (iEEG) is the standard method used in complex epilepsy surgery cases to map the regions of seizure onset targeted for resection. Still, visual iEEG analysis is labor-intensive and biased due to interpreter dependency. Multivariate parametric functional connectivity measures using adaptive autoregressive (AR) modeling of the iEEG signals based on the Kalman filter algorithm have been used successfully to localize the electrographic seizure onsets. Due to their high computational cost, these methods have been applied to a limited number of iEEG time-series (<60). The aim of this study was to test two Kalman filter implementations, a well-known multivariate adaptive AR model (Arnold et al. 1998) and a simplified, computationally efficient derivation of it, for their potential application to connectivity analysis of high-dimensional (up to 192 channels) iEEG data. When used on simulated seizures together with a multivariate connectivity estimator, the partial directed coherence, the two AR models were compared for their ability to reconstitute the designed seizure signal connections from noisy data. Next, focal seizures from iEEG recordings (73-113 channels) in three patients rendered seizure-free after surgery were mapped with the outdegree, a graph-theory index of outward directed connectivity. Simulation results indicated high levels of mapping accuracy for the two models in the presence of low-to-moderate noise cross-correlation. Accordingly, both AR models correctly mapped the real seizure onset to the resection volume. This study supports the possibility of conducting fully data-driven multivariate connectivity estimations on high-dimensional iEEG datasets using the Kalman filter approach.
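The adaptive AR modelling referred to above treats the AR coefficients as the state of a Kalman filter. A scalar toy version (one channel, one coefficient; the process-noise `q` and observation-noise `r` values are hand-picked, and this is neither the multivariate implementation of Arnold et al. nor the authors' simplified derivation):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000

# Ground truth: an AR(1) signal whose coefficient drifts slowly.
a_true = 0.3 + 0.5 * np.sin(np.linspace(0.0, np.pi, n))
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true[t] * x[t - 1] + rng.normal(scale=0.5)

# Kalman filter with the AR coefficient as a random-walk state:
#   state:       a_t = a_{t-1} + w_t,       w_t ~ N(0, q)
#   observation: x_t = a_t * x_{t-1} + v_t, v_t ~ N(0, r)
a_hat, P = 0.0, 1.0
q, r = 1e-4, 0.25            # hand-picked noise variances
est = np.zeros(n)
for t in range(1, n):
    P = P + q                        # predict
    h = x[t - 1]                     # time-varying observation coefficient
    k = P * h / (h * h * P + r)      # Kalman gain
    a_hat = a_hat + k * (x[t] - h * a_hat)
    P = (1.0 - k * h) * P            # update
    est[t] = a_hat
```

In the multivariate setting the state stacks all AR matrix entries, which is why the computational cost grows quickly with channel count and motivates the simplified derivation tested in the study.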
Similar resilience attributes in lakes with different management practices
Baho, Didier L.; Drakare, Stina; Johnson, Richard K.; Allen, Craig R.; Angeler, David G.
2014-01-01
Liming has been used extensively in Scandinavia and elsewhere since the 1970s to counteract the negative effects of acidification. Communities in limed lakes usually return to acidified conditions once liming is discontinued, suggesting that liming is unlikely to shift acidified lakes to a state equivalent to pre-acidification conditions that requires no further management intervention. While this suggests a low resilience of limed lakes, attributes that confer resilience have not been assessed, limiting our understanding of the efficiency of costly management programs. In this study, we assessed community metrics (diversity, richness, evenness, biovolume), multivariate community structure and the relative resilience of phytoplankton in limed, acidified and circum-neutral lakes from 1997 to 2009, using multivariate time series modeling. We identified dominant temporal frequencies in the data, allowing us to track community change at distinct temporal scales. We assessed two attributes of relative resilience (cross-scale and within-scale structure) of the phytoplankton communities, based on the fluctuation frequency patterns identified. We also assessed species with stochastic temporal dynamics. Liming increased phytoplankton diversity and richness; however, multivariate community structure differed in limed relative to acidified and circum-neutral lakes. Cross-scale and within-scale attributes of resilience were similar across all lakes studied, but the contribution of species exhibiting stochastic dynamics was higher in the acidified and limed compared to circum-neutral lakes. From a resilience perspective, our results suggest that limed lakes comprise a particular condition of an acidified lake state. This explains why liming does not move acidified lakes out of a "degraded" basin of attraction. In addition, our study demonstrates the potential of time series modeling to assess the efficiency of restoration and management outcomes through quantification of the attributes contributing to resilience in ecosystems.
Multivariable Robust Control of a Simulated Hybrid Solid Oxide Fuel Cell Gas Turbine Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, Alex; Banta, Larry; Tucker, David
2010-08-01
This work presents a systematic approach to the multivariable robust control of a hybrid fuel cell gas turbine plant. The hybrid configuration under investigation, built by the National Energy Technology Laboratory, comprises a physical simulation of a 300 kW fuel cell coupled to a 120 kW auxiliary power unit single spool gas turbine. The public facility provides for the testing and simulation of different fuel cell models that in turn help identify the key difficulties encountered in the transient operation of such systems. An empirical model of the facility, comprising a simulated fuel cell cathode volume and balance-of-plant components, is derived via frequency response data. Through the modulation of various airflow bypass valves within the hybrid configuration, Bode plots are used to derive key input/output interactions in transfer function format. A multivariate system is then built from individual transfer functions, creating a matrix that serves as the nominal plant in an H∞ robust control algorithm. The controller's main objective is to track and maintain hybrid operational constraints in the fuel cell's cathode airflow, and the turbomachinery states of temperature and speed, under transient disturbances. This algorithm is then tested on a Simulink/MATLAB platform for various perturbations of load and fuel cell heat effluence. As a complementary tool to the aforementioned empirical plant, a nonlinear analytical model faithful to the existing process and instrumentation arrangement is evaluated and designed in the Simulink environment. This parallel task intends to serve as a building block for scalable hybrid configurations that might require a more detailed nonlinear representation for a wide variety of controller schemes and hardware implementations.
Clements, Julie; Sanchez, Jessica N
2015-11-01
This research aims to validate a novel, visual body scoring system created for the Magellanic penguin (Spheniscus magellanicus) suitable for the zoo practitioner. Magellanics go through marked seasonal fluctuations in body mass gains and losses. A standardized multi-variable visual body condition guide may provide a more sensitive and objective assessment tool compared to the previously used single-variable method. Accurate body condition scores paired with seasonal weight variation measurements give veterinary and keeper staff a clearer understanding of an individual's nutritional status. San Francisco Zoo staff previously used a nine-point body condition scale based on the classic bird standard of a single point of keel palpation with the bird restrained in hand, with no standard measure of reference assigned to each scoring category. We created a novel, visual body condition scoring system that does not require restraint and assesses subcutaneous fat and muscle at seven body landmarks using illustrations and descriptive terms. The scores range from one, the least robust or under-conditioned, to five, the most robust or over-conditioned. The ratio of body weight to wing length was used as a "gold standard" index of body condition and compared to both the novel multi-variable and previously used single-variable body condition scores. The novel multi-variable scale showed improved agreement with the weight:wing ratio compared to the single-variable scale, demonstrating greater accuracy and reliability when a trained assessor uses the multi-variable body condition scoring system. Zoo staff may use this tool to manage both the colony and the individual to assist in seasonally appropriate Magellanic penguin nutrition assessment. © 2015 Wiley Periodicals, Inc.
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E
2017-11-10
A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association among the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension of the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data collected with a wearable accelerometer device that measures daily movement and energy expenditure. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimensional, low-sample-size multivariate regression problem that accounts for correlation among the response variables. The system is underdetermined, as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is ill-posed because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of Drosophila melanogaster (the fruit fly).
A symmetric multivariate leakage correction for MEG connectomes
Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.
2015-01-01
Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
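The core operation, replacing a set of leaky ROI time-courses with the closest mutually orthogonal set, reduces in the symmetric (Löwdin) orthonormal case to a single SVD. A sketch of that step only (the published method additionally restores per-ROI magnitudes via an iterative procedure, which is omitted here):

```python
import numpy as np

def symmetric_orthogonalise(X):
    """Löwdin/symmetric orthogonalisation: the set of mutually orthogonal,
    unit-norm time-courses closest (in least squares) to the rows of X.
    X has shape (n_rois, n_samples); the result is U @ Vt from its SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(1)
true_sources = rng.normal(size=(3, 500))
mixing = np.eye(3) + 0.3 * rng.normal(size=(3, 3))  # simulated leakage
leaky = mixing @ true_sources

corrected = symmetric_orthogonalise(leaky)
# Rows of `corrected` are exactly orthonormal, so any remaining
# envelope correlation cannot be a zero-lag leakage artefact.
print(np.round(corrected @ corrected.T, 6))
```

Because all ROIs are treated at once and no ROI is privileged as a regression target, the correction is symmetric across the set, which is the property the title refers to.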
Paixão, Paulo; Gouveia, Luís F; Silva, Nuno; Morais, José A G
2017-03-01
A simulation study is presented, evaluating the performance of the f2, the model-independent multivariate statistical distance, and the f2 bootstrap methods in their ability to conclude similarity between two dissolution profiles. Different dissolution profiles, based on the Noyes-Whitney equation and spanning theoretical f2 values between 100 and 40, were simulated. Variability was introduced into the dissolution model parameters in increasing order, ranging from a situation complying with the European guideline requirements for the use of the f2 metric to several situations where the f2 metric could no longer be used. Results show that the f2 is an acceptable metric when used according to the regulatory requirements, but loses its applicability as variability increases. The multivariate statistical distance produced contradictory results in several of the simulation scenarios, which makes it an unreliable metric for dissolution profile comparisons. The bootstrap f2, although conservative in its conclusions, is a suitable alternative method. Overall, as variability increases, all of the discussed methods reveal problems that can only be solved by increasing the number of dosage form units used in the comparison, which is usually not practical or feasible. Additionally, experimental corrective measures may be undertaken to reduce the overall variability, particularly when it is shown to be due mainly to the dissolution assessment rather than intrinsic to the dosage form. Copyright © 2016. Published by Elsevier B.V.
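The f2 similarity factor under evaluation is a fixed formula over the mean percent-dissolved values at common time points, with f2 ≥ 50 conventionally taken to indicate similar profiles. A sketch of the metric with illustrative profiles (the time points and values are made up, not from the study):

```python
import numpy as np

def f2(ref, test):
    """Similarity factor f2 = 50 * log10(100 / sqrt(1 + MSD)), where MSD
    is the mean squared difference between the two profiles at common
    time points. Identical profiles give f2 = 100; f2 >= 50 is the
    conventional regulatory similarity cut-off."""
    ref = np.asarray(ref, float)
    test = np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Illustrative percent-dissolved profiles at five common time points.
ref_profile  = [15, 40, 70, 85, 95]
test_profile = [12, 35, 66, 82, 93]
print(round(f2(ref_profile, test_profile), 1))
```

The bootstrap variant the abstract favors resamples dosage-form units and reports a lower confidence bound of this same statistic, which is why it is conservative.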
Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line
Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling
2014-01-01
The design of monitoring and predictive alarm systems is necessary for successful management of overhead power transmission line icing. Given the complexity, nonlinearity, and fitfulness of the line icing process, a model based on a multivariable time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorology parameters on the icing process have been analyzed. Phase-space reconstruction theory and a machine learning method were then applied to establish the prediction model, which fully utilizes the history of multivariable time series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorology factors. Relevant to the fitful character of line icing, simulations were carried out during the same icing process and during different processes to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, the model demonstrates good prediction accuracy across different processes when the prediction horizon is less than two hours, and would be helpful for power grid departments deciding whether to take action in advance to address potential icing disasters. PMID:25136653
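The phase-space reconstruction step can be sketched as time-delay embedding: each observation is expanded into a vector of lagged values, and these lag vectors become the inputs for whatever regressor maps past micrometeorology to future icing load. The embedding dimension and delay below are illustrative, not the paper's settings:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style time-delay embedding: row t of the result is
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

series = np.arange(10)                 # stand-in for an icing-load series
E = delay_embed(series, dim=3, tau=2)
print(E.shape)                         # (6, 3); first row is [0, 2, 4]
```

Stacking the embeddings of several monitored variables (temperature, humidity, wind, load) side by side yields the multivariable feature matrix the model learns from.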
NASA Astrophysics Data System (ADS)
Striepe, Scott Allen
The objectives of this research were to develop a reconstruction capability using the Program to Optimize Simulated Trajectories II (POST2), apply this capability to reconstruct the Huygens Titan probe entry, descent, and landing (EDL) trajectory, evaluate the newly developed POST2 reconstruction module, analyze the reconstructed trajectory, and assess the pre-flight simulation models used for Huygens EDL simulation. An extended Kalman filter (EKF) module was developed and integrated into POST2 to enable trajectory reconstruction (especially when using POST2-based mission-specific simulations). Several validation cases, ranging from a single, constant-parameter estimate to multivariable estimation cases similar to an actual mission flight, were executed to test the POST2 reconstruction module. Trajectory reconstruction of the Huygens entry probe at Titan was accomplished using accelerometer measurements taken during flight to adjust an estimated state (e.g., position, velocity, parachute drag, wind velocity) in a POST2-based simulation developed to support EDL analyses and design prior to entry. Although the main emphasis of the trajectory reconstruction was to evaluate models used in the NASA pre-entry trajectory simulation, the resulting reconstructed trajectory was also assessed to provide an independent evaluation of the ESA result. Major findings include the following: altitude profiles from this analysis agree well with other NASA and ESA results but not with the radar data, although a scale factor of about 0.93 would bring the radar measurements into agreement with these results; entry capsule aerodynamics predictions (axial component only) were well within the 3-sigma bounds established pre-flight for most of the entry when compared to reconstructed values; main parachute drag 9% to 19% above the ESA model was determined from the reconstructed trajectory; and, based on the tilt sensor and accelerometer data, the probe was tilted about 10 degrees during the drogue parachute phase.
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
Graffelman, Jan; van Eeuwijk, Fred
2005-12-01
The scatter plot is a well known and easily applicable graphical tool to explore relationships between two quantitative variables. For the exploration of relations between multiple variables, generalisations of the scatter plot are useful. We present an overview of multivariate scatter plots focussing on the following situations. Firstly, we look at a scatter plot for portraying relations between quantitative variables within one data matrix. Secondly, we discuss a similar plot for the case of qualitative variables. Thirdly, we describe scatter plots for the relationships between two sets of variables where we focus on correlations. Finally, we treat plots of the relationships between multiple response and predictor variables, focussing on the matrix of regression coefficients. We will present both known and new results, where an important original contribution concerns a procedure for the inclusion of scales for the variables in multivariate scatter plots. We provide software for drawing such scales. We illustrate the construction and interpretation of the plots by means of examples on data collected in a genomic research program on taste in tomato.
Estimating correlation between multivariate longitudinal data in the presence of heterogeneity.
Gao, Feng; Philip Miller, J; Xiong, Chengjie; Luo, Jingqin; Beiser, Julia A; Chen, Ling; Gordon, Mae O
2017-08-17
Estimating correlation coefficients among outcomes is one of the most important analytical tasks in epidemiological and clinical research. Availability of multivariate longitudinal data presents a unique opportunity to assess the joint evolution of outcomes over time. The bivariate linear mixed model (BLMM) provides a versatile tool with regard to assessing correlation. However, BLMMs often assume that all individuals are drawn from a single homogenous population where the individual trajectories are distributed smoothly around the population average. Using longitudinal mean deviation (MD) and visual acuity (VA) from the Ocular Hypertension Treatment Study (OHTS), we demonstrated strategies to better understand the correlation between multivariate longitudinal data in the presence of potential heterogeneity. Conditional correlation (i.e., marginal correlation given random effects) was calculated to describe how the association between longitudinal outcomes evolved over time within a specific subpopulation. The impact of heterogeneity on correlation was also assessed by simulated data. There was a significant positive correlation in both random intercepts (ρ = 0.278, 95% CI: 0.121-0.420) and random slopes (ρ = 0.579, 95% CI: 0.349-0.810) between longitudinal MD and VA, and the strength of correlation constantly increased over time. However, conditional correlation and simulation studies revealed that the correlation was induced primarily by participants with rapidly deteriorating MD, who accounted for only a small fraction of the total sample. Conditional correlation given random effects provides a robust estimate to describe the correlation between multivariate longitudinal data in the presence of unobserved heterogeneity (NCT00000125).
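The heterogeneity effect described in this abstract can be illustrated with a small simulation. This is a hypothetical sketch, not the OHTS data: the subgroup fraction, slope values, and noise level are invented to mimic a small "rapid progressor" subgroup inducing a strong marginal correlation between two outcomes that are conditionally independent within each subgroup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical mixture: ~10% "rapid progressors", a stand-in for the small
# subgroup with rapidly deteriorating MD described in the abstract.
fast = rng.random(n) < 0.10

# Random slopes for two outcomes; within each subgroup the slopes are
# independent, but the fast subgroup declines on both outcomes.
slope_md = np.where(fast, -2.0, 0.0) + 0.3 * rng.standard_normal(n)
slope_va = np.where(fast, -1.5, 0.0) + 0.3 * rng.standard_normal(n)

r_pooled = np.corrcoef(slope_md, slope_va)[0, 1]                # marginal
r_stable = np.corrcoef(slope_md[~fast], slope_va[~fast])[0, 1]  # conditional

print(f"pooled r = {r_pooled:.2f}, within stable subgroup r = {r_stable:.2f}")
```

The pooled correlation comes out strong even though the slopes are conditionally independent within each subgroup, which is exactly the pattern a conditional-correlation analysis is designed to expose.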
NASA Astrophysics Data System (ADS)
Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett
2017-05-01
Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Recently, researchers have found that simulation alone is not enough because of the complexity associated with ATM concepts. In other words, full-scale tests must eventually take place to provide compelling performance evidence before adopting full implementation. Testing with full-scale aircraft is a high-cost approach that yields high-confidence results, whereas simulation provides a low-risk, low-cost approach with reduced confidence in the results. One possible approach to increase the confidence in the results and simultaneously reduce the risk and the cost is to use unmanned sub-scale aircraft to test new concepts for ATM. This paper presents the simulation results of using unmanned sub-scale aircraft to implement ATM concepts, compared to full-scale aircraft. The results of simulation show that the performance of the sub-scale aircraft is quite comparable to that of the full-scale aircraft, which validates use of the sub-scale in testing new ATM concepts.
Three-Dimensional Visualization of Ozone Process Data.
1997-06-18
Scattered Multivariate Data. IEEE Computer Graphics & Applications, 11 (May), 47-55. Odman, M.T. and Ingram, C.L. (1996) Multiscale Air Quality Simulation...the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. MAQSIP is a modular, comprehensive air quality modeling system which MCNC...photolyzed back again to nitric oxide. Finally, oxides of nitrogen are terminated through loss or combination into nitric acid and organic nitrates
Knightes, Christopher D.; Golden, Heather E.; Journey, Celeste A.; Davis, Gary M.; Conrads, Paul; Marvin-DiPasquale, Mark; Brigham, Mark E.; Bradley, Paul M.
2014-01-01
Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km2) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km2 and 25 km2) and the encompassing watershed (79 km2). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport.
Determining Credibility of Regional Simulations of Future Climate
NASA Astrophysics Data System (ADS)
Mearns, L. O.
2009-12-01
Climate models have been evaluated or validated ever since they were first developed. Establishing that a climate model can reproduce (some) aspects of the current climate of the earth on various spatial and temporal scales has long been a standard procedure for providing confidence in the model's ability to simulate future climate. However, direct links between the successes and failures of models in reproducing the current climate and the future climates the models simulate have been largely lacking. This is to say that the model evaluation process has been largely divorced from the projections of future climate that the models produce. This is evidenced in the separation in the Intergovernmental Panel on Climate Change (IPCC) WG1 report of the chapter on evaluation of models from the chapter on future climate projections. There has also been the assumption of 'one model, one vote', that is, that each model projection is given equal weight in any multi-model ensemble presentation of the projections of future climate. There have been various attempts at determining measures of credibility that would avoid the 'ultrademocratic' assumption of the IPCC. Simple distinctions between models were made by research such as Giorgi and Mearns (2002), Tebaldi et al. (2005), and Greene et al. (2006). But the metrics used were rather simplistic. More ambitious means of discriminating among the quality of model simulations have been made through the production of complex multivariate metrics, but insufficient work has been produced to verify that the metrics successfully discriminate in meaningful ways. Indeed it has been suggested that we really don't know what a model must successfully model to establish confidence in its regional-scale projections (Gleckler et al., 2008). Perhaps a more process-oriented regional expert judgment approach is needed to understand which errors in climate models really matter for the model's response to future forcing.
Such an approach is being attempted in the North American Regional Climate Change Assessment Program (NARCCAP), whereby multiple global models are used to drive multiple regional models for the current period and the mid-21st century over the continent. Progress in this endeavor will be reported.
Huff, G.F.
2004-01-01
The tendency of solutes in input water to precipitate efficiency-lowering scale deposits on the membranes of reverse osmosis (RO) desalination systems is an important factor in determining the suitability of input water for desalination. Simulated input water evaporation can be used as a technique to quantitatively assess the potential for scale formation in RO desalination systems. The technique was demonstrated by simulating the increase in solute concentrations required to form calcite, gypsum, and amorphous silica scales at 25°C and 40°C from 23 desalination input waters taken from the literature. Simulation results could be used to quantitatively assess the potential of a given input water to form scale or to compare the potential of a number of input waters to form scale during RO desalination. Simulated evaporation of input waters cannot accurately predict the conditions under which scale will form owing to the effects of potentially stable supersaturated solutions, solution velocity, and residence time inside RO systems. However, the simulated scale-forming potential of proposed input waters could be compared with the simulated scale-forming potentials and actual scale-forming properties of input waters having documented operational histories in RO systems. This may provide a technique to estimate the actual performance and suitability of proposed input waters during RO.
Selecting climate simulations for impact studies based on multivariate patterns of climate change.
Mendlik, Thomas; Gobiet, Andreas
In climate change impact research it is crucial to carefully select the meteorological input for impact models. We present a method for model selection that enables the user to shrink the ensemble to a few representative members, conserving the model spread and accounting for model similarity. This is done in three steps: first, principal component analysis is applied to a multitude of meteorological parameters to find common patterns of climate change within the multi-model ensemble; second, model similarities with regard to these multivariate patterns are detected using cluster analysis; and third, models are sampled from each cluster to generate a subset of representative simulations. We present an application based on the ENSEMBLES regional multi-model ensemble with the aim to provide input for a variety of climate impact studies. We find that the two most dominant patterns of climate change relate to temperature and humidity patterns. The ensemble can be reduced from 25 to 5 simulations while still maintaining its essential characteristics. Having such a representative subset of simulations reduces computational costs for climate impact modeling and enhances the quality of the ensemble at the same time, as it prevents double-counting of dependent simulations that would lead to biased statistics. The online version of this article (doi:10.1007/s10584-015-1582-0) contains supplementary material, which is available to authorized users.
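The three steps in this abstract (PCA for common change patterns, clustering of similar models, one representative per cluster) can be sketched with numpy alone. The ensemble below is synthetic, the family structure is invented, and the farthest-point initialization of k-means is an implementation convenience, not part of the published method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "ensemble": 25 models x 6 climate-change signals, drawn from
# three loose model families (stand-ins for, e.g., seasonal dT/dP patterns).
centers = 3.0 * rng.standard_normal((3, 6))
sizes = [10, 8, 7]
X = np.vstack([c + 0.4 * rng.standard_normal((m, 6))
               for c, m in zip(centers, sizes)])

# Step 1: PCA via SVD on the centered ensemble; keep 2 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Step 2: k-means in PC space (deterministic farthest-point initialization).
def kmeans(Z, k, iters=50):
    idx = [0]
    for _ in range(k - 1):
        d = ((Z[:, None] - Z[idx]) ** 2).sum(-1).min(1)
        idx.append(int(np.argmax(d)))
    cent = Z[idx]
    for _ in range(iters):
        labels = ((Z[:, None] - cent) ** 2).sum(-1).argmin(1)
        cent = np.array([Z[labels == j].mean(0) for j in range(k)])
    return labels, cent

labels, cent = kmeans(Z, k=3)

# Step 3: one representative per cluster = member closest to its centroid.
reps = [int(np.argmin(np.where(labels == j,
                               ((Z - cent[j]) ** 2).sum(1), np.inf)))
        for j in range(3)]
print("representative members:", reps)
```

With well-separated families the subset of 3 out of 25 members preserves the spread of the ensemble while dropping near-duplicate simulations, which is the double-counting argument made in the abstract.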
Clinical validation of robot simulation of toothbrushing - comparative plaque removal efficacy
2014-01-01
Background Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate the correlation of the tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Methods Clinical programme: 27 subjects received dental cleaning prior to a 3-day plaque-regrowth interval. Plaque was stained, photographically documented and scored using a planimetrical index. Subjects brushed teeth 33–47 with three techniques (horizontal, rotating, vertical), each for 20 s buccally and 20 s orally in 3 consecutive intervals. The force was calibrated, and the brushing technique was video-supported. Two different brushes were randomly assigned to the subjects. Robot programme: Clinical brushing programmes were transferred to a 6-axis robot. Artificial teeth 33–47 were covered with a plaque-simulating substrate. All brushing techniques were repeated 7 times, and results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. Results The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. Conclusions The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning. PMID:24996973
Clinical validation of robot simulation of toothbrushing--comparative plaque removal efficacy.
Lang, Tomas; Staufer, Sebastian; Jennes, Barbara; Gaengler, Peter
2014-07-04
Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate the correlation of the tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Clinical programme: 27 subjects received dental cleaning prior to a 3-day plaque-regrowth interval. Plaque was stained, photographically documented and scored using a planimetrical index. Subjects brushed teeth 33-47 with three techniques (horizontal, rotating, vertical), each for 20 s buccally and 20 s orally in 3 consecutive intervals. The force was calibrated, and the brushing technique was video-supported. Two different brushes were randomly assigned to the subjects. Robot programme: Clinical brushing programmes were transferred to a 6-axis robot. Artificial teeth 33-47 were covered with a plaque-simulating substrate. All brushing techniques were repeated 7 times, and results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning.
A refined method for multivariate meta-analysis and meta-regression.
Jackson, Daniel; Riley, Richard D
2014-02-20
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
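The univariate refinement this abstract builds on (a data-driven scaling factor applied to the pooled effect's standard error, with t rather than z quantiles) can be sketched as follows. The data are made up, and truncating the scaling factor at 1 is one common variant of the adjustment, not necessarily the authors' exact multivariate proposal:

```python
import numpy as np

# Made-up example: effect estimates and within-study variances, 5 studies.
y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])
v = np.array([0.020, 0.030, 0.015, 0.025, 0.040])
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = 1.0 / v
ybar = (w * y).sum() / w.sum()
Q = (w * (y - ybar) ** 2).sum()
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))

# Random-effects pooled estimate and conventional (Wald) standard error.
ws = 1.0 / (v + tau2)
mu = (ws * y).sum() / ws.sum()
se_wald = (1.0 / ws.sum()) ** 0.5

# Refinement: scale the SE by a data-driven factor and use t(k-1) quantiles.
q = (ws * (y - mu) ** 2).sum() / (k - 1)
q = max(1.0, q)                      # truncated ("modified") variant
se_ref = (q / ws.sum()) ** 0.5
t975 = 2.776                         # 97.5% t quantile, 4 df (hard-coded)

print(f"mu = {mu:.3f}, Wald half-width = {1.96 * se_wald:.3f}, "
      f"refined half-width = {t975 * se_ref:.3f}")
```

With few studies, the wider refined interval reflects the imprecision of the estimated between-study variance, which is the problem the abstract identifies with the conventional method.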
NASA Astrophysics Data System (ADS)
Huang, Shiquan; Yi, Youping; Li, Pengchuan
2011-05-01
In recent years, the multi-scale simulation technique for metal forming has been gaining significant attention for prediction of the whole deformation process and the microstructure evolution of the product. The advances of numerical simulation at the macro-scale level in metal forming are remarkable, and commercial FEM software, such as Deform2D/3D, has found wide application in the field of metal forming. However, the multi-scale simulation method has found little application due to the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation of the forging process. The aviation material 7050 aluminum alloy has been used as an example for modeling of microstructure evolution. The corresponding thermal simulation experiment has been performed on a Gleeble 1500 machine. The tested specimens have been analyzed for modeling of dislocation density, and nucleation and growth of recrystallization (DRX). A source program using the cellular automaton (CA) method has been developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation was considered. The physical fields at the macro-scale level, such as the temperature field and the stress and strain fields, which can be obtained by the commercial software Deform 3D, are coupled with the stored energy of deformation at the micro-scale level through a dislocation model to realize the multi-scale simulation. This method is illustrated by a forging process simulation of an aircraft wheel hub forging. Coupling the Deform 3D results with the CA results, the forging deformation progress and the microstructure evolution at any point of the forging could be simulated. To verify the efficiency of the simulation, experiments on aircraft wheel hub forging have been done in the laboratory, and the comparison of simulation and experimental results is discussed in detail.
NASA Astrophysics Data System (ADS)
Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd
2009-07-01
In hydrate-bearing sediments, the velocity and attenuation of compressional and shear waves depend primarily on the spatial distribution of hydrates in the pore space of the subsurface lithologies. Recent characterizations of gas hydrate accumulations based on seismic velocity and attenuation generally assume homogeneous sedimentary layers and neglect effects from large- and small-scale heterogeneities of hydrate-bearing sediments. We present an algorithm, based on stochastic medium theory, to construct heterogeneous multivariable models that mimic heterogeneities of hydrate-bearing sediments at the level of detail provided by borehole logging data. Using this algorithm, we model some key petrophysical properties of gas hydrates within heterogeneous sediments near the Mallik well site, Northwest Territories, Canada. The modeled density and P and S wave velocities, used in combination with a modified Biot-Gassmann theory, provide a first-order estimate of the in situ volume of gas hydrate near the Mallik 5L-38 borehole. Our results suggest a range of 528 to 768 × 10^6 m^3/km^2 of natural gas trapped within hydrates, nearly an order of magnitude lower than earlier estimates which did not include effects of small-scale heterogeneities. Further, the petrophysical models are combined with a 3-D finite difference modeling algorithm to study seismic attenuation due to scattering and leaky-mode propagation. Simulations of a near-offset vertical seismic profile and cross-borehole numerical surveys demonstrate that attenuation of seismic energy may not be directly related to the intrinsic attenuation of hydrate-bearing sediments but, instead, may be largely attributed to scattering from small-scale heterogeneities and highly attenuating leaky-mode propagation of seismic waves through larger-scale heterogeneities in sediments.
Taylor, Sandra L; Ruhaak, L Renee; Weiss, Robert H; Kelly, Karen; Kim, Kyoungmi
2017-01-01
High-throughput mass spectrometry (MS) is now being used to profile small molecular compounds across multiple biological sample types from the same subjects with the goal of leveraging information across biospecimens. Multivariate statistical methods that combine information from all biospecimens could be more powerful than the usual univariate analyses. However, missing values are common in MS data, and imputation can impact between-biospecimen correlation and multivariate analysis results. We propose two multivariate two-part statistics that accommodate missing values and combine data from all biospecimens to identify differentially regulated compounds. Statistical significance is determined using a multivariate permutation null distribution. Relative to univariate tests, the multivariate procedures detected more significant compounds in three biological datasets. In a simulation study, we showed that multi-biospecimen testing procedures were more powerful than single-biospecimen methods when compounds are differentially regulated in multiple biospecimens, but univariate methods can be more powerful if compounds are differentially regulated in only one biospecimen. We provide R functions to implement and illustrate our method as supplementary information. Contact: sltaylor@ucdavis.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Risk assessment of flood disaster and forewarning model at different spatial-temporal scales
NASA Astrophysics Data System (ADS)
Zhao, Jun; Jin, Juliang; Xu, Jinchao; Guo, Qizhong; Hang, Qingfeng; Chen, Yaqian
2018-05-01
Aiming at reducing losses from flood disasters, a risk assessment and forewarning model for flood disaster is studied. The model is built upon risk indices in the flood disaster system, proceeding from the whole structure and its parts at different spatial-temporal scales. On the one hand, this study establishes a long-term forewarning model for the surface area with three levels: prediction, evaluation, and forewarning. A structure-adaptive back-propagation neural network with peak identification is used to simulate indices in the prediction sub-model. Set pair analysis is employed to calculate the connection degrees of a single index, a comprehensive index, and systematic risk through the multivariate connection number, and a comprehensive assessment is made by assessment matrices in the evaluation sub-model. A comparison judging method is adopted to assign the warning degree of flood disaster from the comprehensive risk assessment index against forewarning standards in the forewarning sub-model, yielding the long-term local conditions for proposing planning schemes. On the other hand, the study sets up a real-time forewarning model for the spot, which introduces a real-time Kalman filter correction technique based on a hydrological model with a forewarning index, yielding the real-time local conditions for presenting an emergency plan. This study takes the Tunxi area, Huangshan City, China, as an example. After establishing and applying the risk assessment and forewarning model for flood disaster at different spatial-temporal scales to actual and simulated data from 1989 to 2008, forewarning results show that flood disaster risk declines on the whole from 2009 to 2013, despite a rise in 2011. At the macroscopic level, project and non-project measures are advanced, while at the microcosmic level, the time, place, and method are listed.
The results suggest that the proposed model is feasible in both theory and application, thus offering a way to assess and forewarn of flood disaster risk.
Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma
NASA Astrophysics Data System (ADS)
Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.
2018-01-01
High fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (k_θ ρ_s ~ O(1.0)) and electron-scale (k_θ ρ_e ~ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and a realistic ion to electron mass ratio ((m_i/m_e)^{1/2} = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.
Taheri, Mohammadreza; Moazeni-Pourasil, Roudabeh Sadat; Sheikh-Olia-Lavasani, Majid; Karami, Ahmad; Ghassempour, Alireza
2016-03-01
Chromatographic method development for preparative targets is a time-consuming and subjective process. This can be particularly problematic because of the use of valuable samples for isolation and the large consumption of solvents at preparative scale. These processes could be improved by using statistical computations to save time, solvent, and experimental effort. Thus, aided by ESI-MS, DryLab software was first applied to gain an overview of the most effective parameters in the separation of synthesized celecoxib and its co-eluted compounds; design-of-experiments software relying on multivariate modeling as a chemometric approach was then used to predict the optimized touching-band overloading conditions via objective functions based on the relationship between selectivity and stationary-phase properties. The loadability of the method was investigated on the analytical and semi-preparative scales, and the performance of this chemometric approach was confirmed by peak shapes as well as the recovery and purity of the products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Systematic Comparison between Classical Optimal Scaling and the Two-Parameter IRT Model
ERIC Educational Resources Information Center
Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J.
2007-01-01
In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…
Concurrent generation of multivariate mixed data with variables of dissimilar types.
Amatya, Anup; Demirtas, Hakan
2016-01-01
Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism that allows under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
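The general idea behind this kind of generator (correlated latent normals transformed margin by margin, a NORTA/Gaussian-copula construction) can be sketched as below. The margins and correlation matrix are invented; the specified matrix applies to the latent normals, so the final correlations are somewhat attenuated; and the paper's generalized Poisson margin is replaced here by an ordinary Poisson for simplicity:

```python
import numpy as np
from math import erf, factorial

rng = np.random.default_rng(2)
n = 20000

# Latent correlation matrix for (count, binary, continuous) components.
R = np.array([[1.0, 0.4, 0.5],
              [0.4, 1.0, 0.3],
              [0.5, 0.3, 1.0]])
Z = rng.standard_normal((n, 3)) @ np.linalg.cholesky(R).T
U = 0.5 * (1.0 + np.vectorize(erf)(Z / np.sqrt(2.0)))   # Phi(Z), no scipy

# Count margin: Poisson(4) via a naive inverse CDF lookup.
lam, kmax = 4.0, 60
pmf = np.array([np.exp(-lam) * lam ** i / factorial(i) for i in range(kmax)])
counts = np.searchsorted(np.cumsum(pmf), U[:, 0])

# Binary margin: Bernoulli(0.3); continuous margin: standard normal as-is.
binary = (U[:, 1] > 0.7).astype(int)
cont = Z[:, 2]

print("mean count:", counts.mean(), "P(binary=1):", binary.mean())
```

Each margin is a monotone transform of its latent normal, so positive latent correlations survive (attenuated) in the generated mixed-type data; matching a target correlation exactly requires the kind of adjustment step the paper's algorithm spells out.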
Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.
Liu, Han; Wang, Lie; Zhao, Tuo
2015-08-01
We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitiveness. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ε), where ε is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network: http://cran.r-project.org/web/packages/camel/.
NASA Astrophysics Data System (ADS)
Samhouri, M.; Al-Ghandoor, A.; Fouad, R. H.
2009-08-01
In this study, two techniques for modeling the electricity consumption of the Jordanian industrial sector are presented: (i) multivariate linear regression and (ii) neuro-fuzzy models. Electricity consumption is modeled as a function of different variables such as number of establishments, number of employees, electricity tariff, prevailing fuel prices, production outputs, capacity utilizations, and structural effects. It was found that industrial production and capacity utilization are the most important variables, with a significant effect on future electrical power demand. The results showed that the multivariate linear regression and neuro-fuzzy models are generally comparable and can be used adequately to simulate industrial electricity consumption. However, a comparison based on the square root of the average squared error suggests that the neuro-fuzzy model performs slightly better for future prediction of electricity consumption than the multivariate linear regression model. Such results are in full agreement with similar work, using different methods, for other countries.
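As a toy illustration of the first technique in this abstract (linear regression of consumption on drivers such as production output and capacity utilization), the sketch below uses entirely synthetic data with made-up coefficients; only the model structure mirrors the abstract:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120  # hypothetical monthly observations

# Synthetic drivers (units are arbitrary stand-ins for the real series).
production = rng.uniform(50.0, 150.0, n)    # production output index
capacity = rng.uniform(0.50, 0.95, n)       # capacity utilization
tariff = rng.uniform(40.0, 60.0, n)         # electricity tariff

# Made-up "true" relationship: consumption driven mainly by the first two.
consumption = (2.0 * production + 300.0 * capacity
               - 0.5 * tariff + rng.normal(0.0, 5.0, n))

# Ordinary least squares fit via numpy's lstsq, with an intercept column.
X = np.column_stack([np.ones(n), production, capacity, tariff])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
rmse = np.sqrt(np.mean((consumption - X @ beta) ** 2))
print("coefficients:", np.round(beta, 2), "RMSE:", round(rmse, 2))
```

The fitted coefficients recover the dominant roles of production and capacity utilization, matching the abstract's finding about which drivers matter; the root of the average squared error is the comparison metric the abstract uses against the neuro-fuzzy model.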
Multivariate longitudinal data analysis with censored and intermittent missing responses.
Lin, Tsung-I; Lachos, Victor H; Wang, Wan-Lun
2018-05-08
The multivariate linear mixed model (MLMM) has emerged as an important analytical tool for longitudinal data with multiple outcomes. However, the analysis of multivariate longitudinal data could be complicated by the presence of censored measurements because of a detection limit of the assay in combination with unavoidable missing values arising when subjects miss some of their scheduled visits intermittently. This paper presents a generalization of the MLMM approach, called the MLMM-CM, for a joint analysis of the multivariate longitudinal data with censored and intermittent missing responses. A computationally feasible expectation maximization-based procedure is developed to carry out maximum likelihood estimation within the MLMM-CM framework. Moreover, the asymptotic standard errors of fixed effects are explicitly obtained via the information-based method. We illustrate our methodology by using simulated data and a case study from an AIDS clinical trial. Experimental results reveal that the proposed method is able to provide more satisfactory performance as compared with the traditional MLMM approach. Copyright © 2018 John Wiley & Sons, Ltd.
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
SMA texture and reorientation: simulations and neutron diffraction studies
NASA Astrophysics Data System (ADS)
Gao, Xiujie; Brown, Donald W.; Brinson, L. Catherine
2005-05-01
With the increased use of shape memory alloys (SMA) in various fields, it is important to understand how material behavior is affected by factors such as texture, stress state, and loading history, especially for complex multiaxial loading states. Using the in-situ neutron diffraction loading facility (SMARTS diffractometer) and the ex situ inverse pole figure measurement facility (HIPPO diffractometer) at the Los Alamos Neutron Science Center (LANSCE), the macroscopic mechanical behavior and texture evolution of Nickel-Titanium (Nitinol) SMAs under sequential compression in alternating directions were studied. The simplified multivariant model developed at Northwestern University was then used to simulate the macroscopic behavior and the microstructural change of Nitinol under this sequential loading. Pole figures were obtained by post-processing the multivariant results for volume fraction evolution and agreed well quantitatively with the experimental results. The experimental results can also be used to test or verify other SMA constitutive models.
Maillot, Pauline; Dommes, Aurélie; Dang, Nguyen-Thong; Vienne, Fabrice
2017-02-01
A virtual-reality training program has been developed to help older pedestrians make safer street-crossing decisions in two-way traffic situations. The aim was to develop a small-scale, affordable, and transportable simulation device whose training effects would transfer to a full-scale device involving actual walking. 20 younger adults and 40 older participants first took part in a pre-test phase to assess their street crossings using both full-scale and small-scale simulation devices. Then, a trained older group (20 participants) completed two 1.5-h training sessions with the small-scale device, whereas an older control group received no training (19 participants). Thereafter, the 39 older trained and untrained participants took part in a 1.5-h post-test phase, again with both devices. Pre-test results showed significant differences between the two devices in the group of older participants only. Unlike younger participants, older participants accepted crossings more often and had more collisions on the small-scale simulation device than on the full-scale one. Post-test results showed that training older participants on the small-scale device produced a significant overall decrease in the percentage of accepted crossings and collisions on both simulation devices. However, specific improvements in how participants took into account the speed of approaching cars and vehicles in the far lane were notable only on the full-scale simulation device. The findings suggest that the small-scale simulation device triggers a greater number of unsafe decisions than a full-scale one that allows actual crossings. Nevertheless, the findings also reveal that such a small-scale simulation device could be a good means of improving the safety of street-crossing decisions and behaviors among older pedestrians, suggesting a transfer of learning between the two simulation devices, from training people on a miniature device to measuring their specific progress on a full-scale one. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Collins, W. D.; Wehner, M. F.; Prabhat, M.; Kurth, T.; Satish, N.; Mitliagkas, I.; Zhang, J.; Racah, E.; Patwary, M.; Sundaram, N.; Dubey, P.
2017-12-01
Anthropogenically-forced climate changes in the number and character of extreme storms have the potential to significantly impact human and natural systems. Current high-performance computing enables multidecadal simulations with global climate models at resolutions of 25km or finer. Such high-resolution simulations are demonstrably superior in simulating extreme storms such as tropical cyclones than the coarser simulations available in the Coupled Model Intercomparison Project (CMIP5) and provide the capability to more credibly project future changes in extreme storm statistics and properties. The identification and tracking of storms in the voluminous model output is very challenging as it is impractical to manually identify storms due to the enormous size of the datasets, and therefore automated procedures are used. Traditionally, these procedures are based on a multi-variate set of physical conditions based on known properties of the class of storms in question. In recent years, we have successfully demonstrated that Deep Learning produces state of the art results for pattern detection in climate data. We have developed supervised and semi-supervised convolutional architectures for detecting and localizing tropical cyclones, extra-tropical cyclones and atmospheric rivers in simulation data. One of the primary challenges in the applicability of Deep Learning to climate data is in the expensive training phase. Typical networks may take days to converge on 10GB-sized datasets, while the climate science community has ready access to O(10 TB)-O(PB) sized datasets. In this work, we present the most scalable implementation of Deep Learning to date. We successfully scale a unified, semi-supervised convolutional architecture on all of the Cori Phase II supercomputer at NERSC. We use IntelCaffe, MKL and MLSL libraries. We have optimized single node MKL libraries to obtain 1-4 TF on single KNL nodes. 
We have developed a novel hybrid parameter update strategy to improve scaling to 9600 KNL nodes (600,000 cores). We obtain 15 PF performance over the course of the training run, setting a new high-water mark for the HPC and Deep Learning communities. This talk will share insights on how to obtain this extreme level of performance, current gaps and challenges, and implications for the climate science community.
Validation of nonlinear gyrokinetic simulations of L- and I-mode plasmas on Alcator C-Mod
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creely, A. J.; Howard, N. T.; Rodriguez-Fernandez, P.
2017-03-02
New validation of global, nonlinear, ion-scale gyrokinetic simulations (GYRO) is carried out for L- and I-mode plasmas on Alcator C-Mod, utilizing heat fluxes, profile stiffness, and temperature fluctuations. Previous work at C-Mod found that ITG/TEM-scale GYRO simulations can match both electron and ion heat fluxes within error bars in I-mode [White PoP 2015], suggesting that multi-scale (cross-scale coupling) effects [Howard PoP 2016] may be less important in I-mode than in L-mode. New results presented here, however, show that global, nonlinear, ion-scale GYRO simulations are able to match the experimental ion heat flux, but underpredict electron heat flux (at most radii), electron temperature fluctuations, and perturbative thermal diffusivity in both L- and I-mode. Linear addition of electron heat flux from electron-scale runs does not resolve this discrepancy. These results indicate that single-scale simulations do not sufficiently describe the I-mode core transport, and that multi-scale (coupled electron- and ion-scale) transport models are needed. Finally, a preliminary investigation with multi-scale TGLF was unable to resolve the discrepancy between ion-scale GYRO and experimental electron heat fluxes and perturbative diffusivity, motivating further work with multi-scale GYRO simulations and a more comprehensive study with multi-scale TGLF.
Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.
2010-01-01
This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off-nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model (GTM) developed by NASA Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure while the adaptive one has a direct, model reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot's workload by eliminating the need for cancelling the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high-fidelity simulations of a flight-validated controller and of its adaptive augmentation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Brian M.; Larson, Vincent E.
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boun...
Knightes, C D; Golden, H E; Journey, C A; Davis, G M; Conrads, P A; Marvin-DiPasquale, M; Brigham, M E; Bradley, P M
2014-04-01
Mercury is a ubiquitous global environmental toxicant responsible for most US fish advisories. Processes governing mercury concentrations in rivers and streams are not well understood, particularly at multiple spatial scales. We investigate how insights gained from reach-scale mercury data and model simulations can be applied at broader watershed scales using a spatially and temporally explicit watershed hydrology and biogeochemical cycling model, VELMA. We simulate fate and transport using reach-scale (0.1 km²) study data and evaluate applications to multiple watershed scales. Reach-scale VELMA parameterization was applied to two nested sub-watersheds (28 km² and 25 km²) and the encompassing watershed (79 km²). Results demonstrate that simulated flow and total mercury concentrations compare reasonably to observations at different scales, but simulated methylmercury concentrations are out-of-phase with observations. These findings suggest that intricacies of methylmercury biogeochemical cycling and transport are under-represented in VELMA and underscore the complexity of simulating mercury fate and transport. Published by Elsevier Ltd.
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance.
To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
Can multivariate models based on MOAKS predict OA knee pain? Data from the Osteoarthritis Initiative
NASA Astrophysics Data System (ADS)
Luna-Gómez, Carlos D.; Zanella-Calzada, Laura A.; Galván-Tejada, Jorge I.; Galván-Tejada, Carlos E.; Celaya-Padilla, José M.
2017-03-01
Osteoarthritis is the most common rheumatic disease in the world, and knee pain is its most disabling symptom. Predicting pain is one of the targets of preventive medicine and can inform new therapies or treatments. Using magnetic resonance imaging and the MOAKS grading scales, a multivariate model based on genetic algorithms is presented. A predictive model can be useful for associating minor structural changes in the joint with future knee pain. Results suggest that multivariate models can be predictive of future chronic knee pain. All models (T0, T1, and T2) were statistically significant; all p values were < 0.05 and all AUC > 0.60.
A power analysis for multivariate tests of temporal trend in species composition.
Irvine, Kathryn M; Dinger, Eric C; Sarr, Daniel
2011-10-01
Long-term monitoring programs emphasize power analysis as a tool to determine the sampling effort necessary to effectively document ecologically significant changes in ecosystems. Programs that monitor entire multispecies assemblages require a method for determining the power of multivariate statistical models to detect trend. We provide a method to simulate presence-absence species assemblage data that are consistent with increasing or decreasing directional change in species composition within multiple sites. This step is the foundation for using Monte Carlo methods to approximate the power of any multivariate method for detecting temporal trends. We focus on comparing the power of the Mantel test, permutational multivariate analysis of variance, and constrained analysis of principal coordinates. We find that the power of the various methods we investigate is sensitive to the number of species in the community, univariate species patterns, and the number of sites sampled over time. For increasing directional change scenarios, constrained analysis of principal coordinates was as or more powerful than permutational multivariate analysis of variance, while the Mantel test was the least powerful. However, in our investigation of decreasing directional change, the Mantel test was typically as or more powerful than the other models.
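The Monte Carlo power procedure this abstract describes can be sketched as follows. This is a minimal illustration only: it uses a Bonferroni-combined per-species occupancy-trend test instead of the Mantel, PERMANOVA, or constrained-ordination statistics the authors compare, and the logit-drift data generator and all parameter values are invented for demonstration.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def simulate_assemblage(n_sites, n_species, n_years, trend):
    """Presence/absence assemblage data with a logit-linear occupancy drift
    applied to half of the species (directional change in composition)."""
    base = rng.uniform(0.2, 0.8, size=n_species)          # baseline occupancy
    drift = np.zeros(n_species)
    drift[: n_species // 2] = trend
    years = np.arange(n_years)
    logit = np.log(base / (1 - base)) + years[:, None] * drift
    p = 1 / (1 + np.exp(-logit))                          # (n_years, n_species)
    return rng.random((n_years, n_sites, n_species)) < p[:, None, :]

def trend_test(data, alpha=0.05):
    """Reject if any species shows a year-occupancy correlation that survives
    a Bonferroni correction (normal approximation to the t statistic)."""
    n_years, _, n_species = data.shape
    years = np.arange(n_years)
    occ = data.mean(axis=1)                               # per-year occupancy
    for s in range(n_species):
        r = np.corrcoef(years, occ[:, s])[0, 1]
        t = r * math.sqrt((n_years - 2) / max(1e-12, 1 - r * r))
        pval = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
        if pval < alpha / n_species:
            return True
    return False

def power(n_sims=100, **kw):
    """Monte Carlo power: rejection rate over simulated assemblages."""
    return float(np.mean([trend_test(simulate_assemblage(**kw))
                          for _ in range(n_sims)]))
```

Swapping `trend_test` for any multivariate statistic (e.g., a permutation test on a dissimilarity matrix) leaves the power loop unchanged, which is the point of the simulation-based framework.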
Voxelwise multivariate analysis of multimodality magnetic resonance imaging
Naylor, Melissa G.; Cardenas, Valerie A.; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2015-01-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remains a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. PMID:23408378
NASA Astrophysics Data System (ADS)
Hudson, Brian D.; George, Ashley R.; Ford, Martyn G.; Livingstone, David J.
1992-04-01
Molecular dynamics simulations have been performed on a number of conformationally flexible pyrethroid insecticides. The results indicate that molecular dynamics is a suitable tool for conformational searching of small molecules given suitable simulation parameters. The structures derived from the simulations are compared with the static conformation used in a previous study. Various physicochemical parameters have been calculated for a set of conformations selected from the simulations using multivariate analysis. The averaged values of the parameters over the selected set (and the factors derived from them) are compared with the single conformation values used in the previous study.
Spatial adaptive sampling in multiscale simulation
NASA Astrophysics Data System (ADS)
Rouet-Leduc, Bertrand; Barros, Kipton; Cieren, Emmanuel; Elango, Venmugil; Junghans, Christoph; Lookman, Turab; Mohd-Yusof, Jamaludin; Pavel, Robert S.; Rivera, Axel Y.; Roehm, Dominic; McPherson, Allen L.; Germann, Timothy C.
2014-07-01
In a common approach to multiscale simulation, an incomplete set of macroscale equations must be supplemented with constitutive data provided by fine-scale simulation. Collecting statistics from these fine-scale simulations is typically the overwhelming computational cost. We reduce this cost by interpolating the results of fine-scale simulation over the spatial domain of the macro-solver. Unlike previous adaptive sampling strategies, we do not interpolate on the potentially very high dimensional space of inputs to the fine-scale simulation. Our approach is local in space and time, avoids the need for a central database, and is designed to parallelize well on large computer clusters. To demonstrate our method, we simulate one-dimensional elastodynamic shock propagation using the Heterogeneous Multiscale Method (HMM); we find that spatial adaptive sampling requires only ≈ 50 × N^0.14 fine-scale simulations to reconstruct the stress field at all N grid points. Related multiscale approaches, such as Equation Free methods, may also benefit from spatial adaptive sampling.
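The core idea, spending fine-scale simulations only where the spatial interpolant is inaccurate, can be sketched in one dimension. This is a toy illustration, not the HMM implementation: the `fine_scale` response function is invented, and here the error check itself calls the fine-scale model, whereas a production scheme would use a cheaper error estimate.

```python
import numpy as np

def fine_scale(x):
    """Stand-in for an expensive fine-scale simulation (hypothetical response
    with a sharp transition, e.g. near a shock front)."""
    return np.tanh(5.0 * (x - 0.5)) + 0.1 * x

def adaptive_sample(f, tol=1e-3, max_rounds=20):
    """Refine a 1-D sample set wherever linear interpolation mis-predicts f,
    so fine-scale calls concentrate where the field varies rapidly."""
    xs = list(np.linspace(0.0, 1.0, 5))
    ys = [f(x) for x in xs]
    for _ in range(max_rounds):
        new = []
        for i in range(len(xs) - 1):
            mid = 0.5 * (xs[i] + xs[i + 1])
            pred = 0.5 * (ys[i] + ys[i + 1])   # linear interpolant at midpoint
            if abs(pred - f(mid)) > tol:       # refine only where it matters
                new.append((mid, f(mid)))
        if not new:
            break
        for x, y in new:
            j = int(np.searchsorted(xs, x))
            xs.insert(j, x)
            ys.insert(j, y)
    return np.array(xs), np.array(ys)
```

The resulting node set is dense near the steep region and coarse in the flat regions, which is the source of the sublinear scaling the abstract reports.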
Sherratt, Emma; Alejandrino, Alvin; Kraemer, Andrew C; Serb, Jeanne M; Adams, Dean C
2016-09-01
Directional evolution is one of the most compelling evolutionary patterns observed in macroevolution. Yet, despite its importance, detecting such trends in multivariate data remains a challenge. In this study, we evaluate multivariate evolution of shell shape in 93 bivalved scallop species, combining geometric morphometrics and phylogenetic comparative methods. Phylomorphospace visualization described the history of morphological diversification in the group; revealing that taxa with a recessing life habit were the most distinctive in shell shape, and appeared to display a directional trend. To evaluate this hypothesis empirically, we extended existing methods by characterizing the mean directional evolution in phylomorphospace for recessing scallops. We then compared this pattern to what was expected under several alternative evolutionary scenarios using phylogenetic simulations. The observed pattern did not fall within the distribution obtained under multivariate Brownian motion, enabling us to reject this evolutionary scenario. By contrast, the observed pattern was more similar to, and fell within, the distribution obtained from simulations using Brownian motion combined with a directional trend. Thus, the observed data are consistent with a pattern of directional evolution for this lineage of recessing scallops. We discuss this putative directional evolutionary trend in terms of its potential adaptive role in exploiting novel habitats. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
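The phylogenetic simulation logic, comparing an observed displacement in morphospace against distributions generated under Brownian motion with and without a trend, can be sketched in a simplified, non-phylogenetic form: a single lineage evolving in discrete steps. The step counts, rates, and drift values below are invented for illustration; the study itself simulates on the scallop phylogeny.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_bm(n_steps, n_traits, sigma, drift):
    """Net displacement of one lineage under multivariate Brownian motion,
    optionally with a constant directional drift added to every step."""
    steps = rng.normal(0.0, sigma, (n_steps, n_traits)) + drift
    return steps.sum(axis=0)

def directional_p_value(observed, sigma, n_steps, n_sims=2000):
    """Fraction of pure-BM simulations whose net displacement is at least as
    large as the observed one; small values favor a directional trend."""
    norms = [np.linalg.norm(simulate_bm(n_steps, observed.size, sigma, 0.0))
             for _ in range(n_sims)]
    return float(np.mean(np.array(norms) >= np.linalg.norm(observed)))

observed = simulate_bm(100, 2, 0.1, 0.05)   # lineage with a genuine trend
p = directional_p_value(observed, sigma=0.1, n_steps=100)
```

When the observed pattern falls outside the pure-BM distribution, that scenario is rejected, mirroring the abstract's comparison of observed shell-shape change against BM-with-trend simulations.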
Buttini, Francesca; Pasquali, Irene; Brambilla, Gaetano; Copelli, Diego; Alberi, Massimiliano Dagli; Balducci, Anna Giulia; Bettini, Ruggero; Sisti, Viviana
2016-03-01
The aim of this work was to evaluate the effect of two different dry powder inhalers, of the NGI induction port and Alberta throat, and of the actual inspiratory profiles of asthmatic patients on in-vitro drug inhalation performance. The two devices considered were a reservoir multidose inhaler and a capsule-based inhaler. The formulation used to test the inhalers was a combination of formoterol fumarate and beclomethasone dipropionate. A breath simulator was used to mimic inhalation patterns previously determined in vivo. A multivariate approach was adopted to estimate the significance of the effect of the investigated variables in the explored domain. The breath simulator was a useful tool for mimicking in vitro the in vivo inspiratory profiles of asthmatic patients. The type of throat coupled with the impactor did not affect the aerodynamic distribution of the investigated formulation. However, the type of inhaler and the inspiratory profiles affected the respirable dose of the drugs. The multivariate statistical approach demonstrated that the multidose inhaler efficiently released a high fine-particle mass independently of the inspiratory profile adopted. In contrast, the single-dose capsule inhaler showed a significant decrease in the fine-particle mass of both drugs when the device was activated with the minimum inspiratory volume (592 mL).
NASA Astrophysics Data System (ADS)
Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao
2017-01-01
Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows - Linux interactive platform. A numerical model is built to test the parallel performance of FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled. 
It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
Searching for forcing signatures in decadal patterns of shoreline change
NASA Astrophysics Data System (ADS)
Burningham, H.; French, J.
2016-12-01
Analysis of shoreline position at spatial scales of the order 10 - 100 km and at a multi-decadal time-scale has the potential to reveal regional coherence (or lack of) in the primary controls on shoreline tendencies and trends. Such information is extremely valuable for the evaluation of climate forcing on coastal behaviour. Segmenting a coast into discrete behaviour units based on these types of analyses is often subjective, however, and in the context of pervasive human interventions and alongshore variability in ocean climate, determining the most important controls on shoreline dynamics can be challenging. Multivariate analyses provide one means to resolve common behaviours across shoreline position datasets, thereby underpinning a more objective evaluation of possible coupling between shorelines at different scales. In an analysis of the Suffolk coast (eastern England) we explore the use of multivariate statistics to understand and classify mesoscale coastal behaviour. Suffolk comprises a relatively linear shoreline that shifts from east-facing in the north to southeast-facing in the south. Although primarily formed of a beach foreshore backed by cliffs or shingle barrier, the shoreline is punctuated at 3 locations by narrow tidal inlets with offset entrances that imply a persistent north to south sediment transport direction. Tidal regime decreases south to north from mesotidal (3.6m STR) to microtidal (1.9m STR), and the bimodal wave climate (northeast and southwest modes) presents complex local-scale variability in nearshore conditions. Shorelines exhibit a range of decadal behaviours from rapid erosion (up to 4m/yr) to quasi-stability that cannot be directly explained by the spatial organisation of contemporary landforms or coastal defences. A multivariate statistical approach to shoreline change analysis helps to define the key modes of change and determine the most likely forcing factors.
Performance, physiological, and oculometer evaluation of VTOL landing displays
NASA Technical Reports Server (NTRS)
North, R. A.; Stackhouse, S. P.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance discriminant showed maximum differentiation between crosswind conditions. The visual response discriminant maximized differences between fixed vs. moving base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
A KST framework for correlation network construction from time series signals
NASA Astrophysics Data System (ADS)
Qi, Jin-Peng; Gu, Quan; Zhu, Ying; Zhang, Ping
2018-04-01
A KST (Kolmogorov-Smirnov test and T statistic) method is used to construct a correlation network based on the fluctuation of each time series within the multivariate time signals. In this method, each time series is divided equally into multiple segments, and the maximal data fluctuation in each segment is calculated by a KST change detection procedure. Connections between the time series are derived from the data fluctuation matrix and are used to construct the fluctuation correlation network (FCN). The method was tested with synthetic simulations, and the results were compared with those obtained using KS or T alone for detecting data fluctuation. The novelty of this study is that the correlation analysis is based on the data fluctuation in each segment of each time series rather than on the original time signals, which is more meaningful for many real-world applications and for the analysis of large-scale time signals where prior knowledge is uncertain.
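A minimal sketch of the segment-fluctuation idea follows. The per-segment range is used here as a simple stand-in for the paper's KS-and-T change statistic, and the correlation threshold is an arbitrary assumption; the point is that edges come from correlating fluctuation profiles, not the raw signals.

```python
import numpy as np

def fluctuation_matrix(signals, n_segments):
    """Per-series, per-segment fluctuation (here: the range within each
    segment, a stand-in for a KS/T change-detection statistic)."""
    n_series, n = signals.shape
    segments = np.array_split(np.arange(n), n_segments)
    F = np.empty((n_series, n_segments))
    for j, idx in enumerate(segments):
        F[:, j] = signals[:, idx].max(axis=1) - signals[:, idx].min(axis=1)
    return F

def fluctuation_network(signals, n_segments=10, threshold=0.7):
    """Adjacency matrix: connect series whose fluctuation profiles are
    strongly correlated across segments."""
    F = fluctuation_matrix(signals, n_segments)
    C = np.corrcoef(F)                 # correlation of fluctuation profiles
    A = np.abs(C) >= threshold
    np.fill_diagonal(A, False)         # no self-loops
    return A
```

Two series that burst in the same segments become connected even if their raw samples are uncorrelated, which is the behavior the FCN construction is after.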
Multiple sensor fault diagnosis for dynamic processes.
Li, Cheng-Chih; Jeng, Jyh-Cheng
2010-10-01
Modern industrial plants are usually large in scale and contain a great number of sensors. Sensor fault diagnosis is crucial for process safety and optimal operation. This paper proposes a systematic approach to detect, isolate and identify multiple sensor faults in multivariate dynamic systems. The work first defines deviation vectors for sensor observations, and then defines and derives the basic sensor fault matrix (BSFM), consisting of the normalized basic fault vectors, by several different methods. By projecting a process deviation vector onto the space spanned by the BSFM, this research uses a vector with the resulting weights in each direction for multiple sensor fault diagnosis. This study also proposes a novel monitoring index and derives the corresponding sensor fault detectability, utilizes that vector to isolate and identify multiple sensor faults, and discusses isolatability and identifiability. Simulation examples and comparisons with two conventional PCA-based contribution plots are presented to demonstrate the effectiveness of the proposed methodology. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
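The projection step at the heart of this approach is easy to illustrate. In the paper the BSFM is derived from the process model; in this hypothetical four-sensor sketch the sensors are treated as independent, so each normalized basic fault vector collapses to a coordinate direction, and the tolerance is an arbitrary illustrative value:

```python
import numpy as np

# Hypothetical basic sensor fault matrix (BSFM): one normalized fault
# direction per sensor. Independence of sensors is an assumption here.
BSFM = np.eye(4)

def diagnose(deviation, bsfm, tol=0.5):
    """Project the observation deviation vector onto the basic fault
    directions; sensors whose weight magnitude exceeds tol are flagged."""
    weights = bsfm.T @ deviation
    faulty = np.flatnonzero(np.abs(weights) > tol)
    return weights, faulty

# Deviation of the current observation from its normal value:
# sensors 1 and 3 carry biases, sensors 0 and 2 only small noise.
d = np.array([0.05, 2.0, -0.1, -1.5])
weights, faulty = diagnose(d, BSFM)
```

With a model-derived (non-identity) BSFM the same projection disentangles faults whose effects overlap across several sensors.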
Green lumber grade yields from factory grade logs of three oak species
Daniel A. Yaussy
1986-01-01
Multivariate regression models were developed to predict green board foot yields for the seven common factory lumber grades processed from white, black, and chestnut oak factory grade logs. These models use the standard log measurements of grade, scaling diameter, log length, and proportion of scaling defect. Any combination of lumber grades (such as 1 Common and...
ERIC Educational Resources Information Center
Frey, Jennifer R.; Elliott, Stephen N.; Kaiser, Ann P.
2014-01-01
Teachers' and parents' importance ratings of social behaviors for 95 preschoolers were examined using the "Social Skills Improvement System-Rating Scales" (Gresham & Elliott, 2008). Multivariate analyses were used to examine parents' and teachers' importance ratings at the item and subscale levels. Overall,…
Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts
2015-01-01
Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...
Insight and suicidality in psychosis: A cross-sectional study.
Massons, Carmen; Lopez-Morinigo, Javier-David; Pousa, Esther; Ruiz, Ada; Ochoa, Susana; Usall, Judith; Nieto, Lourdes; Cobo, Jesus; David, Anthony S; Dutta, Rina
2017-06-01
We aimed to test whether specific insight dimensions are associated with suicidality in patients with psychotic disorders. 143 patients with schizophrenia spectrum disorders were recruited. Suicidality was assessed by item 8 of the Calgary Depression Scale for Schizophrenia (CDSS). Insight was measured by the Scale of Unawareness of Mental Disorder (SUMD) and the Markova and Berrios Insight Scale. Bivariate analyses and multivariable logistic regression models were conducted. Those subjects aware of having a mental illness and its social consequences had higher scores on suicidality than those with poor insight. Awareness of the need for treatment was not linked with suicidality. The Markova and Berrios Insight Scale total score and two specific domains (awareness of "disturbed thinking and loss of control over the situation" and "having a vague feeling that something is wrong") were related to suicidality. However, no insight dimensions survived the multivariable regression model, which found depression and previous suicidal behaviour to predict suicidality. Suicidality in psychosis was linked with some insight dimensions: awareness of mental illness and awareness of social consequences, but not compliance. Depression and previous suicidal behaviour mediated the associations with insight and thus predicted suicidality. Copyright © 2017. Published by Elsevier B.V.
Drew, L.J.; Grunsky, E.C.; Sutphin, D.M.; Woodruff, L.G.
2010-01-01
Soils collected in 2004 along two North American continental-scale transects were subjected to geochemical and mineralogical analyses. In previous interpretations of these analyses, data were expressed in weight percent and parts per million, and thus were subject to the effect of the constant-sum phenomenon. In a new approach to the data, this effect was removed by using centered log-ratio transformations to 'open' the mineralogical and geochemical arrays. Multivariate analyses, including principal component and linear discriminant analyses, of the centered log-ratio data reveal the effects of soil-forming processes, including soil parent material, weathering, and soil age, at the continental-scale of the data arrays that were not readily apparent in the more conventionally presented data. Linear discriminant analysis of the data arrays indicates that the majority of the soil samples collected along the transects can be more successfully classified with Level 1 ecological regional-scale classification by the soil geochemistry than soil mineralogy. A primary objective of this study is to discover and describe, in a parsimonious way, geochemical processes that are both independent and inter-dependent and manifested through compositional data including estimates of the elements and corresponding mineralogy. © 2010.
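The centered log-ratio (clr) transformation used to 'open' the compositional arrays is a one-liner: each part is expressed relative to the geometric mean of its row. A minimal sketch with an invented three-part sample:

```python
import numpy as np

def clr(composition):
    """Centered log-ratio transform: log of each part relative to the
    geometric mean of its row. This removes the constant-sum constraint
    so standard multivariate methods (PCA, discriminant analysis) can be
    applied without closure artifacts."""
    log_comp = np.log(np.asarray(composition, dtype=float))
    return log_comp - log_comp.mean(axis=-1, keepdims=True)

# A single hypothetical sample in weight percent (parts sum to 100).
sample = np.array([60.0, 30.0, 10.0])
z = clr(sample)
```

Two properties make clr attractive here: the transformed parts sum to zero, and the result is invariant to the units of closure, so weight-percent and ppm expressions of the same composition map to identical coordinates.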
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using a six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, to draw general conclusions additional test data are required.
Tawk, Rabih G; Grewal, Sanjeet S; Heckman, Michael G; Rawal, Bhupendra; Miller, David A; Edmonston, Drucilla; Ferguson, Jennifer L; Navarro, Ramon; Ng, Lauren; Brown, Benjamin L; Meschia, James F; Freeman, William D
2016-04-01
The value of neuron-specific enolase (NSE) in predicting clinical outcomes has been investigated in a variety of neurological disorders. To investigate the associations of serum NSE with severity of bleeding and functional outcomes in patients with subarachnoid hemorrhage (SAH). We retrospectively reviewed the records of patients with SAH from June 2008 to June 2012. The severity of SAH bleeding at admission was measured radiographically with the Fisher scale and clinically with the Glasgow Coma Scale, Hunt and Hess grade, and World Federation of Neurologic Surgeons scale. Outcomes were assessed with the modified Rankin Scale at discharge. We identified 309 patients with nontraumatic SAH, and 71 had NSE testing. Median age was 54 years (range, 23-87 years), and 44% were male. In multivariable analysis, increased NSE was associated with a poorer Hunt and Hess grade (P = .003), World Federation of Neurologic Surgeons scale score (P < .001), and Glasgow Coma Scale score (P = .003) and worse outcomes (modified Rankin Scale at discharge; P = .001). There was no significant association between NSE level and Fisher grade (P = .81) in multivariable analysis. We found a significant association between higher NSE levels and poorer clinical presentations and worse outcomes. Although it is still early for any relevant clinical conclusions, our results suggest that NSE holds promise as a tool for screening patients at increased risk of poor outcomes after SAH.
The contribution of cognition and spasticity to driving performance in multiple sclerosis.
Marcotte, Thomas D; Rosenthal, Theodore J; Roberts, Erica; Lampinen, Sara; Scott, J Cobb; Allen, R Wade; Corey-Bloom, Jody
2008-09-01
To examine the independent and combined impact of cognitive dysfunction and spasticity on driving tasks involving high cognitive workload and lower-limb mobility in persons with multiple sclerosis (MS). Single-visit cohort study. Clinical research center. Participants included 17 drivers with MS and 14 referent controls. The group with MS exhibited a broad range of cognitive functioning and disability. Of the 17 patients with MS, 8 had significant spasticity in the knee used to manipulate the accelerator and brake pedals (based on the Modified Ashworth Scale). Not applicable. A brief neuropsychologic test battery and 2 driving simulations. Simulation 1 required participants to maintain a constant speed and lane position while attending to a secondary task. Simulation 2 required participants to adjust their speed to accelerations and decelerations of a lead car in front of them. Patients with MS showed greater variability in lane position (effect size, g=1.30), greater difficulty in maintaining a constant speed (g=1.25), and less ability to respond to lead car speed changes (g=1.85) compared with controls. Within the MS group, in a multivariate model that included neuropsychologic and spasticity measures, cognitive functioning was the strongest predictor of difficulty in maintaining lane position during the divided attention task and poor response time to lead car speed changes, whereas spasticity was associated with reductions in accuracy of tracking the lead car movements and speed maintenance. In this preliminary study, cognitive and physical impairments associated with MS were related to deficits in specific components of simulated driving. Assessment of these factors may help guide the clinician regarding the types of driving behaviors that would put patients with MS at an increased risk for an automobile crash.
Estimation of the uncertainty of a climate model using an ensemble simulation
NASA Astrophysics Data System (ADS)
Barth, A.; Mathiot, P.; Goosse, H.
2012-04-01
The atmospheric forcings play an important role in the study of the ocean and sea-ice dynamics of the Southern Ocean. Error in the atmospheric forcings will inevitably result in uncertain model results. The sensitivity of the model results to errors in the atmospheric forcings is studied with ensemble simulations using multivariate perturbations of the atmospheric forcing fields. The numerical ocean model used is NEMO-LIM in a global configuration with a horizontal resolution of 2°. NCEP reanalyses are used to provide air temperature and wind data to force the ocean model over the last 50 years. A climatological mean is used to prescribe relative humidity, cloud cover and precipitation. In a first step, the model results are compared with OSTIA SST and OSI SAF sea ice concentration of the southern hemisphere. The seasonal behavior of the RMS difference and bias in SST and ice concentration is highlighted as well as the regions with relatively high RMS errors and biases such as the Antarctic Circumpolar Current and near the ice-edge. Ensemble simulations are performed to statistically characterize the model error due to uncertainties in the atmospheric forcings. Such information is a crucial element for future data assimilation experiments. Ensemble simulations are performed with perturbed air temperature and wind forcings. A Fourier decomposition of the NCEP wind vectors and air temperature for 2007 is used to generate ensemble perturbations. The perturbations are scaled such that the resulting ensemble spread matches approximately the RMS differences between the satellite SST and sea ice concentration. The ensemble spread and covariance are analyzed for the minimum and maximum sea ice extent. It is shown that the effects of errors in the atmospheric forcings can extend to several hundred meters in depth near the Antarctic Circumpolar Current.
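One plausible reading of the Fourier-based forcing perturbations, sketched below: randomize the phases of a forcing series while keeping its power spectrum, then rescale the draw to a target RMS spread. This is an illustrative interpretation, not the authors' exact procedure, and the synthetic forcing and target RMS are invented:

```python
import numpy as np

def phase_randomized_perturbation(series, target_rms, rng):
    """Draw a perturbation with the same power spectrum as `series` but
    randomized phases, rescaled to a target RMS amplitude."""
    coeffs = np.fft.rfft(series - series.mean())
    coeffs = coeffs * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, coeffs.size))
    coeffs[0] = 0.0                      # leave the mean state unperturbed
    draw = np.fft.irfft(coeffs, n=series.size)
    return draw * (target_rms / np.sqrt(np.mean(draw ** 2)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
forcing = 10.0 + 3.0 * np.sin(t) + np.sin(5.0 * t)   # synthetic wind component
perturbation = phase_randomized_perturbation(forcing, target_rms=0.5, rng=rng)
member = forcing + perturbation                       # one ensemble member
```

Repeating the draw with different random phases yields an ensemble whose spread is tuned, as in the abstract, to match an observed RMS difference.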
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, Jeremiah J; Kenny, Joseph P.
2015-02-01
Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
49 CFR 239.105 - Debriefing and critique.
Code of Federal Regulations, 2014 CFR
2014-10-01
... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. To the extent practicable, all on-board...-scale simulation shall participate in the session either: (1) In person; (2) Offsite via teleconference...
Yan, En-Rong; Yang, Xiao-Dong; Chang, Scott X; Wang, Xi-Hua
2013-01-01
Understanding how plant trait-species abundance relationships change with a range of single and multivariate environmental properties is crucial for explaining species abundance and rarity. In this study, the abundance of 94 woody plant species was examined and related to 15 plant leaf and wood traits at both local and landscape scales involving 31 plots in subtropical forests in eastern China. Further, plant trait-species abundance relationships were related to a range of single and multivariate (PCA axes) environmental properties such as air humidity, soil moisture content, soil temperature, soil pH, and soil organic matter, nitrogen (N) and phosphorus (P) contents. At the landscape scale, plant maximum height, and twig and stem wood densities were positively correlated, whereas mean leaf area (MLA), leaf N concentration (LN), and total leaf area per twig size (TLA) were negatively correlated with species abundance. At the plot scale, plant maximum height, leaf and twig dry matter contents, twig and stem wood densities were positively correlated, but MLA, specific leaf area, LN, leaf P concentration and TLA were negatively correlated with species abundance. Plant trait-species abundance relationships shifted over the range of seven single environmental properties and along multivariate environmental axes in a similar way. In conclusion, strong relationships between plant traits and species abundance existed among and within communities. Significant shifts in plant trait-species abundance relationships in a range of environmental properties suggest strong environmental filtering processes that influence species abundance and rarity in the studied subtropical forests.
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies; most of these strategies have been criticized for physical inconsistency and the non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup[1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties.
The present study consists of a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
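The kind of univariate bias correction being criticized above is easy to demonstrate. Empirical quantile mapping, sketched below with invented data, replaces each model value by the observed value at the same quantile; correcting each variable independently like this is precisely what can distort physical consistency and multivariate correlations:

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical quantile mapping: map each value to the observed value
    at the quantile it occupies in the model distribution."""
    ranks = np.searchsorted(np.sort(model), values, side='right') / model.size
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

# Synthetic model output with a +2 K warm bias relative to observations.
obs = np.arange(100) * 0.1
model = obs + 2.0
corrected = quantile_map(model, obs, model)
```

The corrected series recovers the observed distribution, but applying this separately to, say, temperature and humidity gives no guarantee that their joint behavior remains physically plausible, which motivates the ensemble-resampling alternative described in the abstract.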
DasPy – Open Source Multivariate Land Data Assimilation Framework with High Performance Computing
NASA Astrophysics Data System (ADS)
Han, Xujun; Li, Xin; Montzka, Carsten; Kollet, Stefan; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2015-04-01
Data assimilation has become a popular method to integrate observations from multiple sources with land surface models to improve predictions of the water and energy cycles of the soil-vegetation-atmosphere continuum. In recent years, several land data assimilation systems have been developed in different research agencies. Because of the software availability or adaptability, these systems are not easy to apply for the purpose of multivariate land data assimilation research. Multivariate data assimilation refers to the simultaneous assimilation of observation data for multiple model state variables into a simulation model. Our main motivation was to develop an open source multivariate land data assimilation framework (DasPy), which is implemented using the Python script language mixed with C++ and Fortran language. This system has been evaluated in several soil moisture, L-band brightness temperature and land surface temperature assimilation studies. The implementation also allows parameter estimation (soil properties and/or leaf area index) on the basis of the joint state and parameter estimation approach. LETKF (Local Ensemble Transform Kalman Filter) is implemented as the main data assimilation algorithm, and uncertainties in the data assimilation can be represented by perturbed atmospheric forcings, perturbed soil and vegetation properties and model initial conditions. The CLM4.5 (Community Land Model) was integrated as the model operator. The CMEM (Community Microwave Emission Modelling Platform), COSMIC (COsmic-ray Soil Moisture Interaction Code) and the two source formulation were integrated as observation operators for assimilation of L-band passive microwave, cosmic-ray soil moisture probe and land surface temperature measurements, respectively. DasPy is parallelized using the hybrid MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) techniques.
All the input and output data flow is organized efficiently using the commonly used NetCDF file format. Online 1D and 2D visualization of data assimilation results is also implemented to facilitate the post simulation analysis. In summary, DasPy is a ready to use open source parallel multivariate land data assimilation framework.
NASA Astrophysics Data System (ADS)
Pérez-Ruzafa, A.; Marcos, C.; Pérez-Ruzafa, I. M.; Barcala, E.; Hegazi, M. I.; Quispe, J.
2007-10-01
To detect changes in ecosystems due to human impact, experimental designs must include replicates at the appropriate scale to avoid pseudoreplication. Although coastal lagoons, with their highly variable environmental factors and biological assemblages, are relatively well-studied systems, very little is known about their natural scales of variation. In this study, we investigate the spatio-temporal scales of variability in the Mar Menor coastal lagoon (SE Spain) using structured hierarchical sampling designs, mixed and permutational multivariate analyses of variance, and ordination multivariate analyses applied to hydrographical parameters, nutrients, chlorophyll a and ichthyoplankton in the water column, and to macrophyte and fish benthic assemblages. Lagoon processes in the Mar Menor show heterogeneous patterns at different temporal and spatial scales. The water column characteristics (including nutrient concentration) showed small-scale spatio-temporal variability, from 10^0 to 10^1 km and from fortnightly to seasonally. Biological features (chlorophyll a concentration and ichthyoplankton assemblage descriptors) showed monthly changes and spatial patterns at the scale of 10^0 (chlorophyll a) to 10^1 km (ichthyoplankton). Benthic assemblages (macrophytes and fishes) showed significant differences between types of substrates in the same locality and between localities, according to horizontal gradients related with confinement in the lagoon, at the scale of 10^0-10^1 km. The vertical zonation of macrophyte assemblages (at scales of 10^1-10^2 cm) overlaps changes in substrata and horizontal gradients. Seasonal patterns in vegetation biomass were not significant, but the significant interaction between Locality and Season indicated that the seasons of maximum and minimum biomass depend on local environmental conditions. Benthic fish assemblages showed no significant patterns at the monthly scale but did show seasonal patterns.
Scale effects in wind tunnel modeling of an urban atmospheric boundary layer
NASA Astrophysics Data System (ADS)
Kozmar, Hrvoje
2010-03-01
Precise urban atmospheric boundary layer (ABL) wind tunnel simulations are essential for a wide variety of atmospheric studies in built-up environments including wind loading of structures and air pollutant dispersion. One of the key issues in addressing these problems is a proper choice of simulation length scale. In this study, an urban ABL was reproduced in a boundary layer wind tunnel at different scales to study possible scale effects. Two full-depth simulations and one part-depth simulation were carried out using a castellated barrier wall, vortex generators, and a fetch of roughness elements. Redesigned “Counihan” vortex generators were employed in the part-depth ABL simulation. A hot-wire anemometry system was used to measure mean velocity and velocity fluctuations. Experimental results are presented as mean velocity, turbulence intensity, Reynolds stress, integral length scale of turbulence, and power spectral density of velocity fluctuations. Results suggest that variations in the length-scale factor do not influence the generated ABL models when using the similarity criteria applied in this study. The part-depth ABL simulation compares well with the two full-depth ABL simulations, indicating that the truncated vortex generators developed for this study can be successfully employed in urban ABL part-depth simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardle, Kent E.; Frey, Kurt; Pereira, Candido
2014-02-02
This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulation on the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. Through combination of information gained through simulations at each of these two tiers, along with advanced techniques such as the Lattice Boltzmann Method (LBM) which can bridge these two scales, we can develop the tools to work towards predictive simulation for solvent extraction on the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales will be described below. As the initial application of FELBM in the work performed during FY10 has been on annular mixing, it will be discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be in its use as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating exploration and development of droplet models, including breakup and coalescence, which will be needed for the large-scale simulations where droplet-level physics cannot be resolved. In this area, it can have a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.
DigOut: viewing differential expression genes as outliers.
Yu, Hui; Tu, Kang; Xie, Lu; Li, Yuan-Yuan
2010-12-01
With regard to well-replicated two-conditional microarray datasets, the selection of differentially expressed (DE) genes is a well-studied computational topic, but for multi-conditional microarray datasets with limited or no replication, the same task has not been properly addressed by previous studies. This paper adopts multivariate outlier analysis to analyze replication-lacking multi-conditional microarray datasets, finding that it performs significantly better than the widely used limit fold change (LFC) model in a simulated comparative experiment. Compared with the LFC model, the multivariate outlier analysis also demonstrates improved stability against sample variations in a series of manipulated real expression datasets. The reanalysis of a real non-replicated multi-conditional expression dataset series leads to satisfactory results. In conclusion, a multivariate outlier analysis algorithm, like DigOut, is particularly useful for selecting DE genes from non-replicated multi-conditional gene expression datasets.
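The idea of treating DE genes as multivariate outliers can be sketched with a Mahalanobis-distance score. This is a simplified stand-in for DigOut, not the published algorithm, and the gene counts, correlation structure, and cutoff are all invented for illustration:

```python
import numpy as np

def mahalanobis_outliers(expr, quantile=0.95):
    """Score each gene (row) by the Mahalanobis distance of its expression
    profile across conditions (columns) from the bulk of genes, and flag
    the most extreme scores."""
    centered = expr - expr.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(expr, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', centered, inv_cov, centered)
    return d2, d2 > np.quantile(d2, quantile)

# 100 genes over 3 conditions; expression is strongly correlated across
# conditions except for gene 0, which breaks the correlation structure.
rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, 100)
expr = np.column_stack([base,
                        base + rng.normal(0.0, 0.1, 100),
                        base + rng.normal(0.0, 0.1, 100)])
expr[0] = [3.0, -3.0, 3.0]
d2, flagged = mahalanobis_outliers(expr)
```

Unlike a fold-change rule applied to each condition separately, the multivariate score reacts to genes that violate the joint correlation pattern even when no single value is extreme.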
Guo, Ying; Manatunga, Amita K
2009-03-01
Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established and the performance of the estimators are examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.
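For background, the quantity the modified coefficient extends is the standard weighted kappa between two raters' ordinal ratings. The sketch below implements a linearly weighted kappa from a k x k confusion matrix; the censoring-mass redistribution that is the paper's contribution is not attempted here, and the example ratings are invented:

```python
import numpy as np

def weighted_kappa(confusion):
    """Linearly weighted kappa for two raters' ordinal ratings, computed
    from their k x k confusion matrix of joint category counts."""
    confusion = np.asarray(confusion, dtype=float)
    k = confusion.shape[0]
    p = confusion / confusion.sum()
    i, j = np.indices((k, k))
    w = 1.0 - np.abs(i - j) / (k - 1)        # linear agreement weights
    po = np.sum(w * p)                        # observed weighted agreement
    pe = np.sum(w * np.outer(p.sum(axis=1), p.sum(axis=0)))  # chance level
    return (po - pe) / (1.0 - pe)

# Hypothetical joint ratings of 38 subjects on a 3-level ordinal scale.
ratings = np.array([[10, 2, 0],
                    [1, 12, 1],
                    [0, 2, 10]])
kappa = weighted_kappa(ratings)
```

Perfect agreement yields kappa = 1, and near-diagonal disagreements are penalized less than distant ones, which is what makes the weighted form natural for discretized survival times.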
Everson, Naleya; Levett-Jones, Tracy; Pitt, Victoria; Lapkin, Samuel; Van Der Riet, Pamela; Rossiter, Rachel; Jones, Donovan; Gilligan, Conor; Courtney Pratt, Helen
2018-04-25
Background: Empathic concern has been found to decline in health professional students. Few effective educational programs and a lack of validated scales have been reported. Previous analyses of the Empathic Concern scale of the Emotional Response Questionnaire have reported both one and two latent constructs. Aim: To evaluate the impact of simulation on nursing students' empathic concern and to test the psychometric properties of the Empathic Concern scale. Methods: The study used a one-group pre-test post-test design with a convenience sample of 460 nursing students. Empathic concern was measured pre- and post-simulation with the Empathic Concern scale. Factor analysis was undertaken to investigate the structure of the scale. Results: There was a statistically significant increase in Empathic Concern scores between pre-simulation, 5.57 (SD = 1.04), and post-simulation, 6.10 (SD = 0.95). Factor analysis of the Empathic Concern scale identified one latent dimension. Conclusion: Immersive simulation may promote empathic concern. The Empathic Concern scale measured a single latent construct in this cohort.
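The reported summary statistics support a quick effect-size check. The pooled-SD version below is only an approximation: a paired pre-post design would properly standardize by the SD of the change scores, which the abstract does not report:

```python
# Standardized mean difference from the reported summary statistics.
pre_mean, pre_sd = 5.57, 1.04
post_mean, post_sd = 6.10, 0.95

# Pooled SD (equal n pre and post, since it is the same cohort).
pooled_sd = ((pre_sd ** 2 + post_sd ** 2) / 2.0) ** 0.5
cohens_d = (post_mean - pre_mean) / pooled_sd
print(round(cohens_d, 2))  # prints 0.53, a moderate effect
```

If the pre-post correlation were known, the paired effect size would be larger than this pooled estimate whenever the correlation is positive.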
Scaling a Convection-Resolving RCM to Near-Global Scales
NASA Astrophysics Data System (ADS)
Leutwyler, D.; Fuhrer, O.; Chadha, T.; Kwasniewski, G.; Hoefler, T.; Lapillonne, X.; Lüthi, D.; Osuna, C.; Schar, C.; Schulthess, T. C.; Vogt, H.
2017-12-01
In recent years, the first decade-long kilometer-scale-resolution RCM simulations have been performed on continental-scale computational domains. However, the planet Earth is still an order of magnitude larger, and thus the computational implications of performing global climate simulations at this resolution are challenging. We explore the gap between the currently established RCM simulations and global simulations by scaling the GPU-accelerated version of the COSMO model to a near-global computational domain. To this end, the evolution of an idealized moist baroclinic wave has been simulated over the course of 10 days with a grid spacing of up to 930 m. The computational mesh employs 36,000 x 16,001 x 60 grid points and covers 98.4% of the planet's surface. The code shows perfect weak scaling up to 4,888 nodes of the Piz Daint supercomputer and yields 0.043 simulated years per day (SYPD), approximately one seventh of the 0.2-0.3 SYPD required to conduct AMIP-type simulations. At half the resolution (1.9 km), however, we observed 0.23 SYPD. Besides the formation of frontal precipitating systems containing embedded explicitly resolved convective motions, the simulations reveal a secondary instability that leads to cut-off warm-core cyclonic vortices in the cyclone's core once the grid spacing is refined to the kilometer scale. The explicit representation of embedded moist convection and of the previously unresolved instabilities exhibits physically different behavior in comparison to coarser-resolution simulations. The study demonstrates that global climate simulations using kilometer-scale resolution are imminent, and it serves as a baseline benchmark for global climate model applications and future exascale supercomputing systems.
Lv, Yong; Song, Gangbing
2018-01-01
Rolling bearings are important components in rotary machinery systems. In the field of multi-fault diagnosis of rolling bearings, the vibration signal collected from a single channel tends to miss some fault characteristic information. Using multiple sensors to collect signals at different locations on the machine to obtain a multivariate signal can remedy this problem. The adverse effect of a power imbalance between the various channels is inevitable and unfavorable for multivariate signal processing. As a useful multivariate signal processing method, adaptive-projection intrinsically transformed multivariate empirical mode decomposition (APIT-MEMD) exhibits better performance than MEMD by adopting an adaptive projection strategy to alleviate power imbalances. The filter bank properties of APIT-MEMD are also exploited to obtain more accurate and stable intrinsic mode functions (IMFs) and to ease mode-mixing problems in multi-fault frequency extraction. By aligning IMF sets into a third-order tensor, higher-order singular value decomposition (HOSVD) can be employed to estimate the fault number. Fault correlation factor (FCF) analysis is used to conduct correlation analysis in order to determine effective IMFs; the characteristic frequencies of multiple faults can then be extracted. Numerical simulations and an application to a multi-fault situation demonstrate that the proposed method is promising for multi-fault diagnosis of multivariate rolling bearing signals. PMID:29659510
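As a sketch of the fault-number estimation step, the HOSVD of a third-order tensor can be computed from the SVDs of its mode unfoldings. The sketch below is a generic NumPy implementation under that standard definition, not the authors' code; the function name is illustrative.

```python
import numpy as np

def hosvd(tensor):
    """Higher-order SVD: one orthogonal factor matrix per mode, obtained
    from the SVD of that mode's unfolding, plus the core tensor."""
    factors = []
    for mode in range(tensor.ndim):
        # Unfold: move `mode` to the front, flatten the remaining modes.
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(u)
    # Core tensor: contract each mode with the transpose of its factor.
    core = tensor
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors
```

In the setting described above, the magnitudes of the core tensor's entries (mode singular values) would indicate how many significant components, and hence faults, are present.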
Finding structure in data using multivariate tree boosting
Miller, Patrick J.; Lubke, Gitta H.; McArtor, Daniel B.; Bergeman, C. S.
2016-01-01
Technology and collaboration enable dramatic increases in the size of psychological and psychiatric data collections, but finding structure in these large data sets with many collected variables is challenging. Decision tree ensembles such as random forests (Strobl, Malley, & Tutz, 2009) are a useful tool for finding structure, but are difficult to interpret with multiple outcome variables which are often of interest in psychology. To find and interpret structure in data sets with multiple outcomes and many predictors (possibly exceeding the sample size), we introduce a multivariate extension to a decision tree ensemble method called gradient boosted regression trees (Friedman, 2001). Our extension, multivariate tree boosting, is a method for nonparametric regression that is useful for identifying important predictors, detecting predictors with nonlinear effects and interactions without specification of such effects, and for identifying predictors that cause two or more outcome variables to covary. We provide the R package ‘mvtboost’ to estimate, tune, and interpret the resulting model, which extends the implementation of univariate boosting in the R package ‘gbm’ (Ridgeway et al., 2015) to continuous, multivariate outcomes. To illustrate the approach, we analyze predictors of psychological well-being (Ryff & Keyes, 1995). Simulations verify that our approach identifies predictors with nonlinear effects and achieves high prediction accuracy, exceeding or matching the performance of (penalized) multivariate multiple regression and multivariate decision trees over a wide range of conditions. PMID:27918183
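The described approach is implemented in the R package 'mvtboost'. As a rough, simplified analogue in Python, one can boost regression stumps separately for each outcome and use per-feature split counts as a crude importance summary; this sketch omits mvtboost's covariance-explained decomposition and tuning, and all names are illustrative.

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split for residual vector r (least squares)."""
    best_sse, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            mask = X[:, j] <= t
            if mask.all() or not mask.any():
                continue
            lm, rm = r[mask].mean(), r[~mask].mean()
            sse = ((r[mask] - lm) ** 2).sum() + ((r[~mask] - rm) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, t, lm, rm)
    return best

def multivariate_boost(X, Y, n_rounds=50, lr=0.1):
    """Gradient boosting with regression stumps, fit separately to each
    outcome column of Y; split counts per feature give a crude importance."""
    pred = np.zeros_like(Y, dtype=float)
    splits = np.zeros(X.shape[1])
    for k in range(Y.shape[1]):
        for _ in range(n_rounds):
            j, t, lm, rm = fit_stump(X, Y[:, k] - pred[:, k])
            pred[:, k] += lr * np.where(X[:, j] <= t, lm, rm)
            splits[j] += 1
    return pred, splits
```

A predictor that drives several outcomes accumulates splits across all the per-outcome models, which loosely mirrors how multivariate boosting surfaces predictors that make outcomes covary.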
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories
NASA Astrophysics Data System (ADS)
Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear with its implications, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
NASA Astrophysics Data System (ADS)
Yokota, Miyo; Berglund, Larry G.; Bathalon, Gaston P.
2012-03-01
The use of thermoregulatory models for assessing physiological responses of workers in thermally stressful situations has been increasing because of the risks and costs related to human studies. In a previous study (Yokota et al. Eur J Appl Physiol 104:297-302, 2008), the effects of anthropometric variability on predicted physiological responses to heat stress in U.S. Army male soldiers were evaluated. Five somatotypes were identified in U.S. Army male multivariate anthropometric distribution. The simulated heat responses, using a thermoregulatory model, were different between somatotypes. The present study further extends this line of research to female soldiers. Anthropometric somatotypes were identified using multivariate analysis [height, weight, percent body fat (%BF)] and the predicted physiological responses to simulated exercise and heat stress using a thermoregulatory model were evaluated. The simulated conditions included walking at ~3 mph (4.8 km/h) for 300 min and wearing battle dress uniform and body armor in a 30°C, 25% relative humidity (RH) environment without solar radiation. Five major somatotypes (tall-fat, tall-lean, average, short-lean, and short-fat), identified through multivariate analysis of anthropometric distributions, showed different tolerance levels to simulated heat stress: lean women were predicted to maintain their core temperatures (Tc) lower than short-fat or tall-fat women. The measured Tc of female subjects obtained from two heat studies (data1: 30°C, 32% RH, protective garments, ~225 w·m-2 walk for 90 min; data2: 32°C, 75% RH, hot weather battle dress uniform, ~378 ± 32 w·m-2 for 30 min walk/30 min rest cycles for 120 min) were utilized for validation. Validation results agreed with the findings in this study: fat subjects tended to have higher core temperatures than medium individuals (data2) and lean subjects maintained lower core temperatures than medium subjects (data1).
49 CFR 239.105 - Debriefing and critique.
Code of Federal Regulations, 2013 CFR
2013-10-01
... emergency situation or full-scale simulation to determine the effectiveness of its emergency preparedness... passenger train emergency situation or full-scale simulation. (b) Exceptions. (1) No debriefing and critique...; (2) How much time elapsed between the occurrence of the emergency situation or full-scale simulation...
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-01-01
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Determining erosion relevant soil characteristics with a small-scale rainfall simulator
NASA Astrophysics Data System (ADS)
Schindewolf, M.; Schmidt, J.
2009-04-01
The use of soil erosion models is of great importance in soil and water conservation. Routine application of these models on the regional scale is limited, not least, by their high parameter demands. Although the EROSION 3D simulation model operates with a comparatively low number of parameters, some of the model input variables can only be determined by rainfall simulation experiments. The existing database of EROSION 3D was created in the mid-1990s from large-scale rainfall simulation experiments on 22 x 2 m experimental plots. Up to now this database does not cover all soil and field conditions adequately. A new campaign of experiments is therefore essential to produce additional information, especially with respect to the effects of new soil management practices (e.g., long-term conservation tillage, no-till). The rainfall simulator used in the current campaign consists of 30 identical modules equipped with oscillating Veejet 80/100 rainfall nozzles (Spraying Systems Co., Wheaton, IL), used to ensure the best possible comparability to natural rainfall with respect to raindrop size distribution and momentum transfer. The central objectives of the small-scale rainfall simulator are efficient application and the provision of results comparable to large-scale rainfall simulation experiments. A crucial problem in using the small-scale simulator is its restriction to rather small volume rates of surface runoff. Under these conditions soil detachment is governed by raindrop impact, so the contribution of surface runoff to particle detachment cannot be reproduced adequately by a small-scale rainfall simulator. With this problem in mind, this paper presents an enhanced small-scale simulator that allows a virtual multiplication of the plot length by feeding additional sediment-loaded water onto the plot from upstream.
It is thus possible to overcome the plot length limit of 3 m while reproducing flow conditions nearly identical to those in rainfall experiments on standard plots. The simulator has been applied extensively to plots of different soil types, crop types, and management systems. Comparison with existing data sets obtained by large-scale rainfall simulations shows that results can be reproduced adequately by the applied combination of a small-scale rainfall simulator and sediment-loaded water influx.
A Cyber-Attack Detection Model Based on Multivariate Analyses
NASA Astrophysics Data System (ADS)
Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi
In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and cluster analysis. We quantify the observed qualitative audit event sequences via quantification method IV and group similar audit event sequences based on the cluster analysis. Simulation experiments show that our model can improve cyber-attack detection accuracy in some realistic cases where normal and attack activities are intermingled.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
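The GICC generalizes scalar intra-class correlations. As background, here is a minimal sketch of the classical one-way ICC(1) computed from ANOVA mean squares; this is the scalar building block, not the probit-linear mixed-model estimator described above, and the function name is illustrative.

```python
import numpy as np

def icc_oneway(data):
    """One-way ICC(1) from a subjects x replicates array: the fraction of
    total variance attributable to between-subject differences."""
    n, k = data.shape
    grand = data.mean()
    # ANOVA mean squares: between-subject and within-subject (residual).
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Identical replicates give an ICC of 1; replicates dominated by measurement noise drive it toward 0.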
Design of full-scale adsorption systems typically includes expensive and time-consuming pilot studies to simulate full-scale adsorber performance. Accordingly, the rapid small-scale column test (RSSCT) was developed and evaluated experimentally. The RSSCT can simulate months of f...
Meng, Qiong; Yang, Zheng; Wu, Yang; Xiao, Yuanyuan; Gu, Xuezhong; Zhang, Meixia; Wan, Chonghua; Li, Xiaosong
2017-05-04
The Functional Assessment of Cancer Therapy-Leukemia (FACT-Leu) scale, a leukemia-specific instrument for determining the health-related quality of life (HRQOL) in patients with leukemia, had been developed and validated, but there have been no reports on the development of a simplified Chinese version of this scale. This is a new exploration of analyzing the reliability of an HRQOL measurement using multivariate generalizability theory (MGT). This study aimed to develop a Chinese version of the FACT-Leu scale and evaluate its reliability using MGT to provide evidence to support the revision and improvement of this scale. The Chinese version of the FACT-Leu scale was developed in four steps: forward translation, backward translation, cultural adaptation, and pilot-testing. HRQOL was measured for eligible inpatients with leukemia using this scale to provide data. A single-facet multivariate Generalizability Study (G-study) design was used to estimate the variance-covariance components, and several Decision Studies (D-studies) with varying numbers of items were then analyzed to obtain reliability coefficients and to understand how much the measurement reliability would vary as the number of items changes. One hundred and one eligible inpatients diagnosed with leukemia were recruited and completed the HRQOL measurement at the time of admission to the hospital. In the G-study, the variance component of the patient-item interaction was the largest while the variance component of the item was the smallest for four of the five domains, the exception being the leukemia-specific (LEUS) domain. In the D-study, at the level of the domain, the generalizability coefficients (G) and the indexes of dependability (Ф) for four of the five domains were approximately equal to or greater than 0.80, the exception being the Emotional Well-being (EWB) domain (>0.70 but <0.80). For the overall scale, the composite G and composite Ф coefficients were greater than 0.90.
Based on the G coefficient and Ф coefficient, two decision options for revising this scale considering the number of items were obtained: one is a 37-item version while the other is a 45-item version. The Chinese version of the FACT-Leu scale has good reliability as a whole based on the results of MGT and the implementation of MGT could lead to more informed decisions in complex questionnaire design and improvement.
Zhang, Hang; Xu, Qingyan; Liu, Baicheng
2014-01-01
The rapid development of numerical modeling techniques has led to more accurate results in modeling metal solidification processes. In this study, the cellular automaton-finite difference (CA-FD) method was used to simulate the directional solidification (DS) process of single crystal (SX) superalloy blade samples. Experiments were carried out to validate the simulation results. Meanwhile, an intelligent model based on fuzzy control theory was built to optimize the complicated DS process. Several key parameters, such as the mushy zone width and the temperature difference at the cast-mold interface, were chosen as the input variables. The input variables were processed with a multivariable fuzzy rule to obtain the output adjustment of the withdrawal rate (v), a key technological parameter. The multivariable fuzzy rule was built based on structural features of the casting (such as the relationship between section areas), the delay time of the temperature-change response to changes in v, and the professional experience of the operator. The fuzzy control model coupled with the CA-FD method could then be used to optimize v in real time during the manufacturing process. The optimized process was shown to be more flexible and adaptive for a steady, stray-grain-free DS process. PMID:28788535
Kia, Seyed Mostafa; Vega Pons, Sandro; Weisz, Nathan; Passerini, Andrea
2016-01-01
Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal to noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. 
More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.
Application of multivariate statistical techniques in microbial ecology
Paliy, O.; Shankar, V.
2016-01-01
Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological datasets. An especially noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions, and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces large amounts of data, powerful statistical techniques of multivariate analysis are well suited to analyze and interpret these datasets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular dataset. In this review we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive, and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and dataset structure. PMID:26786791
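As a small illustration of one exploratory technique mentioned above, principal component analysis of a sample-by-variable table (e.g., a sample-by-taxon abundance matrix) can be sketched via the SVD of the centered data matrix. This is a generic sketch, not code from the review; the function name is illustrative.

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of the centered data matrix: returns sample scores
    (the ordination) and the fraction of variance per retained component."""
    Xc = X - X.mean(axis=0)
    u, s, vt = np.linalg.svd(Xc, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]   # sample ordination
    explained = s ** 2 / (s ** 2).sum()               # variance fractions
    return scores, explained[:n_components]
```

Plotting the first two score columns gives the familiar ordination diagram in which similar communities cluster together.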
A refined method for multivariate meta-analysis and meta-regression
Jackson, Daniel; Riley, Richard D
2014-01-01
Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351
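A univariate analogue of the scaled-standard-error refinement can be sketched as below; the DerSimonian-Laird heterogeneity estimator, the Hartung-Knapp-type scaling factor, and the example effect sizes are illustrative assumptions, not the authors' exact multivariate method:

```python
import numpy as np
from scipy import stats

def refined_meta(y, v):
    """Univariate random-effects meta-analysis with a scaled standard error
    (a Hartung-Knapp-type refinement, sketched as an analogue of the abstract's
    approach). y: study effects, v: within-study variances."""
    k = len(y)
    w = 1.0 / v                                    # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)             # DerSimonian-Laird estimate
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_conv = np.sqrt(1.0 / np.sum(w_re))          # conventional SE
    q = np.sum(w_re * (y - mu) ** 2) / (k - 1)     # scaling factor for the SE
    se_ref = np.sqrt(q) * se_conv
    t = stats.t.ppf(0.975, k - 1)                  # t-based interval, k-1 df
    return mu, (mu - t * se_ref, mu + t * se_ref)

y = np.array([0.3, 0.1, 0.4, 0.2, 0.6])           # hypothetical study effects
v = np.array([0.04, 0.09, 0.05, 0.08, 0.06])      # hypothetical within-study variances
mu, ci = refined_meta(y, v)
```

The scaling factor widens (or narrows) the conventional interval according to how well the estimated weights explain the observed dispersion, which is what makes inference more accurate with few studies.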
A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions
Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.
2017-01-01
Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyze how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945
Friedman, David B
2012-01-01
All quantitative proteomics experiments measure variation between samples. When performing large-scale experiments that involve multiple conditions or treatments, the experimental design should include the appropriate number of individual biological replicates from each condition to enable a relevant biological signal to be distinguished from technical noise. Multivariate statistical analyses, such as principal component analysis (PCA), provide a global perspective on experimental variation, thereby enabling the assessment of whether the variation describes the expected biological signal or the unanticipated technical/biological noise inherent in the system. Examples will be shown from high-resolution multivariable DIGE experiments where PCA was instrumental in demonstrating biologically significant variation as well as sample outliers, fouled samples, and overriding technical variation that would not be readily observed using standard univariate tests.
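A minimal sketch of how PCA exposes condition-level variation in a replicated design might look like the following; the simulated "protein" matrix, the number of affected features, and the effect size are hypothetical:

```python
import numpy as np

def pca(X, k=2):
    """PCA via SVD of the column-centered data matrix (rows = samples)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                      # sample coordinates
    var_explained = s ** 2 / np.sum(s ** 2)        # fraction of variance per PC
    return scores, var_explained

rng = np.random.default_rng(1)
# hypothetical experiment: 6 biological replicates per condition, 500 features
control = rng.normal(0.0, 1.0, (6, 500))
treated = rng.normal(0.0, 1.0, (6, 500))
treated[:, :50] += 2.0                             # 50 proteins shift under treatment
scores, ve = pca(np.vstack([control, treated]))
```

Plotting `scores` would show the replicates clustering by condition along PC1, which is the global perspective on variation the abstract describes; an outlier sample would instead sit apart from its own group.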
Information extraction from multivariate images
NASA Technical Reports Server (NTRS)
Park, S. K.; Kegley, K. A.; Schiess, J. R.
1986-01-01
An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates with each pixel location a multivariate pixel value that has been scaled and quantized into a gray-level vector, and the covariance between components measures the extent to which two component images are correlated. The PCT of a multiimage decorrelates the multiimage, reducing its dimensionality and revealing its intercomponent dependencies when some off-diagonal covariance elements are not small; for the purposes of display, the principal component images must be postprocessed into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.
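The PCT step described above can be sketched as an eigendecomposition of the band covariance matrix; the synthetic three-band image below is an illustrative assumption:

```python
import numpy as np

def principal_component_transform(img):
    """Decorrelate the bands of a multiimage (H x W x B) via the PCT."""
    H, W, B = img.shape
    X = img.reshape(-1, B).astype(float)
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)                   # B x B band covariance
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1]                 # sort by decreasing variance
    pcs = Xc @ vecs[:, order]                      # decorrelated component images
    return pcs.reshape(H, W, B), vals[order]

rng = np.random.default_rng(2)
base = rng.normal(size=(32, 32))
# three strongly correlated synthetic bands sharing the same underlying scene
img = np.stack([base + 0.1 * rng.normal(size=(32, 32)) for _ in range(3)],
               axis=-1)
pcs, vals = principal_component_transform(img)
```

Because the bands are nearly redundant, almost all variance lands in the first principal component image, which is exactly the dimensionality reduction the abstract describes.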
ERIC Educational Resources Information Center
John, Lindsay Herbert
2004-01-01
The validity of a scale from the Ontario Health Survey measuring the subjective sense of well-being is examined for a large multicultural population in Metropolitan Toronto through principal components analysis with oblique rotation. Four factors are extracted. Factor 1 is a stress and strain factor, and consists of health worries, feeling…
ERIC Educational Resources Information Center
Sung, Kyung Hee; Noh, Eun Hee; Chon, Kyong Hee
2017-01-01
With increased use of constructed response items in large scale assessments, the cost of scoring has been a major consideration (Noh et al. in KICE Report RRE 2012-6, 2012; Wainer and Thissen in "Applied Measurement in Education" 6:103-118, 1993). In response to the scoring cost issues, various forms of automated system for scoring…
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulations show that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
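A simplified Monte Carlo of this kind of error analysis might run as follows, assuming a Thurstone Case V model with binomially sampled choices (the specific scaling model and parameter values are assumptions for illustration, not taken from the paper):

```python
import numpy as np
from scipy.stats import norm

def simulate_pc_scaling(true_scale, n_obs, n_trials=200, seed=0):
    """Average standard deviation of Thurstone Case V scale estimates
    recovered from binomially sampled paired-comparison choices."""
    rng = np.random.default_rng(seed)
    m = len(true_scale)
    p_true = norm.cdf(true_scale[:, None] - true_scale[None, :])
    iu = np.triu_indices(m, 1)
    errors = []
    for _ in range(n_trials):
        W = np.full((m, m), n_obs / 2.0)           # self-comparisons stay at 0.5
        W[iu] = rng.binomial(n_obs, p_true[iu])    # sampled choice counts
        W.T[iu] = n_obs - W[iu]                    # complementary counts
        # clip away 0/1 proportions before the inverse-normal transform
        p_hat = np.clip(W / n_obs, 0.5 / n_obs, 1 - 0.5 / n_obs)
        est = norm.ppf(p_hat).mean(axis=1)         # Case V scale estimates
        est -= est.mean()                          # fix the arbitrary origin
        errors.append(np.std(est - (true_scale - true_scale.mean())))
    return np.mean(errors)

scale = np.linspace(0.0, 1.5, 4)                   # four hypothetical stimuli
err_small = simulate_pc_scaling(scale, n_obs=10)
err_large = simulate_pc_scaling(scale, n_obs=200)
```

Consistent with the paper's conclusion, the recovered-scale error shrinks markedly as the sampling size grows.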
Combining Correlation Matrices: Simulation Analysis of Improved Fixed-Effects Methods
ERIC Educational Resources Information Center
Hafdahl, Adam R.
2007-01-01
The originally proposed multivariate meta-analysis approach for correlation matrices--analyze Pearson correlations, with each study's observed correlations replacing their population counterparts in its conditional-covariance matrix--performs poorly. Two refinements are considered: Analyze Fisher Z-transformed correlations, and substitute better…
The Effect of Lateral Boundary Values on Atmospheric Mercury Simulations with the CMAQ Model
Simulation results from three global-scale models of atmospheric mercury have been used to define three sets of initial condition and boundary condition (IC/BC) data for regional-scale model simulations over North America using the Community Multi-scale Air Quality (CMAQ) model. ...
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
Post-processing of multi-hydrologic model simulations for improved streamflow projections
NASA Astrophysics Data System (ADS)
khajehei, sepideh; Ahmadalipour, Ali; Moradkhani, Hamid
2016-04-01
Hydrologic model outputs are prone to bias and uncertainty due to knowledge deficiency in model and data. Uncertainty in hydroclimatic projections arises from the hydrologic model as well as the epistemic or aleatory uncertainties in GCM parameterization and development. This study is conducted to: 1) evaluate the recently developed multivariate post-processing method for historical simulations and 2) assess the effect of post-processing on uncertainty and reliability of future streamflow projections in both high-flow and low-flow conditions. The first objective is performed for the historical period of 1970-1999. Future streamflow projections are generated for 10 statistically downscaled GCMs from two widely used downscaling methods: Bias Corrected Statistically Downscaled (BCSD) and Multivariate Adaptive Constructed Analogs (MACA), over the period of 2010-2099 for two representative concentration pathways, RCP4.5 and RCP8.5. Three semi-distributed hydrologic models were employed and calibrated at 1/16 degree latitude-longitude resolution for over 100 points across the Columbia River Basin (CRB) in the Pacific Northwest, USA. Streamflow outputs are post-processed through a Bayesian framework based on copula functions. The post-processing approach relies on a transfer function developed from the bivariate joint distribution between observations and simulations in the historical period. Results show that applying the post-processing technique leads to considerably higher accuracy in historical simulations and also reduces model uncertainty in future streamflow projections.
Applying the multivariate time-rescaling theorem to neural population models
Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon
2011-01-01
Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it to test the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
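The univariate time-rescaling test that the multivariate version extends can be sketched as follows; the homogeneous Poisson rates and recording length are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def time_rescale_ks(spike_times, intensity, t_grid):
    """KS test of a point-process model via the time-rescaling theorem.
    Under a correct model, rescaled inter-spike intervals are Exp(1)."""
    # cumulative intensity Lambda(t), integrated on a fine grid
    Lam = np.concatenate([[0.0], np.cumsum(intensity[:-1] * np.diff(t_grid))])
    Lam_at_spikes = np.interp(spike_times, t_grid, Lam)
    taus = np.diff(Lam_at_spikes)                  # rescaled intervals
    u = 1.0 - np.exp(-taus)                        # should be Uniform(0,1)
    return stats.kstest(u, "uniform")

rng = np.random.default_rng(3)
rate, T = 20.0, 50.0                               # true homogeneous rate (Hz)
spikes = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))
t_grid = np.linspace(0, T, 5001)
res_good = time_rescale_ks(spikes, np.full(5001, rate), t_grid)
res_bad = time_rescale_ks(spikes, np.full(5001, 5.0), t_grid)  # misspecified rate
```

The correctly specified rate passes the KS test while the misspecified one is strongly rejected; the paper's point is that this single-train check can still pass population models that ignore between-neuron couplings.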
Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, M D; Cole, S; Frenk, C S
2011-02-14
We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires {approx}8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
Johnson, Susan L; Tabaei, Bahman P; Herman, William H
2005-02-01
To simulate the outcomes of alternative strategies for screening the U.S. population 45-74 years of age for type 2 diabetes. We simulated screening with random plasma glucose (RPG) and cut points of 100, 130, and 160 mg/dl and a multivariate equation including RPG and other variables. Over 15 years, we simulated screening at intervals of 1, 3, and 5 years. All positive screening tests were followed by a diagnostic fasting plasma glucose or an oral glucose tolerance test. Outcomes include the numbers of false-negative, true-positive, and false-positive screening tests and the direct and indirect costs. At year 15, screening every 3 years with an RPG cut point of 100 mg/dl left 0.2 million false negatives, an RPG of 130 mg/dl or the equation left 1.3 million false negatives, and an RPG of 160 mg/dl left 2.8 million false negatives. Over 15 years, the absolute difference between the most sensitive and most specific screening strategy was 4.5 million true positives and 476 million false-positives. Strategies using RPG cut points of 130 mg/dl or the multivariate equation every 3 years identified 17.3 million true positives; however, the equation identified fewer false-positives. The total cost of the most sensitive screening strategy was $42.7 billion and that of the most specific strategy was $6.9 billion. Screening for type 2 diabetes every 3 years with an RPG cut point of 130 mg/dl or the multivariate equation provides good yield and minimizes false-positive screening tests and costs.
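A toy version of such a screening simulation, with hypothetical (uncalibrated) RPG distributions and prevalence standing in for the paper's population model, might be:

```python
import numpy as np

def screen(rpg, diabetic, cutoff):
    """Classify screening outcomes for a random-plasma-glucose cutoff."""
    positive = rpg >= cutoff
    tp = int(np.sum(positive & diabetic))          # true positives
    fp = int(np.sum(positive & ~diabetic))         # false positives
    fn = int(np.sum(~positive & diabetic))         # false negatives
    tn = int(np.sum(~positive & ~diabetic))        # true negatives
    return tp, fp, fn, tn

rng = np.random.default_rng(4)
n = 100_000
diabetic = rng.random(n) < 0.10                    # assumed 10% prevalence
# hypothetical RPG distributions in mg/dl; not calibrated to any survey data
rpg = np.where(diabetic,
               rng.normal(170, 40, n),
               rng.normal(105, 20, n))
results = {c: screen(rpg, diabetic, c) for c in (100, 130, 160)}
```

As in the paper, raising the cutoff trades false positives for false negatives: the 100 mg/dl cutoff misses the fewest cases but generates by far the most false-positive tests.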
Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo
2016-01-01
An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs are the case-control design for a binary response, the case-cohort design for failure-time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate response case, statistical inference and design for ODS with multivariate cases remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and derive its asymptotic normality properties. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS or the estimator from a simple random sample with the same sample size. The Multivariate-ODS design together with the proposed estimator provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children born to the Collaborative Perinatal Study. PMID:27966260
A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator
Engelmann, Christian; Naughton, III, Thomas J.
2016-03-22
Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
Multivariate Analysis of the Visual Information Processing of Numbers
ERIC Educational Resources Information Center
Levine, David M.
1977-01-01
Nonmetric multidimensional scaling and hierarchical clustering procedures are applied to a confusion matrix of numerals. Two dimensions were interpreted: straight versus curved, and locus of curvature. Four major clusters of numerals were developed. (Author/JKS)
Estimating the decomposition of predictive information in multivariate systems
NASA Astrophysics Data System (ADS)
Faes, Luca; Kugiumtzis, Dimitris; Nollo, Giandomenico; Jurysta, Fabrice; Marinazzo, Daniele
2015-03-01
In the study of complex systems from observed multivariate time series, the evolution of one system under investigation can be explained by the information storage of the system and the information transfer from other interacting systems. We present a framework for the model-free estimation of information storage and information transfer computed as the terms composing the predictive information about the target of a multivariate dynamical process. The approach tackles the curse of dimensionality employing a nonuniform embedding scheme that selects progressively, among the past components of the multivariate process, only those that contribute most, in terms of conditional mutual information, to the present target process. Moreover, it computes all information-theoretic quantities using a nearest-neighbor technique designed to compensate the bias due to the different dimensionality of individual entropy terms. The resulting estimators of prediction entropy, storage entropy, transfer entropy, and partial transfer entropy are tested on simulations of coupled linear stochastic and nonlinear deterministic dynamic processes, demonstrating the superiority of the proposed approach over the traditional estimators based on uniform embedding. The framework is then applied to multivariate physiologic time series, resulting in physiologically well-interpretable information decompositions of cardiovascular and cardiorespiratory interactions during head-up tilt and of joint brain-heart dynamics during sleep.
Voxelwise multivariate analysis of multimodality magnetic resonance imaging.
Naylor, Melissa G; Cardenas, Valerie A; Tosun, Duygu; Schuff, Norbert; Weiner, Michael; Schwartzman, Armin
2014-03-01
Most brain magnetic resonance imaging (MRI) studies concentrate on a single MRI contrast or modality, frequently structural MRI. By performing an integrated analysis of several modalities, such as structural, perfusion-weighted, and diffusion-weighted MRI, new insights may be attained to better understand the underlying processes of brain diseases. We compare two voxelwise approaches: (1) fitting multiple univariate models, one for each outcome and then adjusting for multiple comparisons among the outcomes and (2) fitting a multivariate model. In both cases, adjustment for multiple comparisons is performed over all voxels jointly to account for the search over the brain. The multivariate model is able to account for the multiple comparisons over outcomes without assuming independence because the covariance structure between modalities is estimated. Simulations show that the multivariate approach is more powerful when the outcomes are correlated and, even when the outcomes are independent, the multivariate approach is just as powerful or more powerful when at least two outcomes are dependent on predictors in the model. However, multiple univariate regressions with Bonferroni correction remain a desirable alternative in some circumstances. To illustrate the power of each approach, we analyze a case control study of Alzheimer's disease, in which data from three MRI modalities are available. Copyright © 2013 Wiley Periodicals, Inc.
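A minimal sketch contrasting the two strategies at a single "voxel" with three correlated outcomes follows; the covariance structure, mean shift, and sample sizes are illustrative assumptions, and Hotelling's T^2 stands in for the multivariate model:

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 test for a multivariate mean difference."""
    n1, p = X.shape
    n2 = Y.shape[0]
    d = X.mean(axis=0) - Y.mean(axis=0)
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)  # pooled covariance
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    f = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))          # F transformation
    return stats.f.sf(f, p, n1 + n2 - p - 1)

rng = np.random.default_rng(5)
p_dim, n = 3, 50
cov = 0.6 * np.ones((p_dim, p_dim)) + 0.4 * np.eye(p_dim)     # correlated modalities
X = rng.multivariate_normal(np.zeros(p_dim), cov, n)
Y = rng.multivariate_normal([0.5, -0.5, 0.0], cov, n)         # opposite shifts
p_multi = hotelling_t2(X, Y)
# univariate alternative: per-outcome t-tests with Bonferroni correction
p_uni = min(1.0, min(stats.ttest_ind(X[:, j], Y[:, j]).pvalue
                     for j in range(p_dim)) * p_dim)
```

Because the group difference here runs against the outcome correlation, the multivariate test typically attains a much smaller p-value than the Bonferroni-corrected univariate tests, illustrating the power gain the abstract reports for dependent outcomes.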
Mapping Informative Clusters in a Hierarchical Framework of fMRI Multivariate Analysis
Xu, Rui; Zhen, Zonglei; Liu, Jia
2010-01-01
Pattern recognition methods have become increasingly popular in fMRI data analysis, which are powerful in discriminating between multi-voxel patterns of brain activities associated with different mental states. However, when they are used in functional brain mapping, the location of discriminative voxels varies significantly, raising difficulties in interpreting the locus of the effect. Here we proposed a hierarchical framework of multivariate approach that maps informative clusters rather than voxels to achieve reliable functional brain mapping without compromising the discriminative power. In particular, we first searched for local homogeneous clusters that consisted of voxels with similar response profiles. Then, a multi-voxel classifier was built for each cluster to extract discriminative information from the multi-voxel patterns. Finally, through multivariate ranking, outputs from the classifiers were served as a multi-cluster pattern to identify informative clusters by examining interactions among clusters. Results from both simulated and real fMRI data demonstrated that this hierarchical approach showed better performance in the robustness of functional brain mapping than traditional voxel-based multivariate methods. In addition, the mapped clusters were highly overlapped for two perceptually equivalent object categories, further confirming the validity of our approach. In short, the hierarchical framework of multivariate approach is suitable for both pattern classification and brain mapping in fMRI studies. PMID:21152081
[Depressive symptoms as a risk factor for dependence in elderly people].
Avila-Funes, José Alberto; Melano-Carranza, Efrén; Payette, Hélène; Amieva, Hélène
2007-01-01
To determine the relationship between depressive symptoms and dependence in activities of daily living. Participants, aged 70 to 104 (n = 1,880), were evaluated twice (2001 and 2003). Depressive symptoms were established by a modified version of the Center for Epidemiologic Studies Depression scale, whereas functional dependence was assessed with the Lawton & Brody and Katz scales. Dependence was defined as requiring the attendance and assistance of another person to accomplish the activity. Multivariate regression analyses were used to determine the effect of depressive symptoms on incident dependence. At baseline, 37.9% had depressive symptoms. After two years, 6.1% and 12.7% developed functional dependence for one or more ADL and IADL, respectively. Multivariate analyses showed that depressive symptoms were a risk factor for the development of functional dependence only for the instrumental activities of daily living. Depressive symptoms are a risk factor for functional dependence. Systematic screening therefore seems necessary in the evaluation of geriatric patients.
Scaling and pedotransfer in numerical simulations of flow and transport in soils
USDA-ARS?s Scientific Manuscript database
Flow and transport parameters of soils in numerical simulations need to be defined at the support scale of computational grid cells. Such support scale can substantially differ from the support scale in laboratory or field measurements of flow and transport parameters. The scale-dependence of flow a...
NASA Astrophysics Data System (ADS)
Cannon, Alex
2017-04-01
Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. 
However, it is fundamentally a univariate technique, and cannot incorporate information from additional covariates, for example ENSO state or physiographic controls on extreme rainfall within a region. Here, the univariate MQR model is extended to allow the use of multiple covariates. Multivariate monotone quantile regression (MMQR) is based on a single hidden-layer feedforward network with the quantile regression error function and partial monotonicity constraints. The MMQR model is demonstrated via Monte Carlo simulations and the estimation and visualization of regional trends in moderate rainfall extremes based on homogenized sub-daily precipitation data at stations in Canada.
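A toy analogue of monotone quantile regression can be sketched with a linear conditional quantile fitted by the pinball loss, with the nondecreasing-trend constraint imposed by clipping the slope (a deliberate simplification of the constrained-network MMQR described above; the synthetic rainfall-extreme series is hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def pinball(theta, t, y, tau):
    """Pinball (check) loss for a linear conditional quantile a + b*t."""
    r = y - (theta[0] + theta[1] * t)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

def monotone_qr(t, y, tau=0.95):
    """Linear quantile regression with a nondecreasing trend; monotonicity
    is enforced here simply by clipping the fitted slope at zero."""
    res = minimize(pinball, x0=np.array([np.quantile(y, tau), 0.0]),
                   args=(t, y, tau), method="Nelder-Mead")
    a, b = res.x
    return a, max(b, 0.0)

rng = np.random.default_rng(7)
t = rng.uniform(0, 100, 300)                       # e.g. years of record
y = rng.gumbel(loc=10 + 0.05 * t, scale=2.0)       # upward-trending extremes
a, b = monotone_qr(t, y, tau=0.95)
```

Fitting the 0.95 quantile directly, rather than precomputing sample quantiles, is what lets quantile regression use all the rainfall data in a region, as the abstract notes.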
Analysing the teleconnection systems affecting the climate of the Carpathian Basin
NASA Astrophysics Data System (ADS)
Kristóf, Erzsébet; Bartholy, Judit; Pongrácz, Rita
2017-04-01
Nowadays, the increase of the global average near-surface air temperature is unequivocal. Atmospheric low-frequency variabilities have substantial impacts on climate variables such as air temperature and precipitation. Therefore, assessing their effects is essential to improve global and regional climate model simulations for the 21st century. The North Atlantic Oscillation (NAO) is one of the best-known atmospheric teleconnection patterns affecting the Carpathian Basin in Central Europe. Besides NAO, we aim to analyse other interannual-to-decadal teleconnection patterns, which might have significant impacts on the Carpathian Basin, namely, the East Atlantic/West Russia pattern, the Scandinavian pattern, the Mediterranean Oscillation, and the North-Sea Caspian Pattern. For this purpose, primarily the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-20C atmospheric reanalysis dataset and multivariate statistical methods are used. The indices of each teleconnection pattern and their correlations with temperature and precipitation will be calculated for the period of 1961-1990. On the basis of these data, the long-range (i.e., seasonal and/or annual scale) forecast ability is first evaluated. Then, we aim to calculate the same indices of the relevant teleconnection patterns for the historical and future simulations of Coupled Model Intercomparison Project Phase 5 (CMIP5) models and compare them against each other using statistical methods. Our ultimate goal is to examine all available CMIP5 models and evaluate their abilities to reproduce the selected teleconnection systems. Thus, climate predictions for the 21st century for the Carpathian Basin may be improved using the best-performing models among all CMIP5 model simulations.
Heuristics to Facilitate Understanding of Discriminant Analysis.
ERIC Educational Resources Information Center
Van Epps, Pamela D.
This paper discusses the principles underlying discriminant analysis and constructs a simulated data set to illustrate its methods. Discriminant analysis is a multivariate technique for identifying the best combination of variables to maximally discriminate between groups. Discriminant functions are established on existing groups and used to…
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
NASA Astrophysics Data System (ADS)
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters, and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we have shown that, when using the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm for fitting the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
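The three critical elements the abstract names (starting values, hyperparameters, tuning parameters) can be illustrated on a deliberately simplified toy problem. The sketch below is a hypothetical univariate Poisson example, not the MPL model itself: the MLE of a plain Poisson rate plays the role of the UPM starting value, and the random-walk proposal scale plays the role of the tuning parameter governing the acceptance rate.

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.poisson(lam=3.0, size=400)          # synthetic count data

# Critical element 1: starting value taken from the univariate Poisson MLE
lam_start = y.mean()

def log_post(log_lam):
    # Poisson log-likelihood with a flat prior on log(lambda)
    lam = np.exp(log_lam)
    return y.sum() * log_lam - y.size * lam

# Critical element 3: the random-walk proposal scale (tuning parameter)
# is chosen to keep the acceptance rate moderate.
def metropolis(log_post, start, n_iter=5000, step=0.05):
    chain = np.empty(n_iter)
    cur, cur_lp = start, log_post(start)
    n_accept = 0
    for i in range(n_iter):
        prop = cur + rng.normal(0.0, step)
        prop_lp = log_post(prop)
        if np.log(rng.uniform()) < prop_lp - cur_lp:
            cur, cur_lp = prop, prop_lp
            n_accept += 1
        chain[i] = cur
    return chain, n_accept / n_iter

chain, acc_rate = metropolis(log_post, start=np.log(lam_start))
lam_hat = np.exp(chain[1000:]).mean()       # posterior mean after burn-in
```

Starting the chain at the MLE shortens burn-in considerably; with an arbitrary starting value, many more iterations would be needed before the chain reaches the posterior bulk, which is the point the paper makes about the 20,000-iteration requirement.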
A global × global test for testing associations between two large sets of variables.
Chaturvedi, Nimisha; de Menezes, Renée X; Goeman, Jelle J
2017-01-01
In high-dimensional omics studies where multiple molecular profiles are obtained for each set of patients, there is often interest in identifying complex multivariate associations, for example, copy number regulated expression levels in a certain pathway or in a genomic region. To detect such associations, we present a novel approach to test for association between two sets of variables. Our approach generalizes the global test, which tests for association between a group of covariates and a single univariate response, to allow a high-dimensional multivariate response. We apply the method to several simulated datasets as well as two publicly available datasets, where we compare the performance of the multivariate global test (G2) with the univariate global test. The method is implemented in R and will be available as part of the globaltest package. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
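A rough intuition for testing association between two whole sets of variables can be sketched with a permutation test on the summed squared cross-covariances between the sets. This is an illustrative stand-in on synthetic data, not the globaltest/G2 implementation described above:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p, q = 100, 30, 20                        # samples, sizes of the two sets
X = rng.normal(size=(n, p))                  # e.g. copy number profiles
Y = 0.5 * X[:, :q] + rng.normal(size=(n, q)) # e.g. expression, with signal

def global_stat(X, Y):
    # Sum of squared cross-covariances between the two variable sets;
    # large values indicate overall multivariate association.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / len(X)
    return float((C ** 2).sum())

obs = global_stat(X, Y)
# Null distribution: permute patients in one set to break any association
null = [global_stat(X, Y[rng.permutation(n)]) for _ in range(500)]
p_value = (1 + sum(s >= obs for s in null)) / (1 + len(null))
```

Because the statistic pools evidence across all p × q variable pairs, it can detect diffuse association that per-variable univariate tests would miss after multiplicity correction.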
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
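Granger's principle of predictability, on which the approach above rests, can be sketched in its simplest time-domain form: fit autoregressive models with and without a candidate driving signal and compare prediction-error variances. The bivariate AR(1) system below is a minimal synthetic illustration, not the signal-decomposition-based, frequency-selective method the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate system in which x1 drives x2, but not vice versa
T = 2000
x1 = np.zeros(T)
x2 = np.zeros(T)
e = rng.normal(size=(T, 2))
for t in range(1, T):
    x1[t] = 0.5 * x1[t - 1] + e[t, 0]
    x2[t] = 0.5 * x2[t - 1] + 0.4 * x1[t - 1] + e[t, 1]

def ar_resid_var(target, predictors, order=1):
    # Least-squares AR fit; returns the residual (prediction error) variance
    Z = np.column_stack([p[order - 1 - lag: len(p) - 1 - lag]
                         for p in predictors for lag in range(order)])
    yv = target[order:]
    beta, *_ = np.linalg.lstsq(Z, yv, rcond=None)
    return np.var(yv - Z @ beta)

# Granger Causality Index: log ratio of restricted vs. full model
# prediction-error variances (positive when the driver adds predictability)
gci_1to2 = np.log(ar_resid_var(x2, [x2]) / ar_resid_var(x2, [x2, x1]))
gci_2to1 = np.log(ar_resid_var(x1, [x1]) / ar_resid_var(x1, [x1, x2]))
```

In the paper's method, the same comparison is carried out after specific spectral components of the series have been cancelled by a signal decomposition, which is what makes the resulting index frequency-selective.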
Shen, Yanna; Cooper, Gregory F
2012-09-01
This paper investigates Bayesian modeling of known and unknown causes of events in the context of disease-outbreak detection. We introduce a multivariate Bayesian approach that models multiple evidential features of every person in the population. This approach models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities and (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities. We report the results of simulation experiments which indicate that this modeling method can improve the detection of new disease outbreaks in a population. A contribution of this paper is that it introduces a multivariate Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has general applicability in domains where the space of known causes is incomplete. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Efficient Global Aerodynamic Modeling from Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2012-01-01
A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.
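At its core, identifying a multivariate polynomial aerodynamic model from flight data reduces to linear least squares over candidate regressor terms. The sketch below uses a hypothetical pitching-moment dependency on angle of attack and elevator deflection with synthetic "flight data"; the variable names and true coefficients are illustrative assumptions, not the study's actual model structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flight data: angle of attack and elevator deflection (deg)
alpha = rng.uniform(-5, 15, 500)
delta_e = rng.uniform(-10, 10, 500)
# Assumed true pitching-moment coefficient plus measurement noise
Cm = (0.02 - 0.015 * alpha - 0.01 * delta_e + 4e-4 * alpha**2
      + rng.normal(0, 0.002, 500))

# Candidate multivariate polynomial regressors
terms = {
    "1": np.ones_like(alpha),
    "alpha": alpha,
    "delta_e": delta_e,
    "alpha^2": alpha**2,
    "alpha*delta_e": alpha * delta_e,
}
A = np.column_stack(list(terms.values()))
coef, *_ = np.linalg.lstsq(A, Cm, rcond=None)
model = dict(zip(terms, coef))   # identified coefficient per term
```

The techniques in the paper go further by orthogonalizing the regressors and using statistical modeling metrics to decide which candidate terms belong in the model structure, rather than fixing the term set in advance as done here.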
User Selection Criteria of Airspace Designs in Flexible Airspace Management
NASA Technical Reports Server (NTRS)
Lee, Hwasoo E.; Lee, Paul U.; Jung, Jaewoo; Lai, Chok Fung
2011-01-01
Keiski, Michelle A; Shore, Douglas L; Hamilton, Joanna M; Malec, James F
2015-04-01
The purpose of this study was to characterize the operating characteristics of the Personality Assessment Inventory (PAI) validity scales in distinguishing simulators feigning symptoms of traumatic brain injury (TBI) while completing the PAI (n = 84) from a clinical sample of patients with TBI who achieved adequate scores on performance validity tests (n = 112). The simulators were divided into two groups: (a) Specific Simulators feigning cognitive and somatic symptoms only or (b) Global Simulators feigning cognitive, somatic, and psychiatric symptoms. The PAI overreporting scales were indeed sensitive to the simulation of TBI symptoms in this analogue design. However, these scales were less sensitive to the feigning of somatic and cognitive TBI symptoms than the feigning of a broad range of cognitive, somatic, and emotional symptoms often associated with TBI. The relationships of TBI simulation to consistency and underreporting scales are also explored. © The Author(s) 2014.
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid scale parameterizations. This leads to the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
Dislocation dynamics simulations of plasticity at small scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Caizhi
2010-01-01
As metallic structures and devices are being created on a dimension comparable to the length scales of the underlying dislocation microstructures, their mechanical properties change drastically. Since such small structures are increasingly common in modern technologies, there is an emergent need to understand the critical roles of elasticity, plasticity, and fracture in small structures. Dislocation dynamics (DD) simulations, in which the dislocations are the simulated entities, offer a way to extend length scales beyond those of atomistic simulations, and the results from DD simulations can be directly compared with micromechanical tests. The primary objective of this research is to use 3-D DD simulations to study the plastic deformation of nano- and micro-scale materials and understand the correlation between dislocation motion, interactions, and the mechanical response. Specifically, to identify what critical events (i.e., dislocation multiplication, cross-slip, storage, nucleation, junction and dipole formation, pinning, etc.) determine the deformation response, how these change from bulk behavior as the system decreases in size, and to correlate and improve our current knowledge of bulk plasticity with the knowledge gained from direct observations of small-scale plasticity. Our simulation results on single-crystal micropillars and polycrystalline thin films match the experimental results well and capture the essential features of small-scale plasticity. Furthermore, several simple and accurate models have been developed following our simulation results and can reasonably predict the plastic behavior of small-scale materials.
NASA Astrophysics Data System (ADS)
Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.
2018-01-01
This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km² Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables the simulation of the impact on N-loads of applying a spatially targeted regulation at the Baltic Sea basin scale to the correct order-of-magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular, for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular, for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both the accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
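The contrast between equal weighting without bias correction (EWA_NBC) and skill-weighted averaging with bias correction can be sketched on synthetic data. The inverse-RMSE weights below are a simplified stand-in, assumed for illustration, rather than the exact RMSE/correlation (WEA_RAC) or Taylor-score (WEA_Tay) weights used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n, train = 200, 100
truth = rng.normal(size=n)                 # pseudo "truth" series
# Three pseudo ensemble members with different noise levels and biases
members = [truth + rng.normal(0, s, size=n) + b
           for s, b in [(0.3, 0.0), (0.6, 0.5), (1.0, -0.8)]]

def weighted_ensemble(members, truth, train):
    # Bias-correct each member over the training period, then combine
    # members with inverse-RMSE weights estimated on the same period
    weights, corrected = [], []
    for m in members:
        mc = m - (m[:train] - truth[:train]).mean()   # bias correction
        rmse = np.sqrt(((mc[:train] - truth[:train]) ** 2).mean())
        weights.append(1.0 / rmse)
        corrected.append(mc[train:])
    w = np.array(weights) / np.sum(weights)
    return sum(wi * ci for wi, ci in zip(w, corrected))

ens = weighted_ensemble(members, truth, train)
ewa = sum(m[train:] for m in members) / len(members)  # EWA_NBC baseline
err_wea = np.sqrt(((ens - truth[train:]) ** 2).mean())
err_ewa = np.sqrt(((ewa - truth[train:]) ** 2).mean())
```

With systematically biased members, the uncorrected equal-weight mean carries the residual bias and the noise of the worst member, whereas the bias-corrected, skill-weighted combination down-weights poor members, mirroring the study's finding that weighting matters most for biased PSD categories.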
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Bo; Fox, Rodney O.; Feng, Heng
An Euler–Euler anisotropic Gaussian approach (EE-AG) for simulating gas–particle flows, in which particle velocities are assumed to follow a multivariate anisotropic Gaussian distribution, is used to perform mesoscale simulations of homogeneous cluster-induced turbulence (CIT). A three-dimensional Gauss–Hermite quadrature formulation is used to calculate the kinetic flux for 10 velocity moments in a finite-volume framework. The particle-phase volume-fraction and momentum equations are coupled with the Eulerian solver for the gas phase. This approach is implemented in an open-source CFD package, OpenFOAM, and detailed simulation results are compared with previous Euler–Lagrange simulations in a domain size study of CIT. Here, these results demonstrate that the proposed EE-AG methodology is able to produce comparable results to EL simulations, and this moment-based methodology can be used to perform accurate mesoscale simulations of dilute gas–particle flows.
NASA Astrophysics Data System (ADS)
Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.
2018-03-01
This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. 
The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone column burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter where they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8 h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.
Development and validation of the Simulation Learning Effectiveness Scale for nursing students.
Pai, Hsiang-Chu
2016-11-01
To develop and validate the Simulation Learning Effectiveness Scale, which is based on Bandura's social cognitive theory. A simulation programme is a significant teaching strategy for nursing students. Nevertheless, there are few evidence-based instruments that validate the effectiveness of simulation learning in Taiwan. This is a quantitative descriptive design. In Study 1, a nonprobability convenience sample of 151 student nurses completed the Simulation Learning Effectiveness Scale. Exploratory factor analysis was used to examine the factor structure of the instrument. In Study 2, which involved 365 student nurses, confirmatory factor analysis and structural equation modelling were used to analyse the construct validity of the Simulation Learning Effectiveness Scale. In Study 1, exploratory factor analysis yielded three components: self-regulation, self-efficacy and self-motivation. The three factors explained 29.09, 27.74 and 19.32% of the variance, respectively. The final 12-item instrument with the three factors explained 76.15% of the variance. Cronbach's alpha was 0.94. In Study 2, confirmatory factor analysis identified a second-order factor termed Simulation Learning Effectiveness Scale. Goodness-of-fit indices showed an acceptable fit overall with the full model (χ²/df(51) = 3.54, comparative fit index = 0.96, Tucker-Lewis index = 0.95 and standardised root-mean-square residual = 0.035). In addition, teacher's competence was found to encourage learning, and self-reflection and insight were significantly and positively associated with the Simulation Learning Effectiveness Scale. Teacher's competence in encouraging learning was also significantly and positively associated with self-reflection and insight. Overall, these variables explained 21.9% of the variance in students' learning effectiveness. The Simulation Learning Effectiveness Scale is a reliable and valid means to assess simulation learning effectiveness for nursing students.
The Simulation Learning Effectiveness Scale can be used to examine nursing students' learning effectiveness and serve as a basis to improve student's learning efficiency through simulation programmes. Future implementation research that focuses on the relationship between learning effectiveness and nursing competence in nursing students is recommended. © 2016 John Wiley & Sons Ltd.
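The internal-consistency figure reported above (Cronbach's alpha) follows a standard formula that is easy to compute directly from an items-by-respondents matrix. The sketch below uses synthetic item responses driven by a single latent factor, not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of scale responses.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                      # common factor
responses = latent + 0.5 * rng.normal(size=(300, 12))   # 12 correlated items
alpha = cronbach_alpha(responses)
```

Because all 12 synthetic items share the same latent factor with modest noise, alpha comes out high, analogous to the 0.94 reported for the 12-item instrument.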
Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A
2014-08-01
Accurate understanding of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reported the first combined use of GC separation and partial-least-squares (PLS1) multivariate regression analysis of petrochemical structure-activity relationships (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with the results of the traditional simulated distillation method of BP determination. The developed PLS1 regression was able to correctly predict analyte BPs in the D3710 and MA VHP calibration gas mix samples, with root-mean-square percent relative errors (RMS%RE) of 6.4% and 10.8%, respectively. In contrast, the overall RMS%RE of 32.9% and 40.4%, respectively, obtained for BP determination in D3710 and MA VHP using the traditional simulated distillation method, were approximately four times larger than the corresponding RMS%RE of BP prediction using multivariate regression analysis (MRA), demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can potentially be used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.
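A PLS1 regression of the kind described can be sketched with a NIPALS-style implementation. The descriptor matrix, coefficients, and boiling points below are synthetic stand-ins for the study's GC/SAR data, and the RMS%RE computation mirrors the error metric quoted in the abstract:

```python
import numpy as np

def pls1_fit(X, y, n_comp=3):
    # NIPALS-style PLS1: returns regression coefficients and intercept
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)          # weight vector
        t = Xc @ w                      # score vector
        tt = t @ t
        p = Xc.T @ t / tt               # X loading
        q = (yc @ t) / tt               # y loading
        Xc = Xc - np.outer(t, p)        # deflate
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, y.mean() - X.mean(axis=0) @ B

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))            # hypothetical GC/SAR descriptors
bp = 150 + X[:, :3] @ np.array([30.0, -15.0, 10.0]) + rng.normal(0, 2, 80)

B, b0 = pls1_fit(X[:60], bp[:60])        # train on 60 analytes
pred = X[60:] @ B + b0                   # predict the held-out 20
rms_re = 100 * np.sqrt((((pred - bp[60:]) / bp[60:]) ** 2).mean())
```

On this synthetic data the held-out RMS%RE lands in the low single digits, illustrating the kind of relative-error figure the study reports for the PLS1 approach.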
Yan, En-Rong; Yang, Xiao-Dong; Chang, Scott X.; Wang, Xi-Hua
2013-01-01
Understanding how plant trait-species abundance relationships change with a range of single and multivariate environmental properties is crucial for explaining species abundance and rarity. In this study, the abundance of 94 woody plant species was examined and related to 15 plant leaf and wood traits at both local and landscape scales involving 31 plots in subtropical forests in eastern China. Further, plant trait-species abundance relationships were related to a range of single and multivariate (PCA axes) environmental properties such as air humidity, soil moisture content, soil temperature, soil pH, and soil organic matter, nitrogen (N) and phosphorus (P) contents. At the landscape scale, plant maximum height, and twig and stem wood densities were positively correlated, whereas mean leaf area (MLA), leaf N concentration (LN), and total leaf area per twig size (TLA) were negatively correlated with species abundance. At the plot scale, plant maximum height, leaf and twig dry matter contents, twig and stem wood densities were positively correlated, but MLA, specific leaf area, LN, leaf P concentration and TLA were negatively correlated with species abundance. Plant trait-species abundance relationships shifted over the range of seven single environmental properties and along multivariate environmental axes in a similar way. In conclusion, strong relationships between plant traits and species abundance existed among and within communities. Significant shifts in plant trait-species abundance relationships in a range of environmental properties suggest strong environmental filtering processes that influence species abundance and rarity in the studied subtropical forests. PMID:23560114
Bryan, Craig J; Kanzler, Kathryn E; Grieser, Emily; Martinez, Annette; Allison, Sybil; McGeary, Donald
2017-03-01
Research in psychiatric outpatient and inpatient populations supports the utility of the Suicide Cognitions Scale (SCS) as an indicator of current and future risk for suicidal thoughts and behaviors. Designed to assess suicide-specific thoughts and beliefs, the SCS has yet to be evaluated among chronic pain patients, a group with elevated risk for suicide. The purpose of the present study was to develop and test a shortened version of the SCS (the SCS-S). A total of 228 chronic pain patients completed a battery of self-report surveys before or after a scheduled appointment at one of three outpatient medical clinics (pain medicine, orofacial pain, and clinical health psychology). Analyses included confirmatory factor analysis, multivariate regression, and graded item response theory models. Results of the CFAs suggested that a 3-factor solution was optimal. A shortened 9-item scale was identified based on the results of the graded item response theory analyses. Correlation and multivariate analyses supported the construct and incremental validity of the SCS-S. Results support the reliability and validity of the SCS-S among chronic pain patients and suggest the scale may be a useful method for identifying high-risk patients in medical settings. © 2016 World Institute of Pain.
NASA Technical Reports Server (NTRS)
Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.
1980-01-01
NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.
2010-12-01
computers in 1953. HIL motion simulators were also built for the dynamic testing of vehicle components (e.g. suspensions, bodies) with hydraulic or...complex, comprehensive mechanical systems can be simulated in real-time by parallel computers; examples include multibody systems, brake systems...hard constraints in a multivariable control framework. And the third aspect is the ability to perform online optimization. These aspects result in
Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry
2013-01-01
The number and variety of connectivity estimation methods is likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, upon which to base direct comparisons, can be difficult to derive. Here, we explicitly define "effective connectivity" using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions is generated from the common equation. This simulated data explicitly contains the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.
1996-04-01
This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
Amirian, Mohammad-Elyas; Fazilat-Pour, Masoud
2016-08-01
The present study examined simple and multivariate relationships of spiritual intelligence with general health and happiness. The method employed was descriptive and correlational. King's Spiritual Quotient scale, the GHQ-28, and the Oxford Happiness Inventory were completed by a sample of 384 students, selected using stratified random sampling from the students of Shahid Bahonar University of Kerman. Data were subjected to descriptive and inferential statistics, including correlations and multivariate regressions. Bivariate correlations support the positive and significant predictive value of spiritual intelligence for general health and happiness. Further analysis showed that among the spiritual intelligence subscales, existential critical thinking predicted general health and happiness inversely. In addition, happiness was positively predicted by generation of personal meaning and transcendental awareness. The findings are discussed in line with previous studies and the relevant theoretical background.
Multivariable bio-inspired photonic sensors for non-condensable gases
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Karker, Nicholas; Carpenter, Michael A.; Minnick, Andrew
2018-02-01
Existing gas sensors often lose their measurement accuracy in practical field applications. To mitigate this significant problem, here, we report a demonstration of fabricated multivariable photonic sensors inspired by a known nanostructure of Morpho butterfly scales for detection of exemplary non-condensable gases such as H2, CO, and CO2. We fabricated bio-inspired nanostructures using conventional photolithography and chemical etching and detected individual gases that were difficult or unrealistic to detect using natural Morpho nanostructures. Such bio-inspired gas sensors are the critical step in the development of new sensors with improved accuracy for diverse operational scenarios. While this report is our initial demonstration of responses of fabricated multivariable sensors to individual gases in pristine laboratory conditions, it is a significant milestone in understanding the next steps toward field tests and practical applications of these sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems and a novel output feedback m-block backstepping controller with linear estimator is designed, where gradient descent optimization is used to tune the design parameters of the controller. It has been shown that the trajectories of the closed-loop stochastic systems are bounded in probability sense and the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed and the effectiveness of the proposed method has been demonstrated using a simulated example.
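As a toy analogue of the gradient-descent parameter tuning described above (not the paper's m-block backstepping design for bilinear stochastic systems), one can tune a scalar feedback gain by finite-difference gradient descent on the closed-loop output variance, with a fixed noise sequence so the objective is deterministic:

```python
import random

def output_variance(k, noise, a=0.9, b=1.0):
    # Closed-loop scalar system x_{t+1} = (a - b*k) * x_t + w_t,
    # driven by a fixed noise sequence (common random numbers).
    x, s1, s2 = 0.0, 0.0, 0.0
    for w in noise:
        x = (a - b * k) * x + w
        s1 += x
        s2 += x * x
    n = len(noise)
    return s2 / n - (s1 / n) ** 2

rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(5000)]

k, step, eps = 0.0, 0.01, 1e-3
for _ in range(300):
    grad = (output_variance(k + eps, noise)
            - output_variance(k - eps, noise)) / (2 * eps)
    k -= step * grad  # descend toward the variance-minimizing gain

print(round(k, 3))  # the minimizer is near a/b = 0.9 (deadbeat closed loop)
```

The tuned gain cancels the open-loop dynamics, leaving the output variance close to that of the driving noise alone.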
Liu, Siwei; Molenaar, Peter C M
2014-12-01
This article introduces iVAR, an R program for imputing missing data in multivariate time series on the basis of vector autoregressive (VAR) models. We conducted a simulation study to compare iVAR with three methods for handling missing data: listwise deletion, imputation with sample means and variances, and multiple imputation ignoring time dependency. The results showed that iVAR produces better estimates for the cross-lagged coefficients than do the other three methods. We demonstrate the use of iVAR with an empirical example of time series electrodermal activity data and discuss the advantages and limitations of the program.
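The idea behind VAR-based imputation can be sketched in one dimension with a zero-mean AR(1) model. This is a hedged illustration of the approach, not the iVAR program itself (which fits full multivariate VAR models):

```python
def impute_ar1(series):
    """Fill gaps (None) using a lag-1 autoregression fitted to the
    complete adjacent pairs. A zero-mean AR(1) sketch of the VAR idea."""
    pairs = [(series[t - 1], series[t]) for t in range(1, len(series))
             if series[t - 1] is not None and series[t] is not None]
    phi = (sum(prev * cur for prev, cur in pairs)
           / sum(prev * prev for prev, _ in pairs))  # OLS, no intercept
    out = list(series)
    for t in range(1, len(out)):
        if out[t] is None and out[t - 1] is not None:
            out[t] = phi * out[t - 1]  # one-step-ahead prediction
    return phi, out

# Noise-free AR(1) with phi = 0.8 and one missing value.
series = [1.0, 0.8, 0.64, 0.512, None, 0.32768]
phi, filled = impute_ar1(series)
print(round(phi, 3), round(filled[4], 4))  # 0.8 0.4096
```

Because the example is noise-free, the fitted coefficient recovers the generating value exactly and the imputed point lies on the true trajectory; with noisy data the imputation inherits the estimator's sampling error.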
NASA Technical Reports Server (NTRS)
Balakrishna, S.; Goglia, G. L.
1979-01-01
The details of the efforts to synthesize a control-compatible multivariable model of a liquid nitrogen cooled, gaseous nitrogen operated, closed circuit, cryogenic pressure tunnel are presented. The synthesized model was transformed into a real-time cryogenic tunnel simulator, and this model is validated by comparing the model responses to the actual tunnel responses of the 0.3 m transonic cryogenic tunnel, using the quasi-steady-state and the transient responses of the model and the tunnel. The global nature of the simple, explicit, lumped multivariable model of a closed circuit cryogenic tunnel is demonstrated.
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek
2012-01-01
As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. 
Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single-variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p value, no matter how many loci. Copyright © 2013 S. Karger AG, Basel.
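The LTTae,NGS builds on the classic linear trend test for association. The sketch below implements only the base Cochran-Armitage trend statistic, not the misclassification-allowing extension; the genotype counts are hypothetical.

```python
import math

def trend_z(cases, controls, scores):
    """Cochran-Armitage trend statistic; approximately standard normal
    under the null hypothesis of no trend across the score categories."""
    R = sum(cases)       # total cases
    S = sum(controls)    # total controls
    N = R + S
    cols = [c + d for c, d in zip(cases, controls)]  # column totals
    T = sum(t * (c * S - d * R)
            for t, c, d in zip(scores, cases, controls))
    var = (R * S / N) * (
        sum(t * t * n * (N - n) for t, n in zip(scores, cols))
        - 2 * sum(scores[i] * scores[j] * cols[i] * cols[j]
                  for i in range(len(cols))
                  for j in range(i + 1, len(cols))))
    return T / math.sqrt(var)

# Hypothetical genotype dosages 0/1/2: cases enriched at high dosage.
z = trend_z(cases=[10, 20, 30], controls=[30, 20, 10], scores=[0, 1, 2])
print(round(z, 3))  # ≈ 4.472, a strong positive trend
```

The extension described in the abstract modifies the expected counts to allow for differential sequencing error in cases versus controls, which the base statistic above does not attempt.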
Simulation of Mesoscale Cellular Convection in Marine Stratocumulus. Part I: Drizzling Conditions
Zhou, Xiaoli; Ackerman, Andrew S.; Fridlind, Ann M.; ...
2018-01-01
This study uses eddy-permitting simulations to investigate the mechanisms that promote mesoscale variability of moisture in drizzling stratocumulus-topped marine boundary layers. Simulations show that precipitation tends to increase horizontal scales. Analysis of terms in the prognostic equation for total water mixing ratio variance indicates that moisture stratification plays a leading role in setting horizontal scales. This result is supported by simulations in which horizontal mean thermodynamic profiles are strongly nudged to their initial well-mixed state, which limits cloud scales. It is found that the spatial variability of subcloud moist cold pools surprisingly tends to respond to, rather than determine, the mesoscale variability, which may distinguish them from dry cold pools associated with deeper convection. Finally, simulations also indicate that moisture stratification increases cloud scales specifically by increasing latent heating within updrafts, which increases updraft buoyancy and favors greater horizontal scales.
Propulsion simulator for magnetically-suspended wind tunnel models
NASA Technical Reports Server (NTRS)
Joshi, Prakash B.; Goldey, C. L.; Sacco, G. P.; Lawing, Pierce L.
1991-01-01
The objective of phase two of a current investigation sponsored by NASA Langley Research Center is to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust gases, in magnetic suspension and balance system (MSBS) wind tunnels. Two propulsion simulator models are being developed: a small-scale and a large-scale unit, both employing compressed, liquified carbon dioxide as propellant. The small-scale unit was designed, fabricated, and statically-tested at Physical Sciences Inc. (PSI). The large-scale simulator is currently in the preliminary design stage. The small-scale simulator design/development is presented, and the data from its static firing on a thrust stand are discussed. The analysis of this data provides important information for the design of the large-scale unit. A description of the preliminary design of the device is also presented.
Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams
NASA Astrophysics Data System (ADS)
Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping
2018-06-01
A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and those composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).
Scaling of hydrologic and erosion parameters derived from rainfall simulation
NASA Astrophysics Data System (ADS)
Sheridan, Gary; Lane, Patrick; Noske, Philip; Sherwin, Christopher
2010-05-01
Rainfall simulation experiments conducted at the temporal scale of minutes and the spatial scale of meters are often used to derive parameters for erosion and water quality models that operate at much larger temporal and spatial scales. While such parameterization is convenient, there has been little effort to validate this approach via nested experiments across these scales. In this paper we first review the literature relevant to some of these long acknowledged issues. We then present rainfall simulation and erosion plot data from a range of sources, including mining, roading, and forestry, to explore the issues associated with the scaling of parameters such as infiltration properties and erodibility coefficients.
Multi-scale gyrokinetic simulation of Alcator C-Mod tokamak discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howard, N. T., E-mail: nthoward@psfc.mit.edu; White, A. E.; Greenwald, M.
2014-03-15
Alcator C-Mod tokamak discharges have been studied with nonlinear gyrokinetic simulation simultaneously spanning both ion and electron spatiotemporal scales. These multi-scale simulations utilized the gyrokinetic model implemented by the GYRO code [J. Candy and R. E. Waltz, J. Comput. Phys. 186, 545 (2003)] and the approximation of reduced electron mass (μ = (m_D/m_e)^0.5 = 20.0) to qualitatively study a pair of Alcator C-Mod discharges: a low-power discharge, previously demonstrated (using realistic-mass, ion-scale simulation) to display an under-prediction of the electron heat flux, and a high-power discharge displaying agreement with both ion and electron heat flux channels [N. T. Howard et al., Nucl. Fusion 53, 123011 (2013)]. These multi-scale simulations demonstrate the importance of electron-scale turbulence in the core of conventional tokamak discharges and suggest it is a viable candidate for explaining the observed under-prediction of electron heat flux. In this paper, we investigate the coupling of turbulence at the ion (k_θ ρ_s ~ O(1.0)) and electron (k_θ ρ_e ~ O(1.0)) scales for experimental plasma conditions exhibiting both strong (high-power) and marginally stable (low-power) low-k (k_θ ρ_s < 1.0) turbulence. It is found that reduced-mass simulation of the plasma exhibiting marginally stable low-k turbulence fails to provide even qualitative insight into the turbulence present in the realistic plasma conditions. In contrast, multi-scale simulation of the plasma condition exhibiting strong turbulence provides valuable insight into the coupling of the ion and electron scales.
A large meteorological wind tunnel was used to simulate a suburban atmospheric boundary layer. The model-prototype scale was 1:300 and the roughness length was approximately 1.0 m full scale. The model boundary layer simulated full scale dispersion from ground-level and elevated ...
Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo
2017-03-15
Outcome-dependent sampling (ODS) is a cost-effective sampling scheme in which the exposure is observed with a probability that depends on the outcome. Well-known such designs are the case-control design for a binary response, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric; all the underlying distributions of covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample with the same sample size. The multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of polychlorinated biphenyl exposure with hearing loss in children born to the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.
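The bias that ODS induces, and the need for estimators that account for the design, can be seen in a toy example: sampling a continuous response with outcome-dependent probability biases the naive mean, while an inverse-probability (Horvitz-Thompson style) weighting removes most of the bias. This only illustrates the sampling scheme, not the semiparametric empirical-likelihood estimator of the abstract; the selection probabilities are hypothetical.

```python
import random

rng = random.Random(7)
# True population: normal responses with mean 2.0.
population = [rng.gauss(2.0, 1.0) for _ in range(50000)]

def prob(y):
    # Outcome-dependent selection: high responses are oversampled.
    return 0.9 if y > 2.5 else 0.1

sample = [(y, prob(y)) for y in population if rng.random() < prob(y)]

naive = sum(y for y, _ in sample) / len(sample)
weighted = (sum(y / p for y, p in sample)
            / sum(1 / p for _, p in sample))  # inverse-probability weighting

# The naive mean is biased upward; the weighted mean is near the true 2.0.
print(round(naive, 2), round(weighted, 2))
```

Weighting each observation by the reciprocal of its known selection probability recovers (approximately) the population mean, which is the simplest instance of design-aware inference for ODS data.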
TATES: Efficient Multivariate Genotype-Phenotype Analysis for Genome-Wide Association Studies
van der Sluis, Sophie; Posthuma, Danielle; Dolan, Conor V.
2013-01-01
To date, the genome-wide association study (GWAS) is the primary tool to identify genetic variants that cause phenotypic variation. As GWAS analyses are generally univariate in nature, multivariate phenotypic information is usually reduced to a single composite score. This practice often results in loss of statistical power to detect causal variants. Multivariate genotype–phenotype methods do exist but attain maximal power only in special circumstances. Here, we present a new multivariate method that we refer to as TATES (Trait-based Association Test that uses Extended Simes procedure), inspired by the GATES procedure proposed by Li et al (2011). For each component of a multivariate trait, TATES combines p-values obtained in standard univariate GWAS to acquire one trait-based p-value, while correcting for correlations between components. Extensive simulations, probing a wide variety of genotype–phenotype models, show that TATES's false positive rate is correct, and that TATES's statistical power to detect causal variants explaining 0.5% of the variance can be 2.5–9 times higher than the power of univariate tests based on composite scores and 1.5–2 times higher than the power of the standard MANOVA. Unlike other multivariate methods, TATES detects both genetic variants that are common to multiple phenotypes and genetic variants that are specific to a single phenotype, i.e. TATES provides a more complete view of the genetic architecture of complex traits. As the actual causal genotype–phenotype model is usually unknown and probably phenotypically and genetically complex, TATES, available as an open source program, constitutes a powerful new multivariate strategy that allows researchers to identify novel causal variants, while the complexity of traits is no longer a limiting factor. PMID:23359524
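At the core of GATES and TATES is a Simes-type combination of the sorted univariate p-values. The sketch below implements only the plain Simes rule; TATES itself replaces the raw numbers of tests with "effective" numbers derived from the correlations among the component p-values, which is not reproduced here.

```python
def simes(pvalues):
    """Simes combination: min over j of m * p_(j) / j for the
    ascending-sorted p-values p_(1) <= ... <= p_(m)."""
    m = len(pvalues)
    ordered = sorted(pvalues)
    return min(m * p / (j + 1) for j, p in enumerate(ordered))

# One strong component dominates the combined p-value.
print(simes([0.01, 0.2, 0.5]))  # ≈ 0.03 (from 3 * 0.01 / 1)
```

Unlike a composite-score test, the combination keeps sensitivity to a signal that is present in only one component of the multivariate trait, which is the behavior the abstract highlights for TATES.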
NASA Astrophysics Data System (ADS)
Hussein, Rafid M.; Chandrashekhara, K.
2017-11-01
A multi-scale modeling approach is presented to simulate and validate thermo-oxidation shrinkage and cracking damage of a high-temperature polymer composite. The multi-scale approach couples transient diffusion-reaction and static structural analyses from the macro- to the micro-scale. The micro-scale shrinkage deformation and cracking damage are simulated and validated using 2D and 3D simulations. Localized shrinkage displacement boundary conditions for the micro-scale simulations are determined from the respective meso- and macro-scale simulations, conducted for a cross-ply laminate. The meso-scale geometrical domain and the micro-scale geometry and mesh are developed using the object-oriented finite element (OOF) software. The macro-scale shrinkage and weight loss are measured using unidirectional coupons and used to build the macro-shrinkage model. The cross-ply coupons are used to validate the macro-shrinkage model against shrinkage profiles acquired from scanning electron images at the cracked surface. The macro-shrinkage model deformation shows a discrepancy when the micro-scale image-based cracking is computed. The local maximum shrinkage strain is assumed to be 13 times the maximum macro-shrinkage strain of 2.5 × 10^-5, upon which the discrepancy is minimized. The microcrack damage of the composite is modeled using a static elastic analysis with the extended finite element method and cohesive surfaces, considering the spatial evolution of the modulus. The 3D shrinkage displacements are fed to the model using node-wise boundary/domain conditions in the respective oxidized region. The simulated microcrack length, meander, and opening closely match the crack in the area of interest in the scanning electron images.
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-12-01
One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models to simulate climate change. This comparison may be accomplished at large spatial scales, for instance the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches, a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty tends to be low as well. On the other hand, the skill of climate models at regional scales is limited by the coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution, some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. They indicate that centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales.
However, as the focus shifts towards higher-frequency variability, decadal or multidecadal, the need for larger simulation ensembles becomes more evident. Nevertheless, the comparison at these time scales may expose some lines of research on the origin of multidecadal regional climate variability.
Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...
2015-01-20
Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.
2018-01-01
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption.
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
DOT National Transportation Integrated Search
2013-09-01
Recent advances in multivariate methodology provide an opportunity to further the assessment of service offerings in public transportation for work commuting. We offer methodologies that are alternative to direct rating scale and have advantages in t...
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang
We propose a wavelet-based scheme that encodes the essential dynamics of discrete microscale surface reactions in a form that can be coupled with continuum macroscale flow simulations with high computational efficiency. This makes it possible to simulate the dynamic behavior of reactor-scale heterogeneous catalysis without requiring detailed concurrent simulations at both the surface and continuum scales using different models. Our scheme is based on the application of wavelet-based surrogate time series that encodes the essential temporal and/or spatial fine-scale dynamics at the catalyst surface. The encoded dynamics are then used to generate statistically equivalent, randomized surrogate time series, which can be linked to the continuum scale simulation. As a result, we illustrate an application of this approach using two different kinetic Monte Carlo simulations with different characteristic behaviors typical for heterogeneous chemical reactions.
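The surrogate idea can be sketched with an orthonormal Haar transform: decompose the series, shuffle the detail coefficients within each scale, and invert. This is a hedged, minimal stand-in for the paper's scheme (which encodes kinetic Monte Carlo surface dynamics); only the scale-wise energy-preservation property is illustrated.

```python
import math
import random

S = math.sqrt(2.0)

def haar_forward(x):
    """Orthonormal Haar DWT: returns (final approximation, detail levels,
    finest level first). Input length must be a power of two."""
    details = []
    while len(x) > 1:
        approx = [(x[i] + x[i + 1]) / S for i in range(0, len(x), 2)]
        detail = [(x[i] - x[i + 1]) / S for i in range(0, len(x), 2)]
        details.append(detail)
        x = approx
    return x[0], details

def haar_inverse(approx, details):
    x = [approx]
    for detail in reversed(details):  # coarsest level first
        x = [v for a, d in zip(x, detail)
             for v in ((a + d) / S, (a - d) / S)]
    return x

def surrogate(x, seed=0):
    """Shuffle detail coefficients within each scale: the surrogate keeps
    the per-scale energy of the original series but randomizes its order."""
    rng = random.Random(seed)
    approx, details = haar_forward(list(x))
    for detail in details:
        rng.shuffle(detail)
    return haar_inverse(approx, details)

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
y = surrogate(x)
# Total energy is preserved (orthonormal transform + within-scale shuffle).
print(round(sum(v * v for v in x), 6), round(sum(v * v for v in y), 6))
```

Because the Haar transform is orthonormal and the shuffle is a permutation within each scale, the surrogate is "statistically equivalent" in the limited sense of matching the original's energy at every scale.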
Wavelet-based surrogate time series for multiscale simulation of heterogeneous catalysis
Savara, Aditya Ashi; Daw, C. Stuart; Xiong, Qingang; ...
2016-01-28
On theoretical and experimental modeling of metabolism forming in prebiotic systems
NASA Astrophysics Data System (ADS)
Bartsev, S. I.; Mezhevikin, V. V.
Recently, searching for extraterrestrial life has attracted more and more attention. However, the search can hardly be effective without a sufficiently universal concept of the origin of life, which incidentally tackles the problem of the origin of life on Earth. A concept of the initial stages of the origin of life, including the origin of prebiotic metabolism, is stated in the paper. The suggested concept eliminates key difficulties in the problem of the origin of life and allows its experimental verification. According to the concept, the predecessor of living beings has to be sufficiently simple to provide a non-zero probability of self-assembly during a time that is short on a geological or cosmic scale. In addition, the predecessor has to be capable of autocatalysis and further complication (evolution). A possible scenario of the initial stage of the origin of life, which can be realized both on other planets and inside an experimental facility, is considered. In the scope of this scenario, a theoretical model of a multivariate oligomeric autocatalyst coupled with a phase-separated particle is presented. Results of computer simulation of a possible initial stage of chemical evolution are shown. The estimations conducted show that the origin of an autocatalytic oligomeric phase-separated system is possible at reasonable values of the kinetic parameters of the involved chemical reactions in a small-scale flow reactor. The accepted statements, which eliminate key problems of the origin of life, imply an important consequence: organisms that emerged beyond the Earth or inside a reactor would have to be based on a biochemistry different from the terrestrial one.
Nagashima, Hiroaki; Watari, Akiko; Shinoda, Yasuharu; Okamoto, Hiroshi; Takuma, Shinya
2013-12-01
This case study describes the application of Quality by Design elements to the process of culturing Chinese hamster ovary cells in the production of a monoclonal antibody. All steps in the cell culture process and all process parameters in each step were identified by using a cause-and-effect diagram. Prospective risk assessment using failure mode and effects analysis identified the following four potential critical process parameters in the production culture step: initial viable cell density, culture duration, pH, and temperature. These parameters and lot-to-lot variability in raw material were then evaluated by process characterization utilizing a design of experiments approach consisting of a face-centered central composite design integrated with a full factorial design. Process characterization was conducted using a scaled down model that had been qualified by comparison with large-scale production data. Multivariate regression analysis was used to establish statistical prediction models for performance indicators and quality attributes; with these, we constructed contour plots and conducted Monte Carlo simulation to clarify the design space. The statistical analyses, especially for raw materials, identified set point values, which were most robust with respect to the lot-to-lot variability of raw materials while keeping the product quality within the acceptance criteria. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
Goovaerts, P; Albuquerque, Teresa; Antunes, Margarida
2016-11-01
This paper describes a multivariate geostatistical methodology to delineate areas of potential interest for future sedimentary gold exploration, with an application to an abandoned sedimentary gold mining region in Portugal. The main challenge was the existence of only a dozen gold measurements confined to the grounds of the old gold mines, which precluded the application of traditional interpolation techniques, such as cokriging. The analysis could, however, capitalize on 376 stream sediment samples that were analyzed for twenty-two elements. Gold (Au) was first predicted at all 376 locations using linear regression (R² = 0.798) and four metals (Fe, As, Sn and W), which are known to be mostly associated with the local gold's paragenesis. One hundred realizations of the spatial distribution of gold content were generated using sequential indicator simulation and a soft indicator coding of regression estimates, to supplement the hard indicator coding of gold measurements. Each simulated map then underwent a local cluster analysis to identify significant aggregates of low or high values. The one hundred classified maps were processed to derive the most likely classification of each simulated node and the associated probability of occurrence. Examining the distribution of the hot-spots and cold-spots reveals a clear enrichment in Au along the Erges River downstream from the old sedimentary mineralization.
Le Strat, Yann
2017-01-01
The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
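The performance measures evaluated above (sensitivity, specificity, predictive values, F1) all derive from the same four confusion counts between weekly alarms and true outbreak weeks. A minimal sketch of that bookkeeping (function and variable names are illustrative, not from the paper):

```python
def detection_metrics(alarms, outbreaks):
    """Compute outbreak-detection performance measures from two aligned
    0/1 series: alarms raised per week vs. true outbreak weeks."""
    tp = sum(1 for a, o in zip(alarms, outbreaks) if a and o)
    fp = sum(1 for a, o in zip(alarms, outbreaks) if a and not o)
    fn = sum(1 for a, o in zip(alarms, outbreaks) if not a and o)
    tn = sum(1 for a, o in zip(alarms, outbreaks) if not a and not o)
    sens = tp / (tp + fn)          # probability an outbreak week is flagged
    spec = tn / (tn + fp)          # probability a quiet week stays quiet
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)
    return {"sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "f1": f1}
```

This sketch assumes at least one positive and one negative in each margin; a production version would guard the divisions.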
Correlation between the Truax and the Carkhuff Scales for Measurement of Empathy.
ERIC Educational Resources Information Center
Engram, Barbara E.; Vandergoot, David
1978-01-01
Assessed correspondence between the Carkhuff and the Truax scales for empathy. Undergraduates responded to nine simulated client statements. An overall correlation of .89 was found. Interrater reliabilities for both scales were high. Degree of correspondence between the scales varied with the content-affect characteristics of the simulated client…
Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott
2017-11-01
Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, ``non-intrusive'' LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics in phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows a good agreement with actual run time from numerical tests.
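The abstract mentions a runtime-prediction model without giving its form. As a hedged illustration only, an Amdahl-type model (a standard assumption, not necessarily the authors' model) predicts speedup and runtime from a serial fraction of the work:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: speedup on n_procs when serial_fraction of the
    work cannot be parallelized (0 <= serial_fraction <= 1)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

def predicted_runtime(t_serial, serial_fraction, n_procs):
    """Predicted parallel runtime from the single-processor runtime."""
    return t_serial / amdahl_speedup(serial_fraction, n_procs)
```

Under this model, speedup saturates at 1/serial_fraction, which is consistent with scalability improving for larger problems, where the serial fraction typically shrinks.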
Geostatistics and petroleum geology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohn, M.E.
1988-01-01
This book examines the purpose and use of geostatistics in exploration and development of oil and gas, with an emphasis on appropriate and pertinent case studies. It presents an overview of geostatistics. Topics covered include: The semivariogram; Linear estimation; Multivariate geostatistics; Nonlinear estimation; From indicator variables to nonparametric estimation; and More detail, less certainty: conditional simulation.
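The semivariogram listed first among the topics is the workhorse of geostatistics: the empirical estimator averages half the squared differences of sample values over pairs of points binned by separation distance. A minimal 2-D sketch (names are illustrative):

```python
def empirical_semivariogram(coords, values, lag_width, n_lags):
    """gamma(h) per lag bin: average of 0.5 * (z_i - z_j)^2 over all
    point pairs whose separation distance falls in that bin."""
    sums = [0.0] * n_lags
    counts = [0] * n_lags
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            h = (dx * dx + dy * dy) ** 0.5
            b = int(h // lag_width)
            if b < n_lags:
                sums[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
    # None marks empty bins rather than dividing by zero
    return [s / c if c else None for s, c in zip(sums, counts)]
```

A model (spherical, exponential, etc.) is then fitted to these binned estimates before kriging or conditional simulation.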
Update and review of accuracy assessment techniques for remotely sensed data
NASA Technical Reports Server (NTRS)
Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.
1983-01-01
Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.
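The "discrete multivariate analysis techniques for the assessment of error matrices" mentioned above center on statistics such as overall accuracy and the KHAT (Cohen's kappa) coefficient, which discounts chance agreement. A minimal sketch (the example matrix is illustrative):

```python
def overall_accuracy(matrix):
    """Fraction of correctly classified samples: diagonal over total."""
    total = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    return diag / total

def kappa(matrix):
    """KHAT statistic for a classification error matrix
    (rows = classified class, columns = reference class):
    agreement beyond what row/column marginals predict by chance."""
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(k))
    # sum of (row marginal * column marginal) over classes
    chance = sum(sum(matrix[i]) * sum(row[i] for row in matrix) for i in range(k))
    return (n * diag - chance) / (n * n - chance)
```

Kappa below overall accuracy is expected; the gap grows when the marginals alone would already produce substantial agreement.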
Application of multivariate statistical techniques in microbial ecology.
Paliy, O; Shankar, V
2016-03-01
Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, a noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.
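Many of the exploratory multivariate techniques the review surveys start from a sample-by-sample dissimilarity matrix. Bray-Curtis dissimilarity is one standard choice for microbial abundance profiles; a minimal sketch (not from the review itself):

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two non-negative abundance
    profiles over the same taxa: 0 = identical, 1 = no shared taxa."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den

def dissimilarity_matrix(profiles):
    """Pairwise Bray-Curtis matrix, the usual input to ordination
    (e.g. PCoA/NMDS) or clustering of community samples."""
    n = len(profiles)
    return [[bray_curtis(profiles[i], profiles[j]) for j in range(n)]
            for i in range(n)]
```

Ordinations such as PCoA or NMDS then embed this matrix in a low-dimensional space for visual comparison of communities.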
Use of Multivariate Linkage Analysis for Dissection of a Complex Cognitive Trait
Marlow, Angela J.; Fisher, Simon E.; Francks, Clyde; MacPhie, I. Laurence; Cherny, Stacey S.; Richardson, Alex J.; Talcott, Joel B.; Stein, John F.; Monaco, Anthony P.; Cardon, Lon R.
2003-01-01
Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits. PMID:12587094
NASA Astrophysics Data System (ADS)
Bucholz, Eric W.
In the field of tribology, the ability to predict, and ultimately control, frictional performance is of critical importance for the optimization of tribological systems. As such, understanding the specific mechanisms involved in the lubrication processes for different materials is a fundamental step in tribological system design. In this work, a combination of computational and experimental methods that include classical molecular dynamics (MD) simulations, atomic force microscopy (AFM) experiments, and multivariate statistical analyses provides fundamental insight into the tribological and mechanical properties of carbon-based and inorganic nanostructures, lamellar materials, and inorganic ceramic compounds. One class of materials of modern interest for tribological applications is nanoparticles, which can be employed either as solid lubricating films or as lubricant additives. In experimental systems, however, it is often challenging to attain the in situ observation of tribological interfaces necessary to identify the atomic-level mechanisms involved during lubrication and response to mechanical deformation. Here, classical MD simulations establish the mechanisms occurring during the friction and compression of several types of nanoparticles including carbon nano-onions, amorphous carbon nanoparticles, and inorganic fullerene-like MoS2 nanoparticles. Specifically, the effect of a nanoparticle's structural properties on the lubrication mechanisms of rolling, sliding, and lamellar exfoliation is indicated; the findings quantify the relative impact of each mechanism on the tribological and mechanical properties of these nanoparticles. Beyond identifying the lubrication mechanisms of known lubricating materials, the continual advancement of modern technology necessitates the identification of new candidate materials for use in tribological applications. 
To this effect, atomic-scale AFM friction experiments on the aluminosilicate mineral pyrophyllite demonstrate that pyrophyllite provides a low friction coefficient and low shear stresses as well as a high threshold to interfacial wear; this suggests the potential for use of pyrophyllite as a lubricious material under specific conditions. Also, a robust and accurate model for estimating the friction coefficients of inorganic ceramic materials that is based on the fundamental relationships between material properties is presented, which was developed using multivariate data mining algorithms. These findings provide the tribological community with a new means of quickly identifying candidate materials that may provide specific frictional properties for desired applications.
Cell nuclei and cytoplasm joint segmentation using the sliding band filter.
Quelhas, Pedro; Marcuzzo, Monica; Mendonça, Ana Maria; Campilho, Aurélio
2010-08-01
Microscopy cell image analysis is a fundamental tool for biological research. In particular, multivariate fluorescence microscopy is used to observe different aspects of cells in cultures. It is still common practice to perform analysis tasks by visual inspection of individual cells which is time consuming, exhausting and prone to induce subjective bias. This makes automatic cell image analysis essential for large scale, objective studies of cell cultures. Traditionally the task of automatic cell analysis is approached through the use of image segmentation methods for extraction of cells' locations and shapes. Image segmentation, although fundamental, is neither an easy task in computer vision nor is it robust to image quality changes. This makes image segmentation for cell detection semi-automated requiring frequent tuning of parameters. We introduce a new approach for cell detection and shape estimation in multivariate images based on the sliding band filter (SBF). This filter's design makes it adequate to detect overall convex shapes and as such it performs well for cell detection. Furthermore, the parameters involved are intuitive as they are directly related to the expected cell size. Using the SBF filter we detect cells' nucleus and cytoplasm location and shapes. Based on the assumption that each cell has the same approximate shape center in both nuclei and cytoplasm fluorescence channels, we guide cytoplasm shape estimation by the nuclear detections improving performance and reducing errors. Then we validate cell detection by gathering evidence from nuclei and cytoplasm channels. Additionally, we include overlap correction and shape regularization steps which further improve the estimated cell shapes. 
The approach is evaluated using two datasets with different types of data: a 20 images benchmark set of simulated cell culture images, containing 1000 simulated cells; a 16 images Drosophila melanogaster Kc167 dataset containing 1255 cells, stained for DNA and actin. Both image datasets present a difficult problem due to the high variability of cell shapes and frequent cluster overlap between cells. On the Drosophila dataset our approach achieved a precision/recall of 95%/69% and 82%/90% for nuclei and cytoplasm detection respectively and an overall accuracy of 76%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate those particles of interest for revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights in their simulations. We have tested the system using several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we choose to use direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh, several processing steps have to be taken. The mesh is curvilinear in nature, following the shape of a deformed torus. Additionally, in order to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these limitations, building a system that can provide an accurate visualization of the dataset is quite a challenge to overcome. In the end we use a combination of deformation methods such as deformation textures in order to fit a normal torus into their deformed torus, allowing us to store the data in toroidal coordinates in order to take advantage of modern GPUs to perform the interpolation along the field lines for us. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL.
In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.
Multi-Scale Modeling of Liquid Phase Sintering Affected by Gravity: Preliminary Analysis
NASA Technical Reports Server (NTRS)
Olevsky, Eugene; German, Randall M.
2012-01-01
A multi-scale simulation concept taking into account impact of gravity on liquid phase sintering is described. The gravity influence can be included at both the micro- and macro-scales. At the micro-scale, the diffusion mass-transport is directionally modified in the framework of kinetic Monte-Carlo simulations to include the impact of gravity. The micro-scale simulations can provide the values of the constitutive parameters for macroscopic sintering simulations. At the macro-scale, we are attempting to embed a continuum model of sintering into a finite-element framework that includes the gravity forces and substrate friction. If successful, the finite elements analysis will enable predictions relevant to space-based processing, including size and shape and property predictions. Model experiments are underway to support the models via extraction of viscosity moduli versus composition, particle size, heating rate, temperature and time.
On the Scaling Laws and Similarity Spectra for Jet Noise in Subsonic and Supersonic Flow
NASA Technical Reports Server (NTRS)
Kandula, Max
2008-01-01
The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are reviewed with regard to their applicability to deduce full-scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. New results are presented showing the dependence of overall sound power level on the jet temperature ratio at various jet Mach numbers. A generalized similarity spectrum is also proposed, which accounts for convective Mach number and angle to the jet axis.
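The specific scaling laws reviewed are not reproduced in the abstract. As a hedged illustration of the kind of scaling involved, Lighthill's classical result for subsonic jets has acoustic power growing as jet velocity to the eighth power and as nozzle diameter squared, giving a simple decibel shift between model and full scale (this is textbook jet acoustics, not necessarily the paper's formulation):

```python
import math

def delta_power_level_db(velocity_ratio, diameter_ratio):
    """Change in overall sound power level (dB) between two subsonic jets,
    assuming Lighthill's U^8 velocity scaling and D^2 area scaling:
    dPWL = 80*log10(U2/U1) + 20*log10(D2/D1)."""
    return 80.0 * math.log10(velocity_ratio) + 20.0 * math.log10(diameter_ratio)
```

For example, scaling a model up tenfold in diameter at the same exhaust velocity raises the predicted power level by 20 dB; such simple laws break down for hot supersonic jets, which is precisely the limitation the paper discusses.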
Parents' and Adolescents' Attitudes about Parental Involvement in Clinical Research.
Rosenthal, Susan L; de Roche, Ariel M; Catallozzi, Marina; Breitkopf, Carmen Radecki; Ipp, Lisa S; Chang, Jane; Francis, Jenny K R; Hu, Mei-Chen
2016-08-01
To understand parent and adolescent attitudes toward parental involvement during clinical trials and factors related to those attitudes. As part of a study on willingness to participate in a hypothetical microbicide study, adolescents and their parents were interviewed separately. Adolescent medicine clinics in New York City. There were 301 dyads of adolescents (ages 14-17 years; 62% female; 72% Hispanic) and their parents. None. The interview included questions on demographic characteristics, sexual history, and family environment (subscales of the Family Environment Scale) that were associated with attitudes about parental involvement. Factor analysis of the parental involvement scale yielded 2 factors: LEARN, reflecting gaining knowledge about study test results and behaviors (4 items) and PROCEDURE, reflecting enrollment and permissions (4 items). Adolescents endorsed significantly fewer items on the LEARN scale and the PROCEDURE scale indicating that adolescents believed in less parental involvement. There was no significant concordance between adolescents and their own parents on the LEARN scale and the PROCEDURE scale. In final multivariate models predicting attitudes, adolescents who were female and had sexual contact beyond kissing, and non-Hispanic parents had lower LEARN scores. Adolescents who were older, had previous research experience, and reported less moral or religious emphasis in their family had lower PROCEDURE scores; there were no significant predictors for parents in the multivariate analyses. Parents wanted greater involvement in the research process than adolescents. Recruitment and retention might be enhanced by managing these differing expectations. Copyright © 2016 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Goh, Yu-Ra; Choi, Ja Young; Kim, Seon Ah; Park, Jieun; Park, Eun Sook
2018-01-01
This study aimed to investigate the relationships between various classification systems assessing the severity of oropharyngeal dysphagia and communication function and other functional profiles in children with cerebral palsy (CP). This is a prospective, cross-sectional study in a university-affiliated, tertiary-care hospital. We recruited 151 children with CP (mean age 6.11 years, SD 3.42, range 3-18yr). The Eating and Drinking Ability Classification System (EDACS) and the dysphagia scales of Functional Oral Intake Scale (FOIS), Swallow Function Scales (SFS), and Food Intake Level Scale (FILS) were used. The Communication Function Classification System (CFCS) and Viking Speech Scale (VSS) were employed to classify communication function and speech intelligibility, respectively. The Pediatric Evaluation of Disability Inventory (PEDI) with the Gross Motor Function Classification System (GMFCS) and the Manual Ability Classification System (MACS) level were also assessed. Spearman correlation analysis was used to investigate the associations between measures, and univariate and multivariate logistic regression models were used to identify significant factors. Median GMFCS level of participants was III (interquartile range II-IV). Significant dysphagia based on EDACS level III-V was noted in 23 children (15.2%). There were strong to very strong relationships between the EDACS level and the dysphagia scales. The EDACS presented strong associations with MACS, CFCS, and VSS, a moderate association with GMFCS level, and a moderate to strong association with each domain of the PEDI. In multivariate analysis, poor functioning in EDACS was associated with poor functioning in gross motor and communication functions. Copyright © 2017. Published by Elsevier Ltd.
Liao, Qiuyan; Wong, Wing Sze; Fielding, Richard
2013-01-01
Background Risk perception is a reported predictor of vaccination uptake, but which measures of risk perception best predict influenza vaccination uptake remain unclear. Methodology During the main influenza seasons (between January and March) of 2009 (Wave 1) and 2010 (Wave 2), 505 Chinese students and employees from a Hong Kong university completed an online survey. Multivariate logistic regression models were conducted to assess how well different risk perception measures in Wave 1 predicted vaccination uptake against seasonal influenza in Wave 2. Principal Findings The results of the multivariate logistic regression models showed that feeling at risk (β = 0.25, p = 0.021) was the better predictor compared with probability judgment, while probability judgment (β = 0.25, p = 0.029) was better than beliefs about risk in predicting subsequent influenza vaccination uptake. Beliefs about risk and feeling at risk seemed to predict the same aspect of subsequent vaccination uptake because their associations with vaccination uptake became insignificant when paired into the logistic regression model. Similarly, to compare the four scales for assessing probability judgment in predicting vaccination uptake, the 7-point verbal scale remained a significant and stronger predictor for vaccination uptake when paired with the other three scales; the 6-point verbal scale was a significant and stronger predictor when paired with the percentage scale or the 2-point verbal scale; and the percentage scale was a significant and stronger predictor only when paired with the 2-point verbal scale. Conclusions/Significance Beliefs about risk and feeling at risk are not well differentiated by Hong Kong Chinese people. Feeling at risk, an affective-cognitive dimension of risk perception, predicts subsequent vaccination uptake better than do probability judgments. Among the four scales for assessing risk probability judgment, the 7-point verbal scale offered the best predictive power for subsequent vaccination uptake. PMID:23894292
Simulant Development for LAWPS Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, Renee L.; Schonewill, Philip P.; Burns, Carolyn A.
2017-05-23
This report describes simulant development work that was conducted to support the technology maturation of the LAWPS facility. Desired simulant physical properties (density, viscosity, solids concentration, solid particle size), sodium concentrations, and general anion identifications were provided by WRPS. The simulant recipes, particularly a “nominal” 5.6M Na simulant, are intended to be tested at several scales, ranging from bench-scale (500 mL) to full-scale. Each simulant formulation was selected to be chemically representative of the waste streams anticipated to be fed to the LAWPS system, and used the current version of the LAWPS waste specification as a formulation basis. After simulant development iterations, four simulants of varying sodium concentration (5.6M, 6.0M, 4.0M, and 8.0M) were prepared and characterized. The formulation basis, development testing, and final simulant recipes and characterization data for these four simulants are presented in this report.
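Scaling a recipe like the "nominal" 5.6M Na simulant from a 500 mL bench batch to larger volumes is simple molarity arithmetic. A hedged sketch of that calculation (the helper and the example salt are illustrative, not taken from the report's recipes):

```python
def reagent_mass_g(molarity_na, volume_l, molar_mass_g_mol, na_per_formula=1):
    """Mass (g) of a sodium salt needed to reach a target Na molarity
    in a given batch volume. na_per_formula is the number of Na atoms
    per formula unit (e.g. 2 for a disodium salt)."""
    moles_na = molarity_na * volume_l
    return moles_na / na_per_formula * molar_mass_g_mol
```

For instance, hitting 5.6 M Na in a 0.5 L bench batch with a hypothetical monosodium salt of molar mass 85.0 g/mol requires 5.6 * 0.5 * 85.0 = 238.0 g; real recipes split the sodium target across several salts to match the anion profile.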
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large scale 3D systems using the physical mass ratio for Hydrogen. Most notably one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: Stefano Markidis, Giovanni Lapenta, Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D", Mathematics and Computers in Simulation, Available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
NASA Astrophysics Data System (ADS)
Han, X.; Li, X.; He, G.; Kumbhar, P.; Montzka, C.; Kollet, S.; Miyoshi, T.; Rosolem, R.; Zhang, Y.; Vereecken, H.; Franssen, H.-J. H.
2015-08-01
Data assimilation has become a popular method to integrate observations from multiple sources with land surface models to improve predictions of the water and energy cycles of the soil-vegetation-atmosphere continuum. Multivariate data assimilation refers to the simultaneous assimilation of observation data from multiple model state variables into a simulation model. In recent years, several land data assimilation systems have been developed in different research agencies. Because of software availability or adaptability constraints, these systems are not easy to apply for the purpose of multivariate land data assimilation research. We developed an open source multivariate land data assimilation framework (DasPy) which is implemented using the Python script language mixed with the C++ and Fortran programming languages. LETKF (Local Ensemble Transform Kalman Filter) is implemented as the main data assimilation algorithm, and uncertainties in the data assimilation can be introduced by perturbed atmospheric forcing data, and represented by perturbed soil and vegetation parameters and model initial conditions. The Community Land Model (CLM) was integrated as the model operator. The implementation also allows parameter estimation (soil properties and/or leaf area index) on the basis of the joint state and parameter estimation approach. The Community Microwave Emission Modelling platform (CMEM), COsmic-ray Soil Moisture Interaction Code (COSMIC) and the Two-Source Formulation (TSF) were integrated as observation operators for the assimilation of L-band passive microwave, cosmic-ray soil moisture probe and land surface temperature measurements, respectively. DasPy has been evaluated in several assimilation studies of neutron count intensity (soil moisture), L-band brightness temperature and land surface temperature. DasPy is parallelized using the hybrid Message Passing Interface and Open Multi-Processing techniques.
All the input and output data flows are organized efficiently using the commonly used NetCDF file format. Online 1-D and 2-D visualization of data assimilation results is also implemented to facilitate post-simulation analysis. In summary, DasPy is a ready-to-use, open-source, parallel multivariate land data assimilation framework.
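The LETKF analysis step at the heart of DasPy can be sketched in a few lines of linear algebra, following the standard ensemble transform formulation (this is a generic textbook-style sketch, not DasPy's actual code; the toy state, observation operator, and error variances are invented):

```python
import numpy as np

def letkf_analysis(X, y, H, R_diag):
    """One (L)ETKF analysis step for a single local domain.

    X : (n, m) ensemble of model states (m members)
    y : (p,)  observations
    H : (p, n) linear observation operator
    R_diag : (p,) observation-error variances
    """
    n, m = X.shape
    xbar = X.mean(axis=1)
    Xp = X - xbar[:, None]                       # state perturbations
    Y = H @ X
    ybar = Y.mean(axis=1)
    Yp = Y - ybar[:, None]                       # observation-space perturbations
    C = Yp.T * (1.0 / R_diag)                    # Yp' R^-1
    A = (m - 1) * np.eye(m) + C @ Yp
    evals, evecs = np.linalg.eigh(A)
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T              # Pa-tilde
    Wa = evecs @ np.diag(np.sqrt((m - 1) / evals)) @ evecs.T  # sqrt((m-1) Pa)
    wbar = Pa @ (C @ (y - ybar))
    return xbar[:, None] + Xp @ (wbar[:, None] + Wa)

rng = np.random.default_rng(1)
truth = np.array([1.0, -2.0, 0.5])
X = truth[:, None] + rng.normal(scale=1.0, size=(3, 20))   # 20-member prior ensemble
y = truth + rng.normal(scale=0.1, size=3)                  # accurate observations
Xa = letkf_analysis(X, y, H=np.eye(3), R_diag=np.full(3, 0.01))
```

With accurate observations the analysis mean is pulled close to the data and the ensemble spread contracts, which is the behaviour the filter exploits for soil moisture and temperature updates.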
The Australian Computational Earth Systems Simulator
NASA Astrophysics Data System (ADS)
Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.
2001-12-01
Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of Universities and research institutions have funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator or computational virtual earth will provide the research infrastructure to the Australian earth systems science community required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete elements/lattice solid model, particle-in-cell large deformation finite-element method, stress reconstruction models, multi-scale continuum models etc) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. 
When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic behaviour of earth systems. ACcESS represents a part of Australia's contribution to the APEC Cooperation for Earthquake Simulation (ACES) international initiative. Together with other national earth systems science initiatives including the Japanese Earth Simulator and US General Earthquake Model projects, ACcESS aims to provide a driver for scientific advancement and technological breakthroughs including: quantum leaps in understanding of earth evolution at global, crustal, regional and microscopic scales; new knowledge of the physics of crustal fault systems required to underpin the grand challenge of earthquake prediction; new understanding and predictive capabilities of geological processes such as tectonics and mineralisation.
Relativistic initial conditions for N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fidler, Christian; Tram, Thomas; Crittenden, Robert
2017-06-01
Initial conditions for (Newtonian) cosmological N-body simulations are usually set by re-scaling the present-day power spectrum obtained from linear (relativistic) Boltzmann codes to the desired initial redshift of the simulation. This back-scaling method can account for the effect of inhomogeneous residual thermal radiation at early times, which is absent in the Newtonian simulations. We analyse this procedure from a fully relativistic perspective, employing the recently-proposed Newtonian motion gauge framework. We find that N-body simulations for ΛCDM cosmology starting from back-scaled initial conditions can be self-consistently embedded in a relativistic space-time with first-order metric potentials calculated using a linear Boltzmann code. This space-time coincides with a simple "N-body gauge" for z < 50 for all observable modes. Care must be taken, however, when simulating non-standard cosmologies. As an example, we analyse the back-scaling method in a cosmology with decaying dark matter, and show that metric perturbations become large at early times in the back-scaling approach, indicating a breakdown of the perturbative description. We suggest a suitable "forwards approach" for such cases.
2015-07-01
Meso-scale coarse-grained simulations of the formation of meso-segregated microstructure in polyurea, and of the interaction of this microstructure with a shockwave, are analyzed in the present work, including the phenomenon of shockwave-induced hard-domain densification. Keywords: polyurea; meso-scale; coarse-grained simulations; shockwave attenuation.
Exact-Differential Large-Scale Traffic Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios
2015-01-01
Analyzing large-scale traffic by simulation requires executing the simulation many times over various scenarios or parameter settings. Such repeated execution introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking a single road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which simulates only the changed portions of a scenario in later executions while producing exactly the same results as a whole re-simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of the exact-differential approach. In experiments on a Tokyo traffic simulation, exact-differential simulation improves elapsed time over whole simulation by a factor of 7.26 on average, and by a factor of 2.26 even in the worst case.
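The exact-differential idea, caching a base run's intermediate states and re-simulating only from the first time step a scenario change can affect, while reproducing the whole-run result exactly, can be illustrated with a deliberately tiny toy model. The update rule and scenario below are invented; the paper's actual algorithm additionally handles event dependencies and distributed execution.

```python
def step(state, blocked):
    """Toy traffic update: 3 cars enter at segment 0, cars shift one segment
    right, and a blocked segment holds no cars."""
    shifted = [3] + state[:-1]
    return [0 if i in blocked else v for i, v in enumerate(shifted)]

def run(events, n_steps, init):
    """Simulate, returning every intermediate state (the 'cache')."""
    states = [init]
    for t in range(n_steps):
        states.append(step(states[-1], events.get(t, set())))
    return states

init = [0] * 5
base = run({}, 10, init)                      # base scenario; states cached

# Modified scenario: segment 2 is blocked from t = 6 onward. Nothing before
# t = 6 can differ, so the differential run restarts from the cached base[6].
events = {t: {2} for t in range(6, 10)}
full = run(events, 10, init)                               # naive whole re-run
diff = run({t - 6: {2} for t in range(6, 10)}, 4, base[6])  # differential re-run
```

The differential run reproduces the tail of the full re-run exactly while skipping the unchanged prefix, which is where the reported speed-up comes from.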
McNett, Molly M; Amato, Shelly; Philippbar, Sue Ann
2016-01-01
The aim of this study was to compare the predictive ability of hospital Glasgow Coma Scale (GCS) scores and scores obtained using a novel coma scoring tool (the Full Outline of Unresponsiveness [FOUR] scale) on long-term outcomes among patients with traumatic brain injury. Preliminary research on the FOUR scale suggests that it is comparable with the GCS for predicting mortality and functional outcome at hospital discharge. No research has investigated relationships between coma scores and outcome 12 months postinjury. This is a prospective cohort study. Data were gathered on adult patients with traumatic brain injury admitted to an urban level I trauma center. GCS and FOUR scores were assigned at 24 and 72 hours and at hospital discharge. Glasgow Outcome Scale scores were assigned at 6 and 12 months. The sample size was n = 107. Mean age was 53.5 (SD = ±21, range = 18-91) years. Spearman correlations were comparable and strongest between discharge GCS and FOUR scores and 12-month outcome (r = .73, p < .001; r = .72, p < .001). Multivariate regression models indicate that age and discharge GCS were the strongest predictors of outcome. Areas under the curve were similar for GCS and FOUR scores, with discharge scores occupying the largest areas. GCS and FOUR scores were comparable in bivariate associations with long-term outcome. Discharge coma scores performed best for both tools, with GCS discharge scores predictive in multivariate models.
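The two comparison metrics used above, Spearman correlation and the area under the ROC curve, can both be computed directly from ranks. A minimal sketch on invented coma-score data (not the study's data):

```python
import numpy as np

def rank(a):
    """Ranks 1..n, with average ranks for ties."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a)
    r = np.empty(len(a))
    r[order] = np.arange(1, len(a) + 1, dtype=float)
    for v in np.unique(a):
        mask = a == v
        r[mask] = r[mask].mean()
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

def auc(score, outcome):
    """Mann-Whitney AUC: P(a random positive case outranks a random negative)."""
    score, outcome = np.asarray(score, float), np.asarray(outcome)
    pos, neg = score[outcome == 1], score[outcome == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

# hypothetical discharge coma scores and a dichotomized 12-month outcome
gcs = np.array([3, 6, 7, 8, 10, 13, 14, 15, 15, 14])
good = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
rho = spearman(gcs, good)
area = auc(gcs, good)
```

Higher discharge scores tracking better outcomes shows up as a positive rho and an AUC well above 0.5, the same pattern reported for GCS and FOUR.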
An AD100 implementation of a real-time STOVL aircraft propulsion system
NASA Technical Reports Server (NTRS)
Ouzts, Peter J.; Drummond, Colin K.
1990-01-01
A real-time dynamic model of the propulsion system for a Short Take-Off and Vertical Landing (STOVL) aircraft was developed for the AD100 simulation environment. The dynamic model was adapted from a FORTRAN based simulation using the dynamic programming capabilities of the AD100 ADSIM simulation language. The dynamic model includes an aerothermal representation of a turbofan jet engine, actuator and sensor models, and a multivariable control system. The AD100 model was tested for agreement with the FORTRAN model and real-time execution performance. The propulsion system model was also linked to an airframe dynamic model to provide an overall STOVL aircraft simulation for the purposes of integrated flight and propulsion control studies. An evaluation of the AD100 system for use as an aircraft simulation environment is included.
Quantifying uncertainty and computational complexity for pore-scale simulations
NASA Astrophysics Data System (ADS)
Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.
2016-12-01
Pore-scale simulation is an essential tool to understand the complex physical process in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and in general, our insufficient knowledge of the underlying process, render many simulation parameters and hence the prediction results uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely-resolved spatio-temporal scales, which further limits our data/sample collection. To address those challenges, we propose a novel framework based on the general polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. To be specific, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
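A minimal one-dimensional version of the gPC surrogate idea: expand the model output in Hermite polynomials of a standard-normal input and fit the coefficients by least squares over a modest number of model runs. The target function below is an invented, cheap stand-in for an expensive pore-scale solver.

```python
import numpy as np

def hermite_design(xi, order):
    """Probabilists' Hermite polynomials He_0..He_order evaluated at samples xi,
    via the recurrence He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)."""
    H = [np.ones_like(xi), xi]
    for n in range(1, order):
        H.append(xi * H[-1] - n * H[-2])
    return np.column_stack(H[: order + 1])

rng = np.random.default_rng(2)
xi = rng.normal(size=200)            # standard-normal "germ" (uncertain parameter)
y = np.exp(0.3 * xi)                 # stand-in for an expensive pore-scale model
Phi = hermite_design(xi, order=4)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

def surrogate(x):
    return hermite_design(x, 4) @ coef

# By orthogonality of the He_n under the Gaussian weight, the surrogate's mean
# is just coef[0]; moments come almost for free instead of via Monte Carlo.
```

Two hundred model runs give a surrogate accurate over the bulk of the input distribution, after which statistics cost only polynomial evaluations.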
Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E
2016-07-25
Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
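The core of the approach, locating significant motion in time by looking for large wavelet coefficients at a chosen scale, can be sketched with a Haar-style detail transform on a synthetic trajectory. The signal and the "event" below are invented for illustration; the paper applies proper wavelet analysis to converged MD trajectories.

```python
import numpy as np

def haar_detail(x, scale):
    """Haar-style detail coefficients: difference between the means of
    adjacent windows of `scale` samples. Large magnitude marks a change
    happening at roughly that time scale."""
    k = np.ones(scale) / scale
    means = np.convolve(x, k, mode="valid")
    return means[scale:] - means[:-scale]

rng = np.random.default_rng(3)
signal = rng.normal(scale=0.2, size=2000)   # stand-in for an atomic coordinate trace
signal[1200:] += 2.0                        # a structural "event" at frame 1200

power = np.abs(haar_detail(signal, scale=64))
event_frame = int(np.argmax(power)) + 64    # shift back to signal coordinates
```

Repeating this over a range of scales (the maximum time scale discussed in the abstract) separates fast fluctuations from slower conformational transitions.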
Multiscale entropy analysis of biological signals: a fundamental bi-scaling law
Gao, Jianbo; Hu, Jing; Liu, Feiyan; Cao, Yinhe
2015-01-01
Since its introduction in the early 2000s, multiscale entropy (MSE) has found many applications in biosignal analysis and has been extended to multivariate MSE. So far, however, no analytic results for MSE or multivariate MSE have been reported. This has severely limited our basic understanding of MSE. For example, it has not been studied whether MSE estimated using default parameter values and short data set is meaningful or not. Nor is it known whether MSE has any relation with other complexity measures, such as the Hurst parameter, which characterizes the correlation structure of the data. To overcome this limitation, and more importantly, to guide more fruitful applications of MSE in various areas of life sciences, we derive a fundamental bi-scaling law for fractal time series, one for the scale in phase space, the other for the block size used for smoothing. We illustrate the usefulness of the approach by examining two types of physiological data. One is heart rate variability (HRV) data, for the purpose of distinguishing healthy subjects from patients with congestive heart failure, a life-threatening condition. The other is electroencephalogram (EEG) data, for the purpose of distinguishing epileptic seizure EEG from normal healthy EEG. PMID:26082711
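For reference, the standard MSE recipe (coarse-grain the series, then compute sample entropy with a tolerance fixed from the original series) can be sketched as follows. This is a simplified illustrative implementation on white noise, not the paper's derivation; the white-noise entropy decline with scale is the classic sanity check.

```python
import numpy as np

def coarse_grain(x, tau):
    """Non-overlapping block averages of length tau."""
    n = len(x) // tau
    return x[: n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, tol, m=2):
    """Simplified SampEn: -log of the conditional probability that windows
    matching for m points (Chebyshev distance <= tol) also match for m+1."""
    def count(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return float((d <= tol).sum() - len(emb))   # exclude self-matches
    return -np.log(count(m + 1) / count(m))

rng = np.random.default_rng(4)
white = rng.normal(size=1200)
tol = 0.3 * white.std()   # tolerance fixed from the ORIGINAL series, as in MSE
mse = [sample_entropy(coarse_grain(white, tau), tol) for tau in (1, 2, 4, 8)]
```

Because coarse-graining shrinks the variance of white noise while the tolerance stays fixed, the entropy falls with scale, one of the empirical behaviours the bi-scaling law puts on an analytic footing.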
Kayes, Nicola M; McPherson, Kathryn M; Schluter, Philip; Taylor, Denise; Leete, Marta; Kolt, Gregory S
2011-01-01
To explore the relationship that cognitive behavioural and other previously identified variables have with physical activity engagement in people with multiple sclerosis (MS). This study adopted a cross-sectional questionnaire design. Participants were 282 individuals with MS. Outcome measures included the Physical Activity Disability Survey--Revised, Cognitive and Behavioural Responses to Symptoms Questionnaire, Barriers to Health Promoting Activities for Disabled Persons Scale, Multiple Sclerosis Self-efficacy Scale, Self-Efficacy for Chronic Diseases Scales and Chalder Fatigue Questionnaire. Multivariable stepwise regression analyses found that greater self-efficacy, greater reported mental fatigue and lower number of perceived barriers to physical activity accounted for a significant proportion of variance in physical activity behaviour, over that accounted for by illness-related variables. Although fear-avoidance beliefs accounted for a significant proportion of variance in the initial analyses, its effect was explained by other factors in the final multivariable analyses. Self-efficacy, mental fatigue and perceived barriers to physical activity are potentially modifiable variables which could be incorporated into interventions designed to improve physical activity engagement. Future research should explore whether a measurement tool tailored to capture beliefs about physical activity identified by people with MS would better predict participation in physical activity.
[Violence and post-traumatic stress disorder in childhood].
Ximenes, Liana Furtado; de Oliveira, Raquel de Vasconcelos Carvalhães; de Assis, Simone Gonçalves
2009-01-01
This study presents the prevalence of symptoms of Posttraumatic Stress Disorder (PTSD) in 500 schoolchildren (6-13 years old) in São Gonçalo, Rio de Janeiro. It also investigates the association between PTSD, violence and other adverse events in the lives of these children. The multi-stage cluster sampling strategy involved three selection stages. Parents were interviewed about their children's behavior. The instrument used to screen symptoms of PTSD was the Child Behavior Checklist-Posttraumatic Stress Disorder Scale (CBCL-PTSD). Conflict Tactics Scales (CTS) were applied to evaluate family violence, and other scales to investigate the socioeconomic profile, family relationships, and the characteristics and adverse events in the lives of the children. Multivariate analysis was performed using a hierarchical model with a significance level of 5%. The prevalence of clinical symptoms of PTSD was 6.5%. The multivariate analysis suggested an explanatory model of PTSD characterized by 18 variables, including the child's characteristics, specific life events, family violence, and other family factors. The results reveal that it is necessary to work with children in particularly difficult moments of their lives in order to prevent or minimize the impact of adverse events on their mental and social functioning.
Genetic association of impulsivity in young adults: a multivariate study
Khadka, S; Narayanan, B; Meda, S A; Gelernter, J; Han, S; Sawyer, B; Aslanzadeh, F; Stevens, M C; Hawkins, K A; Anticevic, A; Potenza, M N; Pearlson, G D
2014-01-01
Impulsivity is a heritable, multifaceted construct with clinically relevant links to multiple psychopathologies. We assessed impulsivity in young adult (N~2100) participants in a longitudinal study, using self-report questionnaires and computer-based behavioral tasks. Analysis was restricted to the subset (N=426) who underwent genotyping. Multivariate association between impulsivity measures and single-nucleotide polymorphism data was implemented using parallel independent component analysis (Para-ICA). Pathways associated with multiple genes in components that correlated significantly with impulsivity phenotypes were then identified using a pathway enrichment analysis. Para-ICA revealed two significantly correlated genotype–phenotype component pairs. One impulsivity component included the reward responsiveness subscale and behavioral inhibition scale of the Behavioral-Inhibition System/Behavioral-Activation System scale, and the second impulsivity component included the non-planning subscale of the Barratt Impulsiveness Scale and the Experiential Discounting Task. Pathway analysis identified processes related to neurogenesis, nervous system signal generation/amplification, neurotransmission and immune response. We identified various genes and gene regulatory pathways associated with empirically derived impulsivity components. Our study suggests that gene networks implicated previously in brain development, neurotransmission and immune response are related to impulsive tendencies and behaviors. PMID:25268255
Piecewise multivariate modelling of sequential metabolic profiling data.
Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan
2008-02-19
Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
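The piecewise idea above, one model per successive pair of time points, chained to describe a trajectory that is non-linear in time, can be illustrated with a deliberately simple stand-in for the OPLS pieces. Here each "piece" is just the mean change vector on training subjects; all data are synthetic, and this is a structural sketch only, not an OPLS implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
c1 = np.array([1.0, -0.5, 0.0, 0.0, 0.2])   # linear part of each metabolite's course
c2 = np.array([0.3, 0.2, -0.1, 0.0, 0.1])   # curvature: trajectory non-linear in time
# hypothetical metabolite matrices: 20 subjects x 5 metabolites at 4 time points
data = {t: c1 * t + c2 * t**2 + rng.normal(scale=0.1, size=(20, 5)) for t in range(4)}

train, test = slice(0, 10), slice(10, 20)
# one "piece" per successive pair of time points (stand-in for an OPLS component)
pieces = [(data[t + 1][train] - data[t][train]).mean(axis=0) for t in range(3)]

piecewise_pred = data[0][test] + sum(pieces)   # chain the pieces from t=0 to t=3
linear_pred = data[0][test] + 3 * pieces[0]    # single global linear-in-time model
piecewise_err = float(np.abs(piecewise_pred - data[3][test]).mean())
linear_err = float(np.abs(linear_pred - data[3][test]).mean())
```

Each piece is linear, but chaining them follows the curved time course, whereas extrapolating a single interval's model cannot, which is the paper's central argument for piecewise modelling.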
Yang, James J; Williams, L Keoki; Buu, Anne
2017-08-24
A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
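The second step's Fisher combination statistic is straightforward to sketch. Under independence its null distribution is chi-square with 2k degrees of freedom (the paper instead estimates the null from the residual correlations, which this sketch does not reproduce); the p-values below are invented for illustration.

```python
import math
import numpy as np

def fisher_statistic(pvals):
    """Fisher's combination statistic X = -2 * sum(log p_i)."""
    return -2.0 * float(np.sum(np.log(pvals)))

def chi2_sf_even(x, df):
    """Chi-square survival function for even df, via the closed-form series
    sf(x) = exp(-x/2) * sum_{i < df/2} (x/2)^i / i!."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

# invented p-values for one SNP against four correlated phenotypes
pvals = np.array([0.01, 0.04, 0.20, 0.03])
X = fisher_statistic(pvals)
p_combined = chi2_sf_even(X, df=2 * len(pvals))   # independence null, illustrative
```

Several moderately small marginal p-values combine into a much smaller joint one, which is why the combined test gains power for pleiotropic effects; correlated phenotypes require the adjusted null the paper describes.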
Multi-scale Modeling of Arctic Clouds
NASA Astrophysics Data System (ADS)
Hillman, B. R.; Roesler, E. L.; Dexheimer, D.
2017-12-01
The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scales of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.
Javaherchi, Teymour
2016-06-08
Attached are the .cas and .dat files, along with the required User Defined Functions (UDFs) and the look-up table of lift and drag coefficients, for the Reynolds-Averaged Navier-Stokes (RANS) simulation of three coaxially located lab-scaled DOE RM1 turbines implemented in the ANSYS FLUENT CFD package. The lab-scaled DOE RM1 is a re-designed geometry, based on the full-scale DOE RM1 design, that produces the same power output as the full-scale model while operating at matched Tip Speed Ratio values at laboratory-reachable Reynolds numbers (see attached paper). In this case study the flow field around and in the wake of the lab-scaled DOE RM1 turbines in a coaxial array is simulated using the Blade Element Model (a.k.a. Virtual Blade Model) by solving the RANS equations coupled with the k-ω turbulence closure model. It should be highlighted that in this simulation the actual geometry of the rotor blade is not modeled; the effects of the rotating turbine blades are modeled using Blade Element Theory. This simulation provides an accurate estimate of the performance of each device and of the structure of their turbulent far wakes. The results of these simulations were validated against in-house experimental data. Simulations for other turbine configurations are available upon request.
NASA Astrophysics Data System (ADS)
Matsui, H.; Buffett, B. A.
2017-12-01
The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of limited spatial resolution in numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced, for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and in a spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo model based on spherical harmonics expansion. To obtain the SGS terms, spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with spherical harmonic truncation L = 255 as a reference, as well as unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification by comparing these simulations, and the role of the small-scale fields in the large-scale dynamics as captured by the SGS terms in the LES.
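The scale-similarity construction, a Leonard-type term built by applying a test filter to products of the resolved field, can be sketched in one dimension. The Gaussian filter and toy velocity field below are illustrative only, not the spherical-harmonic implementation in Calypso.

```python
import numpy as np

def gaussian_filter_periodic(u, sigma):
    """Convolve a periodic signal with a discrete Gaussian test filter
    (circular convolution via FFT)."""
    n = len(u)
    grid = np.arange(n) - n // 2
    k = np.exp(-0.5 * (grid / sigma) ** 2)
    k /= k.sum()
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(np.fft.ifftshift(k))))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(17 * x)   # resolved large scale + small-scale flow
# Leonard-type scale-similarity stress: tau = filt(u*u) - filt(u)*filt(u)
tau = gaussian_filter_periodic(u * u, 3.0) - gaussian_filter_periodic(u, 3.0) ** 2
```

Because the Gaussian kernel is a positive weighting, tau is non-negative and concentrates where the small-scale component lives, which is exactly the information the SGS momentum-flux term feeds back to the resolved equations.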
NASA Astrophysics Data System (ADS)
Cannon, Alex J.
2018-01-01
Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
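The two ingredients of MBCn, univariate quantile mapping and its iteration under random orthogonal rotations so that the full multivariate distribution is transferred, can be sketched in a minimal re-implementation. This is a toy version on synthetic Gaussian data, not the authors' code; the rotation count and data are assumptions.

```python
import numpy as np

def quantile_map(model, obs):
    """Univariate quantile mapping: send each model value to the observed
    value at the same empirical quantile."""
    q = (np.argsort(np.argsort(model)) + 0.5) / len(model)
    return np.quantile(obs, q)

def mbcn_step(model, obs, rng):
    """One MBCn-style iteration: random orthogonal rotation of both data
    sets, quantile-map each rotated axis, rotate back."""
    R = np.linalg.qr(rng.normal(size=(2, 2)))[0]
    m_rot, o_rot = model @ R, obs @ R
    mapped = np.column_stack(
        [quantile_map(m_rot[:, j], o_rot[:, j]) for j in range(2)]
    )
    return mapped @ R.T

rng = np.random.default_rng(6)
obs = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=2000)
model = rng.normal(size=(2000, 2))     # biased model: no inter-variable correlation
for _ in range(30):
    model = mbcn_step(model, obs, rng)
corr = np.corrcoef(model.T)[0, 1]
```

A single pass of quantile mapping would fix each marginal but leave the variables uncorrelated; iterating under rotations also pulls the inter-variable dependence toward the observed structure, which is why MBCn helps for compound indices like the FWI.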
Multimodel Simulation of Water Flow: Uncertainty Analysis
USDA-ARS?s Scientific Manuscript database
Simulations of soil water flow require measurements of soil hydraulic properties which are particularly difficult at the field scale. Laboratory measurements provide hydraulic properties at scales finer than the field scale, whereas pedotransfer functions (PTFs) integrate information on hydraulic pr...
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The ability of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP.
With the slow development of the tamponade, the SBM model estimates are seen to diverge from the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to identify a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that the SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
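The similarity-based estimate at the core of SBM can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the Gaussian kernel, the bandwidth `h`, and the synthetic three-variable data are all assumptions made for the sketch.

```python
import numpy as np

def sbm_estimate(D, x, h=1.0):
    """Similarity-weighted estimate of the expected state for observation
    x, given a memory matrix D (n_exemplars x n_vars) of states collected
    during normal operation; h is the Gaussian kernel bandwidth."""
    d = np.linalg.norm(D - x, axis=1)   # distance to each stored exemplar
    w = np.exp(-(d / h) ** 2)           # kernel similarity to each exemplar
    return (w / w.sum()) @ D            # weighted blend of exemplars

rng = np.random.default_rng(0)
D = rng.normal(size=(200, 3))                # training states ("normal")
x_normal = np.array([0.1, -0.2, 0.0])        # a state inside the cloud
x_fault = np.array([4.0, 4.0, 4.0])          # a decompensating state

r_normal = np.linalg.norm(x_normal - sbm_estimate(D, x_normal))
r_fault = np.linalg.norm(x_fault - sbm_estimate(D, x_fault))
print(r_normal < r_fault)  # the residual grows as the state leaves normality
```

An alarm rule then thresholds the residual, which is how such a model can flag deterioration while each individual variable is still in its normal range.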
Bolinger, Elizabeth; Reese, Caitlin; Suhr, Julie; Larrabee, Glenn J
2014-02-01
We examined the effect of simulated head injury on scores on the Neurological Complaints (NUC) and Cognitive Complaints (COG) scales of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF). Young adults with a history of mild head injury were randomly assigned to simulate head injury or give their best effort on a battery of neuropsychological tests, including the MMPI-2-RF. Simulators who also showed poor effort on performance validity tests (PVTs) were compared with controls who showed valid performance on PVTs. Results showed that both scales, but especially NUC, are elevated in individuals simulating head injury, with medium to large effect sizes. Although both scales were highly correlated with all MMPI-2-RF over-reporting validity scales, the relationship of Response Bias Scale to both NUC and COG was much stronger in the simulators than controls. Even accounting for over-reporting on the MMPI-2-RF, NUC was related to general somatic complaints regardless of group membership, whereas COG was related to both psychological distress and somatic complaints in the control group only. Neither scale was related to actual neuropsychological performance, regardless of group membership. Overall, results provide further evidence that self-reported cognitive symptoms can be due to many causes, not necessarily cognitive impairment, and can be exaggerated in a non-credible manner.
NASA Astrophysics Data System (ADS)
Wahl, Thomas; Jensen, Jürgen; Mudersbach, Christoph
2010-05-01
Storm surges along the German North Sea coastline led to major damage in the past, and the risk of inundation is expected to increase in the course of ongoing climate change. Knowledge of the characteristics of possible storm surges is essential for the performance of integrated risk analyses, e.g. based on the source-pathway-receptor concept. The latter includes storm surge simulation/analyses (source), modelling of dike/dune breach scenarios (pathway) and the quantification of potential losses (receptor). In subproject 1b of the German joint research project XtremRisK (www.xtremrisk.de), a stochastic storm surge generator for the south-eastern North Sea area is developed. The input data for the multivariate model are high-resolution sea level observations from tide gauges during extreme events. Based on 25 parameters (19 sea level parameters and 6 time parameters), observed storm surge hydrographs consisting of three tides are parameterised. After fitting common parametric probability distributions and running a large number of Monte Carlo simulations, the final reconstruction yields a set of 100,000 (default) synthetic storm surge events with a one-minute resolution. Such a data set can potentially serve as the basis for a large number of applications. For risk analyses, storm surges with peak water levels exceeding the design water levels are of special interest. The occurrence probabilities of the simulated extreme events are estimated based on multivariate statistics, considering the parameters "peak water level" and "fullness/intensity". In the past, most studies considered only the peak water levels during extreme events, which might not be the most important parameter in all cases. Here, a 2D Archimedean copula model is used for the estimation of the joint probabilities of the selected parameters, accounting for the dependence structure separately from the marginal distributions. 
In coordination with subproject 1a, the results will be used as the input for the XtremRisK subprojects 2 to 4. The project is funded by the German Federal Ministry of Education and Research (BMBF) (Project No. 03 F 0483 B).
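The copula step can be illustrated with the Gumbel family, a common Archimedean choice for positively dependent extremes. The abstract does not name the copula family, so the Gumbel form, the Kendall's tau value, and the marginal quantiles below are assumptions for the sketch.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean) copula C(u, v); theta = 1 is independence."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) when the margins are coupled by a Gumbel copula."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

tau = 0.5                   # assumed Kendall's tau: peak level vs. fullness
theta = 1.0 / (1.0 - tau)   # Gumbel parameter from Kendall's tau
p_dep = joint_exceedance(0.99, 0.99, theta)  # joint exceedance, dependent
p_ind = joint_exceedance(0.99, 0.99, 1.0)    # independence, for comparison
print(p_dep > p_ind)  # positive dependence raises the joint risk
```

The comparison makes the point of the abstract concrete: assuming independence of peak water level and fullness/intensity can understate the probability of jointly extreme events by more than an order of magnitude.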
Ford, Jon J; Richards, Matt C; Surkitt, Luke D; Chan, Alexander Yp; Slater, Sarah L; Taylor, Nicholas F; Hahne, Andrew J
2018-05-28
To identify predictors of back pain, leg pain and activity limitation in patients with early persistent low back disorders. Prospective inception cohort study; Setting: primary care private physiotherapy clinics in Melbourne, Australia. 300 adults aged 18-65 years with low back and/or referred leg pain of ≥6 weeks' and ≤6 months' duration. Not applicable. Numerical rating scales for back pain and leg pain, as well as the Oswestry Disability Scale. Prognostic factors included sociodemographics, treatment-related factors, subjective/physical examination, subgrouping factors and standardized questionnaires. Univariate analysis followed by generalized estimating equations was used to develop a multivariate prognostic model for back pain, leg pain and activity limitation. Fifty-eight prognostic factors progressed to the multivariate stage, where 15 showed significant (p<0.05) associations with at least one of the three outcomes. There were five indicators of positive outcome (two types of low back disorder subgroups, paresthesia below the waist, walking as an easing factor and low transversus abdominis tone) and 10 indicators of negative outcome (both parents born overseas, deep leg symptoms, longer sick leave duration, high multifidus tone, clinically determined inflammation, higher back and leg pain severity, lower lifting capacity, lower work capacity and higher pain drawing percentage coverage). The preliminary model identifying predictors of low back disorders explained up to 37% of the variance in outcome. This study evaluated a comprehensive range of prognostic factors reflective of both the biomedical and psychosocial domains of low back disorders. The preliminary multivariate model requires further validation before being considered for clinical use. Copyright © 2018. Published by Elsevier Inc.
Hierarchical coarse-graining strategy for protein-membrane systems to access mesoscopic scales
Ayton, Gary S.; Lyman, Edward
2014-01-01
An overall multiscale simulation strategy for large scale coarse-grain simulations of membrane protein systems is presented. The protein is modeled as a heterogeneous elastic network, while the lipids are modeled using the hybrid analytic-systematic (HAS) methodology, where in both cases atomistic level information obtained from molecular dynamics simulation is used to parameterize the model. A feature of this approach is that from the outset liposome length scales are employed in the simulation (i.e., on the order of ½ a million lipids plus protein). A route to develop highly coarse-grained models from molecular-scale information is proposed and results for N-BAR domain protein remodeling of a liposome are presented. PMID:20158037
Tan, Chao; Zhao, Jia; Dong, Feng
2015-03-01
Flow behavior characterization is important to understand gas-liquid two-phase flow mechanics and further establish its description model. Electrical Resistance Tomography (ERT) provides information on flow conditions along the different directions in which the sensing electrodes are implemented. We extracted the multivariate sample entropy (MSampEn) by treating ERT data as a multivariate time series. The dynamic experimental results indicate that MSampEn is sensitive to complexity changes of flow patterns including bubbly flow, stratified flow, plug flow and slug flow. MSampEn can characterize the flow behavior in different directions of two-phase flow, and reveal the transition between flow patterns when the flow velocity changes. The proposed method is effective for analyzing two-phase flow pattern transitions by incorporating information from different scales and different spatial directions. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
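The idea behind multivariate sample entropy can be sketched as follows. This is a simplified illustration: the full MSampEn definition (Ahmed and Mandic) averages over subspace embedding combinations, whereas here all channels are embedded jointly, and the tolerance and two-channel test signals are invented for the sketch.

```python
import numpy as np

def msampen(X, m=2, r=0.2):
    """Simplified multivariate sample entropy: the channels of X
    (n_channels x n_samples) are embedded jointly with length m, and
    template matches are counted under the Chebyshev norm with
    tolerance r times the pooled standard deviation."""
    X = np.asarray(X, dtype=float)
    tol = r * X.std()
    n = X.shape[1]

    def match_fraction(mm):
        # composite delay vectors: mm consecutive samples per channel
        V = np.array([np.concatenate([ch[i:i + mm] for ch in X])
                      for i in range(n - m)])
        d = np.max(np.abs(V[:, None, :] - V[None, :, :]), axis=2)
        iu = np.triu_indices(len(V), k=1)
        return np.mean(d[iu] <= tol)

    b, a = match_fraction(m), match_fraction(m + 1)
    return -np.log(a / b)   # fewer (m+1)-matches => higher entropy

rng = np.random.default_rng(1)
t = np.arange(300)
regular = np.vstack([np.sin(0.2 * t), np.cos(0.2 * t)])  # predictable
noise = rng.normal(size=(2, 300))                        # irregular
e_reg, e_noise = msampen(regular, r=0.5), msampen(noise, r=0.5)
print(e_reg < e_noise)  # entropy rises with signal complexity
```

A regular (predictable) multichannel signal keeps matching as the template grows, giving low entropy; an irregular one does not, which is the property that makes the measure sensitive to flow pattern complexity.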
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.
2014-12-01
Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water content) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and its microphysics.
Detecting the global and regional effects of sulphate aerosol geoengineering
NASA Astrophysics Data System (ADS)
Lo, Eunice; Charlton-Perez, Andrew; Highwood, Ellie
2017-04-01
Climate warming is unequivocal. In addition to carbon dioxide emission mitigation, some geoengineering ideas have been proposed to reduce future surface temperature rise. One of these proposals involves injecting sulphate aerosols into the stratosphere to increase the planet's albedo. Monitoring the effectiveness of sulphate aerosol injection (SAI) would require us to be able to distinguish and detect its cooling effect from the climate system's internal variability and other externally forced temperature changes. This research uses optimal fingerprinting techniques together with simulations from the GeoMIP database to estimate the number of years of observations that would be needed to detect SAI's cooling signal in near-surface air temperature, should 5 Tg of sulphur dioxide be injected into the stratosphere per year on top of RCP4.5 from 2020 to 2070. The first part of the research compares the application of two detection methods that have different null hypotheses to SAI detection in global mean near-surface temperature. The first method assumes climate noise to be dominated by unforced climate variability and attempts to detect the SAI cooling signal and greenhouse gas driven warming signal in the "observations" simultaneously against this noise. The second method considers greenhouse gas driven warming to be a non-stationary background climate and attempts to detect the net cooling effect of SAI against this background. Results from this part of the research show that the conventional multi-variate detection method that has been extensively used to attribute climate warming to anthropogenic sources could also be applied to geoengineering detection. The second part of the research investigates detection of geoengineering effects on the regional scale. 
The globe is divided into various sub-continental scale regions and the cooling effect of SAI is looked for in the temperature time series in each of these regions using total least squares multi-variate detection. Results show that surface temperature observations would be most useful for SAI detection in the Northern Hemisphere mid-latitudes, especially in East Asia. This can be used to indicate the optimal observational network for monitoring the effectiveness of SAI in the future, should that be needed.
Validation of mathematical model for CZ process using small-scale laboratory crystal growth furnace
NASA Astrophysics Data System (ADS)
Bergfelds, Kristaps; Sabanskis, Andrejs; Virbulis, Janis
2018-05-01
The present work focuses on the modelling of a small-scale laboratory NaCl-RbCl crystal growth furnace. First steps towards fully transient simulations are taken in the form of stationary simulations that deal with the optimization of material properties to match the model to experimental conditions. For this purpose, simulation software primarily used for the modelling of the industrial-scale silicon crystal growth process was successfully applied. Finally, transient simulations of the crystal growth are presented, showing sufficient agreement with experimental results.
NASA Astrophysics Data System (ADS)
Wang, Audrey; Price, David T.
2007-03-01
A simple integrated algorithm was developed to relate global climatology to distributions of tree plant functional types (PFTs). Multivariate cluster analysis was performed to analyze the statistical homogeneity of the climate space occupied by individual tree PFTs. Forested regions identified from the satellite-based GLC2000 classification were separated into tropical, temperate, and boreal sub-PFTs for use in the Canadian Terrestrial Ecosystem Model (CTEM). Global data sets of monthly minimum temperature, growing degree days, an index of climatic moisture, and estimated PFT cover fractions were then used as variables in the cluster analysis. The statistical results for individual PFT clusters were found to be consistent with other global-scale classifications of dominant vegetation. Improving the quantification of climatic limitations on PFT distributions, the results also demonstrated overlapping of PFT cluster boundaries reflecting vegetation transitions, for example between tropical and temperate biomes. The resulting global database should provide a better basis for simulating the interaction of climate change and terrestrial ecosystem dynamics using global vegetation models.
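Clustering a climate space can be illustrated with a minimal k-means (Lloyd) iteration. The abstract does not state which clustering algorithm was used, and the two-variable synthetic "climate" data below (a temperature variable and a moisture index) are invented for the sketch, not the CTEM inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic "climate space": minimum temperature [deg C], moisture index
boreal = rng.normal([-25.0, 0.3], 0.5, size=(50, 2))
tropical = rng.normal([22.0, 0.8], 0.5, size=(50, 2))
X = np.vstack([boreal, tropical])

centers = X[rng.choice(len(X), 2, replace=False)]  # random initial centers
for _ in range(20):  # Lloyd iterations: assign, then re-estimate centers
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

# grid cells from the same climate regime end up sharing a cluster
print(labels[0] != labels[50])
```

In practice the variables would be standardized first, since growing degree days and a moisture index live on very different scales; the overlap of cluster boundaries noted in the abstract corresponds to points near-equidistant from two centers.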
Zhang, J D; Berntenis, N; Roth, A; Ebeling, M
2014-06-01
Gene signatures of drug-induced toxicity are of broad interest, but they are often identified from small-scale, single-time-point experiments and are therefore of limited applicability. To address this issue, we performed multivariate analysis of gene expression, cell-based assays, and histopathological data in the TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation system) database. Data mining highlights four genes (EGR1, ATF3, GDF15 and FGF21) that are induced 2 h after drug administration in human and rat primary hepatocytes poised to eventually undergo cytotoxicity-induced cell death. Modelling and simulation reveal that these early stress-response genes form a functional network with an evolutionarily conserved structure and intrinsic dynamics. This is underlined by the fact that early induction of this network in vivo predicts drug-induced liver and kidney pathology with high accuracy. Our findings demonstrate the value of early gene-expression signatures in predicting and understanding compound-induced toxicity. The identified network can empower first-line tests that reduce animal use and the costs of safety evaluation.
Comparison of Penalty Functions for Sparse Canonical Correlation Analysis
Chalise, Prabhakar; Fridley, Brooke L.
2011-01-01
Canonical correlation analysis (CCA) is a widely used multivariate method for assessing the association between two sets of variables. However, when the number of variables far exceeds the number of subjects, such as in the case of large-scale genomic studies, the traditional CCA method is not appropriate. In addition, when the variables are highly correlated, the sample covariance matrices become unstable or undefined. To overcome these two issues, sparse canonical correlation analysis (SCCA) for multiple data sets has been proposed using a Lasso-type penalty. However, these methods do not have direct control over the sparsity of the solution. An additional step that uses the Bayesian Information Criterion (BIC) has also been suggested to further filter out unimportant features. In this paper, a comparison of four penalty functions (Lasso, Elastic-net, SCAD and Hard-threshold) for SCCA with and without the BIC filtering step has been carried out using both real and simulated genotypic and mRNA expression data. This study indicates that the SCAD penalty with the BIC filter would be a preferable penalty function for application of SCCA to genomic data. PMID:21984855
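The Lasso-penalized SCCA idea can be sketched as alternating soft-thresholded power iterations on the cross-covariance matrix, in the spirit of Witten and Tibshirani's penalized matrix decomposition, which approximates the within-set covariance matrices by the identity. The penalty value and the simulated data below are illustrative, not the settings compared in the paper.

```python
import numpy as np

def soft(a, lam):
    """Soft-thresholding, the proximal operator of the Lasso penalty."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def sparse_cca(X, Y, lam, iters=30):
    """One sparse canonical pair via alternating soft-thresholded power
    iterations on X'Y (treats the within-set covariances as identity)."""
    C = X.T @ Y
    v = np.ones(Y.shape[1]) / np.sqrt(Y.shape[1])
    for _ in range(iters):
        u = soft(C @ v, lam)
        u /= np.linalg.norm(u) or 1.0
        v = soft(C.T @ u, lam)
        v /= np.linalg.norm(v) or 1.0
    return u, v

rng = np.random.default_rng(4)
z = rng.normal(size=100)                         # shared latent signal
X = rng.normal(size=(100, 8)); X[:, 0] += 2 * z  # only X feature 0 and
Y = rng.normal(size=(100, 6)); Y[:, 1] += 2 * z  # Y feature 1 covary
u, v = sparse_cca(X, Y, lam=50.0)
print(np.argmax(np.abs(u)), np.argmax(np.abs(v)))
```

The soft-thresholding step is what distinguishes Lasso from the other penalties in the comparison: SCAD and hard-thresholding replace it with operators that shrink large loadings less (or not at all), which is what gives SCAD its favourable bias properties.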
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
Glacial legacies on interglacial vegetation at the Pliocene-Pleistocene transition in NE Asia
Herzschuh, Ulrike; Birks, H. John B.; Laepple, Thomas; Andreev, Andrei; Melles, Martin; Brigham-Grette, Julie
2016-01-01
Broad-scale climate control of vegetation is widely assumed. Vegetation-climate lags are generally thought to have lasted no more than a few centuries. Here our palaeoecological study challenges this concept over glacial–interglacial timescales. Through multivariate analyses of pollen assemblages from Lake El'gygytgyn, Russian Far East and other data we show that interglacial vegetation during the Plio-Pleistocene transition mainly reflects conditions of the preceding glacial instead of contemporary interglacial climate. Vegetation–climate disequilibrium may persist for several millennia, related to the combined effects of permafrost persistence, distant glacial refugia and fire. In contrast, no effects from the preceding interglacial on glacial vegetation are detected. We propose that disequilibrium was stronger during the Plio-Pleistocene transition than during the Mid-Pliocene Warm Period when, in addition to climate, herbivory was important. By analogy to the past, we suggest today's widespread larch ecosystem on permafrost is not in climate equilibrium. Vegetation-based reconstructions of interglacial climates used to assess atmospheric CO2–temperature relationships may thus yield misleading simulations of past global climate sensitivity. PMID:27338025
New agreement measures based on survival processes
Guo, Ying; Li, Ruosha; Peng, Limin; Manatunga, Amita K.
2013-01-01
The need to assess agreement arises in many scenarios in biomedical sciences when measurements are taken by different methods on the same subjects. When the endpoints are survival outcomes, the study of agreement becomes more challenging given the special characteristics of time-to-event data. In this paper, we propose a new framework for assessing agreement based on survival processes, which can be viewed as a natural representation of time-to-event outcomes. Our new agreement measure is formulated as the chance-corrected concordance between survival processes. It provides a new perspective for studying the relationship between correlated survival outcomes and offers an appealing interpretation as the agreement between survival times on the absolute distance scale. We provide a multivariate extension of the proposed agreement measure for multiple methods. Furthermore, the new framework enables a natural extension to evaluate time-dependent agreement structure. We develop nonparametric estimation of the proposed new agreement measures. Our estimators are shown to be strongly consistent and asymptotically normal. We evaluate the performance of the proposed estimators through simulation studies and then illustrate the methods using a prostate cancer data example. PMID:23844617
Performance of b-jet identification in the ATLAS experiment
Aad, G; Abbott, B; Abdallah, J; ...
2016-04-04
The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets, as well as the b-tagging algorithm used in the online trigger, are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.
State-Space Analysis of Granger-Geweke Causality Measures with Application to fMRI.
Solo, Victor
2016-05-01
The recent interest in the dynamics of networks and the advent, across a range of applications, of measuring modalities that operate on different temporal scales have put the spotlight on some significant gaps in the theory of multivariate time series. Fundamental to the description of network dynamics is the direction of interaction between nodes, accompanied by a measure of the strength of such interactions. Granger causality and its associated frequency domain strength measures (GEMs) (due to Geweke) provide a framework for the formulation and analysis of these issues. In pursuing this setup, three significant unresolved issues emerge. First, computing GEMs involves computing submodels of vector time series models, for which reliable methods do not exist. Second, the impact of filtering on GEMs has never been definitively established. Third, the impact of downsampling on GEMs has never been established. In this work, using state-space methods, we resolve all these issues and illustrate the results with some simulations. Our analysis is motivated by some problems in (fMRI) brain imaging, to which we apply it, but it is of general applicability.
Probabilistic flood damage modelling at the meso-scale
NASA Astrophysics Data System (ADS)
Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno
2014-05-01
Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand by comparison with official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of damage estimation remain high. Thus, a significant advantage of the probabilistic flood loss estimation model BT-FLEMO is that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
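The bagging idea behind BT-FLEMO, fitting one tree per bootstrap resample so that the spread of ensemble predictions yields a damage distribution rather than a point estimate, can be sketched with one-split trees (stumps). The depth-damage data and the stump regressor below are illustrative simplifications, not the FLEMO model or its predictors.

```python
import numpy as np

def fit_stump(x, y):
    """One-split regression tree: pick the threshold minimising the
    squared error; predict the mean response on each side."""
    best = None
    for t in np.unique(x)[:-1]:  # last value would leave an empty side
        left, right = y[x <= t], y[x > t]
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda q: np.where(q <= t, lo, hi)

rng = np.random.default_rng(5)
depth = rng.uniform(0.0, 3.0, 150)                    # water depth [m]
damage = 10.0 + 25.0 * (depth > 1.5) + rng.normal(0.0, 2.0, 150)  # loss [%]

# bagging: each bootstrap resample yields one tree; the ensemble of
# predictions at a query depth forms a damage *distribution*
preds = []
for _ in range(100):
    i = rng.integers(0, 150, 150)        # bootstrap resample (with replacement)
    preds.append(float(fit_stump(depth[i], damage[i])(2.0)))
preds = np.array(preds)
print(preds.mean(), preds.std())         # central estimate and its spread
```

Reporting `preds` as a distribution, rather than collapsing it to its mean, is precisely the "inherent quantitative uncertainty information" the abstract highlights as the advantage over deterministic stage-damage functions.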
Preference-based Health status in a German outpatient cohort with multiple sclerosis
2013-01-01
Background To prospectively determine health status and health utility and its predictors in patients with multiple sclerosis (MS). Methods A total of 144 MS patients (mean age: 41.0 ±11.3y) with different subtypes (patterns of progression) and severities of MS were recruited in an outpatient university clinic in Germany. Patients completed a questionnaire at baseline (n = 144), 6 months (n = 65) and 12 months (n = 55). Health utilities were assessed using the EuroQol instrument (EQ-5D, EQ VAS). Health status was assessed by several scales (Expanded Disability Severity Scale (EDSS), Modified Fatigue Impact Scale (M-FIS), Functional Assessment of MS (FAMS), Beck Depression Inventory (BDI-II) and Multiple Sclerosis Functional Composite (MSFC)). Additionally, demographic and socioeconomic parameters were assessed. Multivariate linear and logistic regressions were applied to reveal independent predictors of health status. Results Health status is substantially diminished in MS patients and the EQ VAS was considerably lower than that of the general German population. No significant change in health-status parameters was observed over a 12-month period. Multivariate analyses revealed M-FIS, BDI-II, MSFC, and EDSS to be significant predictors of reduced health status. Socioeconomic and socio-demographic parameters such as working status, family status, number of household inhabitants, age, and gender did not prove significant in multivariate analyses. Conclusion MS considerably impairs patients’ health status. Guidelines aiming to improve self-reported health status should include treatment options for depression and fatigue. Physicians should be aware of depression and fatigue as co-morbidities. Future studies should consider the minimal clinical difference when health status is a primary outcome. PMID:24089999
Hot-bench simulation of the active flexible wing wind-tunnel model
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.; Houck, Jacob A.
1990-01-01
Two simulations, one batch and one real-time, of an aeroelastically-scaled wind-tunnel model were developed. The wind-tunnel model was a full-span, free-to-roll model of an advanced fighter concept. The batch simulation was used to generate and verify the real-time simulation and to test candidate control laws prior to implementation. The real-time simulation supported hot-bench testing of a digital controller, which was developed to actively control the elastic deformation of the wind-tunnel model. Time scaling was required for hot-bench testing. The wind-tunnel model, the mathematical models for the simulations, the techniques employed to reduce the hot-bench time-scale factors, and the verification procedures are described.
A Group Simulation of the Development of the Geologic Time Scale.
ERIC Educational Resources Information Center
Bennington, J. Bret
2000-01-01
Explains how to demonstrate to students that the relative dating of rock layers is redundant. Uses two column diagrams to simulate stratigraphic sequences from two different geological time scales and asks students to complete the time scale. (YDS)
Spiking neural network simulation: memory-optimal synaptic event scheduling.
Stewart, Robert D; Gurney, Kevin N
2011-06-01
Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
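One way event-scheduling memory can scale with the number of neurons rather than the number of synapses is a per-neuron circular delay buffer in which events landing in the same time slot accumulate. This is a generic sketch of that idea, not the authors' discrete or continuous scheduling algorithms; the class name and the weight values are invented.

```python
class NeuronBuffer:
    """Per-neuron circular buffer of future synaptic input: memory is
    O(n_neurons * max_delay) rather than one entry per scheduled event."""
    def __init__(self, max_delay):
        self.slots = [0.0] * max_delay
        self.t = 0

    def schedule(self, delay, weight):
        # accumulate into the slot 'delay' steps ahead (1 <= delay < max_delay)
        self.slots[(self.t + delay) % len(self.slots)] += weight

    def advance(self):
        # deliver and clear the current slot, then step forward in time
        i = self.t % len(self.slots)
        out, self.slots[i] = self.slots[i], 0.0
        self.t += 1
        return out

n = NeuronBuffer(max_delay=4)
n.schedule(1, 0.5)    # one event due next step
n.schedule(3, 0.25)   # two events due in three steps:
n.schedule(3, 0.25)   # same-slot events accumulate, costing no extra memory
delivered = [n.advance() for _ in range(4)]
print(delivered)  # [0.0, 0.5, 0.0, 0.5]
```

Because arbitrarily many same-slot events fold into one accumulator, the buffer size depends only on the maximum transmission delay, which is the property that lets memory scale with neurons instead of synapses.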
If You've Got It, Use It (Simulation, That Is...)
NASA Technical Reports Server (NTRS)
Frost, Chad; Tucker, George
2006-01-01
This viewgraph presentation reviews the Rotorcraft Aircrew Systems Concept Airborne Laboratory (RASCAL) UH-60 in-flight simulator, the use of simulation in support of safety monitor design specification development, the development of a failure/recovery (F/R) rating scale, the use of the F/R rating scale as a common element between simulation and flight evaluation, and the expansion of the flight envelope without benefit of simulation.
2012-10-01
using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial ... parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we ... Acronyms: IBI, iterative Boltzmann inversion; LAMMPS, Large-scale Atomic/Molecular Massively Parallel Simulator; MAPS, Materials Processes and Simulations; MS
Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale
NASA Astrophysics Data System (ADS)
Barrios, M. I.
2013-12-01
Hydrological science requires a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult, so an understanding of scaling is a key issue for advancing the science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Compared with field experimentation, numerical simulations have the advantage of covering a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations for discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results showed numerical stability issues for particular conditions, revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues.
Moreover, the implementation of this virtual lab improved students' ability to understand the rationale of these processes and to translate the mathematical models into computational representations.
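The point-scale model named in the abstract above, Green-Ampt, admits a compact numerical treatment: under ponded conditions, cumulative infiltration F(t) satisfies K t = F − ψΔθ ln(1 + F/(ψΔθ)), an implicit equation solvable by fixed-point iteration (the update map is a contraction for F > 0). The sketch below is a generic textbook implementation with illustrative parameter names and units, not the study's actual code.

```python
import math

def green_ampt_F(K, psi, dtheta, t, tol=1e-10):
    """Cumulative infiltration F(t) from the ponded Green-Ampt equation
    K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta)), solved by fixed-point
    iteration F <- K*t + psi*dtheta*ln(1 + F/(psi*dtheta)).
    K: saturated hydraulic conductivity [cm/h]; psi: wetting-front
    suction head [cm]; dtheta: soil moisture deficit [-]; t: time [h]."""
    a = psi * dtheta
    F = max(K * t, tol)  # initial guess
    while True:
        F_new = K * t + a * math.log(1.0 + F / a)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

def infiltration_rate(K, psi, dtheta, F):
    """Green-Ampt infiltration rate f = K * (1 + psi*dtheta / F)."""
    return K * (1.0 + psi * dtheta / F)
```

Because the wetting-front suction term adds to gravity-driven flow, F(t) always exceeds K·t and the rate decays toward K as the front deepens, which is the behaviour an upscaled storage model has to reproduce.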
78 FR 71785 - Passenger Train Emergency Systems II
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-29
... in debriefing and critique sessions following emergency situations and full-scale simulations. DATES... Session Following Emergency Situations and Full-Scale Simulations V. Section-by-Section Analysis A... and simulations. As part of these amendments, FRA is incorporating by reference three American Public...
NASA Astrophysics Data System (ADS)
Kumar, S.; Jasinski, M. F.; Mocko, D. M.; Rodell, M.; Borak, J.; Li, B.; Beaudoing, H. K.; Peters-Lidard, C. D.
2017-12-01
This presentation will describe one of the first successful examples of multisensor, multivariate land data assimilation, encompassing a large suite of soil moisture, snow depth, snow cover and irrigation intensity environmental data records (EDRs) from Scanning Multi-channel Microwave Radiometer (SMMR), the Special Sensor Microwave Imager (SSM/I), the Advanced Scatterometer (ASCAT), the Moderate-Resolution Imaging Spectroradiometer (MODIS), the Advanced Microwave Scanning Radiometer (AMSR-E and AMSR2), the Soil Moisture Ocean Salinity (SMOS) mission and the Soil Moisture Active Passive (SMAP) mission. The analysis is performed using the NASA Land Information System (LIS) as an enabling tool for the U.S. National Climate Assessment (NCA). The performance of NCA Land Data Assimilation System (NCA-LDAS) is evaluated by comparing to a number of hydrological reference data products. Results indicate that multivariate assimilation provides systematic improvements in simulated soil moisture and snow depth, with marginal effects on the accuracy of simulated streamflow and ET. An important conclusion is that across all evaluated variables, assimilation of data from increasingly more modern sensors (e.g. SMOS, SMAP, AMSR2, ASCAT) produces more skillful results than assimilation of data from older sensors (e.g. SMMR, SSM/I, AMSR-E). The evaluation also indicates high skill of NCA-LDAS when compared with other land analysis products. Further, drought indicators based on NCA-LDAS output suggest a trend of longer and more severe droughts over parts of Western U.S. during 1979-2015, particularly in the Southwestern U.S.
On the scaling of small-scale jet noise to large scale
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Allen, Christopher S.
1992-01-01
An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
NASA Astrophysics Data System (ADS)
Teixeira, Filipe; Melo, André; Cordeiro, M. Natália D. S.
2010-09-01
A linear least-squares methodology was used to determine the vibrational scaling factors for the X3LYP density functional. Uncertainties for these scaling factors were calculated according to the method devised by Irikura et al. [J. Phys. Chem. A 109, 8430 (2005)]. The calibration set was systematically partitioned according to several of its descriptors and the scaling factors for X3LYP were recalculated for each subset. The results show that the scaling factors are only significant up to the second digit, irrespective of the calibration set used. Furthermore, multivariate statistical analysis allowed us to conclude that the scaling factors and the associated uncertainties are independent of the size of the calibration set and strongly suggest the practical impossibility of obtaining vibrational scaling factors with more than two significant digits.
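The linear least-squares fit described above has a closed form: minimizing Σᵢ (c ωᵢ − νᵢ)² over calculated harmonic frequencies ωᵢ and reference fundamentals νᵢ gives c = Σ ωᵢνᵢ / Σ ωᵢ². The sketch below shows that standard (Scott-Radom-style) form of the fit as an illustration; it is not the authors' code, and the data in the test are synthetic.

```python
def scaling_factor(calc, expt):
    """Least-squares vibrational scaling factor c minimizing
    sum_i (c*omega_i - nu_i)^2, with closed form
    c = sum(omega*nu) / sum(omega^2).
    calc: computed harmonic frequencies omega_i [cm^-1];
    expt: reference fundamental frequencies nu_i [cm^-1]."""
    num = sum(w * v for w, v in zip(calc, expt))
    den = sum(w * w for w in calc)
    return num / den

def rms_residual(calc, expt, c):
    """Root-mean-square deviation of the scaled frequencies, the usual
    figure of merit reported alongside the scaling factor."""
    n = len(calc)
    return (sum((c * w - v) ** 2 for w, v in zip(calc, expt)) / n) ** 0.5
```

The abstract's conclusion, that only two digits of c are significant, is a statement about the spread of the residuals relative to c, which `rms_residual` makes directly inspectable for any calibration subset.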
NASA Astrophysics Data System (ADS)
Stone, T. W.; Horstemeyer, M. F.
2012-09-01
The objective of this study is to illustrate and quantify the length scale effects related to interparticle friction under compaction. Previous studies have shown that as the length scale of a specimen decreases, the strength of a single-crystal metal or ceramic increases. The question underlying this research effort continues that thought: if there is a length scale parameter related to the strength of a material, is there a length scale parameter related to friction? To explore the length scale effects of friction, molecular dynamics (MD) simulations using an embedded atom method potential were performed to analyze the compression of two spherical FCC nickel nanoparticles at different contact angles. In the MD model study, we applied a macroscopic plastic contact formulation to determine the normal plastic contact force at the particle interfaces and used the average shear stress from the MD simulations to determine the tangential contact forces. Combining this information with the Coulomb friction law, we quantified the MD interparticle coefficient of friction and showed good agreement with experimental studies and a Discrete Element Method prediction as a function of contact angle. Lastly, we compared our MD simulation friction values to the tribological predictions of Bhushan and Nosonovsky (BN), who developed a friction scaling model based on strain gradient plasticity and dislocation-assisted sliding that includes a length scale parameter. The comparison revealed that the BN elastic friction scaling model did a much better job than the BN plastic scaling model of predicting the coefficient-of-friction values obtained from the MD simulations.
Sajadi, Seyede Fateme; Arshadi, Nasrin; Zargar, Yadolla; Mehrabizade Honarmand, Mahnaz; Hajjari, Zahra
2015-06-01
Numerous studies have demonstrated that early maladaptive schemas and emotional dysregulation are considered the defining core of borderline personality disorder. Many studies have also found a strong association between the diagnosis of borderline personality and the occurrence of suicidal ideation and dissociative symptoms. The present study was designed to investigate the relationship between borderline personality features and schema, emotion regulation, dissociative experiences, and suicidal ideation among high school students in Shiraz City, Iran. In this descriptive correlational study, 300 students (150 boys and 150 girls) were selected from high schools in Shiraz, Iran, using multi-stage random sampling. Data were collected using instruments including the Borderline Personality Features Scale for Children, the Young Schema Questionnaire-Short Form, the Difficulties in Emotion Regulation Scale (DERS), the Dissociative Experiences Scale, and the Beck Suicide Ideation Scale. Data were analyzed using the Pearson correlation coefficient and multivariate regression analysis. The results showed significant positive correlations of schema, emotion regulation, dissociative experiences, and suicidal ideation with borderline personality features. Moreover, the results of the multivariate regression analysis suggested that among the studied variables, schema was the strongest predictor of borderline features (P < 0.001). The findings of this study are in accordance with findings from previous studies and generally show a meaningful association of schema, emotion regulation, dissociative experiences, and suicidal ideation with borderline personality features.
Schultz, Arthur L.; Malcolm, Hamish A.; Bucher, Daniel J.; Linklater, Michelle; Smith, Stephen D. A.
2014-01-01
Where biological datasets are spatially limited, abiotic surrogates have been advocated to inform objective planning for Marine Protected Areas. However, this approach assumes close correlation between abiotic and biotic patterns. The Solitary Islands Marine Park, northern NSW, Australia, currently uses a habitat classification system (HCS) to assist with planning, but this is based only on data for reefs. We used Baited Remote Underwater Videos (BRUVs) to survey fish assemblages of unconsolidated substrata at different depths, distances from shore, and across an along-shore spatial scale of tens of kilometres (2 transects) to examine how well the HCS works for this dominant habitat. We used multivariate regression modelling to examine the importance of these, and other environmental factors (backscatter intensity, fine-scale bathymetric variation and rugosity), in structuring fish assemblages. There were significant differences in fish assemblages across depths, distances from shore, and over the medium spatial scale of the study: together, these factors generated the optimum model in multivariate regression. However, marginal tests suggested that backscatter intensity, which is itself a surrogate for sediment type and hardness, might also influence fish assemblages and needs further investigation. Species richness was significantly different across all factors; however, total MaxN differed significantly only between locations. This study demonstrates that the pre-existing abiotic HCS only partially represents the range of fish assemblages of unconsolidated habitats in the region. PMID:24824998
Power analysis to detect treatment effects in longitudinal clinical trials for Alzheimer's disease.
Huang, Zhiyue; Muniz-Terrera, Graciela; Tom, Brian D M
2017-09-01
Assessing cognitive and functional changes at the early stage of Alzheimer's disease (AD) and detecting treatment effects in clinical trials for early AD are challenging. Under the assumption that transformed versions of the Mini-Mental State Examination, the Clinical Dementia Rating Scale-Sum of Boxes, and the Alzheimer's Disease Assessment Scale-Cognitive Subscale tests'/components' scores are from a multivariate linear mixed-effects model, we calculated the sample sizes required to detect treatment effects on the annual rates of change in these three components in clinical trials for participants with mild cognitive impairment. Our results suggest that a large number of participants would be required to detect a clinically meaningful treatment effect in a population with preclinical or prodromal Alzheimer's disease. We found that the transformed Mini-Mental State Examination is more sensitive for detecting treatment effects in early AD than the transformed Clinical Dementia Rating Scale-Sum of Boxes and Alzheimer's Disease Assessment Scale-Cognitive Subscale. The use of optimal weights to construct powerful test statistics or sensitive composite scores/endpoints can reduce the required sample sizes needed for clinical trials. Consideration of the multivariate/joint distribution of components' scores rather than the distribution of a single composite score when designing clinical trials can lead to an increase in power and reduced sample sizes for detecting treatment effects in clinical trials for early AD.
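Sample-size calculations of the kind described above reduce, in the simplest two-arm case, to the standard normal-approximation formula n = 2 (z₁₋α/₂ + z_power)² σ² / Δ² per arm, where σ is the between-subject SD of the estimated annual rates of change and Δ is the treatment effect on that rate. The sketch below shows only this simple univariate formula, not the authors' multivariate mixed-model machinery; parameter values in the test are purely illustrative.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sd_slope, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sample comparison of
    mean annual rates of change:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd_slope^2 / delta^2.
    delta: treatment effect on the annual rate of change;
    sd_slope: between-subject SD of the rate estimates (same units)."""
    z = NormalDist()
    za = z.inv_cdf(1.0 - alpha / 2.0)  # two-sided significance quantile
    zb = z.inv_cdf(power)              # power quantile
    n = 2.0 * (za + zb) ** 2 * sd_slope ** 2 / delta ** 2
    return math.ceil(n)
```

The quadratic dependence on σ/Δ is what drives the abstract's conclusion: halving the detectable effect (or a noisier endpoint) quadruples the required sample, so a more sensitive transformed score or an optimally weighted composite directly shrinks n.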
Space construction base control system
NASA Technical Reports Server (NTRS)
Kaczynski, R. F.
1979-01-01
Several approaches for an attitude control system are studied and developed for a large space construction base that is structurally flexible. Digital simulations were obtained using the following techniques: (1) the multivariable Nyquist array method combined with closed-loop pole allocation, and (2) the linear quadratic regulator method. Equations for the three-axis simulation using the multilevel control method were generated and are presented. Several alternate control approaches are also described. A technique is demonstrated for obtaining the dynamic structural properties of a vehicle which is constructed of two or more submodules of known dynamic characteristics.
Computer simulation of a single pilot flying a modern high-performance helicopter
NASA Technical Reports Server (NTRS)
Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.
1988-01-01
Presented is a computer simulation of a human response pilot model able to execute operational flight maneuvers and vehicle stabilization of a modern high-performance helicopter. Low-order, single-variable, human response mechanisms, integrated to form a multivariable pilot structure, provide a comprehensive operational control over the vehicle. Evaluations of the integrated pilot were performed by direct insertion into a nonlinear, total-force simulation environment provided by NASA Lewis. Comparisons between the integrated pilot structure and single-variable pilot mechanisms are presented. Static and dynamically alterable configurations of the pilot structure are introduced to simulate pilot activities during vehicle maneuvers. These configurations, in conjunction with higher level, decision-making processes, are considered for use where guidance and navigational procedures, operational mode transfers, and resource sharing are required.
A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS
Fine-scale Computational Fluid Dynamics (CFD) simulation of pollutant concentrations within roadway and building microenvironments is feasible using high performance computing. Unlike currently used regulatory air quality models, fine-scale CFD simulations are able to account rig...
Assessing Cultural Competence in Graduating Students
ERIC Educational Resources Information Center
Kohli, Hermeet K.; Kohli, Amarpreet S.; Huber, Ruth; Faul, Anna C.
2010-01-01
Twofold purpose of this study was to develop a framework to understand cultural competence in graduating social work students, and test that framework for appropriateness and predictability using multivariate statistics. Scale and predictor variables were collected using an online instrument from a nationwide convenience sample of graduating…
Gyrokinetic predictions of multiscale transport in a DIII-D ITER baseline discharge
Holland, C.; Howard, N. T.; Grierson, B. A.
2017-05-08
New multiscale gyrokinetic simulations predict that electron energy transport in a DIII-D ITER baseline discharge with dominant electron heating and low input torque is multiscale in nature, with roughly equal amounts of the electron energy flux Q_e coming from long-wavelength ion-scale (k_y ρ_s < 1) and short-wavelength electron-scale (k_y ρ_s > 1) fluctuations when the gyrokinetic results match independent power balance calculations. Corresponding conventional ion-scale simulations are able to match the power balance ion energy flux Q_i, but systematically underpredict Q_e when doing so. We observe significant nonlinear cross-scale couplings in the multiscale simulations, but the exact simulation predictions are found to be extremely sensitive to variations of model input parameters within experimental uncertainties. Most notably, depending upon the exact value of the equilibrium E × B shearing rate γ_E×B used, either enhancement or suppression of the long-wavelength turbulence and transport levels in the multiscale simulations is observed relative to what is predicted by ion-scale simulations. And while the enhancement of the long-wavelength fluctuations by inclusion of the short-wavelength turbulence was previously observed in similar multiscale simulations of an Alcator C-Mod L-mode discharge, these new results show for the first time a complete suppression of long-wavelength turbulence in a multiscale simulation, for parameters at which conventional ion-scale simulation predicts small but finite levels of low-k turbulence and transport consistent with the power balance Q_i.
Though computational resource limitations prevent a fully rigorous validation assessment of these new results, they provide significant new evidence that electron energy transport in burning plasmas is likely to have a strong multiscale character, with significant nonlinear cross-scale couplings that must be fully understood to predict the performance of those plasmas with confidence.
Towards European-scale convection-resolving climate simulations with GPUs: a study with COSMO 4.19
NASA Astrophysics Data System (ADS)
Leutwyler, David; Fuhrer, Oliver; Lapillonne, Xavier; Lüthi, Daniel; Schär, Christoph
2016-09-01
The representation of moist convection in climate models represents a major challenge, due to the small scales involved. Using horizontal grid spacings of O(1 km), convection-resolving weather and climate models allow one to explicitly resolve deep convection. However, due to their extremely demanding computational requirements, they have so far been limited to short simulations and/or small computational domains. Innovations in supercomputing have led to new hybrid node designs, mixing conventional multi-core hardware and accelerators such as graphics processing units (GPUs). One of the first atmospheric models that has been fully ported to these architectures is the COSMO (Consortium for Small-scale Modeling) model. Here we present the convection-resolving COSMO model on continental scales, using a version of the model capable of using GPU accelerators. The verification of a week-long simulation containing winter storm Kyrill shows that, for this case, convection-parameterizing simulations and convection-resolving simulations agree well. Furthermore, we demonstrate the applicability of the approach to longer simulations by conducting a 3-month-long simulation of the summer season 2006. Its results corroborate findings obtained on smaller domains, such as a more credible representation of the diurnal cycle of precipitation in convection-resolving models and a tendency to produce more intense hourly precipitation events. Both simulations also show how the approach allows for the representation of interactions between synoptic-scale and meso-scale atmospheric circulations at scales ranging from 1000 to 10 km. This includes the formation of sharp cold-frontal structures, convection embedded in fronts and small eddies, and the formation and organization of propagating cold pools. Finally, we assess the performance gain from using heterogeneous hardware equipped with GPUs relative to multi-core hardware.
With the COSMO model, we now use a weather and climate model that has all the necessary modules required for real-case convection-resolving regional climate simulations on GPUs.
O'Connor, N; Milosavljević, V; Daniels, S
2011-08-01
In this paper we present the development and application of a real-time atmospheric pressure discharge monitoring diagnostic. The software-based diagnostic is designed to extract latent electrical and optical information associated with the operation of an atmospheric pressure dielectric barrier discharge (APDBD) over long time scales. Given that little is known about long-term temporal effects in such discharges, the diagnostic methodology is applied to the monitoring of an APDBD in helium and in helium with both 0.1% nitrogen and 0.1% oxygen gas admixtures over periods of tens of minutes. Given the large datasets associated with the experiments, it is shown that this process is greatly expedited by the novel application of multivariate correlations between the electrical and optical parameters of each chemistry, which in turn facilitates comparisons between the individual chemistries. The results of these studies show that the electrical and optical parameters of the discharge in helium, and upon the addition of gas admixtures, evolve over time scales far longer than the gas residence time and have been compared to current modelling works. It is envisaged that the diagnostic, together with the application of multivariate correlations, will be applied to rapid system identification and prototyping in both experimental and industrial APDBD systems in the future.
Hemakom, Apit; Powezka, Katarzyna; Goverdovsky, Valentin; Jaffer, Usman; Mandic, Danilo P
2017-12-01
A highly localized data-association measure, termed intrinsic synchrosqueezing transform (ISC), is proposed for the analysis of coupled nonlinear and non-stationary multivariate signals. This is achieved based on a combination of noise-assisted multivariate empirical mode decomposition and short-time Fourier transform-based univariate and multivariate synchrosqueezing transforms. It is shown that the ISC outperforms six other combinations of algorithms in estimating degrees of synchrony in synthetic linear and nonlinear bivariate signals. Its advantage is further illustrated in the precise identification of the synchronized respiratory and heart rate variability frequencies among a subset of bass singers of a professional choir, where it distinctly exhibits better performance than the continuous wavelet transform-based ISC. We also introduce an extension to the intrinsic phase synchrony (IPS) measure, referred to as nested intrinsic phase synchrony (N-IPS), for the empirical quantification of physically meaningful and straightforward-to-interpret trends in phase synchrony. The N-IPS is employed to reveal physically meaningful variations in the levels of cooperation in choir singing and performing a surgical procedure. Both the proposed techniques successfully reveal degrees of synchronization of the physiological signals in two different aspects: (i) precise localization of synchrony in time and frequency (ISC), and (ii) large-scale analysis for the empirical quantification of physically meaningful trends in synchrony (N-IPS).
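Phase synchrony measures of the kind extended above are typically built from instantaneous phase differences; the classical ingredient is the phase-locking value |mean over t of exp(i(φ_a(t) − φ_b(t)))|, which is 1 when the two phase series keep a constant offset and near 0 when their difference drifts uniformly. The sketch below shows only this basic building block, not the proposed ISC or N-IPS algorithms; the test signals are synthetic.

```python
import cmath

def plv(phases_a, phases_b):
    """Phase-locking value in [0, 1]: the magnitude of the circular mean
    of the phase differences. 1 = perfect phase synchrony (constant
    offset); values near 0 = no consistent phase relation."""
    n = len(phases_a)
    z = sum(cmath.exp(1j * (a - b))
            for a, b in zip(phases_a, phases_b)) / n
    return abs(z)
```

In practice the instantaneous phases would be obtained from a time-frequency decomposition (e.g. an analytic signal or, as in the abstract, mode decomposition followed by synchrosqueezing) before a measure like this is applied per frequency band.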
Changes in Concurrent Risk of Warm and Dry Years under Impact of Climate Change
NASA Astrophysics Data System (ADS)
Sarhadi, A.; Wiper, M.; Touma, D. E.; Ausín, M. C.; Diffenbaugh, N. S.
2017-12-01
Anthropogenic global warming has changed the nature and the risk of extreme climate phenomena. The changing concurrence of multiple climatic extremes (warm and dry years) may result in intensification of undesirable consequences for water resources, human and ecosystem health, and environmental equity. The present study assesses how global warming influences the probability that warm and dry years co-occur on a global scale. In the first step of the study, a designed multivariate Mann-Kendall trend analysis is used to detect the areas in which the concurrence of warm and dry years has increased, both in historical climate records and in climate models, on the global scale. The next step investigates the concurrent risk of the extremes under dynamic nonstationary conditions. A fully generalized multivariate risk framework is designed to evolve through time under dynamic nonstationary conditions. In this methodology, Bayesian dynamic copulas are developed to model the time-varying dependence structure between the two different climate extremes (warm and dry years). The results reveal an increasing trend in the concurrence risk of warm and dry years, which is in agreement with the multivariate trend analysis from historical records and climate models. In addition to providing a novel quantification of the changing probability of compound extreme events, the results of this study can help decision makers develop short- and long-term strategies to prepare for climate stresses now and in the future.
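The trend-detection step above builds on the Mann-Kendall test, whose S statistic counts concordant minus discordant pairs over time and is converted to an approximate standard-normal Z score. The minimal univariate sketch below is only the classical building block (ties ignored for simplicity); the study's designed multivariate variant is more involved.

```python
import math

def mann_kendall_z(x):
    """Normal-approximation Z statistic of the univariate Mann-Kendall
    trend test. Positive Z indicates an increasing trend; |Z| > 1.96
    is significant at the two-sided 5% level. Ties are ignored."""
    n = len(x)
    # S: number of increasing pairs minus number of decreasing pairs
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1)
            for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # no-ties variance of S
    if s > 0:
        return (s - 1) / math.sqrt(var_s)  # continuity correction
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0
```

Being rank-based, the test needs no distributional assumption on the climate series, which is why Mann-Kendall variants are the standard choice for trend screening in hydroclimatology.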
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.; Lakshmi, V.
2017-12-01
Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models, owing to their computational efficiency and their ability to resolve the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties for formulating simulation elements. Here, we evaluate the performance of a recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment-scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are equivalent cross sections (ECS) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulation of ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, the implementation of this approach has not been fully explored for catchment-scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine-scale resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work is focused on assessing the performance of SMART using remotely sensed soil moisture observations and spatially based model evaluation metrics.
Multivariate Heteroscedasticity Models for Functional Brain Connectivity.
Seiler, Christof; Holmes, Susan
2017-01-01
Functional brain connectivity is the co-occurrence of brain activity in different areas, both at rest and during tasks. The data of interest are multivariate time series measured simultaneously across brain parcels using resting-state fMRI (rfMRI). We analyze functional connectivity using two heteroscedasticity models. Our first model is low-dimensional and scales linearly in the number of brain parcels; our second model scales quadratically. We apply both models to data from the Human Connectome Project (HCP), comparing connectivity between short and conventional sleepers. We find stronger functional connectivity in short than in conventional sleepers in brain areas consistent with previous findings. This might be due to subjects falling asleep in the scanner. Consequently, we recommend including average sleep duration as a covariate to remove unwanted variation in rfMRI studies. A power analysis using the HCP data shows that a sample size of 40 detects 50% of the connectivity at a false discovery rate of 20%. We provide implementations in R and the probabilistic programming language Stan.
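The paper's heteroscedasticity models are fit in R and Stan; as a minimal stand-in, functional connectivity is often summarized by the correlation matrix of the parcel time series, with group contrasts taken between averaged matrices. The toy sketch below uses fabricated sizes and coupling strengths to show how a shared signal raises inter-parcel correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

def connectivity(ts):
    """Correlation-matrix estimate of functional connectivity.

    ts: (time, parcels) multivariate time series.
    """
    return np.corrcoef(ts, rowvar=False)

def group_mean_connectivity(n_subj, coupling):
    """Average connectivity over synthetic subjects; `coupling` scales a
    shared signal added to all parcels (a toy stand-in for the stronger
    connectivity reported in short sleepers)."""
    mats = []
    for _ in range(n_subj):
        noise = rng.standard_normal((200, 5))     # 200 samples, 5 parcels
        shared = coupling * rng.standard_normal((200, 1))
        mats.append(connectivity(noise + shared))
    return np.mean(mats, axis=0)

c_weak = group_mean_connectivity(10, 0.0)
c_strong = group_mean_connectivity(10, 1.0)
off = ~np.eye(5, dtype=bool)                      # off-diagonal mask
```

This correlation summary is a simpler descriptive statistic than the paper's models, which additionally account for subject-level heteroscedasticity.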
The WEPP Model Application in a Small Watershed in the Loess Plateau
Han, Fengpeng; Ren, Lulu; Zhang, Xingchang; Li, Zhanbin
2016-01-01
In the Loess Plateau, soil erosion has not only caused serious ecological and environmental problems but has also impacted downstream areas. Therefore, a model is needed to guide the comprehensive control of soil erosion. In this study, we introduced the WEPP model to simulate soil erosion at both the slope and watershed scales. Our analyses showed that the simulated values at the slope scale were very close to the measured ones, whereas both the simulated runoff and soil erosion at the watershed scale were higher than measured. At the slope scale, under different vegetation coverage, the simulated erosion was slightly higher than measured; at 40% coverage, the simulated results for both runoff and erosion were best. At the watershed scale, the actual annual runoff of the Liudaogou watershed is 83 m3, the sediment content is 0.097 t/m3, the annual erosion sediment yield is 8.057 t, and the erosion intensity is 0.288 t ha-1 yr-1. Both the simulated soil erosion and runoff were higher than measured, especially the runoff, but the simulated erosion trend was relatively accurate after farmland was returned to grassland. We conclude that the WEPP model can be used to establish a reasonable vegetation restoration model and to guide vegetation restoration on the Loess Plateau. PMID:26963704
Ruano, M V; Ribes, J; Seco, A; Ferrer, J
2011-01-01
This paper presents a computer tool called DSC (Simulation-based Controllers Design) that enables easy design of control systems and strategies applied to wastewater treatment plants (WWTPs). Although the control systems are developed and evaluated by simulation, the tool aims to facilitate direct implementation of the designed control system on the PC of the full-scale WWTP. The designed control system can be programmed in a dedicated control application and connected to either the simulation software or the SCADA system of the plant. To this end, DSC incorporates an OPC (OLE for Process Control) server, which provides an open-standard communication protocol for different industrial process applications. The potential of the DSC tool is illustrated through a full-scale application: an aeration control system for a nutrient-removing WWTP was designed, tuned and evaluated with the DSC tool before its implementation in the full-scale plant. The control parameters obtained by simulation were suitable for the full-scale plant, with only a few modifications needed to improve control performance. With the DSC tool, control system performance can be easily evaluated by simulation; once developed and tuned, the control systems can be applied directly to the full-scale WWTP.
NASA Astrophysics Data System (ADS)
Gong, Jinnan; Wang, Ben; Jia, Xin; Feng, Wei; Zha, Tianshan; Kellomäki, Seppo; Peltola, Heli
2018-01-01
We used process-based modelling to investigate the roles of carbon-flux (C-flux) components and plant-interspace heterogeneities in regulating soil CO2 exchange (FS) in a dryland ecosystem with sparse vegetation. To simulate the diurnal and seasonal dynamics of FS, the model considered simultaneously the CO2 production, transport and surface exchanges (e.g. biocrust photosynthesis, respiration and photodegradation). The model was parameterized and validated with multivariate data measured during 2013-2014 in a semiarid shrubland ecosystem in Yanchi, northwestern China. The simulations showed that soil rewetting could enhance CO2 dissolution and delay the emission of CO2 produced in the rooting zone. In addition, a non-negligible fraction of respired CO2 might be removed from the soil volume under respiration chambers by lateral water flow and root uptake. During rewetting, the lichen-crusted soil could shift temporarily from a net CO2 source to a sink, owing to the activated photosynthesis of the biocrust and the restricted CO2 emission from the subsoil. The presence of plant cover could decrease root-zone CO2 production and biocrust C sequestration but increase the temperature sensitivities of these fluxes. On the other hand, the sensitivity of root-zone emissions to water content was lower under canopy, which may be due to the advection of water from the interspace to the canopy. In conclusion, the complexity and plant-interspace heterogeneities of soil C processes should be carefully considered when extrapolating findings from chamber to ecosystem scales and when predicting ecosystem responses to climate change and extreme climatic events. Our model can serve as a useful tool to simulate soil CO2 efflux dynamics in dryland ecosystems.
Evaluation of WRF Model Against Satellite and Field Measurements During ARM March 2000 IOP
NASA Astrophysics Data System (ADS)
Wu, J.; Zhang, M.
2003-12-01
The mesoscale WRF model is employed to simulate the organization of clouds associated with the cyclogenesis that occurred during March 1-4, 2000 over the ARM SGP CART site. Qualitative comparisons of the simulated clouds with GOES-8 satellite images show that the WRF model captures the main cloud features associated with the cyclogenesis, and the simulated precipitation patterns match the radar reflectivity images well. Further evaluation of the simulated features on the GCM grid scale is conducted against ARM field measurements. It shows that the evolution of the simulated state fields such as temperature and moisture, the simulated wind fields, and the derived large-scale temperature and moisture tendencies closely follow the observed patterns. These results encourage us to use the mesoscale WRF model as a tool to verify the performance of GCMs in simulating cloud feedback processes associated with frontal clouds, so that we can test and validate current cloud parameterizations in climate models and make improvements to their components.
Performance of the S-χ² Statistic for Full-Information Bifactor Models
ERIC Educational Resources Information Center
Li, Ying; Rupp, Andre A.
2011-01-01
This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…
Effects of Missing Data Methods in SEM under Conditions of Incomplete and Nonnormal Data
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2017-01-01
Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…
ERIC Educational Resources Information Center
Schneider, W. Joel; Roman, Zachary
2018-01-01
We used data simulations to test whether composites consisting of cohesive subtest scores are more accurate than composites consisting of divergent subtest scores. We demonstrate that when multivariate normality holds, divergent and cohesive scores are equally accurate. Furthermore, excluding divergent scores results in biased estimates of…
ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.
ERIC Educational Resources Information Center
Vale, C. David; Gialluca, Kathleen A.
ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated that procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…
Simulation analysis of adaptive cruise prediction control
NASA Astrophysics Data System (ADS)
Zhang, Li; Cui, Sheng Min
2017-09-01
Predictive control is well suited to multivariable, multi-constraint system control. To examine the effect of predictive control on vehicle longitudinal motion, this paper establishes an expected-spacing model that combines a variable headway-spacing policy with a safety-distance strategy. Model predictive control theory and an optimization method based on quadratic programming are used to obtain and track the best expected acceleration trajectory quickly. Simulation models are established for both the predictive controller and an adaptive fuzzy controller. Simulation results show that predictive control realizes the basic functions of the system while ensuring safety, and comparison of the two algorithms under cruise conditions indicates that the predictive controller performs better.
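The paper's exact spacing policy and QP formulation are not given in the abstract; the sketch below is a generic unconstrained finite-horizon MPC for a double-integrator spacing-error model, with all weights, dimensions and horizon lengths chosen for illustration only:

```python
import numpy as np

dt, N = 0.1, 20                        # step size [s], prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])  # state: [spacing error, relative speed]
B = np.array([[0.0], [-dt]])           # input: ego acceleration
Q = np.diag([1.0, 0.1])                # state weights (illustrative)
r = 0.01                               # control weight (illustrative)

# Stacked prediction over the horizon: X = F x0 + G U.
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        G[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
Qb = np.kron(np.eye(N), Q)

def mpc_step(x0):
    """First move of the unconstrained finite-horizon QP.

    A real adaptive-cruise controller would add input and safety
    constraints and solve a constrained QP instead.
    """
    H = G.T @ Qb @ G + r * np.eye(N)
    u = np.linalg.solve(H, -G.T @ Qb @ F @ x0)
    return float(u[0])

# Closed-loop simulation: start 5 m off the desired spacing, matched speeds.
x = np.array([5.0, 0.0])
for _ in range(100):
    u = mpc_step(x)
    x = A @ x + (B * u).ravel()
```

The receding-horizon loop re-solves the QP at every step and applies only the first input, which is the defining feature of model predictive control.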
Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo
2016-09-01
Recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, an ICA-based MEMD method combining independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD) is proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals are decomposed by MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components are then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA on the EOG-related signals, the EOG-linked independent components are identified and rejected. Finally, clean EEG signals are reconstructed by applying the inverse transforms of ICA and MEMD. Results on simulated and real data suggest that the proposed method successfully eliminates EOAs from EEG signals and preserves useful EEG information with little loss. Compared with other existing techniques, the proposed method achieves a marked increase in signal-to-noise ratio and decrease in mean square error after EOA removal.
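Full MEMD and ICA implementations are too long to reproduce here, but the problem the paper addresses can be illustrated with the classical, simpler baseline it improves upon: when a reference EOG channel is available, each EEG channel's least-squares projection onto it can be subtracted. The channel weights and signal shapes below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def regress_out_eog(eeg, eog):
    """Remove an EOG reference from each EEG channel by least squares.

    eeg: (channels, samples); eog: (samples,). This is the classical
    regression baseline, not the ICA-based MEMD method of the paper.
    """
    b = eeg @ eog / (eog @ eog)        # per-channel regression weights
    return eeg - np.outer(b, eog)

# Synthetic data: slow sinusoidal "brain" activity plus a large blink.
t = np.arange(2000)
brain = np.sin(2 * np.pi * 0.01 * t) * rng.standard_normal((4, 1))
blink = np.zeros_like(t, dtype=float)
blink[500:520] = 50.0                  # large, brief blink artifact
mixing = [1.0, 0.8, 0.3, 0.1]          # frontal channels affected most
contaminated = brain + np.outer(mixing, blink)
cleaned = regress_out_eog(contaminated, blink)
```

The regression baseline needs a clean EOG reference channel and removes any brain activity correlated with it; the paper's decomposition-based approach avoids both limitations.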
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSNs). Predicting data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic, but it may not be very accurate. Simulations involving simple linear regression and multiple linear regression functions were performed to assess the performance of the proposed method. The results show a higher correlation among the gathered inputs than with time, the independent variable widely used for prediction and forecasting. Prediction accuracy is lower with simple linear regression, whereas multiple linear regression is the most accurate. In addition, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
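The abstract's comparison of time-based simple regression against multivariate regression on co-measured inputs can be reproduced on toy data. The linear sensor relationships below are fabricated for illustration and are not the paper's dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sensor history: humidity depends on co-measured temperature and
# light plus noise, but not on time itself.
n = 300
temp = 20 + 5 * rng.standard_normal(n)
light = 100 + 30 * rng.standard_normal(n)
humidity = 80 - 1.5 * temp + 0.05 * light + 0.5 * rng.standard_normal(n)

time = np.arange(n, dtype=float)
X_simple = np.column_stack([np.ones(n), time])         # regress on time only
X_multi = np.column_stack([np.ones(n), temp, light])   # multivariate inputs

def fit_rmse(X, y):
    """Least-squares fit and in-sample root-mean-square error."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((y - X @ beta) ** 2)))

rmse_time = fit_rmse(X_simple, humidity)
rmse_multi = fit_rmse(X_multi, humidity)
```

When the target variable co-varies with other sensed quantities rather than with time, the multivariate regression predicts far better, which is the effect the paper exploits to suppress transmissions.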
NASA Astrophysics Data System (ADS)
Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco
2017-10-01
Computational science and engineering methods have enabled a major change in the way products and processes are designed: validated virtual models, capable of simulating the physical, chemical and biological changes occurring during production processes, can be built and used in place of real prototypes and experiments, which are often time- and money-consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom is treated as part of the unknowns: the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of OSD are countless: for systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or a combination of the three. This paper presents one possible application of OSD, showing how the extrusion bell shape for pasta production can be designed by applying multivariate constrained shape optimization.
Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations
NASA Astrophysics Data System (ADS)
Choi, Suk-Jin; Lee, Dong-Kyou
2016-06-01
This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF model. In the control experiment (CTL), the TC frequency was considerably overestimated, and the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and the subtropical Pacific high were unrealistically simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. In comparison with the experiment using the spectral nudging method, the strengthened wind speed was mainly modulated by large-scale flow at scales greater than approximately 1000 km in the model domain. The spurious formation and unrealistic tracks of TCs in the CTL were considerably improved by reproducing a realistic large-scale atmospheric monsoon circulation, with substantial adjustment between the large-scale flow in the model domain and the large-scale boundary forcing modified by spectral nudging. The realistic monsoon circulation played a vital role in simulating realistic TCs. This indicates that, in downscaling large-scale fields for regional climate simulations, the interaction between model-generated regional features and the forcing large-scale fields should be considered, and that spectral nudging is a desirable downscaling method.
Preliminary Evaluation of Altitude Scaling for Turbofan Engine Ice Crystal Icing
NASA Technical Reports Server (NTRS)
Tsao, Jen-Ching
2017-01-01
A preliminary evaluation of altitude scaling for turbofan engine ice crystal icing simulation was conducted during the 2015 LF11 engine icing test campaign in PSL. The results showed that a simplified altitude-scaling approach can simulate the key reference engine ice growth features and the associated icing effects on the engine, but special considerations are needed to address facility operating limitations for lower-altitude engine icing simulation.
NASA Technical Reports Server (NTRS)
Kriegler, F. J.; Gordon, M. F.; Mclaughlin, R. H.; Marshall, R. E.
1975-01-01
The MIDAS (Multivariate Interactive Digital Analysis System) processor is a high-speed processor designed to process multispectral scanner data (from Landsat, EOS, aircraft, etc.) quickly and cost-effectively to meet the requirements of users of remote sensor data, especially from very large areas. MIDAS consists of a fast multipipeline preprocessor and classifier, an interactive color display and color printer, and a medium scale computer system for analysis and control. The system is designed to process data having as many as 16 spectral bands per picture element at rates of 200,000 picture elements per second into as many as 17 classes using a maximum likelihood decision rule.
Quasi-coarse-grained dynamics: modelling of metallic materials at mesoscales
NASA Astrophysics Data System (ADS)
Dongare, Avinash M.
2014-12-01
A computationally efficient modelling method called quasi-coarse-grained dynamics (QCGD) is developed to extend the capabilities of molecular dynamics (MD) simulations to the behaviour of metallic materials at mesoscales. This mesoscale method solves the equations of motion for a chosen set of representative atoms from an atomistic microstructure, using scaling relationships for the atomic-scale interatomic potentials of MD simulations to define the interactions between representative atoms. The scaling relationships retain the atomic-scale degrees of freedom, and therefore the energetics, of the representative atoms as would be predicted in MD simulations. The total energetics of the system is retained by scaling the energetics and the atomic-scale degrees of freedom of the representative atoms to account for the missing atoms in the microstructure; this scaling also permits larger time steps in the QCGD simulations. The success of the QCGD method is demonstrated by its prediction of structural energetics, high-temperature thermodynamics, interface deformation behaviour, phase transformation behaviour, plastic deformation behaviour, heat generation during plastic deformation, and wave propagation behaviour, as would be predicted by MD simulations, using a reduced number of representative atoms. The reduced number of atoms and the larger time steps enable the modelling of metallic materials at the mesoscale in extreme environments.
Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems
NASA Astrophysics Data System (ADS)
Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo
With currently available methods of computational fluid dynamics (CFD), simulating full-scale circulating fluidized bed combustors is very challenging. To capture the complex fluidization process, the calculation cells must be small and the calculation must be transient with a small time step. For full-scale systems, these requirements lead to very large meshes and very long calculation times, making such simulations impractical. This study investigates the cell size and time step required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full-scale CFB furnace is presented and the model results are compared with experimental data.
NASA Astrophysics Data System (ADS)
Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.
2013-04-01
The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging when dynamically downscaling global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) reanalysis using three continuous 20-year WRF simulations: one without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during summer, when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation affects the lower-atmosphere moisture transport and precipitable water, and hence the convective environment and precipitation. Using interior grid nudging improves the large-scale circulation aloft and the moisture transport and precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that without such a constraint the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.
Gyrokinetic simulations of DIII-D near-edge L-mode plasmas
NASA Astrophysics Data System (ADS)
Neiser, Tom; Jenko, Frank; Carter, Troy; Schmitz, Lothar; Merlo, Gabriele; Told, Daniel; Banon Navarro, Alejandro; McKee, George; Yan, Zheng
2017-10-01
In order to understand the L-H transition, a good understanding of the L-mode edge region is necessary. We perform nonlinear gyrokinetic simulations of a DIII-D L-mode discharge with the GENE code in the near-edge region, which we define as ρtor ≥ 0.8. At ρ = 0.9, ion-scale simulations reproduce experimental heat fluxes within the uncertainty of the experiment. At ρ = 0.8, electron-scale simulations reproduce the experimental electron heat flux, while ion-scale simulations do not reproduce the corresponding ion heat flux due to a strong poloidal zonal flow. However, we reproduce both electron and ion heat fluxes by increasing the local ion temperature gradient by 80%. Local fitting to the CER data in the domain 0.7 ≤ ρ ≤ 0.9 is compatible with such an increase in ion temperature gradient within the error bars. Ongoing multi-scale simulations are investigating whether radial electron streamers could dampen the poloidal zonal flows at ρ = 0.8 and increase the radial ion-scale flux. Supported by U.S. DOE under Contract Numbers DE-FG02-08ER54984, DE-FC02-04ER54698, and DE-AC02-05CH11231.
Ng, Jonathan; Huang, Yi -Min; Hakim, Ammar; ...
2015-11-05
As modeling of collisionless magnetic reconnection in most space plasmas with realistic parameters is beyond the capability of today's simulations, due to the separation between global and kinetic length scales, it is important to establish scaling relations in model problems so as to extrapolate to realistic scales. Furthermore, large-scale particle-in-cell simulations of island coalescence have shown that the time-averaged reconnection rate decreases with system size, while fluid systems at such large scales in the Hall regime have not been studied. Here, we perform the complementary resistive magnetohydrodynamic (MHD), Hall MHD, and two-fluid simulations using a ten-moment model with the same geometry. In contrast to the standard Harris sheet reconnection problem, Hall MHD is insufficient to capture the physics of the reconnection region. Additionally, motivated by the results of a recent set of hybrid simulations which show the importance of ion kinetics in this geometry, we evaluate the efficacy of the ten-moment model in reproducing such results.
On the Scaling Laws for Jet Noise in Subsonic and Supersonic Flow
NASA Technical Reports Server (NTRS)
Vu, Bruce; Kandula, Max
2003-01-01
The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are examined with regard to their applicability to deduce full scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. It is shown that the jet Mach number (jet exit velocity/sound speed at jet exit) is a more general and convenient parameter for noise scaling purposes than the ratio of jet exit velocity to ambient speed of sound. A similarity spectrum is also proposed, which accounts for jet Mach number, angle to the jet axis, and jet density ratio. The proposed spectrum reduces nearly to the well-known similarity spectra proposed by Tam for the large-scale and the fine-scale turbulence noise in the appropriate limit.
Scale-Dependent Rates of Uranyl Surface Complexation Reaction in Sediments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chongxuan; Shang, Jianying; Kerisit, Sebastien N.
Scale-dependency of uranyl [U(VI)] surface complexation rates was investigated in stirred flow-cell and column systems using a U(VI)-contaminated sediment from the US Department of Energy Hanford site, WA. The experimental results were used to estimate the apparent rate of U(VI) surface complexation at the grain scale and in porous media. Numerical simulations using molecular, pore-scale, and continuum models were performed to provide insights into, and to estimate the rate constants of, U(VI) surface complexation at the different scales. The results showed that the grain-scale rate constant of U(VI) surface complexation was 3 to 10 orders of magnitude smaller, depending on the temporal scale, than the rate constant calculated from the molecular simulations. The grain-scale rate was faster initially and slower with time, showing the temporal scale-dependency. The largest rate constant at the grain scale decreased an additional 2 orders of magnitude when the rate was scaled to the porous media in the column. The scaling effect from the grain scale to the porous media became less important for the slower sorption sites. Pore-scale simulations revealed the importance of coupled mass transport and reactions in both intragranular and inter-granular domains, which caused both spatial and temporal dependence of U(VI) surface complexation rates in the sediment. Pore-scale simulations also revealed a new rate-limiting mechanism in the intragranular porous domains: the rate of the coupled diffusion and surface complexation reaction was slower than either process alone. The results have important implications for developing models to scale geochemical and biogeochemical reactions.
Probabilistic simulation of multi-scale composite behavior
NASA Technical Reports Server (NTRS)
Liaw, D. G.; Shiao, M. C.; Singhal, S. N.; Chamis, Christos C.
1993-01-01
A methodology is developed to computationally assess probabilistic composite material properties at all composite scale levels due to uncertainties in the constituent (fiber and matrix) properties and in the fabrication process variables. The methodology is computationally efficient for simulating the probability distributions of material properties, and the sensitivity of each probabilistic composite material property to each random variable is determined. This information can be used to reduce undesirable uncertainties in material properties at the macro scale of the composite by reducing the uncertainties in the most influential random variables at the micro scale. The methodology was implemented in the computer code PICAN (Probabilistic Integrated Composite ANalyzer). Its accuracy and efficiency are demonstrated by simulating the uncertainties in the material properties of a typical laminate and comparing the results with the Monte Carlo simulation method. The experimental data on composite material properties at all scales fall within the scatter predicted by PICAN.
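PICAN itself cannot be reconstructed from the abstract, but the Monte Carlo baseline it is compared against is straightforward: sample the constituent properties, push each draw through a micromechanics relation, and read off the scatter of the macro-scale property. The rule-of-mixtures relation and all distribution parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo propagation of constituent scatter to a ply-level property
# (rule-of-mixtures micromechanics as an illustrative stand-in).
n = 100_000
E_f = rng.normal(230.0, 11.5, n)   # fiber modulus, GPa (assumed 5% CoV)
E_m = rng.normal(3.5, 0.35, n)     # matrix modulus, GPa (assumed 10% CoV)
V_f = rng.normal(0.60, 0.018, n)   # fiber volume fraction (assumed 3% CoV)

# Longitudinal ply modulus for each sampled constituent draw.
E_1 = V_f * E_f + (1.0 - V_f) * E_m

mean_E1 = float(E_1.mean())
cov_E1 = float(E_1.std() / E_1.mean())   # coefficient of variation
```

Sensitivity information of the kind the abstract describes falls out of the same samples, e.g. by correlating each input draw with the output draws to rank the most influential random variables.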