NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo! Japan website ( http://finance.yahoo.co.jp/ ). By applying multi-dimensional scaling (MDS) to the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set with a mixture of Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we select the best possible mixture of Gaussians. One might naturally assume that the two-dimensional data points of all stocks shrink into a single small region when an economic crisis takes place. We check this assumption numerically against empirical Japanese stock data, for instance around 11 March 2011.
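The first step of the pipeline described above, embedding a stock cross-correlation matrix into two dimensions with MDS, can be sketched as follows. This is a minimal illustration: the toy correlation matrix, the grouping of the four "stocks", and the standard correlation-to-distance transform d_ij = sqrt(2(1 - c_ij)) are illustrative assumptions, not the paper's data or exact procedure (the Gaussian-mixture/AIC step is omitted here).

```python
import numpy as np

def classical_mds(d, k=2):
    """Torgerson's classical MDS: embed a distance matrix into k dimensions."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]              # keep the top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy cross-correlation matrix for four hypothetical stocks:
# stocks 0-1 and 2-3 form two tightly correlated groups.
c = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.8],
              [0.1, 0.2, 0.8, 1.0]])
d = np.sqrt(2.0 * (1.0 - c))   # correlation-to-distance transform
xy = classical_mds(d, k=2)     # one 2-D point per stock
```

Correlated stocks land close together in the 2-D plot, so a shrinking of all points into one region (as hypothesized during a crisis) would be directly visible.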
The development of a multi-dimensional gambling accessibility scale.
Hing, Nerilee; Haw, John
2009-12-01
The aim of the current study was to develop a scale of gambling accessibility that would have theoretical significance to exposure theory and also serve to highlight the accessibility risk factors for problem gambling. Scale items were generated from the Productivity Commission's (Australia's Gambling Industries: Report No. 10. AusInfo, Canberra, 1999) recommendations and tested on a group with high exposure to the gambling environment. In total, 533 gaming venue employees (aged 18-70 years; 67% women) completed a questionnaire that included six 13-item scales measuring accessibility across a range of gambling forms (gaming machines, keno, casino table games, lotteries, horse and dog racing, sports betting). Also included in the questionnaire was the Problem Gambling Severity Index (PGSI) along with measures of gambling frequency and expenditure. Principal components analysis indicated that a common three-factor structure existed across all forms of gambling; these factors were labelled social accessibility, physical accessibility and cognitive accessibility. However, convergent validity was not demonstrated, with inconsistent correlations between each subscale and measures of gambling behaviour. These results are discussed in light of exposure theory and the further development of a multi-dimensional measure of gambling accessibility. PMID:19626427
Multi-Dimensional Regression Analysis of Time-Series Data Streams (Proc. VLDB '02)
Chen, Yixin; Dong, Guozhu
A distinguishing feature of the dynamic approach is that it relies heavily on regression and trend analysis instead of simple, static aggregates, supporting multi-dimensional analysis and data mining of time-series data streams to alert people about dramatic changes of situations and to initiate timely responses.
Multi-dimensional data scaling: dynamical cascade approach
Jovovic, Milan; Fox, Geoffrey (Indiana University)
ABSTRACT: In this report a multi-dimensional data scaling approach to data mining is proposed, in which scaling of data is used in the segmentation process in the hierarchical scale computation of feature vectors.
Exploring perceptually similar cases with multi-dimensional scaling
NASA Astrophysics Data System (ADS)
Wang, Juan; Yang, Yongyi; Wernick, Miles N.; Nishikawa, Robert M.
2014-03-01
Retrieving a set of known lesions similar to the one being evaluated might be of value for assisting radiologists to distinguish between benign and malignant clustered microcalcifications (MCs) in mammograms. In this work, we investigate how perceptually similar cases with clustered MCs may relate to one another in terms of their underlying characteristics (from disease condition to image features). We first conduct an observer study to collect similarity scores from a group of readers (five radiologists and five non-radiologists) on a set of 2,000 image pairs, which were selected from 222 cases based on their image features. We then explore the potential relationship among the different cases as revealed by their similarity ratings. We apply the multi-dimensional scaling (MDS) technique to embed all the cases in a 2-D plot, in which perceptually similar cases are placed in close vicinity of one another based on their level of similarity. Our results show that cases having different characteristics in their clustered MCs are accordingly placed in different regions in the plot. Moreover, cases of the same pathology tend to be clustered together locally, and neighboring cases (which are more similar) also tend to be similar in their clustered MCs (e.g., cluster size and shape). These results indicate that subjective similarity ratings from the readers are well correlated with the image features of the underlying MCs of the cases, and that perceptually similar cases could be of diagnostic value for discriminating between malignant and benign cases.
Development of a Multi-Dimensional Scale for PDD and ADHD
ERIC Educational Resources Information Center
Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya
2011-01-01
A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…
Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V. E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat,; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng
2010-06-08
Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.
An amino acid map of inter-residue contact energies using metric multi-dimensional scaling
Sourav Rakshit; G. K. Ananthasuresh
2008-01-01
We present a map of the amino acids based on their inter-residue contact energies, using the Miyazawa–Jernigan (MJ) matrix. This work is based on the method of metric multi-dimensional scaling (MMDS). The MMDS map shows, among other things, that the MJ contact energies imply the hydrophobic–hydrophilic nature of the amino acid residues. With the help of the map we are able to compare and
M. Ishii; S. T. Revankar; T. Leonardi; R. Dowlati; M. L. Bertodano; I. Babelli; W. Wang; H. Pokharna; V. H. Ransom; R. Viskanta; J. T. Han
1998-01-01
The three-level scaling approach was developed for the scientific design of an integral test facility and then it was applied to the design of the scaled facility known as the Purdue University Multi-Dimensional Integral Test Assembly (PUMA). The NRC Technical Program Group for severe accident scaling developed the conceptual framework for this scaling methodology. The present scaling method consists of
AstroMD: A Multi Dimensional Visualization and Analysis Toolkit for Astrophysics
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Gheller, C.; Calori, L.; Buonomo, F.; Imboden, S.
2010-10-01
Over the past few years, the role of visualization for scientific purposes has grown enormously. Astronomy makes extensive use of visualization techniques to analyze data, and scientific visualization has become a fundamental part of modern research in astronomy. With the evolution of high-performance computers, numerical simulations have assumed a great role in scientific investigation, allowing the user to run simulations at ever higher resolution. Data produced in these simulations are often multi-dimensional arrays of several physical quantities, and they are very hard to manage and analyze efficiently. Consequently, data analysis and visualization tools must follow the new requirements of the research. AstroMD is a tool for data analysis and visualization of astrophysical data; it can manage different physical quantities and multi-dimensional data sets. The tool uses virtual reality techniques by which the user has the impression of travelling through a computer-based multi-dimensional model.
An LMI Framework for Analysis and Design of Multi-dimensional Haptic Systems
Bianchini, Gianni
A framework based on passivity and Linear Matrix Inequalities (LMIs) is presented for stability analysis and controller (virtual coupling) design of multi-dimensional haptic systems, also in the presence of non-passive virtual environments.
ERIC Educational Resources Information Center
Chiou, Guo-Li; Anderson, O. Roger
2010-01-01
This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…
M&Ms4Graphs: Multi-scale, Multi-dimensional Graph Analytics Tools for Cyber-Security
Objective: We developed graph-theoretic models to characterize a complex cyber system at multiple scales. The models provide a level of continuous situational awareness in a more efficient manner than is available with current methods.
AstroMD. A multi-dimensional data analysis tool for astrophysical simulations
U. Becciani; V. Antonuccio-Delogu; C. Gheller; L. Calori; F. Buonomo; S. Imboden
2000-06-28
Over the past few years, the role of visualization for scientific purposes has grown enormously. Astronomy makes extensive use of visualization techniques to analyze data, and scientific visualization has become a fundamental part of modern research in astronomy. With the evolution of high-performance computers, numerical simulations have assumed a great role in scientific investigation, allowing the user to run simulations at ever higher resolution. Data produced in these simulations are often multi-dimensional arrays of several physical quantities, and they are very hard to manage and analyze efficiently. Consequently, data analysis and visualization tools must follow the new requirements of the research. AstroMD is a tool for data analysis and visualization of astrophysical data; it can manage different physical quantities and multi-dimensional data sets. The tool uses virtual reality techniques by which the user has the impression of travelling through a computer-based multi-dimensional model. AstroMD is freely available to the whole astronomical community (http://www.cineca.it/astromd/).
Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.
2012-05-01
This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.
Spectral analysis of multi-dimensional self-similar Markov processes
NASA Astrophysics Data System (ADS)
Modarresi, N.; Rezakhah, S.
2010-03-01
In this paper we consider a discrete scale invariant (DSI) process {X(t), t ∈ R+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k ∈ W, where α is obtained from the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete time scale invariant (DT-SI) process X(·) with the parameter space {α^k, k ∈ W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t ∈ R+} is also Markov in the wide sense and provide a discrete time scale invariant Markov (DT-SIM) process with the above sampling scheme. We present an example of the DT-SIM process, simple Brownian motion, under the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T - 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.
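The sampling scheme in this abstract can be restated compactly. This is a sketch reconstructed from the abstract's own definitions (scale l > 1, T observations per scale):

```latex
% Geometric sampling grid for a DSI process with scale l and
% T observations per scale; \alpha is fixed by l = \alpha^T:
\[
  l = \alpha^{T}, \qquad t_k = \alpha^{k}, \quad k \in W = \{0, 1, 2, \dots\},
\]
% so that T consecutive samples span exactly one scale step:
\[
  t_{k+T} = \alpha^{k+T} = l \, t_k .
\]
```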
Effective use of metadata in the integration and analysis of multi-dimensional optical data
NASA Astrophysics Data System (ADS)
Pastorello, G. Z.; Gamon, J. A.
2012-12-01
Data discovery and integration rely on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange.
The DAX (Data Acquisition to eXchange) Web-based tool uses a flexible metadata representation model and takes advantage of multi-dimensional data structures to translate metadata types into data dimensions, effectively reshaping data sets according to available metadata. With that, metadata is tightly integrated into the acquisition-to-exchange cycle, allowing for more focused exploration of data sets while also increasing the value of, and incentives for, keeping good metadata. The tool is being developed and tested with optical data collected in different settings, including laboratory, field, airborne, and satellite platforms.
NASA Astrophysics Data System (ADS)
Liu, Yong; Gao, Yuan; Lu, Qinghua; Zhou, Yongfeng; Yan, Deyue
2011-12-01
Inspired by nature's strategy for preparing collagen, herein we report a hierarchical solution self-assembly method to prepare multi-dimensional and multi-scale supra-structures from the building blocks of pristine titanate nanotubes (TNTs) around 10 nm. With the help of amylose, the nanotubes were continuously self-assembled into helically wrapped TNTs, highly aligned fibres, large bundles, 2D crystal facets and 3D core-shell hybrid crystals. The amyloses work as glue molecules to drive and direct the hierarchical self-assembly process extending from the microscopic to the macroscopic scale. The whole self-assembly process as well as the self-assembled structures were carefully characterized by a combination of 1H NMR, CD, Hr-SEM, AFM, Hr-TEM, SAED pattern and EDX measurements. A hierarchical self-assembly mechanism is also proposed. Electronic supplementary information (ESI) available: Characterization of the A/TNTs and TNT crystals. See DOI: 10.1039/c1nr11151e
The use of multi-dimensional flow and morphodynamic models for restoration design analysis
NASA Astrophysics Data System (ADS)
McDonald, R.; Nelson, J. M.
2013-12-01
River restoration projects with the goal of restoring a wide range of morphologic and ecologic channel processes and functions have become common. The complex interactions between flow and sediment-transport make it challenging to design river channels that are both self-sustaining and improve ecosystem function. The relative immaturity of the field of river restoration and shortcomings in existing methodologies for evaluating channel designs contribute to this problem, often leading to project failures. The call for increased monitoring of constructed channels to evaluate which restoration techniques do and do not work is ubiquitous and may lead to improved channel restoration projects. However, an alternative approach is to detect project flaws before the channels are built by using numerical models to simulate hydraulic and sediment-transport processes and habitat in the proposed channel (Restoration Design Analysis). Multi-dimensional models provide spatially distributed quantities throughout the project domain that may be used to quantitatively evaluate restoration designs for such important metrics as (1) the change in water-surface elevation which can affect the extent and duration of floodplain reconnection, (2) sediment-transport and morphologic change which can affect the channel stability and long-term maintenance of the design; and (3) habitat changes. These models also provide an efficient way to evaluate such quantities over a range of appropriate discharges including low-probability events which often prove the greatest risk to the long-term stability of restored channels. Currently there are many free and open-source modeling frameworks available for such analysis including iRIC, Delft3D, and TELEMAC. In this presentation we give examples of Restoration Design Analysis for each of the metrics above from projects on the Russian River, CA and the Kootenai River, ID. 
These examples demonstrate how detailed Restoration Design Analysis can be used to guide design elements and how this method can point out potential stability problems or other risks before designs proceed to the construction phase.
NASA Astrophysics Data System (ADS)
Yuen, D. A.; Dzwinel, W.; Bollig, E. F.; Kadlec, B. F.; Ben-Zion, Y.; Yoshioka, S.
2003-12-01
We have developed an interactive web-based scheme for data-mining the spatio-temporal patterns of many earthquakes. This novel technique is based on cluster analysis of the multi-resolutional structures of earthquakes. The interactive scheme is based on a client-server paradigm in which we have used the off-screen rendering technique to facilitate visual interrogation. A powerful 3-D visualization package, Amira ( www.amiravis.com ), is also used to visualize the complex cluster patterns in a reduced dimensional space. We have applied our method to observed and synthetic seismic catalogs. The observed data represent seismic activity around the Japanese islands in the 1997-2003 time interval. The synthetic data were generated by numerical simulations for various cases of a heterogeneous fault governed by quasi-analytical 3-D elastic dislocation models. At the highest resolution, we analyze the local cluster structure in the data space of seismic events for the two types of catalogs by using an agglomerative clustering algorithm. We demonstrate that small magnitude events produce local spatio-temporal patches corresponding to neighboring large events. Seismic events, quantized in space and time, generate the multi-dimensional feature space of the earthquake parameters. Using a non-hierarchical clustering algorithm and multi-dimensional scaling, we explore the multitudinous earthquakes by real-time 3-D visualization and inspection of multivariate clusters. At the resolutions characteristic of the earthquake parameters, all of the ongoing seismicity before and after the largest events accumulates into a global structure consisting of a few separate clusters in the feature space. We show that by combining the clustering results from low and high resolution spaces, we can recognize precursory events more precisely. We will discuss how this WEB-IS (Web Interrogative System) would work.
One can also access this system at http://boy.msi.umn.edu/web-is/. Its implementation and deployment in light of future GRID computing will be discussed in terms of the recently developed Narada-Brokering (distributed messaging) system of publishing and subscribing. This will provide a scalable infrastructure for several applications involving a set of nodes communicating with each other.
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla; Imre, D.; Mueller, Klaus
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
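The core idea, that two points far apart in Euclidean terms can still share the same "structure" across parallel-coordinate axes, can be illustrated with a toy metric. This is not the paper's actual metric, only a plausible minimal instantiation: it compares the shapes of the polylines two points would trace across the dimension axes while ignoring constant offsets.

```python
import numpy as np

def structural_distance(a, b):
    """Toy 'structure' metric: compare the slope profiles of the polylines
    two points trace across parallel-coordinate axes, ignoring offsets."""
    return float(np.linalg.norm(np.diff(a) - np.diff(b)))

a = np.array([0.0, 1.0, 0.0, 1.0])   # zig-zag profile across four axes
b = a + 5.0                           # same shape, far away in Euclidean terms
c = np.array([1.0, 0.0, 1.0, 0.0])   # mirrored profile, spatially nearby

# Euclidean distance calls b "far" from a and c "near";
# the structural metric reverses that judgement.
```

Feeding such a metric to MDS, for far distances only as in the paper's bi-scale framework, would group points by shared polyline pattern rather than by raw spatial proximity.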
Thomas J. Power; Stefan C. Dombrowski; Marley W. Watkins; Jennifer A. Mautone; John W. Eagle
2007-01-01
Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as deficits in homework performance. The sample included 163 students
NASA Astrophysics Data System (ADS)
Nishimura, Kohei; Muramatsu, Chisako; Oiwa, Mikinao; Shiraiwa, Misaki; Endo, Tokiko; Doi, Kunio; Fujita, Hiroshi
2013-02-01
For retrieving reference images which may be useful to radiologists in their diagnosis, it is necessary to determine a reliable similarity measure which would agree with radiologists' subjective impression. In this study, we propose a new similarity measure for retrieval of similar images, which may assist radiologists in the distinction between benign and malignant masses on mammograms, and investigated its usefulness. In our previous study, to take into account the subjective impression, the psychophysical similarity measure was determined by use of an artificial neural network (ANN), which was employed to learn the relationship between radiologists' subjective similarity ratings and image features. In this study, we propose a psychophysical similarity measure based on multi-dimensional scaling (MDS) in order to improve the accuracy in retrieval of similar images. Twenty-seven images of masses, 3 each from 9 different pathologic groups, were selected, and the subjective similarity ratings for all possible 351 pairs were determined by 8 expert physicians. MDS was applied using the average subjective ratings, and the relationship between each output axis and image features was modeled by the ANN. The MDS-based psychophysical measures were determined by the distance in the modeled space. With a leave-one-out test method, the conventional psychophysical similarity measure was moderately correlated with subjective similarity ratings (r=0.68), whereas the psychophysical measure based on MDS was highly correlated (r=0.81). The result indicates that a psychophysical similarity measure based on MDS would be useful in the retrieval of similar images.
Dual scaling of comparison and reference stimuli in multi-dimensional psychological space
Jun Zhang
2004-01-01
Dzhafarov and Colonius (Psychol. Bull. Rev. 6 (1999)239; J. Math. Psychol. 45(2001)670) proposed a theory of Fechnerian scaling of the stimulus space based on the psychometric (discrimination probability) function of a human subject in a same–different comparison task. Here, we investigate a related but different paradigm, namely, referent–probe comparison task, in which the pair of stimuli (x and y) under
Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng
2014-01-01
Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
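The approach, measuring item similarity through an MDS embedding of the user-item bipartite network and using it for item-based collaborative filtering, can be sketched in miniature. The toy matrix, the Jaccard distance between item pairs, and the similarity kernel 1/(1+d) below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def classical_mds(d, k=2):
    """Torgerson's classical MDS: embed a distance matrix into k dimensions."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:k]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy binary user-item matrix (rows: users, cols: items);
# items 0-1 and 2-3 are each consumed by the same user pairs.
r = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Jaccard distance between items from their user sets.
n_items = r.shape[1]
d = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(n_items):
        inter = np.sum((r[:, i] > 0) & (r[:, j] > 0))
        union = np.sum((r[:, i] > 0) | (r[:, j] > 0))
        d[i, j] = 1.0 - inter / union

pos = classical_mds(d, k=2)   # items embedded by MDS, noise smoothed out
sim = 1.0 / (1.0 + np.linalg.norm(pos[:, None] - pos[None, :], axis=-1))

# Item-based CF scores for a new user who consumed only item 0.
profile = np.array([1.0, 0.0, 0.0, 0.0])
scores = sim @ profile
```

As expected, item 1 (co-consumed with item 0) scores higher than items 2 and 3.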
Multi-Dimensional Scaling and MODELLER-Based Evolutionary Algorithms for Protein Model Refinement
Chen, Yan; Shang, Yi; Xu, Dong
2015-01-01
Protein structure prediction, i.e., computationally predicting the three-dimensional structure of a protein from its primary sequence, is one of the most important and challenging problems in bioinformatics. Model refinement is a key step in the prediction process, where improved structures are constructed based on a pool of initially generated models. Since the refinement category was added to the biennial Critical Assessment of Structure Prediction (CASP) in 2008, CASP results show that it is a challenge for existing model refinement methods to improve model quality consistently. This paper presents three evolutionary algorithms for protein model refinement, in which multidimensional scaling (MDS), the MODELLER software, and a hybrid of both are used as crossover operators, respectively. The MDS-based method takes a purely geometrical approach and generates a child model by combining the contact maps of multiple parents. The MODELLER-based method takes a statistical and energy minimization approach, and uses the remodeling module in the MODELLER program to generate new models from multiple parents. The hybrid method first generates models using the MDS-based method and then runs them through the MODELLER-based method, aiming to combine the strengths of both. Promising results have been obtained in experiments using CASP datasets. The MDS-based method improved the best of a pool of predicted models in terms of the global distance test score (GDT-TS) in 9 out of 16 test targets.
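The geometric crossover idea, blending the inter-residue distance information of two parent models and letting MDS rebuild 3-D coordinates, can be sketched as follows. This is a simplified stand-in for the paper's contact-map crossover: the random parent coordinates and the plain averaging of distance matrices are illustrative assumptions.

```python
import numpy as np

def classical_mds(d, k=3):
    """Torgerson's classical MDS: recover k-D coordinates from distances."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d ** 2) @ j
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:k]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def dist_matrix(x):
    return np.linalg.norm(x[:, None] - x[None, :], axis=-1)

rng = np.random.default_rng(0)
parent1 = rng.random((8, 3)) * 10.0                        # 8 pseudo-residues
parent2 = parent1 + rng.normal(0.0, 0.3, parent1.shape)    # perturbed sibling

# "Crossover": average the parents' distance matrices, then let MDS
# rebuild a 3-D child conformation consistent with the blended map.
d_child = 0.5 * (dist_matrix(parent1) + dist_matrix(parent2))
child = classical_mds(d_child, k=3)
```

Because the two parents are nearly consistent, the blended distance matrix is close to Euclidean-realizable and the child reproduces it with small error; real refinement pipelines would combine many parents and weight contacts by confidence.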
2014-01-01
Background: Lack of social support is an important risk factor for antenatal depression and anxiety in low- and middle-income countries. We translated, adapted and validated the Multi-dimensional Scale of Perceived Social Support (MSPSS) in order to study the relationship between perceived social support, intimate partner violence and antenatal depression in Malawi. Methods: The MSPSS was translated and adapted into Chichewa and Chiyao. Five hundred and eighty-three women attending an antenatal clinic were administered the MSPSS, depression screening measures, and a risk factor questionnaire including questions about intimate partner violence. A sub-sample of participants (n = 196) were interviewed using the Structured Clinical Interview for DSM-IV to diagnose major depressive episode. Validity of the MSPSS was evaluated by assessment of internal consistency, factor structure, and correlation with Self Reporting Questionnaire (SRQ) score and major depressive episode. We investigated associations between perception of support from different sources (significant other, family, and friends) and major depressive episode, and whether intimate partner violence was a moderator of these associations. Results: In both Chichewa and Chiyao, the MSPSS had high internal consistency for the full scale and the significant other, family, and friends subscales. MSPSS full scale and subscale scores were inversely associated with SRQ score and major depression diagnosis. Using principal components analysis, the MSPSS had the expected 3-factor structure in analysis of the whole sample. On confirmatory factor analysis, goodness-of-fit indices were better for a 3-factor model than for a 2-factor model, and met standard criteria when correlation between items was allowed.
Lack of support from a significant other was the only MSPSS subscale that showed a significant association with depression on multivariate analysis, and this association was moderated by experience of intimate partner violence. Conclusions The MSPSS is a valid measure of perceived social support in Malawi. Lack of support by a significant other is associated with depression in pregnant women who have experienced intimate partner violence in this setting. PMID:24938124
J. I. Brand; M. S. Hallbeck; S. M. Ryan
2001-01-18
Multi-dimensional meta-analysis (MDMA) is an innovative technique for investigating complex scientific problems influenced by "external" factors, such as social, medical, economic, political or climatic trends. MDMA extends traditional meta-analysis by identifying significant data from diverse and independent disciplines ("orthogonal dimensions") and incorporating truth tables and non-parametric analysis methods in the interpretation protocol. In this paper, we outline the methodology of MDMA. We then demonstrate how to apply the method to a specific problem: the relationship between asthma and air particulates. The conclusions from the example show that further reduction of atmospheric particulate levels is not necessarily the answer to increasing asthma incidence. This example also demonstrates the strength of this method of analysis for complex problems.
Integrative analysis of multi-dimensional imaging genomics data for Alzheimer's disease prediction
Zhang, Ziming; Huang, Heng; Shen, Dinggang
2014-01-01
In this paper, we explore the effects of integrating multi-dimensional imaging genomics data for Alzheimer's disease (AD) prediction using machine learning approaches. Specifically, we compare three of our recently proposed feature selection methods [i.e., multiple kernel learning (MKL), high-order graph matching based feature selection (HGM-FS), and sparse multimodal learning (SMML)] using four widely used modalities [i.e., magnetic resonance imaging (MRI), positron emission tomography (PET), cerebrospinal fluid (CSF), and single-nucleotide polymorphism (SNP) genetic data]. This study demonstrates the performance of each method using these modalities individually or integratively, and may be valuable for clinical tests in practice. Our experimental results suggest that for AD prediction, in general, (1) in terms of accuracy, PET is the best modality; (2) even though the discriminant power of genetic SNP features is weak, adding this modality to the others does help improve classification accuracy; (3) HGM-FS works best among the three feature selection methods; and (4) some of the selected features are shared by all the feature selection methods, and these may be highly correlated with the disease. Using all the modalities on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the best accuracies among the three methods, reported as (mean ± standard deviation)%, are (76.2 ± 11.3)% for AD vs. MCI, (94.8 ± 7.3)% for AD vs. HC, (76.5 ± 11.1)% for MCI vs. HC, and (71.0 ± 8.4)% for AD vs. MCI vs. HC. PMID:25368574
magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation
NASA Astrophysics Data System (ADS)
Angleraud, Christophe
2014-06-01
The ever-increasing amount of data and processing capability - following the well-known Moore's law - is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow visually appealing maps to be built, but these often become cluttered as more layers are added. Moreover, the introduction of time as a fourth analysis dimension, allowing analysis of time-dependent phenomena such as meteorological or climate models, is encouraging real-time data exploration techniques that allow spatio-temporal points of interest to be detected through the human brain's integration of moving images. Magellium has been involved in high-performance image-processing chains for satellite image processing, as well as scientific signal analysis and geographic information management, since its creation in 2003. We believe that recent work on big data, GPUs and peer-to-peer collaborative processing can enable a breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium scale clusters with expansion capabilities to large cloud-based clusters.
NASA Astrophysics Data System (ADS)
Miltin Mboh, Cho; Montzka, Carsten; Baatz, Roland; Vereecken, Harry
2014-05-01
The integration of satellite data with physically based models can enable the characterization of earth systems and lead to improved management of natural resources at the catchment and regional scales. The reliability of simulations from physically based models depends on the accuracy of the forcing data and the model parameters. Forcing data obtained from satellites or other sources are often plagued with uncertainties and the model parameters require updates to capture the ever-changing environmental conditions. Although comprehensive data assimilation schemes for dual state and parameter updating have been proposed for improving the reliability of model simulations, their computational cost is sometimes prohibitively high. In this contribution, we propose a cost-effective and efficient alternative to handling complex multi-dimensional parameter and state improvement at the catchment scale. Our approach demystifies the complex multi-dimensional parameter estimation and state improvement problem by combining 1-dimensional exhaustive gridding with sensitivity-pushing, Newton-Raphson based guided random sampling and feedback from historical inverse estimates. In a numerical case study in the joint Rur and Erft Catchments in Germany, we apply our novel partial grid search approach to the estimation of soil surface roughness and vegetation opacity from disaggregated SMOS (Soil Moisture and Ocean Salinity Satellite) brightness temperature using the Community Microwave Emission Modeling platform (CMEM). Besides plausibly good estimates of the soil surface roughness and vegetation opacity at the catchment scale, our method also leads to improvement of the system states like soil surface moisture and soil temperature profile. Our method therefore has data assimilation capabilities without the associated computational cost incurred in ensemble-based data assimilation approaches. 
The partial grid search approach to parameter estimation is therefore a promising tool for multi-dimensional parameter estimation and state improvement in earth systems.
Merritt, Cullen
2014-05-31
This study specifies and tests a multi-dimensional model of publicness, building upon extant literature in this area. Publicness represents the degree to which an organization has "public" ties. An organization's degree ...
Fast, Multi-Dimensional and Simultaneous Kymograph-Like Particle Dynamics (SkyPad) Analysis
Cadot, Bruno; Gache, Vincent; Gomes, Edgar R.
2014-01-01
Background Kymograph analysis is a method widely used by researchers to analyze particle dynamics in one dimensional (1D) trajectories. Results Here we provide a Visual Basic-coded algorithm to use as a Microsoft Excel add-in that automatically analyzes particles in 2D trajectories with all the advantages of kymograph analysis. Conclusions This add-in, which we named SkyPad, leads to significant time saving and higher accuracy of particle analysis. Finally, SkyPad can also be used for 3D trajectories analysis. PMID:24586511
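SkyPad itself is a Visual Basic add-in for Microsoft Excel; as a minimal, language-neutral sketch of the underlying computation, frame-to-frame speeds along a 2D trajectory can be obtained as follows (the trajectory and frame interval are invented for illustration, not data from the paper):

```python
import numpy as np

def track_speeds(xy, dt=1.0):
    """Instantaneous speeds along a 2D particle trajectory (frames x 2 array)."""
    steps = np.diff(xy, axis=0)                  # per-frame displacement vectors
    return np.linalg.norm(steps, axis=1) / dt    # per-frame speed

# Hypothetical trajectory: the particle moves 3 units right and 4 up per frame
xy = np.array([[0, 0], [3, 4], [6, 8], [9, 12], [12, 16]], dtype=float)
speeds = track_speeds(xy, dt=0.5)  # 0.5 s between frames -> 10 units/s each
```

The same per-step arithmetic extends directly to 3D trajectories by adding a third coordinate column.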
Error and data coding in the multi-dimensional analysis of human movement signals.
Loslever, P
1993-01-01
This paper discusses two main problems of human motion data: their uncertainty and their analysis. For the first, a simulation method is proposed to assess the error. This approach is applied to a joint angle computed from point positions obtained through a three-dimensional video-computer system. For the second, a multi-variate methodology based on appropriate data coding and the correspondence factor analysis method is proposed. Its outputs allow the relations within the time windows of the variable set, the distances within the observation set, and the correspondences between these two sets to be shown graphically. To illustrate this approach, two examples are considered: the analysis of the low back-pelvis angle in an ergonomic study of sitting posture, and the analysis of joint angles during gait. PMID:8280310
Zheng, Xiwei; Yoo, Michelle J.; Hage, David S.
2013-01-01
A multi-dimensional chromatographic approach was developed to measure the free fractions of drug enantiomers in samples that also contained a binding protein or serum. This method, which combined ultrafast affinity extraction with a chiral stationary phase, was demonstrated using the drug warfarin and the protein human serum albumin. PMID:23979112
Gordon, Scott M; Deng, Jingyuan; Tomann, Alex B; Shah, Amy S; Lu, L Jason; Davidson, W Sean
2013-11-01
The distribution of circulating lipoprotein particles affects the risk for cardiovascular disease (CVD) in humans. Lipoproteins are historically defined by their density, with low-density lipoproteins positively and high-density lipoproteins (HDLs) negatively associated with CVD risk in large populations. However, these broad definitions tend to obscure the remarkable heterogeneity within each class. Evidence indicates that each class is composed of physically (size, density, charge) and compositionally (protein and lipid) distinct subclasses exhibiting unique functionalities and differing effects on disease. HDLs in particular contain upward of 85 proteins of widely varying function that are differentially distributed across a broad range of particle diameters. We hypothesized that the plasma lipoproteins, particularly HDL, represent a continuum of phospholipid platforms that facilitate specific protein-protein interactions. To test this idea, we separated normal human plasma using three techniques that exploit different lipoprotein physicochemical properties (gel filtration chromatography, ion exchange chromatography, and preparative isoelectric focusing). We then tracked the co-separation of 76 lipid-associated proteins via mass spectrometry and applied a summed correlation analysis to identify protein pairs that may co-reside on individual lipoproteins. The analysis produced 2701 pairing scores, with the top hits representing previously known protein-protein interactions as well as numerous unknown pairings. A network analysis revealed clusters of proteins with related functions, particularly lipid transport and complement regulation. The specific co-separation of protein pairs or clusters suggests the existence of stable lipoprotein subspecies that may carry out distinct functions. Further characterization of the composition and function of these subspecies may point to better targeted therapeutics aimed at CVD or other diseases. PMID:23882025
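A minimal sketch of a summed correlation analysis, under the assumption that each protein is represented by an abundance profile across fractions within each separation technique (the profiles below are invented for illustration, not the paper's data):

```python
import numpy as np

def pairing_score(profiles_a, profiles_b):
    """Sum Pearson correlations of two proteins' fraction-abundance profiles
    across several separation techniques (one profile per technique)."""
    return sum(np.corrcoef(a, b)[0, 1]
               for a, b in zip(profiles_a, profiles_b))

# Hypothetical fraction profiles from three separations (e.g. gel filtration,
# ion exchange, isoelectric focusing) for protein A.
prot_a = [np.array([0., 1., 5., 2., 0.]),
          np.array([3., 4., 1., 0., 0.]),
          np.array([0., 0., 2., 6., 1.])]
prot_b = [2 * p + 0.5 for p in prot_a]   # a perfectly co-separating partner
score = pairing_score(prot_a, prot_b)    # maximal score for 3 techniques: 3.0
```

Proteins that co-reside on the same particle should co-elute in every separation, so their summed score approaches the number of techniques; unrelated proteins score much lower.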
NASA Astrophysics Data System (ADS)
Tyobeka, Bismark Mzubanzi
A coupled neutron transport thermal-hydraulics code system with both diffusion and transport theory capabilities is presented. At the heart of the coupled code is a powerful neutronics solver based on a neutron transport theory approach, powered by the time-dependent extension of the well-known DORT code, DORT-TD. DORT-TD uses a fully implicit time integration scheme and is coupled via a general interface to the thermal-hydraulics code THERMIX-DIREKT, an HTR-specific two-dimensional core thermal-hydraulics code. Feedback is accounted for by interpolating multigroup cross sections from pre-generated libraries, which are structured for user-specified discrete sets of thermal-hydraulic parameters, e.g. fuel and moderator temperatures. The coupled code system is applied to two HTGR designs, the PBMR 400MW and the PBMR 268MW. Steady-state conditions and several design basis transients are modeled in an effort to assess the adequacy of neutron diffusion theory versus the more accurate but computationally expensive neutron transport theory. There are small but significant differences between the results of the two theories. It is concluded that diffusion theory can be used with a higher degree of confidence in the PBMR as long as more than two energy groups are used, and that results must be checked against transport solutions, especially for safety analysis purposes. The end product of this thesis is a high-fidelity, state-of-the-art computer code system with multiple capabilities to analyze all PBMR safety-related transients in an accurate and efficient manner.
Multi-dimensional edge detection operators
NASA Astrophysics Data System (ADS)
Youn, Sungwook; Lee, Chulhee
2014-05-01
In remote sensing, modern sensors produce multi-dimensional images. For example, hyperspectral images contain hundreds of spectral bands. In many image processing applications, segmentation is an important step. Traditionally, most image segmentation and edge detection methods have been developed for one-dimensional images. For multi-dimensional images, the outputs of spectral band images are typically combined under certain rules or using decision fusion. In this paper, we propose a new edge detection algorithm for multi-dimensional images using second-order statistics. First, we reduce the dimension of the input images using principal component analysis. Then we apply multi-dimensional edge detection operators that utilize second-order statistics. Experimental results show promising performance compared to conventional one-dimensional edge detectors such as the Sobel filter.
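A minimal sketch of this kind of pipeline, assuming only the first principal component is retained and a standard Sobel operator stands in for the paper's second-order-statistics operators:

```python
import numpy as np

def pca_first_component(cube):
    """Project an H x W x B multi-band image onto its first principal component."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[0]).reshape(h, w)

def sobel_magnitude(img):
    """Gradient magnitude with 3x3 Sobel kernels (array borders left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

# Hypothetical 8x8, 5-band image with a vertical step edge in every band
cube = np.zeros((8, 8, 5))
cube[:, 4:, :] = 1.0
edges = sobel_magnitude(pca_first_component(cube))  # strong response at the step
```

The PCA projection concentrates the shared band structure into a single image, so one pass of the edge operator suffices instead of one per band plus a fusion rule.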
Towards Parallel Access of Multi-dimensional, Multi-resolution Scientific Data
Kumar, Sidharth (University of Utah)
Large scale scientific simulations routinely produce data of increasing resolution. Multi-resolution analysis has been used in fields such as digital photography [3] and visualization of large scientific data [2], and is promising for analysis...
Reid, Corinne; Davis, Helen; Horlin, Chiara; Anderson, Mike; Baughman, Natalie; Campbell, Catherine
2013-06-01
Empathy is an essential building block for successful interpersonal relationships. Atypical empathic development is implicated in a range of developmental psychopathologies. However, assessment of empathy in children is constrained by a lack of suitable measurement instruments. This article outlines the development of the Kids' Empathic Development Scale (KEDS), designed to assess some of the core affective, cognitive and behavioural components of empathy concurrently. The KEDS assesses responses to picture scenarios depicting a range of individual and interpersonal situations differing in social complexity. Results from 220 children indicate that the KEDS measures three related but distinct aspects of empathy that are also related to existing measures of empathy and cognitive development. Scores on the KEDS show age-related and some gender-related differences in the expected direction. PMID:23659893
Data Mining in Multi-Dimensional Functional Data for Manufacturing Fault Diagnosis
Jeong, Myong K [ORNL; Kong, Seong G [ORNL; Omitaomu, Olufemi A [ORNL
2008-09-01
Multi-dimensional functional data, such as time series data and images from manufacturing processes, have been used for fault detection and quality improvement in many engineering applications such as automobile manufacturing, semiconductor manufacturing, and nano-machining systems. Extracting interesting and useful features from multi-dimensional functional data for manufacturing fault diagnosis is more difficult than extracting the corresponding patterns from traditional numeric and categorical data due to the complexity of functional data types, high correlation, and nonstationary nature of the data. This chapter discusses accomplishments and research issues of multi-dimensional functional data mining in the following areas: dimensionality reduction for functional data, multi-scale fault diagnosis, misalignment prediction of rotating machinery, and agricultural product inspection based on hyperspectral image analysis.
Multi-Dimensional Recurrent Neural Networks
Graves, Alex; Schmidhuber, Juergen
2007-01-01
Recurrent neural networks (RNNs) have proved effective at one-dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping and the ability to access contextual information, are also desirable in multi-dimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.
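The core MDRNN idea, one recurrent connection per data dimension, can be sketched for the 2D case as follows; the tanh activation, weight shapes and raster scan order are standard choices for illustration, not details taken from the paper:

```python
import numpy as np

def mdrnn2d_forward(x, Wx, Wh_i, Wh_j, b):
    """Forward pass of a 2D multi-dimensional RNN: the hidden state at (i, j)
    sees the input there plus the hidden states at (i-1, j) and (i, j-1)."""
    H, W, _ = x.shape
    nh = b.shape[0]
    h = np.zeros((H, W, nh))
    for i in range(H):
        for j in range(W):
            up = h[i - 1, j] if i > 0 else np.zeros(nh)
            left = h[i, j - 1] if j > 0 else np.zeros(nh)
            h[i, j] = np.tanh(x[i, j] @ Wx + up @ Wh_i + left @ Wh_j + b)
    return h

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 5, 3))                   # a 4x5 "image" with 3 channels
Wx = rng.normal(scale=0.1, size=(3, 8))          # input -> hidden weights
Wh_i, Wh_j = (rng.normal(scale=0.1, size=(8, 8)) for _ in range(2))
b = np.zeros(8)
h = mdrnn2d_forward(x, Wx, Wh_i, Wh_j, b)        # hidden states, shape (4, 5, 8)
```

Because each hidden state conditions on both predecessors, the state at (i, j) can, in principle, depend on the entire rectangle of inputs above and to its left, which is what gives MDRNNs their contextual reach.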
Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
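The equations in question have the generic Hamilton-Jacobi form (a standard statement consistent with the abstract; the notation is assumed, not taken from the paper):

```latex
\phi_t + H(\nabla_x \phi) = 0, \qquad \phi(x, 0) = \phi_0(x), \quad x \in \mathbb{R}^d
```

Convexity or non-convexity of the Hamiltonian $H$ is what distinguishes the easy test cases from the hard ones mentioned above.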
NASA Astrophysics Data System (ADS)
Kalenchuk, K. S.; Hutchinson, D.; Diederichs, M. S.
2013-12-01
Downie Slide, one of the world's largest landslides, is a massive, active, composite, extremely slow rockslide located on the west bank of the Revelstoke Reservoir in British Columbia. It is a 1.5 billion m3 rockslide measuring 2400 m along the river valley, 3300 m from toe to headscarp and up to 245 m thick. Significant contributions to the field of landslide geomechanics have been made by analyses of spatially and temporally discriminated slope deformations, and of how these are controlled by complex geological and geotechnical factors. Downie Slide research demonstrates the importance of delineating massive landslides into morphological regions in order to characterize global slope behaviour and identify localized events, which may or may not influence the overall slope deformation patterns. Massive slope instabilities do not behave as monolithic masses; rather, different landslide zones can display specific landslide processes occurring at variable rates of deformation. The global deformation of Downie Slide is extremely slow; however, localized regions of the slope incur moderate to high rates of movement. Complex deformation processes and a composite failure mechanism arise from topography, non-uniform shear surfaces, and heterogeneous rockmass and shear zone strength and stiffness characteristics. Further, from the analysis of temporal changes in landslide behaviour it has been clearly recognized that different regions of the slope respond differently to changing hydrogeological boundary conditions. State-of-the-art methodologies have been developed for numerical simulation of large landslides; these provide important tools for investigating dynamic landslide systems, accounting for complex three-dimensional geometries, heterogeneous shear zone strength parameters, internal shear zones, the interaction of discrete landslide zones and piezometric fluctuations. 
Numerical models of Downie Slide have been calibrated to reproduce observed slope behaviour, and the calibration process has provided important insight to key factors controlling massive slope mechanics. Through numerical studies it has been shown that the three-dimensional interpretation of basal slip surface geometry and spatial heterogeneity in shear zone stiffness are important factors controlling large-scale slope deformation processes. The role of secondary internal shears and the interaction between landslide morphological zones has also been assessed. Further, numerical simulation of changing groundwater conditions has produced reasonable correlation with field observations. Calibrated models are valuable tools for the forward prediction of landslide dynamics. Calibrated Downie Slide models have been used to investigate how trigger scenarios may accelerate deformations at Downie Slide. The ability to reproduce observed behaviour and forward test hypothesized changes to boundary conditions has valuable application in hazard management of massive landslides. The capacity of decision makers to interpret large amounts of data, respond to rapid changes in a system and understand complex slope dynamics has been enhanced.
Multi-dimensional real Fourier transform
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1971-01-01
Four subroutines compute one-dimensional and multi-dimensional Fourier transforms for real data, multi-dimensional complex Fourier transforms, and multi-dimensional sine, cosine and sine-cosine transforms. The subroutines use the Cooley-Tukey fast Fourier transform. In all but the one-dimensional case, transforms can be calculated in up to six dimensions.
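The original subroutines are legacy Fortran; a modern equivalent of the multi-dimensional real transform (a sketch using NumPy, not the NASA code) looks like:

```python
import numpy as np

# A 3-D real array and its multi-dimensional real-to-complex Fourier transform.
rng = np.random.default_rng(2)
data = rng.normal(size=(8, 8, 8))

# For real input the last transformed axis is roughly halved (8 -> 8//2 + 1 = 5),
# exploiting the conjugate symmetry of real-data spectra - the same economy the
# real-data routines described above provide.
spectrum = np.fft.rfftn(data)
recovered = np.fft.irfftn(spectrum, s=data.shape)  # round-trips to the input
```

The sine/cosine transforms mentioned in the record correspond to the odd/even symmetry variants of the same machinery.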
ERIC Educational Resources Information Center
Papay, John P.; Willett, John B.; Murnane, Richard J.
2011-01-01
We ask whether failing one or more of the state-mandated high-school exit examinations affects whether students graduate from high school. Using a new multi-dimensional regression-discontinuity approach, we examine simultaneously scores on mathematics and English language arts tests. Barely passing both examinations, as opposed to failing them,…
Coulson, Irene Katherina; Galenza, Shirley; Bratt, Sharon; Foisy-Doll, Colette R; Haase, Mary
2015-01-01
A significant transformation occurring in the continuing care industry is an attempt to shift the culture from impersonal institutions into true person-centred care (PCC) homes. This approach re-orients the facility's values, attitudes, norms and hierarchies while creating flexible role descriptions to promote collaborative teamwork. PCC practices will require healthcare teams to develop new approaches that empower residents and families to become partners in the development of a plan of care. This report outlines a study, which will gather data from an organizational policy analysis and interviews with residents and healthcare staff. These data will be examined through a sociological lens to identify areas for team improvement. The results will guide the design of a training curriculum to be delivered using traditional and multi-modal hi-fidelity simulation methods. PMID:24831267
Scaling discourse analysis: Experiences from Hermanus, South Africa and Walvis Bay, Namibia1
Roger Keil; Anne-Marie Debbané
2005-01-01
Scaling discourse analysis refers to the necessity to consider environmental discourse a multi-dimensional and diversified practice. Depending on the various levels of state and society at which environmental policies are applied and depending on the geographical scale at which their solution is sought, we have to differentiate both policy processes and outcomes in environmental politics. We introduce the importance of
Simulation Analysis on Characteristics of a Planar Capacitive Sensor for Large Scale Measurement
Jianping Yu; Wen Wang; Xinxin Li; Zhu Zhu; Zichen Chen
2009-01-01
Measuring multi-dimensional linear displacements over a large range with high precision is a long-standing research focus. This paper presents the results of a simulation analysis of the characteristics of a planar capacitive sensor (PCS), which is proposed for X- and Y-direction linear displacement measurement by measuring the periodic change of capacitance between two periodic plates, one fixed (i.e. fixed plate)
Multi-dimensional laser radars
NASA Astrophysics Data System (ADS)
Molebny, Vasyl; Steinvall, Ove
2014-06-01
We introduce the term "multi-dimensional laser radar", where the dimensions mean not only the coordinates of the object in space, but also its velocity and orientation, and parameters of the medium: scattering, refraction, temperature, humidity, wind velocity, etc. The parameters can change in time and can be combined. For example, rendezvous and docking missions and autonomous planetary landing are expected to carry 3D ladar imaging along with laser ranging, laser altimetry and laser Doppler velocimetry. Operating in combination, they provide more accurate and safer navigation, docking or landing, and hazard avoidance capabilities. Combination with Doppler-based measurements provides more accurate navigation for both space and cruise missile applications. Information identifying snipers, based on combining polarization and fluctuation parameters with data from other sources, is critical. Combination of thermal imaging and vibrometry can unveil the functionality of detected targets. Hyperspectral probing with lasers reveals even more parameters. Different algorithms and architectures for ladar-based target acquisition, reconstruction of 3D images from point clouds, information fusion and display are discussed, with special attention to the technologies of flash illumination and single-photon focal-plane-array detection.
NASA Astrophysics Data System (ADS)
Huang, Jun-Wei; Bellefleur, Gilles; Milkereit, Bernd
2011-11-01
We present CSimMDMV, a software package to simulate two- and three-dimensional, multi-variant heterogeneous reservoir models from well logs at different characteristic scales. Based on multi-variant conditional stochastic simulation, this software is able to parameterize multi-dimensional heterogeneities and to construct heterogeneous reservoir models with multiple rock properties. The models match the well logs at borehole locations, simulate heterogeneities at the level of detail provided by well-logging data elsewhere in the model space, and simultaneously honor the correlations present among the various rock properties. It provides a versatile environment in which a variety of geophysical experiments can be performed, including the estimation of petrophysical properties and the study of the geophysical response to the heterogeneities. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Linux cluster. A case study on the assessment of natural gas hydrate amounts in the Northwest Territories, Canada, is provided. We show that the combination of rock physics theory with multiple realizations of three-dimensional, three-variant (3D-3V) gas hydrate reservoir petrophysical models enables us to estimate the average amount of gas hydrate and the associated uncertainties using a Monte Carlo method.
Progress in multi-dimensional upwind differencing
NASA Technical Reports Server (NTRS)
Vanleer, Bram
1992-01-01
Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
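The one-dimensional building block referred to above is the first-order upwind scheme for the convection equation $u_t + a u_x = 0$; for $a > 0$ it reads (a standard statement of the scheme, with notation assumed rather than quoted from the review):

```latex
u_i^{n+1} = u_i^n - \frac{a\,\Delta t}{\Delta x}\,\bigl(u_i^n - u_{i-1}^n\bigr), \qquad a > 0
```

The "fluctuation" viewpoint reads the bracketed difference as a cell-based residual to be distributed to downstream nodes, which is the generalization the genuinely multi-dimensional schemes build on.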
Face recognition using local multi dimensional statistics
R. Alemy; M. E. Shiri; F. Didehvar; Z. Hajimohammadi
2009-01-01
Numerous approaches have been proposed for face recognition. In this paper we propose a novel face recognition approach based on adaptively weighted patch local multi-dimensional statistics (LMDS) when only one exemplar image per person is available. In this approach, a face image is decomposed into a set of equal-sized patches in a non-overlapping way. In order to
On Multi-Dimensional Unstructured Mesh Adaption
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
1999-01-01
Anisotropic unstructured mesh adaption is developed for a truly multi-dimensional upwind fluctuation splitting scheme, as applied to scalar advection-diffusion. The adaption is performed locally using edge swapping, point insertion/deletion, and nodal displacements. Comparisons are made versus the current state of the art for aggressive anisotropic unstructured adaption, which is based on a posteriori error estimates. Demonstration of both schemes to model problems, with features representative of compressible gas dynamics, show the present method to be superior to the a posteriori adaption for linear advection. The performance of the two methods is more similar when applied to nonlinear advection, with a difference in the treatment of shocks. The a posteriori adaption can excessively cluster points to a shock, while the present multi-dimensional scheme tends to merely align with a shock, using fewer nodes. As a consequence of this alignment tendency, an implementation of eigenvalue limiting for the suppression of expansion shocks is developed for the multi-dimensional distribution scheme. The differences in the treatment of shocks by the adaption schemes, along with the inherently low levels of artificial dissipation in the fluctuation splitting solver, suggest the present method is a strong candidate for applications to compressible gas dynamics.
Extended Darknet: Multi-Dimensional Internet Threat Monitoring System
NASA Astrophysics Data System (ADS)
Shimoda, Akihiro; Mori, Tatsuya; Goto, Shigeki
Internet threats caused by botnets/worms are one of the most important security issues to be addressed. Darknet, also called a dark IP address space, is one of the best solutions for monitoring anomalous packets sent by malicious software. However, since darknet is deployed only on an inactive IP address space, it is an inefficient way for monitoring a working network that has a considerable number of active IP addresses. The present paper addresses this problem. We propose a scalable, light-weight malicious packet monitoring system based on a multi-dimensional IP/port analysis. Our system significantly extends the monitoring scope of darknet. In order to extend the capacity of darknet, our approach leverages the active IP address space without affecting legitimate traffic. Multi-dimensional monitoring enables the monitoring of TCP ports with firewalls enabled on each of the IP addresses. We focus on delays of TCP syn/ack responses in the traffic. We locate syn/ack delayed packets and forward them to sensors or honeypots for further analysis. We also propose a policy-based flow classification and forwarding mechanism and develop a prototype of a monitoring system that implements our proposed architecture. We deploy our system on a campus network and perform several experiments for the evaluation of our system. We verify that our system can cover 89% of the IP addresses while darknet-based monitoring only covers 46%. On our campus network, our system monitors twice as many IP addresses as darknet.
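The syn/ack-delay classification can be sketched as follows; the flow representation and the one-second threshold are hypothetical policy choices for illustration, not values from the paper:

```python
def classify_flows(flows, delay_threshold=1.0):
    """Split flows into forward-to-sensor vs. legitimate by syn->syn/ack delay.
    Each flow is (flow_id, syn_time, ack_time); ack_time None means unanswered.
    The threshold is a hypothetical policy parameter, not from the paper."""
    suspicious, legitimate = [], []
    for flow_id, syn_t, ack_t in flows:
        if ack_t is None or ack_t - syn_t > delay_threshold:
            suspicious.append(flow_id)   # delayed or absent syn/ack: candidate
        else:
            legitimate.append(flow_id)   # prompt handshake: leave untouched
    return suspicious, legitimate

flows = [("f1", 0.0, 0.02), ("f2", 0.0, None), ("f3", 0.0, 2.5)]
bad, ok = classify_flows(flows)          # bad -> sensors/honeypots, ok -> passed
```

Forwarding only the delayed or unanswered flows is what lets the monitor ride on active IP space without perturbing legitimate traffic.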
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g., thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
PIDX: Efficient Parallel I/O for Multi-resolution Multi-dimensional Scientific Datasets
Utah, University of
PIDX provides efficient, cache-oblivious, and progressive parallel access to large-scale multi-resolution scientific datasets, using a data layout that has been used successfully in fields such as digital photography [17] and the visualization of large scientific data.
Weakly multi-dimensional cosmic-ray-modified MHD shocks
NASA Technical Reports Server (NTRS)
Zank, G. P.; Webb, G. W.
1990-01-01
The multi-dimensional structure of weak, energetic-particle-modified shocks is investigated by means of appropriate perturbation techniques. The time-dependent shock-structure equation is found to be a generalized form of the well-known one-dimensional Burgers equation, whose steady state, in the absence of cosmic rays, is shown to be related to an equation modeling steady transonic flow in several dimensions. The time-dependent (1 + 2)- and (1 + 3)-dimensional Burgers equations are integrated exactly by means of Hirota's technique for one-shock solutions. On the basis of the exact solutions, a discussion relating the various length scales associated with the shock is presented.
Vlasov multi-dimensional model dispersion relation
Lushnikov, Pavel M., E-mail: plushnik@math.unm.edu [Department on Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Rose, Harvey A. [Theoretical Division, Los Alamos National Laboratory, MS-B213, Los Alamos, New Mexico 87545 (United States); New Mexico Consortium, Los Alamos, New Mexico 87544 (United States); Silantyev, Denis A.; Vladimirova, Natalia [Department on Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico 87131 (United States); New Mexico Consortium, Los Alamos, New Mexico 87544 (United States)
2014-07-15
A hybrid model of the Vlasov equation in multiple spatial dimension D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi-dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.
ERIC Educational Resources Information Center
Strijbos, Jan-Willem; Stahl, Gerry
2007-01-01
In CSCL research, collaboration through chat has primarily been studied in dyadic settings. This article discusses three issues that emerged during the development of a multi-dimensional coding procedure for small-group chat communication: (a) the unit of analysis and unit fragmentation, (b) the reconstruction of the response structure and (c)…
Multi-dimensional reconstruction of seismic data
NASA Astrophysics Data System (ADS)
Liu, Bin
2004-12-01
In seismic data processing, we often need to interpolate and extrapolate missing data at spatial locations. The reconstruction problem can be posed as an inverse problem in which, from inadequate and incomplete data, we attempt to reconstruct the seismic wavefield at locations where measurements were not acquired. This thesis presents a wavefield reconstruction scheme called minimum weighted norm interpolation (MWNI) for spatially band limited signals. The method entails the solution of an inverse problem where a wavenumber domain regularization term is included. The regularization term is not only used to constrain the solution to be spatially band limited but also to impose an a priori spectral shape. The numerical algorithm is quite efficient, since the method of conjugate gradients, in conjunction with fast matrix-vector multiplication implemented via the Fast Fourier Transform (FFT), was adopted. In addition, its computational efficiency allows for feasible extensions to higher dimensional interpolation schemes. In this thesis, the MWNI method is used to interpolate prestack seismic data before wave equation amplitude versus angle imaging. Synthetic data were used to investigate the effectiveness of the 2-D/3-D MWNI scheme when preconditioning seismic data for wave equation AVA imaging, where regular and dense data sampling is required to accurately estimate angle gathers. Two field data examples are presented to illustrate the application of the multi-dimensional MWNI schemes to real-world datasets.
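The computational core described above, conjugate gradients with FFT-based operator products under a wavenumber-domain penalty, can be sketched in a 1-D toy problem; the band limit, weights, and regularization strength below are our own illustrative choices, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, kmax = 128, 8  # grid size and assumed band limit (wavenumber bins)

# Construct a spatially band-limited test signal.
spec = np.zeros(n, dtype=complex)
idx = np.arange(1, kmax + 1)
spec[idx] = rng.normal(size=kmax) + 1j * rng.normal(size=kmax)
spec[-idx] = np.conj(spec[idx])          # Hermitian symmetry -> real signal
x_true = np.fft.ifft(spec).real

mask = rng.random(n) < 0.6               # ~60% of spatial locations observed
d = x_true * mask                        # "acquired" data, zeros where missing

# Wavenumber-domain weights: penalize energy outside the assumed band.
k = np.fft.fftfreq(n, d=1.0 / n)         # integer wavenumber bins
w = np.where(np.abs(k) <= kmax, 0.0, 1.0)
mu = 1.0                                 # regularization strength

def apply_A(x):
    """(S^T S + mu F^H W F) x; the FFT provides the fast matrix-vector product."""
    return mask * x + mu * np.fft.ifft(w * np.fft.fft(x)).real

# Conjugate gradients on the regularized normal equations A x = S^T d.
x = np.zeros(n)
r = d - apply_A(x)
p = r.copy()
rs = r @ r
for _ in range(200):
    Ap = apply_A(p)
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if rs_new < 1e-24:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Because the true signal here is exactly band limited and noise free, the minimizer of the penalized problem coincides with it, and the relative error drops to numerical noise.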
Star-ND (Multi-Dimensional Star-Identification)
Spratling, Benjamin
2012-07-16
In order to perform star-identification with lower processing requirements, multi-dimensional techniques are implemented in this research as a database search as well as to create star pattern parameters. New star pattern parameters are presented...
Multi-Dimensional Sensing for Security in Everyday Life
NASA Astrophysics Data System (ADS)
Yamaguchi, Jun'ichi; Shimomura, Noriko; Umeda, Kazunori; Satou, Yutaka; Jitsumori, Akio; Fujiyoshi, Hironobu; Terada, Kenji; Hontani, Hidekata; Watanabe, Eriko; Okuda, Haruhisa; Haga, Tetsuji; Hashimoto, Manabu
In this paper, the authors describe multi-dimensional sensing that can be adopted to realize security in everyday life. In this study, applications of sensing based on surveillance cameras, equipment mounted on cars, road transport, fields such as farming, and health maintenance have been investigated. Recently developed systems and methods involving multi-dimensional sensing technologies are introduced, and current issues and trends are described.
Multi-dimensional Mass Spectrometry-based Shotgun Lipidomics
Wang, Miao; Han, Xianlin
2014-01-01
Multi-dimensional mass spectrometry-based shotgun lipidomics (MDMS-SL) has become a foundational analytical technology platform among current lipidomics practices due to its high efficiency, sensitivity, and reproducibility, as well as its broad coverage. This platform has been broadly used to determine the altered content and/or composition of lipid classes, subclasses, and individual molecular species induced by diseases, genetic manipulations, drug treatments, and aging, among others. Herein, we briefly discuss the principles underlying this technology and present a protocol for routine analysis of many of the lipid classes and subclasses covered by MDMS-SL directly from lipid extracts of biological samples. In particular, lipid sample preparation from a variety of biological materials, which is one of the key components of MDMS-SL, is described in detail. The protocol of mass spectrometric analysis can readily be expanded for analysis of other lipid classes not mentioned, as long as appropriate sample preparation is conducted. We sincerely hope that this protocol aids researchers in the field to better understand and manage the technology for analysis of cellular lipidomes. PMID:25270931
Multi-dimensional quasi-simple waves in weakly dissipative flows
NASA Astrophysics Data System (ADS)
Rajaee, Leila; Eshraghi, Homayoon; Popovych, Roman O.
2008-03-01
A multi-dimensional simple wave formalism is employed to formulate a multi-dimensional quasi-simple wave for a weakly dissipative fluid. This is a natural but nontrivial generalization of the so-called unidirectional quasi-simple wave. The method is closer to a multi-order analysis, which differs from the standard perturbation method. A detailed solution up to second order is presented for a 2D sound simple wave. A new 2D Burgers equation is derived for the wave phase. It essentially differs from the known 2D generalizations of the Burgers equation, e.g., from the Zabolotskaya-Khokhlov equation. The derived equation is investigated in the framework of group analysis of differential equations. Multi-parameter families of its exact solutions are constructed. The simplest solutions are chosen for analysis of their physical relevance in the initial variables.
High-value energy storage for the grid: a multi-dimensional look
Culver, Walter J.
2010-12-15
The conceptual attractiveness of energy storage in the electrical power grid has grown in recent years with Smart Grid initiatives. But cost is a problem, interwoven with the complexity of quantifying the benefits of energy storage. This analysis builds toward a multi-dimensional picture of storage that is offered as a step toward identifying and removing the gaps and "friction" that permeate the delivery chain from research laboratory to grid deployment. (author)
Multi-dimensional fission model with a complex absorbing potential
Scamps, Guillaume
2015-01-01
We study the dynamics of multi-dimensional quantum tunneling by introducing a complex absorbing potential to a two-dimensional model for spontaneous fission. We first diagonalize the Hamiltonian with the complex potential to determine a resonance state as well as its lifetime. We then solve the time-dependent Schrödinger equation in this basis in order to investigate the tunneling path. We compare this method with the semi-classical method for multi-dimensional tunneling with imaginary time. A good agreement is found both for the lifetime and for the tunneling path.
Multi-dimensional discovery of biomarker and phenotype complexes
2010-01-01
Background Given the rapid growth of translational research and personalized healthcare paradigms, the ability to relate and reason upon networks of bio-molecular and phenotypic variables at various levels of granularity in order to diagnose, stage and plan treatments for disease states is highly desirable. Numerous techniques exist that can be used to develop networks of co-expressed or otherwise related genes and clinical features. Such techniques can also be used to create formalized knowledge collections based upon the information contained in ontologies and domain literature. However, reports of integrative approaches that bridge such networks to create systems-level models of disease or wellness are notably lacking in the contemporary literature. Results In response to the preceding gap in knowledge and practice, we report upon a prototypical series of experiments that utilize multi-modal approaches to network induction. These experiments are intended to elicit meaningful and significant biomarker-phenotype complexes spanning multiple levels of granularity. This work has been performed in the experimental context of a large-scale clinical and basic science data repository maintained by the National Cancer Institute (NCI) funded Chronic Lymphocytic Leukemia Research Consortium. Conclusions Our results indicate that it is computationally tractable to link orthogonal networks of genes, clinical features, and conceptual knowledge to create multi-dimensional models of interrelated biomarkers and phenotypes. Further, our results indicate that such systems-level models contain interrelated bio-molecular and clinical markers capable of supporting hypothesis discovery and testing. Based on such findings, we propose a conceptual model intended to inform the cross-linkage of the results of such methods. This model has as its aim the identification of novel and knowledge-anchored biomarker-phenotype complexes. PMID:21044361
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
Bandwidth-intensive FPGA architecture for multi-dimensional DFT
Chi-Li Yu; Chaitali Chakrabarti; Sungho Park; Vijaykrishnan Narayanan
2010-01-01
Multi-dimensional (MD) Discrete Fourier Transform (DFT) is a key kernel algorithm in many signal processing algorithms, including radar data processing and medical imaging. Although there are many efficient software solutions, they are not suitable for applications that require fast response time. In this paper we focus on FPGA-based implementation of MDDFT. The proposed architecture is based on a decomposition algorithm
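The decomposition idea underlying such architectures can be illustrated in software: a multi-dimensional DFT is separable, so a 2-D DFT factors into 1-D FFT passes over rows and then columns. This generic NumPy sketch is our illustration, not the paper's FPGA design:

```python
import numpy as np

def dft2_row_column(x):
    """2-D DFT via the row-column decomposition: a 1-D FFT pass over the
    rows, then a 1-D FFT pass over the columns of the intermediate result.
    In hardware, the two passes map naturally onto banks of 1-D FFT cores."""
    intermediate = np.fft.fft(x, axis=1)   # row pass
    return np.fft.fft(intermediate, axis=0)  # column pass
```

The same factorization extends to any number of dimensions, one 1-D pass per axis; the bandwidth challenge in hardware comes from the transposed access pattern between passes.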
Application of Multi-Dimensional Sensing Technologies in Production Systems
NASA Astrophysics Data System (ADS)
Shibuya, Hisae; Kimachi, Akira; Suwa, Masaki; Niwakawa, Makoto; Okuda, Haruhisa; Hashimoto, Manabu
Multi-dimensional sensing has been used for various purposes in the field of production systems. The members of the IEEJ MDS committee investigated the trends in sensing technologies and their applications. In this paper, the result of investigations of auto-guided vehicles, cell manufacturing robots, safety, maintenance, worker monitoring, and sensor networks are discussed.
Novel optimization method for multi-dimensional breast photoacoustic tomography
NASA Astrophysics Data System (ADS)
Cao, Meng; Feng, Ting; Yuan, Jie; Du, Sidan; Liu, Xiaojun; Wang, Xueding; Carson, Paul L.
2014-11-01
Photoacoustic tomography (PAT) is a noninvasive, nonionizing optical biomedical imaging method that presents good soft-tissue contrast with excellent spatial resolution. To build a multi-dimensional breast PAT image, more ultrasound sensors are needed, which complicates data acquisition. The time complexity of multi-dimensional breast PAT image reconstruction also rises tremendously. Compressive sensing (CS) theory breaks the restriction of the Nyquist sampling theorem and is capable of rebuilding signals from fewer measurements. In this contribution, we propose an effective optimization method for multi-dimensional breast PAT, which combines the theory of CS with an unevenly, adaptively distributed data acquisition algorithm. With this method, the quality of our reconstructed breast PAT images is better than that of images from existing multi-dimensional breast PAT systems. To build breast PAT images of the same quality, the required number of ultrasound transducers is decreased by using our proposed method. We have verified our method on simulation data and achieved the expected results in both two-dimensional and three-dimensional PAT image reconstruction. In the future, our method can be applied to various aspects of biomedical PAT imaging, such as early-stage tumor detection and in vivo imaging monitoring.
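As a generic illustration of the CS principle invoked here, iterative soft-thresholding (ISTA) recovers a sparse vector from under-determined measurements. This is our own minimal sketch of the idea, not the authors' PAT reconstruction:

```python
import numpy as np

def ista(A, y, lam=0.02, iters=3000):
    """Iterative soft-thresholding for the lasso problem
    min_x 0.5*||A x - y||^2 + lam*||x||_1: a gradient step on the data
    term followed by a soft-threshold that promotes sparsity."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

With far fewer measurements than unknowns, a sufficiently sparse signal is still recovered, which is the lever the paper uses to reduce the number of ultrasound transducers.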
Synchronous Circuit Optimization via Multi-Dimensional Retiming
Sha, Edwin
The goal is to improve circuit performance by achieving full parallelism among all operations in the circuit, producing a circuit capable of executing all of its operations in parallel.
Towards Semantic Web Services on Large, Multi-Dimensional Coverages
NASA Astrophysics Data System (ADS)
Baumann, P.
2009-04-01
Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known from databases to be advantageous: declarativeness (describe results rather than the algorithms), safe in evaluation (no request can keep a server busy infinitely), and optimizable (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters while the semantics is encoded in the (only human readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning a la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages.
It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard which ties WCPS into WCS. Another protocol which makes WCPS an OGC Web Processing Service (WPS) Profile is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an associated identifier, the supported CRSs, and, for each range field (aka band, channel), the applicable interpolation methods. The WCPS language offers access to one or several such coverages via a functional, side-effect free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ) The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing up to recursion, to maintain safe evaluation. As both the syntax and semantics of any WCPS expression are well known, the language is Semantic Web ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released in open source once ready.
Among the future tasks is to extend WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
NASA Astrophysics Data System (ADS)
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
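For the simplest case of two 1-D Gaussian-like templates and one morphing parameter, the recipe (interpolate the moments linearly, evaluate each input template at a moment-transformed coordinate, then combine linearly) can be sketched as follows. This simplification is ours; the paper treats multiple dimensions and multiple model parameters:

```python
import numpy as np

def morph_two_templates(x, t0, t1, m0, s0, m1, s1, alpha):
    """Sketch of moment morphing between two 1-D template densities t0, t1
    sampled on grid x, with means m0, m1 and widths s0, s1.  The morphed
    mean and width are linear in alpha; each template is evaluated at a
    moment-transformed coordinate (with a Jacobian factor preserving
    normalization) and the two are combined linearly."""
    m = (1 - alpha) * m0 + alpha * m1
    s = (1 - alpha) * s0 + alpha * s1
    # map the morphed coordinate back into each template's own frame
    x0 = m0 + (x - m) * s0 / s
    x1 = m1 + (x - m) * s1 / s
    f0 = np.interp(x0, x, t0) * s0 / s
    f1 = np.interp(x1, x, t1) * s1 / s
    return (1 - alpha) * f0 + alpha * f1
```

For Gaussian inputs this construction is exact: morphing halfway between N(-1, 0.5) and N(1, 1.0) yields N(0, 0.75).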
Sensitivity of Multi-dimensional Bayesian Classifiers
Utrecht, Universiteit
In a sensitivity analysis, parameters x of a network are varied and some probability of interest is studied as a function of them. This value captures the sensitivity of an output probability to small changes in a parameter; a sensitivity value of 1 basically marks the transition from a low to a high sensitivity. We compare MBCs
Developing a new multi-dimensional depression assessment scale
Cheung, Ho Nam
2010-07-01
Depression is a global risk factor of mental health. Empirical studies (e.g. Beck, 1967, 1976) and clinical observations (APA, 1996, 2000) showed that it has symptoms in four domains: emotional, cognitive, somatic and interpersonal. A good depression...
Advanced numerics for multi-dimensional fluid flow calculations
NASA Technical Reports Server (NTRS)
Vanka, S. P.
1984-01-01
In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.
Efficient Subtorus Processor Allocation in a Multi-Dimensional Torus
Weizhen Mao; Jie Chen; William Watson
2005-11-30
Processor allocation in a mesh- or torus-connected multicomputer system with up to three dimensions is a hard problem that has received some research attention in the past decade. With the recent deployment of multicomputer systems with a torus topology of dimensions higher than three, which are used to solve complex problems arising in scientific computing, it becomes imperative to study the problem of allocating processors in the configuration of a torus in a multi-dimensional torus-connected system. In this paper, we first define the concept of a semitorus. We present two partition schemes, the Equal Partition (EP) and the Non-Equal Partition (NEP), that partition a multi-dimensional semitorus into a set of sub-semitori. We then propose two processor allocation algorithms based on these partition schemes. We evaluate our algorithms by incorporating them in commonly used FCFS and backfilling scheduling policies and conducting simulation using workload traces from the Parallel Workloads Archive. Specifically, our simulation experiments compare four algorithm combinations, FCFS/EP, FCFS/NEP, backfilling/EP, and backfilling/NEP, for two existing multi-dimensional torus-connected systems. The simulation results show that our algorithms (especially the backfilling/NEP combination) are capable of producing schedules with system utilization and mean job bounded slowdowns comparable to those in a fully connected multicomputer.
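A partition in the spirit of the Equal Partition scheme can be sketched as follows; the construction is our assumption for illustration, since the paper's EP/NEP details are not reproduced here:

```python
from itertools import product

def equal_partition(torus, block):
    """Hedged sketch of an equal-partition-style scheme (our construction,
    not necessarily the paper's EP algorithm): split a multi-dimensional
    torus with torus[i] nodes in dimension i into disjoint axis-aligned
    sub-blocks of block[i] nodes each, returned by their start corners."""
    assert all(t % b == 0 for t, b in zip(torus, block)), "block must divide torus"
    starts = [range(0, t, b) for t, b in zip(torus, block)]
    return list(product(*starts))
```

An allocator can then hand whole sub-blocks to jobs whose requested shape fits, keeping each job's communication inside one sub-torus.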
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
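The free-path sampling at the heart of any Monte Carlo radiative transfer code can be shown in a deliberately simplified gray-gas form; the paper's correlated nongray narrow band formulation is substantially more involved:

```python
import numpy as np

def mc_transmissivity(tau, n_bundles=200_000, seed=1):
    """Monte Carlo estimate of the direct transmissivity of a uniform gray
    slab of optical thickness tau for normally incident photon bundles.
    Each bundle's optical path to absorption is drawn from an exponential
    distribution; the bundle escapes if that path exceeds tau.  The
    estimate converges to the Beer-Lambert value exp(-tau)."""
    rng = np.random.default_rng(seed)
    optical_path = -np.log(rng.random(n_bundles))  # exponential free paths
    return np.mean(optical_path > tau)
```

In the nongray narrow band setting, the absorption coefficient itself must be sampled consistently along the path, which is where the spectral-correlation treatment mentioned above enters.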
Portable laser synthesizer for high-speed multi-dimensional spectroscopy
Demos, Stavros G. (Livermore, CA); Shverdin, Miroslav Y. (Sunnyvale, CA); Shirk, Michael D. (Brentwood, CA)
2012-05-29
Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.
Stationary solutions for multi-dimensional Gross-Pitaevskii equation with double-well potential
Andrea Sacchetti
2013-12-04
In this paper we consider a non-linear Schrödinger equation with a cubic nonlinearity and a multi-dimensional double-well potential. In the semiclassical limit the problem of the existence of stationary solutions simply reduces to the analysis of a finite-dimensional Hamiltonian system which exhibits different behavior depending on the dimension. In particular, in dimension 1 the symmetric stationary solution shows a standard pitchfork bifurcation effect, while in dimensions 2 and 3 new asymmetrical solutions associated with saddle points occur. These latter solutions are localized on a single well, and this fact is related to the phase transition effect observed in Bose-Einstein condensates in periodic lattices.
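In standard notation (the paper's own scalings may differ), the cubic nonlinear Schrödinger equation with a double-well potential V and its stationary reduction read:

```latex
% Cubic NLS with a multi-dimensional double-well potential V:
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\,\Delta\psi + V(x)\,\psi + \gamma\,|\psi|^2\,\psi,
  \qquad x \in \mathbb{R}^d,\ d = 1,2,3.
% Stationary solutions \psi(x,t) = e^{-i\lambda t/\hbar}\,\phi(x) satisfy
\lambda\,\phi = -\frac{\hbar^2}{2m}\,\Delta\phi + V(x)\,\phi + \gamma\,|\phi|^2\,\phi .
```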
Identifying multi-layer gene regulatory modules from multi-dimensional genomic data
Liu, Chun-Chi; Zhou, Xianghong Jasmine
2012-01-01
Motivation: Eukaryotic gene expression (GE) is subjected to precisely coordinated multi-layer controls, across the levels of epigenetic, transcriptional and post-transcriptional regulations. Recently, the emerging multi-dimensional genomic dataset has provided unprecedented opportunities to study the cross-layer regulatory interplay. In these datasets, the same set of samples is profiled on several layers of genomic activities, e.g. copy number variation (CNV), DNA methylation (DM), GE and microRNA expression (ME). However, suitable analysis methods for such data are currently sparse. Results: In this article, we introduced a sparse Multi-Block Partial Least Squares (sMBPLS) regression method to identify multi-dimensional regulatory modules from this new type of data. A multi-dimensional regulatory module contains sets of regulatory factors from different layers that are likely to jointly contribute to a local ‘gene expression factory’. We demonstrated the performance of our method on the simulated data as well as on The Cancer Genomic Atlas Ovarian Cancer datasets including the CNV, DM, ME and GE data measured on 230 samples. We showed that majority of identified modules have significant functional and transcriptional enrichment, higher than that observed in modules identified using only a single type of genomic data. Our network analysis of the modules revealed that the CNV, DM and microRNA can have coupled impact on expression of important oncogenes and tumor suppressor genes. Availability and implementation: The source code implemented by MATLAB is freely available at: http://zhoulab.usc.edu/sMBPLS/. Contact: xjzhou@usc.edu Supplementary information: Supplementary material are available at Bioinformatics online. PMID:22863767
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tsai, Chin-Chung
2013-01-01
In the past, students' science learning self-efficacy (SLSE) was usually measured by questionnaires that consisted of only a single scale, which might be insufficient to fully understand their SLSE. In this study, a multi-dimensional instrument, the SLSE instrument, was developed and validated to assess students' SLSE based on the…
Fourier transform assisted deconvolution of skewed peaks in complex multi-dimensional chromatograms.
Hanke, Alexander T; Verhaert, Peter D E M; van der Wielen, Luuk A M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel
2015-05-15
Lower order peak moments of individual peaks in heavily fused peak clusters can be determined by fitting peak models to the experimental data. The success of such an approach depends on two main aspects: the generation of meaningful initial estimates of the number and position of the peaks, and the choice of a suitable peak model. For the detection of meaningful peaks in multi-dimensional chromatograms, a fast data scanning algorithm was combined with prior resolution enhancement through the reduction of column and system broadening effects with the help of two-dimensional fast Fourier transforms. To capture the shape of skewed peaks in multi-dimensional chromatograms, a formalism for the accurate calculation of exponentially modified Gaussian peaks, one of the most popular models for skewed peaks, was extended for direct fitting of two-dimensional data. The method is demonstrated to successfully identify and deconvolute peaks hidden in strongly fused peak clusters. Incorporation of automatic analysis and reporting of the statistics of the fitted peak parameters and calculated properties makes it easy to identify in which regions of the chromatograms additional resolution is required for robust quantification. PMID:25841612
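The exponentially modified Gaussian (EMG) mentioned above has a standard one-dimensional closed form; a minimal sketch in Python follows (toy parameters of our own choosing, not the paper's two-dimensional extension):

```python
import numpy as np
from scipy.special import erfc

def emg(t, area, mu, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian of area `area`, centre mu
    and width sigma, convolved with an exponential tail of time constant tau
    (the usual closed form, valid for moderate sigma/tau ratios)."""
    z = (mu - t) / (np.sqrt(2) * sigma) + sigma / (np.sqrt(2) * tau)
    return (area / (2 * tau)) * np.exp(sigma**2 / (2 * tau**2) + (mu - t) / tau) * erfc(z)

t = np.linspace(0, 12, 4001)
peak = emg(t, area=1.0, mu=3.0, sigma=0.3, tau=0.8)
total_area = peak.sum() * (t[1] - t[0])   # ~1.0: convolution preserves area
mode = t[np.argmax(peak)]                 # > mu: the exponential tail skews the peak right
print(total_area, mode)
```

In practice this closed form can overflow for very small tau; robust implementations switch to an erfcx-based variant in that regime.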
Multi-dimensional coordination in cross-country skiing analyzed using self-organizing maps.
Lamb, Peter F; Bartlett, Roger; Lindinger, Stefan; Kennedy, Gavin
2014-02-01
This study sought to ascertain how multi-dimensional coordination patterns changed with five poling speeds for 12 National Standard cross-country skiers during roller skiing on a treadmill. Self-organizing maps (SOMs), a type of artificial neural network, were used to map the multi-dimensional time series data on to a two-dimensional output grid. The trajectories of the best-matching nodes of the output were then used as a collective variable to train a second SOM to produce attractor diagrams and attractor surfaces to study coordination stability. Although four skiers had uni-modal basins of attraction that evolved gradually with changing speed, the other eight had two or three basins of attraction as poling speed changed. Two skiers showed bi-modal basins of attraction at some speeds, an example of degeneracy. What was most clearly evident was that different skiers showed different coordination dynamics for this skill as poling speed changed: inter-skier variability was the rule rather than an exception. The SOM analysis showed that coordination was much more variable in response to changing speeds compared to outcome variables such as poling frequency and cycle length. PMID:24060219
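The SOM-to-trajectory pipeline described above starts from a standard self-organizing map; a minimal sketch (numpy only; the grid size, learning schedule and random stand-in data are our own assumptions, not the study's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=5, cols=5, iters=2000, lr0=0.5, sigma0=2.0):
    """Minimal self-organizing map: each output node holds a weight vector;
    per sample, the best-matching unit (BMU) and its grid neighbours are
    pulled toward the sample, with shrinking learning rate and radius."""
    n_feat = data.shape[1]
    weights = rng.random((rows, cols, n_feat))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for it in range(iters):
        frac = it / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                               (rows, cols))
        # Gaussian neighbourhood on the 2-D output grid around the BMU
        g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_trajectory(weights, series):
    """Map a multi-dimensional time series to its path of best-matching nodes."""
    flat = weights.reshape(-1, weights.shape[-1])
    return [np.unravel_index(int(np.argmin(np.linalg.norm(flat - x, axis=1))),
                             weights.shape[:2]) for x in series]

data = rng.normal(size=(500, 6))   # stand-in for joint-angle time series
w = train_som(data)
path = bmu_trajectory(w, data[:10])
print(w.shape, len(path))
```

The resulting node trajectory is the "collective variable" the study feeds into a second SOM to build attractor diagrams.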
Construction of Multi-Dimensional Periodic Complementary Array Sets
NASA Astrophysics Data System (ADS)
Zeng, Fanxin; Zhang, Zhenyu
Multi-dimensional (MD) periodic complementary array sets (CASs) with impulse-like MD periodic autocorrelation functions are a natural generalization of (one-dimensional) periodic complementary sequence sets, and such array sets are widely applied to communication, radar, sonar, coded aperture imaging, and so forth. In this letter, based on multi-dimensional perfect arrays (MD PAs), a method for constructing MD periodic CASs is presented, which is carried out by sampling MD PAs. It is particularly worth mentioning that the numbers and sizes of sub-arrays in the proposed MD periodic CASs can be freely changed within the range of possibilities. In particular, for arbitrarily given positive integers M and L, two-dimensional periodic polyphase CASs with M² sub-arrays of size L × L can be produced by the proposed method. Analogously, pseudo-random MD periodic CASs can be obtained when pseudo-random MD arrays are sampled. Finally, the validity of the proposed method is confirmed by an example.
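The defining property, that the periodic autocorrelations of all sub-arrays sum to an impulse, is easy to verify numerically; a small sketch with a toy 2 × 2 set of our own construction (not one of the letter's sampled constructions):

```python
import numpy as np

def periodic_autocorr(arr):
    """Periodic (cyclic) autocorrelation of a real array via the 2-D FFT."""
    F = np.fft.fft2(arr)
    return np.fft.ifft2(F * np.conj(F)).real

# Toy set: the four 2x2 outer products of the Golay pair a=(1,1), b=(1,-1).
# Individually their periodic autocorrelations are not impulses, but they
# sum to one -- exactly the defining property of a periodic CAS.
a, b = np.array([1, 1]), np.array([1, -1])
arrays = [np.outer(u, v) for u in (a, b) for v in (a, b)]

total = sum(periodic_autocorr(A) for A in arrays)
print(np.round(total, 6))   # 16 at the zero shift, 0 at every other shift
```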
Multi-dimensional Longwave Forcing of Boundary Layer Cloud Systems
Mechem, David B.; Kogan, Y. L.; Ovtchinnikov, Mikhail; Davis, Anthony B; Evans, K. F.; Ellingson, Robert G.
2008-12-20
The importance of multi-dimensional (MD) longwave radiative effects on cloud dynamics is evaluated in a large eddy simulation (LES) framework employing multi-dimensional radiative transfer (Spherical Harmonics Discrete Ordinate Method —SHDOM). Simulations are performed for a case of unbroken, marine boundary layer stratocumulus and a broken field of trade cumulus. “Snapshot” calculations of MD and IPA (independent pixel approximation —1D) radiative transfer applied to LES cloud fields show that the total radiative forcing changes only slightly, although the MD effects significantly modify the spatial structure of the radiative forcing. Interactive simulations of each cloud type employing MD and IPA radiative transfer, however, differ little. For the solid cloud case, the MD simulation exhibits a slight reduction in entrainment rate and boundary layer TKE relative to the IPA simulation. This reduction is consistent with both the slight decrease in net radiative forcing and a negative correlation between local vertical velocity and radiative forcing, which implies a damping of boundary layer eddies. Snapshot calculations of the broken cloud case suggest a slight increase in radiative cooling, though few systematic differences are noted in the interactive simulations. We attribute this result to the fact that radiative cooling is a relatively minor contribution to the total energetics. For the cloud systems in this study, the use of IPA longwave radiative transfer is sufficiently accurate to capture the dynamical behavior of boundary layer clouds. Further investigations are required in order to generalize this conclusion to other cloud types and longer time integrations.
Factor analysis and scale revision
Steven P. Reise; Niels G. Waller; Andrew L. Comrey
2000-01-01
This article reviews methodological issues that arise in the application of exploratory factor analysis (EFA) to scale revision and refinement. The authors begin by discussing how the appropriate use of EFA in scale revision is influenced by both the hierarchical nature of psychological constructs and the motivations underlying the revision. Then they specifically address (a) important issues that arise prior
The evolution of anisotropic structures and turbulence in the multi-dimensional Burgers equation
S. N. Gurbatov; A. Yu. Moshkov; A. Noullez
2008-08-20
The goal of the present paper is to investigate the evolution of anisotropic regular structures and turbulence at large Reynolds numbers in the multi-dimensional Burgers equation. We show that the velocity and potential fields become locally isotropized at small scales inside cellular zones. For periodic waves, we have simple decay inside a frozen structure. The global structure at large times is determined by the initial correlations, and for a short-range correlated field we have isotropization of the turbulence. The other limit we consider is the final behavior of the field, when the processes of nonlinear and harmonic interactions are frozen and the evolution of the field is determined only by linear dissipation.
2011-01-01
Background The concept of resilience has captured the imagination of researchers and policy makers over the past two decades. However, despite the ever growing body of resilience research, there is a paucity of relevant, comprehensive measurement tools. In this article, the development of a theoretically based, comprehensive multi-dimensional measure of resilience in adolescents is described. Methods Extensive literature review and focus groups with young people living with chronic illness informed the conceptual development of scales and items. Two sequential rounds of factor and scale analyses were undertaken to revise the conceptually developed scales using data collected from young people living with a chronic illness and a general population sample. Results The revised Adolescent Resilience Questionnaire comprises 93 items and 12 scales measuring resilience factors in the domains of self, family, peer, school and community. All scales have acceptable alpha coefficients. Revised scales closely reflect conceptually developed scales. Conclusions It is proposed that, with further psychometric testing, this new measure of resilience will provide researchers and clinicians with a comprehensive and developmentally appropriate instrument to measure a young person's capacity to achieve positive outcomes despite life stressors. PMID:21970409
Jie Zhang; Mingxia Gao; Jia Tang; Pengyuan Yang; Yinkun Liu; Xiangmin Zhang
2006-01-01
Recently, matrix-assisted laser desorption ionization (MALDI) technique has been shown to be complementary to electrospray ionization (ESI) with respect to the population of peptides and proteins that can be detected. In this study, we tried to hyphenate MALDI–TOF–TOF–MS and ESI–QUADRUPOLE–TOF–MS with a single 2D liquid chromatography for complicated protein sample analysis. The effluents of RPLC were split into two parts
A Multi-Dimensional approach towards Intrusion Detection System
Thakur, Manoj Rameshchandra
2012-01-01
In this paper, we suggest a multi-dimensional approach towards intrusion detection. Network and system usage parameters like source and destination IP addresses; source and destination ports; incoming and outgoing network traffic data rate and number of CPU cycles per request are divided into multiple dimensions. Rather than analyzing raw bytes of data corresponding to the values of the network parameters, a mature function is inferred during the training phase for each dimension. This mature function takes a dimension value as an input and returns a value that represents the level of abnormality in the system usage with respect to that dimension. This mature function is referred to as Individual Anomaly Indicator. Individual Anomaly Indicators recorded for each of the dimensions are then used to generate a Global Anomaly Indicator, a function with n variables (n is the number of dimensions) that provides the Global Anomaly Factor, an indicator of anomaly in the system usage based on all the dimensions consid...
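The per-dimension indicator scheme described above can be sketched as follows; the absolute z-score indicator and the mean aggregate are our own simplifying assumptions, stand-ins for the paper's inferred "mature functions" and n-variable global function:

```python
import numpy as np

class IndividualAnomalyIndicator:
    """Hypothetical per-dimension indicator (not the paper's exact inferred
    function): abnormality of a value is its absolute z-score under the
    distribution observed during the training phase."""
    def fit(self, values):
        self.mu, self.sd = np.mean(values), np.std(values) + 1e-12
        return self
    def __call__(self, x):
        return abs(x - self.mu) / self.sd

def global_anomaly_factor(indicators, sample):
    """Aggregate the n per-dimension indicators into one global score; a
    plain mean is our simplification of the paper's n-variable function."""
    return float(np.mean([ind(x) for ind, x in zip(indicators, sample)]))

rng = np.random.default_rng(1)
# stand-in training data: e.g. CPU cycles, traffic rate, requests/s per sample
train = rng.normal(loc=[50, 1000, 5], scale=[5, 100, 1], size=(1000, 3))
indicators = [IndividualAnomalyIndicator().fit(train[:, d]) for d in range(3)]

low = global_anomaly_factor(indicators, [52, 1100, 5.5])   # near-normal usage
high = global_anomaly_factor(indicators, [95, 5000, 20])   # clearly abnormal
print(low, high)
```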
A Multi-Dimensional Classification Model for Scientific Workflow Characteristics
Ramakrishnan, Lavanya; Plale, Beth
2010-04-05
Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows have increasingly been used for the orchestration of science discovery tasks that use distributed resources and web-services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedical, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows.
On Multi-Dimensional Vocabulary Teaching Mode for College English Teaching
ERIC Educational Resources Information Center
Zhou, Li-na
2010-01-01
This paper analyses the major approaches in EFL (English as a Foreign Language) vocabulary teaching from historical perspective and puts forward multi-dimensional vocabulary teaching mode for college English. The author stresses that multi-dimensional approaches of communicative vocabulary teaching, lexical phrase teaching method, the grammar…
ROOT — An object oriented data analysis framework
Rene Brun; Fons Rademakers
1997-01-01
The ROOT system is an Object Oriented framework for large scale data analysis. ROOT, written in C++, contains, among others, an efficient hierarchical OO database, a C++ interpreter, advanced statistical analysis (multi-dimensional histogramming, fitting, minimization, cluster finding algorithms) and visualization tools. The user interacts with ROOT via a graphical user interface, the command line or batch scripts. The command and
Multi-Dimensional Analysis of Dynamic Human Information Interaction
ERIC Educational Resources Information Center
Park, Minsoo
2013-01-01
Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…
Multi-dimensional Radiation Transport in Rapidly Expanding Envelopes
NASA Astrophysics Data System (ADS)
Höflich, P.
2009-09-01
We discuss the current status of our HYDrodynamical RAdiation (Hydra) code for rapidly expanding, low density envelopes commonly found in core collapse and thermonuclear supernovae (+ novae and WR stars). We focus on our current implementation of multi-dimensional, non-relativistic radiation transport, neglecting all terms of order higher than O(v/c). Line opacities are treated in the narrow line limit, and consistency with the rate equations, radiation field and hydrodynamics is achieved iteratively in each time step via 'accelerated lambda iteration'. The solution of the transport is based on a hybrid scheme between a grid-based Variable Eddington Tensor method and a Monte-Carlo method, which is used for the auxiliary calculation to determine the Tensor elements. The advantages and limitations of our approach are discussed. We use this hybrid approach to reduce problems related to a grid-imposed directional dependence of the speed of light, and problems inherent to methods based on short and long characteristics for the Tensors. For short characteristics, the frequency or directional errors increase with the resolution; for long characteristics, the memory and computational requirements seem to be beyond feasibility. The limitations and the potential of our current approach are demonstrated by two simulations of thermonuclear supernovae.
Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory
NASA Astrophysics Data System (ADS)
Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.
2014-01-01
Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. 
The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.
Multi-dimensional Problems in Health Settings: A Review of Approaches to Decision Making
Del Rio Vilas, Victor J.; Montibeller, Gilberto; Franco, L. Alberto; Aspinall, Willy
2013-01-01
Introduction There appears to be a growing number of prioritization exercises, for example of diseases, in health related settings (1). The decision process around these exercises involves comparing competing alternatives, i.e. diseases, and irreducible objectives. In addition to the multi-dimensional nature of the problem, the lack of reliable data, group dynamics associated with the involvement of experts, and the multiplicity of stakeholders, among other contextual factors, add complexity to the decision process. Here we review trends in such prioritization exercises and applications in different settings and for different events of interest, for example the management of emerging risks. Based on our findings, we discuss a conceptual framework based on multi-attribute utility theory presented to the World Organization for Animal Health (OIE) for the modification of its qualitative assessment of veterinary services performance into a quantifiable decision support system. Methods We searched PubMed for articles containing the key words ‘multi-criteria’, ‘multi-attribute’, ‘multi-objective’, ‘prioritization’, ‘decision making’ and their variations (e.g. without hyphenation) for the period 1990 to 2011 for human and veterinary medicine. We focused on prioritization methodologies and their sound application. Results A large number of prioritization efforts in health settings aim to produce a rank order of diseases to help allocation of scarce surveillance and disease control budgets. A number of applications target the prioritization of competing health interventions against specific diseases. Fewer target different events, for example emerging threats. Common mistakes found in multi-attribute prioritization approaches reported in the social sciences (2) appear also in public and animal health settings. 
In particular, the application of linear additive models to non-preferentially independent evaluation criteria, the poor design of attributes to assess the decision alternatives, the failure to define suitable criteria scales, and mistakes in defining trade-off weights were prevalent. In addition, most decision support tools tend to be overly complex. This not only compromises their acceptability and long-term sustainability but also increases the likelihood of methodological mistakes in their design and regular application. For example, the failure to properly identify and separate ‘ends’ objectives, such as the improvement of a country’s health, from ‘means’ objectives, i.e. required resources, in the definition of the fundamental drivers in any decision process. Conclusions Our findings, and experience in the practical application of formal prioritization methodologies (3), informed our advice to the OIE for the quantification of its tools for the assessment of veterinary services performance. The current framework used by the OIE produces a purely qualitative output with ordinal scales. The suggested quantitative extension allows additional outputs not available in their current form, for example, the aggregation of assessment scores at any level within the framework to produce a country’s overall score. It also permits the assessment of marginal performance improvements for every criterion and the consideration of trade-offs among the different criteria. The final output of our extension is the identification of the best portfolio of actions that will maximize the overall capability of national veterinary services given available resources. Quantification of the existing tool will deliver obvious benefits such as enhanced accountability and transparency in the decision making process, and will allow the historical analysis of a country’s veterinary services performance. 
The approach suggested to the OIE is adaptable to similar decision problems, such as monitoring the implementation of the International Health Regulations in a given country.
Psychometric properties and confirmatory factor analysis of the Jefferson Scale of Physician Empathy
2011-01-01
Background Empathy towards patients is considered to be associated with improved health outcomes. Many scales have been developed to measure empathy in health care professionals and students. The Jefferson Scale of Physician Empathy (JSPE) has been widely used. This study was designed to examine the psychometric properties and the theoretical structure of the JSPE. Methods A total of 853 medical students responded to the JSPE questionnaire. A hypothetical model was evaluated by structural equation modelling to determine the adequacy of goodness-of-fit to sample data. Results The model showed excellent goodness-of-fit. Further analysis showed that the hypothesised three-factor model of the JSPE structure fits well across the gender differences of medical students. Conclusions The results supported scale multi-dimensionality. The 20-item JSPE provides a valid and reliable scale to measure empathy not only in undergraduate and graduate medical education programmes, but also among practising doctors. The limitations of the study are discussed and some recommendations are made for future practice. PMID:21810268
Steps Toward a Large-Scale Solar Image Data Analysis to Differentiate Solar Phenomena
NASA Astrophysics Data System (ADS)
Banda, J. M.; Angryk, R. A.; Martens, P. C. H.
2013-11-01
We detail the first application of several dissimilarity measures to large-scale solar image data analysis. Using a solar-domain-specific benchmark dataset that contains multiple types of phenomena, we analyzed combinations of image parameters with different dissimilarity measures to determine the combinations that allow us to differentiate between the multiple solar phenomena from both intra-class and inter-class perspectives, where by class we refer to the same types of solar phenomena. We also investigate the problem of reducing data dimensionality by applying multi-dimensional scaling (MDS) to the dissimilarity matrices produced using the previously mentioned combinations. As an early investigation into dimensionality reduction, we examine how many MDS components are needed to maintain a good representation of our data (in a new artificial data space) and how many can be discarded to enhance querying performance. Finally, we present a comparative analysis of several classifiers to determine the quality of the dimensionality reduction achieved with this combination of image parameters, similarity measures, and MDS.
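Classical (Torgerson) MDS, one common way to embed a dissimilarity matrix as described above, can be sketched as:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: double-centre the squared dissimilarities
    and embed via the top-k eigenpairs of the resulting Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of the configuration
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the top-k components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# sanity check: pairwise distances among known 2-D points are recovered
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)
D2 = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(D, D2))  # True: exact for Euclidean dissimilarities, up to rotation/reflection
```

For non-Euclidean dissimilarity measures the eigenvalues can go negative, and the trailing components discarded here are exactly the "discardable" dimensions the abstract refers to.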
Continuation and bifurcation analysis of large-scale dynamical systems with LOCA.
Salinger, Andrew Gerhard; Phipps, Eric Todd; Pawlowski, Roger Patrick
2010-06-01
Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However, applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML, Amesos) and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.
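The simplest member of the continuation family that libraries like LOCA generalize is natural-parameter continuation; a scalar sketch (the test equation and step sizes are our own toy example, not anything from LOCA's API):

```python
import numpy as np

def newton(f, df, x0, tol=1e-12, maxit=50):
    """Plain Newton iteration for a scalar equation f(x) = 0."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Natural-parameter continuation: trace a solution branch x(lam) of
# x^3 - x + lam = 0 by stepping lam and warm-starting Newton from the
# previous solution (the branch has a fold near lam ~ 0.385, which this
# naive scheme cannot pass -- that is what pseudo-arclength methods add).
f = lambda x, lam: x**3 - x + lam
df = lambda x, lam: 3 * x**2 - 1

branch = []
x = 1.0                                  # start on the upper branch at lam = 0
for lam in np.linspace(0.0, 0.3, 31):
    x = newton(lambda y: f(y, lam), lambda y: df(y, lam), x)
    branch.append((lam, x))

print(branch[0], branch[-1])
```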
Kang, InHan
2006-01-01
In this thesis, we present a system for visualizing hierarchical, multi-dimensional, memory-intensive datasets. Specifically, we designed an interactive system to visualize data collected by high-throughput microscopy and ...
Multi-dimensional ultra-high frequency passive radio frequency identification tag antenna designs
Delichatsios, Stefanie Alkistis
2006-01-01
In this thesis, we present the design, simulation, and empirical evaluation of two novel multi-dimensional ultra-high frequency (UHF) passive radio frequency identification (RFID) tag antennas, the Albano-Dipole antenna ...
Multi-dimensional Multiphase Modeling of Sediment Transport
NASA Astrophysics Data System (ADS)
Cheng, Z.; d'Albignac, S.; Yu, X.; Hsu, T.; Sou, I.; Calantoni, J.
2012-12-01
Sediment transport driven by waves and currents is of great significance for predicting coastal morphodynamics. Eulerian two-phase models have been shown to be effective for studying sheet flow sediment transport, though most of them are limited to Reynolds-averaged one-dimensional-vertical formulations. Hence, bedforms, plug flow and turbulence cannot be resolved. Our goal is to develop four-way coupled multiphase models for multi-dimensional sediment transport under the numerical framework of OpenFOAM for Eulerian modeling and CFDEM for Euler-Lagrangian modeling. In the Eulerian modeling, particle-particle interaction is modeled using the kinetic theory of granular flow for binary collisions and a phenomenological closure for the stresses of enduring contact. To improve the capability of the model for a range of grain sizes, a new closure for the fluid-particle velocity fluctuation correlation in the k-ε equations is proposed. The model is validated by comparing the numerical results with laboratory experiments under steady flow and oscillatory flow for grain sizes ranging from 0.13 to 0.51 mm. To improve the closure of particle stress and to study poly-dispersed sediment transport processes, an Euler-Lagrangian solver called CFDEM, which couples OpenFOAM for the fluid phase and LIGGGHTS for the particle phase, is modified for sand transport in oscillatory flow. Preliminary investigation suggests that even under sheet flow conditions, small bed irregularities are observed during flow reversal. These small irregularities later encourage the formation of large sediment clouds during peak flow. 2D/3D simulations of the recent U-tube experiments at the Naval Research Laboratory will be carried out to study instabilities in sheet flow and poly-dispersed effects.
Kullback–Leibler Information and Its Applications in MultiDimensional Adaptive Testing
Chun Wang; Hua-Hua Chang; Keith A. Boughton
2011-01-01
This paper first discusses the relationship between Kullback–Leibler information (KL) and Fisher information in the context of multi-dimensional item response theory and is further interpreted for the two-dimensional case, from a geometric perspective. This explication should allow for a better understanding of the various item selection methods in multi-dimensional adaptive tests (MAT) which are based on these two information measures.
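For a dichotomous item under a multi-dimensional 2PL model, the KL information between two ability points reduces to the KL divergence of two Bernoulli distributions; a sketch (the logistic form and all item parameters are our own hypothetical choices):

```python
import numpy as np

def prob_m2pl(theta, a, d):
    """Multi-dimensional 2PL response probability (a standard form):
    P(theta) = logistic(a . theta + d)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

def kl_item_info(theta0, theta, a, d):
    """KL divergence between the item's Bernoulli response distributions
    at the current ability estimate theta0 and a candidate point theta."""
    p0, p = prob_m2pl(theta0, a, d), prob_m2pl(theta, a, d)
    return p0 * np.log(p0 / p) + (1 - p0) * np.log((1 - p0) / (1 - p))

a, d = np.array([1.2, 0.8]), -0.3        # hypothetical item parameters
theta0 = np.array([0.0, 0.0])
zero = kl_item_info(theta0, theta0, a, d)                    # 0 at theta0 itself
along = kl_item_info(theta0, np.array([1.0, 0.667]), a, d)   # ~parallel to a
across = kl_item_info(theta0, np.array([-0.667, 1.0]), a, d) # ~orthogonal to a
print(zero, along, across)
```

The geometry the paper discusses is visible even in this toy: the item carries essentially no KL information in directions orthogonal to its discrimination vector a.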
Towards multi-dimensional robotic control via noninvasive brain-computer interface
Xuedong Chen; Ou Bai
2009-01-01
Brain-computer interface (BCI) provides a new communication pathway for patients with neurological disorders who may be unable to make voluntary muscle contractions. A potential BCI application is that patients may control a neuro-prosthetic robot directly from their brain so that they can achieve virtual interaction with the environment. Therefore, a BCI that supports multi-dimensional control is highly demanded for a multi-dimensional robot. We hypothesized
Scaling analysis of stock markets
NASA Astrophysics Data System (ADS)
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply the detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations of several stock markets. The DFA method detects long-range correlations in time series. The LSDFA method reveals more local properties by using local scale exponents. The DCCA method quantifies the cross-correlation of two non-stationary time series. We report the results of auto-correlation and cross-correlation behaviors in three western countries and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) using the DFA, LSDFA, and DCCA methods. The findings are that correlations of stocks are influenced by the economic systems of different countries and by the financial crisis. The results indicate that there are stronger auto-correlations in Chinese stocks than in western stocks in any period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. The LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the integration of China and the world economy after the global financial crisis. Turning to cross-correlations, the six stock markets show different properties, while the three Chinese stocks reach their weakest cross-correlations during the global financial crisis. PMID:24985421
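A minimal first-order DFA, recovering the textbook scaling exponents (alpha ~ 0.5 for uncorrelated noise, ~ 1.5 for a random walk), can be sketched as:

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: integrate the series,
    split the profile into windows of length s, subtract a linear fit in
    each window, and fit the slope of log F(s) versus log s."""
    profile = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n_win = len(profile) // s
        segs = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        trends = np.array([np.polyval(np.polyfit(t, seg, 1), t) for seg in segs])
        F.append(np.sqrt(np.mean((segs - trends) ** 2)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
scales = np.array([16, 32, 64, 128, 256])
alpha_noise = dfa(rng.normal(size=20000), scales)            # ~0.5: uncorrelated
alpha_walk = dfa(np.cumsum(rng.normal(size=20000)), scales)  # ~1.5: random walk
print(alpha_noise, alpha_walk)
```

The LSDFA variant the paper uses replaces the single global slope with local slopes estimated scale by scale; DCCA replaces the autocovariance of one detrended profile with the covariance of two.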
All-optical multi-dimensional imaging of energy-materials beyond the diffraction limit
NASA Astrophysics Data System (ADS)
Smith, Steve; Dagel, D. J.; Zhong, L.; Kolla, P.; Ding, S.-Y.
2011-09-01
Efficient, environmentally friendly harvesting, storage, transport and conversion of energy are among the foremost challenges now facing mankind. An important facet of this challenge is the development of new materials with improved electronic and photonic properties. Nano-scale metrology will be important in developing these materials, and optical methods have many advantages over electrons or proximal probes. To surpass the diffraction limit, near-field methods can be used. Alternatively, the concept of imaging in a multi-dimensional space can be employed: in addition to the spatial dimensions, the added dimensions of energy and time allow closely spaced objects to be distinguished, in effect increasing the achievable resolution of optical microscopy towards the molecular level. We have employed these methods to study materials relevant to renewable energy processes. Specifically, we image the position and orientation of single carbohydrate binding modules and visualize their interaction with cellulose with ~10 nm resolution, an important step in identifying the molecular underpinnings of bio-processing and the development of low-cost alternative fuels. We also describe our current work applying these concepts, using femtosecond laser spectroscopy, to characterize the ultrafast carrier dynamics (~100 fs) in a new class of nano-structured solar cells predicted to have theoretical efficiencies exceeding 60%.
NASA Astrophysics Data System (ADS)
Suga, Shinsuke
2014-11-01
We propose accurate explicit numerical schemes based on the lattice Boltzmann (LB) method for multi-dimensional diffusion equations. In the LB schemes, the velocity models D2Q9 and D2Q13 are used for two-dimensional equations and D3Q19 and D3Q25 for three-dimensional equations. We introduce free parameters that characterize the weights of the equilibrium distribution functions to reduce numerical errors. Consistency analysis through the fourth-order Chapman-Enskog expansion of the distribution functions gives an approximate diffusion equation with error terms up to fourth order. The relaxation parameter and weight parameters are determined so that the second-order error terms are eliminated in the approximate equation. Stability analysis shows that we can find a relaxation parameter such that each of the presented schemes is stable for given diffusion coefficients and discretization parameters. Numerical experiments on isotropic and anisotropic benchmark problems show that the presented schemes derived from the velocity models D2Q13 and D3Q25 are useful for numerical simulations of practical problems governed by two- and three-dimensional diffusion equations, respectively. In particular, schemes in which the value of the relaxation parameter is set to 1 demonstrate fourth-order accuracy under the stability condition.
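As a rough illustration of the lattice Boltzmann treatment of diffusion, here is a plain D2Q5 scheme with the common weights w0 = 1/3, wi = 1/6 and BGK collision. This is a generic textbook-style sketch, not the tuned D2Q9/D2Q13/D3Q19/D3Q25 schemes with free weight parameters developed in the paper:

```python
import numpy as np

# D2Q5 velocities and weights (a common choice for diffusion problems)
VEL = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
W = [1 / 3, 1 / 6, 1 / 6, 1 / 6, 1 / 6]

def lb_diffuse(c0, tau=1.0, steps=100):
    """Relax a concentration field c0 by BGK collision plus streaming.

    Periodic boundaries; for this weight set the lattice diffusion
    coefficient is D = (tau - 1/2) / 3 in lattice units.
    """
    f = np.array([w * c0 for w in W])        # start at equilibrium
    for _ in range(steps):
        c = f.sum(axis=0)                    # macroscopic concentration
        for i, (cx, cy) in enumerate(VEL):
            feq = W[i] * c
            f[i] += (feq - f[i]) / tau       # BGK collision step
            f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)  # stream
    return f.sum(axis=0)
```

Collision and streaming each conserve the total concentration exactly, so mass is preserved while an initial peak spreads diffusively.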
Nucleosynthesis in multi-dimensional SN Ia explosions
NASA Astrophysics Data System (ADS)
Travaglio, C.; Hillebrandt, W.; Reinecke, M.; Thielemann, F.-K.
2004-10-01
We present the results of nucleosynthesis calculations based on multi-dimensional (2D and 3D) hydrodynamical simulations of the thermonuclear burning phase in type Ia supernovae (hereafter SN Ia). The detailed nucleosynthetic yields of our explosion models are calculated by post-processing the ejecta, using passively advected tracer particles. The nuclear reaction network employed in computing the explosive nucleosynthesis contains 383 nuclear species, ranging from neutrons, protons, and α-particles to 98Mo. Our models follow the common assumption that SN Ia are the explosions of white dwarfs that have approached the Chandrasekhar mass (MCh ≈ 1.39 M⊙) and are disrupted by thermonuclear fusion of carbon and oxygen. But in contrast to 1D models, which adjust the burning speed to reproduce lightcurves and spectra, the thermonuclear burning model applied in this paper does not contain adjustable parameters. Therefore variations of the explosion energies and nucleosynthesis yields depend on changes of the initial conditions only. Here we discuss the nucleosynthetic yields obtained in 2D and 3D models with two different choices of ignition conditions (centrally ignited, in which the spherical initial flame geometry is perturbed with toroidal rings, and bubbles, in which multi-point ignition conditions are simulated), keeping the initial composition of the white dwarf unchanged. Constraints imposed on the hydrodynamical models from nucleosynthesis as well as from the radial velocity distribution of the elements are discussed in detail. We show that in our simulations unburned C and O typically ranges from ~40% to ~50% of the total ejected material. Some of the unburned material remains between the flame plumes and is concentrated in low-velocity regions at the end of the simulations. This effect is more pronounced in 2D than in 3D and in models with a small number of (large) ignition spots.
The main differences between all our models and standard 1D computations are, besides the higher mass fraction of unburned C and O, the C/O ratio (which in our case is typically a factor of 2.5 higher than in 1D computations) and somewhat lower abundances of certain intermediate-mass nuclei such as S, Cl, Ar, K, and Ca, and of 56Ni. We also demonstrate that the amount of 56Ni produced in the explosion is a very sensitive function of density and temperature. Because explosive C and O burning may produce the iron-group elements and their isotopes in rather different proportions, one can get different 56Ni fractions (and thus supernova luminosities) without changing the kinetic energy of the explosion. Finally, we show that the high-resolution multi-point ignition (bubbles) model is needed to burn most of the material in the center, demonstrating that high resolution coupled with a large number of ignition spots is crucial to get rid of unburned material in a pure deflagration SN Ia model. Tables 1 and 2 are only available in electronic form at http://www.edpsciences.org
A Comment on "On Some Contradictory Computations in Multi-dimensional Mathematics"
E. Capelas de Oliveira; W. A. Rodrigues Jr
2006-03-27
In this paper we analyze the status of some `unbelievable results' presented in the paper `On Some Contradictory Computations in Multi-Dimensional Mathematics' [1], published in Nonlinear Analysis, a journal indexed in the Science Citation Index. Among the unbelievable results `proved' in the paper we find statements like the following: (i) a linear transformation which is a rotation in R^2 with rotation angle θ different from nπ/2 is inconsistent with arithmetic; (ii) complex number theory is inconsistent. Besides these `results' of a mathematical nature, [1] also offers a `proof' that Special Relativity is inconsistent. Now, we are left with only two options: (a) the results of [1] are correct, and in this case we need a revolution in Mathematics (and also in Physics), or (b) the paper is a potpourri of nonsense. We show that option (b) is the correct one. All `proofs' appearing in [1] are trivially wrong, being based on a poor knowledge of advanced calculus notions. There are many examples (some of them discussed in [2,3,4,5,6]) of completely wrong papers using non sequitur Mathematics in the Physics literature. Taking into account that a paper like [1] appeared in a Mathematics journal, we think it is time for editors and referees of scientific journals to become more careful in order to avoid the dissemination of nonsense.
Meshless solutions to multi-dimensional integral and partial differential equations
NASA Astrophysics Data System (ADS)
Kansa, E. J.
2013-10-01
Many important problems in physics, quantum chemistry, biology, economics, etc. are expressed as multi-dimensional (MD) integral equations (IEs) and partial differential equations (PDEs) that are difficult to solve with the dominant numerical techniques such as finite element, finite difference, and finite volume methods. The main difficulty with multi-dimensional problems is the curse of dimensionality, which requires ever more computer memory and speed. A radial basis function (RBF) method possessing exponential convergence is recommended as the key technique to circumvent the curse of dimensionality, combined with other techniques, to solve MD problems numerically.
Developing a Hypothetical Multi-Dimensional Learning Progression for the Nature of Matter
ERIC Educational Resources Information Center
Stevens, Shawn Y.; Delgado, Cesar; Krajcik, Joseph S.
2010-01-01
We describe efforts toward the development of a hypothetical learning progression (HLP) for the growth of grade 7-14 students' models of the structure, behavior and properties of matter, as it relates to nanoscale science and engineering (NSE). This multi-dimensional HLP, based on empirical research and standards documents, describes how students…
ERIC Educational Resources Information Center
Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd
2012-01-01
The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions…
Non-periodicity in chemostat equations: a multi-dimensional negative Bendixson-Dulac criterion
Fiedler, Bernold
… function approach, we develop and apply a multi-dimensional Bendixson-Dulac type exclusion principle … For any choice of i > 0 and under assumption (1.4), the above competitive exclusion result still holds … of bacteria, the chemostat is also a model for a very simple lake where exploitative competition is easily …
A Computational Market for Information Filtering in MultiDimensional Spaces
Grigoris J. Karakoulas; Innes A. Ferguson
1995-01-01
This paper presents the computational market of SIGMA (System of Information Gathering Market-based Agents) as a model of decentralized decision making for the task of information filtering in multi-dimensional spaces such as the Usenet netnews. Different learning and adaptation techniques are integrated within SIGMA for creating a robust network-based application which adapts to both changes in …
Kullback-Leibler Information and Its Applications in Multi-Dimensional Adaptive Testing
ERIC Educational Resources Information Center
Wang, Chun; Chang, Hua-Hua; Boughton, Keith A.
2011-01-01
This paper first discusses the relationship between Kullback-Leibler information (KL) and Fisher information in the context of multi-dimensional item response theory and is further interpreted for the two-dimensional case, from a geometric perspective. This explication should allow for a better understanding of the various item selection methods…
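The item-level KL information underlying such selection methods can be sketched for the unidimensional 2PL model (a generic IRT formula for illustration, not the paper's multi-dimensional derivation; all names are our own):

```python
import math

def p_correct(theta, a, b):
    """2PL item response probability: P(theta) = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def kl_item_info(theta0, theta, a, b):
    """KL divergence between the Bernoulli response distributions at the
    provisional ability estimate theta0 and a candidate ability theta,
    for one 2PL item with discrimination a and difficulty b."""
    p0, p = p_correct(theta0, a, b), p_correct(theta, a, b)
    return p0 * math.log(p0 / p) + (1 - p0) * math.log((1 - p0) / (1 - p))
```

Integrating this quantity over a small interval around the provisional ability estimate gives the KL index commonly used for adaptive item selection; the paper extends this reasoning to the multi-dimensional case.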
NASA Astrophysics Data System (ADS)
Li, Hai-Sheng; Zhu, Qingxin; Zhou, Ri-Gui; Song, Lan; Yang, Xing-jiang
2014-04-01
Multi-dimensional color image processing has two difficulties. One is that a large number of bits are needed to store multi-dimensional color images; for example, a three-dimensional color image of needs bits. The other is that the efficiency or accuracy of image segmentation is not high enough for some images to be used in content-based image search. In order to solve the above problems, this paper proposes a new representation for multi-dimensional color images, called a -qubit normal arbitrary quantum superposition state (NAQSS), where qubits represent the colors and coordinates of pixels (e.g., representing a three-dimensional color image of using only 30 qubits), and the remaining 1 qubit represents image segmentation information to improve the accuracy of image segmentation. We then design a general quantum circuit to create the NAQSS state in order to store a multi-dimensional color image in a quantum system, and propose a quantum circuit simplification algorithm to reduce the number of quantum gates in the general circuit. Finally, different strategies to retrieve a whole image or a target sub-image from a quantum system are studied, including Monte Carlo sampling and an improved Grover's algorithm which can search out a coordinate of a target sub-image running only in , where and are the numbers of pixels of an image and a target sub-image, respectively.
Multi-dimensional simulations of helium shell flash convection F. Herwig1,2
Herwig, Falk
F. Herwig, B. Freytag, R. M… One-dimensional stellar evolution codes (Fig. 1) have to adopt simplifying assumptions on convection-induced mixing, especially at convective boundaries. Here, we report on 2D (Fig. 2) and 3D (Fig. 3, bottom panel) hydrodynamic …
Continuous Multi-dimensional Top-k Query Processing in Sensor Networks
Wang, Dan
Hongbo Jiang, Jie Cheng, Dan Wang. … fields of computer science. Efficient implementation of the top-k queries is the key for information … of users searching information directly into the physical world, many new challenges arise for top-k query …
Efficient Processing of Top-k Dominating Queries on Multi-Dimensional Data Man Lung Yiu
Yiu, Man Lung
Man Lung Yiu, Department of Computer Science, University of Hong Kong. The top-k … significant objects. In addition, it combines the advantages of top-k and skyline queries without sharing …
Jointly Optimal LQG Quantization and Control Policies for Multi-Dimensional Linear Gaussian Sources
YÃ¼ksel, Serdar
Definition 1.1: Let M = {1, 2, …, M} with M = |M|. Let A be a topological space. A quantizer Q(A; M) … For quadratic cost criteria, we investigate the existence and the structure of optimal quantization and control …
A class of time-optimum FSSP algorithms for multi-dimensional cellular arrays
NASA Astrophysics Data System (ADS)
Umeo, Hiroshi; Kubo, Keisuke; Nishide, Kinuo
2015-04-01
The firing squad synchronization problem (FSSP) on cellular automata has been studied extensively for more than fifty years, and a rich variety of synchronization algorithms has been proposed not only for one-dimensional arrays but also for two-dimensional arrays. In the present paper, we propose a new class of time-optimum FSSP algorithms for multi-dimensional cellular arrays. The algorithm is based on a simple recursive-halving marking scheme and can synchronize any two-dimensional (2D) rectangular array of size m × n, with a general at one corner, in m + n + max(m, n) - 3 optimum steps. The algorithm is a natural expansion of the well-known one-dimensional (1D) FSSP algorithms proposed by Balzer (1967), Gerken (1987), and Waksman (1966), and it can easily be expanded to three-dimensional (3D), even multi-dimensional, arrays. The proposed algorithm is isotropic with respect to the side-lengths of multi-dimensional arrays, and this isotropic property yields algorithmic correctness and easy verification for the multi-dimensional time-optimum FSSP algorithms designed.
Matrix-valued attenuation in multi-dimensional problems of anisotropic viscoelasticity
Andrzej Hanyga
2015-04-06
We extend the theory of matrix-valued complete Bernstein functions and apply it to analyze Green's function of an anisotropic multi-dimensional linear viscoelastic problem. Green's function is expressed in terms of matrix-valued attenuation and dispersion functions given in terms of a matrix-valued positive semi-definite Radon measure.
Approximation of the Multi-dimensional Stokes System with Embedded Pressure Discontinuities
Buscaglia, Gustavo C.
… except for the latter field on the embedded interface. A suitable modification of the pressure space … convergence rates for this approach. Key words: embedded discontinuities, finite elements, Galerkin …
The Multi-Dimensional Hardy Uncertainty Principle and its Interpretation in Terms of the
Feichtinger, Hans Georg
February 1, 2008. We extend Hardy's uncertainty principle for a square … We use this extension to show that Hardy's uncertainty principle is equivalent to a statement …
Address Decomposition for the Shaping of Multi-dimensional Signal Constellations
Kabal, Peter
A. K… This scheme, called the address decomposition, is based on decomposing the addressing … This is called a signal constellation. The constellation points are usually selected as a finite subset …
BANDWIDTH-INTENSIVE FPGA ARCHITECTURE FOR MULTI-DIMENSIONAL DFT Chi-Li Yu 1
Kambhampati, Subbarao
Chi-Li Yu, Chaitali Chakrabarti. … they are not suitable for applications that require fast response time. In this paper we focus on FPGA … taking into account FPGA resources and the characteristics of off-chip memory access, namely, the burst access pattern …
Multi-dimensional Network Security Game: How do attacker and defender battle on parallel targets?
Lui, John C.S.
… become a thriving concern in fixed-line and mobile Internet. Due to the popularity of e-commerce … are battling over "multiple" targets. This type of game is appropriate to model many current network security … their resource limits. We model such a multi-dimensional network security game as a constrained non-zero-sum …
DRAFT DETC2009-87045 Supporting Trade Space Exploration of Multi-Dimensional Data
Zhang, Xiaolong "Luke"
… with Interactive … For example, in trade space exploration of large design data sets, designers need to select a subset of data … this prototype system. By using visual tools during trade space exploration, this research suggests a new …
FaRNet: Fast Recognition of High Multi-Dimensional Network Traffic Patterns
Politècnica de Catalunya, Universitat
… the multi-dimensional hierarchical heavy hitters (HHHs) of a dataset. However, existing schemes … a fundamentally new approach for extracting HHHs based on generalized frequent item-set mining (FIM), which allows … Keywords: Network Operation and Management; Traffic Profiling; Data Mining.
A combined discontinuous Galerkin and finite volume scheme for multi-dimensional VPFP system
Asadzadeh, M.; Bartoszek, K. [Department of Mathematics, Chalmers University of Technology and University of Gothenburg SE-412 96 Goeteborg (Sweden)
2011-05-20
We construct a numerical scheme for the multi-dimensional Vlasov-Poisson-Fokker-Planck system based on a combined finite volume (FV) method for the Poisson equation in the spatial domain and the streamline diffusion (SD) and discontinuous Galerkin (DG) finite element methods in the time and phase-space variables for the Vlasov-Fokker-Planck equation.
Assessment of the RELAP5 multi-dimensional component model using data from LOFT test L2-5
Davis, C.B.
1998-01-01
The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the LOFT L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident. Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D Version BF02 computer code. The calculated thermal-hydraulic responses of the LOFT primary and secondary coolant systems were generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP/MOD3.
Scale-PC shielding analysis sequences
Bowman, S.M.
1996-05-01
The SCALE computational system is a modular code system for analyses of nuclear fuel facility and package designs. With the release of SCALE-PC Version 4.3, the radiation shielding analysis community now has the capability to execute the SCALE shielding analysis sequences contained in the control modules SAS1, SAS2, SAS3, and SAS4 on an MS-DOS personal computer (PC). In addition, SCALE-PC includes two new sequences, QADS and ORIGEN-ARP. The capabilities of each sequence are presented, along with example applications.
Multi-dimensional Density Estimation David W. Scott a,,1
Scott, David W.
Keywords: cross-validation, curse of dimensionality, exploratory data analysis, frequency polygons, histograms, kernel estimators. Modern data analysis requires a number of tools to uncover … of techniques and a willingness to go beyond simple univariate methodologies. Many experimental scientists today …
Multi-dimensional reduction using self-organizing map
NASA Astrophysics Data System (ADS)
Kim, Kho Pui; Yusof, Fadhilah; Daud, Zalina binti Mohd
2014-07-01
The Self-Organising Map (SOM) has been found to be a useful tool for synoptic climatology, extreme rainfall pattern analysis, cloud classification and climate change analysis. In data preprocessing for statistical downscaling, Principal Component Analysis (PCA) or empirical orthogonal function (EOF) analysis is used to select the mode criterion for the predictor and predictand fields when building a model. However, EOF contributes less total variance in most cases, with only 70% to 90% of the total population variance accounted for in the analysis. Therefore, SOM is proposed to obtain a nonlinear mapping in the preprocessing process. This study examines the dimension reduction of an NCEP variable using SOM during the November-December-January-February (NDJF) periods. The NCEP data used are the 20-grid-point atmospheric data for the variable Sea Level Pressure (SLP). The results show that SOM extracted the high-dimensional data onto a low-dimensional representation.
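A minimal online SOM training loop conveys the nonlinear mapping idea. This is illustrative only: the grid size, learning-rate and neighborhood schedules, and the quantization-error check are our own placeholder choices, not those of the study:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Fit a 2-D SOM: for each sample, find the best-matching unit (BMU)
    and pull nearby units toward the sample with a Gaussian neighborhood
    whose width and learning rate shrink over the epochs."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    w = rng.standard_normal((n_units, data.shape[1]))
    # fixed coordinates of each unit on the map grid
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))      # neighborhood weights
            w += lr * h[:, None] * (x - w)
    return w

def quantization_error(data, w):
    """Mean distance from each sample to its nearest map unit."""
    return np.mean([np.min(((w - x) ** 2).sum(axis=1)) ** 0.5 for x in data])
```

Training moves the map units onto the data manifold, so the quantization error drops well below that of the random initial weights; each grid point then serves as a low-dimensional representative of a cluster of high-dimensional inputs.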
SCALE DRAM subsystem power analysis
Bhalodia, Vimal
2005-01-01
To address the needs of the next generation of low-power systems, DDR2 SDRAM offers a number of low-power modes with various performance and power consumption tradeoffs. The SCALE DRAM Subsystem is an energy-aware DRAM ...
HYPERS: First Ever Multi-Dimensional Asynchronous Hybrid Simulations
NASA Astrophysics Data System (ADS)
Omelchenko, Y.; Karimabadi, H.; Catalyurek, U.; Saule, E.; Brown, M. R.
2009-12-01
Hybrid (electron fluid, kinetic ion) simulations have recently emerged in large-scale laboratory (spheromak, FRC) and global magnetospheric studies with the goal of accurately predicting energetic particle transport in complex plasma/magnetic configurations. Multiple temporal scales associated with both plasma and magnetic field inhomogeneities force severe restrictions on the global timestep in time-stepped hybrid simulations. We present an alternative computational tool for multi-scale plasma modeling: a first-ever multi-dimensional (2D or 3D, chosen at compile time) asynchronous hybrid code, HYPERS (HYbrid-Particle Event-Resolved Simulation). HYPERS is currently being developed as part of the Virtual Hybrid Particle Laboratory (VHPL) in support of the SSX experiment at Swarthmore. It discards traditional time stepping in favor of the Discrete-Event Simulation (DES) approach [1]. Time increments for individual particles and local electromagnetic fields are adaptively selected by limiting per-update changes of their properties. This enables fast, reliable and accurate simulations of energetic plasmas immersed in highly inhomogeneous magnetic fields. Preliminary results from 2D simulations of the interaction of streaming plasmas with magnetic dipoles are discussed. We also report our ongoing efforts to develop efficient dynamic load-balancing strategies for future parallel HYPERS runs on petascale architectures. In this initial development phase, we are exploring the tradeoffs of developing 2D/3D variations of extremely fast 1D load-balancing heuristics, such as chain-on-chain partitioning, versus using fast geometry-based heuristics.
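The asynchronous, event-resolved update strategy can be sketched generically with a priority queue: each entity advances on its own adaptive timestep instead of a global one. This is a toy discrete-event scheduler, not HYPERS; all names and the update interface are our own:

```python
import heapq

def run_des(entities, t_end):
    """Advance each entity on its own timestep: pop the earliest pending
    update, apply it, and reschedule only that entity.

    `entities` maps a name to (state, update_fn), where update_fn(state, t)
    returns (new_state, dt_next) with dt_next chosen from the entity's
    local rate of change.
    """
    queue = [(0.0, name) for name in entities]
    heapq.heapify(queue)
    while queue:
        t, name = heapq.heappop(queue)
        if t > t_end:
            break                      # all remaining events are later
        state, update = entities[name]
        new_state, dt = update(state, t)
        entities[name] = (new_state, update)
        heapq.heappush(queue, (t + dt, name))
    return {name: s for name, (s, _) in entities.items()}
```

An entity whose update returns a small dt is visited proportionally more often, which is the point of discarding the global timestep: fast-varying regions no longer force the whole system onto their timescale.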
Reply to Adams: Multi-Dimensional Edge Interference
Eagle, Nathan N.
We completely agree with Adams that, in social network analysis, the particular research question should drive the definition of what constitutes a tie (1). However, we believe that even studies of inherently social ...
Scaling analysis of computational imaging systems
Ravi A. Athale; Gary W. Euliss; Joseph N. Mait
2008-01-01
Computational imaging systems are characterized by a joint design and optimization of front end optics, focal plane arrays and post-detection processing. Each constituent technology is characterized by its unique scaling laws. In this paper we will attempt a synthesis of the behavior of individual components and develop scaling analysis of the jointly designed and optimized imaging systems.
Beyond the Child-Langmuir Law: The Physics of Multi-dimensional Space-Charge-Limited Emission
NASA Astrophysics Data System (ADS)
Luginsland, John
2001-10-01
Space-Charge-Limited (SCL) flows in diodes have been an area of active research since the pioneering work of Child and Langmuir in the early part of the last century. Indeed, the scaling of current density with the voltage to the 3/2 power is one of the best-known limits in the fields of non-neutral plasma physics, accelerator physics, sheath physics, vacuum electronics, and high power microwaves (HPM). In the past five years, there has been renewed interest in the physics and characteristics of space-charge-limited emission in physically realizable configurations. This research has focused on characterizing the current and current density enhancement possible from two- and three-dimensional geometries, such as field-emitting arrays. In 1996, computational efforts led to the development of a scaling law that described the increased current drawn due to two-dimensional effects. Recently, this scaling has been analytically derived from first principles. In parallel efforts, computational work has characterized the edge enhancement of the current density, leading to a better understanding of the physics of explosive emission cathodes. In this talk, the analytic and computational extensions to the one-dimensional Child-Langmuir law will be reviewed, the accuracy of SCL emission algorithms will be assessed, and the experimental implications of multi-dimensional SCL flows will be discussed.
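The 1-D law being extended here can be evaluated directly: in SI units, J = (4ε0/9)·√(2e/m)·V^(3/2)/d². A small sketch using standard physical constants (the function name is ours):

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(voltage, gap):
    """1-D space-charge-limited current density (A/m^2) for a planar
    diode with electrode gap `gap` (m) at potential `voltage` (V)."""
    return (4 * EPS0 / 9) * math.sqrt(2 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2
```

Doubling the gap voltage raises the limiting current density by a factor of 2^(3/2) ≈ 2.83, the scaling quoted above; the talk's subject is how finite-width and multi-dimensional emitters depart from this 1-D limit.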
Automated multi-dimensional purification of tagged proteins.
Sigrell, Jill A; Eklund, Pär; Galin, Markus; Hedkvist, Lotta; Liljedahl, Pia; Johansson, Christine Markeland; Pless, Thomas; Torstenson, Karin
2003-01-01
The capacity for high-throughput purification (HTP) is essential in fields such as structural genomics, where large numbers of protein samples are routinely characterized in, for example, studies of structure determination, functionality and drug development. Proteins required for such analysis must be pure and homogeneous and available in relatively large amounts. The AKTA 3D system is a powerful automated protein purification system which minimizes preparation, run-time and repetitive manual tasks. It has the capacity to purify up to 6 different His6- or GST-tagged proteins per day and can produce 1-50 mg protein per run at >90% purity. The success of automated protein purification increases with careful experimental planning. The protocol, columns and buffers need to be chosen with the final application area for the purified protein in mind. PMID:14649294
Thomas Garrity
2012-05-25
A new classification scheme for pairs of real numbers is given, generalizing earlier work of the author that used continued fractions, which in turn was motivated by ideas from statistical mechanics in general and the work of Knauf and of Fiala and Kleban in particular. Critical for this classification are the number-theoretic and geometric properties of the triangle map, a type of multi-dimensional continued fraction.
MultiDimensional Modeling of Natural Gas Autoignition using Detailed Chemical Kinetics
APOORVA AGARWAL; DENNIS N. ASSANIS
2001-01-01
The autoignition of natural gas injected into a combustion bomb at pressures and temperatures typical of top-dead-center conditions in compression ignition engines is studied by combining a detailed chemical kinetic mechanism, consisting of 22 species and 104 elementary reactions, with a multi-dimensional reactive flow code. The effect of natural gas composition, ambient density and temperature on the ignition process is
Amy McGovern; Derek H. Rosendahl; Rodger A. Brown; Kelvin K. Droegemeier
2011-01-01
We introduce an efficient approach to mining multi-dimensional temporal streams of real-world data for ordered temporal motifs that can be used for prediction. Since many of the dimensions of the data are known or suspected to be irrelevant, our approach first identifies the salient dimensions of the data, then the key temporal motifs within each dimension, and finally the temporal
Development of a multi-dimensional thermal-hydraulic system code, MARS 1.3.1
J.-J. Jeong; K. S. Ha; B. D. Chung; W. J. Lee
1999-01-01
A multi-dimensional thermal-hydraulic system code, MARS, has been developed by consolidating and restructuring the RELAP5/MOD3.2.1.2 and COBRA-TF codes. The two codes were adopted to take advantage of the very general, versatile features of RELAP5 and the realistic three-dimensional hydrodynamic module of COBRA-TF. In the course of code development, major features of each code were consolidated into a single code first.
Study on the construction of multi-dimensional Remote Sensing feature space for hydrological drought
NASA Astrophysics Data System (ADS)
Xiang, Daxiang; Tan, Debao; Cui, Yuanlai; Wen, Xiongfei; Shen, Shaohong; Li, Zhe
2014-03-01
Hydrological drought refers to an abnormal water shortage caused by precipitation and surface water shortages or a groundwater imbalance. It is reflected in a drop in surface water, a decrease in vegetation productivity, an increase in the temperature difference between day and night, and so on. Remote sensing permits the observation of surface water, vegetation, temperature and other information from a macro perspective. This paper analyzes the correlation and differentiation of both remote sensing and surface-measured indicators, after selecting and extracting a series of representative remote sensing characteristic parameters, such as the vegetation index, surface temperature and surface water from HJ-1A/B CCD/IRS data, according to the spectral characterization of surface features in remote sensing imagery. Finally, multi-dimensional remote sensing features for hydrological drought are built on an intelligent collaborative model. For the Dongting Lake area, two drought events are analyzed to verify the multi-dimensional features, using remote sensing data from different phases together with field observation data. The experimental results show that multi-dimensional features are a good method for hydrological drought monitoring.
Minimizing I/O Costs of Multi-Dimensional Queries with BitmapIndices
Rotem, Doron; Stockinger, Kurt; Wu, Kesheng
2006-03-30
Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. A common approach for reducing the size of a bitmap index for high-cardinality attributes is to group ranges of values of an attribute into bins and then build a bitmap for each bin rather than a bitmap for each value of the attribute. Binning reduces storage costs; however, results of queries based on bins often require additional filtering to discard false positives, i.e., records in the result that do not satisfy the query constraints. This additional filtering, also known as "candidate checking," requires access to the base data on disk and involves significant I/O costs. This paper studies strategies for minimizing the I/O costs of candidate checking for multi-dimensional queries. This is done by determining the number of bins allocated for each dimension and then placing bin boundaries in optimal locations. Our algorithms use knowledge of data distribution and query workload. We derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
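The binning and candidate-checking idea described above can be sketched in a few lines; here in-memory boolean arrays stand in for on-disk bitmaps and base data, and all names are illustrative rather than taken from the paper's system:

```python
import numpy as np

def build_binned_index(values, bin_edges):
    """One bitmap (here a boolean mask) per bin; each record sets the bit of its bin."""
    bin_ids = np.digitize(values, bin_edges)
    return {int(b): (bin_ids == b) for b in np.unique(bin_ids)}

def range_query(values, index, bin_edges, lo, hi):
    """Answer lo <= x < hi. Bins fully inside the range are resolved from bitmaps
    alone; edge bins require reading base data ("candidate checking")."""
    hits = np.zeros(len(values), dtype=bool)
    candidates_checked = 0
    for b, bitmap in index.items():
        left = bin_edges[b - 1] if b >= 1 else -np.inf
        right = bin_edges[b] if b < len(bin_edges) else np.inf
        if lo <= left and right <= hi:        # fully covered bin: no base-data I/O
            hits |= bitmap
        elif right > lo and left < hi:        # edge bin: filter out false positives
            candidates_checked += int(bitmap.sum())
            hits |= bitmap & (values >= lo) & (values < hi)
    return hits, candidates_checked

values = np.array([0.1, 0.4, 0.55, 0.7, 0.95])
edges = np.array([0.25, 0.5, 0.75])           # four bins
index = build_binned_index(values, edges)
hits, checked = range_query(values, index, edges, 0.3, 0.75)
print(hits.tolist(), checked)                 # only the bin holding 0.4 needs checking
```

The paper's contribution is choosing the number of bins per dimension and the edge positions so that, for a given query workload, the expected number of candidate checks (and hence disk I/O) is minimized.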
Hitchhiker's guide to multi-dimensional plant pathology.
Saunders, Diane G O
2015-02-01
Filamentous pathogens pose a substantial threat to global food security. One central question in plant pathology is how pathogens cause infection and manage to evade or suppress plant immunity to promote disease. With many technological advances over the past decade, including DNA sequencing technology, an array of new tools has become embedded within the toolbox of next-generation plant pathologists. By employing a multidisciplinary approach plant pathologists can fully leverage these technical advances to answer key questions in plant pathology, aimed at achieving global food security. This review discusses the impact of: cell biology and genetics on progressing our understanding of infection structure formation on the leaf surface; biochemical and molecular analysis to study how pathogens subdue plant immunity and manipulate plant processes through effectors; genomics and DNA sequencing technologies on all areas of plant pathology; and new forms of collaboration on accelerating exploitation of big data. As we embark on the next phase in plant pathology, the integration of systems biology promises to provide a holistic perspective of plant–pathogen interactions from big data and only once we fully appreciate these complexities can we design truly sustainable solutions to preserve our resources. PMID:25729800
Fernandes, Michelle; Stein, Alan; Newton, Charles R.; Cheikh-Ismail, Leila; Kihara, Michael; Wulff, Katharina; de León Quintana, Enrique; Aranzeta, Luis; Soria-Frisch, Aureli; Acedo, Javier; Ibanez, David; Abubakar, Amina; Giuliani, Francesca; Lewis, Tamsin; Kennedy, Stephen; Villar, Jose
2014-01-01
Background The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. Methods An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. Results The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35–45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. Conclusions The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts. PMID:25423589
NASA Astrophysics Data System (ADS)
Falissard, F.
2013-11-01
This paper addresses the extension of one-dimensional filters in two and three space dimensions. A new multi-dimensional extension is proposed for explicit and implicit generalized Shapiro filters. We introduce a definition of explicit and implicit generalized Shapiro filters that leads to very simple formulas for the analyses in two and three space dimensions. We show that many filters used for weather forecasting, high-order aerodynamic and aeroacoustic computations match the proposed definition. Consequently the new multi-dimensional extension can be easily implemented in existing solvers. The new multi-dimensional extension and the two commonly used methods are compared in terms of compactness, robustness, accuracy and computational cost. Benefits of the genuinely multi-dimensional extension are assessed for various computations using the compressible Euler equations.
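For context, the classical multi-dimensional extension that such papers compare against applies a one-dimensional filter along each axis in turn; a minimal sketch of a second-order explicit Shapiro filter under that approach (periodic boundaries assumed for brevity; this is not the paper's new extension):

```python
import numpy as np

def shapiro_1d(u, axis):
    """Second-order explicit Shapiro filter along one axis (periodic boundaries):
    u <- u + (1/4) * (u_{i-1} - 2 u_i + u_{i+1})."""
    return u + 0.25 * (np.roll(u, -1, axis=axis) - 2.0 * u + np.roll(u, 1, axis=axis))

def filter_dimension_by_dimension(u):
    """Classical multi-dimensional usage: apply the 1D filter along each axis in turn."""
    for ax in range(u.ndim):
        u = shapiro_1d(u, axis=ax)
    return u

# In 1D the filter annihilates the grid-scale (2*dx) odd-even mode exactly.
u = (-1.0) ** np.arange(8)
print(shapiro_1d(u, axis=0))                  # -> all zeros
```

The dimension-by-dimension product form is simple but not genuinely multi-dimensional, which is precisely the gap the proposed extension addresses.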
Incorporating scale into digital terrain analysis
NASA Astrophysics Data System (ADS)
Dragut, L. D.; Eisank, C.; Strasser, T.
2009-04-01
Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, the standard 3 x 3 window-based algorithms are used for this purpose, thus tying the scale of resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between scale domains of terrain information and soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood sizes. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. While in a standard 3 x 3 window-based analysis mean curvature was one of the most poorly correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered, to 0.16 when the curvature was filtered within a 55 x 55 m neighborhood. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield.
Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, a thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions, is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.
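The generalization-and-correlation workflow described above can be sketched roughly as follows; the focal-mean implementation, the peak criterion, and the synthetic data are illustrative, not the authors' actual procedure:

```python
import numpy as np

def focal_mean(a, half):
    """Focal (moving-window) mean over a (2*half+1)^2 neighborhood, edge-padded."""
    p = np.pad(a, half, mode="edge")
    n = 2 * half + 1
    out = np.zeros(a.shape, dtype=float)
    for di in range(n):
        for dj in range(n):
            out += p[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out / n**2

def best_scale(attr, target, max_half=10):
    """Generalize a terrain derivative over growing neighborhoods and return the
    window size at the first peak of its correlation with the target variable."""
    corrs = []
    for half in range(1, max_half + 1):
        r = np.corrcoef(focal_mean(attr, half).ravel(), target.ravel())[0, 1]
        if corrs and r < corrs[-1]:           # correlation dropped: previous scale was the peak
            return 2 * (half - 1) + 1, corrs
        corrs.append(r)
    return 2 * max_half + 1, corrs

rng = np.random.default_rng(42)
curvature = rng.normal(size=(40, 40))
crop_yield = focal_mean(curvature, 2)         # yield driven by curvature at the 5 x 5 scale
scale, corrs = best_scale(curvature, crop_yield)
print(scale)                                  # -> 5, recovering the generating neighborhood
```

The same loop applies to any terrain derivative; in the study it was the generalized mean curvature whose correlation with yield peaked at the 55 x 55 m neighborhood.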
Scale-Specific Multifractal Medical Image Analysis
Braverman, Boris; Tambasco, Mauro
2013-01-01
Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
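The integral-image speedup of box counting described above can be sketched as follows; this is a minimal 2D illustration of the idea, not the authors' validated implementation:

```python
import numpy as np

def integral_image(mask):
    """Summed-area table: ii[i, j] holds the sum of mask[:i, :j]."""
    return np.pad(mask.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def box_count(mask, s):
    """Number of s-by-s boxes containing at least one set pixel, each box summed in
    O(1) from the integral image instead of O(s^2) from the raw pixels."""
    ii = integral_image(mask.astype(np.int64))
    n = mask.shape[0] // s
    count = 0
    for i in range(n):
        for j in range(n):
            total = (ii[(i + 1) * s, (j + 1) * s] - ii[i * s, (j + 1) * s]
                     - ii[(i + 1) * s, j * s] + ii[i * s, j * s])
            count += int(total > 0)
    return count

def scale_dependent_dimension(mask, s1, s2):
    """Box-counting dimension restricted to the scale range [s1, s2]."""
    n1, n2 = box_count(mask, s1), box_count(mask, s2)
    return (np.log(n1) - np.log(n2)) / (np.log(s2) - np.log(s1))

mask = np.ones((64, 64), dtype=bool)          # a filled square: dimension 2 at every scale
print(scale_dependent_dimension(mask, 2, 8))  # -> 2.0
```

Treating the dimension as a function of the scale pair rather than a single global slope is what lets nonfractal behaviour over large scale ranges be characterized.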
Scale Free Reduced Rank Image Analysis.
ERIC Educational Resources Information Center
Horst, Paul
In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent, spontaneous emission, as tests of the shot noise algorithm's correctness.
2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2001-01-01
A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigenvalue limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1 to 17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.
Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi
1996-01-01
An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
Petrov, G. M.; Davis, J. [Naval Research Laboratory, Plasma Physics Division, 4555 Overlook Ave. SW, Washington, DC 20375 (United States)
2011-07-15
An implicit multi-dimensional particle-in-cell (PIC) code is developed to study the interaction of ultrashort pulse lasers with matter. The algorithm is based on current density decomposition and is only marginally more complicated compared to explicit PIC codes, but it completely eliminates grid heating and possesses good energy conserving properties with relaxed time step and grid resolution. This is demonstrated in a test case study, in which high-energy protons are generated from a thin carbon foil at solid density using linear and circular polarizations. The grid heating rate is estimated to be 1-10 eV/ps.
A case study of nucleosynthesis in multi-dimensional supernova simulations
NASA Astrophysics Data System (ADS)
Sexton, Jack; Young, Patrick A.; Ellinger, Carola I.; Fryer, Chris; Rockefeller, Gabriel
2015-01-01
We present a case study of several multi-dimensional smoothed particle hydrodynamics simulations with large nuclear network post-processing in which the effects of asymmetries on nucleosynthesis in supernovae are assessed. The abundances and spatial distribution of the short-lived radionuclides 26Al, 41Ca, and 60Fe are evaluated along with the coproduced oxygen isotopes and the S/Si ratio, used as an observational tracer. We also examine 44Ti and 56Ni and the bulk abundances of key common elements. Particular attention is paid to the composition of the Rayleigh-Taylor Instability driven 'bullets' of material observed in young supernova remnants.
Giant Leaps and Minimal Branes in Multi-Dimensional Flux Landscapes
Adam R. Brown; Alex Dahlen
2011-09-14
There is a standard story about decay in multi-dimensional flux landscapes: that from any state, the fastest decay is to take a small step, discharging one flux unit at a time; that fluxes with the same coupling constant are interchangeable; and that states with N units of a given flux have the same decay rate as those with -N. We show that this standard story is false. The fastest decay is a giant leap that discharges many different fluxes in unison; this decay is mediated by a 'minimal' brane that wraps the internal manifold and exhibits behavior not visible in the effective theory. We discuss the implications for the cosmological constant.
Wavelet Analysis of Large Scale Structure
NASA Astrophysics Data System (ADS)
Minske, J. K.; Watkins, R.; Feldman, H.; Freese, K.
1995-12-01
The existence and behavior of structures in the luminous matter distribution is an excellent diagnostic of conditions in the early universe, so it is imperative to extract as much information as possible from the density field. We introduce a method of identifying structures using wavelet analysis, which, without smoothing and without bias, can simultaneously probe both the spatial and scale aspects of a matter distribution; we use this method to classify structures according to both size and position. After testing this technique on simulated data, we apply the method to the LP and QDOT surveys, and present a catalog of the structures in these regions. Further, in order to measure the extent of clustering in these catalogs, we must choose a statistical approach that takes advantage of the two-dimensionality of our data. We present one technique, the correlation surface, which gives us a scale-by-scale insight into the spatial distribution of luminous matter.
A multi-dimensional scale for repositioning public park and recreation services
Kaczynski, Andrew Thomas
2004-09-30
The goal of this study was to develop an instrument to assist public park and recreation agencies in successfully repositioning their offerings in order to garner increased allocations of tax dollars. To achieve this, an agency must be perceived...
Barth, Jens; Oberndorfer, Cäcilia; Pasluosta, Cristian; Schülein, Samuel; Gassner, Heiko; Reinfelder, Samuel; Kugler, Patrick; Schuldhaus, Dominik; Winkler, Jürgen; Klucken, Jochen; Eskofier, Björn M
2015-01-01
Changes in gait patterns provide important information about individuals' health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson's disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measures of 98% (recall 98%, precision 98%) for the 40 m walk tests and 97% (recall 97%, precision 97%) for the free walk tests were obtained for the three groups. Compared to conventional peak detection methods, an F-measure improvement of up to 15% was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living. PMID:25789489
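A minimal one-dimensional sketch of the subsequence DTW idea behind msDTW (toy arrays rather than inertial sensor data; the real method operates on multi-dimensional signals and adds post-processing):

```python
import numpy as np

def subsequence_dtw_costs(template, signal):
    """Accumulated-cost matrix for subsequence DTW: unlike plain DTW, the template
    may start anywhere in the signal (the first row is free), and the returned
    vector gives the best matching cost ending at each signal sample."""
    n, m = len(template), len(signal)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                             # free start point anywhere in the signal
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(template[i - 1] - signal[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, 1:]

template = np.array([0.0, 1.0, 0.0])          # a "stride"-like bump
signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.9, 0.1, 0.0])
end_costs = subsequence_dtw_costs(template, signal)
print(int(np.argmin(end_costs)))              # -> 3: the first bump ends at sample 3
```

Local minima of the end-cost vector below a threshold mark candidate stride boundaries; warping makes the match tolerant to the speed variations that defeat fixed-lag peak detection.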
NASA Astrophysics Data System (ADS)
Schaerer, Joël; Fassi, Aurora; Riboldi, Marco; Cerveri, Pietro; Baroni, Guido; Sarrut, David
2012-01-01
Real-time optical surface imaging systems offer a non-invasive way to monitor intra-fraction motion of a patient's thorax surface during radiotherapy treatments. Due to lack of point correspondence in dynamic surface acquisition, such systems cannot currently provide 3D motion tracking at specific surface landmarks, as available in optical technologies based on passive markers. We propose to apply deformable mesh registration to extract surface point trajectories from markerless optical imaging, thus yielding multi-dimensional breathing traces. The investigated approach is based on a non-rigid extension of the iterative closest point algorithm, using a locally affine regularization. The accuracy in tracking breathing motion was quantified in a group of healthy volunteers, by pair-wise registering the thoraco-abdominal surfaces acquired at three different respiratory phases using a clinically available optical system. The motion tracking accuracy proved to be maximal in the abdominal region, where breathing motion mostly occurs, with average errors of 1.09 mm. The results demonstrate the feasibility of recovering multi-dimensional breathing motion from markerless optical surface acquisitions by using the implemented deformable registration algorithm. The approach can potentially improve respiratory motion management in radiation therapy, including motion artefact reduction or tumour motion compensation by means of internal/external correlation models.
Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions
Li, Haoran; Xiong, Li; Jiang, Xiaoqian
2014-01-01
Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
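Setting aside the differential-privacy mechanism (DPCopula additionally perturbs the marginal and dependence estimates), the basic Gaussian-copula synthesis step can be sketched as follows; the toy data and parameter choices are illustrative:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
nd = NormalDist()

# Toy 2-column dataset with dependent columns (true correlation 0.8).
x = rng.normal(size=1000)
data = np.column_stack([x, 0.8 * x + 0.6 * rng.normal(size=1000)])

# 1) Map each column to uniform pseudo-observations (ranks), then to normal scores.
u = (np.argsort(np.argsort(data, axis=0), axis=0) + 0.5) / len(data)
z = np.vectorize(nd.inv_cdf)(u)

# 2) The Gaussian copula parameter is the correlation matrix of the normal scores.
R = np.corrcoef(z, rowvar=False)

# 3) Sample the copula, then map back through the empirical marginals.
z_syn = rng.multivariate_normal(np.zeros(2), R, size=1000)
u_syn = np.vectorize(nd.cdf)(z_syn)
synthetic = np.column_stack([np.quantile(data[:, j], u_syn[:, j]) for j in range(2)])
print(np.corrcoef(synthetic, rowvar=False)[0, 1])   # close to the original 0.8
```

The copula separates dependence (step 2) from the one-dimensional marginals (steps 1 and 3), which is what lets a private release perturb each piece independently.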
NASA Astrophysics Data System (ADS)
Haider, M. M.; Mamun, A. A.
2012-10-01
A rigorous theoretical investigation has been made on Zakharov-Kuznetsov (ZK) equation of ion-acoustic (IA) solitary waves (SWs) and their multi-dimensional instability in a magnetized degenerate plasma which consists of inertialess electrons, inertial ions, negatively, and positively charged stationary heavy ions. The ZK equation is derived by the reductive perturbation method, and multi-dimensional instability of these solitary structures is also studied by the small-k (long wave-length plane wave) perturbation expansion technique. The effects of the external magnetic field are found to significantly modify the basic properties of small but finite-amplitude IA SWs. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable IA SWs. The basic features (viz., amplitude, width, instability, etc.) and the underlying physics of the IA SWs, which are relevant to space and laboratory plasma situations, are briefly discussed.
Multi-Dimensional Hydrodynamic Simulations with Non-Equilibrium Radiative Cooling Calculations
NASA Astrophysics Data System (ADS)
Kwak, Kyujin
2015-01-01
In optically thin gas within the temperature range of 10^4 to a few times 10^6 K, radiative cooling due to line emission from abundant metal ions such as carbon, nitrogen, oxygen, neon, silicon, and iron can affect the gas dynamics, so it is important to calculate the cooling rates accurately while running hydrodynamic simulations. An accurate calculation should trace the detailed processes of ionization and recombination for all the relevant ions of each metal at each hydrodynamic time step, i.e., in a non-equilibrium fashion. Until now, the computational cost has delayed the implementation of this non-equilibrium cooling calculation in multi-dimensional hydrodynamic simulations, but it has become feasible thanks to rapidly growing computing power. Using the platform of the FLASH code, we have implemented the non-equilibrium radiative cooling calculation in multi-dimensional hydrodynamic simulations. Here we present the code development process and the results of some test problems.
Effects of changing scale on landscape pattern analysis: scaling relations
Jianguo Wu
2004-01-01
Landscape pattern is spatially correlated and scale-dependent. Thus, understanding landscape structure and functioning requires multiscale information, and scaling functions are the most precise and concise way of quantifying multiscale characteristics explicitly. The major objective of this study was to explore if there are any scaling relations for landscape pattern when it is measured over a range of scales (grain size
Optimizing threshold for extreme scale analysis
NASA Astrophysics Data System (ADS)
Maynard, Robert; Moreland, Kenneth; Ayachit, Utkarsh; Geveci, Berk; Ma, Kwan-Liu
2013-01-01
As the HPC community starts focusing its efforts towards exascale, it becomes clear that we are looking at machines with a billion way concurrency. Although parallel computing has been at the core of the performance gains achieved until now, scaling over 1,000 times the current concurrency can be challenging. As discussed in this paper, even the smallest memory access and synchronization overheads can cause major bottlenecks at this scale. As we develop new software and adapt existing algorithms for exascale, we need to be cognizant of such pitfalls. In this paper, we document our experience with optimizing a fairly common and parallelizable visualization algorithm, threshold of cells based on scalar values, for such highly concurrent architectures. Our experiments help us identify design patterns that can be generalized for other visualization algorithms as well. We discuss our implementation within the Dax toolkit, which is a framework for data analysis and visualization at extreme scale. The Dax toolkit employs the patterns discussed here within the framework's scaffolding to make it easier for algorithm developers to write algorithms without having to worry about such scaling issues.
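The threshold operation discussed here is commonly expressed as a map/scan/compact pattern in data-parallel visualization toolkits; a minimal serial sketch of that pattern (the Dax toolkit itself is C++, so this is only an illustration of the idea):

```python
import numpy as np

def threshold_cells(scalars, lo, hi):
    """Data-parallel-style threshold: classify each cell (map), compute output
    positions with an exclusive prefix sum (scan), then scatter the kept cells."""
    keep = (scalars >= lo) & (scalars <= hi)  # map: classify every cell independently
    offsets = np.cumsum(keep) - keep          # scan: exclusive prefix sum of the flags
    out = np.empty(int(keep.sum()), dtype=scalars.dtype)
    out[offsets[keep]] = scalars[keep]        # compact: scatter kept cells contiguously
    return out

cells = np.array([0.2, 0.9, 0.5, 0.1, 0.7])
print(threshold_cells(cells, 0.4, 0.8))       # -> [0.5 0.7]
```

Each stage is embarrassingly parallel except the scan, which has well-known parallel formulations; at billion-way concurrency it is exactly the memory traffic and synchronization of these stages that the paper profiles.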
Large-Scale Visual Data Analysis
NASA Astrophysics Data System (ADS)
Johnson, Chris
2014-04-01
Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.
Akhter, T.; Hossain, M. M.; Mamun, A. A. [Department of Physics, Jahangirnagar University, Savar, Dhaka-1342 (Bangladesh)
2012-09-15
Dust-acoustic (DA) solitary structures and their multi-dimensional instability in a magnetized dusty plasma (containing inertial negatively and positively charged dust particles, and Boltzmann electrons and ions) have been theoretically investigated by the reductive perturbation method, and the small-k perturbation expansion technique. It has been found that the basic features (polarity, speed, height, thickness, etc.) of such DA solitary structures, and their multi-dimensional instability criterion or growth rate are significantly modified by the presence of opposite polarity dust particles and external magnetic field. The implications of our results in space and laboratory dusty plasma systems have been briefly discussed.
Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, Surendra N.
1994-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only non-correlated results for nongray methods; the difference between the two relationships is discussed. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations.
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.
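The correlated narrow-band formulation above is specific to this work, but the basic Monte Carlo estimate it builds on — sampling photon free paths against an absorption coefficient — can be sketched for a gray gas. The absorption coefficient and slab geometry below are illustrative assumptions, not values from the paper:

```python
import math
import random

random.seed(0)
kappa = 0.5        # gray-gas absorption coefficient [1/m] (assumed)
L = 2.0            # slab thickness [m] (assumed)
n_photons = 100_000

# Sample exponential free paths s = -ln(u)/kappa; a photon crosses
# the slab (is transmitted) if its free path exceeds the thickness L.
escaped = sum(1 for _ in range(n_photons)
              if -math.log(random.random()) / kappa > L)
tau_mc = escaped / n_photons
tau_exact = math.exp(-kappa * L)   # Beer-Lambert transmittance
print(tau_mc, tau_exact)
```

With 10^5 samples the Monte Carlo estimate agrees with the analytical transmittance to within statistical noise, mirroring the validation-against-analytical-solutions step described in the abstract.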
Scaling the Natural World Using Dimensional Analysis
NSDL National Science Digital Library
Kass, Stephen
A unit that addresses the sheer volume of incomprehensible numbers (speed, distance, age) in the natural world, helping students to understand the scale of the world using the concepts of rates, proportions and dimensional analysis. Students learn to calculate problems such as: Measurements indicate that the continents of Europe and North America are separating (plate tectonics) at the rate of about 2 centimeters per year. If Columbus could repeat his famous voyage of 1492, about how many feet or yards farther must he travel?
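The Columbus problem above is a unit-conversion chain of the kind the unit teaches. A minimal worked sketch (the elapsed time of 520 years is an illustrative assumption; the answer scales linearly with it):

```python
CM_PER_YEAR = 2.0                  # continental drift rate from the problem
years = 2012 - 1492                # 520 years since Columbus's voyage (assumed endpoint)
drift_cm = CM_PER_YEAR * years     # 1040 cm of separation
drift_m = drift_cm / 100.0         # centimeters -> meters
drift_ft = drift_m / 0.3048        # meters -> feet
drift_yd = drift_ft / 3.0          # feet -> yards
print(f"{drift_ft:.1f} feet, {drift_yd:.1f} yards")
```

So a repeat of the 1492 voyage would be roughly 34 feet (about 11 yards) longer.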
Large-Scale Parametric Survival Analysis†
Mittal, Sushil; Madigan, David; Cheng, Jerry; Burd, Randall S.
2013-01-01
Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only a small number of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computational power have led to considerable interest in analyzing very high-dimensional data, where the number of predictor variables and the number of observations range between 10⁴ and 10⁶. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models. PMID:23625862
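The paper's solver targets parametric survival likelihoods; as a hedged illustration of the underlying cyclic coordinate descent idea, here is a sketch for the simpler L1-regularized least-squares case. All data and the penalty value are synthetic assumptions, not the paper's:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):          # cycle through coordinates
            # Partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # Soft-thresholding closed-form update for coordinate j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
true = np.zeros(10)
true[:3] = [2.0, -1.0, 0.5]         # sparse ground truth (synthetic)
y = X @ true + 0.1 * rng.standard_normal(200)
beta = lasso_cd(X, y, lam=10.0)
```

On this toy problem the solver recovers the three nonzero coefficients (slightly shrunk by the penalty) and zeros out the rest, the overfitting-avoidance behavior the abstract attributes to regularization.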
Han, Xianlin; Yang, Kui; Gross, Richard W.
2011-01-01
Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525
Ionizing shocks in argon. Part II: Transient and multi-dimensional effects
Kapper, M. G.; Cambier, J.-L. [Air Force Research Laboratory, Edwards AFB, CA 93524 (United States)
2011-06-01
We extend the computations of ionizing shocks in argon to the unsteady and multi-dimensional, using a collisional-radiative model and a single-fluid, two-temperature formulation of the conservation equations. It is shown that the fluctuations of the shock structure observed in shock-tube experiments can be reproduced by the numerical simulations and explained on the basis of the coupling of the nonlinear kinetics of the collisional-radiative model with wave propagation within the induction zone. The mechanism is analogous to instabilities of detonation waves and also produces a cellular structure commonly observed in gaseous detonations. We suggest that detailed simulations of such unsteady phenomena can yield further information for the validation of nonequilibrium kinetics.
A G-FDTD scheme for solving multi-dimensional open dissipative Gross-Pitaevskii equations
NASA Astrophysics Data System (ADS)
Moxley, Frederick Ira; Byrnes, Tim; Ma, Baoling; Yan, Yun; Dai, Weizhong
2015-02-01
Behaviors of dark soliton propagation, collision, and vortex formation in the context of a non-equilibrium condensate are interesting to study. This can be achieved by solving open dissipative Gross-Pitaevskii equations (dGPEs) in multiple dimensions, which are a generalization of the standard Gross-Pitaevskii equation that includes effects of the condensate gain and loss. In this article, we present a generalized finite-difference time-domain (G-FDTD) scheme, which is explicit, stable, and permits an accurate solution with simple computation, for solving the multi-dimensional dGPE. The scheme is tested by solving a steady state problem in the non-equilibrium condensate. Moreover, it is shown that the stability condition for the scheme offers a more relaxed time step restriction than the popular pseudo-spectral method. The G-FDTD scheme is then employed to simulate the dark soliton propagation, collision, and the formation of vortex-antivortex pairs.
Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests
NASA Astrophysics Data System (ADS)
Niles, A. M.; Forbes, J. W.; Tarver, C. M.; Chidester, S. K.; Garcia, F.; Greenwood, D. W.; Garza, R. G.
2001-06-01
The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 350 μs after projectile impact, creating 0.5-0.6 kb peak shocks centered in PBX 9501 explosives discs. Steven Test calculations based on ignition and growth criteria predict low level reactions occurring at 335 μs, which agrees well with experimental data. Additional gauged experiments simulating the Steven Test have been performed and will be discussed. * This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
Measurement of Low Level Explosives Reaction in Gauged Multi-dimensional Steven Impact Tests
NASA Astrophysics Data System (ADS)
Niles, A. M.; Garcia, F.; Greenwood, D. W.; Forbes, J. W.; Tarver, C. M.; Chidester, S. K.; Garza, R. G.; Swizter, L. L.
2002-07-01
The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and also be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200-540 μs after projectile impact, creating 0.39-2.00 kb peak shocks centered in PBX 9501 explosives discs and a 0.60 kb peak shock in a LX-04 disk. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). Modeling results are presented and compared to experimental data.
Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai
2014-01-01
RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of the reduction of experimental time as compared to data collection via two separate experiments but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of sequential 1H data acquisition procedure in the study of RNA is also demonstrated here. PMID:24671105
Multi-dimensional instability of multi-ion acoustic solitary waves in a degenerate magnetized plasma
NASA Astrophysics Data System (ADS)
Akter, S.; Haider, M. M.; Duha, S. S.; Salahuddin, M.; Mamun, A. A.
2013-07-01
The multi-dimensional instability of obliquely propagating multi-ion acoustic (MIA) solitary structures was studied theoretically by the small-k (long wavelength plane wave) perturbation expansion technique in an ultra-relativistic degenerate magnetized plasma, which consists of inertialess electrons, inertial ions and stationary arbitrarily charged heavy ions. The Zakharov-Kuznetsov equation is derived by the reductive perturbation method and its solitary wave solution is analyzed. The basic properties of small but finite-amplitude MIA solitary waves have been modified significantly by the combined effects of the degenerate electron number density, heavy ion number density, external magnetic field and obliqueness. The underlying physics of the MIA solitary waves, which are relevant to space plasma situations, and the basic features, such as amplitude, width and growth rate, are briefly discussed.
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
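The third- and fifth-order Central WENO constructions are beyond a short sketch, but the first-order Lax-Friedrichs scheme they improve upon illustrates the Godunov-type central approach to Hamilton-Jacobi equations. The Hamiltonian H(p) = p and the grid parameters below are illustrative assumptions, not the paper's test cases:

```python
import numpy as np

def lax_friedrichs_hj(phi, H, alpha, dx, dt, n_steps):
    """First-order Lax-Friedrichs scheme for phi_t + H(phi_x) = 0
    with periodic boundaries; alpha bounds |H'| for monotonicity."""
    for _ in range(n_steps):
        p = (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)  # central phi_x
        visc = np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)  # numerical dissipation
        phi = phi - dt * H(p) + 0.5 * alpha * (dt / dx) * visc
    return phi

N = 400
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx                      # CFL condition with alpha = 1
n_steps = 64
phi = lax_friedrichs_hj(np.sin(x), lambda p: p, alpha=1.0,
                        dx=dx, dt=dt, n_steps=n_steps)
exact = np.sin(x - n_steps * dt)   # H(p) = p simply advects the solution
err = np.abs(phi - exact).max()
```

For H(p) = p the exact solution is a translated profile, so the scheme's error can be checked directly; the higher-order CWENO schemes of the paper reduce this first-order error dramatically on the same grids.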
Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests
Niles, A M; Garcia, F; Greenwood, D W; Forbes, J W; Tarver, C M; Chidester, S K; Garza, R G; Swizter, L L
2001-05-31
The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and also be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200-540 μs after projectile impact, creating 0.39-2.00 kb peak shocks centered in PBX 9501 explosives discs and a 0.60 kb peak shock in a LX-04 disk. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). Modeling results are presented and compared to experimental data.
Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip
2013-01-29
A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
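The claim describes dimension-ordered routing toward a root node; the following is a minimal sketch of that idea. The function name and mesh semantics (no wraparound, one coordinate changes by 1 per hop) are assumptions for illustration, not the patent's implementation:

```python
def route_to_root(start, root):
    """Dimension-ordered routing: move the packet along one dimension
    until its coordinate matches the root's in that dimension, then
    switch to the next dimension, as in the claimed method."""
    pos = list(start)
    path = [tuple(pos)]
    for d in range(len(pos)):
        while pos[d] != root[d]:
            pos[d] += 1 if root[d] > pos[d] else -1
            path.append(tuple(pos))
    return path

path = route_to_root((1, 4), (3, 1))
print(path)
```

Starting at (1, 4), the packet first advances along dimension 0 to reach the root's coordinate 3, then along dimension 1 down to 1, arriving at the root in a Manhattan-distance number of hops.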
Multi-dimensional fiber-optic radiation sensor for ocular proton therapy dosimetry
NASA Astrophysics Data System (ADS)
Jang, K. W.; Yoo, W. J.; Moon, J.; Han, K. T.; Park, B. G.; Shin, D.; Park, S.-Y.; Lee, B.
2012-12-01
In this study, we fabricated a multi-dimensional fiber-optic radiation sensor, which consists of organic scintillators, plastic optical fibers and a water phantom with a polymethyl methacrylate structure, for ocular proton therapy dosimetry. For sensor characterization, we measured the spread-out Bragg peak of a 120 MeV proton beam using a one-dimensional sensor array of 30 fiber-optic radiation sensors with a 1.5 mm interval. A uniform region of the spread-out Bragg peak was obtained from 20 to 25 mm depth in the phantom. In addition, the Bragg peak of a 109 MeV proton beam was measured at a depth of 11.5 mm in the phantom using a two-dimensional 10×3 sensor array with a 0.5 mm interval.
Marzbanrad, Faezeh; Khandoker, Ahsan H; Endo, Miyuki; Kimura, Yoshitaka; Palaniswami, Marimuthu
2014-08-01
Fetal cardiac assessment techniques aim to identify fetuses at risk of intrauterine compromise or death. Evaluation of electromechanical coupling, a fundamental part of fetal heart physiology, provides valuable information about fetal wellbeing during pregnancy. It is based on the opening and closing times of the cardiac valves and the onset of the QRS complex of the fetal electrocardiogram (fECG). The focus of this paper is the automated identification of fetal cardiac valve opening and closing from the Doppler ultrasound signal, with fECG as a reference. To this aim, a novel combination of Empirical Mode Decomposition (EMD) and multi-dimensional Hidden Markov Models (MD-HMM) was employed, which provided beat-to-beat estimation of cardiac valve event timings with improved precision (82.9%) compared to the one-dimensional HMM (77.4%) and hybrid HMM-Support Vector Machine (SVM) (79.8%) approaches. PMID:25570346
Multi-Dimensional, Non-Contact Metrology using Trilateration and High Resolution FMCW Ladar
Mateo, Ana Baselga
2015-01-01
Here we propose, describe, and provide experimental proof-of-concept demonstrations of a multi-dimensional, non-contact length metrology system design based on high resolution (millimeter to sub-100 micron) frequency modulated continuous wave (FMCW) ladar and trilateration based on length measurements from multiple, optical fiber-connected transmitters. With an accurate FMCW ladar source, the trilateration based design provides 3D resolution inherently independent of stand-off range and allows self-calibration to provide flexible setup of a field system. A proof-of-concept experimental demonstration was performed using a highly-stabilized, 2 THz bandwidth chirped laser source, two emitters, and one scanning emitter/receiver providing 1D surface profiles (2D metrology) of diffuse targets. The measured coordinate precision was limited by laser speckle caused by diffuse scattering from the targets.
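The trilateration step described above can be sketched in its simplest linearized form: subtracting the first range equation from the others turns the nonlinear system into a linear least-squares problem. The anchor layout and target position below are illustrative assumptions, and real FMCW range data would carry speckle-limited noise:

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Linearized least-squares 2D trilateration from ranges to known
    anchor (emitter) positions: |x - a_i|^2 = r_i^2, minus the i = 0
    equation, gives the linear system A x = b."""
    anchors = np.asarray(anchors, float)
    r = np.asarray(ranges, float)
    x0 = anchors[0]
    A = 2 * (anchors[1:] - x0)
    b = (r[0] ** 2 - r[1:] ** 2
         + (anchors[1:] ** 2).sum(axis=1) - (x0 ** 2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # assumed emitter layout
target = np.array([3.0, 4.0])
ranges = [float(np.linalg.norm(target - np.array(a))) for a in anchors]
pos = trilaterate_2d(anchors, ranges)
print(pos)
```

With exact ranges the recovered position matches the target; in the measurement system, range errors propagate through this geometry, which is why the design's 3D resolution is set by ladar range resolution rather than stand-off distance.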
Two-dimensional Core-collapse Supernova Models with Multi-dimensional Transport
NASA Astrophysics Data System (ADS)
Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun
2015-02-01
We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.
Deng, Weiran; Yang, Cungeng; Stenger, V. Andrew
2010-01-01
Multi-dimensional RF pulses are of current interest due to their promise for improving high field imaging as well as for optimizing parallel transmission methods. One major drawback is that the computation time of numerically designed multi-dimensional RF pulses increases rapidly with their resolution and number of transmitters. This is critical because the construction of multi-dimensional RF pulses often needs to be in real time. The use of graphics processing units for computations is a recent approach for accelerating image reconstruction applications. We propose the use of graphics processing units for the design of multi-dimensional RF pulses including the utilization of parallel transmitters. Using a desktop computer with four NVIDIA Tesla C1060 computing processors, we found acceleration factors on the order of twenty for standard eight-transmitter 2D spiral RF pulses with a 64 × 64 excitation resolution and a ten-microsecond dwell time. We also show that even greater acceleration factors can be achieved for more complex RF pulses. PMID:21264929
Sussex, University of
The multi-dimensional additionality of innovation policies: a multi-level application to Italy and Spain at the national and regional levels. An empirical application is carried out for Italy and Spain, involving different variables in Spain (product innovations) and in Italy (process innovations). Overall, only …
N. AHMAD; DAVID P. BACON; MARY S. HALL; ANANTHAKRISHNA SARMA
Twenty years ago, the multi-dimensional, positive definite, advection transport algorithm was introduced by Smolarkiewicz. Over the two decades since, it has been applied countless times to numerous problems, though almost always on rectilinear grids. One of the few exceptions is the Operational Multiscale Environment model with Grid Adaptivity (OMEGA), an atmospheric simulation system originally designed to simulate atmospheric dispersion in …
ERIC Educational Resources Information Center
Basantia, Tapan Kumar; Panda, B. N.; Sahoo, Dukhabandhu
2012-01-01
Cognitive development of the learners is the prime task of every stage of our school education, and its importance, especially at the elementary stage, is worth mentioning. The present study investigated the effectiveness of a new and innovative strategy (i.e., MAI (multi-dimensional activity based integrated approach)) for the development of…
Clement, Prabhakar
Multi-dimensional, multi-species transport problems coupled with a first-order reaction network, including bio-reactive or radioactive transport problems involving a sequential first-order decay reaction chain with distinct retardation factors (Cristhian R. Quezada, T. Prabhakar Clement).
NASA Astrophysics Data System (ADS)
Ono, Junichi; Ando, Koji
2012-11-01
A semiquantal (SQ) molecular dynamics (MD) simulation method based on an extended Hamiltonian formulation has been developed using multi-dimensional thawed Gaussian wave packets (WPs), and applied to an analysis of hydrogen-bond (H-bond) dynamics in liquid water. A set of Hamilton's equations of motion in an extended phase space, which includes variance-covariance matrix elements as auxiliary coordinates representing anisotropic delocalization of the WPs, is derived from the time-dependent variational principle. The present theory allows us to perform real-time and real-space SQMD simulations and analyze nuclear quantum effects on dynamics in large molecular systems in terms of anisotropic fluctuations of the WPs. Introducing the Liouville operator formalism in the extended phase space, we have also developed an explicit symplectic algorithm for the numerical integration, which can provide greater stability in the long-time SQMD simulations. The application of the present theory to H-bond dynamics in liquid water is carried out under a single-particle approximation in which the variance-covariance matrix and the corresponding canonically conjugate matrix are reduced to block-diagonal structures by neglecting the interparticle correlations. As a result, it is found that the anisotropy of the WPs is indispensable for reproducing the disordered H-bond network compared to the classical counterpart with the use of the potential model providing competing quantum effects between intra- and intermolecular zero-point fluctuations. In addition, the significant WP delocalization along the out-of-plane direction of the jumping hydrogen atom associated with the concerted breaking and forming of H-bonds has been detected in the H-bond exchange mechanism. The relevance of the dynamical WP broadening to the relaxation of H-bond number fluctuations has also been discussed. 
The present SQ method provides a novel framework for investigating nuclear quantum dynamics in many-body molecular systems in which the local anisotropic fluctuations of nuclear WPs play an essential role.
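The explicit symplectic integration mentioned above is specific to the extended SQMD Hamiltonian, but its essential behavior — bounded energy error over long trajectories — can be illustrated with the simplest symplectic scheme, velocity-Verlet leapfrog, on a harmonic oscillator. This is a toy stand-in under stated assumptions, not the authors' algorithm:

```python
def leapfrog(q, p, force, dt, n_steps, m=1.0):
    """Velocity-Verlet (leapfrog): an explicit symplectic integrator.
    Half-kick, drift, half-kick per step."""
    for _ in range(n_steps):
        p += 0.5 * dt * force(q)
        q += dt * p / m
        p += 0.5 * dt * force(q)
    return q, p

# Harmonic oscillator H = p^2/2 + q^2/2, started at (q, p) = (1, 0),
# so the exact energy is 0.5 for all time.
q, p = leapfrog(1.0, 0.0, lambda q: -q, dt=0.01, n_steps=100_000)
energy = 0.5 * (p * p + q * q)
print(energy)
```

Even after 10^5 steps the energy stays within O(dt^2) of its initial value rather than drifting, which is the stability property motivating the symplectic construction in the SQMD simulations.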
Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.
2008-01-01
Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10⁻⁴ samples in range and 2.2 × 10⁻³ samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10⁻³ samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz.
While our validation of the algorithm was performed using ultrasound data, MUSE is broadly applicable across imaging applications. PMID:18807190
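MUSE's spline-based matching function is beyond a short sketch, but the core task — recovering a sub-sample delay directly from sampled data — can be illustrated with the simpler parabolic interpolation of a cross-correlation peak. The Gaussian test pulse and its 3.3-sample shift are illustrative assumptions; this is a stand-in for, not a reproduction of, the spline estimator:

```python
import numpy as np

def subsample_delay(ref, sig):
    """Estimate the delay of `sig` relative to `ref` with sub-sample
    precision by fitting a parabola through the cross-correlation peak.
    (A simple stand-in for MUSE's spline-based matching function.)"""
    xc = np.correlate(sig, ref, mode="full")
    k = int(np.argmax(xc))
    y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
    # Vertex offset of the parabola through the three samples at the peak
    frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (k - (len(ref) - 1)) + frac

n = np.arange(256.0)
ref = np.exp(-((n - 100.0) / 8.0) ** 2)   # Gaussian test pulse (assumed)
sig = np.exp(-((n - 103.3) / 8.0) ** 2)   # same pulse, delayed 3.3 samples
est = subsample_delay(ref, sig)
```

The integer argmax alone would report a delay of 3 samples; the interpolation recovers the fractional part, the same goal MUSE pursues with far lower bias via its analytical spline matching function.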
Upscaling river biomass using dimensional analysis and hydrogeomorphic scaling
Foufoula-Georgiou, Efi
Elizabeth A. Barnes and coauthors propose a methodology for upscaling biomass in a river using a combination of dimensional analysis and hydro-geomorphologic scaling laws. They first demonstrate the use of dimensional analysis for determining …
Learning Hierarchical Bayesian Networks for Large-Scale Data Analysis
Kyu-Baek Hwang; Byoung-hee Kim; Byoung-tak Zhang
2006-01-01
Bayesian network learning is a useful tool for exploratory data analysis. However, applying Bayesian networks to the analysis of large-scale data, consisting of thousands of attributes, is not straightforward because of the heavy computational burden in learning and visualization. In this paper, we propose a novel method for large-scale data analysis based on hierarchical compression of information and …
Preparation of 13C and 15N labelled RNAs for heteronuclear multi-dimensional NMR studies.
Nikonowicz, E P; Sirr, A; Legault, P; Jucker, F M; Baer, L M; Pardi, A
1992-01-01
A procedure is described for the efficient preparation of isotopically enriched RNAs of defined sequence. Uniformly labelled nucleotide 5'-triphosphates (NTPs) were prepared from E.coli grown on 13C and/or 15N isotopically enriched media. These procedures routinely yield 180 μmoles of labelled NTPs per gram of 13C enriched glucose. The labelled NTPs were then used to synthesize RNA oligomers by in vitro transcription. Several 13C and/or 15N labelled RNAs have been synthesized for the sequence r(GGCGCUUGCGUC). Under conditions of high salt or low salt, this RNA forms either a symmetrical duplex with two U.U base pairs or a hairpin containing a CUUG loop, respectively. These procedures were used to synthesize uniformly labelled RNAs and an RNA labelled only on the G and C residues. The ability to generate milligram quantities of isotopically labelled RNAs allows application of multi-dimensional heteronuclear magnetic resonance experiments that enormously simplify the resonance assignment and solution structure determination of RNAs. Examples of several such heteronuclear NMR experiments are shown. PMID:1383927
NASA Astrophysics Data System (ADS)
McKean, J.; Tonina, D.; Bohn, C.; Wright, C. W.
2014-03-01
New remote sensing technologies and improved computer performance now allow numerical flow modeling over large stream domains. However, there has been limited testing of whether channel topography can be remotely mapped with the accuracy necessary for such modeling. We assessed the ability of the Experimental Advanced Airborne Research Lidar to support a multi-dimensional fluid dynamics model of a small mountain stream. Random point elevation errors were introduced into the lidar point cloud, and predictions of water surface elevation, velocity, bed shear stress, and bed mobility were compared to those made without the point errors. We also compared flow model predictions using the lidar bathymetry with those made using a total station channel field survey. Lidar errors caused < 1 cm changes in the modeled water surface elevations. Effects of the point errors on other flow characteristics varied with both the magnitude of error and the local spatial density of lidar data. Shear stress errors were greatest where flow was naturally shallow and fast, and lidar errors caused the greatest changes in flow cross-sectional area. The majority of the stress errors were less than ± 5 Pa. At near bankfull flow, the predicted mobility state of the median grain size changed over ~1.3% of the model domain as a result of lidar elevation errors, and ~3% changed mobility in the comparison of lidar and ground-surveyed topography. In this riverscape, results suggest that an airborne bathymetric lidar can map channel topography with sufficient accuracy to support a numerical flow model.
Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás
2014-01-01
This study aims to verify the effect produced on the motivation of physical education students by a multi-dimensional programme in dance teaching sessions. This programme incorporates the application of teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th year Secondary Education students - control and experimental -, delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher in the experimental group to support these needs. An initial and final measurement was taken in both groups, and the results revealed that the students from the experimental group showed an increase in the perception of autonomy and, in general, in the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. We therefore highlight the programme's usefulness in increasing the students' motivation towards this content, which is so complicated for teachers of this area to develop. PMID:24454831
Vogt, Stefan; Ralle, Martina
2012-01-01
Copper plays an important role in numerous biological processes across all living systems, predominantly because of its versatile redox behavior. Cellular copper homeostasis is tightly regulated and disturbances lead to severe disorders such as Wilson disease (WD) and Menkes disease. Age-related changes of copper metabolism have been implicated in other neurodegenerative disorders such as Alzheimer's disease (AD). The role of copper in these diseases has been the topic of mostly bioinorganic research efforts for more than a decade; metal-protein interactions have been characterized and cellular copper pathways have been described. Despite these efforts, crucial aspects of how copper is associated with AD, for example, are still only poorly understood. To take metal-related disease research to the next level, emerging multi-dimensional imaging techniques are now revealing the copper metallome as the basis to better understand disease mechanisms. This review will describe how recent advances in X-ray fluorescence microscopy and fluorescent copper probes have started to contribute to this field, specifically to WD and AD. It furthermore provides an overview of current developments and future applications in X-ray microscopic methodologies. PMID:23079951
Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping
2000-01-01
We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
From electron energy-loss spectroscopy to multi-dimensional and multi-signal electron microscopy.
Colliex, Christian
2011-01-01
This review intends to illustrate how electron energy-loss spectroscopy (EELS) techniques in the electron microscope column have evolved over the past 60 years. Beginning as a physicist's tool to measure basic excitations in solid thin foils, EELS techniques have gradually become essential for analytical purposes, nowadays pushed to the identification of individual atoms and their bonding states. The intimate combination of high-performing techniques with quite efficient computational tools for data processing and ab initio modeling has opened the way to a broad range of novel imaging modes with potential impact on many different fields. The combination of Ångström-level spatial resolution with an energy resolution down to a few tenths of an electron volt in the core-loss spectral domain has paved the way to atomic-resolved elemental and bonding maps across interfaces and nanostructures. In the low-energy range, improved energy resolution has been quite efficient in recording surface plasmon maps and, from them, electromagnetic maps across the visible electron microscopy (EM) domain, thus bringing a new view to nanophotonics studies. Recently, spectrum imaging of the emitted photons under the primary electron beam and the spectacular introduction of time-resolved techniques down to the femtosecond time domain have become innovative keys for the development and use of a brand new multi-dimensional and multi-signal electron microscopy. PMID:21844587
Uncertainty analysis of basin scale compaction processes
NASA Astrophysics Data System (ADS)
Formaggia, L.; Guadagnini, A.; Imperiali, I.; Lever, V.; Porta, G.; Riva, M.; Scotti, A.; Tamellini, L.
2012-04-01
The dynamic evolution of porosity distribution in sedimentary basins has been typically interpreted by assuming that mechanical compaction is the dominant process. While mechanical compaction is particularly relevant during the early burial phase and has been often assumed to play a key role in the diagenesis even at the largest depths, temperature-activated geochemical compaction has been recognized as a major component driving the evolution of the basin characteristics and of the compaction process at least within the deepest layers. As a consequence, modeling basin evolution requires solving a coupled system involving partial differential equations and algebraic relationships between state variables. In this framework, quartz cementation and smectite-illite transformation are recognized to be the most relevant processes affecting sedimentary basin evolution. Spatial and temporal scales of basin evolution are intrinsically very large and it is often difficult to provide reliable estimates for the parameters included in the selected geochemical and compaction models. In this study we focus on the effects that the coupling between the quartz cementation process and mechanical compaction have on the distribution of porosity, pressure and temperature in the evolving sedimentary basin in the presence of uncertain model parameters and boundary conditions. We quantify uncertainty associated with the system state variables by means of a Global Sensitivity Analysis (GSA). The methodology is framed within the context of a generalized Polynomial Chaos Expansion (GPCE) approximation of a basin-scale evolution scenario. Sparse grid sampling techniques are employed to improve the computational efficiency of the methodology. The theoretical and computational framework adopted allows an efficient computation of the variance-based Sobol indices, exploiting a polynomial interpolation over the sparse grid collocation points.
An additional advantage of the GPCE is that it yields a surrogate model of the system behavior. This can be exploited within the context of uncertainty propagation studies, e.g., based on numerical Monte Carlo simulations. It allows observing the space-time evolution of the probability density distribution (and its statistical moments) of target problem variables. The approach is illustrated through a one-dimensional example involving the process of quartz cementation in sandstones and the resulting effects on the dynamics of porosity, temperature and pressure.
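The variance-based Sobol indices at the heart of the GSA above can be illustrated with a plain Monte Carlo (Saltelli-style) estimator on a toy two-parameter model. Note this is a sketch only: the study computes the indices far more efficiently from the sparse-grid gPCE surrogate, and the model function here is entirely hypothetical:

```python
import numpy as np

def model(theta):
    """Toy stand-in for the basin simulator: a scalar response (say, porosity)
    to two uncertain inputs (e.g., a reaction-rate and a compaction parameter)."""
    a, b = theta[..., 0], theta[..., 1]
    return np.sin(a) + 0.3 * b**2

def first_order_sobol(f, dim, n, rng):
    """Saltelli-type Monte Carlo estimate of first-order Sobol indices:
    S_i = E[f(B) * (f(A with column i from B) - f(A))] / Var[f]."""
    A = rng.uniform(-1, 1, (n, dim))
    B = rng.uniform(-1, 1, (n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    s = []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only input i
        s.append(np.mean(fB * (f(ABi) - fA)) / var)
    return np.array(s)

rng = np.random.default_rng(1)
S = first_order_sobol(model, dim=2, n=200_000, rng=rng)
print(S)   # fraction of output variance attributable to each input alone
```

For this additive toy model the two first-order indices sum to nearly one; in the coupled basin model, interaction terms would make them sum to less than one.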
NASA Astrophysics Data System (ADS)
EL-Shamy, E. F.
2014-08-01
The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) are investigated in magnetoplasmas consisting of electrons, positrons, and ions, with high-energy (superthermal) electrons and positrons. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and the growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. Clearly, the study of our model under consideration is helpful for explaining the propagation and the instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.
Hyeong Min Kim; Thomas Kramer
2006-01-01
We investigate how need for cognition and cognitive effort associated with multi-dimensional pricing combine to influence demand. Experiment 1 shows that individuals with low (vs. high) need for cognition are less likely to purchase products that list price and relative discount separately. The direction of the effect of need for cognition on demand is found to depend on whether consumers'
Scale Analysis of Deep and Shallow Convection in the Atmosphere
Yoshimitsu Ogura; Norman A. Phillips
1962-01-01
The approximate equations of motion derived by Batchelor in 1953 are derived by a formal scale analysis, with the assumption that the percentage range in potential temperature is small and that the time scale is set by the Brunt-Väisälä frequency. Acoustic waves are then absent. If the vertical scale is small compared to the depth of an adiabatic atmosphere, the
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method for estimating scaling behavior is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.
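The core idea of replacing eyeball crossover detection with a regression fit can be sketched as a continuous two-segment fit to the log-log fluctuation function, scanning candidate breakpoints. This is only an illustration of the principle; the paper's scaling-identification regression model additionally provides formal inference (confidence intervals, significance tests) on top of such a fit, and the synthetic data below are invented:

```python
import numpy as np

def detect_crossover(log_s, log_f):
    """Fit a continuous two-segment linear model to log F(s) vs log s and
    return the breakpoint that minimises the residual sum of squares."""
    best = (np.inf, None)
    for k in range(2, len(log_s) - 2):            # candidate breakpoints
        s0 = log_s[k]
        # design matrix: intercept, slope, extra slope activated after s0
        X = np.column_stack([
            np.ones_like(log_s),
            log_s,
            np.clip(log_s - s0, 0.0, None),
        ])
        beta, *_ = np.linalg.lstsq(X, log_f, rcond=None)
        resid = log_f - X @ beta
        best = min(best, (float(resid @ resid), s0))
    return best[1]

# synthetic fluctuation function: Hurst-like slope 0.5 below the crossover
# at log s = 3.0, slope 1.0 above it
log_s = np.linspace(1.0, 5.0, 41)
log_f = 0.5 * log_s + 0.5 * np.clip(log_s - 3.0, 0.0, None)
bp = detect_crossover(log_s, log_f)
print(bp)
```

On noisy data one would compare the two-segment fit against one- and three-segment alternatives with an information criterion, which is where the statistical machinery of the paper comes in.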
Factor Analysis of the Interpersonal Trust Scale
ERIC Educational Resources Information Center
Wright, Thomas L.; Tedeschi, Richard G.
1975-01-01
Separate factor analyses of four large samples of respondents to Rotter's Interpersonal Trust Scale produced three orthogonal factors that cross-validated over all samples. Results indicate there may be relatively independent dimensions of trust and factor scores may yield greater prediction than the general scale in many research applications.…
Markov Chain Analysis for Large-Scale Grid Systems
Christopher Dabrowski and Fern Hunt (Mathematical and Computational Sciences Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8530), NISTIR 7566. Abstract: In large
Multidimensional Scaling Versus Components Analysis of Test Intercorrelations
Mark L. Davison
1985-01-01
The relation between coordinate estimates in components analysis and multidimensional scaling (MDS) is considered. Algebraic relations between metric MDS and components analysis are reviewed. Three small Monte Carlo studies suggest that the same relations usually, although not universally, characterize components and nonmetric MDS analyses of correlation matrices. Although the relation between components and scaling solutions is generally complex, in one
Dynamical scaling analysis of plant callus growth
NASA Astrophysics Data System (ADS)
Galeano, J.; Buceta, J.; Juarez, K.; Pumariño, B.; de la Torre, J.; Iriondo, J. M.
2003-07-01
We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in time. We expect our work to be relevant for the understanding and characterization of other systems that undergo growth due to cell division and differentiation, such as, for example, tumor development.
NASA Astrophysics Data System (ADS)
Alizadeh, M.; Schuh, H.; Schmidt, M. G.
2012-12-01
In the last decades the Global Navigation Satellite System (GNSS) has turned into a promising tool for probing the ionosphere. The classical input data for developing Global Ionosphere Maps (GIM) are obtained from dual-frequency GNSS observations. Simultaneous observations of GNSS code or carrier phase at each frequency are used to form a geometry-free linear combination which contains only the ionospheric refraction term and the differential inter-frequency hardware delays. To relate the ionospheric observable to the electron density, a model is used that represents an altitude-dependent distribution of the electron density. This study aims at developing a global multi-dimensional model of the electron density using simulated GNSS observations from about 150 International GNSS Service (IGS) ground stations. Because IGS stations are inhomogeneously distributed around the world and the accuracy and reliability of the developed models are considerably lower in areas not well covered with IGS ground stations, the International Reference Ionosphere (IRI) model has been used as a background model. The correction term is estimated by applying a spherical harmonics expansion to the GNSS ionospheric observable. Within this study this observable is related to the electron density using different functions for the bottom-side and top-side ionosphere. The bottom-side ionosphere is represented by an alpha-Chapman function and the top-side ionosphere is represented using the newly proposed Vary-Chap function. [Figure: Maximum electron density, IRI background model (elec/m3), day 202, 2010, 0 UT; Height of maximum electron density, IRI background model (km), day 202, 2010, 0 UT.]
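The geometry-free combination described above can be made concrete with the textbook code-observation relation, in which the ionospheric group delay at frequency f is 40.3·STEC/f², so differencing the two frequencies cancels geometry, clocks and troposphere. The sketch below ignores the inter-frequency hardware delays the abstract mentions, and the numeric values are synthetic:

```python
# Geometry-free (L4) combination of dual-frequency GPS code observations.
F1 = 1575.42e6  # GPS L1 carrier frequency (Hz)
F2 = 1227.60e6  # GPS L2 carrier frequency (Hz)

def stec_from_code(p1, p2):
    """Slant total electron content (electrons/m^2) from the code
    geometry-free combination P2 - P1. Hardware biases are ignored here."""
    return (p2 - p1) * F1**2 * F2**2 / (40.3 * (F1**2 - F2**2))

# synthetic check: build P1/P2 from a known STEC and recover it
stec_true = 5.0e17            # about 50 TECU
rho = 22_000_000.0            # geometric range + clock terms (m); cancels
p1 = rho + 40.3 * stec_true / F1**2
p2 = rho + 40.3 * stec_true / F2**2
stec_est = stec_from_code(p1, p2)
print(stec_est / 1e16)        # in TEC units
```

In practice the differential inter-frequency biases do not cancel and must be estimated alongside the ionospheric model parameters, which is exactly why they appear explicitly in the abstract's observable.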
ALEGRA-HEDP Multi-Dimensional Simulations of Z-pinch Related Physics
NASA Astrophysics Data System (ADS)
Garasi, Christopher J.
2003-10-01
The marriage of experimental diagnostics and computer simulations continues to enhance our understanding of the physics and dynamics associated with current-driven wire arrays. Early models that assumed the formation of an unstable, cylindrical shell of plasma due to wire merger have been replaced with a more complex picture involving wire material ablating non-uniformly along the wires, creating plasma pre-fill interior to the array before the bulk of the array collapses due to magnetic forces. Non-uniform wire ablation leads to wire breakup, which provides a mechanism for some wire material to be left behind as the bulk of the array stagnates onto the pre-fill. Once the bulk of the material has stagnated, electrical current can then shift back to the material left behind and cause it to stagnate onto the already collapsed bulk array mass. These complex effects impact the total radiation output from the wire array, which is very important to the application of that radiation to inertial confinement fusion. A detailed understanding of the formation and evolution of wire array perturbations is needed, especially for those which are three-dimensional in nature. Sandia National Laboratories has developed a multi-physics research code tailored to simulate high energy density physics (HEDP) environments. ALEGRA-HEDP has begun to simulate the evolution of wire arrays and has produced the highest fidelity, two-dimensional simulations of wire-array precursor ablation to date. Our three-dimensional code capability now provides us with the ability to solve for the magnetic field and current density distribution associated with the wire array and the evolution of three-dimensional effects seen experimentally. The insight obtained from these multi-dimensional simulations of wire arrays will be presented and specific simulations will be compared to experimental data.
Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features. 
PMID:25505405
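The consensus-among-nearest-neighbours idea in the abstract above can be sketched with flat probability distributions in place of the paper's label Bayesian networks. Everything here is a simplified stand-in: the morphometric features, the label set, and the plain k-NN averaging are illustrative assumptions, not the authors' method:

```python
import numpy as np

def knn_label_distribution(x_query, X, P, k=3):
    """Predict a probabilistic label for x_query by averaging the label
    distributions of its k nearest neighbours in morphometric space.
    (The paper instead forms a probabilistic consensus over label
    Bayesian networks, which also capture dependencies between labels.)"""
    d = np.linalg.norm(X - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    return P[nearest].mean(axis=0)

# toy data: 5 neurons, 2 morphometric features, 3 candidate type labels;
# each row of P is one neuron's distribution over neuroscientists' votes
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0]])
P = np.array([[0.9, 0.1, 0.0],
              [0.8, 0.2, 0.0],
              [0.7, 0.2, 0.1],
              [0.0, 0.1, 0.9],
              [0.1, 0.0, 0.9]])
p_hat = knn_label_distribution(np.array([0.05, 0.05]), X, P, k=3)
print(p_hat)                 # probabilistic label
print(int(p_hat.argmax()))   # crisp prediction extracted from it
```

Extracting the argmax from the averaged distribution corresponds to the "crisp predictions" the abstract evaluates against related work.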
Multi-dimensional Features of Neutrino Transfer in Core-collapse Supernovae
NASA Astrophysics Data System (ADS)
Sumiyoshi, K.; Takiwaki, T.; Matsufuru, H.; Yamada, S.
2015-01-01
We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ~20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (~2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.
Ono, Junichi; Ando, Koji
2012-11-01
A semiquantal (SQ) molecular dynamics (MD) simulation method based on an extended Hamiltonian formulation has been developed using multi-dimensional thawed Gaussian wave packets (WPs), and applied to an analysis of hydrogen-bond (H-bond) dynamics in liquid water. A set of Hamilton's equations of motion in an extended phase space, which includes variance-covariance matrix elements as auxiliary coordinates representing anisotropic delocalization of the WPs, is derived from the time-dependent variational principle. The present theory allows us to perform real-time and real-space SQMD simulations and analyze nuclear quantum effects on dynamics in large molecular systems in terms of anisotropic fluctuations of the WPs. Introducing the Liouville operator formalism in the extended phase space, we have also developed an explicit symplectic algorithm for the numerical integration, which can provide greater stability in the long-time SQMD simulations. The application of the present theory to H-bond dynamics in liquid water is carried out under a single-particle approximation in which the variance-covariance matrix and the corresponding canonically conjugate matrix are reduced to block-diagonal structures by neglecting the interparticle correlations. As a result, it is found that the anisotropy of the WPs is indispensable for reproducing the disordered H-bond network compared to the classical counterpart with the use of the potential model providing competing quantum effects between intra- and intermolecular zero-point fluctuations. In addition, the significant WP delocalization along the out-of-plane direction of the jumping hydrogen atom associated with the concerted breaking and forming of H-bonds has been detected in the H-bond exchange mechanism. The relevance of the dynamical WP broadening to the relaxation of H-bond number fluctuations has also been discussed.
The present SQ method provides a novel framework for investigating nuclear quantum dynamics in many-body molecular systems in which the local anisotropic fluctuations of nuclear WPs play an essential role. PMID:23145735
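The explicit symplectic integration mentioned in the abstract can be illustrated in its simplest setting: the velocity-Verlet (leapfrog) scheme for ordinary (q, p) phase space, whose hallmark is bounded energy error over long runs. The paper's splitting integrator for the extended phase space with wave-packet widths is not reproduced here; the harmonic-oscillator test case below is an assumption for demonstration:

```python
def leapfrog(q, p, force, dt, steps):
    """Velocity-Verlet (leapfrog), the prototypical explicit symplectic
    integrator: half-kick, drift, half-kick (unit mass assumed)."""
    traj = []
    for _ in range(steps):
        p = p + 0.5 * dt * force(q)
        q = q + dt * p
        p = p + 0.5 * dt * force(q)
        traj.append((q, p))
    return q, p, traj

force = lambda q: -q            # harmonic oscillator, unit frequency
q, p, traj = leapfrog(1.0, 0.0, force, dt=0.05, steps=2000)

# for a symplectic method the energy error oscillates but does not drift
energies = [0.5 * pp**2 + 0.5 * qq**2 for qq, pp in traj]
drift = max(energies) - min(energies)
print(drift)
```

A non-symplectic scheme such as explicit Euler would show secular energy growth over the same 2000 steps, which is why symplecticity matters for the long-time SQMD simulations the abstract describes.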
Convective scale weather analysis and forecasting
NASA Technical Reports Server (NTRS)
Purdom, J. F. W.
1984-01-01
How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to frequently observe the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.
Graph OLAP: a multi-dimensional framework for graph data analysis
Chen, Chen; Yan, Xifeng; Zhu, Feida; Han, Jiawei; Yu, Philip S.
2009-01-01
of databases and data warehouse systems to handle graph databases, and bioinformatics. He investigates models and algorithms for managing and mining complex graphs and graph cube's base cuboid; without any difficulty, we can also aggregate DB, DM and IR into a broad Database
Nonlinear Analysis of Multi-Dimensional Signals: Local Adaptive Estimation of Complex
Garbe, Christoph S.
,4 , Ingo Stuke6 , Cicero Mota2,5 , Martin Böhme5 , Martin Haker5 , Tobias Schuchert3 , Hanno Scharr3 , Til Lübeck, Germany {boehme,haker,barth}@inb.uni-luebeck.de 6 University of Lübeck, Institute
Detection and analysis of multi-dimensional pulse wave based on optical coherence tomography
NASA Astrophysics Data System (ADS)
Shen, Yihui; Li, Zhifang; Li, Hui; Chen, Haiyu
2014-11-01
Pulse diagnosis is an important method of traditional Chinese medicine (TCM). Doctors diagnose the patients' physiological and pathological statuses through the palpation of the radial artery for radial artery pulse information. Optical coherence tomography (OCT) is a useful tool for medical optical research. Current conventional diagnostic devices only function as a pressure sensor to detect the pulse wave, which only partially reflects the doctor's tactile findings and loses large amounts of useful information. In this paper, the microscopic changes of the surface skin above the radial artery are studied in the form of images based on OCT. The deformation of the surface skin in a cardiac cycle, which is caused by the arterial pulse, is detected by OCT. The patient's pulse wave is calculated through image processing and is found to be in good agreement with the result obtained by a pulse analyzer. The patient's physiological and pathological statuses can thus be monitored in real time. This research provides a new method for pulse diagnosis in traditional Chinese medicine.
Multi-dimensional analysis of hdl: an approach to understanding atherogenic hdl
Johnson, Jr., Jeffery Devoyne
2009-05-15
-MS), capillary electrophoresis (CE), isoelectric focusing (IEF) and apoptosis studies involving cell cultures. It is becoming clearer that cholesterol concentrations themselves do not provide sufficient data to assess the quality of cardiovascular health. As a...
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
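The combined surrogate- and gradient-based approach described above can be illustrated with a minimal sketch: sample an expensive model at a few designs, fit a cheap polynomial surrogate, then run gradient ascent on the surrogate. The `energy_density` function and its peak location are purely hypothetical stand-ins, not the dissertation's actual cell model.

```python
import numpy as np

def energy_density(thickness_um):
    # Hypothetical stand-in for an expensive cell simulation: energy
    # density (arbitrary units) peaks at an intermediate cathode
    # thickness where capacity gains balance transport losses.
    return 10.0 - (thickness_um - 70.0) ** 2 / 500.0

# Step 1: sample the "expensive" model at a few designs.
x_samples = np.linspace(20.0, 120.0, 9)   # cathode thickness, micrometres
y_samples = energy_density(x_samples)

# Step 2: fit a cheap quadratic response surface to the samples.
coeffs = np.polyfit(x_samples, y_samples, 2)
grad_coeffs = np.polyder(coeffs)

# Step 3: gradient ascent on the surrogate, never on the expensive model.
x_opt = 40.0
for _ in range(100):
    x_opt += 50.0 * np.polyval(grad_coeffs, x_opt)
```

In a real workflow the surrogate would be refitted as new samples accumulate near the optimum; here a single fit suffices because the toy response is exactly quadratic.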
Computational methods for criticality safety analysis within the scale system
Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.
1986-01-01
The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs.
Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis.
Hong, Yoon-Seok Timothy; Rosen, Michael R; Bhamidimarri, Rao
2003-04-01
This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable nature of a real WTP, its diagnosis is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. Using component planes, detailed local relationships between the process variables, e.g., their responses under different operating conditions, as well as global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysis and diagnosis tool for understanding the system behaviour and extracting knowledge contained in the multi-dimensional data of a large-scale WTP. PMID:12600389
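A Kohonen self-organising map of the kind applied above can be sketched in a few lines: each grid node holds a weight vector, and the best-matching node and its neighbours move toward each sample so nearby nodes end up representing similar process states. The grid size, learning schedule, and the two synthetic "operating regimes" below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Kohonen SOM: rows of `data` are observations of process
    variables; returns a (rows, cols, n_vars) array of node weights."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows, cols, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1 - frac)                  # shrinking learning rate
        sigma = sigma0 * (1 - frac) + 0.1      # shrinking neighbourhood
        for x in rng.permutation(data):
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            grid_d2 = np.sum((coords - coords[bmu]) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)  # pull BMU + neighbours
    return weights

def best_matching_unit(weights, x):
    dist = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dist), dist.shape)

# Two hypothetical operating regimes (e.g. normal vs overloaded),
# three made-up process variables each -- illustrative data only.
rng = np.random.default_rng(42)
normal = rng.normal(0.0, 0.1, size=(50, 3))
overload = rng.normal(1.0, 0.1, size=(50, 3))
som = train_som(np.vstack([normal, overload]))
```

The "component planes" mentioned in the abstract are simply slices `som[:, :, i]` of the trained weights, one heat map per process variable.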
Longitudinal Network Analysis Using Multidimensional Scaling.
ERIC Educational Resources Information Center
Barnett, George A.; Palmer, Mark T.
The Galileo System, a variant of metric multidimensional scaling, is used in this paper to analyze over-time changes in social networks. The paper first discusses the theoretical necessity for the use of this procedure and the methodological problems associated with its use. It then examines the air traffic network among 31 major cities in the…
Multi-dimensional forward modeling of frequency-domain helicopter-borne electromagnetic data
NASA Astrophysics Data System (ADS)
Miensopust, M.; Siemon, B.; Börner, R.; Ansari, S.
2013-12-01
Helicopter-borne frequency-domain electromagnetic (HEM) surveys are used for fast, high-resolution, three-dimensional (3-D) resistivity mapping. Nevertheless, 3-D modeling and inversion of an entire HEM data set is in many cases impractical and, therefore, interpretation is commonly based on one-dimensional (1-D) modeling and inversion tools. Such an approach is valid for environments with horizontally layered targets and for groundwater applications, but structures of higher dimension are not recovered correctly by 1-D methods. The focus of this work is multi-dimensional forward modeling. As there is no analytic solution to verify (or falsify) the obtained numerical solutions, comparison with 1-D values as well as amongst various two-dimensional (2-D) and 3-D codes is essential. At the center of a large structure (a few hundred meters edge length), and above the background structure at some distance from the anomaly, 2-D and 3-D values should match the 1-D solution. Higher-dimensional conditions are present at the edges of the anomaly and, therefore, only a comparison of different 2-D and 3-D codes gives an indication of the reliability of the solution. The more codes agree (especially codes based on different methods and/or written by different programmers), the more reliable the obtained synthetic data set. Very simple structures, such as a conductive or resistive block embedded in a homogeneous or layered half-space without any topography and using a constant sensor height, were chosen to calculate synthetic data. For the comparison, one finite element 2-D code and numerous 3-D codes, which are based on finite difference, finite element and integral equation approaches, were applied. Preliminary results of the comparison will be shown and discussed. Additionally, challenges that arose from this comparative study will be addressed, and further steps to approach more realistic field data settings for forward modeling will be discussed.
As the driving engine of an inversion algorithm is its forward solver, applying inversion codes to HEM data is only sensible once the forward modeling results are reliable (and their limits and weaknesses are known and manageable).
Evolutionary artificial neural networks by multi-dimensional particle swarm optimization.
Kiranyaz, Serkan; Ince, Turker; Yildirim, Alper; Gabbouj, Moncef
2009-12-01
In this paper, we propose a novel technique for the automatic design of Artificial Neural Networks (ANNs) by evolving to the optimal network configuration(s) within an architecture space. It is entirely based on a multi-dimensional Particle Swarm Optimization (MD PSO) technique, which re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multi-dimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. With the proper encoding of the network configurations and parameters into particles, MD PSO can then seek the positional optimum in the error space and the dimensional optimum in the architecture space. The optimum dimension converged upon at the end of an MD PSO process corresponds to a unique ANN configuration where the network parameters (connections, weights and biases) can then be resolved from the positional optimum reached in that dimension. In addition, the proposed technique generates a ranked list of network configurations, from the best to the worst. This is indeed a crucial piece of information, indicating which potential configurations can be alternatives to the best one, and which configurations should not be used at all for a particular problem. In this study, the architecture space is defined over feed-forward, fully-connected ANNs, so that conventional techniques such as back-propagation, as well as other evolutionary methods in this field, can be used. The proposed technique is applied to the most challenging synthetic problems to test its optimality in evolving networks, and to benchmark problems to test its generalization capability as well as to make comparative evaluations with several competing techniques.
The experimental results show that the MD PSO evolves to optimum or near-optimum networks in general and has a superior generalization capability. Furthermore, the MD PSO naturally favors a low-dimension solution when it exhibits a competitive performance with a high dimension counterpart and such a native tendency eventually yields the evolution process to the compact network configurations in the architecture space rather than the complex ones, as long as the optimality prevails. PMID:19556105
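For reference, the fixed-dimension global-best PSO that MD PSO builds on can be sketched as follows. This is plain PSO on a toy sphere objective; the inter-dimensional passes that make MD PSO multi-dimensional (particles also searching over the dimension itself) are deliberately omitted, and all parameter values are conventional defaults, not the paper's.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Plain global-best PSO in a fixed-dimension search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                                   # personal bests
    pbest_val = np.array([objective(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()               # global best
    g_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Inertia + cognitive pull (pbest) + social pull (global best).
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g = pos[np.argmin(vals)].copy()
    return g, g_val

best_x, best_f = pso(lambda x: np.sum(x ** 2), dim=4)
```

In the paper's setting, `dim` itself would become part of each particle's state, with the position vector on each visited dimension encoding one candidate network architecture.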
MOKKEN: Stata module: Mokken scale analysis
Jeroen Weesie
1999-01-01
mokken is a command for non-parametric scaling of dichotomous items. It produces results similar to alpha. A polytomous version of mokken (due to Molenaar) is under construction, but it does not have high priority at this moment. For those with Stata v6 on an internet-accessible machine, install by typing .net cd http://www.fss.uu.nl/soc/iscore/stata/ then .net install mokken
NASA Astrophysics Data System (ADS)
Iorio, Lorenzo
2005-09-01
An unexpected secular increase of the astronomical unit, the length scale of the Solar System, has recently been reported by three different research groups (Krasinsky and Brumberg, Pitjeva, Standish). The latest JPL measurements amount to 7 ± 2 m cy⁻¹. At present, there are no explanations able to accommodate such an observed phenomenon, either in the realm of classical physics or in the usual four-dimensional framework of Einsteinian general relativity. The Dvali-Gabadadze-Porrati braneworld scenario, which is a multi-dimensional model of gravity aimed at providing an explanation of the observed cosmic acceleration without dark energy, predicts, among other things, a perihelion secular shift, due to Lue and Starkman, of 5 × 10⁻⁴ arcsec cy⁻¹ for all the planets of the Solar System. It yields a variation of about 6 m cy⁻¹ for the Earth-Sun distance, which is compatible with the observed rate of change of the astronomical unit. The recently measured corrections to the secular motions of the perihelia of the inner planets of the Solar System are in agreement with the predicted value of the Lue-Starkman effect for Mercury, Mars and, at a slightly worse level, the Earth.
The Attitudes to Ageing Questionnaire: Mokken Scaling Analysis
Shenkin, Susan D.; Watson, Roger; Laidlaw, Ken; Starr, John M.; Deary, Ian J.
2014-01-01
Background Hierarchical scales are useful in understanding the structure of underlying latent traits in many questionnaires. The Attitudes to Ageing Questionnaire (AAQ) explored the attitudes to ageing of older people themselves, and originally described three distinct subscales: (1) Psychosocial Loss (2) Physical Change and (3) Psychological Growth. This study aimed to use Mokken analysis, a method of Item Response Theory, to test for hierarchies within the AAQ and to explore how these relate to underlying latent traits. Methods Participants in a longitudinal cohort study, the Lothian Birth Cohort 1936, completed a cross-sectional postal survey. Data from 802 participants were analysed using Mokken Scaling analysis. These results were compared with factor analysis using exploratory structural equation modelling. Results Participants were 51.6% male, mean age 74.0 years (SD 0.28). Three scales were identified from 18 of the 24 items: two weak Mokken scales and one moderate Mokken scale. (1) ‘Vitality’ contained a combination of items from all three previously determined factors of the AAQ, with a hierarchy from physical to psychosocial; (2) ‘Legacy’ contained items exclusively from the Psychological Growth scale, with a hierarchy from individual contributions to passing things on; (3) ‘Exclusion’ contained items from the Psychosocial Loss scale, with a hierarchy from general to specific instances. All of the scales were reliable and statistically significant with ‘Legacy’ showing invariant item ordering. The scales correlate as expected with personality, anxiety and depression. Exploratory SEM mostly confirmed the original factor structure. Conclusions The concurrent use of factor analysis and Mokken scaling provides additional information about the AAQ. The previously-described factor structure is mostly confirmed. 
Mokken scaling identifies a new factor relating to vitality, and a hierarchy of responses within three separate scales, referring to vitality, legacy and exclusion. This shows what older people themselves consider important regarding their own ageing. PMID:24892302
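The "weak" and "moderate" Mokken scales above are graded by Loevinger's scalability coefficient H (roughly: weak for 0.3 ≤ H < 0.4, moderate for 0.4 ≤ H < 0.5). A minimal sketch of H for dichotomous items follows; the AAQ items are polytomous, so this is the simplest illustrative case, not a reimplementation of the study's analysis.

```python
import numpy as np

def loevinger_H(X):
    """Scalability coefficient H for dichotomous items (rows = persons,
    columns = items). A Guttman error is endorsing the harder (less
    popular) item of a pair while rejecting the easier one; H compares
    observed errors with those expected under item independence."""
    n, k = X.shape
    p = X.mean(axis=0)                       # item popularities
    obs = exp = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            obs += np.sum((X[:, hard] == 1) & (X[:, easy] == 0))
            exp += n * p[hard] * (1 - p[easy])
    return 1.0 - obs / exp

# Perfect cumulative (Guttman) response patterns: three items ordered
# from easy to hard, nobody passes a harder item while failing an
# easier one, so H should equal 1.
guttman = np.array([
    [0, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
    [1, 1, 0],
])
```

For independent random responses H hovers near 0, which is why the thresholds above are meaningful as evidence of a common latent trait.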
Rasch Analysis of the Geriatric Depression Scale--Short Form
ERIC Educational Resources Information Center
Chiang, Karl S.; Green, Kathy E.; Cox, Enid O.
2009-01-01
Purpose: The purpose of this study was to examine scale dimensionality, reliability, invariance, targeting, continuity, cutoff scores, and diagnostic use of the Geriatric Depression Scale-Short Form (GDS-SF) over time with a sample of 177 English-speaking U.S. elders. Design and Methods: An item response theory, Rasch analysis, was conducted with…
Statistical Modeling and Performance Analysis of Multi-Scaling Traffic
Nelson X. Liu; John S. Baras
2003-01-01
In this paper we propose a new statistical model for multi-scale traffic, and present an exact queueing analysis for the model. The model is based on the central moments and the marginal distributions of the cumulative traffic loads in different time scales. Only the first two moments are needed to characterize the traffic process, which greatly simplifies the representation and
CRYSTAL DISSOLUTION AND PRECIPITATION IN POROUS MEDIA: PORE SCALE ANALYSIS
Eindhoven, Technische Universiteit
CRYSTAL DISSOLUTION AND PRECIPITATION IN POROUS MEDIA: PORE SCALE ANALYSIS C. J. VAN DUIJN AND I. S. POP Abstract. In this paper we discuss a pore scale model for crystal dissolution and precipitation. For the particular case of strips we show that free boundaries occur in the form of dissolution/precipitation
Scaling analysis of flow in channel with viscous dissipation
NSDL National Science Digital Library
Krane, Matthew J. M.
2008-10-25
Scaling analysis is used to predict the functional dependence and order of magnitude of the maximum temperature difference between the fluid and the channel wall, for fully developed flow between parallel plates with viscous dissipation.
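The standard order-of-magnitude result for this problem is that the fluid-to-wall temperature difference driven by viscous dissipation scales as ΔT ~ μU²/k (the scale that makes the Brinkman number order one). The sketch below applies that generic scaling with illustrative textbook-style property values; it is not claimed to reproduce the exact coefficient derived in the resource.

```python
def viscous_heating_scale(mu, u_mean, k):
    """Order-of-magnitude estimate of the maximum fluid-to-wall
    temperature difference from viscous dissipation: dT ~ mu * U^2 / k,
    with mu in Pa*s, u_mean in m/s, k in W/(m*K); returns kelvin."""
    return mu * u_mean ** 2 / k

# Engine oil at ~1 m/s (assumed properties: mu ~ 0.1 Pa*s, k ~ 0.145):
dT_oil = viscous_heating_scale(mu=0.1, u_mean=1.0, k=0.145)
# Water at the same speed (mu ~ 1e-3 Pa*s, k ~ 0.6):
dT_water = viscous_heating_scale(mu=1e-3, u_mean=1.0, k=0.6)
```

The two cases show why viscous heating matters for oils (fractions of a kelvin even at modest speeds) but is usually negligible for water.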
Multi-scale analysis of heavy rainfall systems
NASA Astrophysics Data System (ADS)
TSUI, Chi Yan; Shao, Yaping
2014-05-01
The aim of this work is to study cross-scale interactions, with a focus on mesoscale convective systems. A multi-scale analysis of a heavy rainfall event is carried out by dividing the responsible systems into large, middle and small scales. The three distinctive scales correspond respectively to upper- and low-level jets, meso-scale convective systems and convective cells. The governing equations for the three scales are derived and then simplified to their bare essence to illustrate the cross-scale interactions. In particular, the cross-scale transfers of momentum and heat are retained in the equations to illustrate the interactions of the large- and small-scale motions with the mesoscale system. WRF has been used to simulate a heavy rainfall event in southeast China, and the model results are used to test the theory of the multi-scale interactions. Overall, the theory suggests a plausible mechanism whereby the meso-scale convective system is responsible for the vertical momentum transfer from the upper-level jet to the lower-level jet, which maintains the low-level positive vorticity of the convective system. The low-level jet also carries large quantities of moisture from the South China Sea to southeast China, which are necessary for small-scale convection.
SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION
John Crepeau
2011-03-01
Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.
NASA Astrophysics Data System (ADS)
Masum Haider, M.; Akter, Suraya; Duha, Syed; Mamun, Abdullah
2012-10-01
The basic features and multi-dimensional instability of electrostatic (EA) solitary waves propagating in an ultra-relativistic degenerate dense magnetized plasma (containing inertia-less electrons, inertia-less positrons, and inertial ions) have been theoretically investigated by reductive perturbation method and small-k perturbation expansion technique. The Zakharov-Kuznetsov (ZK) equation has been derived, and its numerical solutions for some special cases have been analyzed to identify the basic features (viz. amplitude, width, instability, etc.) of these electrostatic solitary structures. The implications of our results in some compact astrophysical objects, particularly white dwarfs and neutron stars, are briefly discussed.
Kaethner, Christian, E-mail: kaethner@imt.uni-luebeck.de; Ahlborg, Mandy; Buzug, Thorsten M., E-mail: buzug@imt.uni-luebeck.de [Institute of Medical Engineering, Universität zu Lübeck, 23562 Lübeck (Germany); Knopp, Tobias [Thorlabs GmbH, 23562 Lübeck (Germany); Sattel, Timo F. [Philips Medical Systems DMC GmbH, 22335 Hamburg (Germany)
2014-01-28
Magnetic Particle Imaging (MPI) is a tomographic imaging modality capable of visualizing tracers using magnetic fields. A high magnetic gradient strength is mandatory to achieve reasonable image quality. Therefore, a power optimization of the coil configuration is essential. In order to realize a multi-dimensional, efficient gradient field generator, the following improvements compared to conventionally used Maxwell coil configurations are proposed: (i) curved rectangular coils, (ii) interleaved coils, and (iii) multi-layered coils. Combining these adaptations results in a total power reduction of three orders of magnitude, which is an essential step towards the feasibility of building full-body human MPI scanners.
NASA Astrophysics Data System (ADS)
Ducrot, Arnaud
2015-04-01
This paper is concerned with the study of the asymptotic behaviour of a multi-dimensional Fisher–KPP equation posed in an asymptotically homogeneous medium and supplemented with a compactly supported initial datum. We derive precise estimates for the location of the front before proving the convergence of the solutions towards the travelling front. In particular, we show that the location of the front depends drastically on the rate at which the medium becomes homogeneous at infinity. A fast rate of convergence changes the location only by a constant, while a lower rate of convergence induces an additional logarithmic delay.
Scale Dependent Analysis Approach for Star Events
NASA Astrophysics Data System (ADS)
Rogachevsky, O. V.
2007-11-01
This work presents a new approach to analyzing multi-particle events in nucleus-nucleus collisions. Event multiplicities recently obtained with the STAR detector make it possible to estimate a fractal dimension for each event and to classify events based on these quantities. This analysis is applied to data from Au + Au collisions at √s_NN = 200 GeV and √s_NN = 62 GeV and for different kinematic variables. It is shown that this analysis provides information on the dynamical properties of events.
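A per-event fractal dimension of the kind used above is typically estimated by box counting: cover the (pseudo)phase-space points at several box sizes, count occupied boxes N(s), and fit the slope of log N against log(1/s). The sketch below does this for generic 2-D point sets; the choice of scales and the synthetic data are illustrative, not STAR's analysis.

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set from the
    slope of log N(s) versus log(1/s) over the given box sizes s."""
    counts = []
    for s in scales:
        cells = np.unique(np.floor(points / s).astype(int), axis=0)
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)),
                          np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
area_like = rng.random((20000, 2))      # space-filling set: dimension ~ 2
x = rng.random(5000)
line_like = np.column_stack([x, x])     # points on a line: dimension ~ 1
```

With enough points the estimate distinguishes cleanly between uniform (dimension 2) and degenerate (dimension 1) distributions, which is the kind of per-event discriminator the abstract describes.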
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
[Study of Beck's hopelessness scale. Validation and factor analysis].
Bouvard, M; Charles, S; Guérin, J; Aimard, G; Cottraux, J
1992-01-01
The validation study and factorial analysis of Beck's hopelessness scale are presented. Two groups were compared, including patients suffering from depression (n = 100) and a control group (n = 93). Age and sex were comparable in the two groups. The hopelessness scale is valid and differentiates depressive patients from control subjects. The scale has good reliability (test-retest, r = .81) and good internal consistency (alpha = .97 for depressive subjects and alpha = .79 for control subjects). It also shows good concurrent validity with other scales assessing depressive cognitions, the automatic thoughts questionnaire, the dysfunctional attitudes scale (form A) and a scale assessing suicidal risk (ERSD). No concurrent validity is found with scales assessing the intensity of depression, the Beck depression inventory and the Hamilton scale. The factorial analysis elicits a general factor, accounting for 38.15% of the variance, and reflecting negative feelings about the future. The factorial analyses together show the stability of the factorial structure. PMID:1299593
Large-Scale Vehicle Sharing Systems: Analysis of Vélib'
Rahul Nair; Elise Miller-Hooks; Robert C. Hampshire; Ana Bušić
2012-01-01
A quantitative analysis of the pioneering large-scale bicycle sharing system, Vélib' in Paris, France is presented. This system involves a fleet of bicycles strategically located across the network. Users are free to check out a bicycle from close to their origin and drop it off close to their destination to complete their trip. The analysis provides key insights on the
Assessment of RELAP5-3D multi-dimensional component model using data from LOFT Test L2-5
Davis, C.B.
1998-07-01
The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the Loss-of-Fluid Test (LOFT) L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident (LOCA). Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D computer code. The calculations simulated the blowdown, refill, and reflood portions of the transient. The calculated thermal-hydraulic response of the primary coolant system was generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP5/MOD3.
NASA Astrophysics Data System (ADS)
Kundu, N. R.; Masud, M. M.; Ashrafi, K. S.; Mamun, A. A.
2013-01-01
A rigorous theoretical investigation has been made on multi-dimensional instability of obliquely propagating electrostatic dust-ion-acoustic (DIA) solitary structures in a magnetized dusty electronegative plasma which consists of Boltzmann electrons, nonthermal negative ions, cold mobile positive ions, and arbitrarily charged stationary dust. The Zakharov-Kuznetsov (ZK) equation is derived by the reductive perturbation method, and its solitary wave solution is analyzed for the study of the DIA solitary structures, which are found to exist in such a dusty plasma. The multi-dimensional instability of these solitary structures is also studied by the small-k (long wave-length plane wave) perturbation expansion technique. The combined effects of the external magnetic field, obliqueness, and nonthermal distribution of negative ions, which are found to significantly modify the basic properties of small but finite-amplitude DIA solitary waves, are examined. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable DIA solitary waves. The basic features (viz. speed, amplitude, width, instability, etc.) and the underlying physics of the DIA solitary waves, which are relevant to many astrophysical situations (especially, auroral plasma, Saturn's E-ring and F-ring, Halley's comet, etc.) and laboratory dusty plasma situations, are briefly discussed.
NASA Astrophysics Data System (ADS)
Miller, Gregory K.; Petti, David A.; Varacalle, Dominic J.; Maki, John T.
2003-04-01
The fundamental design for a gas-cooled reactor relies on the behavior of the coated particle fuel. The coating layers, termed the TRISO coating, act as a mini-pressure vessel that retains fission products. Results of US irradiation experiments show that many more fuel particles have failed than can be attributed to one-dimensional pressure vessel failures alone. Post-irradiation examinations indicate that multi-dimensional effects, such as the presence of irradiation-induced shrinkage cracks in the inner pyrolytic carbon layer, contribute to these failures. To address these effects, the methods of prior one-dimensional models are expanded to capture the stress intensification associated with multi-dimensional behavior. An approximation of the stress levels enables the treatment of statistical variations in numerous design parameters and Monte Carlo sampling over a large number of particles. The approach is shown to make reasonable predictions when used to calculate failure probabilities for irradiation experiments of the New Production - Modular High Temperature Gas Cooled Reactor Program.
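The Monte Carlo treatment of statistical variations described above can be illustrated with a heavily simplified sketch: sample a per-particle stress and a Weibull-distributed coating strength, and count exceedances. The lognormal stress model, the 700 MPa characteristic strength and the Weibull modulus of 6 are all illustrative assumptions, not values from the paper, and the real model intensifies stresses for multi-dimensional effects such as IPyC cracking.

```python
import numpy as np

def failure_probability(n_particles=200_000, seed=0):
    """Monte Carlo sketch of coated-particle failure statistics: sample
    a tangential SiC stress and a Weibull strength per particle and
    count how often stress exceeds strength (all parameters illustrative)."""
    rng = np.random.default_rng(seed)
    # Illustrative stress distribution, MPa: spread in layer thickness
    # and internal pressure folded into one lognormal.
    stress = rng.lognormal(mean=np.log(150.0), sigma=0.25, size=n_particles)
    # Weibull strength: characteristic strength 700 MPa, modulus 6.
    strength = 700.0 * rng.weibull(6.0, size=n_particles)
    return np.mean(stress > strength)

p_fail = failure_probability()
```

Sampling over hundreds of thousands of particles is what makes a fast stress approximation (rather than a full finite-element solve per particle) essential, which is the motivation stated in the abstract.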
Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR
Ishii, M.; Ravankar, S.T.; Dowlati, R. [Purdue Univ., Lafayette, IN (United States). School of Nuclear Engineering] [and others]
1996-04-01
The scaled facility design was based on the three-level scaling method; the first level is based on the well-established approach obtained from the integral response function, namely integral scaling. This level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second level of scaling is for the boundary flow of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4 height and 1/100 area ratio scaling; this corresponds to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time will run twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all the major components of the SBWR (Simplified Boiling Water Reactor), and the safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
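The quoted scaling numbers are mutually consistent and easy to check: volume scales as height times area, the 1/200 power scale together with the 1/400 volume scale implies a doubled power-to-volume ratio, and for gravity-dominated flows the time scale goes as the square root of the height ratio, giving the factor-of-two speedup. A small arithmetic check (the power-to-volume ratio of 2 is inferred from the quoted figures, not stated in the abstract):

```python
def puma_scaling(height_ratio=1/4, area_ratio=1/100, power_volume_ratio=2.0):
    """Consistency check of the PUMA scaling numbers: returns the
    (volume, power, time) model-to-prototype ratios."""
    volume_ratio = height_ratio * area_ratio          # 1/4 * 1/100 = 1/400
    power_ratio = power_volume_ratio * volume_ratio   # 2 * 1/400  = 1/200
    time_ratio = height_ratio ** 0.5                  # sqrt(1/4)  = 1/2
    return volume_ratio, power_ratio, time_ratio

volume, power, time = puma_scaling()
```

A time ratio of 1/2 means one second in the model corresponds to two seconds in the prototype, i.e. the model "runs twice as fast".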
Honeycomb: Visual Analysis of Large Scale Social Networks
NASA Astrophysics Data System (ADS)
van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.
The rise in the use of social network sites allows us to collect large amounts of user reported data on social structures and analysis of this data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not very well suited to handle the massive scale of this network data, and often resolve to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger scale data (with millions of connections), which we illustrate by using a large scale corporate social networking site as an example. Additionally, we introduce a new probability based network metric to guide users to potentially interesting or anomalous patterns and discuss lessons learned during design and implementation.
Swan Jr., Colby Corson
Multi-Scale Unit-Cell Analysis of Textile Composites [slide fragment] Research objectives: multi-scale unit-cell analysis for textile composites to facilitate structural analysis; enhanced understanding of textile composites. Results on the fiber-diameter scale; results on the textile unit-cell scale.
Large scale analysis of signal reachability
Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer
2014-01-01
Motivation: Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem, and thus solving it for very large TRNs remains a challenge. Results: We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. It follows a divide-and-conquer strategy: we break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial; the product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork, which reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples.
Availability: All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm. Contact: atodor@cise.ufl.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24932011
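The brute-force baseline that the polynomial-collapsing method improves upon can be sketched in a few lines: enumerate every subset of uncertain edges, weight each subset by its probability, and test reachability. This is a minimal Python sketch on a hypothetical toy network (the authors' PReach tool avoids this exponential enumeration; the network and probabilities here are illustrative only):

```python
from itertools import product

def reach_probability(edges, sources, targets):
    """Probability that at least one target is reachable from a source,
    when each edge (u, v, p) is present independently with probability p.
    Brute force: enumerate all 2^|E| edge subsets (exponential, hence the
    need for the scalable methods described in the abstract)."""
    total = 0.0
    for present in product([False, True], repeat=len(edges)):
        prob = 1.0
        adj = {}
        for (u, v, p), on in zip(edges, present):
            prob *= p if on else (1.0 - p)
            if on:
                adj.setdefault(u, []).append(v)
        # depth-first search from all sources over the present edges
        seen, stack = set(sources), list(sources)
        while stack:
            node = stack.pop()
            for nxt in adj.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        if seen & set(targets):
            total += prob
    return total

# Hypothetical network: two parallel paths s->a->t and s->b->t,
# each edge present with probability 0.5
edges = [("s", "a", 0.5), ("a", "t", 0.5), ("s", "b", 0.5), ("b", "t", 0.5)]
p = reach_probability(edges, {"s"}, {"t"})
print(p)  # 1 - (1 - 0.25)^2 = 0.4375
```

Each parallel path succeeds with probability 0.25, so the reach probability is one minus the chance both fail.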
An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles
NASA Technical Reports Server (NTRS)
Brown, Clifford; Bridges, James
2003-01-01
Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
NASA Astrophysics Data System (ADS)
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-03-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales.
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
Geographical Scale Effects on the Analysis of Leptospirosis Determinants
Gracie, Renata; Barcellos, Christovam; Magalhães, Mônica; Souza-Santos, Reinaldo; Barrocas, Paulo Rubens Guimarães
2014-01-01
Leptospirosis displays a great diversity of routes of exposure, reservoirs, etiologic agents, and clinical symptoms. It occurs almost worldwide, but its pattern of transmission varies depending on where it occurs. Climate change may increase the number of cases, especially in developing countries such as Brazil. Spatial analysis studies of leptospirosis have highlighted the importance of socioeconomic and environmental context. Hence, the choice of the geographical scale and unit of analysis used in such studies is pivotal, because it restricts the indicators available for the analysis and may bias the results. In this study, we evaluated which environmental and socioeconomic factors, typically used to characterize the risks of leptospirosis transmission, are more relevant at different geographical scales (i.e., regional, municipal, and local). Geographic Information Systems were used for data analysis. Correlations between leptospirosis incidence and several socioeconomic and environmental indicators were calculated at different geographical scales. At the regional scale, the strongest correlations were observed between leptospirosis incidence and the amount of people living in slums, or the percent of the area densely urbanized. At the municipal scale, there were no significant correlations. At the local level, the percent of the area prone to flooding best correlated with leptospirosis incidence. PMID:25310536
Shielding analysis methods available in the scale computational system
Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.
1986-01-01
Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.
Full-scale system impact analysis: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
The Digital Document Storage Full Scale System can provide cost-effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this Impact Analysis. It is highly recommended that NASA proceed with a phased communications requirements analysis to ensure that adequate communications service can be supplied at a reasonable cost, in order to validate the recent working assumptions on which the success of the DDS Full Scale System depends.
Matthew A. Douglas; Stephen M. Swartz
2009-01-01
Purpose – The purpose of this paper is to develop a measurement scale to assess over-the-road commercial motor vehicle operators' attitudes toward safety regulations. Design/methodology/approach – A literature review of the current USA motor carrier safety literature and general safety literature is conducted to determine the existence of a construct and measurement scale suitable for assessing truck drivers' attitudes toward
Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale model's undeformed jig shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span (pre-twist, or wash-out) is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1] in the fluxes presented in [3]. These interpolants use the weighted essentially non-oscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those in previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity, and discuss the connection between this method and that in [2].
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth order, semi-discrete central upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spacial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
Amadi, Ovid Charles
2013-01-01
The requirement that individual cells be able to communicate with one another over a range of length scales is a fundamental prerequisite for the evolution of multicellular organisms. Often diffusible chemical molecules ...
A rasch analysis of the statistical anxiety rating scale.
Teman, Eric D
2013-01-01
The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. The paper concludes with several recommendations for researchers considering using the STARS. PMID:24064581
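As background for the Rasch rating scale model applied to the STARS above, the category probabilities of Andrich's rating-scale model can be sketched directly. This is a minimal illustration; the ability, difficulty, and threshold values below are made up, not estimates from the STARS data:

```python
import math

def rating_scale_probs(theta, item_difficulty, thresholds):
    """Andrich rating-scale model: probability of each response category
    0..m for a person of ability theta on an item of the given difficulty,
    with one shared set of category thresholds (tau) across items."""
    # Cumulative logits: sum_{j<=k} (theta - difficulty - tau_j); empty sum = 0.
    logits = [0.0]
    s = 0.0
    for tau in thresholds:
        s += theta - item_difficulty - tau
        logits.append(s)
    weights = [math.exp(l) for l in logits]
    z = sum(weights)
    return [w / z for w in weights]

# Illustrative values: 4 response categories via 3 thresholds
probs = rating_scale_probs(theta=0.5, item_difficulty=0.0,
                           thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs])  # four category probabilities summing to 1
```

As ability rises relative to item difficulty, probability mass shifts toward the higher categories, which is the behavior the Rasch analysis of the STARS exploits.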
Confirmatory Factor Analysis of the Recent Exposure to Violence Scale
Manfred H. M. van Dulmen; Lara M. Belliston; Mark Singer
The purpose of the current study is to advance the psychometric properties of the child-administered 22-item Recent Exposure to Violence Scale (REVS) using confirmatory factor analysis (CFA) across three large and ethnically diverse samples of children ranging in age from middle childhood through adolescence. Results of the CFA suggest that a seven-factor solution best represents the REVS. This
Crater ejecta scaling laws: fundamental forms based on dimensional analysis
K. R. Housen; R. M. Schmidt; K. A. Holsapple
1983-01-01
A model of crater ejecta is constructed using dimensional analysis and a recently developed theory of energy and momentum coupling in cratering events. General relations are derived that provide a rationale for scaling laboratory measurements of ejecta to larger events. Specific expressions are presented for ejection velocities and ejecta blanket profiles in two limiting regimes of crater formation: the so-called
A Confirmatory Factor Analysis of the Professional Opinion Scale
ERIC Educational Resources Information Center
Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.
2007-01-01
The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…
Paris-Sud XI, Université de
[garbled abstract fragment on content-analysis methodology in CSCL: determining reliability without overestimation; threading, i.e. connections between analysis units; later methods such as `thread-length' analysis and `social network' analysis as indicators of message quality. Keywords: content analysis, methodology, reliability, threading, coding]
Wu, Xiaolin
Multi-dimensional unconstrained optimization [course-notes fragment]: analytical techniques, golden-section search method, Newton's method. Question 13.3 (5th edition): solve for the value of x that maximizes f(x) in Prob. 13.2 using the golden-section search, employing initial guesses of xl = 0 and xu = 2.
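The golden-section search named in the fragment above can be sketched as follows. Since Prob. 13.2's objective is not given here, a hypothetical unimodal function is substituted; only the bracketing interval [0, 2] is taken from the text:

```python
import math

def golden_max(f, xl, xu, tol=1e-8):
    """Golden-section search for a maximum of a unimodal f on [xl, xu].
    Each iteration shrinks the bracket by the golden ratio conjugate and
    reuses one previously evaluated interior point."""
    gr = (math.sqrt(5) - 1) / 2          # ~0.618
    d = gr * (xu - xl)
    x1, x2 = xl + d, xu - d              # interior points, x2 < x1
    f1, f2 = f(x1), f(x2)
    while xu - xl > tol:
        if f1 > f2:                       # maximum lies in [x2, xu]
            xl, x2, f2 = x2, x1, f1
            x1 = xl + gr * (xu - xl)
            f1 = f(x1)
        else:                             # maximum lies in [xl, x1]
            xu, x1, f1 = x1, x2, f2
            x2 = xu - gr * (xu - xl)
            f2 = f(x2)
    return (xl + xu) / 2

# Hypothetical objective (Prob. 13.2's actual f(x) is not given in the text)
f = lambda x: 2 * math.sin(x) - x ** 2 / 10
xstar = golden_max(f, 0.0, 2.0)
print(round(xstar, 4))
```

Only one new function evaluation is needed per iteration, which is the method's advantage over naive interval trisection.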
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung
2013-01-01
Due to the scarcity of cross-cultural comparative studies in exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two…
SINEX: SCALE shielding analysis GUI for X-Windows
Browman, S.M.; Barnett, D.L.
1997-12-01
SINEX (SCALE Interface Environment for X-windows) is an X-Windows graphical user interface (GUI), that is being developed for performing SCALE radiation shielding analyses. SINEX enables the user to generate input for the SAS4/MORSE and QADS/QAD-CGGP shielding analysis sequences in SCALE. The code features will facilitate the use of both analytical sequences with a minimum of additional user input. Included in SINEX is the capability to check the geometry model by generating two-dimensional (2-D) color plots of the geometry model using a new version of the SCALE module, PICTURE. The most sophisticated feature, however, is the 2-D visualization display that provides a graphical representation on screen as the user builds a geometry model. This capability to interactively build a model will significantly increase user productivity and reduce user errors. SINEX will perform extensive error checking and will allow users to execute SCALE directly from the GUI. The interface will also provide direct on-line access to the SCALE manual.
Evidence for a Multi-Dimensional Latent Structural Model of Externalizing Disorders
ERIC Educational Resources Information Center
Witkiewitz, Katie; King, Kevin; McMahon, Robert J.; Wu, Johnny; Luk, Jeremy; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Greenberg, Mark T.; Lochman, John E.; Pinderhughes, Ellen E.
2013-01-01
Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the…
Efficient High Order Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations: Talk Slides
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Brian R. (Technical Monitor)
2002-01-01
This viewgraph presentation presents information on the attempt to produce high-order, efficient, central methods that scale well to high dimension. The central philosophy is that the equations should evolve to the point where the data is smooth. This is accomplished by a cyclic pattern of reconstruction, evolution, and re-projection. One-dimensional and two-dimensional representation methods are detailed as well.
Time and space optimization for processing groups of multi-dimensional scientific queries
Suresh Aryangat; Henrique Andrade; Alan Sussman
2004-01-01
Data analysis applications in areas as diverse as remote sensing and telepathology require operating on and processing very large datasets. For such applications to execute efficiently, careful attention must be paid to the storage, retrieval, and manipulation of the datasets. This paper addresses the optimizations performed by a high performance database system that processes groups of data analysis requests for
A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.
New Criticality Safety Analysis Capabilities in SCALE 5.1
Bowman, Stephen M [ORNL; DeHart, Mark D [ORNL; Dunn, Michael E [ORNL; Goluoglu, Sedat [ORNL; Horwedel, James E [ORNL; Petrie Jr, Lester M [ORNL; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL
2007-01-01
Version 5.1 of the SCALE computer software system developed at Oak Ridge National Laboratory, released in 2006, contains several significant enhancements for nuclear criticality safety analysis. This paper highlights new capabilities in SCALE 5.1, including improved resonance self-shielding capabilities; ENDF/B-VI.7 cross-section and covariance data libraries; HTML output for KENO V.a; analytical calculations of KENO-VI volumes with GeeWiz/KENO3D; new CENTRMST/PMCST modules for processing ENDF/B-VI data in TSUNAMI; SCALE Generalized Geometry Package in NEWT; KENO Monte Carlo depletion in TRITON; and plotting of cross-section and covariance data in Javapeno.
Light water reactor fuel performance modeling and multi-dimensional simulation
NASA Astrophysics Data System (ADS)
Rashid, Joseph Y. R.; Yagnik, Suresh K.; Montgomery, Robert O.
2011-08-01
Light water reactor fuel is a multicomponent system required to produce thermal energy through the fission process, efficiently transfer the thermal energy to the coolant system, and provide a barrier to fission product release by maintaining structural integrity. The operating conditions within a reactor induce complex multi-physics phenomena that occur over time scales ranging from less than a microsecond to years and act over distances ranging from inter-atomic spacing to meters. These conditions impose challenging and unique modeling, simulation, and verification data requirements in order to accurately determine the state of the fuel during its lifetime in the reactor. The capabilities and limitations of the current engineering-scale one-dimensional and two-dimensional fuel performance codes are discussed, and the challenges of employing higher-fidelity atomistic modeling techniques, such as molecular dynamics and phase-field simulations, are presented.
Microbial community analysis of a full-scale DEMON bioreactor.
Gonzalez-Martinez, Alejandro; Rodriguez-Sanchez, Alejandro; Muñoz-Palazon, Barbara; Garcia-Ruiz, Maria-Jesus; Osorio, Francisco; van Loosdrecht, Mark C M; Gonzalez-Lopez, Jesus
2015-03-01
Full-scale applications of autotrophic nitrogen removal technologies for the treatment of digested sludge liquor have proliferated during the last decade. Among these technologies, the aerobic/anoxic deammonification process (DEMON) is one of the major applied processes. This technology achieves nitrogen removal from wastewater through anammox metabolism inside a single bioreactor, using alternating cycles of aeration. To date, the microbial community composition of full-scale DEMON bioreactors has never been reported. In this study, the bacterial community structure of a full-scale DEMON bioreactor located at the Apeldoorn wastewater treatment plant was analyzed using pyrosequencing. This technique provided a higher-resolution study of the bacterial assemblage of the system compared to other techniques used in lab-scale DEMON bioreactors. Results showed that the DEMON bioreactor was a complex ecosystem where ammonium oxidizing bacteria, anammox bacteria and many other bacterial phylotypes coexist. The potential ecological role of all phylotypes found is discussed. Thus, metagenomic analysis through pyrosequencing offered new perspectives on the functioning of the DEMON bioreactor by exhaustive identification of the microorganisms that play a key role in the performance of bioreactors. In this way, pyrosequencing has proven a helpful tool for the in-depth investigation of the functioning of bioreactors at the microbiological scale. PMID:25245398
Multiple scales analysis of a nonlinear ordinary differential equation
Gang Xu; Changqing Shu; Lei Lin
1985-01-01
Asymptotic solutions of the nonlinear ordinary differential equation d²θ/dZ² + a dθ/dZ + f(θ) = 0 for large a are obtained by the singular perturbation method of multiple-scales analysis. They are of the form θ(Z) = A(Z/a) + B(Z/a) exp(−aZ). Initial and boundary value problems are discussed. The special case of f(θ) = δ + cos 2θ (δ < 1), encountered in shearing nematic liquid crystal soliton problems and other physical systems, is solved
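A sketch of how the quoted solution form arises, assuming a slow variable T = Z/a (the symbols here are reconstructed for illustration, not taken verbatim from the paper):

```latex
% Two-scale ansatz matching the quoted solution form:
\theta(Z) \;=\; A(T) \;+\; B(T)\,e^{-aZ}, \qquad T = Z/a .
% Substituting into  \theta'' + a\,\theta' + f(\theta) = 0  and collecting
% powers of 1/a: away from the initial layer the e^{-aZ} terms are
% exponentially small, and the leading-order balance  a\,\theta' + f(\theta)
% \approx 0  gives the slow-flow equation
\frac{dA}{dT} \;=\; -\,f(A),
% while B(T) parameterizes the rapidly decaying fast transient.
```

The exponential term captures the boundary-layer behavior near the initial condition; the slow flow governs the long-range structure relevant to the soliton problems mentioned.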
Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)
1999-01-01
We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.
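The density-gradient correction named above can be written compactly. This is a sketch of the commonly quoted form due to Ancona; sign conventions and the numerical coefficient in b_n vary with the carrier statistics assumed, so treat the factor of 12 (Maxwell-Boltzmann) as illustrative:

```latex
% Electron equation of state with the density-gradient (DG) correction:
% the classical drift-diffusion term is augmented by a quantum term
% proportional to \nabla^2\sqrt{n}/\sqrt{n}, which smooths carrier
% profiles and reproduces tunneling-like behavior in multi-dimensions.
\varphi_n \;=\; \varphi \;-\; V_T \ln\!\frac{n}{n_i}
\;+\; 2\,b_n\,\frac{\nabla^{2}\sqrt{n}}{\sqrt{n}},
\qquad
b_n \;=\; \frac{\hbar^{2}}{12\,q\,m_n^{*}} .
```

Setting b_n = 0 recovers the classical drift-diffusion model, which is why the DG model can be retrofitted into an existing PDE solver such as PROPHET.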
Genome-scale analysis of Mannheimia succiniciproducens metabolism.
Kim, Tae Yong; Kim, Hyun Uk; Park, Jong Myoung; Song, Hyohak; Kim, Jin Sik; Lee, Sang Yup
2007-07-01
Mannheimia succiniciproducens MBEL55E isolated from bovine rumen is a capnophilic gram-negative bacterium that efficiently produces succinic acid, an industrially important four-carbon dicarboxylic acid. In order to design a metabolically engineered strain which is capable of producing succinic acid with high yield and productivity, it is essential to optimize the whole metabolism at the systems level. Consequently, in silico modeling and simulation of the genome-scale metabolic network was employed for genome-scale analysis and efficient design of metabolic engineering experiments. The genome-scale metabolic network of M. succiniciproducens consisting of 686 reactions and 519 metabolites was constructed based on reannotation and validation experiments. With the reconstructed model, the network structure and key metabolic characteristics allowing highly efficient production of succinic acid were deciphered; these include strong PEP carboxylation, branched TCA cycle, relatively weak pyruvate formation, the lack of glyoxylate shunt, and non-PTS for glucose uptake. Constraints-based flux analyses were then carried out under various environmental and genetic conditions to validate the genome-scale metabolic model and to decipher the altered metabolic characteristics. Predictions based on constraints-based flux analysis were mostly in excellent agreement with the experimental data. In silico knockout studies allowed prediction of new metabolic engineering strategies for the enhanced production of succinic acid. This genome-scale in silico model can serve as a platform for the systematic prediction of physiological responses of M. succiniciproducens to various environmental and genetic perturbations and consequently for designing rational strategies for strain improvement. PMID:17405177
Upscaling river biomass using dimensional analysis and hydrogeomorphic scale-invariance
Power, Mary Eleanor
[garbled fragment] Upscaling river biomass using dimensional analysis and hydro-geomorphologic scaling laws; we first demonstrate the use of dimensional analysis and hydrogeomorphic scale-invariance. Geophys. Res. Lett., 34, L24S26, doi:10.1029/2007GL
Voice Dysfunction in Dysarthria: Application of the Multi-Dimensional Voice Program.
ERIC Educational Resources Information Center
Kent, R. D.; Vorperian, H. K.; Kent, J. F.; Duffy, J. R.
2003-01-01
Part 1 of this paper recommends procedures and standards for the acoustic analysis of voice in individuals with dysarthria. In Part 2, acoustic data are reviewed for dysarthria associated with Parkinson disease (PD), cerebellar disease, amyotrophic lateral sclerosis, traumatic brain injury, unilateral hemispheric stroke, and essential tremor.…
REGIONAL-SCALE WIND FIELD CLASSIFICATION EMPLOYING CLUSTER ANALYSIS
Glascoe, L G; Glaser, R E; Chin, H S; Loosmore, G A
2004-06-17
The classification of time-varying multivariate regional-scale wind fields at a specific location can assist event planning as well as consequence and risk analysis. Further, wind field classification involves data transformation and inference techniques that effectively characterize stochastic wind field variation. Such a classification scheme is potentially useful for addressing overall atmospheric transport uncertainty and meteorological parameter sensitivity issues. Different methods to classify wind fields over a location include the principal component analysis of wind data (e.g., Hardy and Walton, 1978) and the use of cluster analysis for wind data (e.g., Green et al., 1992; Kaufmann and Weber, 1996). The goal of this study is to use a clustering method to classify the winds of a gridded data set, i.e., from meteorological simulations generated by a forecast model.
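The cluster-analysis approach described above can be sketched with a minimal k-means over (u, v) wind components. The data, deterministic seeding, and choice of k below are purely illustrative, not the study's configuration:

```python
# Minimal k-means sketch for grouping wind observations (u, v components)
# into regimes. Synthetic data; real wind-field classification would
# cluster much higher-dimensional gridded fields.

def kmeans(points, k, iters=50):
    centroids = list(points[:k])                # deterministic seed
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two synthetic wind regimes: light westerlies and strong southerlies.
winds = [(3.0, 0.2), (3.2, -0.1), (2.9, 0.0),
         (0.1, 8.0), (-0.2, 8.3), (0.0, 7.9)]
centroids, clusters = kmeans(winds, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]
```

Cluster-analysis studies of wind data typically add standardization of components and an objective criterion (e.g. within-cluster variance) for choosing k.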
Multi-scale analysis of bias correction of soil moisture
NASA Astrophysics Data System (ADS)
Su, C.-H.; Ryu, D.
2015-01-01
Remote sensing, in situ networks and models now provide unprecedented information for environmental monitoring. To conjunctively use multi-source data that nominally represent an identical variable, one must resolve the biases existing between these disparate sources; the characteristics of these biases can be non-trivial owing to spatio-temporal variability of the target variable and inter-sensor differences in measurement support. Soil moisture (SM) monitoring is one such example. Triple collocation (TC) based bias correction is a powerful statistical method that is increasingly being used to address this issue, but it is only applicable in the linear regime, whereas the non-linear method of statistical moment matching is susceptible to unintended biases originating from measurement error. Since different physical processes that influence SM dynamics may be distinguishable by their characteristic spatio-temporal scales, we propose a multi-timescale linear bias model in the framework of a wavelet-based multi-resolution analysis (MRA). The joint MRA-TC analysis was applied to demonstrate scale-dependent biases between in situ, remotely sensed and modelled SM, the influence of various prospective bias correction schemes on these biases, and lastly to enable multi-scale bias correction and data-adaptive, non-linear de-noising via wavelet thresholding.
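A toy stand-in for the scale-dependent bias-correction idea: a one-level Haar decomposition (the simplest building block of a wavelet MRA) with a linear rescaling applied only to the fine-scale coefficients. The data and the single-level, variance-matching scheme below are illustrative, not the paper's method:

```python
# One-level Haar split into coarse (approximation) and fine (detail)
# parts, then a per-scale linear gain that matches the fine-scale
# variability of a noisy product to a reference. Synthetic data.

def haar_level(x):
    """Split an even-length series into approximation and detail parts."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def rms(v):
    return (sum(x * x for x in v) / len(v)) ** 0.5

sm = [0.30, 0.10, 0.40, 0.20, 0.35, 0.15, 0.45, 0.25]   # noisy product
ref = [0.20, 0.10, 0.25, 0.15, 0.22, 0.12, 0.27, 0.17]  # reference

a_s, d_s = haar_level(sm)
_, d_r = haar_level(ref)

# Rescale only the fine-scale variability of `sm` to match `ref`,
# leaving the coarse-scale signal untouched.
gain = rms(d_r) / rms(d_s)
corrected = haar_inverse(a_s, [gain * d for d in d_s])
print(round(rms(haar_level(corrected)[1]), 3))  # fine-scale rms now matches ref
```

A full MRA would iterate this split over several levels and fit one gain (or a regression) per timescale.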
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
Barrett, Louise; Henzi, S. Peter; Lusseau, David
2012-01-01
Understanding human cognitive evolution, and that of the other primates, means taking sociality very seriously. For humans, this requires the recognition of the sociocultural and historical means by which human minds and selves are constructed, and how this gives rise to the reflexivity and ability to respond to novelty that characterize our species. For other, non-linguistic, primates we can answer some interesting questions by viewing social life as a feedback process, drawing on cybernetics and systems approaches and using social network neo-theory to test these ideas. Specifically, we show how social networks can be formalized as multi-dimensional objects, and use entropy measures to assess how networks respond to perturbation. We use simulations and natural ‘knock-outs’ in a free-ranging baboon troop to demonstrate that changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalization of social networks provides a framework within which to predict network dynamics and evolution, helps us highlight how human and non-human social networks differ and has implications for theories of cognitive evolution. PMID:22734054
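The entropy measure mentioned above can be illustrated with Shannon entropy over interaction-outcome counts: a lower entropy means the outcomes of interactions are easier for group members to predict. The counts below are invented, and this is only a sketch of the general idea, not the paper's exact network measure:

```python
# Shannon entropy of an interaction-outcome distribution. A drop in
# entropy after a perturbation means outcomes became more predictable.
import math

def shannon_entropy(counts):
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

before = [5, 5, 5, 5]   # four equally likely outcomes: maximal uncertainty
after = [17, 1, 1, 1]   # one dominant outcome after the perturbation

print(shannon_entropy(before))                           # -> 2.0
print(shannon_entropy(after) < shannon_entropy(before))  # -> True
```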
A 3D multi-modal and multi-dimensional digital brain model as a framework for data sharing.
Mailly, Philippe; Haber, Suzanne N; Groenewegen, Henk J; Deniau, Jean-Michel
2010-12-15
Computer based three-dimensional reconstruction and co-registration of experimental data provide powerful tools for integration of observation derived from various technical approaches leading to better understanding of brain functions. Here we describe a method to build a 3D multi-modal and multi-dimensional model of brain structures providing framework for data sharing. All image processing, registration and 3D reconstruction were performed using open source software IMOD package software and ImageJ. The reconstruction procedure is based on series of AChE and Nissl stained sections aligned to blockface pictures. Integration of experimental data into the reference model is achieved by co-registration of Nissl sections of experimental brain cases by positioning landmarks on corresponding anatomical structures. To overcome the challenge of comparing for experimental sections with those of the reference model, adjustment of experimental model to the brain model was done section by section and limited to the structures of interest. For this adjustment we stress the use of cytoarchitectural criteria for accurate registration of anatomical structures and co-registration procedures. PMID:20043949
A high-order multi-dimensional HLL-Riemann solver for non-linear Euler equations
NASA Astrophysics Data System (ADS)
Capdeville, G.
2011-04-01
This article presents a numerical model that makes it possible to solve, on unstructured triangular meshes and with a high order of accuracy, the multi-dimensional Riemann problem that appears when solving hyperbolic problems. For this purpose, we use a MUSCL-like procedure in a "cell-vertex" finite-volume framework. In the first part of this procedure, we devise a four-state bi-dimensional HLL solver (HLL-2D). This solver is based upon the Riemann problem generated at the centre of gravity of a triangular cell from the surrounding cell-averages. A new three-wave model makes it possible to solve this problem approximately. A first-order version of the bi-dimensional Riemann solver is then generated for discretizing the full compressible Euler equations. In the second part of the MUSCL procedure, we develop a polynomial reconstruction that uses all the surrounding numerical data of a given point to give at best third-order accuracy. The resulting overdetermined system is solved using a least-squares methodology. To enforce monotonicity conditions in the polynomial interpolation, we develop a simplified central WENO (CWENO) procedure. Numerical tests and comparisons with competing numerical methods make it possible to identify the salient features of the whole model.
NASA Astrophysics Data System (ADS)
Anusha, L. S.; Nagendra, K. N.; Paletou, F.
2011-01-01
In the previous paper of this series, we presented a formulation of the polarized radiative transfer equation for resonance scattering with partial frequency redistribution (PRD) in multi-dimensional media for a two-level atom model with unpolarized ground level, using the irreducible spherical tensors T^K_Q(i, Ω) for polarimetry. We also presented a polarized approximate lambda iteration method to solve this equation using the Jacobi iteration scheme. The formal solution used was based on a simple finite volume technique. In this paper, we develop a faster and more efficient method which uses projection techniques applied to the radiative transfer equation (the Stabilized Preconditioned Bi-Conjugate Gradient method). We now use a more accurate formal solver, namely the well-known two-dimensional (2D) short characteristics method. Using the numerical method developed in Paper I, we could consider only the simpler cases of finite 2D slabs due to computational limitations. Using the method developed in this paper, we could compute PRD solutions in the more difficult context of semi-infinite 2D slabs as well. We present several solutions which may serve as benchmarks in future studies in this area.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2008-04-01
Many systems and processes, both natural and artificial, may be described by parameter-driven mathematical and physical models. We introduce a generally applicable Stochastic Optimization Framework (SOF) that can be interfaced to or wrapped around such models to optimize model outcomes by effectively "inverting" them. The Visual and Autonomous Exploration Systems Research Laboratory ( http://autonomy.caltech.edu ) at the California Institute of Technology (Caltech) has long-term experience in the optimization of multi-dimensional systems and processes. Several examples of the successful application of a SOF are reviewed and presented, including biochemistry, robotics, device performance, mission design, parameter retrieval, and fractal landscape optimization. Applications of a SOF are manifold, such as in science, engineering, industry, defense & security, and reconnaissance/exploration. Keywords: multi-parameter optimization, design/performance optimization, gradient-based steepest-descent methods, local minima, global minimum, degeneracy, overlap parameter distribution, fitness function, stochastic optimization framework, Simulated Annealing, Genetic Algorithms, Evolutionary Algorithms, Genetic Programming, Evolutionary Computation, multi-objective optimization, Pareto-optimal front, trade studies
NASA Astrophysics Data System (ADS)
Papalexandris, Miltiadis V.; Leonard, Anthony; Dimotakis, Paul E.
1997-11-01
This work describes an unsplit scheme for multi-dimensional systems of hyperbolic conservation laws with source terms, such as the compressible Euler equations for reacting flows. The scheme is an extension of the algorithm for 1-D problems (Papalexandris M.V., Leonard A., and Dimotakis P.E., J. Comp. Phys.), 134, pp. 31-61. It is a MUSCL-type, shock-capturing scheme that integrates all terms of the governing equations simultaneously. Appropriate families of space-time manifolds are introduced, along which the conservation equations decouple to the characteristic equations of the corresponding 1-D homogeneous system. Numerical integration of the characteristic equations is performed on these manifolds in the upwinding part of the algorithm. Numerical studies of 2-D detonations in channel flows will be presented. The proposed scheme appears capable of capturing many of the important details of the flow-fields. No explicit artificial-viscosity mechanisms or other fixes need to be employed with the proposed scheme.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
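One common way to measure an image's fractal dimension is box counting; the minimal sketch below operates on a binary grid and fits the slope of log N(s) against log(1/s). ICAMS itself offers several estimators (e.g. variogram-based), so this is only an illustration of the general idea:

```python
# Box-counting estimate of fractal dimension for a square binary grid:
# count occupied boxes N(s) at several box sizes s, then fit the slope
# of log N(s) versus log(1/s) by least squares.
import math

def box_count_dimension(grid, sizes):
    n = len(grid)
    logs = []
    for s in sizes:
        boxes = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if any(grid[a][b]
                       for a in range(i, min(i + s, n))
                       for b in range(j, min(j + s, n))):
                    boxes += 1
        logs.append((math.log(1 / s), math.log(boxes)))
    xs = [x for x, _ in logs]
    mx = sum(xs) / len(xs)
    my = sum(y for _, y in logs) / len(logs)
    return (sum((x - mx) * (y - my) for x, y in logs)
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 8x8 square should come out at dimension 2.
grid = [[1] * 8 for _ in range(8)]
d = box_count_dimension(grid, sizes=[1, 2, 4])
print(round(d, 2))  # -> 2.0
```

A rough coastline-like pattern would yield a non-integer slope between 1 and 2, which is the quantity used to characterize texture roughness across scales.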
Reactor Physics Methods and Analysis Capabilities in SCALE
DeHart, Mark D [ORNL; Bowman, Stephen M [ORNL
2011-01-01
The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.
Two-field analysis of no-scale supergravity inflation
NASA Astrophysics Data System (ADS)
Ellis, John; García, Marcos A. G.; Nanopoulos, Dimitri V.; Olive, Keith A.
2015-01-01
Since the building blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index ns and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values ≪ 0.1. We also calculate the non-Gaussianity measure fNL, finding that it is well below the current experimental sensitivity.
Mark Shevlin; Jeremy N. V. Miles; Christopher Alan Lewis
Greenspoon and Saklofske ((1998). Confirmatory factor analysis of the multidimensional Students' Life Satisfaction Scale. Personality and Individual Differences, 25, 965-971) present the results of a confirmatory factor analysis of the multidimensional Students' Life Satisfaction Scale. In this paper, we argue that the results of this analysis do not demonstrate, as the authors claim, that the proposed model is able to…
2012-01-01
Background Calculating the electrostatic surface potential (ESP) of a biomolecule is critical to understanding biomolecular function. Because of its quadratic computational complexity (as a function of the number of atoms in a molecule), there have been continual efforts to reduce its complexity either by improving the algorithm or the underlying hardware on which the calculations are performed. Results We present the combined effect of (i) a multi-scale approximation algorithm, known as hierarchical charge partitioning (HCP), applied to the calculation of ESP and (ii) its mapping onto a graphics processing unit (GPU). To date, most molecular modeling algorithms perform an artificial partitioning of biomolecules into a grid/lattice on the GPU. In contrast, HCP takes advantage of the natural partitioning in biomolecules, which, in turn, better facilitates its mapping onto the GPU. Specifically, we characterize the effect of known GPU optimization techniques such as the use of shared memory. In addition, we demonstrate how the cost of divergent branching on a GPU can be amortized across algorithms like HCP in order to deliver a massive performance boost. Conclusions We accelerated the calculation of ESP by 25-fold solely by parallelization on the GPU. Combining the GPU and HCP resulted in a speedup of up to 1,860-fold for our largest molecular structure. The baseline for these speedups is an implementation that has been hand-tuned, SSE-optimized, and parallelized across 16 cores on the CPU. The use of the GPU does not degrade the accuracy of our results. PMID:22537008
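The quadratic baseline that HCP and the GPU mapping accelerate is a direct pairwise Coulomb sum over all charge-point pairs. The sketch below uses arbitrary units and no solvent screening, and is not the paper's implementation:

```python
# Naive O(n*m) direct Coulomb sum for the potential at surface points:
# V(p) = sum_i q_i / |p - r_i|. This is the quadratic-cost baseline
# that multi-scale approximations like HCP aim to beat.

def direct_esp(charges, surface_points):
    """charges: list of (q, (x, y, z)); returns potential per point."""
    pots = []
    for sx, sy, sz in surface_points:
        v = 0.0
        for q, (x, y, z) in charges:
            r = ((sx - x) ** 2 + (sy - y) ** 2 + (sz - z) ** 2) ** 0.5
            v += q / r
        pots.append(v)
    return pots

charges = [(1.0, (0.0, 0.0, 0.0)), (-1.0, (2.0, 0.0, 0.0))]
print(direct_esp(charges, [(1.0, 0.0, 0.0)]))  # -> [0.0] (midpoint of a dipole)
```

HCP replaces distant groups of charges in this inner loop with a few approximate charges per group, cutting the effective number of pair interactions.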
Scaling and dimensional analysis of acoustic streaming jets
Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H. [Laboratoire de Mécanique des Fluides et d’Acoustique, CNRS/Université de Lyon, Ecole Centrale de Lyon/Université Lyon 1/INSA de Lyon, ECL, 36 Avenue Guy de Collongue, 69134 Ecully Cedex (France); Garandet, J.-P. [CEA, Laboratoire d’Instrumentation et d’Expérimentation en Mécanique des Fluides et Thermohydraulique, DEN/DANS/DM2S/STMF/LIEFT, CEA-Saclay, F-91191 Gif-sur-Yvette Cedex (France)
2014-09-15
This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.
Three decades of multi-dimensional change in global leaf phenology
NASA Astrophysics Data System (ADS)
Buitenwerf, Robert; Rose, Laura; Higgins, Steven I.
2015-04-01
Changes in the phenology of vegetation activity may accelerate or dampen rates of climate change by altering energy exchanges between the land surface and the atmosphere and can threaten species with synchronized life cycles. Current knowledge of long-term changes in vegetation activity is regional, or restricted to highly integrated measures of change such as net primary productivity, which mask details that are relevant for Earth system dynamics. Such details can be revealed by measuring changes in the phenology of vegetation activity. Here we undertake a comprehensive global assessment of changes in vegetation phenology. We show that the phenology of vegetation activity changed severely (by more than 2 standard deviations in one or more dimensions of phenological change) on 54% of the global land surface between 1981 and 2012. Our analysis confirms previously detected changes in the boreal and northern temperate regions. The adverse consequences of these northern phenological shifts for land-surface-climate feedbacks, ecosystems and species are well known. Our study reveals equally severe phenological changes in the southern hemisphere, where consequences for the energy budget and the likelihood of phenological mismatches are unknown. Our analysis provides a sensitive and direct measurement of ecosystem functioning, making it useful both for monitoring change and for testing the reliability of early warning signals of change.
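The severity criterion above (a shift of more than 2 standard deviations in one or more dimensions of phenological change) can be sketched as a simple per-pixel test. The dimensions, values, and units below are invented for illustration:

```python
# Flag a pixel as "severely changed" when its shift exceeds 2 standard
# deviations in at least one dimension of phenological change.
# Hypothetical dimensions: season length (days), peak greenness (index).

def severe_change(shifts, sds, threshold=2.0):
    """shifts/sds: per-dimension change and its standard deviation."""
    return any(abs(s) > threshold * sd for s, sd in zip(shifts, sds))

sds = [5.0, 0.1]
print(severe_change([12.0, 0.05], sds))  # -> True  (12 days > 2 * 5 days)
print(severe_change([4.0, 0.15], sds))   # -> False (neither dimension exceeds)
```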
NASA Astrophysics Data System (ADS)
Tsai, Chin-Chung; Liu, Shiang-Yao
2005-10-01
The purpose of this study was to describe the development and validation of an instrument to identify various dimensions of scientific epistemological views (SEVs) held by high school students. The instrument included five SEV dimensions (subscales): the role of social negotiation on science, the invented and creative reality of science, the theory-laden exploration of science, the cultural impacts on science, and the changing features of science. Six hundred and thirteen high school students in Taiwan responded to the instrument. Data analysis indicated that the instrument developed in this study had satisfactory validity and reliability measures. Correlation analysis and in-depth interviews supported the legitimacy of using multiple dimensions in representing student SEVs. Significant differences were found between male and female students, and between students’ and their teachers’ responses on some SEV dimensions. Suggestions were made about the use of the instrument to examine complicated interplays between SEVs and science learning, to evaluate science instruction, and to understand the cultural differences in epistemological views of science.
Evidence for a multi-dimensional latent structural model of externalizing disorders.
Witkiewitz, Katie; King, Kevin; McMahon, Robert J; Wu, Johnny; Luk, Jeremy; Bierman, Karen L; Coie, John D; Dodge, Kenneth A; Greenberg, Mark T; Lochman, John E; Pinderhughes, Ellen E
2013-02-01
Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) take into account the underlying continuum of externalizing behaviors. However, most of this research has not included measures of disorders that appear in childhood [e.g., attention-deficit/hyperactivity disorder (ADHD) or oppositional defiant disorder (ODD)], nor has it considered the full range of possibilities for the latent structure of externalizing behaviors, particularly factor mixture models, which allow for a latent factor to have both continuous and categorical dimensions. Finally, the majority of prior studies have not tested multidimensional models. Using lifetime diagnoses of externalizing disorders from participants in the Fast Track Project (n = 715), we analyzed a series of latent variable models ranging from fully continuous factor models to fully categorical mixture models. Continuous models provided the best fit to the observed data and also suggested that a two-factor model of externalizing behavior, defined as (1) ODD+ADHD+CD and (2) SUD with adult antisocial behavior sharing common variance with both factors, was necessary to explain the covariation in externalizing disorders. The two-factor model of externalizing behavior was then replicated using a nationally representative sample drawn from the National Comorbidity Survey-Replication data (n = 5,692). These results have important implications for the conceptualization of externalizing disorders in DSM-5. PMID:22936218
NASA Astrophysics Data System (ADS)
West, Ruth; Gossmann, Joachim; Margolis, Todd; Schulze, Jurgen P.; Lewis, J. P.; Hackbarth, Ben; Mostafavi, Iman
2009-02-01
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects the expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics, including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel barrier-strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation, along with a hybrid strategy, developed in the context of this artwork, for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude-, delay- and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
Geostatistical methods for analysis of multiple scales of variation in spatial data
John B. Collins
1998-01-01
Scale-related effects in digital images result from interaction between the measurement support size and multiple scales of landscape variation. Analysis of such effects requires consideration of two distinct scale-related concepts. First, most spatially varying properties exhibit multiple characteristic scales, or typical spatial periodicities. Second, observation scale, equivalent to sensor resolution, controls the degree to which scene properties can be detected
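The basic geostatistical tool for detecting such characteristic scales is the empirical semivariogram, γ(h) = half the mean squared difference between values separated by lag h. A minimal 1-D sketch with synthetic transect values (not data from the thesis) follows:

```python
# Empirical semivariogram for a 1-D transect: gamma(h) is half the
# mean squared difference between observations separated by lag h.
# Dips in gamma(h) reveal characteristic spatial periodicities.

def semivariogram(z, lags):
    gamma = {}
    for h in lags:
        pairs = [(z[i] - z[i + h]) ** 2 for i in range(len(z) - h)]
        gamma[h] = 0.5 * sum(pairs) / len(pairs)
    return gamma

z = [0, 1, 0, 1, 0, 1, 0, 1]  # pure period-2 oscillation
g = semivariogram(z, lags=[1, 2])
print(g[1], g[2])  # -> 0.5 0.0 (lag 2 matches the characteristic scale)
```

On 2-D imagery the same statistic is computed over pixel pairs at each separation distance, and its shape reflects the interaction of sensor resolution with landscape scales of variation.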
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project is about the development of high-order, non-oscillatory schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.
A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Van Leer, Bram
1989-01-01
The solution of the two-dimensional Euler equations is based on the two-dimensional linear convection equation and the Euler-equation decomposition developed by Hirsch et al. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative, and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.
A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Van Leer, Bram
1989-01-01
A scheme of solving the two-dimensional Euler equations is developed. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.
SAMNet: a network-based approach to integrate multi-dimensional high throughput datasets
Gosline, Sara JC; Spencer, Sarah J; Ursu, Oana; Fraenkel, Ernest
2012-01-01
The rapid development of high throughput biotechnologies has led to an onslaught of data describing genetic perturbations and changes in mRNA and protein levels in the cell. Because each assay provides a one-dimensional snapshot of active signaling pathways, it has become desirable to perform multiple assays (e.g. mRNA expression and phospho-proteomics) to measure a single condition. However, as experiments expand to accommodate various cellular conditions, proper analysis and interpretation of these data have become more challenging. Here we introduce a novel approach called SAMNet, for Simultaneous Analysis of Multiple Networks, that is able to interpret diverse assays over multiple perturbations. The algorithm uses a constrained optimization approach to integrate mRNA expression data with upstream genes, selecting edges in the protein-protein interaction network that best explain the changes across all perturbations. The result is a putative set of protein interactions that succinctly summarizes the results from all experiments, highlighting the network elements unique to each perturbation. We evaluated SAMNet in both yeast and human datasets. The yeast dataset measured the cellular response to seven different transition metals, and the human dataset measured cellular changes in four different lung cancer models of Epithelial-Mesenchymal Transition (EMT), a crucial process in tumor metastasis. SAMNet was able to identify canonical yeast metal-processing genes unique to each commodity in the yeast dataset, as well as human genes such as β-catenin and TCF7L2/TCF4 that are required for EMT signaling but escaped detection in the mRNA and phospho-proteomic data. Moreover, SAMNet also highlighted drugs likely to modulate EMT, identifying a series of less canonical genes known to be affected by the BCR-ABL inhibitor imatinib (Gleevec), suggesting a possible influence of this drug on EMT. PMID:23060147
Kylie Redfern; John Crawford
2010-01-01
This paper investigates the influence of modernisation on the moral judgements of 211 managers in the People's Republic of China, based on their responses to a series of vignettes depicting potentially unethical behaviour in organisations. Since modernisation can take many forms, this paper takes a multi-dimensional approach to examining levels of modernisation in different provinces in China. Three different
Multi-scale analysis and simulation of powder blending in pharmaceutical manufacturing
Ngai, Samuel S. H
2005-01-01
A Multi-Scale Analysis methodology was developed and carried out for gaining fundamental understanding of the pharmaceutical powder blending process. Through experiment, analysis and computer simulations, microscopic ...
Irregularities and scaling in signal and image processing: multifractal analysis
NASA Astrophysics Data System (ADS)
Abry, Patrice; Jaffard, Stéphane; Wendt, Herwig
2015-03-01
B. Mandelbrot gave a new birth to the notions of scale invariance, self-similarity and non-integer dimensions, gathering them as the founding corner-stones used to build up fractal geometry. The first purpose of the present contribution is to review and relate together these key notions, explore their interplay and show that they are different facets of a single intuition. Second, we will explain how these notions lead to the derivation of the mathematical tools underlying multifractal analysis. Third, we will reformulate these theoretical tools into a wavelet framework, hence enabling their better theoretical understanding as well as their efficient practical implementation. B. Mandelbrot used his concept of fractal geometry to analyze real-world applications of very different natures. As a tribute to his work, applications of various origins, and where multifractal analysis proved fruitful, are revisited to illustrate the theoretical developments proposed here.
Multi-dimensional combustor flowfield analyses in gas-gas rocket engine
NASA Technical Reports Server (NTRS)
Tsuei, Hsin-Hua; Merkle, Charles L.
1994-01-01
The objectives of the present research are to improve design capabilities for low-thrust rocket engines through understanding of the detailed mixing and combustion processes. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as a coordinated part of an on-going experimental program at NASA LeRC. Detailed computational modeling requires the application of the full three-dimensional Navier-Stokes equations, coupled with species diffusion equations. The numerical procedure uses both time-marching and time-accurate algorithms, with an LU approximate factorization in time and flux-split upwind differencing in space. The emphasis in this paper is on using numerical analysis to understand detailed combustor flowfields, including the shear-layer dynamics created between the fuel film cooling and the core gas in the vicinity of the combustor wall; the integrity and effectiveness of the coolant film; three-dimensional fuel-jet injection/mixing/combustion characteristics; and their impacts on global engine performance.
Multidimensional sphere model and instantaneous vegetation trend analysis
T. Jay Bai; Tom Cottrell; Dun-Yuan Hao; Tala Te; Robert J. Brozka
1997-01-01
The Multi-Dimensional Sphere Model (MDSM), a new method for multivariate instantaneous trend analysis, is introduced. The model handles three-subscript data, Z(i,j,k); e.g., for vegetation analysis, i, j and k are species, quadrats and time, respectively. The MDSM uses species, or species groups, as dimensions of a multi-dimensional space, and quadrats as points (vectors) in the space. The quadrats are
Crater ejecta scaling laws: fundamental forms based on dimensional analysis
Housen, K.R.; Schmidt, R.M.; Holsapple, K.A.
1983-03-10
A model of crater ejecta is constructed using dimensional analysis and a recently developed theory of energy and momentum coupling in cratering events. General relations are derived that provide a rationale for scaling laboratory measurements of ejecta to larger events. Specific expressions are presented for ejection velocities and ejecta blanket profiles in two limiting regimes of crater formation: the so-called gravity and strength regimes. In the gravity regime, ejecta velocities at geometrically similar launch points within craters vary as the square root of the product of crater radius and gravity. This relation implies geometric similarity of ejecta blankets. That is, the thickness of an ejecta blanket as a function of distance from the crater center is the same for all sizes of craters if the thickness and range are expressed in terms of crater radii. In the strength regime, ejecta velocities are independent of crater size. Consequently, ejecta blankets are not geometrically similar in this regime. For points away from the crater rim the expressions for ejecta velocities and thickness take the form of power laws. The exponents in these power laws are functions of an exponent, α, that appears in crater radius scaling relations. Thus experimental studies of the dependence of crater radius on impact conditions determine scaling relations for ejecta. Predicted ejection velocities and ejecta-blanket profiles, based on measured values of α, are compared to existing measurements of velocities and debris profiles.
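The gravity-regime relation above (ejection velocity varying as the square root of the product of crater radius and gravity at geometrically similar launch points) can be sketched as a simple scaling rule. The numbers below are hypothetical laboratory values, not data from the paper:

```python
import math

def gravity_regime_velocity(v_ref, R_ref, g_ref, R, g):
    """Gravity-regime ejecta scaling: at geometrically similar launch
    points, ejection velocity varies as sqrt(g * R), so a reference
    measurement can be rescaled to another crater size and gravity."""
    return v_ref * math.sqrt((g * R) / (g_ref * R_ref))

# Hypothetical example: scale a lab measurement up to a larger event.
v_lab = 2.0                   # m/s at a similar launch point (illustrative)
R_lab, g_lab = 0.1, 9.81      # 10 cm lab crater under Earth gravity
R_big, g_big = 1000.0, 1.62   # 1 km crater under lunar gravity

v_big = gravity_regime_velocity(v_lab, R_lab, g_lab, R_big, g_big)
print(v_big)
```

Because thickness and range scale with crater radius in this regime, the same rule underlies the geometric similarity of ejecta blankets noted in the abstract.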
A Multi-scale Approach to Urban Thermal Analysis
NASA Technical Reports Server (NTRS)
Gluch, Renne; Quattrochi, Dale A.
2005-01-01
An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.
Flux coupling analysis of genome-scale metabolic network reconstructions.
Burgard, Anthony P; Nikolaev, Evgeni V; Schilling, Christophe H; Maranas, Costas D
2004-02-01
In this paper, we introduce the Flux Coupling Finder (FCF) framework for elucidating the topological and flux connectivity features of genome-scale metabolic networks. The framework is demonstrated on genome-scale metabolic reconstructions of Helicobacter pylori, Escherichia coli, and Saccharomyces cerevisiae. The analysis allows one to determine whether any two metabolic fluxes, v(1) and v(2), are (1) directionally coupled, if a non-zero flux for v(1) implies a non-zero flux for v(2) but not necessarily the reverse; (2) partially coupled, if a non-zero flux for v(1) implies a non-zero, though variable, flux for v(2) and vice versa; or (3) fully coupled, if a non-zero flux for v(1) implies not only a non-zero but also a fixed flux for v(2) and vice versa. Flux coupling analysis also enables the global identification of blocked reactions, which are all reactions incapable of carrying flux under a certain condition; equivalent knockouts, defined as the set of all possible reactions whose deletion forces the flux through a particular reaction to zero; and sets of affected reactions denoting all reactions whose fluxes are forced to zero if a particular reaction is deleted. The FCF approach thus provides a novel and versatile tool for aiding metabolic reconstructions and guiding genetic manipulations. PMID:14718379
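The three coupling types defined above can be classified from the extreme values of the flux ratio R = v(1)/v(2) over all feasible flux distributions. The FCF framework obtains those bounds by linear programming; the sketch below assumes the bounds are already computed and illustrates only the classification step:

```python
import math

def classify_coupling(r_min, r_max, tol=1e-9):
    """Classify the coupling between fluxes v1 and v2 from the minimum
    and maximum of R = v1/v2 over the feasible flux space.
    A finite r_max means v2 = 0 forces v1 = 0 (nonzero v1 implies
    nonzero v2); r_min > 0 means the reverse implication holds."""
    bounded = math.isfinite(r_max)
    if r_min > tol and bounded:
        return "fully coupled" if abs(r_max - r_min) <= tol else "partially coupled"
    if bounded:          # r_min == 0: v2 can run while v1 vanishes
        return "directionally coupled (v1 -> v2)"
    if r_min > tol:      # r_max unbounded: v1 can run while v2 vanishes
        return "directionally coupled (v2 -> v1)"
    return "uncoupled"

# Toy linear pathway A --v1--> B --v2--> C: the steady-state balance on B
# forces v1 = v2, so R = 1 everywhere and the pair is fully coupled.
print(classify_coupling(1.0, 1.0))   # fully coupled
print(classify_coupling(0.5, 2.0))   # partially coupled
print(classify_coupling(0.0, 3.0))   # directionally coupled (v1 -> v2)
```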
NASA Astrophysics Data System (ADS)
Over, M. W.; Murakami, H.; Hahn, M. S.; Yang, Y.; Rubin, Y.
2010-12-01
The method of anchored distributions (MAD, Rubin et al., Water Resour. Res., 2010) is a Bayesian inversion technique that combines geostatistical concepts with a strategy for localization of data that is indirectly related to the target variables, using anchors. Anchors are statistical distributions of the target variables (e.g., the hydraulic conductivity) at specific locations. The variable field is described by the statistical distributions of structural parameters that characterize global features and by anchor distributions that intend to capture local effects. The posterior distributions of structural and anchor parameter sets are used to update the approximate spatial distribution of the target variable and are generated by re-sampling the parameter sets using their normalized likelihood estimates as the probability of being selected. Increasing the dimension of the data, to include additional information in the likelihood estimate, increases the computational burden. Two measures are taken to accommodate the advantageous additional data without spurious side effects: (1) partitioning parameter sets into hypercubes, based upon the similarity of the structural parameter values; and (2) principal component analysis, to reduce the dimensionality by discarding a certain percentage of principal components. As an additional feature for large sample sets, or faster calculation, a 'bundling' regime can be implemented. Bundling is employed immediately after partitioning the parameter sets into hypercubes. Bundling identifies spatial patterns amongst the realizations generated from the distributions defining the anchor parameters. The added organizational step allows data with reduced sample sizes to be passed to the PCA algorithm. The division of the data set allows for simple parallelization of the computation, and our case study achieved a three-fold dimension reduction.
Because of the high dimension involved in the calculation, without absurdly large sample sizes, it is reasonable to assume that the data sparsely populates the hyperspace. In order to avoid using an interpolation scheme that would average and smooth the likelihood distribution over extensive regions of unpopulated hyperspace, the data is scanned for clusters using the HOPACH algorithm authored by M. Van der Laan. The density is estimated, over the clusters, non-parametrically. The cluster approximations are summed up using a mixture model to achieve the final likelihood estimate.
MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 A
Anusha, L. S.; Nagendra, K. N., E-mail: bhasari@mps.mpg.de, E-mail: knn@iiap.res.in [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bangalore 560 034 (India)
2013-04-20
Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in the observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 A using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed by a two-dimensional snapshot of the three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore the chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by MHD in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, but in the line core only the line-center polarization can be reproduced using the Hanle effect.
For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with inhomogeneities in the chromosphere are required.
NASA Astrophysics Data System (ADS)
Pandarinath, Kailasa
2014-12-01
Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and a probability-based procedure have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. Numerous studies have extensively evaluated these newly developed diagrams and indicated their successful application in determining the original tectonic setting of younger and older, as well as sea-water and hydrothermally altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) a dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) a dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) within-plate (continental rift-ocean island) and continental rift (CR) settings for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and of 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yield inconsistent inferences of tectonic setting.
The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of Precambrian rocks.
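A minimal sketch of the log-ratio transformation on which these discrimination diagrams are built. The oxide values and the choice of SiO2 as the reference are illustrative; the discriminant functions and probability calculations of the actual diagrams are not reproduced here:

```python
import math

def log_ratios(composition, denominator="SiO2"):
    """Log-ratio variables of the kind used by multi-dimensional
    discrimination diagrams: ln(oxide / reference oxide) for each
    major-element oxide. The reference-oxide choice is illustrative."""
    ref = composition[denominator]
    return {k: math.log(v / ref) for k, v in composition.items() if k != denominator}

# Hypothetical major-element analysis (wt%) of a basic rock.
sample = {"SiO2": 49.2, "TiO2": 1.8, "Al2O3": 14.1, "MgO": 7.5}
print(log_ratios(sample))
```

The log-ratio step removes the closure (constant-sum) effect of compositional data before the discriminant functions are evaluated.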
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage.
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
Large-scale dimension densities for heart rate variability analysis
NASA Astrophysics Data System (ADS)
Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen
2006-04-01
In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF (σls=0.97±0.02) group from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65±0.13; EH, 0.54±0.05; YH, 0.57±0.05; p<0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in σls (day, 0.65±0.13; night, 0.66±0.12; n.s.) in contrast to healthy controls (day, 0.54±0.05; night, 0.61±0.05; p=0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
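A minimal sketch of the underlying Grassberger-Procaccia quantities: the correlation sum and the local slope of log C(r) versus log r. The LASDID normalization and boundary-effect correction described above are not reproduced here:

```python
import math
import random

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation sum: fraction of point pairs
    whose distance is below r (points would be delay-embedded vectors
    of the HRV time series in the application above)."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

def dimension_estimate(points, r1, r2):
    """Local slope of log C(r) vs log r between two (large) scales --
    the raw quantity that the LASDID approach normalizes to correct
    boundary-effect bias (that correction is not shown here)."""
    c1, c2 = correlation_sum(points, r1), correlation_sum(points, r2)
    return (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

# Points spread over a 2-D square give a slope near 2 at intermediate
# scales; finite-sample and boundary effects bias it low -- exactly the
# systematic error the large-scale correction targets.
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(400)]
print(dimension_estimate(pts, 0.05, 0.2))
```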
Anomaly Detection in Multiple Scale for Insider Threat Analysis
Kim, Yoohwan [ORNL]; Sheldon, Frederick T. [ORNL]; Hively, Lee M. [ORNL]
2012-01-01
We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided with semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to events that are more significant and suspicious. Then we build personal activity profiles in the form of score tables. Profiles are created in multiple scales where the low-level profiles are aggregated toward more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created in different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally the subjects are ranked by their anomaly scores to help the analysts focus on high-scored subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user and including any additional insights to help the analysts. In this paper we show the framework of the proposed system and the operational algorithms.
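The profile-comparison step can be sketched as a per-feature z-score of the current score table against historical ones. The feature names and the z-score aggregation below are illustrative stand-ins, not the paper's semantic scoring rules:

```python
import statistics

def anomaly_score(current, history):
    """Compare a current activity profile (event type -> score) with
    historical profiles: per-feature absolute z-score, summed over
    features. Aggregation rule and features are illustrative."""
    score = 0.0
    for feature in current:
        past = [h.get(feature, 0.0) for h in history]
        mu = statistics.mean(past)
        sd = statistics.pstdev(past) or 1.0  # guard against flat history
        score += abs(current[feature] - mu) / sd
    return score

# Hypothetical score tables: three historical days and two candidates.
history = [{"email": 10, "file_access": 5},
           {"email": 12, "file_access": 6},
           {"email": 11, "file_access": 5}]
normal_day = {"email": 11, "file_access": 6}
unusual_day = {"email": 40, "file_access": 30}

print(anomaly_score(normal_day, history))
print(anomaly_score(unusual_day, history))
```

Subjects would then be ranked by this score, as the abstract describes, before the subgraph extraction step.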
Large-Scale Quantitative Analysis of Painting Arts
NASA Astrophysics Data System (ADS)
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances.
Large-Scale Quantitative Analysis of Painting Arts
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-01-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of paintings to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877
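One of the three measures, the variety of colors, can be sketched by counting distinct quantized colors in an image's pixels. This is a simple stand-in for illustration only; the paper's actual metric may differ:

```python
def color_variety(pixels, levels=4):
    """Color-variety sketch: quantize each RGB channel into `levels`
    bins and count the distinct quantized colors that appear.
    `levels` and the quantization scheme are illustrative choices."""
    step = 256 // levels
    return len({(r // step, g // step, b // step) for r, g, b in pixels})

# Hypothetical pixel lists: a near-monochrome patch vs a varied one.
flat = [(10, 10, 10)] * 100
mixed = [(r, 255 - r, (r * 7) % 256) for r in range(0, 256, 16)]
print(color_variety(flat), color_variety(mixed))
```

A low count for a medieval-style restricted palette versus a high count for a varied one mirrors the qualitative finding reported above.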
Genome-scale analysis of demographic history and adaptive selection.
Wu, Qi; Zheng, Pingping; Hu, Yibo; Wei, Fuwen
2014-02-01
One of the main topics in population genetics is identification of adaptive selection among populations. For this purpose, population history should be correctly inferred to evaluate the effect of random drift and exclude it in selection identification. With the rapid progress in genomics in the past decade, vast genome-scale variations are available for population genetic analysis, which however requires more sophisticated models to infer species' demographic history and robust methods to detect local adaptation. Here we aim to review what have been achieved in the fields of demographic modeling and selection detection. We summarize their rationales, implementations, and some classical applications. We also propose that some widely-used methods can be improved in both theoretical and practical aspects in near future. PMID:24474201
Global Mapping Analysis: Stochastic Gradient Algorithm in Multidimensional Scaling
NASA Astrophysics Data System (ADS)
Matsuda, Yoshitatsu; Yamaguchi, Kazunori
In order to implement multidimensional scaling (MDS) efficiently, we propose a new method named “global mapping analysis” (GMA), which applies stochastic approximation to minimizing MDS criteria. GMA can solve MDS more efficiently in both the linear case (classical MDS) and the non-linear one (e.g., ALSCAL), provided the MDS criteria are polynomial. GMA separates the polynomial criteria into local factors and global ones. Because the global factors need to be calculated only once in each iteration, GMA is of linear order in the number of objects. Numerical experiments on artificial data verify the efficiency of GMA. It is also shown that GMA can uncover various interesting structures in massive document collections.
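A plain per-pair stochastic-gradient sketch of MDS stress minimization; unlike GMA, it does not separate local and global factors, so it illustrates only the stochastic-approximation idea applied to an MDS criterion:

```python
import math
import random

def sgd_mds(dissim, dim=2, iters=20000, lr=0.01, seed=0):
    """Stochastic-gradient MDS sketch: repeatedly sample a pair (i, j)
    and nudge both points to reduce (||x_i - x_j|| - d_ij)^2.
    Iteration count and learning rate are illustrative choices."""
    rng = random.Random(seed)
    n = len(dissim)
    x = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        diff = [a - b for a, b in zip(x[i], x[j])]
        dist = math.hypot(*diff) or 1e-12
        g = lr * (dist - dissim[i][j]) / dist
        for k in range(dim):
            x[i][k] -= g * diff[k]
            x[j][k] += g * diff[k]
    return x

# Three objects with equilateral dissimilarities should embed as a
# triangle with mutual distances close to 1.
D = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
emb = sgd_mds(D)
```

GMA's contribution is precisely to avoid the per-pair cost by computing the global factors once per iteration, giving linear order in the number of objects.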
INUNDATION PATTERNS AND FATALITY ANALYSIS ON LARGE-SCALE FLOOD
NASA Astrophysics Data System (ADS)
Ikeuchi, Koji; Ochi, Shigeo; Yasuda, Goro; Okamura, Jiro; Aono, Masashi
In order to enhance emergency preparedness for large-scale floods of the Ara River, we categorized the inundation patterns and calculated fatality estimates. We devised an effective continuous embankment elevation estimation method employing light detection and ranging data analysis. Drainage pump capabilities, in terms of operable inundation depth and operable duration limited by fuel supply logistics, were modeled from pump station data for each site along the rivers. Fatality reduction effects due to the enhancement of the drainage capabilities were calculated. We found that proper operation of the drainage facilities can decrease the number of estimated fatalities considerably in some cases. We also estimated the difference in risk between floods with a 200-year return period and those with a 1000-year return period. In some of the 1000-year return period cases, we found that the estimated fatalities jumped up whereas the populations in inundated areas changed only a little.
Multidimensional Scaling Analysis of the Dynamics of a Country Economy
Mata, Maria Eugénia
2013-01-01
This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents multidimensional scaling (MDS) maps for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely by identifying similarities and dissimilarities that mark periods of prosperity and crisis, growth and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for Portuguese society in the rising globalization process. PMID:24294132
Genome-scale network analysis of imprinted human metabolic genes.
Sigurdsson, Martin I; Jamshidi, Neema; Jonsson, Jon J; Palsson, Bernhard O
2009-01-01
System analysis of metabolic network reconstructions can be used to calculate functional states or phenotypes. This provides tools to study the metabolic effects of genetic and epigenetic properties, such as dosage sensitivity. We used the genome-scale reconstruction of human metabolism (Recon 1) to analyze the effect of nine known or predicted imprinted genes on metabolic phenotypes. Simulations of maternal deletion of ATP10A indicated an anabolic metabolism consistent with the known clinical phenotypes of obesity. The abnormal expression of the other genes affected fewer subsections of metabolism consistent with a lack of established clinical phenotypes. We found that four of nine genes had metabolic effect as predicted by the Haig's parental conflict theory. PMID:19218833
Psychometric analysis of the Ten-Item Perceived Stress Scale.
Taylor, John M
2015-03-01
Although the 10-item Perceived Stress Scale (PSS-10) is a popular measure, a review of the literature reveals 3 significant gaps: (a) There is some debate as to whether a 1- or a 2-factor model best describes the relationships among the PSS-10 items, (b) little information is available on the performance of the items on the scale, and (c) it is unclear whether PSS-10 scores are subject to gender bias. These gaps were addressed in this study using a sample of 1,236 adults from the National Survey of Midlife Development in the United States II. Based on self-identification, participants were 56.31% female, 77% White, 17.31% Black and/or African American, and the average age was 54.48 years (SD = 11.69). Findings from an ordinal confirmatory factor analysis suggested the relationships among the items are best described by an oblique 2-factor model. Item analysis using the graded response model provided no evidence of item misfit and indicated both subscales have a wide estimation range. Although t tests revealed a significant difference between the means of males and females on the Perceived Helplessness Subscale (t = 4.001, df = 1234, p < .001), measurement invariance tests suggest that PSS-10 scores may not be substantially affected by gender bias. Overall, the findings suggest that inferences made using PSS-10 scores are valid. However, this study calls into question inferences where the multidimensionality of the PSS-10 is ignored. (PsycINFO Database Record (c) 2015 APA, all rights reserved). PMID:25346996
Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis
ERIC Educational Resources Information Center
Chow, Kui Foon; Kennedy, Kerry John
2014-01-01
International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…
Wolfgang Bock
2014-01-21
We give an outlook on how to carry over the ideas of complex scaling of Grothaus, Vogel and Streit to phase-space path integrals in the framework of White Noise Analysis. The idea of this scaling method goes back to Doss. To this end we extend the concept of complex scaling to scaling with suitable bounded operators.
Confirmatory Factor Analysis of the Educators' Attitudes toward Educational Research Scale
ERIC Educational Resources Information Center
Ozturk, Mehmet Ali
2011-01-01
This article reports results of a confirmatory factor analysis performed to cross-validate the factor structure of the Educators' Attitudes Toward Educational Research Scale. The original scale had been developed by the author and revised based on the results of an exploratory factor analysis. In the present study, the revised scale was given to…
Spatial data analysis for exploration of regional scale geothermal resources
NASA Astrophysics Data System (ADS)
Moghaddam, Majid Kiavarz; Noorollahi, Younes; Samadzadegan, Farhad; Sharifi, Mohammad Ali; Itoi, Ryuichi
2013-10-01
Defining a comprehensive conceptual model of the resources sought is one of the most important steps in geothermal potential mapping. In this study, Fry analysis as a spatial distribution method and 5% well existence, distance distribution, weights of evidence (WofE), and evidential belief function (EBF) methods as spatial association methods were applied comparatively to known geothermal occurrences and to publicly available regional-scale geoscience data in Akita and Iwate provinces within the Tohoku volcanic arc, in northern Japan. Fry analysis and rose diagrams revealed similar directional patterns of geothermal wells and volcanoes, NNW-, NNE-, NE-trending faults, hot springs and fumaroles. Among the spatial association methods, WofE defined a conceptual model consistent with real-world conditions, as confirmed by expert opinion. The results of the spatial association analyses quantitatively indicated that the known geothermal occurrences are strongly spatially associated with geological features such as volcanoes, craters and NNW-, NNE-, NE-trending faults, and with geochemical features such as hot springs, hydrothermal alteration zones and fumaroles. The geophysical evidence comprises temperature gradients over 100 °C/km and heat flow over 100 mW/m². In general, geochemical and geophysical data were better evidence layers than geological data for exploring geothermal resources. The spatial analyses of the case study area suggested that quantitative knowledge of hydrothermal geothermal resources is significantly useful for further exploration and for geothermal potential mapping in the region. The results can also be extended to regions with similar characteristics.
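The WofE weights quantify the spatial association between a binary evidence layer and known occurrences. A minimal sketch of the standard weight formulas, with hypothetical cell counts (the counts below are illustrative, not from the study):

```python
import math

def wofe_weights(n_bd, n_b, n_d, n_total):
    """Weights of evidence for a binary evidence layer B (e.g., a fault
    buffer) and known occurrences D, from unit-cell counts."""
    p_b_d = n_bd / n_d                         # P(B | D)
    p_b_nd = (n_b - n_bd) / (n_total - n_d)    # P(B | not D)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1.0 - p_b_d) / (1.0 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus   # contrast C = W+ - W-

# hypothetical counts: 10,000 cells, 100 occurrences, 1,000 cells inside
# the evidence layer, 60 occurrences falling inside it
w_plus, w_minus, contrast = wofe_weights(n_bd=60, n_b=1000, n_d=100, n_total=10000)
```

A large positive contrast indicates a strong positive spatial association, which is the criterion used to rank evidence layers.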
Brabets, Timothy P.; Conaway, Jeffrey S.
2009-01-01
The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft3/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough. 
Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would likely fill in during high flows. Extending Bridge 339 would accommodate higher discharges and re-align flow to the bridge.
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...
Jiang, Boyang
2012-02-14
parameters (bottom friction, eddy viscosity, etc.); errors in input fields and errors in the specification of boundary information (lateral boundary conditions, etc.). Errors in input parameters can be addressed with fairly straightforward parameter...
NASA Technical Reports Server (NTRS)
Cao, Yiding; Faghri, Amir; Chang, Won Soon
1989-01-01
An enthalpy transforming scheme is proposed to convert the energy equation into a nonlinear equation with the enthalpy, E, being the single dependent variable. The existing control-volume finite-difference approach is modified so it can be applied to the numerical performance of Stefan problems. The model is tested by applying it to a three-dimensional freezing problem. The numerical results are in agreement with those existing in the literature. The model and its algorithm are further applied to a three-dimensional moving heat source problem showing that the methodology is capable of handling complicated phase-change problems with fixed grids.
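The core of the enthalpy transformation is that a single variable E evolves by the heat equation while the temperature is recovered from E through the phase diagram, so the moving phase front never has to be tracked explicitly. A 1-D sketch on a fixed grid (the grid size, nondimensional constants, and boundary treatment are illustrative assumptions, not the paper's 3-D control-volume scheme):

```python
import numpy as np

L = 1.0                                # latent heat (nondimensional)
c = 1.0                                # heat capacity
k = 1.0                                # thermal conductivity
n = 50
dx = 1.0 / n
dt = 0.4 * dx**2 / k                   # explicit stability margin r = 0.4 < 0.5

def temperature(E):
    """Recover temperature from enthalpy: solid (E < 0), mushy
    (0 <= E <= L, held at the melting point T = 0), liquid (E > L)."""
    T = np.where(E < 0.0, E / c, 0.0)
    return np.where(E > L, (E - L) / c, T)

E = np.full(n, -0.5)                   # initially solid, below melting point
for _ in range(200):
    T = temperature(E)
    T_bc = np.concatenate(([1.0], T, [T[-1]]))   # hot left wall, insulated right
    E = E + dt * k * np.diff(T_bc, 2) / dx**2    # explicit enthalpy update

melted = int(np.sum(temperature(E) > 0.0))       # cells fully liquid
```

The hot wall melts the leftmost cells through the latent-heat plateau while the far field remains solid, illustrating how a fixed-grid scheme captures the phase change.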
ERIC Educational Resources Information Center
Sengupta, Atanu; Pal, Naibedya Prasun
2012-01-01
Primary education is essential for the economic development in any country. Most studies give more emphasis to the final output (such as literacy, enrolment etc.) rather than the delivery of the entire primary education system. In this paper, we study the school level data from an Indian district, collected under the official DISE statistics. We…
S. C. Irvine; D. M. Paganin; S. Dubsky; R. A. Lewis; A. Fouras
Three-dimensional flow measurements, obtained from high-resolution synchrotron-based X-ray phase contrast images of blood in vitro, are presented. Using data collected on beamline BL20XU at the SPring-8 synchrotron in Hyogo, Japan, we demonstrate the benefits to be gained by preprocessing of speckled X-ray phase contrast images prior to PIV analysis. Such preprocessing techniques include use of a
Chuan Li; Changjie Tang; Jing Peng; Jianjun Hu; Lingming Zeng; Xiaoxiong Yin; Yongguang Jiang; Juan Liu
2004-01-01
This paper introduces the architecture and algorithms of TCMiner, a high-performance data mining system for multi-dimensional data analysis of Traditional Chinese Medicine (TCM) prescriptions. The system has the following competing advantages: (1) high performance; (2) multi-dimensional data analysis capability; (3) high flexibility; (4) powerful interoperability; (5) special optimization for TCM. This data mining system can work as a powerful assistant
Age Differences on Alcoholic MMPI Scales: A Discriminant Analysis Approach.
ERIC Educational Resources Information Center
Faulstich, Michael E.; And Others
1985-01-01
Administered the Minnesota Multiphasic Personality Inventory to 91 male alcoholics after detoxification. Results indicated that the Psychopathic Deviant and Paranoia scales declined with age, while the Responsibility scale increased with age. (JAC)
Dream Intensity Scale: Factors in the Phenomenological Analysis of Dreams
Calvin Kai-Ching Yu
2010-01-01
The present study aimed to develop a comprehensive assessment tool for measuring subjective dream intensity by revising the original probes and response scales of the Dream Intensity Inventory and incorporating new variables. The factor analyses suggested that 18 items of the new instrument, Dream Intensity Scale, could form four scales and six subscales. The revision of the probes and response
Large-Scale Candidate Gene Analysis of HDL Particle Features
Kaess, Bernhard M.; Tomaszewski, Maciej; Braund, Peter S.; Stark, Klaus; Rafelt, Suzanne; Fischer, Marcus; Hardwick, Robert; Nelson, Christopher P.; Debiec, Radoslaw; Huber, Fritz; Kremer, Werner; Kalbitzer, Hans Robert; Rose, Lynda M.; Chasman, Daniel I.; Hopewell, Jemma; Clarke, Robert; Burton, Paul R.; Tobin, Martin D.
2011-01-01
Background HDL cholesterol (HDL-C) is an established marker of cardiovascular risk with significant genetic determination. However, HDL particles are not homogenous, and refined HDL phenotyping may improve insight into regulation of HDL metabolism. We therefore assessed HDL particles by NMR spectroscopy and conducted a large-scale candidate gene association analysis. Methodology/Principal Findings We measured plasma HDL-C and determined mean HDL particle size and particle number by NMR spectroscopy in 2024 individuals from 512 British Caucasian families. Genotyping covered 49,094 SNPs in >2,100 cardiometabolic candidate genes/loci as represented on the HumanCVD BeadChip version 2. False discovery rates (FDR) were calculated to account for multiple testing. Analyses on classical HDL-C revealed significant associations (FDR < 0.05) only for CETP (cholesteryl ester transfer protein; lead SNP rs3764261: p = 5.6×10⁻¹⁵) and SGCD (sarcoglycan delta; rs6877118: p = 8.6×10⁻⁶). In contrast, analysis with HDL mean particle size yielded additional associations in LIPC (hepatic lipase; rs261332: p = 6.1×10⁻⁹), PLTP (phospholipid transfer protein; rs4810479: p = 1.7×10⁻⁸) and FBLN5 (fibulin-5; rs2246416: p = 6.2×10⁻⁶). The associations of SGCD and fibulin-5 with HDL particle size could not be replicated in PROCARDIS (n = 3,078) and/or the Women's Genome Health Study (n = 23,170). Conclusions We show that refined HDL phenotyping by NMR spectroscopy can detect known genes of HDL metabolism better than analyses on HDL-C. PMID:21283740
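FDR control over tens of thousands of SNP tests is commonly done with the Benjamini-Hochberg step-up procedure; whether the study used exactly this variant is an assumption. A self-contained sketch that returns FDR-adjusted p-values (q-values):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up procedure: returns FDR-adjusted
    p-values (q-values) in the original order."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)      # p_(i) * m / i
    q = np.minimum.accumulate(ranked[::-1])[::-1]    # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(q, 0.0, 1.0)
    return out

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
qvals = benjamini_hochberg(pvals)
```

Hypotheses with q-values below the chosen threshold (here, FDR < 0.05) are declared significant.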
Full-scale testing and analysis of fuselage structure
NASA Technical Reports Server (NTRS)
Miller, M.; Gruber, M. L.; Wilkins, K. E.; Worden, R. E.
1994-01-01
This paper presents recent results from a program in the Boeing Commercial Airplane Group to study the behavior of cracks in fuselage structures. The goal of this program is to improve methods for analyzing crack growth and residual strength in pressurized fuselages, thus improving new airplane designs and optimizing the required structural inspections for current models. The program consists of full-scale experimental testing of pressurized fuselage panels in both wide-body and narrow-body fixtures and finite element analyses to predict the results. The finite element analyses are geometrically nonlinear with material and fastener nonlinearity included on a case-by-case basis. The analysis results are compared with the strain gage, crack growth, and residual strength data from the experimental program. Most of the studies reported in this paper concern the behavior of single or multiple cracks in the lap joints of narrow-body airplanes (such as 727 and 737 commercial jets). The phenomenon where the crack trajectory is curved creating a 'flap' and resulting in a controlled decompression is discussed.
Numerical Simulation and Scaling Analysis of Cell Printing
NASA Astrophysics Data System (ADS)
Qiao, Rui; He, Ping
2011-11-01
Cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use an inkjet printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquid. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of such stress is very short, which helps explain why many cells survive the cell printing process. These studies also revealed that cell membranes can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.
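Scaling analysis of droplet impact typically begins from the dimensionless groups that compare inertial, viscous, and capillary stresses, and from the impact time scale that bounds the stress duration. The fluid properties and impact conditions below are illustrative assumptions, not the study's values:

```python
# Dimensionless groups governing droplet impact (illustrative values)
rho = 1000.0     # liquid density, kg/m^3
d = 50e-6        # droplet diameter, m (order of a mammalian cell)
u = 5.0          # impact velocity, m/s
mu = 1e-3        # dynamic viscosity, Pa s
sigma = 0.07     # surface tension, N/m

reynolds = rho * u * d / mu          # inertia vs. viscous stresses
weber = rho * u**2 * d / sigma       # inertia vs. capillary stresses
tau_impact = d / u                   # impact time scale, seconds
```

The very small impact time scale (d/u, here tens of microseconds) is the scaling argument behind the observation that cells endure large but brief stresses.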
MicroScale Thermophoresis: Interaction analysis and beyond
NASA Astrophysics Data System (ADS)
Jerabek-Willemsen, Moran; André, Timon; Wanner, Randy; Roth, Heide Marie; Duhr, Stefan; Baaske, Philipp; Breitsprecher, Dennis
2014-12-01
MicroScale Thermophoresis (MST) is a powerful technique to quantify biomolecular interactions. It is based on thermophoresis, the directed movement of molecules in a temperature gradient, which strongly depends on a variety of molecular properties such as size, charge, hydration shell or conformation. Thus, this technique is highly sensitive to virtually any change in molecular properties, allowing for a precise quantification of molecular events independent of the size or nature of the investigated specimen. During a MST experiment, a temperature gradient is induced by an infrared laser. The directed movement of molecules through the temperature gradient is detected and quantified using either covalently attached or intrinsic fluorophores. By combining the precision of fluorescence detection with the variability and sensitivity of thermophoresis, MST provides a flexible, robust and fast way to dissect molecular interactions. In this review, we present recent progress and developments in MST technology and focus on MST applications beyond standard biomolecular interaction studies. By using different model systems, we introduce alternative MST applications - such as determination of binding stoichiometries and binding modes, analysis of protein unfolding, thermodynamics and enzyme kinetics. In addition, we demonstrate the capability of MST to quantify high-affinity interactions with dissociation constants (Kds) in the low picomolar (pM) range as well as protein-protein interactions in pure mammalian cell lysates.
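Titration data from binding assays like MST are ultimately fit to a binding model to extract the Kd. As a hedged sketch, the simplest such model is the 1:1 isotherm in the excess-ligand approximation (practical MST fits often use the ligand-depletion quadratic form instead; the 250 pM value is hypothetical):

```python
def fraction_bound(ligand, kd):
    """1:1 binding isotherm in the excess-ligand approximation:
    fraction bound = [L] / (Kd + [L])."""
    return ligand / (kd + ligand)

kd = 250e-12                                # hypothetical 250 pM affinity
fb_at_kd = fraction_bound(kd, kd)           # half-saturation at [L] = Kd
fb_near_sat = fraction_bound(100 * kd, kd)  # approaching saturation
```

Half of the target is bound when the ligand concentration equals Kd, which is how the fitted curve pins down the dissociation constant.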
MIXREGLS: A Program for Mixed-Effects Location Scale Analysis
Hedeker, Donald; Nordgren, Rachel
2013-01-01
MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally-distributed dependent variable. This model can be used for analysis of data in which subjects may be measured at many observations and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can by specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062
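MIXREGLS fits its location scale model by maximum likelihood; the between-cluster/within-cluster variance decomposition it models can be illustrated with a much simpler method-of-moments sketch (simulated data with assumed variance components, not MIXREGLS output):

```python
import numpy as np

# Hypothetical clustered data: 200 clusters of 10 observations each, with
# assumed between-cluster SD 2.0 and within-cluster SD 1.0.
rng = np.random.default_rng(0)
n_clusters, n_per = 200, 10
sigma_b, sigma_w = 2.0, 1.0
cluster_effects = rng.normal(0.0, sigma_b, n_clusters)
y = cluster_effects[:, None] + rng.normal(0.0, sigma_w, (n_clusters, n_per))

# method-of-moments estimates of the two variance components
within_var = y.var(axis=1, ddof=1).mean()                      # ~ sigma_w**2
between_var = y.mean(axis=1).var(ddof=1) - within_var / n_per  # ~ sigma_b**2
```

The location scale model goes further by letting covariates enter both variance components, but the decomposition above is the quantity being modeled.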
Integrated Water and Energy Analysis at Decision Relevant Scales
NASA Astrophysics Data System (ADS)
Yates, D. N.; Sieber, J.; Heaps, C.; Purkey, D.; Mehta, V. K.
2012-12-01
While the energy-water nexus information base has been growing, there remain few modeling tools able to evaluate the interactions and feedbacks between these sectors at decision-relevant scales. In particular, a spatially explicit coupled modeling system would facilitate a more complete and accurate evaluation of the workability and consequences of alternative climate mitigation and adaptation strategies. For example, in order to evaluate a region's changing water supply sources, it is important to represent the water side of a coupled modeling system in some detail. It would also be valuable to simultaneously work through the energy use and carbon emission consequences of the region's water adaptation alternatives, as well as the potential feedbacks associated with new energy system developments on system-wide water demands. The Water Evaluation and Planning (WEAP) and Long Range Energy Alternatives (LEAP) tools have been integrated to explicitly couple water and energy systems to conduct such analysis. We demonstrate the merits of this integration for the Southwestern U.S. [Figure: the integration of LEAP (left) and WEAP (right).]
Huerta, M.
1981-06-01
This report describes the mathematical analysis, the physical scale modeling, and a full-scale crash test of a railcar spent-nuclear-fuel shipping system. The mathematical analysis utilized a lumped-parameter model to predict the structural response of the railcar and the shipping cask. The physical scale modeling analysis consisted of two crash tests that used 1/8-scale models to assess railcar and shipping cask damage. The full-scale crash test, conducted with retired railcar equipment, was carefully monitored with onboard instrumentation and high-speed photography. Results of the mathematical and scale modeling analyses are compared with the full-scale test. 29 figures.
NASA Astrophysics Data System (ADS)
Case, B. S.; Duncan, R.; Hale, R.
2011-12-01
Across scales from local to global, the altitudinal position of alpine treelines displays considerable variability. There is still much debate regarding the drivers of this variability and, ultimately, the causes of treeline formation. Decades of research have emphasized the importance of either broad climatic drivers or site-specific factors. In this study, we present an approach that teases apart the relative importance of the various environmental influences on treeline position across multiple spatial scales. First, using a comprehensive spatial dataset, we characterized treeline position and environmental variability along 27,000 km of Nothofagus treelines across seven degrees of latitude in New Zealand. We then employed a novel analysis framework to partition variability in treeline position across five natural, hierarchical scales. Finally, we modelled the influences of heat and moisture input, mountain mass, soil conditions, terrain variability, and earthquake disturbance in explaining variability in treeline position at each scale. Seventy-seven percent of the variability in treeline position was explained by our spatial framework. Variance partitioning showed that 46% of the variability in treeline position resided at the broadest scale, with the remaining four scales accounting for 10%, 4%, 8% and 9%. Heat input emerged as the primary driver of Nothofagus treeline position in New Zealand. However, across all scales, heat and the other environmental factors had varying, and often predictable, influence on treeline position. This reiterates that treeline position is the result of a complex mixture of processes and that a multiscale analysis approach is essential in disentangling these effects.
Finite Element Analysis of Small Scale Continuous Calving
NASA Astrophysics Data System (ADS)
Christmann, Julia; Müller, Ralf; Humbert, Angelika; Gross, Dietmar
2013-04-01
Ice shelves are floating ice masses, which are sensitive to climate changes. The main mechanisms of mass loss for ice shelves around Antarctica are basal melting and calving. To understand the mechanisms of calving, the influence of environmental parameters needs to be investigated. We use a fracture mechanical approach to examine the nature and frequency of calving events. Ice responds to load in two ways: on long time scales it reacts like a viscous fluid, and on short time scales like an elastic solid. As calving is a manifestation of the solid nature of ice, the elastic response is important and linear elastic fracture mechanics can be applied. However, gravity remains a long-time load and hence a viscous component needs to be taken into account as well. Therefore, we use a Kelvin-Voigt model for analyzing the transient response of an ice shelf to a calving event. In a simplified 2D model the ice shelf is treated as a rectangular block, in which the gravity force is the only load in a first analysis. The stresses on the surface in the vicinity of the calving front are computed with the finite element software COMSOL. The boundary conditions are the water pressure at the front and bottom of the ice shelf and a constant displacement at the inflow. A stationary state reappears until eventually the subsequent calving event occurs; the termination time is around 175 days. Based on this time interval and the flow velocity of the ice shelf we estimate the calving rate. Parameter studies reveal the influence of geometry and material parameters on the stresses for an elastic material model. The literature and measurements at the Ekstroem Ice Shelf, East Antarctica, provide the relevant parameter range. Due to the depth-dependent water pressure at the ice front, a bell-shaped distribution of stresses on the surface is found.
For this reason the location of the maximal stress denotes the most likely position for a calving event and is arranged in between 0.65H and 0.85H, with H the thickness at the ice front. The results of these studies are compared to the results for two cross-sections of measured geometries of the Ekstroem Ice Shelf.
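The Kelvin-Voigt element combines a spring and dashpot in parallel, so its creep under a suddenly applied constant stress relaxes exponentially toward the purely elastic strain. A minimal sketch of that closed-form response, with illustrative (hypothetical) parameters rather than the paper's fitted values:

```python
import math

def kelvin_voigt_strain(t, sigma0, E, eta):
    """Creep of a Kelvin-Voigt element under constant stress sigma0:
    eps(t) = (sigma0/E) * (1 - exp(-E*t/eta))."""
    tau = eta / E                       # retardation time
    return (sigma0 / E) * (1.0 - math.exp(-t / tau))

# illustrative (hypothetical) parameters for ice
E = 9.0e9          # Young's modulus, Pa
eta = 1.0e14       # viscosity, Pa s
sigma0 = 1.0e5     # applied stress, Pa
tau = eta / E                       # retardation time, ~1.1e4 s
eps_inf = sigma0 / E                # long-time (purely elastic) strain
```

The retardation time tau separates the short-time elastic regime relevant to fracture from the long-time viscous regime, which is the rationale for the Kelvin-Voigt choice in the abstract.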
NASA Technical Reports Server (NTRS)
Wood, William A., III
2002-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization, which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual-mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.
Nishikido, Noriko; Yuasa, Akiko; Motoki, Chiharu; Tanaka, Mika; Arai, Sumiko; Matsuda, Kazumi; Ikeda, Tomoko; Iijima, Miyoko; Hirata, Mamoru; Hojoh, Minoru; Tsutaki, Miho; Ito, Akiyoshi; Maeda, Kazutoshi; Miyoshi, Yukari; Mitsuhashi, Hiroyuki; Fukuda, Eiko; Kawakami, Yuko
2006-01-01
To meet diversified health needs in workplaces, especially in developed countries, occupational safety and health (OSH) activities should be extended. The objective of this study is to develop a new multi-dimensional action checklist that can support employers and workers in understanding a wide range of OSH activities and to promote participation in OSH in small and medium-sized enterprises (SMEs). The general structure of and specific items in the new action checklist were discussed in a focus group meeting with OSH specialists based upon the results of a literature review and our previous interviews with company employers and workers. To assure practicality and validity, several sessions were held to elicit the opinions of company members and, as a result, modifications were made. The new multi-dimensional action checklist was finally formulated consisting of 6 core areas, 9 technical areas, and 61 essential items. Each item was linked to a suitable section in the information guidebook that we developed concomitantly with the action checklist. Combined usage of the action checklist with the information guidebook would provide easily comprehended information and practical support. Intervention studies using this newly developed action checklist will clarify the effectiveness of the new approach to OSH in SMEs. PMID:16610531
NASA Astrophysics Data System (ADS)
Lu, Bing-Nan; Zhao, Jie; Zhao, En-Guang; Zhou, Shan-Gui
2014-03-01
We have developed multi-dimensional constrained covariant density functional theories (MDC-CDFT) for finite nuclei in which the shape degrees of freedom βλμ with even μ, e.g., β20, β22, β30, β32, β40, etc., can be described simultaneously. The functional can take one of four forms: meson-exchange or point-coupling nucleon interactions combined with non-linear or density-dependent couplings. For the pp channel, either the BCS approach or the Bogoliubov transformation is implemented. The MDC-CDFTs with the BCS approach for the pairing (labelled below as MDC-RMF models, with RMF standing for "relativistic mean field") have been applied to investigate multi-dimensional potential energy surfaces and the non-axial octupole Y32 correlations in N = 150 isotones. In this contribution we present briefly the formalism of MDC-RMF models and some results from these models. The potential energy surfaces with and without triaxial deformations are compared, and it is found that triaxiality plays an important role in the second fission barriers of actinide nuclei. In the study of Y32 correlations in N = 150 isotones, it is found that for 248Cf and 260Fm, β32 > 0.03 and the energy is lowered by the β32 distortion by more than 300 keV, while for 246Cm and 252No the pocket with respect to β32 is quite shallow.
Nanometer to Centimeter Scale Analysis and Modeling of Pore Structures
NASA Astrophysics Data System (ADS)
Wesolowski, D. J.; Anovitz, L.; Vlcek, L.; Rother, G.; Cole, D. R.
2011-12-01
The microstructure and evolution of pore space in rocks is a critically important factor controlling fluid flow. The size, distribution and connectivity of these confined geometries dictate how fluids including H2O and CO2, migrate into and through these micro- and nano-environments, wet and react with the solid. (Ultra)small-angle neutron scattering and autocorrelations derived from BSE imaging provide a method of quantifying pore structures in a statistically significant manner from the nanometer to the centimeter scale. Multifractal analysis provides additional constraints. These methods were used to characterize the pore features of a variety of potential CO2 geological storage formations and geothermal systems such as the shallow buried quartz arenites from the St. Peter Sandstone and the deeper Mt. Simon quartz arenite in Ohio as well as the Eau Claire shale and mudrocks from the Cranfield MS CO2 injection test and the normal temperature and high-temperature vapor-dominated parts of the Geysers geothermal system in California. For example, analyses of samples of St. Peter sandstone show total porosity correlates with changes in pores structure including pore size ratios, surface fractal dimensions, and lacunarity. These samples contain significant large-scale porosity, modified by quartz overgrowths, and neutron scattering results show significant sub-micron porosity, which may make up fifty percent or more of the total pore volume. While previous scattering data from sandstones suggest scattering is dominated by surface fractal behavior, our data are both fractal and pseudo-fractal. The scattering curves are composed of steps, modeled as polydispersed assemblages of pores with log-normal distributions. In some samples a surface-fractal overprint is present. There are also significant changes in the mono and multifractal dimensions of the pore structure as the pore fraction decreases. 
There are strong positive correlations between D(0) and image and total scattering porosities, and strong negative correlations between these and multifractality, which increases as pore fraction decreases and the percent of (U)SANS porosity increases. Individual fractal dimensions at all q values from the BSE images decrease during silcrete formation. These data suggest that microporosity is more prevalent and may play a much more important role than previously thought in fluid/rock interactions in coarse-grained sandstone. Preliminary results from shale and mudrocks indicate there are dramatic differences not only in terms of total micro- to nano-porosity, but also in terms of pore surface fractal (roughness) and mass fractal (pore distributions) dimensions as well as size distributions. Information from imaging and scattering data can also be used to constrain computer-generated, random, three-dimensional porous structures. The results integrate various sources of experimental information and are statistically compatible with the real rock. This allows a more detailed multiscale analysis of structural correlations in the material. Acknowledgements. Research sponsored by the Division of Chemical Sciences, Geosciences and Biosciences, Office of Basic Energy Sciences, U.S. Department of Energy.
A theoretical analysis of basin-scale groundwater temperature distribution
NASA Astrophysics Data System (ADS)
An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong
2015-03-01
The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins. Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814
GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY
Lee, S
2008-05-28
Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space. One is the reference nominal case. The other is for a negative temperature gradient between the roof inner and top grout surface temperatures, intended as the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air enters the vapor space of the vault through the lower-end ventilation hole and is heated by Bénard-cell-type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with literature information. Detailed results and the cases considered in the calculations are discussed.
NASA Astrophysics Data System (ADS)
Verma, Sanjeet K.; Oliveira, Elson P.
2013-08-01
In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013), obtained from linear discriminant analysis (LDA) of natural-logarithm-transformed ratios of major elements and of immobile major and trace elements in acid magmas, to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide a more objective statistical method than the commonly used subjective method of determining boundaries by eye. The use of major element data readjusted to 100% on an anhydrous basis by the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites highlight the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was also clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously considered transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), all of which were previously suggested to have been emplaced in post-collision to within-plate settings.
The seventh case study, on the Cassiterita-Tabuões, Ritápolis and São Tiago-Rezende Costa granites (south of São Francisco craton, Minas Gerais), showed a collision setting, which agrees reasonably with the syn-collision tectonic setting indicated in the literature. A within-plate setting is suggested for the Serrinha magmatic suite, Mineiro belt (south of São Francisco craton, Minas Gerais), contrasting markedly with the arc setting suggested in the literature. The ninth case study, on the Rio Itapicuru granites and Rio Capim dacites (north of São Francisco craton, Serrinha block, Bahia), showed a continental arc setting. The tenth case study indicated a within-plate setting for the Rio dos Remédios volcanic rocks (São Francisco craton, Bahia), which is compatible with these rocks being the initial, rift-related igneous activity associated with the Chapada Diamantina cratonic cover. The eleventh, twelfth and thirteenth case studies, on the Bom Jesus-Areal granites, Rio Diamante-Rosilha dacite-rhyolite and Timbozal-Cantão granites (São Luís craton), showed continental arc, within-plate and collision settings, respectively. Finally, the fourteenth and fifteenth case studies showed a collision setting for the Caicó Complex and a continental arc setting for Algodões (Borborema province).
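The workflow behind such diagrams, linear discriminants fitted to ln-transformed element ratios, with each new sample assigned the tectonic group of highest posterior probability, can be sketched with scikit-learn. This is only an illustration: the feature ratios, group means, and sample values below are invented, not the published Verma et al. (2013) coefficients.

```python
# Sketch of LDA-based tectonic discrimination: ln(element ratio)
# features -> two discriminant axes (the diagram) -> posterior
# probability per tectonic group. All numbers are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical training set: 4 ln-ratio features for three tectonic
# settings (arc, collision, within-plate), 30 samples each.
group_means = [
    [0.0, 1.0, -0.5, 0.2],    # arc
    [0.8, 0.2, 0.5, -0.3],    # collision
    [-0.6, -0.8, 1.0, 0.6],   # within-plate
]
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(30, 4))
               for m in group_means])
y = np.repeat(["arc", "collision", "within-plate"], 30)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

# A new sample is plotted in discriminant-function space and assigned
# the group with the highest posterior probability.
sample = np.array([[0.7, 0.3, 0.4, -0.2]])
df_scores = lda.transform(sample)     # coordinates in the diagram
probs = lda.predict_proba(sample)     # probability per setting
print(lda.predict(sample)[0], probs.round(2))
```

The probability output is what replaces eye-judged field boundaries: the decision boundary is simply the locus where two groups' posteriors are equal.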
Impact and fracture analysis of fish scales from Arapaima gigas.
Torres, F G; Malásquez, M; Troncoso, O P
2015-06-01
Fish scales from the Amazonian fish Arapaima gigas have been characterised to study their impact and fracture behaviour under three different environmental conditions. Scales were cut in two different directions to analyse the influence of the orientation of collagen layers. The energy absorbed during impact tests was measured for each sample, and SEM images were taken after each test in order to analyse the failure mechanisms. The results showed that scales tested at cryogenic temperatures display brittle behaviour, while scales tested at room temperature did not fracture. Different failure mechanisms have been identified, analysed and compared with the failure modes that occur in bone. The impact energy obtained for fish scales was two to three times higher than the values reported for bone in the literature. PMID:25842120
Observation and analysis of large-scale human motion
Pers, Janez; Bon, Marta; Kovacic, Stanislav
Many team sports include complex human movement, which can …
Pre-site Characterization Risk Analysis for Commercial-Scale Carbon Sequestration
Lu, Zhiming
… funders and regulators require a preinjection risk analysis that identifies potential problem areas … a probability framework to evaluate subsurface risks associated with commercial-scale carbon sequestration …
QA-Pagelet: Data Preparation Techniques for Large Scale Data Analysis of the Deep Web
Liu, Ling
… the QA-Pagelet as a fundamental data preparation technique for large scale data analysis of the Deep Web. … Two unique features of the Thor framework are (1) the novel page clustering for grouping …
QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web
Caverlee, James
… the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the Deep Web. … Two unique features of the Thor framework are (1) the novel page clustering …
Bayesian Monte Carlo analysis applied to regional-scale inverse emission modeling for reactive …
Menut, Laurent
The inversion method is based on Bayesian Monte Carlo analysis applied to a regional-scale chemistry transport model. … are attributed to individual Monte Carlo simulations by comparing them with observations from the AIRPARIF …
Zhao Fang; Zhao Mei-Shan
2008-01-01
This paper is concerned with the determination of a unique scaling parameter in complex scaling analysis and with accurate calculation of dynamics resonances. In the preceding paper we have presented a theoretical analysis and provided a formalism for dynamical resonance calculations. In this paper we present accurate numerical results for two non-trivial dynamical processes, namely, models of diatomic molecular predissociation
Large-Scale Cancer Genomics Data Analysis - David Haussler, TCGA Scientific Symposium 2011
Lacunarity analysis of fracture networks: Evidence for scale-dependent clustering
Roy, Ankur; Perfect, Ed
Using fracture networks mapped at different scales from the Hornelen basin, our analysis shows that clustering increases … Available online 24 September 2010. Keywords: Fractures; Joints; Lacunarity; Clustering; Gliding-box algorithm
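The gliding-box algorithm named in the keywords admits a compact sketch: slide a box of size r across a binary map, record the occupied-pixel "mass" at each position, and take the ratio of the second moment to the squared first moment of those masses. The toy maps below are invented stand-ins for a real fracture map.

```python
# Minimal gliding-box lacunarity estimate for a binary map:
# Lambda(r) = <M^2> / <M>^2, where M is the box mass (count of
# occupied pixels) over all positions of a gliding box of edge r.
import numpy as np

def gliding_box_lacunarity(img, r):
    """img: 2-D binary array; r: box edge length in pixels."""
    n, m = img.shape
    masses = []
    for i in range(n - r + 1):
        for j in range(m - r + 1):
            masses.append(img[i:i + r, j:j + r].sum())
    masses = np.asarray(masses, dtype=float)
    # var/mean^2 + 1 is algebraically identical to <M^2>/<M>^2.
    return masses.var() / masses.mean() ** 2 + 1.0

# Toy maps at the same overall density (~25% occupied): a clustered
# map should show higher lacunarity than a uniform random one.
rng = np.random.default_rng(1)
uniform = (rng.random((64, 64)) < 0.25).astype(int)
clustered = np.zeros((64, 64), dtype=int)
clustered[:32, :32] = 1   # all occupancy concentrated in one patch

print(gliding_box_lacunarity(uniform, 4),
      gliding_box_lacunarity(clustered, 4))
```

Repeating the computation over a range of box sizes r gives the lacunarity curve whose scale dependence the abstract reports.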
PROMIS Pediatric Anger Scale: An Item Response Theory Analysis
Irwin, Debra E.; Stucky, Brian D.; Langer, Michelle M.; Thissen, David; DeWitt, Esi Morgan; Lai, Jin-Shei; Yeatts, Karin B.; Varni, James W.; DeWalt, Darren A.
2011-01-01
Purpose The Patient Reported Outcomes Measurement Information System (PROMIS) aims to develop patient-reported outcome (PRO) instruments for use in clinical research. The PROMIS pediatrics (ages 8–17) project focuses on the development of PROs across several health domains (physical function, pain, fatigue, emotional distress, social role relationships, and asthma symptoms). The objective of the present study is to report on the psychometric properties of the PROMIS Pediatric Anger Scale. Methods Participants (n=759) were recruited in public school settings and hospital-based outpatient and subspecialty pediatric clinics. The anger items (k=10) were administered on one test form. A hierarchical confirmatory factor analytic model (CFA) was conducted to evaluate scale dimensionality and local dependence. Item response theory (IRT) analyses were then used to finalize the item scale and short form. Results CFA confirmed that the anger items are representative of a unidimensional scale, and items with local dependence were removed, resulting in a six-item short form. The IRT-scaled scores from summed scores and each score's conditional standard error were calculated for the new six-item PROMIS Pediatric Anger Scale. Conclusions This study provides initial calibrations of the anger items and creates the PROMIS Pediatric Anger Scale, version 1.0. PMID:21785833
Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer
NASA Astrophysics Data System (ADS)
Schnieders, Jana; Garbe, Christoph
2014-05-01
The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near-surface turbulence. Near-surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near-surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these types of measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic features on these two scales. 2. The structure of these patterns changes with increasing wind stress and surface conditions. In [2] turbulent cell sizes have been shown to systematically decrease with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. Results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution a new model to estimate the heat flux is applied, which is based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed.
In this contribution, we will quantify the effect of small scale processes on interfacial transport and relate it to gas transfer. References [1] T. G. Bell, W. De Bruyn, S. D. Miller, B. Ward, K. Christensen, and E. S. Saltzman. Air-sea dimethylsulfide (DMS) gas transfer in the North Atlantic: evidence for limited interfacial gas exchange at high wind speed. Atmos. Chem. Phys. , 13:11073-11087, 2013. [2] J Schnieders, C. S. Garbe, W.L. Peirson, and C. J. Zappa. Analyzing the footprints of near surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013.
B. Müller; H. -Th. Janka
2014-06-26
Considering general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 solar masses, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the Vertex-CoCoNuT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies of electron antineutrinos and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M>10 M_sun as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of the electron antineutrino mean energy with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ~10kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such "SASI neutrino chirps" reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
NASA Astrophysics Data System (ADS)
Ivan, Lucian; De Sterck, Hans; Northrup, Scott A.; Groth, Clinton P. T.
2013-12-01
A scalable parallel and block-adaptive cubed-sphere grid simulation framework is described for solution of hyperbolic conservation laws in domains between two concentric spheres. In particular, the Euler and ideal magnetohydrodynamics (MHD) equations are considered. Compared to existing cubed-sphere grid algorithms, a novelty of the proposed approach involves the use of a fully multi-dimensional finite-volume method. This leads to important advantages when the treatment of boundaries and corners of the six sectors of the cubed-sphere grid is considered. Most existing finite-volume approaches use dimension-by-dimension differencing and require special interpolation or reconstruction procedures at ghost cells adjacent to sector boundaries in order to achieve an order of solution accuracy higher than unity. In contrast, in our multi-dimensional approach, solution blocks adjacent to sector boundaries can directly use physical cells from the adjacent sector as ghost cells while maintaining uniform second-order accuracy. This leads to important advantages in terms of simplicity of implementation for both parallelism and adaptivity at sector boundaries. Crucial elements of the proposed scheme are: unstructured connectivity of the six grid root blocks that correspond to the six sectors of the cubed-sphere grid, multi-dimensional k-exact reconstruction that automatically takes into account information from neighbouring cells isotropically and is able to automatically handle varying stencil size, and adaptive division of the solution blocks into smaller blocks of varying spatial resolution that are all treated exactly equally for inter-block communication, flux calculation, adaptivity and parallelization. 
The proposed approach is fully three-dimensional, whereas previous studies on cubed-sphere grids have been either restricted to two-dimensional geometries on the sphere or have grids and solution methods with limited capabilities in the third dimension in terms of adaptivity and parallelism. Numerical results for several problems, including systematic grid convergence studies, MHD bow-shock flows, and global modelling of solar wind flow are discussed to demonstrate the accuracy and efficiency of the proposed solution procedure, along with assessment of parallel computing scalability for up to thousands of computing cores.
An Algebraic Approach for the MIMO Control of Small Scale Helicopter
Agus Budiyono; T. Sudiyanto
2008-01-01
The control of a small-scale helicopter is a MIMO problem. To use the classical control approach to formally solve a MIMO problem, one needs to come up with a multi-dimensional Root Locus diagram to tune the control parameters. The problem with the required dimension of the RL diagram for MIMO design has forced the design procedure of the classical approach to be…
ERIC Educational Resources Information Center
Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.
2010-01-01
The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…
Rating Scale Analysis and Psychometric Properties of the Caregiver Self-Efficacy Scale for Transfers
ERIC Educational Resources Information Center
Cipriani, Daniel J.; Hensen, Francine E.; McPeck, Danielle L.; Kubec, Gina L. D.; Thomas, Julie J.
2012-01-01
Parents and caregivers faced with the challenges of transferring children with disability are at risk of musculoskeletal injuries and/or emotional stress. The Caregiver Self-Efficacy Scale for Transfers (CSEST) is a 14-item questionnaire that measures self-efficacy for transferring under common conditions. The CSEST yields reliable data and valid…
Scale Analysis for Remotely Sensed Images and Areal Census Data: Stressing Objects
Keping Chen
2003-01-01
Spatial scale analysis for disparate geospatial data is facilitated by object-based reasoning and analysis. This paper introduces a wavelet transform-based approach for remotely sensed images and a variogram approach for areal census data; both stress objects as the core for performing scale analysis. By calculating a series of statistics from wavelet transform-based sub-images and energy signature images, it was found
ERIC Educational Resources Information Center
Schaffhauser, Dian
2009-01-01
The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…
Scale Free Analysis and the Prime Number Theorem
Dhurjati Prasad Datta; Anuja Roy Choudhuri
2010-08-13
We present an elementary proof of the prime number theorem. The relative error follows a golden ratio scaling law and respects the bound obtained from the Riemann hypothesis. The proof is derived in the framework of a scale-free nonarchimedean extension of the real number system, exploiting the concept of relative infinitesimals introduced recently in connection with ultrametric models of Cantor sets. The extended real number system is realized as a completion of the field of rational numbers $Q$ under a {\\em new} nonarchimedean absolute value, which treats arbitrarily small and large numbers separately from a finite real number.
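For context, the classical statements the abstract alludes to, the prime number theorem and the error bound conditional on the Riemann hypothesis, can be written as:

```latex
% Prime number theorem and the Riemann-hypothesis error bound:
\pi(x) \sim \frac{x}{\log x},
\qquad
\pi(x) = \operatorname{Li}(x) + O\!\left(\sqrt{x}\,\log x\right)
\quad \text{(assuming the Riemann hypothesis),}
```

where $\pi(x)$ counts the primes up to $x$ and $\operatorname{Li}$ is the logarithmic integral; the abstract's claim is that its relative-error estimate respects this conditional bound.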
SCALING ANALYSIS OF REPOSITORY HEAT LOAD FOR REDUCED DIMENSIONALITY MODELS
MICHAEL T. ITAMURA AND CLIFFORD K. HO
1998-06-04
The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.
Analysis of deep inelastic scattering with $z$-dependent scale
R. G. Roberts
1999-04-13
Evolution of the parton densities at NLO in $\\alpha_S$ using $\\tilde W^2 = Q^2 (1-z)/z$ instead of the usual $Q^2$ for the scale of the running coupling $\\alpha_S$ is investigated. While this renormalisation scale change was originally proposed as the relevant one for $x \\to 1$, we explore the consequences for all $x$ with this choice. While it leads to no improvement to the description of DIS data, the nature of the gluon at low $x$, low $Q^2$ is different, avoiding the need for a `valence-like' gluon.
Field-aligned currents' scale analysis performed with the Swarm constellation
NASA Astrophysics Data System (ADS)
Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter
2015-01-01
We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale FACs, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistence period of small-scale FACs of order 10 s, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
Multi-resolution analysis for ENO schemes
NASA Technical Reports Server (NTRS)
Harten, Ami
1991-01-01
Given a function, u(x), which is represented by its cell-averages in cells which are formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
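One level of the cell-average decomposition can be sketched in 1-D: coarsen by averaging neighbouring cells, predict the fine averages back from the coarse ones, and keep only the prediction errors ("details"), which are near zero wherever the function is smooth. The prediction operator below (reusing each coarse average for both children) is the simplest possible choice, standing in for the paper's higher-order ENO reconstruction.

```python
# One level of a Harten-type multi-resolution transform on 1-D cell
# averages. Smooth regions yield tiny details, so there the function
# is effectively represented on the coarse grid; large details flag
# localities that need the fine grid. Simplified sketch only.
import numpy as np

def decompose(u_fine):
    """u_fine: cell averages on a uniform grid with an even cell count."""
    u_coarse = 0.5 * (u_fine[0::2] + u_fine[1::2])   # exact coarsening
    prediction = np.repeat(u_coarse, 2)              # simplest predictor
    details = u_fine - prediction                    # what coarsening lost
    return u_coarse, details

def reconstruct(u_coarse, details):
    return np.repeat(u_coarse, 2) + details          # exact inverse

# Piecewise-smooth test data: linear with one jump inside a cell pair.
x = np.linspace(0, 1, 64, endpoint=False) + 0.5 / 64   # cell centers
u = np.where(x < 0.48, x, x + 1.0)

uc, d = decompose(u)
# Details are O(h) in smooth regions but O(1) at the discontinuity.
print(np.abs(d).max(), np.abs(d[:28]).max())
```

Recursing on `u_coarse` gives the full multi-level decomposition; thresholding the details selects, per locality, the coarsest grid that meets the prescribed accuracy.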
A quality assessment of 3D video analysis for full scale rockfall experiments
NASA Astrophysics Data System (ADS)
Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.
2012-04-01
The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture rockfall velocity in some free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They mainly differ in the way the scaling factors between the movie frames and reality are determined. For this purpose some scale bars and targets with known dimensions were distributed in advance along the slope. The single scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar, then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera. The distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster was scaled using the four nearest targets (identified using frontal video) from the trajectory to be analyzed. The average of the scaling factors was finally taken as the scaling factor. (iii) The image raster was scaled using the four nearest targets from the trajectory to be analyzed. The scaling factor for one trajectory was calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) Same as the previous method, but with scaling factors varying along the trajectory. It was shown that a direct measurement of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, it does not account for lateral deviations of the boulder from the fall line, consequently adding error to the analysis. Thus a combination of scaling methods (i) and (iv) is considered to give the best results.
For best results regarding the rough lateral positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical trajectory component over time with the theoretical parabolic trend expected under gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with minimal error, acceptable for trajectory analysis. However, when calculating instantaneous velocities, an amplification of this error becomes unacceptable. A regression analysis of the data helps to optimize the trajectory and velocity estimates, respectively.
Transient Analysis of Large-scale Stochastic Service Systems
Ko, Young Myoung
2012-07-16
… of the effectiveness of the adjusted limits. We study both a call center, which is a canonical example of large-scale service systems, and an emerging peer-based Internet multimedia service network known as P2P. Based on our findings, we introduce a possible extension...
The Asian Values Scale: Development, factor analysis, validation, and reliability
Bryan S. K. Kim; Donald R. Atkinson; Peggy H. Yang
1999-01-01
Multicultural researchers and theorists have noted that client adherence to culture-of-origin values plays an important role in the provision of culturally relevant and sensitive psychological services. However, the lack of instruments that measure ethnic cultural values has been a shortcoming in past research that attempted to examine this relationship. In this article, the development of the Asian Values Scale (AVS)
Large-scale functional analysis using peptide or protein arrays
Alia Qureshi Emili; Gerard Cagney
2000-01-01
The array format for analyzing peptide and protein function offers an attractive experimental alternative to traditional library screens. Powerful new approaches have recently been described, ranging from synthetic peptide arrays to whole proteins expressed in living cells. Comprehensive sets of purified peptides and proteins permit high-throughput screening for discrete biochemical properties, whereas formats involving living cells facilitate large-scale genetic screening
A Multidimensional Scaling Analysis of Students' Attitudes about Science Careers
ERIC Educational Resources Information Center
Masnick, Amy M.; Valenti, S. Stavros; Cox, Brian D.; Osman, Christopher J.
2010-01-01
To encourage students to seek careers in Science, Technology, Engineering and Mathematics (STEM) fields, it is important to gauge students' implicit and explicit attitudes towards scientific professions. We asked high school and college students to rate the similarity of pairs of occupations, and then used multidimensional scaling (MDS) to create…
THE USEFULNESS OF SCALE ANALYSIS: EXAMPLES FROM EASTERN MASSACHUSETTS
Many water system managers and operators are curious about the value of analyzing the scales of drinking water pipes. Approximately 20 sections of lead service lines were removed in 2002 from various locations throughout the greater Boston distribution system, and were sent to ...
Exergy analysis of domestic-scale solar water heaters
Wang Xiaowu; Hua Ben
2005-01-01
The solar water heater is the most popular means of solar energy utilization because of its technological feasibility and economic attractiveness compared with other kinds of solar energy utilization. Earlier assessments of domestic-scale solar water heaters were based on the first law of thermodynamics. However, this kind of assessment cannot fully describe the performance of solar water heaters, since the essence of energy
Energy Yield and Cost Analysis of Small Scale Wind Turbines
N. J. Stannard; J. R. Bumby
2006-01-01
This paper seeks to explore the attractiveness of small-scale wind turbines in the current economic climate. In order to do this, three pieces of information must be known: (1) the cost of the system; (2) the annual energy yield of the system; (3) the value of that energy yield. An Internet based search was conducted to establish current market prices
Analysis of Small-Scale Hydraulic Actuation
Jicheng Xia
Durfee, William K.
of force and power while at the same time being relatively lightweight compared to an equivalent electromechanical system comprised of off-the-shelf components. Calculation results revealed that high operating pressures are needed for small-scale hydraulics to be lighter than the equivalent
Introducing Scale Analysis by Way of a Pendulum
ERIC Educational Resources Information Center
Lira, Ignacio
2007-01-01
Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…
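The pendulum scale analysis alluded to above can be sketched in a few lines: dimensional reasoning alone forces T ∝ √(L/g), leaving only a dimensionless prefactor to be fixed by experiment or a full solution. A minimal sketch (the constant 2π is the small-angle result, supplied here as an assumption, not something dimensional analysis provides):

```python
import math

def pendulum_period(length_m, g=9.81, c=2 * math.pi):
    """Period from the scale relation T = c * sqrt(L/g).

    Dimensional analysis fixes the sqrt(L/g) dependence; the dimensionless
    constant c (2*pi in the small-angle limit) is an assumption here.
    """
    return c * math.sqrt(length_m / g)

# Scaling check: quadrupling the length doubles the period,
# independently of the unknown dimensionless prefactor.
t1 = pendulum_period(1.0)
t4 = pendulum_period(4.0)
print(round(t4 / t1, 6))  # -> 2.0
```

The ratio test is the essence of scale analysis: it holds whatever value the prefactor takes.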
Garcia, A. III; Kallis, J.M.; Trucker, D.C.
1983-06-01
Construction of several full-size qualification modules using polyurethane encapsulant and a Tedlar front cover is discussed. In addition, an electrical isolation analysis is reported, the aim of which was to develop a method for evaluating the multi-dimensional effects of the modular conductor geometry on the electric field. The multi-dimensional electric field was calculated with a finite-element-based code using the analogy between thermal and electrostatic fields. (LEW)
Hutter, Jana; Schmitt, Peter; Saake, Marc; Stubinger, Axel; Grimm, Robert; Forman, Christoph; Greiser, Andreas; Hornegger, Joachim; Maier, Andreas
2015-02-01
4-D time-resolved velocity-encoded phase-contrast MRI (4-D PCI) is a fully non-invasive technique to assess hemodynamics in vivo with a broad range of potential applications in multiple cardiovascular diseases. It is capable of providing quantitative flow values and anatomical information simultaneously. The long acquisition time, however, still inhibits its wider clinical use. Acceleration is achieved at present using parallel MRI (pMRI) techniques which can lead to substantial loss of image quality for higher acceleration factors. Both the high-dimensionality and the significant degree of spatio-temporal correlation in 4-D PCI render it ideally suited for recently proposed compressed sensing (CS) techniques. We propose the Multi-Dimensional Flow-preserving Compressed Sensing (MuFloCoS) method to exploit these properties. A multi-dimensional iterative reconstruction is combined with an interleaved sampling pattern (I-VT), an adaptive masked and weighted temporal regularization (TMW) and fully automatically obtained vessel-masks. The performance of the novel method was analyzed concerning image quality, feasibility of acceleration factors up to 15, quantitative flow values and diagnostic accuracy in phantom experiments and an in vivo carotid study with 18 volunteers. Comparison with iterative state-of-the-art methods revealed significant improvements using the new method, the temporal normalized root mean square error of the peak velocity was reduced by 45.32% for the novel MuFloCoS method with acceleration factor 9. The method was furthermore applied to two patient cases with diagnosed high-grade stenosis of the ICA, which confirmed the performance of MuFloCoS to produce valuable results in the presence of pathological findings in 56 s instead of over 8 min (full sampling). PMID:25252278
Large-scale computations in analysis of structures
McCallen, D.B.; Goudreau, G.L.
1993-09-01
Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancing nonlinear analysis techniques and characterizing nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.
Craig A. Velozo; Livia C. Magalhaes; Ay-Woan Pan; Pamella Leiter
1995-01-01
Objective: To determine the construct validity of the Level of Rehabilitation Scale-III (LORS-III), with a special focus on this instrument's capability to discriminate rehabilitation inpatient activities of daily living (ADL)/mobility and communication/cognition ability at admission and discharge. Design: Rasch analysis of existing data sets in the LORS-III American Data System (LADS). Patients: Existing admission and discharge data from 3056
NASA Technical Reports Server (NTRS)
Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)
2001-01-01
Predicting behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
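A minimal sketch of the two constraint families this abstract describes: flux balance (S·v = 0) and an EBA-style thermodynamic sign condition. The toy network and all numbers below are hypothetical, not from the paper:

```python
def is_flux_balanced(S, v, tol=1e-9):
    """Steady state: net production S @ v of every metabolite is zero."""
    return all(abs(sum(sij * vj for sij, vj in zip(row, v))) <= tol
               for row in S)

def is_thermo_feasible(delta_mu, v, internal, tol=1e-9):
    """EBA-style sign constraint: every internal reaction runs 'downhill',
    i.e. v_i * delta_mu_i <= 0 (second law of thermodynamics)."""
    return all(v[i] * delta_mu[i] <= tol for i in internal)

# Hypothetical linear pathway: R1 = uptake of A, R2 = A -> B, R3 = secretion of B.
# Rows of S are metabolites A, B; columns are reactions R1, R2, R3.
S = [[1, -1, 0],
     [0, 1, -1]]
v = [2.0, 2.0, 2.0]           # candidate flux distribution
delta_mu = [0.0, -5.0, 0.0]   # chemical-potential change across each reaction

print(is_flux_balanced(S, v))                # -> True
print(is_thermo_feasible(delta_mu, v, [1]))  # -> True
```

In a real EBA computation these checks become constraints in an optimization over v; here they only illustrate what "thermodynamically feasible flux" means.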
Reliability analysis of a utility-scale solar power plant
G. J. Kolb
1992-01-01
This paper presents the results of a reliability analysis for a solar central receiver power plant that employs a salt-in-tube receiver. Because reliability data for a number of critical plant components have only recently been collected, this is the first time a credible analysis can be performed. This type of power plant will be built by a consortium of western
Williams, Dean; Doutriaux, Charles; Patchett, John; Williams, Sean; Shipman, Galen; Miller, Ross; Steed, Chad; Krishnan, Harinarayan; Silva, Claudio; Chaudhary, Aashish; Bremer, Peer-Timo; Pugmire, David; Bethel, E. Wes; Childs, Hank; Prabhat, Mr; Geveci, Berk; Bauer, Andrew; Pletzer, Alexander; Poco, Jorge; Ellqvist, Tommy; Santos, Emanuele; Potter, Gerald; Smith, Brian; Maxwell, Thomas; Kindig, David; Koop, David
2013-05-01
To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.
Wang, Ke
Constrained gradient analysis, similar to trend analysis, is capable of capturing trends in data and answering "what-if" questions. To facilitate our discussion, we
An Analysis Approach to Large-scale Vehicular Network Simulations
Perumalla, Kalyan S [ORNL; Beckerman, Martin [ORNL
2007-01-01
Advances in parallel simulation capabilities are now enabling the possibility of simulating multiple scenarios of large problem configurations. In emergency management applications, for example, it is now conceivable to consider simulating phenomena in large (city- or state-scale) vehicular networks. However, an informed understanding of simulation results is needed for real-time decision support tools that make use of a number of simulation runs. Of special interest are insights into trade-offs between accuracy and confidence bounds of simulation results, such as in the quality of predicted evacuation time in emergencies. In some of our emergency management projects, we are exploring approaches that not only aid in making statistically significant interpretations of simulation results but also provide a basis for presenting the inherent qualitative properties of the results to the decision makers. We provide experimental results that demonstrate the possibility of applying our approach to large-scale vehicular network simulations for emergency planning and management.
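One standard way to attach confidence bounds to replicated simulation outputs, as discussed above, is a normal-approximation interval over independent runs. A generic sketch, not the paper's method; the evacuation times are made up, and with only ten runs a Student-t quantile would be more appropriate than z = 1.96:

```python
import math
import statistics

def mean_ci(samples, z=1.96):
    """Normal-approximation confidence interval for the mean of
    independent replicated simulation outputs."""
    m = statistics.mean(samples)
    half = z * statistics.stdev(samples) / math.sqrt(len(samples))
    return m - half, m + half

# Hypothetical evacuation times (minutes) from ten independent runs.
runs = [93.0, 88.5, 91.2, 95.4, 89.9, 92.3, 90.1, 94.0, 87.8, 93.6]
low, high = mean_ci(runs)
print(round(low, 2), round(high, 2))
```

Narrowing this interval (more runs) versus accepting wider bounds (faster answers) is exactly the accuracy/confidence trade-off the abstract mentions.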
QCD analysis of p̄N → γ*π in the scaling limit
NASA Astrophysics Data System (ADS)
Pire, B.; Szymanowski, L.
2005-08-01
We study the scaling regime of nucleon-antinucleon annihilation into a deeply virtual photon and a meson, p̄N → γ*π, in the forward kinematics where |t| ≪ Q² ≈ s. We obtain the leading twist amplitude in the kinematical region where it factorizes into an antiproton distribution amplitude, a short-distance matrix element related to the nucleon form factor and the long-distance dominated transition distribution amplitudes which describe the nucleon to meson transition. We give the Q² evolution equation for these transition distribution amplitudes. The scaling of the cross section of this process may be tested at the proposed GSI intense antiproton beam facility FAIR with the PANDA or PAX detectors. We comment on related processes such as πN → Nγ* and γ*N → Nπ, which may be experimentally studied at intense meson beam facilities and at JLab or HERMES, respectively.
Crater ejecta scaling laws - Fundamental forms based on dimensional analysis
NASA Technical Reports Server (NTRS)
Housen, K. R.; Schmidt, R. M.; Holsapple, K. A.
1983-01-01
Self-consistent scaling laws are developed for meteoroid impact crater ejecta. Attention is given to the ejection velocity of material as a function of the impact point, the volume of ejecta with a threshold velocity, and the thickness of ejecta deposit in terms of the distance from the impact. Use is made of recently developed equations for energy and momentum coupling in cratering events. Consideration is given to scaling of laboratory trials up to real-world events and formulations are developed for calculating the ejection velocities and ejecta blanket profiles in the gravity and strength regimes of crater formation. It is concluded that, in the gravity regime, the thickness of an ejecta blanket is the same in all directions if the thickness and range are expressed in terms of the crater radius. In the strength regime, however, the ejecta velocities are independent of crater size, thereby allowing for asymmetric ejecta blankets. Controlled experiments are recommended for the gravity/strength transition.
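The gravity-regime conclusion above (ejecta-blanket thickness collapses onto a single curve when thickness and range are expressed in crater radii) can be illustrated with a toy profile. The power-law form and the constants k and exponent below are illustrative assumptions, not the paper's fitted values:

```python
def ejecta_thickness(r, R, k=0.04, exponent=-3.0):
    """Illustrative gravity-regime profile t(r) = k * R * (r/R)**exponent
    for r > R (outside the rim); k and exponent are placeholders."""
    return k * R * (r / R) ** exponent

# Scale invariance: at the same normalized range r/R = 2, the normalized
# thickness t/R is the same for a 10 m and a 1000 m radius crater.
small = ejecta_thickness(r=20.0, R=10.0) / 10.0
large = ejecta_thickness(r=2000.0, R=1000.0) / 1000.0
print(round(small, 6), round(large, 6))  # -> 0.005 0.005
```

Any profile of the form t = R·f(r/R) passes this collapse test; strength-regime ejecta, whose velocities do not scale with crater size, would not.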
Observation and analysis of large-scale human motion
Janez Perš; Marta Bon; Stanislav Kovačič; Marko Šibila; Branko Dežman
2002-01-01
Many team sports include complex human movement, which can be observed at different levels of detail. Some aspects of the athlete's motion can be studied in detail using commercially available high-speed, high-accuracy biomechanical measurement systems. However, due to their limitations, these devices are not appropriate for studying large-scale motion during a game (for example, the motion of a player running
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Melina Ramp; Fary Khan; Rose Anne Misajon; Julie F Pallant
2009-01-01
BACKGROUND: Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological
Development and Initial Analysis of Multiple Sclerosis Self-Management Scale
Malachy Bishop; Michael Frain
This article describes the development and initial psychometric analysis of the Multiple Sclerosis Self-Management Scale (MSSM). The scale was developed to provide a comprehensive and psychometrically sound assessment of self-management knowledge and practices among adults with multiple sclerosis (MS). Items were developed based on a review of the MS and self-management literature and professional consultation. The scale
Wavelet multiscale analysis for Hedge Funds: Scaling and strategies
NASA Astrophysics Data System (ADS)
Conlon, T.; Crane, M.; Ruskin, H. J.
2008-09-01
The wide acceptance of Hedge Funds by Institutional Investors and Pension Funds has led to an explosive growth in assets under management. These investors are drawn to Hedge Funds due to the seemingly low correlation with traditional investments and the attractive returns. The correlations and market risk (the Beta in the Capital Asset Pricing Model) of Hedge Funds are generally calculated using monthly returns data, which may produce misleading results as Hedge Funds often hold illiquid exchange-traded securities or difficult to price over-the-counter securities. In this paper, the Maximum Overlap Discrete Wavelet Transform (MODWT) is applied to measure the scaling properties of Hedge Fund correlation and market risk with respect to the S&P 500. It is found that the level of correlation and market risk varies greatly according to the strategy studied and the time scale examined. Finally, the effects of scaling properties on the risk profile of a portfolio made up of Hedge Funds is studied using correlation matrices calculated over different time horizons.
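The scale-by-scale correlation idea can be sketched with a level-1 Haar MODWT. This is a minimal sketch: real analyses use multiple decomposition levels and library implementations, and the return series below are hypothetical:

```python
from math import sqrt

def haar_modwt_level1(x):
    """Level-1 Haar MODWT detail coefficients with circular boundary:
    W[t] = (x[t] - x[t-1]) / 2 isolates the finest-scale variation."""
    return [(x[t] - x[t - 1]) / 2 for t in range(len(x))]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / sqrt(va * vb)

# Hypothetical monthly returns of two funds; the wavelet correlation at
# the finest scale is the Pearson correlation of the detail coefficients.
fund_a = [0.010, -0.020, 0.015, 0.030, -0.010, 0.005, 0.020, -0.015]
fund_b = [0.012, -0.018, 0.013, 0.028, -0.012, 0.004, 0.019, -0.014]
rho = pearson(haar_modwt_level1(fund_a), haar_modwt_level1(fund_b))
print(rho > 0.9)  # -> True
```

Repeating the same correlation at coarser levels gives the scale profile studied in the paper; illiquid holdings typically depress the finest-scale correlations.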
Murray Gibson
2010-01-08
Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain: a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the workings of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
C. Jenkinson; V. Peto; A. Coulter
1994-01-01
This paper compares the sensitivity to change of a multi-item, multi-dimensional health status measure with a single global health status question, in the assessment of treatment for menorrhagia. A cohort study of patients recruited by general practitioners, was carried out, with a follow up at eighteen months. Questionnaires were administered postally at baseline and follow up. General practices in Berkshire,
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Sijtsma, Klaas; Pedersen, Susanne S.
2012-01-01
The Hospital Anxiety and Depression Scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken…
ERIC Educational Resources Information Center
Smits, Iris A. M.; Timmerman, Marieke E.; Meijer, Rob R.
2012-01-01
The assessment of the number of dimensions and the dimensionality structure of questionnaire data is important in scale evaluation. In this study, the authors evaluate two dimensionality assessment procedures in the context of Mokken scale analysis (MSA), using a so-called fixed lowerbound. The comparative simulation study, covering various…
An Analysis of Large-Scale Writing Assessments in Canada (Grades 5-8)
ERIC Educational Resources Information Center
Peterson, Shelley Stagg; McClay, Jill; Main, Kristin
2011-01-01
This paper reports on an analysis of large-scale assessments of Grades 5-8 students' writing across 10 provinces and 2 territories in Canada. Theory, classroom practice, and the contributions and constraints of large-scale writing assessment are brought together with a focus on Grades 5-8 writing in order to provide both a broad view of…
DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data
Stasko, John T.
Niklas Elmqvist (INRIA); Georgia Institute of Technology. Supporting visual analytics of multiple large-scale multidimensional datasets requires … in information visualization, and the vast number of different approaches to solving this problem attests to its
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
ERIC Educational Resources Information Center
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
Large-Scale Gene Expression Data Analysis: A New Challenge to Computational Biologists
Michael Q.
In this survey I review three recent experiments related to transcriptional regulation and discuss the great challenge for computational biologists trying to extract functional information from such large-scale gene
Scaling Analysis of the Thermal Boundary Layer Adjacent to an Abruptly Heated Inclined Flat Plate
S. C. Saha; C. Lei; J. C. Patterson
The natural convection thermal boundary layer adjacent to an abruptly heated inclined flat plate is investigated through a scaling analysis and verified by numerical simulations. In general, the development of the thermal flow can be characterized by three distinct stages, i.e. a start-up stage, a transitional stage and a steady state stage. Major scales including the flow velocity, flow development
Large-scale genome-wide association analysis of bipolar disorder (Nat Genet, author manuscript)
Boyer, Edmond
Large-scale genome-wide association analysis of bipolar disorder identifies a new susceptibility locus near ODZ4. Pamela Sklar; Stephan Ripke; Laura J
Physiological Tracking of Differentiation Time Series Using Large Scale Gene Expression Analysis
Analyses of the trophoblast differentiation time series revealed a proper differentiation until … (Fig. 4B). Tracking differentiation time series: we applied
Initial Economic Analysis of Utility-Scale Wind Integration in Hawaii
Not Available
2012-03-01
This report summarizes an analysis, conducted by the National Renewable Energy Laboratory (NREL) in May 2010, of the economic characteristics of a particular utility-scale wind configuration project that has been referred to as the 'Big Wind' project.
Cook, Robert Annan
1972-01-01
SCALE DEPENDENCIES IN STRUCTURAL ANALYSIS AS ILLUSTRATED BY CHEVRON FOLDS ALONG THE BEARTOOTH FRONT, WYOMING. A Thesis by Robert Annan Cook, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of Master of Science, August 1972. Major Subject: Geology. Approved as to style and content by…
A tree swaying in a turbulent wind: a scaling analysis.
Odijk, Theo
2015-01-01
A tentative scaling theory is presented of a tree swaying in a turbulent wind. It is argued that the turbulence of the air within the crown is in the inertial regime. An eddy causes a dynamic bending response of the branches according to a time criterion. The resulting expression for the penetration depth of the wind yields an exponent which appears to be consistent with that pertaining to the morphology of the tree branches. An energy criterion shows that the dynamics of the branches is basically passive. The possibility of hydrodynamic screening by the leaves is discussed. PMID:25169247
Analysis of World Economic Variables Using Multidimensional Scaling
Machado, J.A. Tenreiro; Mata, Maria Eugénia
2015-01-01
Waves of globalization reflect the historical technical progress and modern economic growth. The dynamics of this process are here approached using the multidimensional scaling (MDS) methodology to analyze the evolution of GDP per capita, international trade openness, life expectancy, and education tertiary enrollment in 14 countries. MDS provides the appropriate theoretical concepts and the exact mathematical tools to describe the joint evolution of these indicators of economic growth, globalization, welfare and human development of the world economy from 1977 up to 2012. The polarization dance of countries enlightens the convergence paths, potential warfare and present-day rivalries in the global geopolitical scene. PMID:25811177
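Classical (Torgerson) MDS, the core tool used here, reduces to double-centring the squared-distance matrix and extracting leading eigenpairs. A minimal one-dimensional sketch with a hypothetical three-country dissimilarity matrix (pure Python, so the eigenpair is found by power iteration rather than a library call):

```python
from math import sqrt

def classical_mds_1d(dist):
    """Classical (Torgerson) MDS to one dimension: double-centre the
    squared distances, B = -1/2 * J D^2 J, then take the leading
    eigenpair of B by power iteration; coordinates are
    eigenvector * sqrt(eigenvalue), unique up to sign and shift."""
    n = len(dist)
    d2 = [[dist[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in d2]
    grand = sum(row) / n
    b = [[-0.5 * (d2[i][j] - row[i] - row[j] + grand) for j in range(n)]
         for i in range(n)]
    v = [float(i + 1) for i in range(n)]  # start not orthogonal to answer
    for _ in range(200):
        w = [sum(b[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(b[i][j] * v[j] for j in range(n)) for i in range(n))
    return [vi * sqrt(lam) for vi in v]

# Three hypothetical "countries" whose pairwise dissimilarities match
# positions 0, 1 and 3 on a line.
D = [[0, 1, 3],
     [1, 0, 2],
     [3, 2, 0]]
coords = classical_mds_1d(D)
gaps = sorted(abs(coords[i] - coords[j])
              for i in range(3) for j in range(i + 1, 3))
print([round(g, 6) for g in gaps])  # -> [1.0, 2.0, 3.0]
```

The recovered pairwise gaps reproduce the input dissimilarities exactly because the toy matrix is Euclidean; with real indicator data the low-dimensional embedding is only an approximation.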
Enabling Large-Scale Biomedical Analysis in the Cloud
Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen
2013-01-01
Recent progress in high-throughput instrumentations has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. The planet-size data brings serious challenges to the storage and computing technologies. Cloud computing is an alternative to crack the nut because it gives concurrent consideration to enable storage and high-performance computing on large-scale data. This work briefly introduces the data intensive computing system and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research to make the vast amount of diversification data meaningful and usable. PMID:24288665
ERIC Educational Resources Information Center
Martin, Andrew J.; Yu, Kai; Papworth, Brad; Ginns, Paul; Collie, Rebecca J.
2015-01-01
This study explored motivation and engagement among North American (the United States and Canada; n = 1,540), U.K. (n = 1,558), Australian (n = 2,283), and Chinese (n = 3,753) secondary school students. Motivation and engagement were assessed via students' responses to the Motivation and Engagement Scale-High School (MES-HS). Confirmatory…
ERIC Educational Resources Information Center
Staik, Irene M.
A study was undertaken to provide a factor analysis of the Omega Scale, a 25-item, Likert-type scale developed in 1984 to assess attitudes toward death and funerals and other body disposition practices. The Omega Scale was administered to 250 students enrolled in introductory psychology classes at two higher education institutions in Alabama.…
NASA Astrophysics Data System (ADS)
Chen, Cheng; Hu, Dandan; Westacott, Donald; Loveless, David
2013-10-01
Nanometer-scale scanning electron microscopy was applied in visualizing the microscopic pores within shale kerogen. Geometrical information of all individual pores was extracted by image analysis. Image segmentation and separation showed that most of the intrakerogen pores are discrete and isolated from each other, having relatively spherical morphology. These isolated intrakerogen pores result in huge challenges in gas production, because they are not effectively connected to natural and hydraulic fractures. Statistical results showed that nanopores, which have diameters smaller than 100 nm, make up 92.7% of the total pore number, while they make up only 4.5% of the total pore volume. Intrakerogen porosity and specific surface area are 29.9% and 14.0 m2/g, respectively. Accurate visualization and measurement of intrakerogen pores are critical for evaluation of gas storage and optimization of hydraulic fracturing. By lattice Boltzmann simulations, permeabilities and tortuosities were simulated in the three principal directions. Long tails were observed in breakthrough curves, resulting from diffusion of solute particles from low-flow-velocity pores to larger conduits at late times. The long-tailing phenomena at the pore scale are qualitatively consistent with those observed in actual production. Understanding the pore-scale transport processes between microscopic pores within kerogen and large fracture systems is of great importance in predicting hydrocarbon production. Upscaling methods are needed to investigate larger-scale processes and properties in shale reservoirs.
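The image-analysis steps described above (segmenting pore pixels, then measuring porosity and counting discrete pores) can be sketched on a hypothetical binary SEM tile; the array and its values are made up for illustration:

```python
from collections import deque

def porosity(img):
    """Fraction of pore pixels (value 1) in a segmented binary image."""
    pore = sum(sum(row) for row in img)
    return pore / (len(img) * len(img[0]))

def count_pores(img):
    """Number of discrete pores = 4-connected components of pore pixels,
    found by breadth-first search over the grid."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    n = 0
    for i in range(rows):
        for j in range(cols):
            if img[i][j] == 1 and not seen[i][j]:
                n += 1
                seen[i][j] = True
                q = deque([(i, j)])
                while q:
                    x, y = q.popleft()
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        a, b = x + dx, y + dy
                        if (0 <= a < rows and 0 <= b < cols
                                and img[a][b] == 1 and not seen[a][b]):
                            seen[a][b] = True
                            q.append((a, b))
    return n

# Hypothetical 4x5 segmented SEM tile: 1 = pore, 0 = kerogen matrix.
tile = [[0, 1, 0, 0, 1],
        [0, 0, 1, 0, 1],
        [1, 0, 0, 0, 0],
        [0, 0, 0, 1, 0]]
print(porosity(tile), count_pores(tile))  # -> 0.3 5
```

On real data the same two measurements, run in 3-D on the segmented volume, yield the porosity figure and the "discrete and isolated" pore count reported in the abstract.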
Manufacturing Cost Analysis for YSZ-Based FlexCells at Pilot and Full Scale Production Scales
Scott Swartz; Lora Thrun; Robin Kimbrell; Kellie Chenault
2011-05-01
Significant reductions in cell costs must be achieved in order to realize the full commercial potential of megawatt-scale SOFC power systems. The FlexCell designed by NexTech Materials is a scalable SOFC technology that offers particular advantages over competitive technologies. In this updated topical report, NexTech analyzes its FlexCell design and fabrication process to establish manufacturing costs at both pilot scale (10 MW/year) and full-scale (250 MW/year) production levels and benchmarks this against estimated anode supported cell costs at the 250 MW scale. This analysis will show that even with conservative assumptions for yield, materials usage, and cell power density, a cost of $35 per kilowatt can be achieved at high volume. Through advancements in cell size and membrane thickness, NexTech has identified paths for achieving cell manufacturing costs as low as $27 per kilowatt for its FlexCell technology. Also in this report, NexTech analyzes the impact of raw material costs on cell cost, showing the significant increases that result if target raw material costs cannot be achieved at this volume.
Small-Scale Smart Grid Construction and Analysis
NASA Astrophysics Data System (ADS)
Surface, Nicholas James
The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations, without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show that data acquisition was three times more expensive than the grid itself, largely because roughly 70% of data-acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investment in pure-sine-wave power for future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this scaling exposes disproportionalities in the SSSG compared with previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for incorporation into the SSSG. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations and pumped hydroelectric storage could also be researched on future iterations of the SSSG.
Development of a statistical sampling method for uncertainty analysis with SCALE
Williams, M.; Wiarda, D.; Smith, H.; Jessee, M. A.; Rearden, B. T. [Oak Ridge National Laboratory, P.O Box 2008, Oak Ridge, TN 37831-6354 (United States); Zwermann, W.; Klein, M.; Pautz, A.; Krzykacz-Hausmann, B.; Gallner, L. [Gesellschaft fuer Anlagen- und Reaktorsicherheit GRS, Forschungszentrum, Boltzmannstrasse 14, 85748 Garching (Germany)
2012-07-01
A new statistical sampling sequence called Sampler has been developed for the SCALE code system. Random values for the input multigroup cross sections are determined by using the XSUSA program to sample uncertainty data provided in the SCALE covariance library. Using these samples, Sampler computes perturbed self-shielded cross sections and propagates the perturbed nuclear data through any specified SCALE analysis sequence, including those for criticality safety, lattice physics with depletion, and shielding calculations. Statistical analysis of the output distributions provides uncertainties and correlations in the desired responses. (authors)
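The Sampler workflow (draw correlated nuclear-data samples, run the analysis sequence on each, then form output statistics) can be sketched generically. The three-group "cross sections", covariance values, and toy response function below are hypothetical stand-ins, not SCALE data or an actual SCALE sequence:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-group cross sections and covariance matrix: stand-ins for
# the SCALE covariance-library data that XSUSA samples, not real nuclear data.
mean = np.array([1.20, 0.80, 0.30])
cov = np.array([[0.0016, 0.0004, 0.0000],
                [0.0004, 0.0009, 0.0002],
                [0.0000, 0.0002, 0.0004]])

def response(sigma):
    # Toy deterministic model standing in for a full analysis sequence
    # (criticality, lattice physics, shielding, ...); any black box fits here.
    return sigma[0] * sigma[1] / (1.0 + sigma[2])

samples = rng.multivariate_normal(mean, cov, size=5000)
outputs = np.array([response(s) for s in samples])
corr = np.corrcoef(samples.T, outputs)[-1, :-1]

print(f"response mean : {outputs.mean():.4f}")
print(f"response stdev: {outputs.std(ddof=1):.4f}")
for i, r in enumerate(corr):
    print(f"input {i} vs response correlation: {r:+.2f}")
```

The output standard deviation is the propagated uncertainty, and the input-output correlations indicate which nuclear-data uncertainties drive the response.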
TIME-MASS SCALING IN SOIL TEXTURE ANALYSIS
Technology Transfer Automated Retrieval System (TEKTRAN)
Data on texture are used in the majority of inferences about soil functioning and use. The model of fractal fragmentation has attracted attention as a possible source of minimum set of parameters to describe observed particle size distributions. Popular techniques of textural analysis employ the rel...
Multi-scale analysis of water alteration on the rockslope stability framework
NASA Astrophysics Data System (ADS)
Dochez, Sandra; Laouafa, Farid; Franck, Christian; Guedon, Sylvine; Martineau, François; D'Amato, Julie; Saintenoy, Albane
2014-10-01
Water is an important weathering factor for rock discontinuities and for rock-mass mechanical behaviour because of features such as its temperature, pH and salinity, which make it a "good" candidate for rock degradation. Furthermore, the increase in rainfall frequency and intensity highlights problems in rock-slope stability analysis. This study aims to evaluate the effect of water flow on rock-slope stability and is performed at two spatial scales: in situ and in the laboratory (micro scale and macro scale). It shows how water induces degradation at multiple scales (surface roughness and matrix) and thus may decrease the stability of a discontinuous rock mass. The study has two main components: the effect of water-solid chemical mechanisms, and the analysis of the mechanical response of the discontinuity modified by the water alteration.
Construct Validation of the Translated Version of the Work-Family Conflict Scale for Use in Korea
ERIC Educational Resources Information Center
Lim, Doo Hun; Morris, Michael Lane; McMillan, Heather S.
2011-01-01
Recently, the stress of work-family conflict has been a critical workplace issue for Asian countries, especially within those cultures experiencing rapid economic development. Our research purpose is to translate and establish construct validity of a Korean-language version of the Multi-Dimensional Work-Family Conflict (WFC) scale used in the U.S.…
Computational solutions to large-scale data management and analysis
Schadt, Eric E.; Linderman, Michael D.; Sorenson, Jon; Lee, Lawrence; Nolan, Garry P.
2011-01-01
Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems. PMID:20717155
Fine scale thermal blooming instability: a linear stability analysis
Barnard, J.J.
1989-02-01
The fine-scale thermal blooming instability of a high-power trans-atmospheric laser beam is shown to be affected by the laser pulse length. In this study, we calculate the asymptotic gain of a sinusoidal perturbation as a function of pulse length and perturbation wavenumber. We include the effects of viscosity, diffusion, and wind shear, and we heuristically estimate the effect of turbulence. We find that for short laser pulses, the small-wavenumber perturbations are reduced due to acoustic effects. However, large-wavenumber perturbations remain large and extend to a higher cutoff in wavenumber than in the long-pulse limit. At wavenumbers higher than this cutoff, thermal diffusion causes exponential decay of the perturbations. For long laser pulse lengths, wind shear and turbulence limit perturbation growth.
Molecular scale analysis of dry sliding copper asperities
NASA Astrophysics Data System (ADS)
Vadgama, Bhavin N.; Jackson, Robert L.; Harris, Daniel K.
2015-04-01
A fundamental characterization of friction requires an accurate understanding of how the surfaces in contact interact at the nano or atomic scales. In this work, molecular dynamics simulations are used to study friction and deformation in the dry sliding interaction of two hemispherical asperities. The material simulated is copper and the atomic interactions are defined by the embedded atom method potential. The effects of interference, relative sliding velocity (v), asperity size (R), lattice orientation, and temperature control on the friction characteristics are investigated. Extensive plastic deformation and material transfer between the asperities were observed. The sliding process was dominated by adhesion and resulted in high effective friction coefficient values. The friction force and the effective friction coefficient increased with the interference and asperity size but showed no significant change with an increase in the sliding velocity or with temperature control. The friction characteristics varied strongly with the lattice orientation and an average effective friction coefficient was calculated that compared quantitatively with existing measurements.
Large-scale Cosmic Homogeneity from a Multifractal Analysis of the PSCz Catalogue
Jun Pan; Peter Coles
2000-08-16
We investigate the behaviour of galaxy clustering on large scales using the PSCz catalogue. In particular, we ask whether there is any evidence of large-scale fractal behaviour in this catalogue. We find that the correlation dimension in this survey varies with scale, consistent with other analyses. For example, our results on small and intermediate scales are consistent with those obtained from the QDOT sample, but the larger PSCz sample allows us to extend the analysis out to much larger scales. We find firm evidence that the sample becomes homogeneous at large scales; the correlation dimension of the sample is D_2 = 2.992(3) for r > 30 h^{-1} Mpc. This provides strong evidence in favour of a Universe which obeys the Cosmological Principle.
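The correlation dimension D_2 used here can be estimated from the slope of the correlation integral, C(r) ∝ r^{D_2}. A minimal sketch on a synthetic homogeneous point set (not the PSCz data) recovers a value near the embedding dimension:

```python
import numpy as np

rng = np.random.default_rng(1)

# Homogeneous (Poisson) points in the unit cube; a fractal distribution
# would instead give D_2 below the embedding dimension. Synthetic data,
# not the PSCz galaxy sample.
pts = rng.random((1500, 3))

diff = pts[:, None, :] - pts[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(len(pts), k=1)]

# Correlation integral C(r): fraction of pairs with separation < r.
# For C(r) ~ r^{D_2}, D_2 is the log-log slope.
radii = np.geomspace(0.03, 0.15, 10)
C = np.array([(dist < r).mean() for r in radii])
D2 = np.polyfit(np.log(radii), np.log(C), 1)[0]
print(f"estimated correlation dimension D2 = {D2:.2f} (expect ~3)")
```

On real survey data, edge effects and the radial selection function must be handled with care; this sketch ignores both, and boundary effects in the unit cube already pull the estimate slightly below 3 at the largest radii.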
Large-scale proteomic analysis of membrane proteins
Ahram, Mamoun; Springer, David L.
2004-10-01
Proteomic analysis of membrane proteins is promising for the identification of novel candidate drug targets and/or disease biomarkers. Despite notable technological developments, obstacles related to extraction and solubilization of membrane proteins are frequently encountered. A critical discussion of the different preparative methods for membrane proteins is offered in relation to downstream proteomic applications, mainly gel-based analyses and mass spectrometry. Unknown proteins are often identified by high-throughput profiling of membrane proteins. In the search for novel membrane proteins, protein sequences are analysed with computational tools to predict the presence of transmembrane domains. Here, we also present these bioinformatic tools, with the human proteome as a case study. Along with technological innovations, advancements in sample preparation and computational prediction of membrane proteins will lead to exciting discoveries.
Mayunga, Joseph S.
2010-07-14
of disaster: mitigation, preparedness, response, and recovery. Furthermore, a fruitful approach to measure disaster resilience is to assess various forms of capital: social, economic, physical, and human. These capitals are important resources for communities...
Wu, Hui-Chun [Los Alamos National Laboratory; Hegelich, B.M. [Los Alamos National Laboratory; Fernandez, J.C. [Los Alamos National Laboratory; Shah, R.C. [Los Alamos National Laboratory; Palaniyappan, S. [Los Alamos National Laboratory; Jung, D. [Los Alamos National Laboratory; Yin, L [Los Alamos National Laboratory; Albright, B.J. [Los Alamos National Laboratory; Bowers, K. [Guest Scientist of XCP-6; Huang, C. [Los Alamos National Laboratory; Kwan, T.J. [Los Alamos National Laboratory
2012-06-19
Two new experimental technologies enabled realization of the Break-Out Afterburner (BOA): the high-quality Trident laser and free-standing carbon nm-scale targets. VPIC is a powerful tool for fundamental research on relativistic laser-matter interaction. Predictions from VPIC are validated by the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates the computation) has been optimized extensively for many different supercomputers. Laser-driven ions lead to the realization of promising applications: ion-based fast ignition, active interrogation, and hadron therapy.
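The PIC idea (macro-particles pushed through fields that are solved on a mesh each step) can be illustrated with a minimal 1D electrostatic sketch in normalized units. This is a generic teaching example, not VPIC's 3D relativistic electromagnetic algorithm:

```python
import numpy as np

# Normalized units: plasma frequency = 1, electron charge/mass = -1,
# uniform neutralizing ion background, periodic domain.
L, Ng, Np, dt = 2 * np.pi, 64, 20000, 0.05
dx = L / Ng
k = 2 * np.pi * np.fft.fftfreq(Ng, d=dx)       # grid wavenumbers

x = np.linspace(0.0, L, Np, endpoint=False)    # uniform electron positions
v = 0.01 * np.sin(x)                           # small velocity perturbation
weight = L / Np                                # density per macro-particle

def cic(x):
    """Cloud-in-cell weights: nearest grid index and fractional offset."""
    xg = x / dx
    i0 = np.floor(xg).astype(int) % Ng
    return i0, xg - np.floor(xg)

def deposit(x):
    """Electron number density on the grid, then net charge density."""
    i0, frac = cic(x)
    n = (np.bincount(i0, 1.0 - frac, minlength=Ng)
         + np.bincount((i0 + 1) % Ng, frac, minlength=Ng)) * weight / dx
    return 1.0 - n                             # ions (+1) minus electrons

def solve_field(rho):
    """Periodic Poisson solve via FFT: ik E_k = rho_k."""
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    return np.fft.ifft(E_k).real

for _ in range(200):                           # leapfrog time stepping
    E = solve_field(deposit(x))
    i0, frac = cic(x)
    Ep = (1.0 - frac) * E[i0] + frac * E[(i0 + 1) % Ng]
    v -= Ep * dt                               # dv/dt = (q/m) E = -E
    x = (x + v * dt) % L

print(f"max |v| at t = 10: {np.abs(v).max():.3f}")
```

The small perturbation simply oscillates at the plasma frequency and stays bounded, which is a quick sanity check on the sign conventions of the deposit/solve/gather loop.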
NASA Astrophysics Data System (ADS)
Petroy, S. B.; Leisso, N.; Hinckley, E. S.; Meier, C. L.; Barnett, D.
2013-12-01
The National Ecological Observatory Network (NEON) is a continental-scale ecological observation platform designed to collect and disseminate data that contributes to understanding and forecasting the impacts of climate change, land use change, and invasive species on ecology. NEON will collect in-situ and airborne data over 60 sites across the US, including Alaska, Hawaii, and Puerto Rico. The NEON vegetation sampling protocol currently directs the collection of foliar samples from dominant species at each site; field spectra are collected from the samples that are further analyzed for bulk and isotopic carbon and nitrogen content. Through employment of consistent sampling and analysis strategies, NEON will provide a unique, rich, and varied data collection to support studies of foliar traits within species at specific sites and across/between regions. When combined with the NEON airborne hyperspectral and LiDAR imagery, these data will be key to support validation efforts of existing algorithms for deriving canopy scale nitrogen, carbon and other foliar traits, as well as supporting development of data products that are informed by - and include - the ground data specifically, thereby potentially reducing uncertainties in the observational data products. Presented here are prototype datasets collected at NEON Domain 1 (Harvard Forest, summer 2012) and Domain 17 (San Joaquin Experiment Range, summer 2013). Lessons-learned from the field campaigns are discussed, along with preliminary results from the Harvard Forest campaign, which combine the field and the laboratory data in support of current algorithm validation efforts. Extension of these protocols to future NEON Domain characterization activities is also presented.
Analysis of scale effect in compressive ice failure and implications for design
Rocky Scott Taylor
2010-01-01
The main focus of the study was the analysis of scale effect in local ice pressure resulting from probabilistic (spalling) fracture and the relationship between local and global loads due to the averaging of pressures across the width of a structure. A review of fundamental theory, relevant ice mechanics and a critical analysis of data and theory related to the
Diagnosis of Plus Disease in Retinopathy of Prematurity Using Retinal Image multiScale Analysis
Rony Gelman; M. Elena Martinez-Perez; Deborah K. Vanderveen; Anne Moskowitz; Anne B. Fulton
2005-01-01
PURPOSE. To evaluate a semiautomated image analysis software package, Retinal Image multiScale Analysis (RISA), for the diagnosis of plus disease in preterm infants with retinopathy of prematurity (ROP). METHODS. Digital images of the posterior pole showing both disc and macula in preterm infants with ROP were analyzed with an enhanced version of RISA. Venules (N = 106) and arterioles (N
Stress analysis of 27% scale model of AH-64 main rotor hub
NASA Technical Reports Server (NTRS)
Hodges, R. V.
1985-01-01
Stress analysis of an AH-64 27% scale model rotor hub was performed. Component loads and stresses were calculated based upon blade root loads and motions. The static and fatigue analysis indicates positive margins of safety in all components checked. Using the format developed here, the hub can be stress checked for future application.
Applying Static Analysis to Large-scale, Multi-threaded Java Programs
Biere, Armin
Applying Static Analysis to Large-scale, Multi-threaded Java Programs. Cyrille Artho and Armin Biere, ETH Zürich, Switzerland ({artho,biere}@inf.ethz.ch). Abstract: Static analysis is a tremendous help when trying to find faults in complex software. Writing multi-threaded programs is difficult, because the thread scheduling increases
Selective harvesting by small-scale fisheries: ecosystem analysis of San Miguel Bay, Philippines
Pauly, Daniel
Selective harvesting by small-scale fisheries: ecosystem analysis of San Miguel Bay, Philippines. The estuarine ecosystem therein is described through a mass-balance model that includes 16 … Fisheries Research 53: 263-281. Indeed, the problems
ERIC Educational Resources Information Center
Frisby, Craig L.; Kim, Se-Kang
2008-01-01
Profile Analysis via Multidimensional Scaling (PAMS) is a procedure for extracting latent core profiles in a multitest data set. The PAMS procedure offers several advantages compared with other profile analysis procedures. Most notably, PAMS estimates individual profile weights that reflect the degree to which an individual's observed profile…
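The MDS step at the core of PAMS can be sketched with classical (Torgerson) scaling, which recovers low-dimensional coordinates from pairwise dissimilarities via double centering. The data below are random made-up points, not a multitest data set:

```python
import numpy as np

# Classical multidimensional scaling: recover coordinates from a matrix of
# pairwise distances. This is only the generic MDS step that PAMS builds on.
rng = np.random.default_rng(7)
X = rng.normal(size=(10, 2))                            # hidden 2-D configuration
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)    # pairwise distances

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                     # centering matrix
B = -0.5 * J @ (D ** 2) @ J                             # double-centered squared distances
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]                          # largest eigenvalues first
coords = vecs[:, order[:2]] * np.sqrt(vals[order[:2]])

# Distances are reproduced up to rotation/reflection of the configuration.
D_hat = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
print(f"max distance error: {np.abs(D - D_hat).max():.2e}")
```

Because the hidden configuration is exactly two-dimensional, the first two eigenvectors reproduce the distances to numerical precision; with real test-score dissimilarities the discarded eigenvalues measure the lack of fit.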
Technical Analysis of Scores on the "Self-Efficacy Self-Report Scale"
ERIC Educational Resources Information Center
Erford, Bradley T.; Schein, Hallie; Duncan, Kelly
2011-01-01
The purpose of this study was to provide preliminary analysis of reliability and validity of scores on the "Self-Efficacy Self-Report Scale", which was designed to assess general self-efficacy in students aged 10 to 17 years. Confirmatory factor analysis on cross-validated samples was conducted revealing a marginal fit of the data to the 19-item…
MULTI-SCALE ANALYSIS OF SKIN HYPER-PIGMENTATION EVOLUTION Sylvain Prigent1
Boyer, Edmond
to get an evolution curve per patient, or to do statistical analysis of a treatment's efficacy on a group in order to quantify the treatment efficacy on a patient. Index Terms: multi-scale analysis, statistical … analysis (ICA) [2]. We proposed a method based on support vector machines (SVM) in [2]. The second step
Conceptual design and analysis of a dynamic scale model of the Space Station Freedom
NASA Technical Reports Server (NTRS)
Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.
1994-01-01
This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluating the full-scale design and analysis databases, conducting scale-factor trade studies, and performing design-sensitivity studies. The scale-factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive the design, performance, and cost of an SSF dynamic scale model. Four scale-model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, the 1/4-scale size is recommended because of the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified that require high-fidelity replication. Lower-fidelity dynamic-similarity scaling can be used for non-critical components.
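For a near-replica model built from the same materials as the full-scale article, standard similitude laws tie every response to the geometric scale factor. The sketch below states these textbook replica-scaling relations, which are an assumption here rather than the report's own trade-study factors:

```python
# Textbook replica-scaling relations for a dynamically scaled structural
# model built from the full-scale materials (equal density and modulus).
def replica_scaling(scale):
    """Model-to-full-scale multipliers for a geometric scale factor
    (model length / full-scale length)."""
    return {
        "length":    1 / scale,      # full = model / scale
        "mass":      1 / scale**3,   # mass ~ length^3 at equal density
        "stiffness": 1 / scale,      # EA/L and EI/L^3 both ~ length
        "frequency": scale,          # f ~ sqrt(k/m) ~ 1/length
    }

factors = replica_scaling(0.25)      # the study's recommended 1/4 scale

# A 10 Hz mode measured on the 1/4-scale model maps to 2.5 Hz at full scale.
full_scale_freq = 10.0 * factors["frequency"]
print(full_scale_freq)
```

This inverse length-frequency relationship is why a subscale replica lets ground tests probe low full-scale modes at more convenient, higher model frequencies.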
Physical Analysis and Scaling of a Jet and Vortex Actuator
NASA Technical Reports Server (NTRS)
Lachowicz, Jason T.; Yao, Chung-Sheng; Joslin, Ronald D.
2004-01-01
Our previous studies have shown that the Jet and Vortex Actuator generates free-jet, wall-jet, and near-wall vortex flow fields. That is, the actuator can be operated in different modes by simply varying the driving frequency and/or amplitude. For this study, variations are made in the actuator plate and wide-slot widths and sine/asymmetrical actuator plate input forcing (drivers) to further study the actuator-induced flow fields. Laser sheet flow visualization, particle-image velocimetry, and laser velocimetry are used to measure and characterize the actuator-induced flow fields. Laser velocimetry measurements indicate that the vortex strength increases with the driver repetition rate for a fixed actuator geometry (wide slot and plate width). For a given driver repetition rate, the vortex strength increases as the plate width decreases provided the wide-slot to plate-width ratio is fixed. Using an asymmetric plate driver, a stronger vortex is generated for the same actuator geometry and a given driver repetition rate. The nondimensional scaling provides the approximate ranges for operating the actuator in the free-jet, wall-jet, or vortex flow regimes. Finally, phase-locked velocity measurements from particle image velocimetry indicate that the vortex structure is stationary, confirming previous computations. Both the computations and the particle image velocimetry measurements (as expected) show unsteadiness near the wide-slot opening, which is indicative of mass ejection from the actuator.
ANALYSIS OF TURBULENT MIXING JETS IN LARGE SCALE TANK
Lee, S.; Dimenna, Richard; Leishear, Robert; Stefanko, David
2007-03-28
Flow evolution models were developed to evaluate the performance of the new advanced design mixer pump for sludge mixing and removal operations with high-velocity liquid jets in one of the large-scale Savannah River Site waste tanks, Tank 18. This paper describes the computational model, the flow measurements used to provide validation data in the region far from the jet nozzle, the extension of the computational results to real tank conditions through the use of existing sludge suspension data, and finally, the sludge removal results from actual Tank 18 operations. A computational fluid dynamics approach was used to simulate the sludge removal operations. The models employed a three-dimensional representation of the tank with a two-equation turbulence model. Both the computational approach and the models were validated with onsite test data reported here and literature data. The model was then extended to actual conditions in Tank 18 through a velocity criterion to predict the ability of the new pump design to suspend settled sludge. A qualitative comparison with sludge removal operations in Tank 18 showed a reasonably good comparison with final results subject to significant uncertainties in actual sludge properties.
Global analysis of large-scale chemical and biological experiments.
Root, David E; Kelley, Brian P; Stockwell, Brent R
2002-05-01
Research in the life sciences is increasingly dominated by high-throughput data collection methods that benefit from a global approach to data analysis. Recent innovations that facilitate such comprehensive analyses are highlighted. Several developments enable the study of the relationships between newly derived experimental information, such as biological activity in chemical screens or gene expression studies, and prior information, such as physical descriptors for small molecules or functional annotation for genes. The way in which global analyses can be applied to both chemical screens and transcription profiling experiments using a set of common machine learning tools is discussed. PMID:12058610
Drift-scale thermomechanical analysis for the retrievability systems study
Tsai, F.C. [M& O/Woodward Clyde Federal Services, Las Vegas, NV (United States)
1996-04-01
A numerical method was used to estimate the stability of potential emplacement drifts without considering a ground support system as a part of the Thermal Loading Systems Study for the Yucca Mountain Site Characterization Project. The stability of the drift is evaluated with two variables: the level of thermal loading and the diameter of the emplacement drift. The analyses include the thermomechanical effects generated by the excavation of the drift, subsequently by the thermal loads from heat-emitting waste packages, and finally by the thermal reduction resulting from the rapid-cooling ventilation required for waste retrieval, if needed. The Discontinuous Deformation Analysis (DDA) code was used to analyze the thermomechanical response of a rock mass of multiple blocks separated by joints. The results of this stability analysis are used to discuss the geomechanical considerations for the advanced conceptual design (ACD) with respect to retrievability. In particular, based on the rock mass strength of the host rock described in the current version of the Reference Information Base, the computed thermal stresses generated by 111 MTU/acre thermal loads in the near field at 100 years after waste emplacement are beyond the criterion for the rock mass strength used to predict the stability of the rock mass surrounding the emplacement drift.
Brazilian version of the Jefferson Scale of Empathy: psychometric properties and factor analysis
2012-01-01
Background Empathy is a central characteristic of medical professionalism and has recently gained attention in medical education research. The Jefferson Scale of Empathy is the most commonly used measure of empathy worldwide, and to date it has been translated into 39 languages. This study aimed to adapt the Jefferson Scale of Empathy to the Brazilian culture and to test its reliability and validity among Brazilian medical students. Methods The Portuguese version of the Jefferson Scale of Empathy was adapted to Brazil using back-translation techniques. This version was pretested among 39 fifth-year medical students in September 2010. During the final fifth- and sixth-year Objective Structured Clinical Examination (October 2011), 319 students were invited to respond to the scale anonymously. Cronbach’s alpha, exploratory factor analysis, item-total correlation, and gender comparisons were performed to check the reliability and validity of the scale. Results The student response rate was 93.7% (299 students). Cronbach’s alpha coefficient for the scale was 0.84. A principal component analysis confirmed the construct validity of the scale for three main factors: Compassionate Care (first factor), Ability to Stand in the Patient’s Shoes (second factor), and Perspective Taking (third factor). Gender comparisons did not reveal differences in the scores between female and male students. Conclusions The adapted Brazilian version of the Jefferson Scale of Empathy proved to be a valid, reliable instrument for use in national and cross-cultural studies in medical education. PMID:22873730
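The reported internal-consistency coefficient can be computed directly from an item-score matrix. A minimal sketch with hypothetical Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert responses (5 respondents x 4 items), made up for
# illustration; highly consistent items give a high alpha.
scores = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 5, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Alpha rises when items covary strongly relative to their individual variances, which is why a value such as the study's 0.84 is read as good internal consistency for a 20-item scale.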
Stewart Barr; Andrew Gilg; Nicholas Ford
2005-01-01
This paper examines the structure of waste reduction, reuse and recycling behavior within the context of wider research on environmental action in and around the home. Using a sample of 1265 households from Devon, England, the research examined a range of environmental behaviors, focusing on energy saving, water conservation, green consumerism and waste management. Using factor analysis, the data were
Scaling Analysis for the Direct Reactor Auxiliary Cooling System for AHTRs
Yoder Jr, Graydon L [ORNL]; Wilson, Dane F [ORNL]; Wang, X. [Ohio State University]; Lv, Q. [Ohio State University]; Sun, X. [Ohio State University]; Christensen, R. N. [Ohio State University]; Blue, T. E. [Ohio State University]; Subharwall, Piyush [Idaho National Laboratory (INL)]
2011-01-01
The Direct Reactor Auxiliary Cooling System (DRACS), shown in Fig. 1 [1], is a passive heat removal system proposed for the Advanced High-Temperature Reactor (AHTR). It features three coupled natural circulation/convection loops relying entirely on buoyancy as the driving force. A prototypic design of the DRACS employed in a 20-MWth AHTR was discussed in our previous work [2]. The total height of the DRACS is usually more than 10 m, and the required heating power would be large (on the order of 200 kW), both of which make a full-scale experiment infeasible in our laboratory. This motivates us to perform a scaling analysis for the DRACS to obtain a scaled-down model. In this paper, the theory and methodology for such a scaling analysis are presented.
Adapting and Validating a Scale to Measure Sexual Stigma among Lesbian, Bisexual and Queer Women
Logie, Carmen H.; Earnshaw, Valerie
2015-01-01
Lesbian, bisexual and queer (LBQ) women experience pervasive sexual stigma that harms wellbeing. Stigma is a multi-dimensional construct and includes perceived stigma, awareness of negative attitudes towards one’s group, and enacted stigma, overt experiences of discrimination. Despite its complexity, sexual stigma research has generally explored singular forms of sexual stigma among LBQ women. The study objective was to develop a scale to assess perceived and enacted sexual stigma among LBQ women. We adapted a sexual stigma scale for use with LBQ women. The validation process involved 3 phases. First, we held a focus group where we engaged a purposively selected group of key informants in cognitive interviewing techniques to modify the survey items to enhance relevance to LBQ women. Second, we implemented an internet-based, cross-sectional survey with LBQ women (n=466) in Toronto, Canada. Third, we administered an internet-based survey at baseline and 6-week follow-up with LBQ women in Toronto (n=24) and Calgary (n=20). We conducted an exploratory factor analysis using principal components analysis and descriptive statistics to explore health and demographic correlates of the sexual stigma scale. Analyses yielded one scale with two factors: perceived and enacted sexual stigma. The total scale and subscales demonstrated adequate internal reliability (total scale alpha coefficient: 0.78; perceived sub-scale: 0.70; enacted sub-scale: 0.72), test-retest reliability, and construct validity. Perceived and enacted sexual stigma were associated with higher rates of depressive symptoms and lower self-esteem, social support, and self-rated health scores. Results suggest this sexual stigma scale adapted for LBQ women has good psychometric properties and addresses enacted and perceived stigma dimensions. The overwhelming majority of participants reported experiences of perceived sexual stigma. 
This underscores the importance of moving beyond a singular focus on discrimination to explore perceptions of social judgment, negative attitudes and social norms. PMID:25679391
Meta-Analysis of the MMPI-2 Fake Bad Scale: Utility in Forensic Practice
Nathaniel W. Nelson; Jerry J. Sweet; George J. Demakis
2006-01-01
Clinical researchers disagree regarding the clinical utility of the MMPI-2 Fake Bad scale (FBS) within forensic and clinical settings. The present meta-analysis summarizes weighted effect size differences between the FBS and other commonly used validity scales (L, F, K, Fb, Fp, F-K, O-S, Ds2, Dsr2) in symptom-overreporting and comparison groups. Forty studies that included FBS were
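Weighted effect-size summaries of the kind this meta-analysis reports are typically built from a standardized mean difference per study combined by inverse-variance weighting. The sketch below is a generic fixed-effect computation with made-up numbers, not the study's data; the function names are illustrative:

```python
import numpy as np

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d (pooled SD) and its approximate sampling variance."""
    sp = np.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    return d, var

def fixed_effect_summary(d, var_d):
    """Inverse-variance weighted mean effect size and its standard error."""
    d = np.asarray(d, dtype=float)
    w = 1.0 / np.asarray(var_d, dtype=float)     # weight = 1 / variance
    d_bar = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return d_bar, se
```

Studies with smaller sampling variance (usually larger samples) thus pull the summary effect size toward their estimates.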
A multi-scale segmentation/object relationship modelling methodology for landscape analysis
C. Burnett; Thomas Blaschke
2003-01-01
Natural complexity can best be explored using spatial analysis tools based on concepts of landscape as process continuums that can be partially decomposed into objects or patches. We introduce a five-step methodology based on multi-scale segmentation and object relationship modelling. Hierarchical patch dynamics (HPD) is adopted as the theoretical framework to address issues of heterogeneity, scale, connectivity and quasi-equilibriums in
Large-Scale Flows in the Convection Zone Using Ring-Diagram Analysis
D. A. Haber
1999-01-01
There are many important scales of motion in the solar convection zone that require intensive investigations by new helioseismic methods. A number of studies have focused on solar rotation and meridional flows and have provided significant information for solar models. The local technique known as ring-diagram analysis is most effective for inferring large-scale velocity flows in layers above 0.97 R_sun.
Kato, Takahiro A.; Watabe, Motoki; Kanba, Shigenobu
2013-01-01
Neurons and synapses have long been the dominant focus of neuroscience, and thus the pathophysiology of psychiatric disorders has come to be understood within the neuronal doctrine. However, the majority of cells in the brain are not neurons but glial cells, including astrocytes, oligodendrocytes, and microglia. Traditionally, neuroscientists regarded glial functions as simply providing physical support and maintenance for neurons; in this limited role, glia were long ignored. Recently, glial functions have been gradually investigated, and increasing evidence has suggested that glial cells perform important roles in various brain functions. Uncovering glial functions, further understanding these crucial cells, and studying the interaction between neurons and glia may shed new light on many unknown aspects, including the mind-brain gap and conscious-unconscious relationships. We briefly review the current situation of glial research in the field and propose a novel translational research program with a multi-dimensional model, combining various experimental approaches such as animal studies, in vitro and in vivo neuron-glia studies, a variety of human brain imaging investigations, and psychometric assessments. PMID:24155727
Kalkan, Erol; Chopra, Anil K.
2010-01-01
Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-'mode' inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-'mode' pushover analysis. Appropriate for first-'mode' dominated structures, this approach is extended for structures with significant contributions of higher modes by considering elastic deformation of the second-'mode' SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span 'ordinary standard' bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
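As an illustration of the scaling step, the sketch below integrates an elastic SDF oscillator under a ground-acceleration record with the Newmark average-acceleration method and derives the factor that matches a target deformation. This is a simplified, elastic stand-in: the actual MPS method targets the inelastic first-'mode' SDF system, for which the factor must be found iteratively. All names and parameters are illustrative:

```python
import numpy as np

def sdf_peak_deformation(ag, dt, Tn, zeta=0.05):
    """Peak deformation of an elastic SDF oscillator (unit mass) with period Tn
    and damping ratio zeta under ground acceleration ag, via Newmark
    average-acceleration time stepping."""
    wn = 2.0 * np.pi / Tn
    k, c = wn ** 2, 2.0 * zeta * wn                 # unit-mass stiffness, damping
    kh = k + 2.0 * c / dt + 4.0 / dt ** 2           # effective stiffness
    p = -np.asarray(ag, dtype=float)                # effective force for unit mass
    u = v = 0.0
    a = p[0]                                        # initial acceleration (u = v = 0)
    umax = 0.0
    for i in range(len(p) - 1):
        dp = p[i + 1] - p[i] + (4.0 / dt + 2.0 * c) * v + 2.0 * a
        du = dp / kh
        dv = 2.0 * du / dt - 2.0 * v
        da = 4.0 * (du / dt ** 2 - v / dt) - 2.0 * a
        u, v, a = u + du, v + dv, a + da
        umax = max(umax, abs(u))
    return umax

def scale_factor_elastic(ag, dt, Tn, u_target, zeta=0.05):
    """Factor scaling the record so the elastic SDF peak matches u_target.
    For a linear system the response scales linearly with the record."""
    return u_target / sdf_peak_deformation(ag, dt, Tn, zeta)
```

For the inelastic SDF system of the MPS method the response is no longer proportional to the record, so this closed-form factor becomes only the starting point of an iterative search.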
Intrinsic multi-scale analysis: a multi-variate empirical mode decomposition framework
Looney, David; Hemakom, Apit; Mandic, Danilo P.
2015-01-01
A novel multi-scale approach for quantifying both inter- and intra-component dependence of a complex system is introduced. This is achieved using empirical mode decomposition (EMD), which, unlike conventional scale-estimation methods, obtains a set of scales reflecting the underlying oscillations at the intrinsic scale level. This enables the data-driven operation of several standard data-association measures (intrinsic correlation, intrinsic sample entropy (SE), intrinsic phase synchrony) and, at the same time, preserves the physical meaning of the analysis. The utility of multi-variate extensions of EMD is highlighted, both in terms of robust scale alignment between system components, a pre-requisite for inter-component measures, and in the estimation of feature relevance. We also illuminate that the properties of EMD scales can be used to decouple amplitude and phase information, a necessary step in order to accurately quantify signal dynamics through correlation and SE analysis which are otherwise not possible. Finally, the proposed multi-scale framework is applied to detect directionality, and higher order features such as coupling and regularity, in both synthetic and biological systems. PMID:25568621
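Sample entropy, one of the intrinsic measures named above, is computed per scale once the intrinsic mode functions have been extracted. Below is a standard SampEn(m, r) implementation (Chebyshev distance, self-matches excluded); the EMD decomposition itself is assumed to come from elsewhere (e.g. a multivariate EMD library) and is not shown:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D series: -ln(A/B), where B counts pairs of
    length-m templates within tolerance r (Chebyshev distance) and A counts
    the corresponding length-(m+1) pairs. Self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                 # common default: 20% of the SD
    N = len(x)

    def count_matches(mm):
        n = N - m                          # same template count for both lengths
        templ = np.array([x[i:i + mm] for i in range(n)])
        count = 0
        for i in range(n - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```

A perfectly regular series yields SampEn near zero, while noise yields a high value, which is why the measure separates regularity from randomness at each intrinsic scale.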
A new approach for modeling and analysis of molten salt reactors using SCALE
Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)
2013-07-01
The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options is sorted into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the evaluation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)
Lopez, C.; Koski, J.A.; Razani, A.
2000-01-06
A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to those of a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.
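The forward step of such an error study (generating synthetic temperature data for a known applied flux) can be illustrated in one dimension. The sketch below is a generic explicit finite-difference slab model with hypothetical material parameters; it stands in for, and is much simpler than, the MSC P/Thermal model or SODDIT:

```python
import numpy as np

def forward_conduction_1d(q_surface, nx=21, L=0.01, alpha=1e-5, k=15.0,
                          dt=None, t_end=10.0, T0=300.0):
    """Synthetic temperature histories for a 1-D slab of thickness L (m):
    constant heat flux q_surface (W/m^2) applied at x = 0, insulated at x = L.
    Explicit FTCS finite differences; all material properties are hypothetical."""
    dx = L / (nx - 1)
    if dt is None:
        dt = 0.4 * dx ** 2 / alpha       # below the stability limit dx^2/(2*alpha)
    r = alpha * dt / dx ** 2
    T = np.full(nx, float(T0))
    history = [T.copy()]
    for _ in range(round(t_end / dt)):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        # flux boundary via ghost node: -k*dT/dx = q  ->  T_ghost = T[1] + 2*dx*q/k
        ghost = T[1] + 2.0 * dx * q_surface / k
        Tn[0] = T[0] + r * (T[1] - 2.0 * T[0] + ghost)
        Tn[-1] = T[-1] + 2.0 * r * (T[-2] - T[-1])   # insulated far face
        T = Tn
        history.append(T.copy())
    return np.array(history)
```

An inverse code such as SODDIT takes temperature histories like these (usually at interior or back-face locations) and estimates the surface flux that produced them; the study above quantifies how that estimate degrades when the true heat input is two-dimensional.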
IRLbot: design and performance analysis of a large-scale web crawler
Lee, Hsin-Tsang
2008-10-10
IRLbot: Design and Performance Analysis of a Large-Scale Web Crawler. A thesis by Hsin-Tsang Lee, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 2008. Major subject: Computer Science.
NASA Astrophysics Data System (ADS)
Verma, Surendra P.; Pandarinath, Kailasa; Verma, Sanjeet K.; Agrawal, Salil
2013-05-01
For the discrimination of four tectonic settings of island arc, continental arc, within-plate (continental rift and ocean island together), and collision, we present three sets of new diagrams obtained from linear discriminant analysis of natural logarithm transformed ratios of major elements, immobile major and trace elements, and immobile trace elements in acid magmas. The use of discordant outlier-free samples prior to linear discriminant analysis improved the success rates by about 3% on average. Success rates of these new diagrams were acceptably high (about 69% to 97% for the first set, about 69% to 99% for the second set, and about 60% to 96% for the third set). Testing of these diagrams for acid rock samples (not used for constructing them) from known tectonic settings confirmed their overall good performance. Application of these new diagrams to Precambrian case studies provided the following generally consistent results: a continental arc setting for the Caribou greenstone belt (Canada) at about 3000 Ma, São Francisco craton (Brazil) at about 3085-2983 Ma, Penakacherla greenstone terrane (Dharwar craton, India) at about 2700 Ma, and Adola (Ethiopia) at about 885-765 Ma; a transitional continental arc to collision setting for the Rio Maria terrane (Brazil) at about 2870 Ma and the Eastern felsic volcanic terrain (India) at about 2700 Ma; a collision setting for the Kolar suture zone (India) at about 2610 Ma and the Korpo area (Finland) at about 1852 Ma; and a within-plate (likely a continental rift) setting for the Malani igneous suite (India) at about 745-700 Ma. These applications suggest the utility of the new discrimination diagrams for all four tectonic settings. In fact, all three sets of diagrams were shown to be robust against post-emplacement compositional changes caused by analytical errors, element mobility related to low- or high-temperature alteration, or Fe-oxidation caused by weathering.
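The two core ingredients of such diagrams are the natural-log ratio transform and linear discriminant analysis. The sketch below shows both in their simplest two-class (Fisher) form; the study itself discriminates four settings with multi-group LDA, and all data here are synthetic:

```python
import numpy as np

def log_ratios(X, denom_idx):
    """Natural-log element-ratio transform: ln(x_j / x_denom) for j != denom_idx."""
    X = np.asarray(X, dtype=float)
    cols = [j for j in range(X.shape[1]) if j != denom_idx]
    return np.log(X[:, cols] / X[:, [denom_idx]])

def fisher_lda(X1, X2):
    """Two-class Fisher linear discriminant: returns the projection vector w and
    the midpoint threshold c; classify x as class 1 when x @ w > c."""
    X1, X2 = np.asarray(X1, dtype=float), np.asarray(X2, dtype=float)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
          + np.cov(X2, rowvar=False) * (len(X2) - 1))   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m2)
    c = w @ (m1 + m2) / 2.0
    return w, c
```

The log-ratio step makes compositional data approximately unbounded and symmetric before the discriminant functions are fitted, which is why the diagrams are constructed in log-ratio space rather than on raw concentrations.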
Multi-scale analysis of the spatial variability of soil organic carbon
NASA Astrophysics Data System (ADS)
Stevens, François; Bogaert, Patrick; van Wesemael, Bas
2014-05-01
Information on soil properties and state is required for food security, global environmental management and climate change mitigation. Therefore, substantial efforts are devoted to the collection of soil data of many types and at very different spatial scales. Moreover, soil organic carbon dynamics models at the regional or global level and integrated soil policies require predicting soil properties over extensive areas while keeping a resolution of a few meters. However, predicting soil properties at fine resolution over large areas is challenging, since soil properties are generally the result of a large number of soil processes that may act at very different spatial scales. Indeed, both the strength and the nature of the link between soil properties and environmental factors depend on the scale at which we look. Therefore, the characterization of the link between a soil property and a given controlling factor may be complicated by variability in the soil property resulting from additional processes acting at other spatial scales. We propose a geostatistical analysis method to decompose the spatial information on a soil property into multiple scale components. The variogram of soil properties is modeled by a function that is the sum of multiple sub-models with different ranges. Each sub-model can be used separately to predict the soil property at a particular scale. The analysis was performed in the Belgian Loess Belt with the legacy dataset Aardewerk. The method allowed us to highlight relationships between soil properties at particular spatial scales that were hardly observable without spatial decomposition. In particular, the link between texture and organic carbon, or between topsoil and subsoil organic carbon, appeared more clearly at the coarsest scale. Besides allowing a better understanding of the controls on soil variables, the method provides a way to improve prediction of soil variables when different covariates are available at different scales.
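The decomposition described, a variogram modeled as the sum of sub-models with different ranges, is commonly called a nested variogram. A minimal sketch with spherical sub-models follows; the sills and ranges are illustrative values, not the Aardewerk fit:

```python
import numpy as np

def spherical(h, sill, rng):
    """Spherical variogram sub-model with the given sill and range."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)   # flat at the sill beyond the range

def nested_variogram(h, nugget, structures):
    """Nugget plus a sum of spherical sub-models [(sill, range), ...];
    each sub-model captures variability at one spatial scale."""
    g = np.full_like(np.asarray(h, dtype=float), nugget)
    for sill, rng in structures:
        g = g + spherical(h, sill, rng)
    return g
```

Because each sub-model saturates at its own range, kriging with a single sub-model isolates the variability (and hence the predictions) attributable to that spatial scale, which is the basis of the scale-by-scale maps described above.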
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples with ordinal variables, a more challenging variable type in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
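The ordinal-to-interval conversion followed by clustering can be sketched as follows: ordinal levels are treated as interval data and fed to a tiny two-cluster k-means. This is a generic illustration with synthetic WBCD-like 10-level scores, not the two clustering methods actually used in the study:

```python
import numpy as np

def kmeans2(X, n_iter=100):
    """Minimal two-cluster k-means for ordinal scores treated as interval data.
    Centers are seeded with the rows of smallest and largest level sums (a
    cheap deterministic heuristic, not the initialization used in the study)."""
    X = np.asarray(X, dtype=float)
    s = X.sum(axis=1)
    centers = X[[int(np.argmin(s)), int(np.argmax(s))]].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # assign each row to the nearest center (Euclidean distance)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Treating ordinal levels as equally spaced interval values is itself an assumption; methods designed for ordinal scales, as discussed in the abstract, avoid it.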
NASA Astrophysics Data System (ADS)
Boularas, A.; Baudoin, F.; Villeneuve-Faure, C.; Clain, S.; Teyssedre, G.
2014-08-01
Electric force-distance curves (EFDC) are one of the ways whereby electrical charges trapped at the surface of dielectric materials can be probed. To reach a quantitative analysis of stored charge quantities, measurements using an Atomic Force Microscope (AFM) must be accompanied by an appropriate simulation of the electrostatic forces at play in the method. This is the objective of this work, where simulation results for the electrostatic force between an AFM sensor and the dielectric surface are presented for different bias voltages on the tip. The aim is to analyse the modification of force-distance curves induced by electrostatic charges. The sensor is composed of a cantilever supporting a pyramidal tip terminated by a spherical apex. The contribution to the force from the cantilever is neglected here. A model of the force curve has been developed using the Finite Volume Method. The scheme is based on the Polynomial Reconstruction Operator (PRO scheme). First results of the computation of the electrostatic force for different tip-sample distances (from 0 to 600 nm) and for different DC voltages applied to the tip (6 to 20 V) are shown and compared with experimental data in order to validate our approach.
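For orientation, the order of magnitude of such tip-sample forces can be estimated with a closed-form sphere-plane capacitance approximation in place of the paper's finite-volume model. The sketch below uses a hypothetical apex radius and sweeps the distance and bias ranges mentioned in the abstract:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def sphere_plane_force(z, R, V):
    """Attractive electrostatic force (N) between a spherical tip apex of
    radius R (m) at distance z (m) above a grounded conducting plane, biased
    at V volts. Hudlet-type sphere-plane analytic approximation, used here
    only as a rough stand-in for the full finite-volume computation."""
    z = np.asarray(z, dtype=float)
    return np.pi * EPS0 * V ** 2 * R ** 2 / (z * (z + R))

# force-distance curves over roughly the ranges explored in the paper
z = np.linspace(1e-9, 600e-9, 600)                       # 1-600 nm
curves = {V: sphere_plane_force(z, 20e-9, V) for V in (6, 10, 20)}
```

The quadratic dependence on V and the rapid decay with z are the two qualitative features any force-distance model of this geometry should reproduce.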
Informatics Strategies for Large-Scale Novel Cross-linking Analysis
Gordon A. Anderson; Nikola Tolic; Xiaoting Tang; Chunxiang Zheng; James E. Bruce
2007-01-01
The analysis of protein interactions in biological systems represents a significant challenge for today's technology. Chemical cross-linking provides the potential to impart new chemical bonds in a complex system that result in mass changes in the analysis of a set of tryptic peptides. However, system complexity and cross-linking product heterogeneity have precluded widespread chemical cross-linking use for large-scale identification of
Analysis of Scaling Strategies for Sub30 nm Double-Gate SOI N-MOSFETs
Nicola Barin; Marco Braccioli; Claudio Fiegna; Enrico Sangiorgi
2007-01-01
State-of-the-art device simulation is applied to the analysis of possible scaling strategies for the future CMOS technology, adopting the ultrathin silicon body (UTB) double-gate (DG) MOSFET and considering the main figures of merit (FOM) for the high-performance N-MOS transistor. The results of our analysis confirm the potentials of UTB-DG MOSFETs. In particular, the possibility to control the short-channel effects by
SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems
Mueller, Don [ORNL]; Rearden, Bradley T. [ORNL]
2009-01-01
Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving 233U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196 issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.
Large-scale analysis of the yeast proteome by multidimensional protein identification technology
Michael P. Washburn; Dirk Wolters; John R. Yates III
2001-01-01
We describe a largely unbiased method for rapid and large-scale proteome analysis by multidimensional liquid chromatography, tandem mass spectrometry, and database searching by the SEQUEST algorithm, named multidimensional protein identification technology (MudPIT). MudPIT was applied to the proteome of the Saccharomyces cerevisiae strain BJ5460 grown to mid-log phase and yielded the largest proteome analysis to date. A total of 1,484