Sibley, Chris G; Houkamau, Carla A
2013-01-01
We argue that there is a need for culture-specific measures of identity that delineate the factors that most make sense for specific cultural groups. One such measure, recently developed specifically for Māori peoples, is the Multi-Dimensional Model of Māori Identity and Cultural Engagement (MMM-ICE). Māori are the indigenous peoples of New Zealand. The MMM-ICE is a 6-factor measure that assesses the following aspects of identity and cultural engagement as Māori: (a) group membership evaluation, (b) socio-political consciousness, (c) cultural efficacy and active identity engagement, (d) spirituality, (e) interdependent self-concept, and (f) authenticity beliefs. This article examines the scale properties of the MMM-ICE using item response theory (IRT) analysis in a sample of 492 Māori. The MMM-ICE subscales showed reasonably even levels of measurement precision across the latent trait range. Analysis of age (cohort) effects further indicated that most aspects of Māori identification tended to be higher among older Māori, and these cohort effects were similar for both men and women. This study provides novel support for the reliability and measurement precision of the MMM-ICE. The study also provides a first step in exploring change and stability in Māori identity across the life span. A copy of the scale, along with recommendations for scale scoring, is included. PMID:23356361
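As a hedged illustration of the IRT machinery behind such measurement-precision claims (not the authors' actual MMM-ICE analysis), the Fisher information of a two-parameter logistic (2PL) item can be sketched as follows; the discrimination and difficulty values are invented for illustration:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a two-parameter logistic (2PL) IRT item
    at ability level theta: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Information peaks where ability matches item difficulty (P = 0.5),
# which is why measurement precision varies across the latent trait range.
peak = item_information(0.0, a=1.5, b=0.0)   # 1.5**2 * 0.25 = 0.5625
```

Summing such curves over a subscale's items gives the test information function, whose evenness across theta is what "even measurement precision" refers to.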
NASA Astrophysics Data System (ADS)
Milledge, D.; Bellugi, D.; McKean, J. A.; Dietrich, W.
2012-12-01
The infinite slope model is the basis for almost all watershed scale slope stability models. However, it assumes that a potential landslide is infinitely long and wide. As a result, it cannot represent resistance at the margins of a potential landslide (e.g. from lateral roots), and is unable to predict the size of a potential landslide. Existing three-dimensional models generally require computationally expensive numerical solutions and have previously been applied only at the hillslope scale. Here we derive an alternative analytical treatment that accounts for lateral resistance by representing the forces acting on each margin of an unstable block. We apply 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure using a log-spiral method on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion is an exponential function of soil depth. We benchmark this treatment against other more complete approaches (Finite Element (FE) and closed form solutions) and find that our model: 1) converges on the infinite slope predictions as length / depth and width / depth ratios become large; 2) agrees with the predictions from state-of-the-art FE models to within +/- 30% error, for the specific cases in which these can be applied. We then test our model's ability to predict failure of an actual (mapped) landslide where the relevant parameters are relatively well constrained. We find that our model predicts failure at the observed location with a nearly identical shape and predicts that larger or smaller shapes conformal to the observed shape are indeed more stable. Finally, we perform a sensitivity analysis using our model to show that lateral reinforcement sets a minimum landslide size, while the additional strength at the downslope boundary means that the optimum shape for a given size is longer in a downslope direction. 
However, reinforcement effects cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of spatial patterns of key parameters (e.g. pore water pressure) and motivating the model's watershed scale application. This watershed scale application requires an efficient method to find the least stable shapes among an almost infinite set. However, when applied in this context, it allows a more complete examination of the controls on landslide size, shape and location.
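The infinite slope model that this work extends can be sketched in a few lines. This is the textbook one-dimensional formulation, not the authors' lateral-resistance extension, and the parameter values are illustrative only:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Factor of safety for the classical infinite slope model.

    c: soil cohesion (Pa), phi_deg: friction angle (degrees),
    gamma: soil unit weight (N/m^3), z: soil depth (m),
    beta_deg: slope angle (degrees), u: pore water pressure (Pa).
    FS < 1 indicates predicted failure; no lateral resistance is included.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Dry cohesionless soil on a slope at the friction angle sits exactly at
# limit equilibrium (FS = tan(phi) / tan(beta) = 1):
fs = infinite_slope_fs(c=0.0, phi_deg=30.0, gamma=18000.0, z=1.0, beta_deg=30.0)
```

Because every term scales with the same slab, the prediction is independent of landslide length and width, which is exactly the limitation the paper's three-dimensional treatment addresses.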
NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo! Japan website ( http://finance.yahoo.co.jp/ ). By applying multi-dimensional scaling (MDS) to the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set with a mixture of Gaussians. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It might naturally be assumed that all the two-dimensional data points of stocks shrink into a single small region when an economic crisis takes place. The justification of this assumption is numerically checked against empirical Japanese stock data, for instance, those around 11 March 2011.
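A minimal sketch of the embedding step, assuming classical (Torgerson) MDS and a common correlation-to-distance transform d_ij = sqrt(2(1 - C_ij)); the 3x3 correlation matrix is invented for illustration, and the authors' Gaussian-mixture/AIC clustering step is not reproduced here:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions so that
    Euclidean distances approximate the given n x n distance matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centred Gram matrix
    w, v = np.linalg.eigh(b)
    idx = np.argsort(w)[::-1][:k]              # top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Distances derived from a (toy) stock correlation matrix C:
c = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
coords = classical_mds(np.sqrt(2.0 * (1.0 - c)))   # one 2-D point per stock
```

Highly correlated stocks land close together in the plane, which is what makes the "shrink into a single region during a crisis" hypothesis visually checkable.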
Development of a multi-dimensional body image scale for Malaysian female adolescents.
Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin
2008-01-01
The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, the Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and the Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from the selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between the Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders, and self-esteem were assessed for construct validity. A seven-factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multi-dimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders, and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept of body image and provides new insight into its multi-dimensionality in Malaysian female adolescents, with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used in future intervention programs to identify female adolescents who are potentially at risk of developing body image disturbance. PMID:20126371
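Internal consistency of the kind reported above is usually summarized with Cronbach's alpha; a minimal sketch of the statistic itself, not tied to the 62-item scale:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Two perfectly parallel items give the maximum alpha of 1.0:
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```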
The Earth Mover's Distance, Multi-Dimensional Scaling, and Color-Based Image Retrieval
Tomasi, Carlo
Rubner, Yossi. The use of multi-dimensional scaling (MDS) techniques to embed a group of images as points in a two- or three-dimensional geometric space so that their locations reflect differences and similarities in their color distributions.
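For one-dimensional histograms with equal total mass, the Earth Mover's Distance reduces to the area between the cumulative distributions; a sketch of that special case (the full EMD on color signatures requires solving a transportation problem, which is not shown here):

```python
import numpy as np

def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms of equal total
    mass on the same unit-width bins: the area between their CDFs."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.abs(np.cumsum(p - q)).sum()

# Moving one unit of mass across two bins costs 2:
cost = emd_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```

Pairwise EMDs between image color distributions form exactly the kind of dissimilarity matrix that MDS can then embed in two or three dimensions.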
Development of a Multi-Dimensional Scale for PDD and ADHD
ERIC Educational Resources Information Center
Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya
2011-01-01
A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…
Rübel, Oliver; Ahern, Sean; Bethel, E Wes; Biggin, Mark D; Childs, Hank; Cormier-Michel, Estelle; Depace, Angela; Eisen, Michael B; Fowlkes, Charless C; Geddes, Cameron G R; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keränen, Soile V E; Knowles, David W; Hendriks, Cris L Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H; Wu, Kesheng
2010-05-01
Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies -such as efficient data management- supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach. PMID:23762211
Indoor People Tracking Based on Dynamic Weighted MultiDimensional Scaling
Cabero, Jose Maria. In this paper, we explore the use of short-range radio technologies to track people indoors.
A Monte Carlo Evaluation of Interactive Multi-dimensional Scaling
ERIC Educational Resources Information Center
Girard, Roger A.; Cliff, Norman
1976-01-01
An experimental procedure involving interaction between subject and computer was used to determine an optimum subset of stimuli for multidimensional scaling (MDS). A computer program evaluated this procedure against MDS based on (a) all pairs of stimuli and (b) one-third of the possible pairs. The new method was better. (Author/HG)
A revised Thai Multi-Dimensional Scale of Perceived Social Support.
Wongpakaran, Nahathai; Wongpakaran, Tinakon
2012-11-01
In order to ensure the construct validity of the three-factor model of the Multi-dimensional Scale of Perceived Social Support (MSPSS), and based on the assumption that it helps users differentiate between sources of social support, in this study a revised version was created and tested. The aim was to compare the level of model fit of the original version of the MSPSS against the revised version, which contains a minor change from the original. The study was conducted on 486 medical students who completed the original and revised versions of the MSPSS, as well as the Rosenberg Self-Esteem Scale (Rosenberg, 1965) and the Beck Depression Inventory II (Beck, Steer, & Brown, 1996). Confirmatory factor analysis was performed to compare the results, showing that the revised version of the MSPSS demonstrated good internal consistency, with a Cronbach's alpha of .92 for the full questionnaire, and correlated significantly with the other scales, as predicted. The revised version provided better internal consistency, increasing the Cronbach's alpha for the Significant Others sub-scale from 0.86 to 0.92. Confirmatory factor analysis revealed an acceptable model fit: χ² = 128.11, df = 51, p < .001; TLI = 0.94; CFI = 0.95; GFI = 0.90; PNFI = 0.71; AGFI = 0.85; RMSEA = 0.093 (0.073-0.113); SRMR = 0.042, which is better than the original version. The new version tended to display a better level of fit with a larger sample size. The limitations of the study are discussed, as well as recommendations for further study. PMID:23156952
AstroMD: A Multi Dimensional Visualization and Analysis Toolkit for Astrophysics
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Gheller, C.; Calori, L.; Buonomo, F.; Imboden, S.
2010-10-01
Over the past few years, the role of visualization for scientific purposes has grown enormously. Astronomy makes extensive use of visualization techniques to analyze data, and scientific visualization has become a fundamental part of modern research in astronomy. With the evolution of high-performance computers, numerical simulations have assumed a great role in scientific investigation, allowing users to run simulations at ever higher resolution. Data produced in these simulations are often multi-dimensional arrays comprising several physical quantities, and they are very hard to manage and analyze efficiently. Consequently, data analysis and visualization tools must keep pace with the requirements of the research. AstroMD is a tool for the analysis and visualization of astrophysical data that can manage different physical quantities and multi-dimensional data sets. The tool uses virtual reality techniques by which the user has the impression of travelling through a computer-based multi-dimensional model.
How Fitch-Margoliash Algorithm can Benefit from Multi Dimensional Scaling
Lespinats, Sylvain; Grando, Delphine; Maréchal, Eric; Hakimi, Mohamed-Ali; Tenaillon, Olivier; Bastien, Olivier
2011-01-01
Whatever the phylogenetic method, genetic sequences are often described as strings of characters, so molecular sequences can be viewed as elements of a multi-dimensional space. As a consequence, studying motion in this space (i.e., the evolutionary process) must contend with the surprising features of high-dimensional spaces, such as the concentration of measure phenomenon. To study how these features might influence phylogeny reconstructions, we examined a particularly popular method: the Fitch-Margoliash algorithm, which belongs to the Least Squares methods. We show that the Least Squares methods are closely related to Multi-Dimensional Scaling. Indeed, the criteria for Fitch-Margoliash and Sammon's mapping are somewhat similar. However, prolific research in Multi-Dimensional Scaling has produced methods that clearly outclass Sammon's mapping, and Least Squares methods for tree reconstruction can now take advantage of these improvements. Still, "false neighborhoods" and "tears" are the two main risks in the dimensionality-reduction field: a "false neighborhood" occurs when data that are widely separated in the original space are found close together in the representation space, while neighboring data displayed in remote positions constitute a "tear". To address this problem, we adapted the concepts of "continuity" and "trustworthiness", which limit the risk of "false neighborhoods" and "tears", to the tree reconstruction field. We also point out the concentration of measure phenomenon as a source of error and introduce new criteria to build phylogenies with improved preservation of distances and robustness. The authors and the Evolutionary Bioinformatics journal dedicate this article to the memory of Professor W.M. Fitch (1929-2011). PMID:21697992
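Sammon's stress, the mapping criterion the authors relate to the Least Squares tree criterion, can be written compactly; this is the standard formula, sketched under the assumption that distances are passed as matched flat vectors:

```python
import numpy as np

def sammon_stress(d_orig, d_embed):
    """Sammon's stress: the error of an embedding, weighting each pair by
    the inverse of its original distance so that small distances matter
    more. Both arguments are flat vectors of pairwise distances in
    matching order (e.g. the upper triangle of a distance matrix)."""
    d_orig = np.asarray(d_orig, dtype=float)
    d_embed = np.asarray(d_embed, dtype=float)
    return ((d_orig - d_embed) ** 2 / d_orig).sum() / d_orig.sum()

# A perfect embedding has zero stress:
stress = sammon_stress([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```

In the tree-reconstruction analogy, d_embed would be the path-length distances induced by a candidate tree, and minimizing a weighted squared error of this form is what links Fitch-Margoliash to MDS.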
ERIC Educational Resources Information Center
Chiou, Guo-Li; Anderson, O. Roger
2010-01-01
This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…
Multi-dimensional residual analysis of point process models for earthquake occurrences.
Schoenberg, Frederic Paik (Rick)
Models for the space-time-magnitude distribution of earthquake occurrences, using in particular a catalog of 580 earthquakes occurring in Bear Valley, California. One method involves rescaled residuals.
M&Ms4Graphs: Multi-scale, Multi-dimensional Graph Analytics Tools for Cyber-Security
Objective: very fast computation of essential security postures and cost/benefit metrics. By accounting for both
Nguyen, Lan K.; Degasperi, Andrea; Cotter, Philip; Kholodenko, Boris N.
2015-01-01
Biochemical networks are dynamic and multi-dimensional systems, consisting of tens or hundreds of molecular components. Diseases such as cancer commonly arise due to changes in the dynamics of signalling and gene regulatory networks caused by genetic alterations. Elucidating the network dynamics in health and disease is crucial to better understand the disease mechanisms and derive effective therapeutic strategies. However, current approaches to analyse and visualise systems dynamics can often provide only low-dimensional projections of the network dynamics, which fail to present the full multi-dimensional picture of the system behaviour. More efficient and reliable methods for multi-dimensional systems analysis and visualisation are thus required. To address this issue, we here present an integrated analysis and visualisation framework for high-dimensional network behaviour which exploits the advantages provided by parallel coordinates graphs. We demonstrate the applicability of the framework, named "Dynamics Visualisation based on Parallel Coordinates" (DYVIPAC), to a variety of signalling networks with a range of topological wirings and dynamic properties. The framework proved useful in acquiring an integrated understanding of systems behaviour. PMID:26220783
Method of multi-dimensional moment analysis for the characterization of signal peaks
Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A
2012-10-23
A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
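A hedged sketch of moment-based peak characterization in one dimension (the patented two-dimensional Peclet scoring itself is not reproduced): assuming a uniformly sampled signal, a peak's area, centroid, and variance follow from its first few statistical moments:

```python
import numpy as np

def peak_moments(t, y):
    """Zeroth, first, and second central moments of a sampled peak y(t):
    area, centroid (retention time), and variance (width squared).
    Assumes a uniform time grid."""
    dt = t[1] - t[0]
    m0 = y.sum() * dt                            # peak area
    m1 = (t * y).sum() * dt / m0                 # centroid
    m2 = ((t - m1) ** 2 * y).sum() * dt / m0     # variance about the centroid
    return m0, m1, m2

# A unit Gaussian peak: area ~ sqrt(2*pi), centroid ~ 0, variance ~ 1.
t = np.linspace(-5.0, 5.0, 1001)
area, centroid, var = peak_moments(t, np.exp(-t ** 2 / 2.0))
```

Ratios built from such moments (e.g. centroid squared over variance) quantify peak sharpness, which is the kind of per-peak score an automated analysis can rank and optimize over.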
Raj Chari; Bradley P. Coe; Craig Wedseltoft; Marie Benetti; Ian M. Wilson; Emily A. Vucic; Calum Macaulay; Raymond T. Ng; Wan L. Lam
2008-01-01
Background: High throughput microarray technologies have afforded the investigation of genomes, epigenomes, and transcriptomes at unprecedented resolution. However, software packages to handle, analyze, and visualize data from these multiple 'omics disciplines have not been adequately developed. Results: Here, we present SIGMA2, a system for the integrative genomic multi-dimensional analysis of cancer genomes, epigenomes, and transcriptomes. Multi-dimensional datasets can be simultaneously
Multi-Dimensional K-Factor Analysis for V2V Radio Channels in Open Sub-urban Street Crossings
Zemen, Thomas
In this paper we analyze the Ricean K-factor for vehicle-to-vehicle (V2V) communications, measured at ... MHz for a duration of 20 s. We performed two kinds of evaluations. For the first analysis we
Convergence Analysis of On-Policy LSPI for Multi-Dimensional Continuous State and Action-Space
Powell, Warren B.
We provide a formal convergence analysis of the algorithm under the assumption that value functions are spanned by finitely many known basis functions. Furthermore, the convergence result extends
Spectral analysis of multi-dimensional self-similar Markov processes
NASA Astrophysics Data System (ADS)
Modarresi, N.; Rezakhah, S.
2010-03-01
In this paper we consider a discrete scale invariant (DSI) process {X(t), t in R+} with scale l > 1. We consider a fixed number of observations in every scale, say T, and acquire our samples at discrete points α^k, k in W, where α is obtained by the equality l = α^T and W = {0, 1, ...}. We thus provide a discrete-time scale invariant (DT-SI) process X(·) with the parameter space {α^k, k in W}. We find the spectral representation of the covariance function of such a DT-SI process. By providing the harmonic-like representation of multi-dimensional self-similar processes, their spectral density functions are presented. We assume that the process {X(t), t in R+} is also Markov in the wide sense and provide a discrete-time scale invariant Markov (DT-SIM) process with the above scheme of sampling. We present an example of the DT-SIM process, simple Brownian motion, with the above sampling scheme and verify our results. Finally, we find the spectral density matrix of such a DT-SIM process and show that its associated T-dimensional self-similar Markov process is fully specified by {R_j^H(1), R_j^H(0), j = 0, 1, ..., T - 1}, where R_j^H(τ) is the covariance function of the jth and (j + τ)th observations of the process.
Effective use of metadata in the integration and analysis of multi-dimensional optical data
NASA Astrophysics Data System (ADS)
Pastorello, G. Z.; Gamon, J. A.
2012-12-01
Data discovery and integration relies on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange. 
The DAX (Data Acquisition to eXchange) Web-based tool uses a flexible metadata representation model and takes advantage of multi-dimensional data structures to translate metadata types into data dimensions, effectively reshaping data sets according to available metadata. With that, metadata is tightly integrated into the acquisition-to-exchange cycle, allowing for more focused exploration of data sets while also increasing the value of, and incentives for, keeping good metadata. The tool is being developed and tested with optical data collected in different settings, including laboratory, field, airborne, and satellite platforms.
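The idea of treating metadata fields as extra dimensions for slicing a data set can be sketched generically; the record fields below are invented examples, not the DAX tool's actual schema:

```python
# Each record carries its data values plus metadata fields that can be
# treated as extra dimensions for discovery and slicing.
records = [
    {"values": [0.81, 0.79], "protocol": "leaf-clip", "sky": "clear"},
    {"values": [0.64, 0.70], "protocol": "canopy",    "sky": "overcast"},
    {"values": [0.77, 0.75], "protocol": "leaf-clip", "sky": "overcast"},
]

def select(records, **criteria):
    """Keep only the records matching every metadata criterion, e.g.
    data collected under a specific field sampling protocol."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

leaf_clip = select(records, protocol="leaf-clip")
clear_leaf = select(records, protocol="leaf-clip", sky="clear")
```

Selecting on "sky" before applying sky-condition-dependent processing is a minimal instance of the adaptive-processing scenario described in the abstract.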
NASA Astrophysics Data System (ADS)
Liu, Yong; Gao, Yuan; Lu, Qinghua; Zhou, Yongfeng; Yan, Deyue
2011-12-01
As inspired by nature's strategy for preparing collagen, herein we report a hierarchical solution self-assembly method to prepare multi-dimensional and multi-scale supra-structures from building blocks of pristine titanate nanotubes (TNTs) around 10 nm. With the help of amylose, the nanotubes were continuously self-assembled into helically wrapped TNTs, highly aligned fibres, large bundles, 2D crystal facets and 3D core-shell hybrid crystals. The amylose works as a glue molecule to drive and direct the hierarchical self-assembly process extending from the microscopic to the macroscopic scale. The whole self-assembly process as well as the self-assembled structures were carefully characterized by a combination of 1H NMR, CD, Hr-SEM, AFM, Hr-TEM, SAED pattern and EDX measurements. A hierarchical self-assembly mechanism is also proposed. Electronic supplementary information (ESI) available: Characterization of the A/TNTs and TNT crystals. See DOI: 10.1039/c1nr11151e
Seismic fragility analysis of highway bridges considering multi-dimensional performance limit state
NASA Astrophysics Data System (ADS)
Wang, Qi'ang; Wu, Ziyan; Liu, Shukui
2012-03-01
Fragility analysis for highway bridges has become increasingly important in the risk assessment of highway transportation networks exposed to seismic hazards. This study introduces a methodology to calculate fragility that considers multi-dimensional performance limit state parameters and makes a first attempt to develop fragility curves for a multispan continuous (MSC) concrete girder bridge considering two performance limit state parameters: column ductility and transverse deformation in the abutments. The main purpose of this paper is to show that the performance limit states, which are compared with the seismic response parameters in the calculation of fragility, should be properly modeled as randomly interdependent variables instead of deterministic quantities. The sensitivity of fragility curves is also investigated when the dependency between the limit states is different. The results indicate that the proposed method can be used to describe the vulnerable behavior of bridges which are sensitive to multiple response parameters and that the fragility information generated by this method will be more reliable and likely to be implemented into transportation network loss estimation.
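Fragility curves of the kind developed here are commonly parameterized as lognormal CDFs of the ground-motion intensity measure; a generic sketch with illustrative median and dispersion values (not the paper's fitted MSC-bridge parameters, and without the randomly interdependent limit states the paper argues for):

```python
import math

def fragility(im, median, beta):
    """Lognormal fragility curve: probability that a damage state is
    reached or exceeded at ground-motion intensity im, given the median
    capacity and the logarithmic dispersion beta."""
    z = math.log(im / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative values: median capacity 0.4 (in intensity units), beta 0.6.
p_mid = fragility(0.4, median=0.4, beta=0.6)   # exactly 0.5 at the median
```

A multi-dimensional treatment replaces the single capacity with a joint limit-state surface (e.g. over column ductility and abutment deformation), which is the extension the study develops.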
Power, Thomas J.; Dombrowski, Stefan C.; Watkins, Marley W.; Mautone, Jennifer A.; Eagle, John W.
2007-01-01
Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as deficits in homework performance. The sample included 163 students attending two school districts in the Northeast. Parents completed the 36-item Homework Performance Questionnaire – Parent Scale (HPQ-PS). Teachers completed the 22-item teacher scale (HPQ-TS) for each student for whom the HPQ-PS had been completed. A common factor analysis with principal axis extraction and promax rotation was used to analyze the findings. The results of the factor analysis of the HPQ-PS revealed three salient and meaningful factors: student task orientation/efficiency, student competence, and teacher support. The factor analysis of the HPQ-TS uncovered two salient and substantive factors: student responsibility and student competence. The findings of this study suggest that the HPQ is a promising set of measures for assessing student homework functioning and contextual factors that may influence performance. Directions for future research are presented. PMID:18516211
The use of multi-dimensional flow and morphodynamic models for restoration design analysis
NASA Astrophysics Data System (ADS)
McDonald, R.; Nelson, J. M.
2013-12-01
River restoration projects with the goal of restoring a wide range of morphologic and ecologic channel processes and functions have become common. The complex interactions between flow and sediment-transport make it challenging to design river channels that are both self-sustaining and improve ecosystem function. The relative immaturity of the field of river restoration and shortcomings in existing methodologies for evaluating channel designs contribute to this problem, often leading to project failures. The call for increased monitoring of constructed channels to evaluate which restoration techniques do and do not work is ubiquitous and may lead to improved channel restoration projects. However, an alternative approach is to detect project flaws before the channels are built by using numerical models to simulate hydraulic and sediment-transport processes and habitat in the proposed channel (Restoration Design Analysis). Multi-dimensional models provide spatially distributed quantities throughout the project domain that may be used to quantitatively evaluate restoration designs for such important metrics as (1) the change in water-surface elevation which can affect the extent and duration of floodplain reconnection, (2) sediment-transport and morphologic change which can affect the channel stability and long-term maintenance of the design; and (3) habitat changes. These models also provide an efficient way to evaluate such quantities over a range of appropriate discharges including low-probability events which often prove the greatest risk to the long-term stability of restored channels. Currently there are many free and open-source modeling frameworks available for such analysis including iRIC, Delft3D, and TELEMAC. In this presentation we give examples of Restoration Design Analysis for each of the metrics above from projects on the Russian River, CA and the Kootenai River, ID. 
These examples demonstrate how detailed Restoration Design Analysis can be used to guide design elements and how this method can point out potential stability problems or other risks before designs proceed to the construction phase.
ERIC Educational Resources Information Center
Power, Thomas J.; Dombrowski, Stefan C.; Watkins, Marley W.; Mautone, Jennifer A.; Eagle, John W.
2007-01-01
Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as…
ERIC Educational Resources Information Center
Bruning, Stephen D.; Ledingham, John A.
1999-01-01
Attempts to design a multiple-item, multiple-dimension organization/public relationship scale. Finds that organizations and key publics have three types of relationships: professional, personal, and community. Provides an instrument that can be used to measure the influence that perceptions of the organization/public relationship have on consumer…
Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng
2014-01-01
Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
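The abstract's pipeline (embed items with MDS from the bipartite structure, then run item-based collaborative filtering on embedding-space similarities) can be sketched as follows. This is a minimal illustration with a toy adjacency matrix and Jaccard distances, not the authors' datasets or their exact similarity definition:

```python
import numpy as np

def classical_mds(dist, dim=2):
    """Torgerson classical MDS: embed points from a distance matrix."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # double-centering matrix
    B = -0.5 * J @ (dist ** 2) @ J               # inner-product (Gram) matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]           # keep the top-`dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy user-item bipartite network (rows: users, cols: items); 1 = collected.
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

# Jaccard distance between item pairs, read off the bipartite structure.
inter = R.T @ R
union = R.sum(0)[:, None] + R.sum(0)[None, :] - inter
dist = 1.0 - inter / np.maximum(union, 1.0)

X = classical_mds(dist)                           # smoothed low-dim item map
sim = 1.0 / (1.0 + np.linalg.norm(X[:, None] - X[None, :], axis=-1))

# Item-based CF: score each unseen item for user 0 by its summed similarity
# to the items that user has already collected.
scores = R[0] @ sim
scores[R[0] > 0] = -np.inf                        # mask collected items
print(int(np.argmax(scores)))
```

The smoothing the abstract mentions corresponds to keeping only the leading MDS dimensions, which discards the noisy part of the distance spectrum before similarities are computed.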
Large-Scale Multi-Dimensional Document Clustering on GPU Clusters
Cui, Xiaohui; Mueller, Frank; Zhang, Yongpeng; Potok, Thomas E
2010-01-01
Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed to solve the problem through simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is not sensitive to the initial state. One limitation of this approach is that the algorithmic complexity is inherently quadratic in the number of documents. As a result, execution time becomes a bottleneck with a large number of documents. In this paper, we assess the benefits of exploiting the computational power of Beowulf-like clusters equipped with contemporary Graphics Processing Units (GPUs) as a means to significantly reduce the runtime of flocking-based document clustering. Our framework scales up to over one million documents processed simultaneously in a sixteen-node GPU cluster. Results are also compared to a four-node cluster with higher-end GPUs. On these clusters, we observe 30X-50X speedups, which demonstrates the potential of GPU clusters to efficiently solve massive data mining problems. Such speedups combined with the scalability potential and accelerator-based parallelization are unique in the domain of document-based data mining, to the best of our knowledge.
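The quadratic bottleneck the abstract identifies is the all-pairs neighbor search each flocking step performs. A standard remedy (independent of the GPU parallelization the paper pursues) is uniform-grid binning, which restricts each agent's search to its own and adjacent cells. A minimal sketch comparing the two, with synthetic positions rather than document vectors:

```python
import numpy as np
from collections import defaultdict

def neighbors_bruteforce(pos, radius):
    """All-pairs O(n^2) neighbor test -- the flocking bottleneck."""
    n = len(pos)
    out = [[] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(pos[i] - pos[j]) < radius:
                out[i].append(j)
    return out

def neighbors_grid(pos, radius):
    """Uniform-grid binning: with cell size = radius, any neighbor within
    `radius` lies in the same cell or one of the 8 surrounding cells."""
    cells = defaultdict(list)
    for i, p in enumerate(pos):
        cells[(int(p[0] // radius), int(p[1] // radius))].append(i)
    out = [[] for _ in range(len(pos))]
    for i, p in enumerate(pos):
        cx, cy = int(p[0] // radius), int(p[1] // radius)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    if j != i and np.linalg.norm(p - pos[j]) < radius:
                        out[i].append(j)
    return out

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
a = neighbors_bruteforce(pts, 5.0)
b = neighbors_grid(pts, 5.0)
print(all(sorted(x) == sorted(y) for x, y in zip(a, b)))  # identical neighbor sets
```

The grid version does the same work per interacting pair but skips the vast majority of non-interacting pairs, which is the same cost structure GPUs exploit when the data are partitioned across threads.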
Nitrogen deposition and multi-dimensional plant diversity at the landscape scale.
Roth, Tobias; Kohli, Lukas; Rihm, Beat; Amrhein, Valentin; Achermann, Beat
2015-04-01
Estimating effects of nitrogen (N) deposition is essential for understanding human impacts on biodiversity. However, studies relating atmospheric N deposition to plant diversity are usually restricted to small plots of high conservation value. Here, we used data on 381 randomly selected 1 km² plots covering most habitat types of Central Europe and an elevational range of 2,900 m. We found that high atmospheric N deposition was associated with low values of six measures of plant diversity. The weakest negative relation to N deposition was found in the traditionally measured total species richness. The strongest relation to N deposition was in phylogenetic diversity, with an estimated loss of 19% due to atmospheric N deposition as compared with a homogeneously distributed historic N deposition without human influence, or of 11% as compared with a spatially varying N deposition for the year 1880, during industrialization in Europe. Because phylogenetic plant diversity is often related to ecosystem functioning, we suggest that atmospheric N deposition threatens functioning of ecosystems at the landscape scale. PMID:26064640
2014-01-01
Background Lack of social support is an important risk factor for antenatal depression and anxiety in low- and middle-income countries. We translated, adapted and validated the Multi-dimensional Scale of Perceived Social Support (MSPSS) in order to study the relationship between perceived social support, intimate partner violence and antenatal depression in Malawi. Methods The MSPSS was translated and adapted into Chichewa and Chiyao. Five hundred and eighty-three women attending an antenatal clinic were administered the MSPSS, depression screening measures, and a risk factor questionnaire including questions about intimate partner violence. A sub-sample of participants (n = 196) were interviewed using the Structured Clinical Interview for DSM-IV to diagnose major depressive episode. Validity of the MSPSS was evaluated by assessment of internal consistency, factor structure, and correlation with Self Reporting Questionnaire (SRQ) score and major depressive episode. We investigated associations between perception of support from different sources (significant other, family, and friends) and major depressive episode, and whether intimate partner violence was a moderator of these associations. Results In both Chichewa and Chiyao, the MSPSS had high internal consistency for the full scale and significant other, family, and friends subscales. MSPSS full scale and subscale scores were inversely associated with SRQ score and major depression diagnosis. Using principal components analysis, the MSPSS had the expected 3-factor structure in analysis of the whole sample. On confirmatory factor analysis, goodness-of-fit indices were better for a 3-factor model than for a 2-factor model, and met standard criteria when correlation between items was allowed.
Lack of support from a significant other was the only MSPSS subscale that showed a significant association with depression on multivariate analysis, and this association was moderated by experience of intimate partner violence. Conclusions The MSPSS is a valid measure of perceived social support in Malawi. Lack of support by a significant other is associated with depression in pregnant women who have experienced intimate partner violence in this setting. PMID:24938124
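The internal consistency reported for the MSPSS full scale and subscales is conventionally quantified with Cronbach's alpha (the abstract does not name the coefficient, so that choice is an assumption here). A minimal sketch with hypothetical Likert-scale responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point responses to a 4-item subscale (rows = respondents).
scores = np.array([[7, 6, 7, 6],
                   [5, 5, 4, 5],
                   [6, 6, 6, 7],
                   [2, 3, 2, 2],
                   [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 3))
```

Values near 1 indicate that the subscale's items move together across respondents, which is what "high internal consistency" summarizes.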
magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation
NASA Astrophysics Data System (ADS)
Angleraud, Christophe
2014-06-01
The ever-increasing amount of data and processing capability - following the well-known Moore's law - is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-disciplinary fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but they often get very confused as more layers are added. Moreover, the introduction of time as a fourth analysis dimension, to allow analysis of time-dependent phenomena such as meteorological or climate models, is encouraging real-time data exploration techniques that allow spatial-temporal points of interest to be detected through the integration of moving images by the human brain. Magellium has been involved in high-performance image processing chains for satellite image processing, as well as scientific signal analysis and geographic information management, since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium-scale clusters with expansion capabilities to large cloud-based clusters.
NASA Astrophysics Data System (ADS)
Miltin Mboh, Cho; Montzka, Carsten; Baatz, Roland; Vereecken, Harry
2014-05-01
The integration of satellite data with physically based models can enable the characterization of earth systems and lead to improved management of natural resources at the catchment and regional scales. The reliability of simulations from physically based models depends on the accuracy of the forcing data and the model parameters. Forcing data obtained from satellites or other sources are often plagued with uncertainties and the model parameters require updates to capture the ever-changing environmental conditions. Although comprehensive data assimilation schemes for dual state and parameter updating have been proposed for improving the reliability of model simulations, their computational cost is sometimes prohibitively high. In this contribution, we propose a cost-effective and efficient alternative to handling complex multi-dimensional parameter and state improvement at the catchment scale. Our approach demystifies the complex multi-dimensional parameter estimation and state improvement problem by combining 1-dimensional exhaustive gridding with sensitivity-pushing, Newton-Raphson based guided random sampling and feedback from historical inverse estimates. In a numerical case study in the joint Rur and Erft Catchments in Germany, we apply our novel partial grid search approach to the estimation of soil surface roughness and vegetation opacity from disaggregated SMOS (Soil Moisture and Ocean Salinity Satellite) brightness temperature using the Community Microwave Emission Modeling platform (CMEM). Besides plausibly good estimates of the soil surface roughness and vegetation opacity at the catchment scale, our method also leads to improvement of the system states like soil surface moisture and soil temperature profile. Our method therefore has data assimilation capabilities without the associated computational cost incurred in ensemble-based data assimilation approaches. 
The partial grid search approach to parameter estimation is therefore a promising tool for multi-dimensional parameter estimation and state improvement in earth systems.
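The estimation pipeline described above (a coarse 1-dimensional exhaustive grid to locate the right basin, followed by Newton-Raphson refinement) can be sketched generically. The cost function and the "roughness" parameter below are hypothetical stand-ins, not the authors' CMEM/SMOS setup:

```python
import numpy as np

def estimate(cost, grid, tol=1e-8, max_iter=20):
    """Coarse 1-D exhaustive grid scan, then Newton-Raphson refinement.

    `cost` is a scalar misfit function of one parameter; the grid step
    keeps the Newton iteration inside the correct basin of attraction."""
    x = grid[np.argmin([cost(g) for g in grid])]  # best coarse grid point
    h = 1e-5
    for _ in range(max_iter):
        d1 = (cost(x + h) - cost(x - h)) / (2 * h)             # finite-diff gradient
        d2 = (cost(x + h) - 2 * cost(x) + cost(x - h)) / h**2  # curvature
        if abs(d2) < 1e-12:
            break
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical misfit: simulated vs. observed quantity as a smooth function
# of a single "roughness" parameter whose true value is 0.73.
obs = 0.73
cost = lambda r: (np.tanh(r) - np.tanh(obs)) ** 2
r_hat = estimate(cost, np.linspace(0, 2, 21))
print(round(float(r_hat), 4))
```

The grid stage is what makes the Newton stage cheap and safe: with the start point already near the minimum, a handful of iterations suffice, which is the cost advantage claimed over ensemble-based assimilation.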
Merritt, Cullen
2014-05-31
This study specifies and tests a multi-dimensional model of publicness, building upon extant literature in this area. Publicness represents the degree to which an organization has "public" ties. An organization's degree ...
NASA Astrophysics Data System (ADS)
Carkin, Susan
The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning---and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures---more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose. 
A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being relatively less marked for information density, abstraction, and non-overt argumentation. In addition to the findings mentioned here, numerous other relationships among the texts exhibit complex patterns of variation related to a number of situational variables. Pedagogical implications are discussed in relation to General Education courses, differing student populations, and the reading and listening demands which students encounter in large introductory classes in the university.
NASA Astrophysics Data System (ADS)
Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol
2015-06-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states.
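The ion counts reported in the abstract decompose consistently, which is worth making explicit: each method's total equals the shared ions plus its unique ions, and the shared set splits into intensity-increased and intensity-decreased ions. A quick check using only the abstract's own numbers:

```python
# Counts taken directly from the abstract.
total_tof, total_meta = 632, 959
common, tof_only, meta_only = 426, 206, 533
up, down = 250, 176

# Each method's total = shared ions + ions unique to that method,
# and the shared ions split into increased vs. decreased intensities.
print(common + tof_only == total_tof,      # 426 + 206 = 632
      common + meta_only == total_meta,    # 426 + 533 = 959
      up + down == common)                 # 250 + 176 = 426
```

The union of the two runs is 426 + 206 + 533 = 1165 distinct ions, which is the enlarged candidate pool from which the discriminatory ions are drawn.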
Mai, Junyu; Sommer, Gregory Jon; Hatch, Anson V.
2010-10-01
We report on advancements of our microscale isoelectric fractionation (µIEFr) methodology for fast on-chip separation and concentration of proteins based on their isoelectric points (pI). We establish that proteins can be fractionated depending on posttranslational modifications into different pH specific bins, from where they can be efficiently transferred to downstream membranes for additional processing and analysis. This technology can enable on-chip multidimensional glycoproteomics analysis, as a new approach to expedite biomarker identification and verification.
Clinton S. Potter; Patrick J. Moran
1992-01-01
We have developed a general purpose multidimensional image processing, analysis, and visualization software system for biomedical applications. This system, called Viewit, is an interpreter for an image processing language based upon a stack calculator paradigm where each stack element is a multidimensional array. Over two hundred primary functions are available for general purpose multidimensional image processing applications including Fourier transforms,
Multi-dimensional analysis of the chemical and physical properties of spiral galaxies
NASA Astrophysics Data System (ADS)
Rosales-Ortega, F. F.
2010-06-01
In this thesis, wide-field 2D spectroscopy is employed in order to characterise the nebular properties of late-type field galaxies. The observations performed for this dissertation represent the first endeavour to obtain full 2D coverage of the disks of a sample of nearby spiral galaxies, by the application of the Integral Field Spectroscopy (IFS) technique, under the PPAK IFS Nearby Galaxies Survey: PINGS. A self-consistent methodology is defined in terms of observation, data reduction and analysis techniques for this and upcoming IFS surveys, as well as providing a whole new set of IFS visualization and analysis software released in the public domain (PINGSoft). The scientific analysis comprises the study of the integrated properties of the ionized gas and a detailed 2D study from the emission line spectra of four selected galaxies. Evidence is found suggesting that measurements of emission lines of classical HII regions are not only aperture-dependent but also spatially dependent, and therefore, the derived physical parameters and metallicity content may significantly depend on the morphology of the region, on the extraction aperture and on the signal-to-noise of the observed spectrum. Furthermore, observational evidence of non-linear multi-modal abundance gradients in normal spiral galaxies is found, consistent with a flattening in the innermost and outermost parts of the galactic discs, with important implications in terms of the chemical evolution of galaxies.
Zheng, Xiwei; Yoo, Michelle J.; Hage, David S.
2013-01-01
A multi-dimensional chromatographic approach was developed to measure the free fractions of drug enantiomers in samples that also contained a binding protein or serum. This method, which combined ultrafast affinity extraction with a chiral stationary phase, was demonstrated using the drug warfarin and the protein human serum albumin. PMID:23979112
Gordon, Scott M.; Deng, Jingyuan; Tomann, Alex B.; Shah, Amy S.; Lu, L. Jason; Davidson, W. Sean
2013-01-01
The distribution of circulating lipoprotein particles affects the risk for cardiovascular disease (CVD) in humans. Lipoproteins are historically defined by their density, with low-density lipoproteins positively and high-density lipoproteins (HDLs) negatively associated with CVD risk in large populations. However, these broad definitions tend to obscure the remarkable heterogeneity within each class. Evidence indicates that each class is composed of physically (size, density, charge) and compositionally (protein and lipid) distinct subclasses exhibiting unique functionalities and differing effects on disease. HDLs in particular contain upward of 85 proteins of widely varying function that are differentially distributed across a broad range of particle diameters. We hypothesized that the plasma lipoproteins, particularly HDL, represent a continuum of phospholipid platforms that facilitate specific protein–protein interactions. To test this idea, we separated normal human plasma using three techniques that exploit different lipoprotein physicochemical properties (gel filtration chromatography, ionic exchange chromatography, and preparative isoelectric focusing). We then tracked the co-separation of 76 lipid-associated proteins via mass spectrometry and applied a summed correlation analysis to identify protein pairs that may co-reside on individual lipoproteins. The analysis produced 2701 pairing scores, with the top hits representing previously known protein–protein interactions as well as numerous unknown pairings. A network analysis revealed clusters of proteins with related functions, particularly lipid transport and complement regulation. The specific co-separation of protein pairs or clusters suggests the existence of stable lipoprotein subspecies that may carry out distinct functions. Further characterization of the composition and function of these subspecies may point to better targeted therapeutics aimed at CVD or other diseases. PMID:23882025
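The "summed correlation analysis" described above (scoring protein pairs by adding their co-separation correlation across the three orthogonal separations) can be illustrated with synthetic profiles. Everything below is hypothetical - small random abundance-vs-fraction matrices rather than the authors' mass-spectrometry data - and serves only to show the scoring idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def separation_profile(n_prot=6, n_frac=20):
    """Hypothetical abundance-vs-fraction profiles for one separation."""
    prof = rng.normal(size=(n_prot, n_frac))
    prof[1] = prof[0] + 0.05 * rng.normal(size=n_frac)  # proteins 0 and 1 co-separate
    return prof

# Three separations exploiting different physicochemical properties
# (standing in for gel filtration, ion exchange, isoelectric focusing).
seps = [separation_profile() for _ in range(3)]

# Summed correlation: add each pair's Pearson r across the separations.
n = seps[0].shape[0]
score = np.zeros((n, n))
for prof in seps:
    score += np.corrcoef(prof)
np.fill_diagonal(score, -np.inf)                 # ignore self-pairs
i, j = np.unravel_index(np.argmax(score), score.shape)
print(sorted((int(i), int(j))))
```

A pair that co-elutes under all three orthogonal separations accumulates a score near 3, while incidentally correlated pairs average out, which is why summing across techniques sharpens the candidate list of co-residing proteins.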
Contributions to the computational analysis of multi-dimensional stochastic dynamical systems
NASA Astrophysics Data System (ADS)
Wojtkiewicz, Steven F., Jr.
2000-12-01
Several contributions in the area of computational stochastic dynamics are discussed; specifically, the response of stochastic dynamical systems by high order closure, the response of Poisson and Gaussian white noise driven systems by solution of a transformed generalized Kolmogorov equation, and control of nonlinear systems by response moment specification. Statistical moments of response are widely used in the analysis of stochastic dynamical systems of engineering interest. It is known that, if the inputs to the system are Gaussian or filtered Gaussian white noise, Ito's rule can be used to generate a system of first order linear differential equations governing the evolution of the moments. For nonlinear systems, the moment equations form an infinite hierarchy, necessitating the application of a closure procedure to truncate the system at some finite dimension at the expense of making the moment equations nonlinear. Various methods to close these moment equations have been developed. The efficacy of cumulant-neglect closure methods for complex dynamical systems is examined. Various methods have been developed to determine the response of dynamical systems subjected to additive and/or multiplicative Gaussian white noise excitations. While Gaussian white noise and filtered Gaussian white noise provide efficient and useful models of various environmental loadings, a broader class of random processes, filtered Poisson processes, are often more realistic in modeling disturbances that originate from impact-type loadings. The response of dynamical systems to combinations of Poisson and Gaussian white noise forms a Markov process whose transition density satisfies a pair of initial-boundary value problems (IBVPs) termed the generalized Kolmogorov equations. A numerical solution algorithm for these IBVPs is developed and applied to several representative systems.
Classical covariance control theory is extended to the case of nonlinear systems using the method of statistical linearization. The design procedure is applied to several nonlinear systems of civil engineering interest including hysteretic oscillators. The idea of covariance control is then generalized to the problem of response moment specification where higher order response moments are prescribed with the hope of having more authority over response extremes. The algorithm is then demonstrated by application to a Duffing oscillator.
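The moment-equation machinery described above is easiest to see in the linear case, where Ito's rule yields a closed system and no closure approximation is needed. A minimal sketch for the scalar Ornstein-Uhlenbeck SDE dx = -a x dt + s dW (a hand-picked illustration, not one of the dissertation's examples):

```python
# Moment equations for the linear Ito SDE  dx = -a x dt + s dW:
#   dm1/dt = -a m1
#   dm2/dt = -2 a m2 + s^2
# For nonlinear drift, dm2/dt would involve higher moments and the
# hierarchy would need a closure (e.g. cumulant neglect) to truncate.
a, s = 1.0, 0.5
m1, m2 = 1.0, 1.0          # initial first and second moments
dt = 1e-3
for _ in range(20000):     # forward-Euler integration to t = 20
    m1 += dt * (-a * m1)
    m2 += dt * (-2 * a * m2 + s ** 2)

print(round(m2, 4))        # stationary second moment: s^2 / (2 a)
```

The integration relaxes m2 to the analytical stationary value s²/(2a) = 0.125, confirming that the first- and second-moment equations close exactly for linear drift.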
NASA Astrophysics Data System (ADS)
De Masi, A.
2015-09-01
The paper describes reading criteria for the documentation of important buildings in Milan, Italy, as a case study of research on the integration of new technologies to obtain 3D multi-scale representation of architecture. In addition, it offers an overview of the current optical 3D measurement sensors and techniques used for surveying, mapping, digital documentation and 3D modeling applications in the Cultural Heritage field. Today new opportunities for integrated management of data are given by multiresolution models, which can be employed for different scales of representation. The goal of multi-scale representations is to provide several representations where each representation is adapted to a different information density with several degrees of detail. The Digital Representation Platform, along with the 3D City Model, are meant to be particularly useful to heritage managers who are developing recording, documentation, and information management strategies appropriate to territories, sites and monuments. The Digital Representation Platform and 3D City Model are central to the decision-making process for heritage conservation management and several urban-related problems. This research investigates the integration of the different levels of detail of a 3D City Model into one consistent 4D data model, with levels of detail created using algorithms from a GIS perspective. In particular, the project is based on open source smart systems, and conceptualizes a personalized and contextualized exploration of the Cultural Heritage through an experiential analysis of the territory.
Data Mining in Multi-Dimensional Functional Data for Manufacturing Fault Diagnosis
Jeong, Myong K; Kong, Seong G; Omitaomu, Olufemi A
2008-09-01
Multi-dimensional functional data, such as time series data and images from manufacturing processes, have been used for fault detection and quality improvement in many engineering applications such as automobile manufacturing, semiconductor manufacturing, and nano-machining systems. Extracting interesting and useful features from multi-dimensional functional data for manufacturing fault diagnosis is more difficult than extracting the corresponding patterns from traditional numeric and categorical data due to the complexity of functional data types, high correlation, and nonstationary nature of the data. This chapter discusses accomplishments and research issues of multi-dimensional functional data mining in the following areas: dimensionality reduction for functional data, multi-scale fault diagnosis, misalignment prediction of rotating machinery, and agricultural product inspection based on hyperspectral image analysis.
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Murray, D.; McWhirter, J.
2004-12-01
Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. 
A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV and curricula for atmospheric science education intended for the high school to graduate student levels.
Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First- and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
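As context for the scheme family above, the classical first-order baseline is the monotone Lax-Friedrichs central scheme for u_t + H(u_x) = 0, sketched here in one dimension for the convex model Hamiltonian H(p) = p²/2. This is the textbook scheme, not the Bryson-Levy scheme itself, which adds careful evolution-point placement and a projection step:

```python
import numpy as np

def lf_step(u, H, dx, dt, alpha):
    """One Lax-Friedrichs step for u_t + H(u_x) = 0 on a periodic grid.

    Monotone LF numerical Hamiltonian:
        H_hat(p-, p+) = H((p- + p+)/2) - (alpha/2)(p+ - p-),
    with alpha an upper bound on |H'| (CFL condition: dt * alpha <= dx)."""
    p_plus = (np.roll(u, -1) - u) / dx            # one-sided slopes
    p_minus = (u - np.roll(u, 1)) / dx
    return u - dt * (H(0.5 * (p_minus + p_plus))
                     - 0.5 * alpha * (p_plus - p_minus))

# Convex model problem: H(p) = p^2 / 2 with smooth periodic initial data.
n = 200
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.cos(x)
alpha = 1.5                        # bound on |H'(p)| = |p| for this data
dt = 0.5 * dx / alpha
for _ in range(100):
    u = lf_step(u, lambda p: 0.5 * p ** 2, dx, dt, alpha)
print(bool(u.max() <= 1.0 + 1e-12))   # maximum principle: no new extrema
```

The alpha-weighted viscosity term is what makes the scheme monotone and hence non-oscillatory; the higher-order central schemes of the abstract reduce the smearing this term causes while keeping the non-oscillatory property.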
Khan, Zareen S; Ghosh, Rakesh Kumar; Girame, Rushali; Utture, Sagar C; Gadgil, Manasi; Banerjee, Kaushik; Reddy, D Damodar; Johnson, Nalli
2014-05-23
A selective and sensitive multiresidue analysis method, comprising 47 pesticides, was developed and validated in tobacco matrix. The optimized sample preparation procedure in combination with gas chromatography mass spectrometry in selected-ion-monitoring (GC-MS/SIM) mode offered limits of detection (LOD) and quantification (LOQ) in the range of 3-5 and 7.5-15ng/g, respectively, with recoveries between 70 and 119% at 50-100ng/g fortifications. In comparison to the modified QuEChERS (Quick-Easy-Cheap-Effective-Rugged-Safe method: 2g tobacco+10ml water+10ml acetonitrile, 30min vortexing, followed by dispersive solid phase extraction cleanup), the method performed better in minimizing matrix co-extractives e.g. nicotine and megastigmatrienone. Ambiguity in analysis due to co-elution of target analytes (e.g. transfluthrin-heptachlor) and of analytes with matrix co-extractives (e.g. α-HCH-neophytadiene, 2,4-DDE-linolenic acid) could be resolved by selective multi-dimensional GC (MDGC) heart-cuts. The method holds promise in routine analysis owing to its throughput of 27 samples/person/day. PMID:24746872
ERIC Educational Resources Information Center
Papay, John P.; Willett, John B.; Murnane, Richard J.
2011-01-01
We ask whether failing one or more of the state-mandated high-school exit examinations affects whether students graduate from high school. Using a new multi-dimensional regression-discontinuity approach, we examine simultaneously scores on mathematics and English language arts tests. Barely passing both examinations, as opposed to failing them,…
Coulson, Irene Katherina; Galenza, Shirley; Bratt, Sharon; Foisy-Doll, Colette R; Haase, Mary
2015-01-01
A significant transformation occurring in the continuing care industry is an attempt to shift the culture from impersonal institutions into true person-centred care (PCC) homes. This approach re-orients the facility's values, attitudes, norms and hierarchies while creating flexible role descriptions to promote collaborative teamwork. PCC practices will require healthcare teams to develop new approaches that empower residents and families to become partners in the development of a plan of care. This report outlines a study, which will gather data from an organizational policy analysis and interviews with residents and healthcare staff. These data will be examined through a sociological lens to identify areas for team improvement. The results will guide the design of a training curriculum to be delivered using traditional and multi-modal hi-fidelity simulation methods. PMID:24831267
Inference for Multi-Dimensional High-Frequency Data: Equivalence of Methods, Central Limit Theorems
Mykland, Per A.
The availability of recorded asset prices at such high frequencies magnifies the appeal of asset-based trading. This work establishes central limit theorems for the multi-dimensional multi-scale and kernel estimators for high-frequency financial data, and the equivalence of these methods.
Parallel Multi-dimensional Range Query Processing with R-Trees on GPU
Nam, Beomseok
GPUs have emerged as a new cost-effective parallel computing paradigm in high-performance computing research. A common access pattern in scientific data analysis applications is the multi-dimensional range query, but not much research has been conducted to deploy multi-dimensional indexing structures such as R-trees on GPUs.
Progress in multi-dimensional upwind differencing
NASA Technical Reports Server (NTRS)
Van Leer, Bram
1992-01-01
Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
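The baseline referenced above — the first-order upwind scheme for a one-dimensional convection equation u_t + a*u_x = 0 — can be sketched in a few lines. This is a minimal illustration with invented grid parameters, not code from the review:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One step of the first-order upwind scheme for u_t + a*u_x = 0 with
    periodic boundaries; the difference stencil is biased toward the side
    the wave comes from."""
    c = a * dt / dx
    if a >= 0:
        return u - c * (u - np.roll(u, 1))    # backward (upwind) difference
    return u - c * (np.roll(u, -1) - u)       # forward difference

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
a = 1.0
dt = 0.5 * dx / abs(a)                 # CFL number 0.5
u = np.exp(-200.0 * (x - 0.3) ** 2)    # Gaussian pulse centred at x = 0.3
for _ in range(100):
    u = upwind_step(u, a, dx, dt)
peak = x[np.argmax(u)]                 # pulse advected by a*t = 0.5, to x ~ 0.8
```

The pulse arrives at the right location but is smeared by numerical diffusion, the one-dimensional analogue of the shock and shear-wave smearing discussed in the review.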
Computer Aided Data Analysis in Sociometry
ERIC Educational Resources Information Center
Langeheine, Rolf
1978-01-01
A computer program which analyzes sociometric data is presented. The SDAS program provides classical sociometric analysis. Multi-dimensional scaling and cluster analysis techniques may be combined with the MSP program. (JKS)
The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Gaffney, Richard L., Jr.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
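As a toy illustration of why the one-dimensionalization choice matters, the sketch below compares an area-weighted and a mass-flux-weighted mean temperature over an invented duct profile; the two averages disagree even for smooth data. All profiles and numbers are hypothetical, not taken from the paper:

```python
import numpy as np

# Toy duct cross-section with invented profiles: compare an area-averaged
# and a mass-flux-averaged temperature, two common one-dimensionalization
# choices that generally give different numbers for the same flow field.
y = np.linspace(0.0, 1.0, 501)        # normalized cross-stream coordinate
rho = np.full_like(y, 1.2)            # uniform density, kg/m^3
u = 4.0 * y * (1.0 - y)               # parabolic velocity profile, m/s
T = 300.0 + 100.0 * y ** 2            # non-linear temperature profile, K

flux = rho * u                        # local mass flux (vanishes at the walls)
area_mean_T = T.mean()                        # area weighting, ~333.4 K
flux_mean_T = (flux * T).sum() / flux.sum()   # mass-flux weighting, ~330.0 K
```

The mass-flux average discounts the hot, slow-moving fluid near the wall, so it comes out lower than the area average — a small but systematic difference of exactly the kind the paper's comparison of extraction methods is concerned with.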
T. Downar
2009-03-31
The overall objective of the work here has been to eliminate the approximations used in current resonance treatments by developing continuous energy multi-dimensional transport calculations for problem dependent self-shielding calculations. The work here builds on the existing resonance treatment capabilities in the ORNL SCALE code system.
Gabbouj, Moncef
Perceptual Dominant Color Extraction by Multi-Dimensional Particle Swarm Optimization
Extracting dominant colors that are prominent in a visual scene is of utmost importance. This work formulates dominant color extraction as a dynamic clustering problem and uses techniques based on Particle Swarm Optimization.
Procedural shape generation for multi-dimensional data visualization
David S. Ebert; Randall M. Rohrer; Christopher D. Shaw; Pradyut Panda; James M. Kukla; D. Aaron Roberts
2000-01-01
Visualization of multi-dimensional data is a challenging task. The goal is not the display of multiple data dimensions, but user comprehension of the multi-dimensional data. This paper explores several techniques for perceptually motivated procedural generation of shapes to increase the comprehension of multi-dimensional data. Our glyph-based system allows the visualization of both regular and irregular grids of volumetric data.
Statistical Downscaling in Multi-dimensional Wave Climate Forecast
NASA Astrophysics Data System (ADS)
Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.
2009-04-01
Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multi-dimensional wave climate in terms of a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as predictand) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is considered as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sizes of the sea level pressure grid and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site.
In this work we examine the potential skill of this downscaling approach considering perfect-model conditions, but we will also analyze the suitability of this methodology to be used for seasonal forecast and for long-term climate change scenario projection of wave climate.
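The nearest-neighbors analog step described above can be sketched in a few lines. The SLP fields and wave statistic below are synthetic stand-ins, and the actual predictand (a monthly SOM PDF) is simplified here to a single scalar:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 120 months of flattened SLP anomaly fields
# (predictor) paired with a monthly wave statistic such as mean Hs
# (predictand). Both arrays are synthetic stand-ins for reanalysis data.
slp_hist = rng.normal(size=(120, 50))
hs_hist = 2.0 + 0.5 * slp_hist[:, 0] + 0.1 * rng.normal(size=120)

def analog_forecast(slp_new, slp_hist, target_hist, k=5):
    """Predict the target as the mean over the k most similar historical
    SLP fields (nearest neighbours by Euclidean distance)."""
    d = np.linalg.norm(slp_hist - slp_new, axis=1)
    nearest = np.argsort(d)[:k]
    return target_hist[nearest].mean()

pred = analog_forecast(slp_hist[10], slp_hist, hs_hist, k=5)
```

In the real method the averaged quantity would be the SOM-based PDF of wave types rather than a scalar, but the matching logic is the same.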
Multi-Dimensional Construct of Self-Esteem: Tools for Developmental Counseling.
ERIC Educational Resources Information Center
Norem-Hebeisen, Ardyth A.
A multi-dimensional construct of self-esteem has been proposed and subjected to initial testing through design of a self-report instrument. Item clusters derived from Rao's canonical and principal axis factor analyses are consistent with the hypothesized construct and have substantial internal reliability. Factor analysis of item clusters produced…
Vlasov multi-dimensional model dispersion relation
NASA Astrophysics Data System (ADS)
Lushnikov, Pavel M.; Rose, Harvey A.; Silantyev, Denis A.; Vladimirova, Natalia
2014-07-01
A hybrid model of the Vlasov equation in multiple spatial dimension D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi-dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.
A Multi-Dimensional Data Model for Personal Photo Browsing
Jónsson, Björn
This work aims to provide effective browsing tools for photo collections. Learning from the resounding success of multi-dimensional analysis (MDA) applications, we propose a multi-dimensional model for media browsing, called M3, that combines MDA concepts
Voice dysfunction in dysarthria: application of the Multi-Dimensional Voice Program.
Kent, R D; Vorperian, H K; Kent, J F; Duffy, J R
2003-01-01
Phonatory dysfunction is a frequent component of dysarthria and often is a primary feature noted in clinical assessment. But the vocal impairment can be difficult to assess because (a) the analysis of voice disorder of any kind can be challenging, and (b) the voice disorder in dysarthria often occurs along with other impairments affecting articulation, resonance, and respiration. A promising assessment tool is multi-parameter acoustic analysis, such as the Multi-Dimensional Voice Program (MDVP). Part 1 of this paper recommends procedures and standards for the acoustic analysis of voice, including (1) selection of the sample to be analyzed, (2) signal quality requirements, (3) availability of normative data for both genders and different ages of speakers, (4) reliability of analysis, and (5) correlation of acoustic results with results from other methods of analysis. In Part 2, acoustic data are reviewed for the dysarthria associated with Parkinson disease (PD), cerebellar disease, amyotrophic lateral sclerosis (ALS), traumatic brain injury (TBI), unilateral hemispheric stroke, and essential tremor. Tentative profiles of voice disorder are described for these conditions. These profiles may serve as hypotheses for future research. Although several issues remain to be resolved in the acoustic analysis of voice disorder in dysarthria, steps can be taken now to promote the reliability, validity, and clinical utility of such analyses. (1) As a result of this activity, the participant will be able to describe ways in which an optimal multi-dimensional analysis of voice can be performed with modern acoustic analysis systems. (2) As a result of this activity, the participant will be able to apply multi-dimensional acoustic analysis of voice to individuals who have a dysarthria-related voice disorder. (3) As a result of this activity, the participant will be able to identify major sources of normative data on the Multi-Dimensional Voice Program. PMID:12837587
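As a minimal illustration of the kind of parameter such systems report, the sketch below computes a local jitter percentage from a list of glottal cycle periods. The formula is a common textbook definition and is not claimed to match MDVP's exact implementation:

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive cycle
    periods, expressed as a percentage of the mean period. Illustrative
    formula only; consult MDVP's own parameter definitions for clinical use."""
    periods = np.asarray(periods, dtype=float)
    diffs = np.abs(np.diff(periods))
    return 100.0 * diffs.mean() / periods.mean()

# A perfectly periodic voice gives zero jitter; perturbed cycle-to-cycle
# periods (all values invented, in seconds) give a small positive jitter.
steady = [0.008] * 10                       # 8 ms periods (125 Hz)
perturbed = [0.008, 0.0081, 0.0079, 0.008, 0.0082, 0.0078]
```

Real analysis systems extract the cycle periods from the acoustic waveform first, which is itself a non-trivial step for disordered voices.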
Discovering Imitation Strategies through Categorization of Multi-Dimensional Data
Schaal, Stefan
Aude Billard; corresponding author: aude.billard@epfl.ch. An essential problem of imitation
Adaptive Control Variates for Pricing Multi-Dimensional American Options
Henderson, Shane
Adaptive Control Variates for Pricing Multi-Dimensional American Options Samuel M. T. Ehrlichman Shane G. Henderson July 25, 2007 Abstract We explore a class of control variates for the American option
Towards a genuinely multi-dimensional upwind scheme
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Van Leer, Bram; Roe, Philip L.
1990-01-01
Methods of incorporating multi-dimensional ideas into algorithms for the solution of Euler equations are presented. Three schemes are developed and tested: a scheme based on a downwind distribution, a scheme based on a rotated Riemann solver and a scheme based on a generalized Riemann solver. The schemes show an improvement over first-order, grid-aligned upwind schemes, but the higher-order performance is less impressive. An outlook for the future of multi-dimensional upwind schemes is given.
Multi-dimensional hybrid-simulation techniques in plasma physics
Hewett, D.W.
1982-01-01
Multi-dimensional hybrid simulation models have been developed for use in studying plasma phenomena on extended time and distance scales. The models make fundamental use of the small Debye length or quasi-neutrality assumption. The ions are modeled by particle-in-cell (PIC) techniques while the electrons are considered a collision-dominated fluid. The fields are calculated in the nonradiative Darwin limit. Some electron inertial effects are retained in the Finite Electron Mass model (FEM). In this model, the quasi-neutral counterpart of Poisson's equation is obtained by first summing the electron and ion momentum equations and then taking the quasi-neutral limit. In the Zero Electron Mass (ZEM) model explicit use is made of the axisymmetric properties of the model to decouple the components of the model equations. Equations to self-consistently advance the electron temperature have recently been added to the scheme. The model equations which result from these considerations are two coupled, nonlinear, second order partial differential equations.
High-value energy storage for the grid: a multi-dimensional look
Culver, Walter J.
2010-12-15
The conceptual attractiveness of energy storage in the electrical power grid has grown in recent years with Smart Grid initiatives. But cost is a problem, interwoven with the complexity of quantifying the benefits of energy storage. This analysis builds toward a multi-dimensional picture of storage that is offered as a step toward identifying and removing the gaps and "friction" that permeate the delivery chain from research laboratory to grid deployment.
SHANE DARKE; ALEX WODAK; NICK HEATHER; JEFF WARD
This article presents a new instrument with which to assess the effects of opiate treatment. The Opiate Treatment Index (OTI) is multi-dimensional in structure, with scales measuring six independent outcome domains: drug use; HIV risk-taking behaviour; social functioning; criminality; health; and psychological adjustment. The psychometric properties of the Index are excellent, suggesting that the OTI is a relatively quick, efficient
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test data at only a few critical locations. Although this approach provides a direct measure of the model's predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work, impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time-based metrics and multi-dimensional orthogonality metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
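The abstract does not spell out its orthogonality metric. A standard shape-correlation measure of this kind is the Modal Assurance Criterion (MAC), sketched below with invented shape vectors as a hypothetical stand-in for impact shapes extracted from time histories:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two shape vectors:
    1.0 means identical shape (up to scale), 0.0 means orthogonal."""
    num = (phi_a @ phi_b) ** 2
    return num / ((phi_a @ phi_a) * (phi_b @ phi_b))

# Toy shape vectors (sensor deflections at four locations, values invented)
s1 = np.array([1.0, 2.0, 3.0, 4.0])
s2 = 2.5 * s1                          # same shape, different scale
s3 = np.array([1.0, -1.0, 1.0, -1.0])  # very different shape
```

A matrix of such pairwise values between test-derived and analysis-derived shapes gives a compact, quantitative picture of whole-structure agreement, which is the role the paper assigns to its orthogonality metrics.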
Fast Packet Classification Using Multi-Dimensional Encoding
NASA Astrophysics Data System (ADS)
Huang, Chi Jia; Chen, Chien
Internet routers need to classify incoming packets quickly into flows in order to support features such as Internet security, virtual private networks and Quality of Service (QoS). Packet classification uses information contained in the packet header, and a predefined rule table in the routers. Packet classification on multiple fields is generally a difficult problem; hence, researchers have proposed various algorithms. This study proposes a multi-dimensional encoding method in which parameters such as the source IP address, destination IP address, source port, destination port and protocol type are placed in a multi-dimensional space. Similar to the previously best known algorithm, i.e., bitmap intersection, multi-dimensional encoding is based on the multi-dimensional range lookup approach, in which rules are divided into several multi-dimensional collision-free rule sets. These sets are then used to form a new coding vector that replaces the bit vector of the bitmap intersection algorithm. The average memory storage of this encoding is Θ(L · N · log N) for each dimension, where L denotes the number of collision-free rule sets, and N represents the number of rules. Multi-dimensional encoding requires much less memory in practice than the bitmap intersection algorithm, while the computation it needs is just as simple. The low memory requirement of the proposed scheme not only decreases the cost of the packet classification engine, but also increases classification performance, since memory represents the performance bottleneck in packet classification engines implemented on a network processor.
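The range-lookup idea underlying both bitmap intersection and the proposed encoding can be illustrated with a toy classifier: per-dimension bit vectors mark which rules cover a field value, and intersecting them yields the set of matching rules. The two-field layout and rule table below are invented for illustration:

```python
# Toy bitmap-intersection classifier over two fields (source port range,
# destination port range). Rules and field layout are invented; real
# classifiers use five or more header fields and precomputed range tables.
rules = [((0, 1023), (80, 80)),       # rule 0: privileged src ports to port 80
         ((1024, 65535), (80, 80)),   # rule 1: ephemeral src ports to port 80
         ((0, 65535), (0, 65535))]    # rule 2: match-all fallback

def bitvector(value, dim):
    """Bit i is set iff rule i's range in dimension `dim` covers `value`."""
    bv = 0
    for i, rule in enumerate(rules):
        lo, hi = rule[dim]
        if lo <= value <= hi:
            bv |= 1 << i
    return bv

def classify(src, dst):
    """AND the per-dimension bit vectors; the lowest set bit is the
    highest-priority matching rule (rules are priority-ordered)."""
    match = bitvector(src, 0) & bitvector(dst, 1)
    return (match & -match).bit_length() - 1 if match else None

best = classify(500, 80)   # privileged source port to port 80 -> rule 0
```

The paper's contribution replaces these per-rule bit vectors with shorter coding vectors over collision-free rule sets, cutting the memory that dominates network-processor implementations.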
Multi-dimensional fission model with a complex absorbing potential
Guillaume Scamps; Kouichi Hagino
2015-02-16
We study the dynamics of multi-dimensional quantum tunneling by introducing a complex absorbing potential to a two-dimensional model for spontaneous fission. We first diagonalize the Hamiltonian with the complex potential to determine a resonance state as well as its lifetime. We then solve the time-dependent Schrödinger equation in this basis in order to investigate the tunneling path. We compare this method with the semi-classical method for multi-dimensional tunneling with imaginary time. Good agreement is found both for the lifetime and for the tunneling path.
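The effect of a complex absorbing potential can be demonstrated in a one-dimensional toy setting (not the authors' two-dimensional fission model): adding -i*W(x) to the Hamiltonian makes the propagation non-unitary, so outgoing flux is absorbed rather than reflected at the grid edge. All parameters below are illustrative choices:

```python
import numpy as np

# Toy 1-D demonstration of a complex absorbing potential (CAP): a Gaussian
# wave packet travels toward an absorbing region, where the -i*W(x) term
# soaks up outgoing flux instead of letting it reflect from the boundary.
hbar = m = 1.0
n, L = 400, 20.0
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

# Kinetic energy via second-order finite differences (Dirichlet boundaries)
off = np.full(n - 1, 1.0)
T = -hbar**2 / (2.0 * m * dx**2) * (
    np.diag(off, -1) - 2.0 * np.eye(n) + np.diag(off, 1))

# Quadratic CAP switched on for x > 14; the strength is an invented choice
W = np.where(x > 14.0, 0.5 * (x - 14.0) ** 2, 0.0)
H = T - 1j * np.diag(W)        # complex-symmetric effective Hamiltonian

# Right-moving Gaussian packet (mean momentum k0), normalized on the grid
k0 = 2.0
psi0 = np.exp(-0.5 * (x - 5.0) ** 2) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

# Exact propagation of the discretized system via eigendecomposition of H
evals, vecs = np.linalg.eig(H)
coeff = np.linalg.solve(vecs, psi0)
t = 8.0
psi_t = vecs @ (np.exp(-1j * evals * t / hbar) * coeff)

norm_t = np.sum(np.abs(psi_t) ** 2) * dx   # < 1: flux absorbed by the CAP
```

In the resonance calculation described in the abstract, the same complex Hamiltonian is diagonalized and a resonance eigenvalue E - iΓ/2 yields the lifetime via τ = ħ/Γ; the norm decay here is the time-domain signature of that same absorption.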
A multi-dimensional constitutive model for shape memory alloys
C. Liang; C. A. Rogers
1992-01-01
This paper presents a multi-dimensional thermomechanical constitutive model for shape memory alloys (SMAs). This constitutive relation is based upon a combination of both micromechanics and macromechanics. The martensite fraction is introduced as a variable in this model to reflect the martensitic transformation that determines the unique characteristics of shape memory alloys. This constitutive relation can be used to study the
Sensing Multi-dimensional Human Behavior in Opportunistic Networks
Pagani, Elena
Sabrina Gaito et al.: services are becoming highly personalized and influenced by user location, mobility, social attitudes and interests. (Figure: three dimensions of human behavior.) The authors name this the 3-dimensionality of human behavior (for short, H3D)
Multi-Dimensional Fragment Classification in Biomedical Text
Shatkay, Hagit
A thesis by Fengxia Pan, submitted to Queen's University, Kingston, Ontario, Canada, September 2006. Automated text categorization is the task of automatically assigning input text to a set of categories. With the increasing
Establishing Correlations in Multi-Dimensional GIS Databases
Agouris, Peggy
Mountrakis, Giorgos
KEY WORDS: Multi-dimensional, GIS, databases, queries, metadata, multimedia. GIS databases contain data of various types (e.g. imagery, maps, vector data, video, and text) and huge volumes of data (e.g. numerous satellite images), which are typically indexed according to their metadata information. For example, an image may be indexed according to
Application of Multi-Dimensional Sensing Technologies in Production Systems
NASA Astrophysics Data System (ADS)
Shibuya, Hisae; Kimachi, Akira; Suwa, Masaki; Niwakawa, Makoto; Okuda, Haruhisa; Hashimoto, Manabu
Multi-dimensional sensing has been used for various purposes in the field of production systems. The members of the IEEJ MDS committee investigated the trends in sensing technologies and their applications. In this paper, the results of investigations of auto-guided vehicles, cell manufacturing robots, safety, maintenance, worker monitoring, and sensor networks are discussed.
Image matrix processor for fast multi-dimensional computations
Roberson, George P. (Tracy, CA); Skeate, Michael F. (Livermore, CA)
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
The Multi-Dimensional Demands of Reading in the Disciplines
ERIC Educational Resources Information Center
Lee, Carol D.
2014-01-01
This commentary addresses the complexities of reading comprehension with an explicit focus on reading in the disciplines. The author proposes reading as entailing multi-dimensional demands of the reader and posing complex challenges for teachers. These challenges are intensified by restrictive conceptions of relevant prior knowledge and experience…
Towards Semantic Web Services on Large, Multi-Dimensional Coverages
NASA Astrophysics Data System (ADS)
Baumann, P.
2009-04-01
Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known to be advantageous in databases: declarativeness (describe results rather than the algorithms), safety in evaluation (no request can keep a server busy infinitely), and optimizability (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This allows any program to be webified, but does not allow for semantic interoperability: a function is identified only by its name and parameters, while the semantics is encoded in the (only human-readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning in the spirit of the Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages.
It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard, which ties WCPS into WCS. Another protocol which makes WCPS an OGC Web Processing Service (WPS) Profile is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages, where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an identifier associated, further the CRSs supported and, for each range field (aka band, channel), the interpolation methods applicable. The WCPS language offers access to one or several such coverages via a functional, side-effect free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ). The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing up to recursion, to maintain safe evaluation. As both the syntax and semantics of any WCPS expression are well known, the language is Semantic Web ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released in open source once ready.
Among the future tasks is to extend WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context
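The NDVI expression in the WCPS example can be mirrored in a short array-programming sketch; the band values and region mask below are invented:

```python
import numpy as np

# Array mirror of the WCPS NDVI expression (nir - red) / (nir + red),
# restricted to a binary region-of-interest mask. All values are invented.
nir = np.array([[0.8, 0.6], [0.4, 0.2]])
red = np.array([[0.1, 0.2], [0.3, 0.1]])
mask = np.array([[1, 1], [0, 1]], dtype=bool)

# Cells outside the region are set to NaN rather than dropped, keeping the
# result on the same grid as the input coverage.
ndvi = np.where(mask, (nir - red) / (nir + red), np.nan)
```

As in the WCPS request, the same expression works unchanged for coverages of any dimensionality, since the arithmetic is applied cell-wise.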
Portable laser synthesizer for high-speed multi-dimensional spectroscopy
Demos, Stavros G. (Livermore, CA); Shverdin, Miroslav Y. (Sunnyvale, CA); Shirk, Michael D. (Brentwood, CA)
2012-05-29
Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.
Efficient Subtorus Processor Allocation in a Multi-Dimensional Torus
Weizhen Mao; Jie Chen; William Watson
2005-11-30
Processor allocation in a mesh- or torus-connected multicomputer system with up to three dimensions is a hard problem that has received some research attention in the past decade. With the recent deployment of multicomputer systems with a torus topology of more than three dimensions, which are used to solve complex problems arising in scientific computing, it becomes important to study the problem of allocating processors in a torus configuration within a multi-dimensional torus-connected system. In this paper, we first define the concept of a semitorus. We present two partition schemes, the Equal Partition (EP) and the Non-Equal Partition (NEP), that partition a multi-dimensional semitorus into a set of sub-semitori. We then propose two processor allocation algorithms based on these partition schemes. We evaluate our algorithms by incorporating them in the commonly used FCFS and backfilling scheduling policies and conducting simulations using workload traces from the Parallel Workloads Archive. Specifically, our simulation experiments compare four algorithm combinations, FCFS/EP, FCFS/NEP, backfilling/EP, and backfilling/NEP, for two existing multi-dimensional torus-connected systems. The simulation results show that our algorithms (especially the backfilling/NEP combination) are capable of producing schedules with system utilization and mean bounded job slowdowns comparable to those of a fully connected multicomputer.
Bajorath, Jürgen
2013-09-01
The analysis of structure–activity relationships (SARs) is a central task in medicinal chemistry. Traditionally, SAR exploration has concentrated on individual compound series. This conventional approach is complemented by large-scale SAR analysis, which puts strong emphasis on data mining and SAR visualization. This contribution reviews recent concepts for large-scale SAR analysis including numerical functions to characterize global and local SAR information content of compound data sets, alternative activity landscape representations and data mining strategies. PMID:24050139
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tsai, Chin-Chung
2013-01-01
In the past, students' science learning self-efficacy (SLSE) was usually measured by questionnaires that consisted of only a single scale, which might be insufficient to fully understand their SLSE. In this study, a multi-dimensional instrument, the SLSE instrument, was developed and validated to assess students' SLSE based on the…
Development of a Scale Measuring Trait Anxiety in Physical Education
ERIC Educational Resources Information Center
Barkoukis, Vassilis; Rodafinos, Angelos; Koidou, Eirini; Tsorbatzoudis, Haralambos
2012-01-01
The aim of the present study was to examine the validity and reliability of a multi-dimensional measure of trait anxiety specifically designed for the physical education lesson. The Physical Education Trait Anxiety Scale was initially completed by 774 high school students during regular school classes. A confirmatory factor analysis supported the…
Recent Advances in Fragment-Based QSAR and Multi-Dimensional QSAR Methods
Myint, Kyaw Zeyar; Xie, Xiang-Qun
2010-01-01
This paper provides an overview of recently developed two dimensional (2D) fragment-based QSAR methods as well as other multi-dimensional approaches. In particular, we present recent fragment-based QSAR methods such as fragment-similarity-based QSAR (FS-QSAR), fragment-based QSAR (FB-QSAR), Hologram QSAR (HQSAR), and top priority fragment QSAR in addition to 3D- and nD-QSAR methods such as comparative molecular field analysis (CoMFA), comparative molecular similarity analysis (CoMSIA), Topomer CoMFA, self-organizing molecular field analysis (SOMFA), comparative molecular moment analysis (COMMA), autocorrelation of molecular surfaces properties (AMSP), weighted holistic invariant molecular (WHIM) descriptor-based QSAR (WHIM), grid-independent descriptors (GRIND)-based QSAR, 4D-QSAR, 5D-QSAR and 6D-QSAR methods. PMID:21152304
Multi-dimensional coordination in cross-country skiing analyzed using self-organizing maps.
Lamb, Peter F; Bartlett, Roger; Lindinger, Stefan; Kennedy, Gavin
2014-02-01
This study sought to ascertain how multi-dimensional coordination patterns changed with five poling speeds for 12 National Standard cross-country skiers during roller skiing on a treadmill. Self-organizing maps (SOMs), a type of artificial neural network, were used to map the multi-dimensional time series data on to a two-dimensional output grid. The trajectories of the best-matching nodes of the output were then used as a collective variable to train a second SOM to produce attractor diagrams and attractor surfaces to study coordination stability. Although four skiers had uni-modal basins of attraction that evolved gradually with changing speed, the other eight had two or three basins of attraction as poling speed changed. Two skiers showed bi-modal basins of attraction at some speeds, an example of degeneracy. What was most clearly evident was that different skiers showed different coordination dynamics for this skill as poling speed changed: inter-skier variability was the rule rather than an exception. The SOM analysis showed that coordination was much more variable in response to changing speeds compared to outcome variables such as poling frequency and cycle length. PMID:24060219
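The SOM pipeline described above (multi-dimensional movement time series mapped onto a two-dimensional output grid of nodes) can be sketched as follows. This is a generic SOM, not the authors' implementation; the grid size, decay schedules, and function names are illustrative assumptions.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=10, seed=0):
    """Minimal self-organizing map: each node on a 2-D output grid holds a
    weight vector in input space; training moves the best-matching node
    (BMU) and its grid neighbours toward each sample, with learning rate
    and neighbourhood radius decaying over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    d = data.shape[1]
    weights = rng.standard_normal((h, w, d))
    gy, gx = np.mgrid[0:h, 0:w]
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = 0.5 * (1 - t)                       # decaying learning rate
            radius = max(h, w) / 2 * (1 - t) + 0.5   # decaying neighbourhood
            dist2 = ((weights - x) ** 2).sum(axis=2)
            by, bx = np.unravel_index(np.argmin(dist2), (h, w))
            g = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * radius ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

def bmu(weights, x):
    """Grid coordinates of the best-matching node for sample x; the BMU
    trajectory over a trial is what the study feeds to a second SOM."""
    dist2 = ((weights - x) ** 2).sum(axis=2)
    return np.unravel_index(np.argmin(dist2), dist2.shape[:2])
```

Tracking `bmu` over each poling cycle yields the trajectory on the output grid that serves as the collective variable in the analysis above.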
Fourier transform assisted deconvolution of skewed peaks in complex multi-dimensional chromatograms.
Hanke, Alexander T; Verhaert, Peter D E M; van der Wielen, Luuk A M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel
2015-05-15
Lower order peak moments of individual peaks in heavily fused peak clusters can be determined by fitting peak models to the experimental data. The success of such an approach depends on two main aspects: the generation of meaningful initial estimates of the number and positions of the peaks, and the choice of a suitable peak model. For the detection of meaningful peaks in multi-dimensional chromatograms, a fast data scanning algorithm was combined with prior resolution enhancement through the reduction of column and system broadening effects with the help of two-dimensional fast Fourier transforms. To capture the shape of skewed peaks in multi-dimensional chromatograms, a formalism for the accurate calculation of exponentially modified Gaussian peaks, one of the most popular models for skewed peaks, was extended for direct fitting of two-dimensional data. The method is demonstrated to successfully identify and deconvolute peaks hidden in strongly fused peak clusters. Incorporation of automatic analysis and reporting of the statistics of the fitted peak parameters and calculated properties makes it easy to identify the regions of the chromatograms in which additional resolution is required for robust quantification. PMID:25841612
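As a concrete illustration of the peak model named above, here is a minimal one-dimensional exponentially modified Gaussian (EMG). The paper's contribution is a formalism for direct two-dimensional fitting, which this sketch does not reproduce; the function name and parametrisation are illustrative assumptions.

```python
import math

def emg(t, h, mu, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian of height h, centre mu
    and width sigma convolved with a one-sided exponential decay of time
    constant tau, which models the tailing of chromatographic peaks."""
    z = (sigma / tau - (t - mu) / sigma) / math.sqrt(2.0)
    amp = h * sigma / tau * math.sqrt(math.pi / 2.0)
    return amp * math.exp(sigma ** 2 / (2 * tau ** 2) - (t - mu) / tau) * math.erfc(z)
```

Because the convolution preserves area, the EMG integrates to the area of the underlying Gaussian, h * sigma * sqrt(2 * pi), which is a convenient sanity check when fitting.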
Construction of Multi-Dimensional Periodic Complementary Array Sets
NASA Astrophysics Data System (ADS)
Zeng, Fanxin; Zhang, Zhenyu
Multi-dimensional (MD) periodic complementary array sets (CASs) with impulse-like MD periodic autocorrelation functions are a natural generalization of (one-dimensional) periodic complementary sequence sets, and such array sets are widely applied in communication, radar, sonar, coded aperture imaging, and so forth. In this letter, based on multi-dimensional perfect arrays (MD PAs), a method for constructing MD periodic CASs is presented, which is carried out by sampling MD PAs. It is particularly worth mentioning that the numbers and sizes of sub-arrays in the proposed MD periodic CASs can be chosen freely within the range of possibilities. In particular, for arbitrarily given positive integers M and L, two-dimensional periodic polyphase CASs with M2 sub-arrays of size L × L can be produced by the proposed method. Analogously, pseudo-random MD periodic CASs are obtained when pseudo-random MD arrays are sampled. Finally, the validity of the proposed method is verified by an example.
Pauly, Anne; Wolf, Carolin; Mayr, Andreas; Lenz, Bernd; Kornhuber, Johannes; Friedland, Kristina
2015-01-01
Background In psychiatry, hospital stays and transitions to the ambulatory sector are susceptible to major changes in drug therapy that lead to complex medication regimens and common non-adherence among psychiatric patients. A multi-dimensional and inter-sectoral intervention is hypothesized to improve the adherence of psychiatric patients to their pharmacotherapy. Methods 269 patients from a German university hospital were included in a prospective, open, clinical trial with consecutive control and intervention groups. Control patients (09/2012-03/2013) received usual care, whereas intervention patients (05/2013-12/2013) underwent a program to enhance adherence during their stay and up to three months after discharge. The program consisted of therapy simplification and individualized patient education (multi-dimensional component) during the stay and at discharge, as well as subsequent phone calls after discharge (inter-sectoral component). Adherence was measured by the “Medication Adherence Report Scale” (MARS) and the “Drug Attitude Inventory” (DAI). Results The improvement in the MARS score between admission and three months after discharge was 1.33 points (95% CI: 0.73–1.93) higher in the intervention group compared to controls. In addition, the DAI score improved 1.93 points (95% CI: 1.15–2.72) more for intervention patients. Conclusion These two findings indicate significantly higher medication adherence following the investigated multi-dimensional and inter-sectoral program. Trial Registration German Clinical Trials Register DRKS00006358 PMID:26437449
Multi-Dimensional Damage Detection for Surfaces and Structures
NASA Technical Reports Server (NTRS)
Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah
2013-01-01
Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it will be very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that will provide real-time data regarding the health of the inflatable shell or rigidized structures, and information related to the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, two-dimensional thin film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and can include as many detection layers as are necessary for the structure construction or to afford the detection detail level required.
The damage is detected using a detector or sensory system, which may include a time domain reflectometer, resistivity monitoring hardware, or other resistance-based systems. To begin, a layered composite consisting of thin-film damage detection layers separated by non-detection layers is fabricated. The damage detection layers are attached to a detector that provides details regarding the physical health of each detection layer individually. If damage occurs to any of the detection layers, a change in the electrical properties of the damaged layers occurs, and a response is generated. Real-time analysis of these responses will provide details regarding the depth, location, and estimated size of the damage. Multiple damage sites can be detected, and the extent (depth) of the damage can be used to generate prognostic information related to the expected lifetime of the layered composite system. The detection system can be fabricated very easily using off-the-shelf equipment, and the detection algorithms can be written and updated (as needed) to provide the level of detail needed based on the system being monitored. Connecting to the thin-film detection layers is very easy as well. The truly unique feature of the system is its flexibility; the system can be designed to gather as much (or as little) information as the end user feels necessary. Individual detection layers can be turned on or off as necessary, and algorithms can be used to optimize performance. The system can be used to generate both diagnostic and prognostic information related to the health of layered composite structures, which will be essential if such systems are utilized for space exploration. The technology is also applicable to other in-situ health monitoring systems for structural integrity.
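The layer-by-layer reasoning above can be sketched as a toy analysis routine: each detection layer reports which conductive traces went open-circuit, the affected rows and columns localise the damage in the plane, and the deepest affected layer estimates penetration depth. This is a hedged illustration only, not the disclosed system, which relies on reflectometry/resistivity hardware; every name below is an assumption.

```python
def analyze(layers):
    """layers: one (broken_rows, broken_cols) pair of sets per detection
    layer, ordered from the outside in. Returns the penetration depth
    (number of layers affected) and, per affected layer, the broken
    traces whose intersections localise the damage."""
    report = []
    for depth, (rows, cols) in enumerate(layers):
        if rows or cols:
            report.append({
                'layer': depth,
                'rows': sorted(rows),   # broken x-direction traces
                'cols': sorted(cols),   # broken y-direction traces
            })
    if not report:
        return None                     # no damage response on any layer
    deepest = max(r['layer'] for r in report)
    return {'depth': deepest + 1, 'layers': report}
```

In this toy model, a shrinking set of broken traces from outer to inner layers would indicate a tapering (conical) penetration, which is the kind of size/direction inference the abstract describes.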
2011-01-01
Background The concept of resilience has captured the imagination of researchers and policy makers over the past two decades. However, despite the ever growing body of resilience research, there is a paucity of relevant, comprehensive measurement tools. In this article, the development of a theoretically based, comprehensive multi-dimensional measure of resilience in adolescents is described. Methods Extensive literature review and focus groups with young people living with chronic illness informed the conceptual development of scales and items. Two sequential rounds of factor and scale analyses were undertaken to revise the conceptually developed scales using data collected from young people living with a chronic illness and a general population sample. Results The revised Adolescent Resilience Questionnaire comprises 93 items and 12 scales measuring resilience factors in the domains of self, family, peer, school and community. All scales have acceptable alpha coefficients. Revised scales closely reflect conceptually developed scales. Conclusions It is proposed that, with further psychometric testing, this new measure of resilience will provide researchers and clinicians with a comprehensive and developmentally appropriate instrument to measure a young person's capacity to achieve positive outcomes despite life stressors. PMID:21970409
ROOT — An object oriented data analysis framework
Rene Brun; Fons Rademakers
1997-01-01
The ROOT system is an Object Oriented framework for large scale data analysis. ROOT, written in C++, contains, among others, an efficient hierarchical OO database, a C++ interpreter, advanced statistical analysis (multi-dimensional histogramming, fitting, minimization, cluster finding algorithms) and visualization tools. The user interacts with ROOT via a graphical user interface, the command line or batch scripts. The command and…
Multi-dimensional thermomechanical model for pseudoelastic response of SMA
NASA Astrophysics Data System (ADS)
Azadi, B.; Rajapakse, R. K. N. D.; Maijer, D. M.
2006-03-01
A multi-dimensional thermomechanical model has been developed to simulate the localized phase transformation and propagation of transformation front(s) in SMA materials. The current model is an extension of the one-dimensional model previously developed by the authors, which consists of a constitutive relation and a transformation evolution rule (kinetic relation). The constitutive relation is constructed based on continuum mechanics principles, and the kinetics of transformation are expressed in terms of various transformation surfaces in stress-temperature space. The model captures the localized deformations in both forward and reverse transformations. The finite element simulation of the forward and reverse transformations in a short NiTi strip under quasi-static extension is presented.
A Multi-Dimensional Classification Model for Scientific Workflow Characteristics
Ramakrishnan, Lavanya; Plale, Beth
2010-04-05
Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows have increasingly been used for orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedicine, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows.
Indexing Multi-Dimensional Time-Series with Support for Multiple Distance Measures
Zordan, Victor
Michail Vlachos. Objects carrying mobile devices move in space and register their location at different time instances, giving rise to spatiotemporal data. Human motion data generated by simultaneously tracking various body joints are also multi-dimensional…
Uncertain Location based Range Aggregates in a multi-dimensional space Ying Zhang #
Lin, Xuemin
Ying Zhang, Xuemin Lin. This paper studies the problem of processing uncertain location based range aggregates in a multi-dimensional space. We first focus on distance based range aggregate computation, where the location of the query point…
Balance properties of multi-dimensional words. Valérie Berthé and Robert Tijdeman
Tijdeman, Robert
Abstract: A word u is called 1-balanced if for any two factors v and w of u of equal length, we have ||v|_1 - |w|_1| <= 1, where |v|_1 denotes the number of occurrences of the letter 1 in v. The aim of this paper is to extend the notion of balance to multi-dimensional words. We first…
Bootstrapping for Significance of Compact Clusters in Multi-dimensional Datasets
Maitra, Ranjan
Ranjan Maitra. … in the clustering of multi-dimensional datasets. The developed procedure compares two models and declares the more … of the procedure is illustrated on two well-known classification datasets and comprehensively evaluated in terms …
Modelling Culture with Complex, Multi-dimensional, Multi-agent Systems
Ulieru, Mihaela
Alexis Morris, William Ross, Mihaela Ulieru, University of New Brunswick ({alexis.morris, william.ross, ulieru}@unb.ca; h5hosseini@uwaterloo.ca). Abstract: This paper explores culture and cultural modelling from a complex systems, multi-dimensional, and multi…
NASA Astrophysics Data System (ADS)
Duy, Truong Vinh Truong; Ozaki, Taisuke
2014-01-01
The fast Fourier transform (FFT) is undoubtedly an essential primitive that has been applied in various fields of science and engineering. In this paper, we present a decomposition method for the parallelization of multi-dimensional FFTs with the smallest communication amounts for all ranges of the number of processes compared to previously proposed methods. This is achieved by two distinguishing features: adaptive decomposition and transpose order awareness. In the proposed method, the FFT data is decomposed on a row-wise basis that maps the multi-dimensional data into one-dimensional data, and translates the corresponding coordinates from multiple dimensions into one dimension, so that the one-dimensional data can be divided and allocated equally to the processes using a block distribution. As a result, and unlike previous works in which the dimensions of decomposition are pre-defined, our method can adaptively decompose the FFT data on the lowest possible dimensions depending on the number of processes. In addition, this row-wise decomposition provides plenty of alternatives for data transposes, and different transpose orders result in different amounts of communication. We identify the best transpose orders with the smallest communication amounts for the 3-D, 4-D, and 5-D FFTs by analyzing all possible cases. We also develop a general parallel software package for the most popular 3-D FFT based on our method using the 2-D domain decomposition. Numerical results show good performance and scaling properties of our implementation in comparison with other parallel packages. Given both communication efficiency and scalability, our method is promising for the development of highly efficient parallel packages for the FFT.
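The row-wise decomposition described above can be illustrated with a small sketch: multi-dimensional coordinates are mapped row-major into a flat index, and the flat index space is split into near-equal blocks, one per process. This is a generic reconstruction of the idea, not the authors' code; the names and the exact block-boundary convention are assumptions.

```python
from itertools import product

def flatten_index(coords, shape):
    """Row-major ('row-wise') mapping of a multi-dimensional coordinate
    to a single 1-D index, so the whole FFT grid becomes one flat array."""
    idx = 0
    for c, n in zip(coords, shape):
        idx = idx * n + c
    return idx

def owner(idx, total, nprocs):
    """Block distribution of the flat index space: the first
    total % nprocs processes own one extra element, so per-process
    loads differ by at most one regardless of the grid dimensions."""
    base, extra = divmod(total, nprocs)
    cut = extra * (base + 1)
    if idx < cut:
        return idx // (base + 1)
    return extra + (idx - cut) // base
```

Because the distribution acts on the flat index space, it balances load for any process count, including counts that do not divide any single dimension of the grid, which is the flexibility the adaptive decomposition exploits.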
Multi-Dimensional Analysis of Dynamic Human Information Interaction
ERIC Educational Resources Information Center
Park, Minsoo
2013-01-01
Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…
Multi-Dimensional Uncertainty Analysis in Secure and Dependable Domain
Department of Information Science and Engineering, University of Trento, Italy (yudis.asnar@disi.unitn.it); Paolo Giorgini, Department of Information Science and Engineering, University of Trento, Italy (paolo…). … in the area of risk management and safety & reliability engineering. However, what is still missing is a clear…
Multi-dimensional process hypercube for signal validation
Holbert, K.E.; Upadhyaya, B.R.
1989-07-01
The optimal control and safe operation of a nuclear power plant require reliable information concerning the state of the process. Signal validation is the detection, isolation and characterization of faulty signals. Properly validated process signals are beneficial from the standpoint of increased plant availability and reliability of operator actions. A signal validation technique utilizing a process hypercube comparison (PHC) was developed during this research. The hypercube is a multi-dimensional joint histogram of the process conditions, created off-line during a learning phase. In the event that a newly observed plant state does not match those in the learned hypercube, the PHC algorithm performs signal validation by progressively hypothesizing that one or more signals are in error. This assumption is then either substantiated or denied. In the case where many signals are found to be in error, it is concluded that the process conditions are abnormal. The hypercube signal validation methodology was tested using operational data from both a commercial pressurized water reactor (PWR) and the Experimental Breeder Reactor II (EBR-II). 11 refs., 26 figs., 3 tabs.
Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions
NASA Astrophysics Data System (ADS)
Kwak, Kyujin; Yang, Seungwon
2015-08-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn a lot of attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations for the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected into databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.
Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory
NASA Astrophysics Data System (ADS)
Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.
2014-01-01
Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. 
The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.
Psychometric properties and confirmatory factor analysis of the Jefferson Scale of Physician Empathy
2011-01-01
Background Empathy towards patients is considered to be associated with improved health outcomes. Many scales have been developed to measure empathy in health care professionals and students. The Jefferson Scale of Physician Empathy (JSPE) has been widely used. This study was designed to examine the psychometric properties and the theoretical structure of the JSPE. Methods A total of 853 medical students responded to the JSPE questionnaire. A hypothetical model was evaluated by structural equation modelling to determine the adequacy of goodness-of-fit to sample data. Results The model showed excellent goodness-of-fit. Further analysis showed that the hypothesised three-factor model of the JSPE structure fits well across the gender differences of medical students. Conclusions The results supported scale multi-dimensionality. The 20-item JSPE provides a valid and reliable scale to measure empathy not only in undergraduate and graduate medical education programmes but also among practising doctors. The limitations of the study are discussed and some recommendations are made for future practice. PMID:21810268
Multi-dimensional ultra-high frequency passive radio frequency identification tag antenna designs
Delichatsios, Stefanie Alkistis
2006-01-01
In this thesis, we present the design, simulation, and empirical evaluation of two novel multi-dimensional ultra-high frequency (UHF) passive radio frequency identification (RFID) tag antennas, the Albano-Dipole antenna ...
Accurate Multi-Dimensional Poisson-Disk Sampling MANUEL N. GAMITO
Maddock, Steve
I.3.3 [Computer Graphics]: Picture/Image Generation--Antialiasing; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. Accurate Multi-Dimensional Poisson-Disk Sampling. ACM Trans. Graph.
Generalized multipartitioning of multi-dimensional arrays for parallelizing line-sweep computations
Alain Darte; John M. Mellor-crummey; Robert J. Fowler; Daniel G. Chavarría-miranda
2003-01-01
Multipartitioning is a strategy for decomposing multi-dimensional arrays into tiles and mapping the resulting tiles onto a collection of processors. This class of partitionings enables efficient parallelization of line-sweep computations…
Chemistry and Transport in a Multi-Dimensional Model
NASA Technical Reports Server (NTRS)
Yung, Yuk L.
2004-01-01
Our work has two primary scientific goals: the interannual variability (IAV) of stratospheric ozone, and the hydrological cycle of the upper troposphere and lower stratosphere. Our efforts are aimed at integrating new information obtained by spacecraft and aircraft measurements to achieve a better understanding of the chemical and dynamical processes that are needed for realistic evaluations of human impact on the global environment. A primary motivation for studying the ozone layer is to separate anthropogenic perturbations of the ozone layer from natural variability. Using the recently available merged ozone data (MOD), we have carried out an empirical orthogonal function (EOF) study of the temporal and spatial patterns of the IAV of total column ozone in the tropics. The outstanding problem about water in the stratosphere is its secular increase over the last few decades. The Caltech/JPL multi-dimensional chemical transport model (CTM) is used to simulate the processes that control water vapor and its isotopic composition in the stratosphere. Datasets we will use for comparison with model results include those obtained by the Total Ozone Mapping Spectrometer (TOMS), the Solar Backscatter Ultraviolet instruments (SBUV and SBUV/2), the Stratospheric Aerosol and Gas Experiment (SAGE I and II), the Halogen Occultation Experiment (HALOE), the Atmospheric Trace Molecule Spectroscopy experiment (ATMOS), and those soon to be obtained by the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE) mission. The focus of the investigations is the exchange between the stratosphere and the troposphere, and between the troposphere and the biosphere.
Multi-dimensional Multiphase Modeling of Sediment Transport
NASA Astrophysics Data System (ADS)
Cheng, Z.; d'Albignac, S.; Yu, X.; Hsu, T.; Sou, I.; Calantoni, J.
2012-12-01
Sediment transport driven by waves and currents is of great significance for predicting coastal morphodynamics. Eulerian two-phase models have been shown to be effective for studying sheet-flow sediment transport, though most of them are limited to Reynolds-averaged, one-dimensional-vertical formulations; hence bedforms, plug flow and turbulence cannot be resolved. Our goal is to develop four-way coupled multiphase models for multi-dimensional sediment transport under the numerical framework of OpenFOAM for Eulerian modeling and CFDEM for Euler-Lagrangian modeling. In the Eulerian modeling, particle-particle interaction is modeled using the kinetic theory of granular flow for binary collisions and a phenomenological closure for stresses of enduring contact. To improve the capability of the model for a range of grain sizes, a new closure for the fluid-particle velocity fluctuation correlation in the k-ε equations is proposed. The model is validated by comparing the numerical results with laboratory experiments under steady flow and oscillatory flow for grain sizes ranging from 0.13 to 0.51 mm. To improve the closure of particle stress and to study poly-dispersed sediment transport processes, an Euler-Lagrangian solver called CFDEM, which couples OpenFOAM for the fluid phase and LIGGGHTS for the particle phase, is modified for sand transport in oscillatory flow. Preliminary investigation suggests that even under sheet-flow conditions, small bed irregularities are observed during flow reversal. These small irregularities later encourage the formation of large sediment clouds during peak flow. 2D/3D simulations of the recent U-tube experiments at the Naval Research Laboratory will be carried out to study instabilities in sheet flow and poly-dispersed effects.
On the behaviour near expiry for multi-dimensional American options
NASA Astrophysics Data System (ADS)
Nyström, Kaj
2008-03-01
In this paper we analyse the behaviour, near expiry, of the free boundary appearing in the pricing of multi-dimensional American options in a financial market driven by a general multi-dimensional Ito diffusion. In particular, we prove regularity for the pricing function up to the terminal state and we establish a sufficient criterion for the conclusion that the optimal exercise boundary approaches the terminal state faster than parabolically.
Scaling analysis of stock markets.
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series; LSDFA reveals more local properties by using local scale exponents; DCCA quantifies the cross-correlation between two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis). The findings are that stock correlations are influenced by the economic systems of the different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. LSDFA shows more comprehensive and detailed features than traditional DFA, and reveals the economic integration of China with the world after the global financial crisis. The cross-correlations differ across the six markets; the three Chinese stocks reach their weakest cross-correlations during the global financial crisis. PMID:24985421
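The DFA procedure summarized in the abstract above (integrate the series, detrend within windows of increasing size, and measure the RMS fluctuation versus window size) can be sketched as follows. This is a minimal illustration; the function name and scale choices are assumptions, not taken from the paper:

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis of a 1-D series.

    Returns the fluctuation function F(s) for each window size s, and the
    scaling exponent alpha (slope of log F vs. log s): alpha ~ 0.5 for
    uncorrelated noise, alpha > 0.5 for persistent long-range correlations.
    """
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # local fit
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))         # RMS fluctuation at scale s
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return np.array(F), alpha
```

For white noise the estimated exponent should sit near 0.5, which is a quick sanity check on any DFA implementation.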
Pricing of options on stocks driven by multi-dimensional operator stable Levy processes
Przemyslaw Repetowicz; Peter Richmond
2005-02-04
We model the price of a stock via a Langevin equation with multi-dimensional fluctuations coupled in the price and in time. We generalize previous models in that we assume that the fluctuations conditioned on the time step are compound Poisson processes with operator stable jump intensities. We derive exact relations for Fourier transforms of the jump intensity in the case of different scaling indices $\underline{\underline{E}}$ of the process. We express the Fourier transform of the joint probability density of the process to attain given values at several different times and to attain a given maximal value in a given time period through Fourier transforms of the jump intensity. Then we consider a portfolio composed of stocks and of options on stocks and we derive the Fourier transform of a random variable $\mathfrak{D}_t$ (deviation of the portfolio) that is defined as a small temporal change of the portfolio diminished by the compound interest earned. We show that if the price of the option at time $t$ satisfies a certain functional equation specified in the text then the deviation of the portfolio has a zero mean $E[\mathfrak{D}_t] = 0$ and the option pricing problem may have a solution. We compare our approach to other approaches that assumed the log-characteristic function of the fluctuations that drive the stock price to be an analytic function.
NASA Astrophysics Data System (ADS)
Suga, Shinsuke
2014-11-01
We propose accurate explicit numerical schemes based on the lattice Boltzmann (LB) method for multi-dimensional diffusion equations. In the LB schemes, the velocity models D2Q9 and D2Q13 are used for two-dimensional equations and D3Q19 and D3Q25 for three-dimensional equations. We introduce free parameters that characterize the weights of the equilibrium distribution functions to reduce numerical errors. Consistency analysis through the fourth-order Chapman-Enskog expansion of the distribution functions gives an approximate diffusion equation with error terms up to fourth order. The relaxation parameter and weight parameters are determined so that second-order error terms are eliminated in the approximate equation. Stability analysis shows that we can find a relaxation parameter such that each of the presented schemes is stable for given diffusion coefficients and discretization parameters. Numerical experiments on isotropic and anisotropic benchmark problems show that the schemes derived from the velocity models D2Q13 and D3Q25 are useful for numerical simulations of practical problems governed by two- and three-dimensional diffusion equations, respectively. In particular, schemes in which the relaxation parameter is set to 1 demonstrate fourth-order accuracy under the stability condition.
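A minimal sketch of the kind of LB diffusion scheme described above, using the standard D2Q9 lattice with a single-relaxation-time (BGK) collision step followed by streaming. The weight tuning and fourth-order variants of the paper are not reproduced; this only illustrates the basic collide-and-stream structure on a periodic grid:

```python
import numpy as np

# D2Q9 velocities and weights (standard lattice Boltzmann constants)
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def lb_diffusion(conc, tau, steps):
    """Collide-and-stream LB solver for the 2-D diffusion equation.

    Diffusivity is D = (tau - 1/2)/3 in lattice units; periodic boundaries.
    """
    f = W[:, None, None] * conc[None, :, :]     # start from equilibrium
    for _ in range(steps):
        rho = f.sum(axis=0)                     # macroscopic concentration
        feq = W[:, None, None] * rho[None, :, :]
        f += (feq - f) / tau                    # BGK collision
        for i, (cx, cy) in enumerate(C):        # streaming along c_i
            f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f.sum(axis=0)
```

With periodic streaming the total mass is conserved exactly (up to round-off), while an initial blob spreads and its peak decays, which is a convenient correctness check.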
A Comment on "On Some Contradictory Computations in Multi-dimensional Mathematics"
E. Capelas de Oliveira; W. A. Rodrigues Jr
2006-03-27
In this paper we analyze the status of some `unbelievable results' presented in the paper `On Some Contradictory Computations in Multi-Dimensional Mathematics' [1], published in Nonlinear Analysis, a journal indexed in the Science Citation Index. Among the unbelievable results `proved' in the paper we find statements such as: (i) a linear transformation which is a rotation in R^2 with rotation angle theta different from nπ/2 is inconsistent with arithmetic, (ii) complex number theory is inconsistent. Besides these `results' of a mathematical nature, [1] also offers a `proof' that Special Relativity is inconsistent. Now, we are left with only two options: (a) the results of [1] are correct, and in this case we need a revolution in Mathematics (and also in Physics), or (b) the paper is a potpourri of nonsense. We show that option (b) is the correct one. All `proofs' appearing in [1] are trivially wrong, being based on a poor knowledge of advanced calculus notions. There are many examples (some of them discussed in [2,3,4,5,6]) of completely wrong papers using non sequitur Mathematics in the Physics literature. Taking into account also that a paper like [1] appeared in a Mathematics journal, we think that it is time for editors and referees of scientific journals to become more careful in order to avoid the dissemination of nonsense.
Confirmatory Factor Analysis and Profile Analysis via Multidimensional Scaling
ERIC Educational Resources Information Center
Kim, Se-Kang; Davison, Mark L.; Frisby, Craig L.
2007-01-01
This paper describes the Confirmatory Factor Analysis (CFA) parameterization of the Profile Analysis via Multidimensional Scaling (PAMS) model to demonstrate validation of profile pattern hypotheses derived from multidimensional scaling (MDS). Profile Analysis via Multidimensional Scaling (PAMS) is an exploratory method for identifying major…
Multi-dimensional high-order numerical schemes for Lagrangian hydrodynamics
Dai, William W; Woodward, Paul R
2009-01-01
An approximate solver for multi-dimensional Riemann problems at grid points of unstructured meshes, and a numerical scheme for multi-dimensional hydrodynamics, have been developed in this paper. The solver is simple, and is developed only for use in numerical schemes for hydrodynamics. The scheme is truly multi-dimensional, is second-order accurate in both space and time, and satisfies conservation laws exactly for mass, momentum, and total energy. The scheme has been tested through numerical examples involving strong shocks. It has been shown that the scheme offers the principal advantages of high-order Godunov schemes: robust operation in the presence of very strong shocks and thin shock fronts.
Towards Optimal Multi-Dimensional Query Processing with Bitmap Indices
Rotem, Doron; Stockinger, Kurt; Wu, Kesheng
2005-09-30
Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. This paper studies strategies for minimizing the access costs for processing multi-dimensional queries using bitmap indices with binning. Innovative features of our algorithm include (a) optimally placing the bin boundaries and (b) dynamically reordering the evaluation of the query terms. In addition, we derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
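The binned-bitmap strategy described above can be illustrated with a toy sketch: each bin of an attribute gets one bitmap, a range query ORs the bitmaps of bins fully covered by the range, and only the partially covered edge bins require a "candidate check" against the base data. The function names and uniform bin edges are illustrative assumptions, not the paper's algorithm for optimal boundary placement:

```python
import numpy as np

def build_binned_bitmap(values, edges):
    """One boolean bitmap per bin; bin b covers [edges[b], edges[b+1])."""
    bins = np.digitize(values, edges[1:-1])        # bin id for each record
    return np.stack([bins == b for b in range(len(edges) - 1)])

def range_query(values, bitmaps, edges, lo, hi):
    """Answer lo <= v < hi: OR fully covered bins, candidate-check edge bins."""
    hits = np.zeros(len(values), dtype=bool)
    for b in range(len(edges) - 1):
        blo, bhi = edges[b], edges[b + 1]
        if lo <= blo and bhi <= hi:                # fully covered: bitmap only
            hits |= bitmaps[b]
        elif blo < hi and bhi > lo:                # edge bin: read base data
            hits |= bitmaps[b] & (values >= lo) & (values < hi)
    return hits
```

In a real system the edge-bin branch is the expensive one (it touches disk), which is why the paper optimizes bin boundary placement to minimize that I/O.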
The Multi-Dimensional Character and Mechanisms of Core-Collapse Supernovae
NASA Astrophysics Data System (ADS)
Burrows, Adam; Dessart, Luc; Livne, Eli
2007-10-01
On this twentieth anniversary of the epiphany of SN1987A, we summarize various proposed explosion mechanisms for the generic core-collapse supernova. Whether the agency is neutrinos, acoustic power, magnetohydrodynamics, or some hybrid combination of these three, both multi-dimensional simulations and constraining astronomical measurements point to a key role for asphericities and instabilities in collapse and explosion dynamics. Moreover, different progenitors may explode in different ways, and have different observational signatures. Whatever these are, the complex phenomena being revealed through modern multi-dimensional numerical simulations manifest a richness that was little anticipated in the early years of theoretical supernova research, a richness that continues to challenge us today.
SCALE DRAM Subsystem Power Analysis Vimal Bhalodia
Asanović, Krste
SCALE DRAM Subsystem Power Analysis, by Vimal Bhalodia. Submitted to the Department of Electrical Engineering and Computer Science; accepted by Arthur C. Smith, Chairman, Department Committee on Graduate Theses. The SCALE DRAM Subsystem is an energy-aware DRAM system with various system policies that make power/performance tradeoffs.
Gabbouj, Moncef
Dominant Color Extraction based on Dynamic Clustering by Multi-Dimensional Particle Swarm Optimization. Extracting the dominant colors that are prominent in a visual scenery is of utter importance, since the human visual system primarily uses them for perception. In this paper we address dominant color extraction as a dynamic clustering problem.
Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests
A. M. Niles; J. W. Forbes; C. M. Tarver; S. K. Chidester; F. Garcia; D. W. Greenwood; R. G. Garza; L L Swizter
2001-01-01
The Steven Test was developed to determine the relative impact sensitivity of metal-encased solid high explosives and to be amenable to two-dimensional modeling. Low-level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressures and event times.
Circuit-Switched Broadcasting in Multi-Port Multi-Dimensional Torus Networks
Tseng, Yu-Chee
San-Yuan Wang. We consider this problem in a circuit-switched torus with multi-port capability, where a node can simultaneously send messages on multiple ports. Keywords: broadcast, circuit switching, collective communication, interconnection network, parallel processing, torus.
Multi-dimensional SLA-based Resource Allocation for Multi-tier Cloud Computing Systems
Pedram, Massoud
With increasing demand for computing and memory, distributed computing systems have attracted much attention. SLA-based resource allocation for multi-tier applications in cloud computing is considered. An upper bound on the total profit…
ERIC Educational Resources Information Center
Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen
2011-01-01
Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These…
Multi-Dimensional Deep Memory Go-Player for Parameter Exploring Policy Gradients
Schmidhuber, Juergen
Universität München, Germany; Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, Lugano. …a combination of two recent developments in Machine Learning. We employ Multi-Dimensional Recurrent Neural… Learning (of the controller network's parameters) can be done in a number of ways. Recent work [5] has used state…
Efficient Addressing of Multi-dimensional Signal Constellations Using a Lookup Table
Kabal, Peter
We use a lookup table for the addressing of an optimally shaped constellation. The method is based on partitioning the constellation. In shaping, one tries to reduce the average energy of a signal constellation.
Shaping of Multi-dimensional Signal Constellations Using a Lookup Table
Kabal, Peter
Shaping is used to reduce the average energy of a signal constellation. Addressing is the assignment of the data bits. We use a lookup table for addressing. The method is based on partitioning the two-dimensional sub-constellations.
Address Decomposition for the Shaping of Multi-dimensional Signal Constellations
Kabal, Peter
This scheme, called address decomposition, is based on decomposing the addressing. The constellation points are usually selected as a finite subset of a lattice; this is called a signal constellation.
Multi-Dimensional Screening Device (MDSD) for the Identification of Gifted/Talented Children.
ERIC Educational Resources Information Center
Kranz, Bella
The monograph presents a model for identifying gifted/talented children which is based on a multidimensional concept of intelligence, designed to include the less accepted school population in its initial search, and tied to a staff development program for teachers who must be part of the screening process. Rationale for the Multi-Dimensional…
Workshop on Multi-Dimensional Separation of Concerns in Software Engineering
Perry, Dewayne E.
…of the software lifecycle. For example, the prevalent kind of concern in object-oriented programming is data, or class; each concern in this dimension is a data type defined and encapsulated by a class. Features [7]… Peri Tarr, William…
Hardware/Software Interface for Multi-Dimensional Processor Arrays Alain Darte
Risset, Tanguy
Alain Darte, CNRS, LIP, ENS. …a large communication bandwidth between the processor and graphic hardware accelerators, hence important gains in hardware area. Introduction: with the widespread development of systems on a chip (SoC…
A Practical System Approach for Fully Autonomous Multi-Dimensional Structural Health Monitoring
Ha, Dong S.
…propagation method on a single board, while sharing the same DSP (Digital Signal Processing) processor and the piezoelectric patches. Three functional blocks, such as signal excitation/generation, signal sensing, and data…
Dredze, Mark
Experimenting with Drugs (and Topic Models): Multi-Dimensional Exploration of Recreational Drug Discussions. …of new recreational drugs and trends requires mining current information from non-traditional text sources. The resulting model learns factors that correspond to drug type, delivery method (smoking…
Potma, Eric Olaf
Multi-dimensional differential imaging with FE-CARS microscopy. Vishnu Vardhan Krishnamachari, Eric O. Potma. Coherent anti-Stokes Raman scattering (CARS) microscopy [9-11] provides high-resolution images with contrast based… applied to CARS microscopy and shown to add a series of new contrast mechanisms to the existing palette.
Multi-Dimensional Adaptive Simulation of Shock-Induced Detonation in a Shock Tube
Texas at Arlington, University of
P. Ravindran. …applications, primarily in propulsion [1]. Detonations use a reacting flow mechanism wherein a strong shock wave initiates a zone of intense chemical reactions. The leading shock causes a compression of the combustible mixture, which…
A NEW CLASS OF ENTROPY ESTIMATORS FOR MULTI-DIMENSIONAL DENSITIES Erik G. Miller
Massachusetts at Amherst, University of
Erik G. Miller, EECS Department, UC Berkeley, Berkeley, CA 94720, USA. International Conference on Acoustics, Speech, and Signal Processing, 2003. Abstract: We present a new class of estimators for approximating the entropy of multi-dimensional densities.
Developing a Hypothetical Multi-Dimensional Learning Progression for the Nature of Matter
ERIC Educational Resources Information Center
Stevens, Shawn Y.; Delgado, Cesar; Krajcik, Joseph S.
2010-01-01
We describe efforts toward the development of a hypothetical learning progression (HLP) for the growth of grade 7-14 students' models of the structure, behavior and properties of matter, as it relates to nanoscale science and engineering (NSE). This multi-dimensional HLP, based on empirical research and standards documents, describes how students…
Example for Exponential Growth of Complexity in a Finite Horizon Multi-dimensional Dispersing
Bálint, Péter
Budapest, Hungary, and Institute of Mathematics, Budapest University of Technology and Economics. …the complexity of singularities grows exponentially with the iteration of the map. This implies the existence… Keywords: billiards with singularities, complexity of the singularity set, astigmatism, multi-dimensional dispersing billiards.
MULTI-DIMENSIONAL VISUALIZATION OF PROJECT CONTROL DATA Anthony D. Songer1
Anthony D. Songer, Benjamin Hays. …of this data is created during the controls phase of projects and relates to cost, schedule, and administrative project control systems. However, changes in project control methods have been slow to evolve. The lack…
Assessment of the RELAP5 multi-dimensional component model using data from LOFT test L2-5
Davis, C.B.
1998-01-01
The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the LOFT L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident. Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D Version BF02 computer code. The calculated thermal-hydraulic responses of the LOFT primary and secondary coolant systems were generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP/MOD3.
Multi-dimensional reduction using self-organizing map
NASA Astrophysics Data System (ADS)
Kim, Kho Pui; Yusof, Fadhilah; Daud, Zalina binti Mohd
2014-07-01
The Self-Organising Map (SOM) has been found to be a useful tool for synoptic climatology, analysis of extremes and rainfall patterns, cloud classification, and climate change analysis. In data preprocessing for statistical downscaling, Principal Component Analysis (PCA), or empirical orthogonal function (EOF) analysis, is used to select the mode criterion for the predictor and predictand fields when building a model. However, EOF analysis captures less of the total variance in most cases, with 70% to 90% of the total population variance accounted for in the analysis. Therefore, SOM is proposed to obtain a nonlinear mapping for the preprocessing step. This study examines the dimension reduction of an NCEP variable using SOM during the November-December-January-February (NDJF) periods. The NCEP data used are the 20-grid-point atmospheric data for the variable Sea Level Pressure (SLP). The results showed that SOM extracted the high-dimensional data onto a low-dimensional representation.
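A minimal sketch of SOM-based dimension reduction in the spirit of the abstract above: high-dimensional samples are mapped onto a small grid of nodes, and each sample is reduced to the index of its best-matching unit. The grid size, decay schedules and random initialization below are illustrative assumptions, not the study's configuration:

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=400, lr0=0.5, sigma0=2.0, seed=0):
    """Online SOM training; returns a codebook of shape (gx*gy, n_features)."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    nodes = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    w = rng.standard_normal((gx * gy, data.shape[1])) * data.std() + data.mean()
    for t in range(iters):
        x = data[rng.integers(len(data))]           # random training sample
        bmu = np.argmin(((w - x) ** 2).sum(axis=1)) # best-matching unit
        lr = lr0 * np.exp(-t / iters)               # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)         # shrinking neighborhood
        d2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))          # neighborhood kernel
        w += lr * h[:, None] * (x - w)              # pull nodes toward x
    return w

def project(data, w):
    """Reduce each sample to the index of its best-matching unit."""
    return np.argmin(((data[:, None, :] - w[None, :, :]) ** 2).sum(-1), axis=1)
```

`project` gives the low-dimensional (grid-cell) representation of each sample, analogous to using SOM instead of EOF modes as the reduced predictor field.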
SCALE DRAM subsystem power analysis
Bhalodia, Vimal
2005-01-01
To address the needs of the next generation of low-power systems, DDR2 SDRAM offers a number of low-power modes with various performance and power consumption tradeoffs. The SCALE DRAM Subsystem is an energy-aware DRAM ...
FRW Bulk Viscous Cosmology in Multi Dimensional Space-Time
NASA Astrophysics Data System (ADS)
Katore, S. D.; Shaikh, A. Y.; Kapse, D. V.; Bhaskar, S. A.
2011-09-01
The exact solutions of the field equations are obtained by using the gamma-law equation of state p = (γ − 1)ρ, in which the parameter γ depends on the scale factor R. The functional form of γ(R) is used to analyze a wide range of phases in cosmic history: the inflationary phase and the radiation-dominated phase. The corresponding physical interpretations of the cosmological solutions are also discussed in the framework of (n+2)-dimensional space-time.
Reply to Adams: Multi-Dimensional Edge Interference
Eagle, Nathan N.
We completely agree with Adams that, in social network analysis, the particular research question should drive the definition of what constitutes a tie (1). However, we believe that even studies of inherently social ...
Turbocharged Speed Scaling: Analysis and Evaluation
Williamson, Carey
Maryam Elahi, Carey Williamson, Philipp Woelfel. In speed scaling systems, the execution speed of a processor can be adjusted… turbocharging, applied in conjunction with Fair Sojourn Protocol (FSP) scheduling and job-count-based speed…
Anusha, L. S.; Nagendra, K. N.
2011-01-01
The solution of the polarized line radiative transfer (RT) equation in multi-dimensional geometries has been rarely addressed, and only under the approximation that the changes of frequencies at each scattering are uncorrelated (complete frequency redistribution). With the increase in the resolution power of telescopes, being able to handle RT in multi-dimensional structures becomes absolutely necessary. In the present paper, our first aim is to formulate the polarized RT equation for resonance scattering in multi-dimensional media, using the elegant technique of irreducible spherical tensors T^K_Q(i, Ω). Our second aim is to develop a numerical method of solution based on the polarized approximate lambda iteration (PALI) approach. We consider both complete frequency redistribution and partial frequency redistribution (PRD) in the line scattering. In a multi-dimensional geometry, the radiation field is non-axisymmetric even in the absence of a symmetry-breaking mechanism such as an oriented magnetic field. We generalize here to the three-dimensional (3D) case the decomposition technique developed for the Hanle effect in a one-dimensional (1D) medium, which allows one to represent the Stokes parameters I, Q, U by a set of six cylindrically symmetrical functions. The scattering phase matrix is expressed in terms of T^K_Q(i, Ω) (i = 0, 1, 2; K = 0, 1, 2; −K ≤ Q ≤ +K), with Ω being the direction of the outgoing ray. Starting from the definition of the source vector, we show that it can be represented in terms of six components S^K_Q independent of Ω. The formal solution of the multi-dimensional transfer equation shows that the Stokes parameters can also be expanded in terms of T^K_Q(i, Ω). Because of the 3D geometry, the expansion coefficients I^K_Q remain Ω-dependent.
We show that each I^K_Q satisfies a simple transfer equation with a source term S^K_Q and that this transfer equation provides an efficient approach for handling the polarized transfer in multi-dimensional geometries. A PALI method for 3D, associated with a core-wing separation method for treating PRD, is developed. It is tested by comparison with 1D solutions, and several benchmark solutions in the 3D case are given.
Uniqueness of the multi-dimensional inverse scattering problem for ...
[18] and the references given there for the history and the recent progress in… The situation considerably changes when we deal with time-dependent potentials. … An important role in our analysis is played by the function u~c defined by usa(t, x; …
Multi-dimensional hybrid Fourier continuation-WENO solvers for conservation laws
NASA Astrophysics Data System (ADS)
Shahbazi, Khosro; Hesthaven, Jan S.; Zhu, Xueyu
2013-11-01
We introduce a multi-dimensional point-wise multi-domain hybrid Fourier-Continuation/WENO technique (FC-WENO) that enables high-order and non-oscillatory solution of systems of nonlinear conservation laws, and essentially dispersionless, spectral, solution away from discontinuities, as well as mild CFL constraints for explicit time stepping schemes. The hybrid scheme conjugates the expensive, shock-capturing WENO method in small regions containing discontinuities with the efficient FC method in the rest of the computational domain, yielding a highly effective overall scheme for applications with a mix of discontinuities and complex smooth structures. The smooth and discontinuous solution regions are distinguished using the multi-resolution procedure of Harten [A. Harten, Adaptive multiresolution schemes for shock computations, J. Comput. Phys. 115 (1994) 319-338]. We consider a WENO scheme of formal order nine and a FC method of order five. The accuracy, stability and efficiency of the new hybrid method for conservation laws are investigated for problems with both smooth and non-smooth solutions. The Euler equations for gas dynamics are solved for the Mach 3 and Mach 1.25 shock wave interaction with a small, plain, oblique entropy wave using the hybrid FC-WENO, the pure WENO and the hybrid central difference-WENO (CD-WENO) schemes. We demonstrate considerable computational advantages of the new FC-based method over the two alternatives. Moreover, in solving a challenging two-dimensional Richtmyer-Meshkov instability (RMI), the hybrid solver results in seven-fold speedup over the pure WENO scheme. Thanks to the multi-domain formulation of the solver, the scheme is straightforwardly implemented on parallel processors using message passing interface as well as on Graphics Processing Units (GPUs) using CUDA programming language. 
The performance of the solver on parallel CPUs yields almost perfect scaling, illustrating the minimal communication requirements of the multi-domain strategy. For the same RMI test, the hybrid computation on a single GPU, in double-precision arithmetic, displays a five- to six-fold speedup over the hybrid computation on a single CPU. The relative speedup of the hybrid computation over the WENO computation on GPUs is similar to that on CPUs, demonstrating the advantage of the hybrid scheme technique on both CPUs and GPUs.
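The multi-resolution smoothness detection attributed to Harten in the abstract above can be illustrated with a one-level sketch: a point is flagged as non-smooth (and would be handed to the shock-capturing WENO scheme) when its deviation from an interpolation off the coarse grid exceeds a tolerance. The stencil and tolerance here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def flag_discontinuities(u, tol=1e-2):
    """Flag non-smooth points via one level of multiresolution analysis.

    The detail coefficient at each odd-indexed point is its deviation from
    the average of its even-indexed (coarse-grid) neighbors; large details
    mark discontinuities, where a WENO-type scheme would be applied.
    """
    coarse = u[::2]
    interp = 0.5 * (coarse[:-1] + coarse[1:])   # 2nd-order interpolation
    detail = np.abs(u[1:-1:2] - interp)          # detail coefficients
    flags = np.zeros(len(u), dtype=bool)
    for i in np.where(detail > tol)[0]:          # flag point and neighbors
        flags[max(0, 2 * i): 2 * i + 3] = True
    return flags
```

On a smooth field the details scale like the local second derivative times the grid spacing squared, so only genuine jumps exceed a reasonable tolerance.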
Low back pain in adolescent female rowers: a multi-dimensional intervention study
Debra Perich; Angus Burnett; Peter O’Sullivan; Chris Perkin
2011-01-01
The aim of this study was to determine whether a multi-dimensional intervention programme was effective in reducing the incidence of low back pain (LBP) and the associated levels of pain and disability in schoolgirl rowers. This non-randomised controlled trial involved an intervention (INT) group consisting of 90 schoolgirl rowers from one school and a control (CTRL) group consisting of 131…
NASA Astrophysics Data System (ADS)
Anku, Sitsofe E.
1997-09-01
Using the reform documents of the National Council of Teachers of Mathematics (NCTM) (NCTM, 1989, 1991, 1995), a theory-based multi-dimensional assessment framework (the "SEA" framework) which should help expand the scope of assessment in mathematics is proposed. This framework uses a context based on mathematical reasoning and has components that comprise mathematical concepts, mathematical procedures, mathematical communication, mathematical problem solving, and mathematical disposition.
Thomas Garrity
2012-05-25
A new classification scheme for pairs of real numbers is given, generalizing earlier work of the author that used continued fractions, which in turn was motivated by ideas from statistical mechanics in general and the work of Knauf and of Fiala and Kleban in particular. Critical for this classification are the number-theoretic and geometric properties of the triangle map, a type of multi-dimensional continued fraction.
Frank S. C. Tseng; Annie Y. H. Chou
2006-01-01
During the past decade, data warehousing has been widely adopted in the business community. It provides multi-dimensional analyses of accumulated historical business data to help contemporary administrative decision-making. Nevertheless, it is believed that only about 20% of the available information can be extracted from data warehouses, since they concern numeric data only; the other 80% is hidden in non-numeric data or even in documents.
Minimizing I/O Costs of Multi-Dimensional Queries with BitmapIndices
Rotem, Doron; Stockinger, Kurt; Wu, Kesheng
2006-03-30
Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. A common approach for reducing the size of a bitmap index for high-cardinality attributes is to group ranges of values of an attribute into bins and then build a bitmap for each bin rather than for each value of the attribute. Binning reduces storage costs; however, results of queries based on bins often require additional filtering to discard false positives, i.e., records in the result that do not satisfy the query constraints. This additional filtering, also known as ''candidate checking,'' requires access to the base data on disk and involves significant I/O costs. This paper studies strategies for minimizing the I/O costs of ''candidate checking'' for multi-dimensional queries. This is done by determining the number of bins allocated for each dimension and then placing bin boundaries in optimal locations. Our algorithms use knowledge of the data distribution and query workload. We derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real-life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
Hitchhiker's guide to multi-dimensional plant pathology.
Saunders, Diane G O
2015-02-01
Filamentous pathogens pose a substantial threat to global food security. One central question in plant pathology is how pathogens cause infection and manage to evade or suppress plant immunity to promote disease. With many technological advances over the past decade, including DNA sequencing technology, an array of new tools has become embedded within the toolbox of next-generation plant pathologists. By employing a multidisciplinary approach, plant pathologists can fully leverage these technical advances to answer key questions in plant pathology, aimed at achieving global food security. This review discusses the impact of: cell biology and genetics on progressing our understanding of infection structure formation on the leaf surface; biochemical and molecular analysis to study how pathogens subdue plant immunity and manipulate plant processes through effectors; genomics and DNA sequencing technologies on all areas of plant pathology; and new forms of collaboration on accelerating exploitation of big data. As we embark on the next phase in plant pathology, the integration of systems biology promises to provide a holistic perspective of plant–pathogen interactions from big data; only once we fully appreciate these complexities can we design truly sustainable solutions to preserve our resources. PMID:25729800
NASA Astrophysics Data System (ADS)
Falissard, F.
2013-11-01
This paper addresses the extension of one-dimensional filters in two and three space dimensions. A new multi-dimensional extension is proposed for explicit and implicit generalized Shapiro filters. We introduce a definition of explicit and implicit generalized Shapiro filters that leads to very simple formulas for the analyses in two and three space dimensions. We show that many filters used for weather forecasting, high-order aerodynamic and aeroacoustic computations match the proposed definition. Consequently the new multi-dimensional extension can be easily implemented in existing solvers. The new multi-dimensional extension and the two commonly used methods are compared in terms of compactness, robustness, accuracy and computational cost. Benefits of the genuinely multi-dimensional extension are assessed for various computations using the compressible Euler equations.
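For context, the directional (dimension-by-dimension) application that the abstract counts among the "two commonly used methods" can be sketched for a 2nd-order explicit Shapiro filter. This is an illustrative sketch of the conventional tensor-product approach, not the paper's new genuinely multi-dimensional extension.

```python
# 2nd-order explicit Shapiro filter in 1D and its commonly used 2D
# extension: apply the 1D stencil along each grid direction in turn.
# Endpoint values are simply copied here (an assumption for brevity;
# production solvers use boundary-adapted stencils).
import numpy as np

def shapiro1d(u):
    """2nd-order Shapiro smoothing along axis 0."""
    v = u.copy()
    v[1:-1] = 0.25 * (u[:-2] + 2.0 * u[1:-1] + u[2:])
    return v

def shapiro2d_by_direction(u):
    """Filter along axis 0, then along axis 1 (tensor-product form)."""
    return shapiro1d(shapiro1d(u).T).T
```

The 1D stencil exactly annihilates the grid-scale (two-point) oscillation while leaving constants untouched, which is the defining property such filters are built around.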
Takhtajan's floristic regions and foliicolous lichen biogeography: a compatibility analysis
Robert Lücking
2003-01-01
Abstract: Takhtajan's floristic regions of the world, based on vascular plant distribution, were used for a comparative analysis of foliicolous lichen biogeography. Of the 35 regions distinguished by that author, 23 feature foliicolous lichens. The South-East African, Fijian, Polynesian and Hawaiian regions lack sufficient information and were excluded from further analysis. Using multi-dimensional scaling and cluster and cladistic analyses, the remaining
Incorporating scale into digital terrain analysis
NASA Astrophysics Data System (ADS)
Dragut, L. D.; Eisank, C.; Strasser, T.
2009-04-01
Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, the standard 3 × 3 window-based algorithms are used for this purpose, thus tying the scale of resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between scale domains of terrain information and soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood sizes. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. While in a standard 3 × 3 window-based analysis mean curvature was one of the most poorly correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered to 0.16 when the curvature was filtered within a 55 × 55 m neighborhood. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield.
Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, a thorough comparison between the results of different generalization techniques, on different datasets and terrain conditions, is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.
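The generalize-correlate-pick-first-peak workflow described above can be sketched as follows. The focal-mean generalization, window sizes, and synthetic attribute/response grids are illustrative assumptions, not the study's DEM or crop-yield data.

```python
# Sketch of scale selection by focal statistics: smooth a terrain
# attribute over growing neighborhoods, correlate each scale level with
# a response variable, and retain the first peak of correlation.
import numpy as np

def focal_mean(grid, w):
    """Moving-window mean with odd window size w, edge-padded."""
    p = w // 2
    g = np.pad(grid, p, mode="edge")
    out = np.zeros_like(grid, dtype=float)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            out[i, j] = g[i:i + w, j:j + w].mean()
    return out

def first_peak_scale(attr, response, windows):
    """Return (window, |r|) at the first peak of correlation."""
    rs = []
    for w in windows:
        sm = focal_mean(attr, w).ravel()
        rs.append(abs(np.corrcoef(sm, response.ravel())[0, 1]))
    for k in range(1, len(rs)):
        if rs[k] < rs[k - 1]:
            return windows[k - 1], rs[k - 1]
    return windows[-1], rs[-1]
```

Because window 1 reproduces the raw attribute, the retained scale level can only match or improve on the unfiltered correlation, mirroring the 0.01 to 0.16 improvement reported above.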
Scale Free Reduced Rank Image Analysis.
ERIC Educational Resources Information Center
Horst, Paul
In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Anusha, L. S.; Nagendra, K. N. [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bangalore 560 034 (India)
2012-02-10
The solution of the polarized radiative transfer equation with angle-dependent (AD) partial frequency redistribution (PRD) is a challenging problem. Modeling the observed, linearly polarized strong resonance lines in the solar spectrum often requires the solution of the AD line transfer problems in one-dimensional or multi-dimensional (multi-D) geometries. The purpose of this paper is to develop an understanding of the relative importance of the AD PRD effects and the multi-D transfer effects and particularly their combined influence on the line polarization. This would help in a quantitative analysis of the second solar spectrum (the linearly polarized spectrum of the Sun). We consider both non-magnetic and magnetic media. In this paper we reduce the Stokes vector transfer equation to a simpler form using a Fourier decomposition technique for multi-D media. A fast numerical method is also devised to solve the concerned multi-D transfer problem. The numerical results are presented for a two-dimensional medium with a moderate optical thickness (effectively thin) and are computed for a collisionless frequency redistribution. We show that the AD PRD effects are significant and cannot be ignored in a quantitative fine analysis of the line polarization. These effects are accentuated by the finite dimensionality of the medium (multi-D transfer). The presence of magnetic fields (Hanle effect) modifies the impact of these two effects to a considerable extent.
Petrov, G. M.; Davis, J. [Naval Research Laboratory, Plasma Physics Division, 4555 Overlook Ave. SW, Washington, DC 20375 (United States)
2011-07-15
An implicit multi-dimensional particle-in-cell (PIC) code is developed to study the interaction of ultrashort pulse lasers with matter. The algorithm is based on current density decomposition and is only marginally more complicated compared to explicit PIC codes, but it completely eliminates grid heating and possesses good energy conserving properties with relaxed time step and grid resolution. This is demonstrated in a test case study, in which high-energy protons are generated from a thin carbon foil at solid density using linear and circular polarizations. The grid heating rate is estimated to be 1-10 eV/ps.
NASA Astrophysics Data System (ADS)
Zeng, Wen; Xie, Maozhao
2006-12-01
The detailed surface reaction mechanism of methane on a rhodium catalyst was analyzed. Comparisons between numerical simulation and experiments showed a basic agreement. The combustion process of a homogeneous charge compression ignition (HCCI) engine whose piston surface has been coated with catalyst (rhodium and platinum) was numerically investigated. A multi-dimensional model with detailed chemical kinetics was built. The effects of catalytic combustion on the ignition timing, the temperature and CO concentration fields, and the HC, CO and NOx emissions of the HCCI engine were discussed. The results showed that the ignition timing of the HCCI engine was advanced and that the HC and CO emissions were decreased by the catalysis.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser (FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results, including the predicted incoherent, spontaneous emission, as tests of the shot noise algorithm's correctness.
Giant Leaps and Minimal Branes in Multi-Dimensional Flux Landscapes
Adam R. Brown; Alex Dahlen
2011-09-14
There is a standard story about decay in multi-dimensional flux landscapes: that from any state, the fastest decay is to take a small step, discharging one flux unit at a time; that fluxes with the same coupling constant are interchangeable; and that states with N units of a given flux have the same decay rate as those with -N. We show that this standard story is false. The fastest decay is a giant leap that discharges many different fluxes in unison; this decay is mediated by a 'minimal' brane that wraps the internal manifold and exhibits behavior not visible in the effective theory. We discuss the implications for the cosmological constant.
An optimization approach to multi-dimensional time domain acoustic inverse problems.
Gustafsson, M; He, S
2000-10-01
An optimization approach to a multi-dimensional acoustic inverse problem in the time domain is considered. The density and/or the sound speed are reconstructed by minimizing an objective functional. By introducing dual functions and using the Gauss divergence theorem, the gradient of the objective functional is found as an explicit expression. The parameters are then reconstructed by an iterative algorithm (the conjugate gradient method). The reconstruction algorithm is tested with noisy data, and these tests indicate that the algorithm is stable and robust. The computation time for the reconstruction is greatly improved when the analytic gradient is used. PMID:11051483
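The abstract's central computational point, that supplying an explicit analytic gradient makes conjugate-gradient reconstruction far cheaper than finite-difference gradients, can be illustrated on a toy problem. The quadratic objective below merely stands in for the acoustic misfit functional; it is an assumption for illustration, not the paper's dual-function formulation.

```python
# Toy illustration: minimize an objective with its exact (analytic)
# gradient supplied to a conjugate-gradient optimizer, rather than
# approximating the gradient by finite differences.
import numpy as np
from scipy.optimize import minimize

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
true_p = np.linalg.solve(A, b)            # known minimizer, for checking

objective = lambda p: 0.5 * p @ A @ p - b @ p
gradient = lambda p: A @ p - b            # explicit analytic gradient

res = minimize(objective, x0=np.zeros(2), jac=gradient, method="CG")
```

With `jac` supplied, each CG iteration costs one gradient evaluation instead of one extra objective evaluation per parameter, which is the computation-time improvement the abstract reports.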
Toward a Minimum Criterion of Multi-Dimensional Instanton Formation for Condensed Matter Systems?
A. W. Beckwith
2008-02-05
Our paper generalizes techniques initially developed explicitly for CDW applications with respect to what is needed for multi-dimensional instantons forming in complex condensed matter applications. This involves necessary conditions for the formation of a soliton-antisoliton pair, assuming a minimum distance between charge centers, and discusses the prior density-wave physics example as to why a Peierls gap term is added to the tilted washboard potential to ensure the formation of scalar potential fields. We state that much the same methodology is needed for higher-dimensional condensed matter systems, giving an explicit reference to two-dimensional instantons as presented by Lu, and indicating that further development in higher dimensions is warranted.
Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F
2008-12-01
Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multi-dimensional displacements/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz.
While our validation of the algorithm was performed using ultrasound data, MUSE is broadly applicable across imaging applications. PMID:18807190
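The spline-based sub-sample delay idea behind MUSE can be sketched in 1D: fit a cubic spline to the reference, then find the fractional shift that best matches the delayed samples. The synthetic sinusoid, the true delay, and the numerical minimization below are illustrative assumptions; the actual estimator minimizes its matching function analytically and extends to 2D/3D displacements.

```python
# 1D sketch of spline-based sub-sample time delay estimation: minimize a
# sum-of-squares matching function between a spline model of the
# reference and a delayed copy of it.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def estimate_delay(ref, delayed, max_shift=2.0):
    t = np.arange(len(ref), dtype=float)
    spline = CubicSpline(t, ref)
    inner = np.arange(4, len(ref) - 4)    # stay clear of the edges
    # delayed[n] ~ s(n - d), so shift the spline model and compare
    cost = lambda d: np.sum((spline(inner - d) - delayed[inner]) ** 2)
    return minimize_scalar(cost, bounds=(-max_shift, max_shift),
                           method="bounded").x
```

Because the spline is continuous, the recovered delay is not restricted to integer sample positions, which is what makes sub-sample (nanometer-scale, in the abstract's ultrasound units) estimates possible.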
Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi
1996-01-01
An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
Hu, Shao-Ying; Feng, Liang; Zhang, Ming-Hua; Gu, Jun-Fei; Jia, Xiao-Bin
2013-12-01
As the preparation process from Salvia miltiorrhiza herbs to S. miltiorrhiza injection involves complicated technology and has relatively many factors impacting quality safety, overall quality control is required for its effectiveness and safety. On the basis of the component structure theory, and according to the material basis of S. miltiorrhiza injection, we discussed the multi-dimensional structure and process dynamic quality control technology system of the preparation, in order to achieve quality control over the material basis with safety and effectiveness of S. miltiorrhiza injection, and provide new ideas and methods for production quality standardization of S. miltiorrhiza injection. PMID:24791548
2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2001-01-01
A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigen-value limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1-17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.
Computational multi-dimensional imaging based on compound-eye optics
NASA Astrophysics Data System (ADS)
Horisaki, Ryoichi; Nakamura, Tomoya; Tanida, Jun
2014-11-01
Artificial compound-eye optics have been used for three-dimensional information acquisition and display. They also enable us to realize a diversity of coded imaging processes in each elemental optic. In this talk, we introduce our single-shot compound-eye imaging system for observing multi-dimensional information, including depth, spectrum, and polarization, based on compressive sensing. Furthermore, the system is applicable to increasing the dynamic range and field of view. We also demonstrate extended depth-of-field (DOF) cameras based on compound-eye optics. These extended-DOF cameras physically or computationally implement phase modulations to increase the focusing range.
Structural diversity: a multi-dimensional approach to assess recreational services in urban parks.
Voigt, Annette; Kabisch, Nadja; Wurster, Daniel; Haase, Dagmar; Breuste, Jürgen
2014-05-01
Urban green spaces provide important recreational services for urban residents. In general, when park visitors enjoy "the green," they are in actuality appreciating a mix of biotic, abiotic, and man-made park infrastructure elements and qualities. We argue that these three dimensions of structural diversity have an influence on how people use and value urban parks. We present a straightforward approach for assessing urban parks that combines multi-dimensional landscape mapping and questionnaire surveys. We discuss the method as well as the results from its application to differently sized parks in Berlin and Salzburg. PMID:24740619
A scaling analysis of ozone photochemistry
NASA Astrophysics Data System (ADS)
Ainslie, B.; Steyn, D. G.
2006-09-01
A scaling analysis has been used to capture the integrated behaviour of several photochemical mechanisms for a wide range of precursor concentrations and a variety of environmental conditions. The Buckingham Pi method of dimensional analysis was used to express the relevant variables in terms of dimensionless groups. These groupings show that maximum ozone, initial NOx and initial VOC concentrations are made non-dimensional by the average NO2 photolysis rate (jav) and the rate constant for the NO-O3 titration reaction (kNO); temperature by the NO-O3 activation energy (ENO) and the Boltzmann constant (k); and total irradiation time by the cumulative jav·Δt photolysis rate. The analysis shows that dimensionless maximum ozone concentration can be described by a product of powers of dimensionless initial NOx concentration, dimensionless temperature, and a similarity curve directly dependent on the ratio of initial VOC to NOx concentration and implicitly dependent on the cumulative NO2 photolysis rate. When Weibull transformed, the similarity relationship shows a scaling break, with dimensionless model output clustering onto two straight line segments, parameterized using four variables: two describing the slopes of the line segments and two giving the location of their intersection. A fifth parameter is used to normalize the model output. The scaling analysis, similarity curve and parameterization appear to be independent of the details of the chemical mechanism, hold for a variety of VOC species and mixtures, and span a wide range of temperatures and actinic fluxes.
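The construction of the dimensionless groups named above can be illustrated numerically. All values below are made-up assumptions chosen only to show how the scalings combine; they are not the study's fitted parameters.

```python
# Illustrative non-dimensionalization: concentrations are scaled by
# j_av / k_NO (the ratio of the NO2 photolysis rate to the NO-O3 rate
# constant, which has concentration units) and temperature by E_NO / k.
j_av = 8.0e-3      # s^-1, average NO2 photolysis rate (assumed)
k_NO = 4.0e-4      # ppb^-1 s^-1, NO-O3 titration rate constant (assumed)
E_over_k = 1400.0  # K, activation temperature E_NO / k (assumed)

O3_max, NOx0, T = 120.0, 40.0, 298.0   # ppb, ppb, K (assumed)

O3_star  = O3_max * k_NO / j_av        # dimensionless maximum ozone
NOx_star = NOx0 * k_NO / j_av          # dimensionless initial NOx
T_star   = T / E_over_k               # dimensionless temperature
```

Dividing a concentration by j_av/k_NO and a temperature by E_NO/k leaves pure numbers, so model runs under different photolysis rates and temperatures can collapse onto the single similarity curve the abstract describes.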
Barth, Jens; Oberndorfer, Cäcilia; Pasluosta, Cristian; Schülein, Samuel; Gassner, Heiko; Reinfelder, Samuel; Kugler, Patrick; Schuldhaus, Dominik; Winkler, Jürgen; Klucken, Jochen; Eskofier, Björn M.
2015-01-01
Changes in gait patterns provide important information about individuals’ health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson’s disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measures of 98% (recall 98%, precision 98%) for the 40 m walk tests and of 97% (recall 97%, precision 97%) for the free walk tests were obtained for the three groups. Compared to conventional peak detection methods, up to 15% F-measure improvement was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living. PMID:25789489
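The subsequence-DTW template matching at the heart of this approach can be sketched in 1D. The toy template and signal below are assumptions for illustration; the study's estimator is multi-dimensional and works on gyroscope/accelerometer channels.

```python
# Minimal subsequence DTW: align a short template anywhere inside a
# longer signal. Initializing the first row to zero lets the match start
# at any signal position; taking the minimum over the last row lets it
# end anywhere.
import numpy as np

def subsequence_dtw(template, signal):
    """Return (best_distance, end_index) of the template's best match."""
    n, m = len(template), len(signal)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0                         # free start anywhere in signal
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(template[i - 1] - signal[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1    # free end anywhere in signal
    return float(D[n, end]), end - 1      # 0-based end index into signal
```

Because DTW warps the time axis, a single template can match strides of varying duration, which is what makes the method "time-invariant" compared to fixed-lag peak detection.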
A multi-dimensional scale for repositioning public park and recreation services
Kaczynski, Andrew Thomas
2004-09-30
III METHODOLOGY: Item Generation and Initial Content Validity Check; Pretest of Instrument; Instrument Validation
Differentially Private Synthesization of Multi-Dimensional Data using Copula Functions
Li, Haoran; Xiong, Li; Jiang, Xiaoqian
2014-01-01
Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques. PMID:25405241
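The copula-based synthesis pipeline, fit one-dimensional marginals, couple them through a Gaussian copula, perturb the dependence parameters, then sample, can be sketched as follows. This is a simplified illustration only: the Laplace-noise placement and scale are assumptions and do not reproduce DPCopula's calibrated privacy mechanism.

```python
# Simplified copula-based synthesis sketch: empirical marginals ->
# normal scores -> (noised) correlation matrix -> sample Gaussian copula
# -> invert marginals via empirical quantiles.
import numpy as np
from scipy import stats

def synthesize(data, n_out, eps=1.0, rng=None):
    rng = np.random.default_rng(rng)
    n, d = data.shape
    # ranks give empirical marginal CDF values in (0, 1)
    u = (np.argsort(np.argsort(data, axis=0), axis=0) + 0.5) / n
    z = stats.norm.ppf(u)                     # normal scores
    corr = np.corrcoef(z, rowvar=False)
    corr += rng.laplace(0.0, 1.0 / (eps * n), size=(d, d))  # toy noise
    corr = (corr + corr.T) / 2
    np.fill_diagonal(corr, 1.0)
    zs = rng.multivariate_normal(np.zeros(d), corr, size=n_out)
    us = stats.norm.cdf(zs)
    # invert each marginal with empirical quantiles of the real column
    return np.column_stack([np.quantile(data[:, j], us[:, j])
                            for j in range(d)])
```

The key property is that only the d-by-d dependence structure and the one-dimensional marginals are estimated, which is why copula methods sidestep the perturbation error that high-dimensional histograms suffer.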
Multi-Dimensional Hydrodynamic Simulations with Non-Equilibrium Radiative Cooling Calculations
NASA Astrophysics Data System (ADS)
Kwak, Kyujin
2015-01-01
In the optically thin gas within the temperature range of 10^4 to a few times 10^6 K, radiative cooling due to line emission from abundant metal ions such as carbon, nitrogen, oxygen, neon, silicon, and iron can affect the gas dynamics, so it becomes important to calculate the cooling rates accurately while running the hydrodynamic simulations. The accurate calculation should trace together the detailed processes of ionization and recombination for all the relevant ions of each metal at each hydrodynamic time step, i.e., in a non-equilibrium fashion. So far, due to the computational cost, implementation of this non-equilibrium cooling calculation in multi-dimensional hydrodynamic simulations has been delayed, but it is now possible thanks to rapidly growing computing power. Using the platform of the FLASH code, we have implemented the non-equilibrium radiative cooling calculation in multi-dimensional hydrodynamic simulations. Here we present the code development process and the results of some test problems.
Multi-dimensional NMR without coherence transfer: Minimizing losses in large systems
NASA Astrophysics Data System (ADS)
Liu, Yizhou; Prestegard, James H.
2011-10-01
Most multi-dimensional solution NMR experiments connect one dimension to another using coherence transfer steps that involve evolution under scalar couplings. While experiments of this type have been a boon to biomolecular NMR, the need to work on ever larger systems pushes the limits of these procedures. Spin relaxation during transfer periods for even the most efficient 15N-1H HSQC experiments can result in more than an order of magnitude loss in sensitivity for molecules in the 100 kDa range. A relatively unexploited approach to preventing signal loss is to avoid coherence transfer steps entirely. Here we describe a scheme for multi-dimensional NMR spectroscopy that relies on direct frequency encoding of a second dimension by multi-frequency decoupling during acquisition, a technique that we call MD-DIRECT. A substantial improvement in sensitivity of 15N-1H correlation spectra is illustrated with application to the 21 kDa ADP ribosylation factor (ARF) labeled with 15N in all alanine residues. Operation at 4 °C mimics observation of a 50 kDa protein at 35 °C.
Large-Scale Visual Data Analysis
NASA Astrophysics Data System (ADS)
Johnson, Chris
2014-04-01
Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.
Singh, Brajesh K.; Srivastava, Vineet K.
2015-01-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639
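The transform-method recurrence that such series approaches produce can be illustrated symbolically. Assuming (for illustration) the standard FRDTM recurrence for the time-fractional diffusion equation D_t^α u = u_xx with u = Σ_k U_k(x) t^(kα), the coefficients obey U_{k+1} = Γ(kα+1)/Γ((k+1)α+1) · U_k''; for α = 1 and u(x,0) = sin(x) the series rebuilds sin(x)·e^(−t).

```python
# Symbolic sketch of the FRDTM-style recurrence for D_t^alpha u = u_xx:
# build the truncated series sum_k U_k(x) * t**(k*alpha) from the
# initial condition U_0(x) = u(x, 0).
import sympy as sp

x, t = sp.symbols("x t")

def frdtm_series(u0, alpha, n_terms):
    U, series = u0, u0
    for k in range(n_terms - 1):
        # recurrence: U_{k+1} = Gamma(k*a+1)/Gamma((k+1)*a+1) * U_k''
        U = sp.gamma(k * alpha + 1) / sp.gamma((k + 1) * alpha + 1) \
            * sp.diff(U, x, 2)
        series += sp.simplify(U) * t ** ((k + 1) * alpha)
    return sp.expand(series)
```

With α = 1 the gamma ratios reduce to 1/(k+1)!, recovering the classical Taylor series of the ordinary heat-equation solution, which is the sanity check the method's test problems rely on.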
E. van der Weide; H. Deconinck; E. Issman; G. Degrez
1999-01-01
A multi-dimensional cell-vertex upwind discretization technique for the Navier-Stokes equations on unstructured grids is presented. The grids are composed of linear triangles in two and linear tetrahedra in three space dimensions. The nonlinear upwind schemes for the inviscid part can be viewed as a multi-dimensional generalization of the Roe scheme, but also as a special class of Petrov-Galerkin schemes. They share
Akhter, T.; Hossain, M. M.; Mamun, A. A. [Department of Physics, Jahangirnagar University, Savar, Dhaka-1342 (Bangladesh)
2012-09-15
Dust-acoustic (DA) solitary structures and their multi-dimensional instability in a magnetized dusty plasma (containing inertial negatively and positively charged dust particles, and Boltzmann electrons and ions) have been theoretically investigated by the reductive perturbation method, and the small-k perturbation expansion technique. It has been found that the basic features (polarity, speed, height, thickness, etc.) of such DA solitary structures, and their multi-dimensional instability criterion or growth rate are significantly modified by the presence of opposite polarity dust particles and external magnetic field. The implications of our results in space and laboratory dusty plasma systems have been briefly discussed.
Large-Scale Parametric Survival Analysis†
Mittal, Sushil; Madigan, David; Cheng, Jerry; Burd, Randall S.
2013-01-01
Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only small numbers of predictors and a few hundred or thousand observations. Recent advances in data acquisition techniques and computational power have led to considerable interest in analyzing very high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that applying regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models. PMID:23625862
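The cycling strategy the abstract refers to can be sketched on the simpler squared-error/lasso objective (an illustrative stand-in; the paper applies the same coordinate-wise updates to regularized parametric survival likelihoods, which I do not reproduce here):

```python
def soft_threshold(z, lam):
    """Soft-thresholding: the closed-form coordinate minimizer under an L1 penalty."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for min_b 0.5*||y - X b||^2 + lam*||b||_1.
    One coordinate is minimized exactly per step while the others are held fixed."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # correlation of feature j with the partial residual (excluding j)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            norm_j = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / norm_j
    return beta

# toy data: y depends only on the first feature; lasso zeroes out the other
X = [[1, 0], [2, 0], [3, 0], [0, 1], [0, -1]]
y = [2, 4, 6, 0, 0]
beta = lasso_cd(X, y, lam=0.1)
```

Each coordinate update touches only one column at a time, which is why the approach scales to the 10^4 to 10^6 predictor regime the abstract describes.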
Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae
Tanaka, Masaomi; Maeda, Keiichi; Mazzali, Paolo A.; Nomoto, Ken'ichi
2008-05-21
We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (≲40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10^51 ergs. Properties of off-axis hypernovae are also discussed briefly.
NASA Astrophysics Data System (ADS)
Nardin, Gaël; Autry, Travis M.; Moody, Galan; Singh, Rohan; Li, Hebin; Cundiff, Steven T.
2015-03-01
We review our recent work on multi-dimensional coherent optical spectroscopy (MDCS) of semiconductor nanostructures. Two approaches, appropriate for the study of semiconductor materials, are presented and compared. A first method is based on a non-collinear geometry, where the Four-Wave-Mixing (FWM) signal is detected in the form of a radiated optical field. This approach works for samples with translational symmetry, such as Quantum Wells (QWs) or large and dense ensembles of Quantum Dots (QDs). A second method detects the FWM in the form of a photocurrent in a collinear geometry. This second approach extends the horizon of MDCS to sub-diffraction nanostructures, such as single QDs, nanowires, or nanotubes, and small ensembles thereof. Examples of experimental results obtained on semiconductor QW structures are given for each method. In particular, it is shown how MDCS can assess coupling between excitons confined in separated QWs.
Multi-dimensional fiber-optic radiation sensor for ocular proton therapy dosimetry
NASA Astrophysics Data System (ADS)
Jang, K. W.; Yoo, W. J.; Moon, J.; Han, K. T.; Park, B. G.; Shin, D.; Park, S.-Y.; Lee, B.
2012-12-01
In this study, we fabricated a multi-dimensional fiber-optic radiation sensor, consisting of organic scintillators, plastic optical fibers and a water phantom with a polymethyl methacrylate structure, for ocular proton therapy dosimetry. To characterize the sensor, we measured the spread-out Bragg peak of a 120 MeV proton beam using a one-dimensional sensor array of 30 fiber-optic radiation sensors at a 1.5 mm interval. Using this one-dimensional array, a uniform region of the spread-out Bragg peak was obtained from 20 to 25 mm depth in the phantom. In addition, the Bragg peak of a 109 MeV proton beam was measured at a depth of 11.5 mm in the phantom using a two-dimensional 10×3 sensor array with a 0.5 mm interval.
Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip
2013-01-29
A computer-implemented method and system for routing data packets in a multi-dimensional computer network. The method routes a data packet among nodes along one dimension towards a root node, each node having input and output communication links, the root node having no outgoing uplinks. At each node, the method determines whether the data packet has reached the predefined coordinate for the dimension, or an edge of the subrectangle for the dimension. If it has, the method determines whether the data packet has reached the root node; if the packet has not reached the root node, it is routed among nodes along another dimension towards the root node.
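The dimension-by-dimension idea in the claim can be sketched as follows (an illustrative simplification on an integer grid, not the patented implementation; coordinate names are mine):

```python
def route_to_root(start, root):
    """Dimension-ordered routing sketch: move along dimension 0 until the packet
    reaches the root's coordinate for that dimension, then switch to the next
    dimension, until the root node is reached. Returns the hop-by-hop path."""
    path = [tuple(start)]
    cur = list(start)
    for d in range(len(start)):
        step = 1 if root[d] > cur[d] else -1
        while cur[d] != root[d]:
            cur[d] += step
            path.append(tuple(cur))
    return path

# a packet at (3, 2) routed towards a root at (0, 0)
path = route_to_root((3, 2), (0, 0))
```

Exhausting one dimension before turning onto the next is what keeps the path deterministic and deadlock-free in this family of schemes.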
Evaluation of multi-dimensional flux models for radiative transfer in combustion chambers: A review
NASA Astrophysics Data System (ADS)
Selcuk, N.
1984-01-01
In recent years, flux methods have been widely employed as alternative, albeit intrinsically less accurate, procedures to the zone or Monte Carlo methods in complete prediction procedures. Flux models of radiation fields take the form of partial differential equations, which can conveniently and economically be solved simultaneously with the equations representing flow and reaction. The flux models are usually tested and evaluated for predictive accuracy by comparing their predictions with "exact" values produced using the zone or Monte Carlo models. Evaluations of various multi-dimensional flux-type models, such as the De Marco and Lockwood, discrete-ordinate, Schuster-Schwarzschild and moment models, are reviewed from the points of view of both accuracy and computational economy. A six-flux model of Schuster-Schwarzschild type, with angular subdivisions related to the enclosure geometry, is recommended for incorporation into existing procedures for complete mathematical modelling of rectangular combustion chambers.
Giant Leaps and Monkey Branes in Multi-Dimensional Flux Landscapes
Brown, Adam R
2010-01-01
There is a standard story about decay in multi-dimensional flux landscapes: that from any state, the fastest decay is to take a small step, discharging one flux unit at a time; that fluxes with the same coupling constant are interchangeable; and that states with N units of a given flux have the same decay rate as those with -N. We show that this standard story is false. The fastest decay is a giant leap that discharges many different fluxes in unison; this decay is mediated by a 'monkey brane' that wraps the internal manifold and exhibits behavior not visible in the effective theory. The implications for the Bousso-Polchinski landscape are discussed.
Multi-dimensional explicit solutions of the diffusion-absorption equation
NASA Astrophysics Data System (ADS)
Khedr, Waleed S.
2015-03-01
We consider the diffusion-absorption equation u_t = Δ(u^m) - a u^q, where a, q > 0, m > 1 and m + q = 2. In the case a = 0 the equation reduces to the Porous Medium Equation, for which Barenblatt provided an explicit formula for a class of self-similar solutions. In the presence of the lower-order term (a ≠ 0), explicit solutions of the equation were deduced by Kersner for the one-dimensional case. In this article we construct multi-dimensional weak solutions of the diffusion-absorption equation, which can be considered a generalization of Kersner's solutions. The result can also be considered for the case of diffusion-reaction equations (a < 0). At the request of the author, due to an overlap of content with previously published material, this article was retracted from the scientific record with effect from 24 April 2015.
Multi-dimensional instability of multi-ion acoustic solitary waves in a degenerate magnetized plasma
NASA Astrophysics Data System (ADS)
Akter, S.; Haider, M. M.; Duha, S. S.; Salahuddin, M.; Mamun, A. A.
2013-07-01
The multi-dimensional instability of obliquely propagating multi-ion acoustic (MIA) solitary structures was studied theoretically by the small-k (long-wavelength plane wave) perturbation expansion technique in an ultra-relativistic degenerate magnetized plasma consisting of inertialess electrons, inertial ions and stationary arbitrarily charged heavy ions. The Zakharov-Kuznetsov equation is derived by the reductive perturbation method and its solitary wave solution is analyzed. The basic properties of small- but finite-amplitude MIA solitary waves are modified significantly by the combined effects of the degenerate electron number density, heavy ion number density, external magnetic field and obliqueness. The underlying physics of the MIA solitary waves, which are relevant to space plasma situations, and their basic features, such as amplitude, width and growth rate, are briefly discussed.
Multi-Dimensional, Non-Contact Metrology using Trilateration and High Resolution FMCW Ladar
Mateo, Ana Baselga
2015-01-01
Here we propose, describe, and provide experimental proof-of-concept demonstrations of a multi-dimensional, non-contact length metrology system design based on high resolution (millimeter to sub-100 micron) frequency modulated continuous wave (FMCW) ladar and trilateration based on length measurements from multiple, optical fiber-connected transmitters. With an accurate FMCW ladar source, the trilateration based design provides 3D resolution inherently independent of stand-off range and allows self-calibration to provide flexible setup of a field system. A proof-of-concept experimental demonstration was performed using a highly-stabilized, 2 THz bandwidth chirped laser source, two emitters, and one scanning emitter/receiver providing 1D surface profiles (2D metrology) of diffuse targets. The measured coordinate precision of < 200 microns was determined to be limited by laser speckle issues caused by diffuse scattering of the targets.
Ionizing shocks in argon. Part II: Transient and multi-dimensional effects
Kapper, M. G.; Cambier, J.-L.
2011-06-01
We extend the computations of ionizing shocks in argon to unsteady and multi-dimensional cases, using a collisional-radiative model and a single-fluid, two-temperature formulation of the conservation equations. It is shown that the fluctuations of the shock structure observed in shock-tube experiments can be reproduced by the numerical simulations and explained on the basis of the coupling of the nonlinear kinetics of the collisional-radiative model with wave propagation within the induction zone. The mechanism is analogous to instabilities of detonation waves and also produces a cellular structure commonly observed in gaseous detonations. We suggest that detailed simulations of such unsteady phenomena can yield further information for the validation of nonequilibrium kinetics.
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
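For orientation, the first-order monotone Lax-Friedrichs discretization that Godunov-type central schemes build upon can be sketched for a 1D Hamilton-Jacobi equation u_t + H(u_x) = 0 (a baseline of my own construction; the paper's CWENO reconstructions achieve third and fifth order, which this sketch does not attempt):

```python
from math import sin, pi

def lf_hj_step(u, dx, dt, H, a):
    """One first-order Lax-Friedrichs step for u_t + H(u_x) = 0 on a periodic
    grid: Hamiltonian evaluated at the average one-sided slope, plus numerical
    viscosity proportional to a >= max|H'| to enforce monotonicity."""
    n = len(u)
    out = [0.0] * n
    for i in range(n):
        up, um = u[(i + 1) % n], u[(i - 1) % n]
        p_central = (up - um) / (2 * dx)
        visc = (a / 2) * (up - 2 * u[i] + um) / dx
        out[i] = u[i] - dt * (H(p_central) - visc)
    return out

# Linear Hamiltonian H(p) = p: the exact solution is the shifted profile u0(x - t).
n = 400
dx = 2 * pi / n
dt = dx / 2
steps = 64
u = [sin(i * dx) for i in range(n)]
for _ in range(steps):
    u = lf_hj_step(u, dx, dt, lambda p: p, a=1.0)
t = steps * dt
err = max(abs(u[i] - sin(i * dx - t)) for i in range(n))
```

For the linear Hamiltonian this update reduces to upwinding, so the error stays small but only first order; higher-order central WENO reconstruction of the one-sided slopes is what lifts the accuracy in the schemes the abstract describes.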
Gattol, Valentin; Sääksjärvi, Maria; Carbon, Claus-Christian
2011-01-01
Background The authors present a procedural extension of the popular Implicit Association Test (IAT; [1]) that allows for indirect measurement of attitudes on multiple dimensions (e.g., safe-unsafe; young-old; innovative-conventional) rather than on a single evaluative dimension only (e.g., good-bad). Methodology/Principal Findings In two within-subjects studies, attitudes toward three automobile brands were measured on six attribute dimensions. Emphasis was placed on evaluating the methodological appropriateness of the new procedure, providing strong evidence for its reliability, validity, and sensitivity. Conclusions/Significance This new procedure yields detailed information on the multifaceted nature of brand associations that can add up to a more abstract overall attitude. Like the IAT, its multi-dimensional extension (dubbed md-IAT) is suited for reliably measuring attitudes consumers may not be consciously aware of, able to express, or willing to share with the researcher [2], [3]. PMID:21246037
Scaling analysis of negative differential thermal resistance
NASA Astrophysics Data System (ADS)
Chan, Ho-Kei; He, Dahai; Hu, Bambi
2014-05-01
Negative differential thermal resistance (NDTR) can be generated for any one-dimensional heat flow with a temperature-dependent thermal conductivity. In a system-independent scaling analysis, the general condition for the occurrence of NDTR is found to be an inequality involving three scaling exponents: n1·n2 < -(1 + n3), where n1 ∈ (-∞, +∞) describes a particular way of varying the temperature difference, and n2 and n3 describe, respectively, the dependence of the thermal conductivity on the average temperature and on the temperature difference. For cases with a temperature-dependent thermal conductivity, i.e. n2 ≠ 0, NDTR can always be generated with a suitable choice of n1 such that this inequality is satisfied. The results explain the apparent absence of an NDTR regime in certain lattices and predict new ways of generating NDTR, and these predictions have been verified numerically. The analysis provides insights for the design of thermal devices and for the manipulation of heat flow in experimental systems such as nanotubes.
Two-dimensional Core-collapse Supernova Models with Multi-dimensional Transport
NASA Astrophysics Data System (ADS)
Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun
2015-02-01
We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant {O}(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate {O}(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.
ERIC Educational Resources Information Center
Basantia, Tapan Kumar; Panda, B. N.; Sahoo, Dukhabandhu
2012-01-01
Cognitive development of learners is a prime task at every stage of school education, and its importance at the elementary stage is especially worth noting. The present study investigated the effectiveness of a new and innovative strategy (i.e., MAI (multi-dimensional activity based integrated approach)) for the development of…
Ya-Ping Ma; Fu-Quan Deng; Dai-Zhou Chen; Shou-Wei Sun
1995-01-01
A specific, sensitive capillary multi-dimensional gas chromatographic method with thermionic specific detection (TSD), combined with internal standard methodology, to identify ethyl carbamate (EC), a well-known carcinogen, in various fermented alcoholic beverages is described. The basic procedures for sample preparation were similar to a modification of the LCBO (Liquor Control Board of Ontario) procedure, except that isopropyl carbamate (i-PC) was
Shneiderman, Ben
After an historical review of evaluation methods, we describe an emerging research method called Multi-dimensional In-depth Long-term Case studies (MILCs), which seems well adapted to study the creative … ethnography methods used in HCI, and provide guidelines for conducting MILCs for information visualization.
Lushnikov, Pavel
Vlasov multi-dimensional model dispersion relation. Pavel M. Lushnikov, Harvey A. Rose, Denis A. Silantyev, and Natalia Vladimirova (published online 2 July 2014). A hybrid model of the Vlasov equation in multiple spatial dimensions D > 1 [H. A. Rose
Zhao, Yuan; Zeng, Wencong; Tao, Zhuchen; Xiong, Penghui; Qu, Yan; Zhu, Yanwu
2015-01-18
We report an efficient surface-enhanced Raman scattering (SERS) substrate utilizing the multi-dimensional plasmonic coupling in Au nanoparticle (NP)-graphene-Ag NP hybrid structures. Ultrasensitive SERS detection with a limit down to 10^-13 M has been achieved when the sandwiched hybrid film is fabricated on an Ag substrate. PMID:25429404
Sadjadi, S. Masoud
Realizing Multi-Dimensional Software Adaptation. P. K. McKinley, E. P. Kasten, S. M. Sadjadi, and Z. … Adaptability can be implemented in different parts of the system. One popular approach introduces a layer [… 24, 43]. An appropriate middleware platform can help to insulate application components from platform
Deng, Weiran; Yang, Cungeng; Stenger, V. Andrew
2010-01-01
Multi-dimensional RF pulses are of current interest due to their promise for improving high field imaging as well as for optimizing parallel transmission methods. One major drawback is that the computation time of numerically designed multi-dimensional RF pulses increases rapidly with their resolution and number of transmitters. This is critical because the construction of multi-dimensional RF pulses often needs to be in real time. The use of graphics processing units for computations is a recent approach for accelerating image reconstruction applications. We propose the use of graphics processing units for the design of multi-dimensional RF pulses including the utilization of parallel transmitters. Using a desktop computer with four NVIDIA Tesla C1060 computing processors, we found acceleration factors on the order of twenty for standard eight-transmitter 2D spiral RF pulses with a 64 × 64 excitation resolution and a ten-microsecond dwell time. We also show that even greater acceleration factors can be achieved for more complex RF pulses. PMID:21264929
NASA Astrophysics Data System (ADS)
Kuznetsova, T. F.; Eremenko, S. I.
2015-07-01
Samples of multi-dimensional nanoporous aluminosilicate with the composition (25% Al2O3-75% SiO2) are synthesized using the template effect of supramolecular cetylpyridinium chloride. The samples are studied by means of low-temperature nitrogen static adsorption-desorption, X-ray diffraction, scanning electron microscopy, and FT-IR spectroscopy. Changes in the specific surface area, volume, and DFT distribution of mesopores are shown to depend on the template concentration, annealing temperature, and sample training temperature prior to analysis. When using 5.0% of the template, we observe the formation of an aluminosilicate mesophase with a three-dimensional MCM-48 cubic pore system, homogeneous mesoporosity, and the excellent textural characteristics typical of a well-organized cellular structure.
Duncan, R. R.; Bergmann, A.; Cousin, M. A.; Apps, D. K.; Shipston, M. J.
2007-01-01
Summary We present a novel, multi-dimensional, time-correlated single photon counting (TCSPC) technique to perform fluorescence lifetime imaging with a laser-scanning microscope operated at a pixel dwell-time in the microsecond range. The unsurpassed temporal accuracy of this approach combined with a high detection efficiency was applied to measure the fluorescent lifetimes of enhanced cyan fluorescent protein (ECFP) in isolation and in tandem with EYFP (enhanced yellow fluorescent protein). This technique enables multi-exponential decay analysis in a scanning microscope with high intrinsic time resolution, accuracy and counting efficiency, particularly at the low excitation levels required to maintain cell viability and avoid photobleaching. Using a construct encoding the two fluorescent proteins separated by a fixed-distance amino acid spacer, we were able to measure the fluorescence resonance energy transfer (FRET) efficiency determined by the interchromophore distance. These data revealed that ECFP exhibits complex exponential fluorescence decays under both FRET and non-FRET conditions, as previously reported. Two approaches to calculate the distance between donor and acceptor from the lifetime delivered values within a 10% error range. To confirm that this method can be used also to quantify intermolecular FRET, we labelled cultured neurones with the styryl dye FM1-43, quantified the fluorescence lifetime, then quenched its fluorescence using FM4-64, an efficient energy acceptor for FM1-43 emission. These experiments confirmed directly for the first time that FRET occurs between these two chromophores, characterized the lifetimes of these probes, determined the interchromophore distance in the plasma membrane and provided high-resolution two-dimensional images of lifetime distributions in living neurones. PMID:15230870
NASA Astrophysics Data System (ADS)
Ono, Junichi; Ando, Koji
2012-11-01
A semiquantal (SQ) molecular dynamics (MD) simulation method based on an extended Hamiltonian formulation has been developed using multi-dimensional thawed Gaussian wave packets (WPs), and applied to an analysis of hydrogen-bond (H-bond) dynamics in liquid water. A set of Hamilton's equations of motion in an extended phase space, which includes variance-covariance matrix elements as auxiliary coordinates representing anisotropic delocalization of the WPs, is derived from the time-dependent variational principle. The present theory allows us to perform real-time and real-space SQMD simulations and analyze nuclear quantum effects on dynamics in large molecular systems in terms of anisotropic fluctuations of the WPs. Introducing the Liouville operator formalism in the extended phase space, we have also developed an explicit symplectic algorithm for the numerical integration, which can provide greater stability in the long-time SQMD simulations. The application of the present theory to H-bond dynamics in liquid water is carried out under a single-particle approximation in which the variance-covariance matrix and the corresponding canonically conjugate matrix are reduced to block-diagonal structures by neglecting the interparticle correlations. As a result, it is found that the anisotropy of the WPs is indispensable for reproducing the disordered H-bond network compared to the classical counterpart with the use of the potential model providing competing quantum effects between intra- and intermolecular zero-point fluctuations. In addition, the significant WP delocalization along the out-of-plane direction of the jumping hydrogen atom associated with the concerted breaking and forming of H-bonds has been detected in the H-bond exchange mechanism. The relevance of the dynamical WP broadening to the relaxation of H-bond number fluctuations has also been discussed. 
The present SQ method provides a novel framework for investigating nuclear quantum dynamics in many-body molecular systems in which the local anisotropic fluctuations of nuclear WPs play an essential role.
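The value of an explicit symplectic integrator, as used above for the extended phase space, is bounded long-time energy drift. A generic velocity-Verlet sketch for ordinary 1D Hamilton's equations illustrates the property (a standard stand-in of my own, not the authors' extended-phase-space algorithm):

```python
def velocity_verlet(q, p, force, dt, n_steps, m=1.0):
    """Explicit symplectic (velocity-Verlet) integration of Hamilton's equations
    for H = p^2/(2m) + V(q); `force` returns -dV/dq."""
    traj = [(q, p)]
    f = force(q)
    for _ in range(n_steps):
        p += 0.5 * dt * f       # half kick
        q += dt * p / m         # drift
        f = force(q)
        p += 0.5 * dt * f       # half kick
        traj.append((q, p))
    return traj

# Harmonic oscillator H = p^2/2 + q^2/2: energy error stays bounded and tiny
# over many periods, instead of drifting as with non-symplectic integrators.
traj = velocity_verlet(1.0, 0.0, lambda q: -q, dt=0.01, n_steps=10000)
energies = [0.5 * p * p + 0.5 * q * q for q, p in traj]
drift = max(abs(e - energies[0]) for e in energies)
```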
Yaman Güçlü; William N. G. Hitchon
2013-05-23
The term `Convected Scheme' (CS) refers to a family of algorithms, most usually applied to the solution of Boltzmann's equation, which uses a method of characteristics in integral form to project an initial cell forward onto a group of final cells. As such the CS is a `forward-trajectory' semi-Lagrangian scheme. For multi-dimensional simulations of neutral gas flows, the cell-centered version of this semi-Lagrangian (CCSL) scheme has advantages over other options due to its implementation simplicity, low memory requirements, and easier treatment of boundary conditions. The main drawback of the CCSL-CS to date has been its high numerical diffusion in physical space, because of the 2nd-order remapping that takes place at the end of each time step. By means of a Modified Equation Analysis, it is shown that a high-order estimate of the remapping error can be obtained a priori, and a small correction to the final position of the cells can be applied upon remapping, in order to achieve full compensation of this error. The resulting scheme is 4th-order accurate in space while retaining the desirable properties of the CS: it is conservative and positivity-preserving, and the overall algorithm complexity is not appreciably increased. Two monotone (i.e. non-oscillating) versions of the fourth-order CCSL-CS are also presented: one uses a common flux-limiter approach; the other uses a non-polynomial reconstruction to evaluate the derivatives of the density function. The method is illustrated in simple one- and two-dimensional examples, and in a fully 3D solution of the Boltzmann equation describing expansion of a gas into vacuum through a cylindrical tube.
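The forward-projection-with-remap idea can be shown in its simplest form (a minimal first-order sketch of my own for constant-velocity advection, not the paper's corrected 4th-order scheme): each cell's content is moved forward and split between the destination cells it overlaps, which is conservative and positivity-preserving by construction.

```python
def convect_cells(rho, courant):
    """One forward-projection step for constant-velocity advection on a periodic
    grid, in the spirit of the cell-centered Convected Scheme: each cell's
    content moves by `courant` cells and is deposited into the two destination
    cells it overlaps (assumes courant >= 0)."""
    n = len(rho)
    shift = int(courant)        # whole-cell part of the displacement
    frac = courant - shift      # fractional overlap with the next cell
    out = [0.0] * n
    for i in range(n):
        out[(i + shift) % n] += (1.0 - frac) * rho[i]
        out[(i + shift + 1) % n] += frac * rho[i]
    return out

new = convect_cells([0.0, 1.0, 0.0, 0.0], 0.25)
```

The fractional split between two cells is exactly the remapping step whose error the paper estimates a priori and compensates to reach 4th-order accuracy.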
Yin, Rong; Zhu, Fen-Xia; Li, Xiu-Feng; Jia, Xiao-Bin
2013-11-01
Danmu is a common folk medicine of the Li nationality, with effects including clearing heat and removing toxicity, antisepsis and anti-inflammation. Danmu injection, which is developed from Danmu herbs, has been clinically applied for years and has shown curative efficacy. Although many studies have analyzed the chemical constituents of Danmu in detail, its pharmacodynamic material basis for disease prevention and treatment has not been defined. Furthermore, as the quality control methods for Danmu and its preparations remain restricted to a single index component and are to some extent irrational, they fail to ensure its inherent quality. On the basis of previous study results, as well as the "component structural theory" of the material basis, we established a "multi-dimensional structure quality control technology system" capable of reflecting the integrity of the effects of Danmu injection and its component structure hierarchy, and performed dynamic monitoring over the whole process from medicinal materials to preparation products, so as to ensure the inherent quality of Danmu injection. PMID:24494545
Nemoto, T; Funatogawa, T; Takeshi, K; Tobe, M; Yamaguchi, T; Morita, K; Katagiri, N; Tsujino, N; Mizuno, M
2012-09-01
Early intervention for psychosis in Japan has lagged behind that in western countries, but has rapidly begun to attract attention in recent years. As part of a worldwide trend, a multi-dimensional treatment centre for early psychosis consisting of a Youth Clinic, which specialises in young individuals with an at-risk mental state for psychosis, and Il Bosco, a special day-care service for individuals with early psychosis, was initiated at the Toho University Omori Medical Center in Japan in 2007. The treatment centre aims to provide early intervention to prevent the development of full-blown psychosis in patients with an at-risk mental state and intensive rehabilitation to enable first-episode schizophrenia patients to return to the community. We presently provide the same programmes for both groups at Il Bosco. However, different approaches may need to be considered for patients with an at-risk mental state and for those with first-episode schizophrenia. More phase-specific and need-specific services will be indispensable for early psychiatric interventions in the future. PMID:23019284
Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás
2014-01-01
This study aims to verify the effect on the motivation of physical education students of a multi-dimensional programme in dance teaching sessions. The programme incorporates teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th-year Secondary Education students (control and experimental), delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher in the experimental group to support these needs. Initial and final measurements were taken in both groups, and the results revealed that students in the experimental group showed an increase in perceived autonomy and, in general, in the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. We therefore highlight the programme's usefulness in increasing students' motivation towards this content, which teachers of this area find particularly difficult to develop. PMID:24454831
Multi-dimensional permutation-modulation format for coherent optical communications.
Ishimura, Shota; Kikuchi, Kazuro
2015-06-15
We introduce the multi-dimensional permutation-modulation format in coherent optical communication systems and analyze its performance, focusing on the power efficiency and the spectral efficiency. In the case of four-dimensional (4D) modulation, the polarization-switched quadrature phase-shift keying (PS-QPSK) modulation format and the polarization quadrature-amplitude modulation (POL-QAM) format can be classified as permutation-modulation formats. Beyond these well-known modulation formats, we find novel modulation formats that trade off power efficiency against spectral efficiency. As the dimension increases, the spectral efficiency can more closely approach the channel capacity predicted by Shannon theory. We verify these theoretical characteristics through computer simulations of the symbol-error-rate (SER) and bit-error-rate (BER) performances. For example, the newly found eight-dimensional (8D) permutation-modulation format can improve the spectral efficiency up to 2.75 bit/s/Hz/pol/channel, while the power penalty against QPSK is about 1 dB at BER = 10^-3. PMID:26193538
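The combinatorics behind such formats can be made concrete with a small enumeration sketch (my construction, not the paper's code): a Slepian-type permutation-modulation codebook consists of all distinct coordinate permutations of an initial vector, optionally expanded with sign flips of the nonzero coordinates.

```python
from itertools import permutations
from math import log2

def permutation_codebook(init_vector, signs=False):
    """Enumerate a Slepian-type permutation-modulation codebook: all distinct
    coordinate permutations of `init_vector`, optionally with sign flips of
    nonzero entries. Illustrative counting only, not an optimized constellation."""
    words = set(permutations(init_vector))
    if signs:
        expanded = set()
        for w in words:
            combos = [()]
            for x in w:
                vals = {x, -x}          # collapses to {0} when x == 0
                combos = [c + (v,) for c in combos for v in vals]
            expanded.update(combos)
        words = expanded
    return words

plain = permutation_codebook((1, 1, 0, 0))               # C(4,2) = 6 words
signed = permutation_codebook((1, 1, 0, 0), signs=True)  # 6 * 2 * 2 = 24 words
bits_per_symbol = log2(len(signed))                      # ~4.58 bits per 4D symbol
```

Notably, the 24 signed permutations of (1, 1, 0, 0) match the constellation size usually quoted for POL-QAM, consistent with the abstract's point that these 4D formats fall inside the permutation-modulation family.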
NASA Astrophysics Data System (ADS)
Cheng, Z.; Hsu, T. J.; Calantoni, J.
2014-12-01
In the past decade, researchers have clearly been making progress in predicting coastal erosion and recovery; however, it is also clear that existing coastal evolution models cannot predict coastal responses to extreme storm events. In this study, we investigate the dynamics of momentary bed failure driven by large horizontal pressure gradients, which may be the dominant sediment transport mechanism under intense storm conditions. Recently, a multi-dimensional two-phase Eulerian sediment transport model was developed and disseminated to the research community as an open-source code. The numerical model is based on extending an open-source CFD library of solvers, OpenFOAM. Model results were validated with published sediment concentration and velocity data measured in steady and oscillatory flow. The 2DV Reynolds-averaged model showed wave-like bed instabilities when the criterion for momentary bed failure was exceeded. These bed instabilities were responsible for the large transport rate observed during plug flow, and their onset was associated with a large erosion depth. To better resolve the onset of bed instabilities, the subsequent energy cascade and the resulting large sediment transport rate and sediment pickup flux, 3D turbulence-resolving simulations were also carried out. Detailed validation of the 3D turbulence-resolving Eulerian two-phase model will be presented, along with an expanded investigation of the dynamics of momentary bed failure.
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng
2014-08-01
Ultrahigh throughput capacity requirements are challenging current optical switching nodes as data center networks develop rapidly. Pbit/s-level all-optical switching networks will need to be deployed soon, which will greatly increase the complexity of node architectures. How to control the future network and node equipment together therefore becomes a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, consisting of a Provider NOX (P-NOX) and a Node NOX (N-NOX). With the cooperation of the P-NOX and N-NOX, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated with efficient eSDN control. The Pbit/s-level all-optical switching nodes in the testbed are implemented on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is equipped with only four input line boxes and four output line boxes. Experimental results are given to verify the performance of our proposed control and switching architecture.
Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping
2000-01-01
We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
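For intuition, the DG correction is often written as a Bohm-like quantum potential proportional to the second derivative of the square root of the carrier density. The sketch below evaluates such a term on a 1-D grid with central differences; the function name and the value of the gradient coefficient `bn` are illustrative assumptions, not the paper's implementation.

```python
from math import sqrt

def dg_quantum_correction(n, dx, bn):
    # Bohm-like DG correction term: 2*bn * d2(sqrt(n))/dx2 / sqrt(n),
    # evaluated with central differences on the interior grid points.
    s = [sqrt(v) for v in n]
    corr = [0.0] * len(n)
    for i in range(1, len(n) - 1):
        d2 = (s[i - 1] - 2.0 * s[i] + s[i + 1]) / dx**2
        corr[i] = 2.0 * bn * d2 / s[i]
    return corr

# A spatially uniform density feels no quantum correction.
flat = dg_quantum_correction([1e16] * 7, 1e-9, 1.0)
```

The correction is largest where the density varies sharply, e.g. near the oxide interface, which is consistent with the reduced inversion charge described above.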
Gregory K Miller; David A Petti; Dominic J Varacalle; John T Maki
2003-01-01
The fundamental design for a gas-cooled reactor relies on the behavior of the coated particle fuel. The coating layers, termed the TRISO coating, act as a mini-pressure vessel that retains fission products. Results of US irradiation experiments show that many more fuel particles have failed than can be attributed to one-dimensional pressure vessel failures alone. Post-irradiation examinations indicate that multi-dimensional
Klaus Peine; Daniel Wentzel; Andreas Herrmann
This research investigates consumer preferences for different multi-dimensional price profiles. Drawing on research on price affect, we investigate whether consumers prefer descending monthly installments (e.g., 40, 30, 20, 10) over constant (e.g., 25, 25, 25, 25) or ascending ones (e.g., 10, 20, 30, 40). Results of a field experiment with a sample of 1,628 German car buyers corroborate the hypothesized
Guttman Scale Analysis: An Application to Library Science.
ERIC Educational Resources Information Center
Burgin, Robert
1989-01-01
Outlines the general techniques of Guttman scale analysis and briefly describes its uses in social science research. To illustrate the potential application to library science, a Guttman scale of restrictiveness in dealing with overdue books is developed and data from a 1986 survey of public libraries are fitted into the scale. (23 references)…
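The core diagnostic of a Guttman scale can be sketched as a coefficient of reproducibility: order items by popularity and count how often a respondent's pattern deviates from the ideal cumulative pattern implied by their total score. Function names and the toy data are illustrative, and counting conventions for Guttman errors vary across texts.

```python
def reproducibility(responses):
    # Coefficient of reproducibility for a Guttman scale:
    # 1 - (scaling errors) / (total responses).
    # Items are ordered by popularity; a respondent with total
    # score s "should" endorse exactly the s most popular items.
    k = len(responses[0])
    order = sorted(range(k), key=lambda j: -sum(r[j] for r in responses))
    errors = 0
    for r in responses:
        s = sum(r)
        ordered = [r[j] for j in order]
        ideal = [1 if rank < s else 0 for rank in range(k)]
        errors += sum(1 for a, b in zip(ordered, ideal) if a != b)
    return 1.0 - errors / (len(responses) * k)

# A perfectly cumulative response pattern reproduces exactly.
cr = reproducibility([(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 0, 0)])
print(cr)  # 1.0
```

Values near 0.9 or above are conventionally taken as evidence that the items form a usable unidimensional scale.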
Scale Analysis of Deep and Shallow Convection in the Atmosphere
Yoshimitsu Ogura; Norman A. Phillips
1962-01-01
The approximate equations of motion obtained by Batchelor in 1953 are rederived here by a formal scale analysis, with the assumptions that the percentage range in potential temperature is small and that the time scale is set by the Brunt-Väisälä frequency. Acoustic waves are then absent. If the vertical scale is small compared to the depth of an adiabatic atmosphere, the
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for identifying the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic, and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
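The regression idea can be sketched in its simplest form: fit two straight lines to the log-log fluctuation curve and place the crossover at the breakpoint that minimizes the combined residual sum of squares. This toy version (hypothetical function names, no inference or confidence intervals, a single crossover) only hints at the paper's full statistical procedure.

```python
def _sse(x, y):
    # Residual sum of squares of an ordinary least-squares line fit.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

def find_crossover(log_s, log_f):
    # Try every interior breakpoint; keep the one whose two-segment
    # fit has the smallest total residual.
    best, best_err = None, float("inf")
    for b in range(2, len(log_s) - 2):
        err = _sse(log_s[:b + 1], log_f[:b + 1]) + _sse(log_s[b:], log_f[b:])
        if err < best_err:
            best, best_err = b, err
    return best

# Synthetic log-log curve: slope 0.5 below the kink, 1.5 above it.
xs = [i / 10 for i in range(21)]
ys = [0.5 * x if x <= 1.0 else 0.5 + 1.5 * (x - 1.0) for x in xs]
cross = find_crossover(xs, ys)
print(xs[cross])  # 1.0, the true crossover scale
```

A full treatment would add significance tests and confidence intervals on the breakpoint, which is precisely what the proposed scaling-identification regression model supplies.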
Markov Chain Analysis for Large-Scale Grid Systems
Christopher Dabrowski and Fern Hunt. NISTIR 7566. Software and Systems Division, Information Technology Laboratory, National Institute of Standards and Technology.
META-SIMULATION DESIGN AND ANALYSIS FOR LARGE SCALE NETWORKS
Kalyanaraman, Shivkumar
By David W. Bauer Jr. A thesis (for graduation December 2005).
Large-Scale Quantitative Analysis of Painting Arts
Jeong, Hawoong
Daniel Kim, Seung-Woo Son & Hawoong Jeong. A quantitative analysis of a large-scale database of digitally acquired artistic paintings, building a bridge between art and science.
Effective Web-Scale Crawling Through Website Analysis Ivan Gonzalez
Iván González. Carnegie Mellon University and IBM Almaden Research Center, San Jose, California. The crawler couples online and offline analysis; the online analysis stage comprises the scheduler and the online analysis manager.
Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow
NASA Astrophysics Data System (ADS)
Wood, William Alfred, III
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. The scalar test cases include advected shear, circular advection, non-linear advection with coalescing shock and expansion fans, and advection-diffusion. For all scalar cases the fluctuation splitting scheme is more accurate, and the primary mechanism for the improved fluctuation splitting performance is shown to be the reduced production of artificial dissipation relative to DMFDSFV. The most significant scalar result is for combined advection-diffusion, where the present fluctuation splitting scheme is able to resolve the physical dissipation from the artificial dissipation on a much coarser mesh than DMFDSFV is able to, allowing order-of-magnitude reductions in solution time. Among the inviscid test cases the converging supersonic streams problem is notable in that the fluctuation splitting scheme exhibits superconvergent third-order spatial accuracy. For the inviscid cases of a supersonic diamond airfoil, supersonic slender cone, and incompressible circular bump the fluctuation splitting drag coefficient errors are typically half the DMFDSFV drag errors. However, for the incompressible inviscid sphere the fluctuation splitting drag error is larger than for DMFDSFV. 
A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, along with reduced artificial dissipation production relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature-clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting-mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing, since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region.
Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.
Anusha, L. S.; Nagendra, K. N. [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bangalore 560 034 (India)
2011-09-01
In two previous papers, we solved the polarized radiative transfer (RT) equation in multi-dimensional (multi-D) geometries with partial frequency redistribution as the scattering mechanism. We assumed Rayleigh scattering as the only source of linear polarization (Q/I, U/I) in both these papers. In this paper, we extend these previous works to include the effect of weak oriented magnetic fields (Hanle effect) on line scattering. We generalize the technique of Stokes vector decomposition in terms of the irreducible spherical tensors T^K_Q, developed by Anusha and Nagendra, to the case of RT with Hanle effect. A fast iterative method of solution (based on the Stabilized Preconditioned Bi-Conjugate-Gradient technique), developed by Anusha et al., is now generalized to the case of RT in magnetized three-dimensional media. We use the efficient short-characteristics formal solution method for multi-D media, generalized appropriately to the present context. The main results of this paper are the following: (1) a comparison of emergent (I, Q/I, U/I) profiles formed in one-dimensional (1D) media, with the corresponding emergent, spatially averaged profiles formed in multi-D media, shows that in the spatially resolved structures, the assumption of 1D may lead to large errors in linear polarization, especially in the line wings. (2) The multi-D RT in semi-infinite non-magnetic media causes a strong spatial variation of the emergent (Q/I, U/I) profiles, which is more pronounced in the line wings. (3) The presence of a weak magnetic field modifies the spatial variation of the emergent (Q/I, U/I) profiles in the line core, by producing significant changes in their magnitudes.
DiMattina, Christopher; Zhang, Kechen
2015-09-01
Numerous psychophysical studies have considered how subjects combine multiple sensory cues to make perceptual decisions, or how contextual information influences the perception of a target stimulus. In cases where cues interact in a linear manner, it is sufficient to characterize an observer's sensitivity along each individual feature dimension to predict perceptual decisions when multiple cues are varied simultaneously. However, in many situations sensory cues interact non-linearly, and therefore quantitatively characterizing subject behavior requires estimating a complex non-linear psychometric model which may contain numerous parameters. In this computational methods study, we analyze three efficient implementations of the well-studied PSI procedure (Kontsevich & Tyler, 1999) for adaptive psychophysical data collection which generalize well to psychometric models defined in multi-dimensional stimulus spaces where the standard implementation is intractable. Using generic multivariate logistic regression models as a test bed for our algorithms, we present two novel implementations of the PSI procedure which offer substantial speed-up compared to previously proposed implementations: (1) A look-up table method where optimal stimulus placements are pre-computed for various values of the (unknown) true model parameters and (2) A Laplace approximation method using a continuous Gaussian approximation to the evolving posterior density. We demonstrate the utility of these novel methods for quickly and accurately estimating the parameters of hypothetical nonlinear cue combination models in 2- and 3-dimensional stimulus spaces. In addition to these generic examples, we further illustrate our methods using a biologically derived model of how stimulus contrast influences orientation discrimination thresholds. Finally, we consider strategies for further speeding up experiments and extensions to models defined in dozens of dimensions. 
This work is potentially of great significance to investigators who are interested in quantitatively modeling the perceptual representations of complex naturalistic stimuli like textures and occlusion contours which are defined by multiple feature dimensions. Meeting abstract presented at VSS 2015. PMID:26326165
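The core of the PSI procedure can be sketched in a few lines: maintain a gridded posterior over psychometric-model parameters and pick the next stimulus that minimizes the expected posterior entropy. The sketch below uses a one-dimensional two-parameter logistic model and illustrative grids; it is a toy version of the standard procedure, not the authors' multi-dimensional implementations.

```python
from math import exp, log

def logistic(x, a, b):
    # Two-parameter logistic psychometric function (threshold a, slope b).
    return 1.0 / (1.0 + exp(-(x - a) / b))

# Illustrative parameter grid and candidate stimulus set.
grid = [(a / 10.0, b) for a in range(-20, 21) for b in (0.5, 1.0, 2.0)]
prior = {th: 1.0 / len(grid) for th in grid}
candidates = [x / 2.0 for x in range(-6, 7)]

def update(post, x, r):
    # Bayes update of the parameter posterior after response r at stimulus x.
    new = {th: p * (logistic(x, *th) if r else 1.0 - logistic(x, *th))
           for th, p in post.items()}
    z = sum(new.values())
    return {th: p / z for th, p in new.items()}

def entropy(post):
    return -sum(p * log(p) for p in post.values() if p > 0.0)

def select_stimulus(post):
    # PSI rule: choose the stimulus minimizing expected posterior entropy.
    best_x, best_h = None, float("inf")
    for x in candidates:
        p1 = sum(p * logistic(x, *th) for th, p in post.items())
        h = (p1 * entropy(update(post, x, 1))
             + (1.0 - p1) * entropy(update(post, x, 0)))
        if h < best_h:
            best_x, best_h = x, h
    return best_x

x0 = select_stimulus(prior)
post = update(prior, x0, 1)
```

The look-up table and Laplace-approximation variants discussed above attack the cost of exactly this inner loop, which grows rapidly with the dimensionality of the stimulus and parameter grids.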
MULTI-DIMENSIONAL FEATURES OF NEUTRINO TRANSFER IN CORE-COLLAPSE SUPERNOVAE
Sumiyoshi, K. [Numazu College of Technology, Ooka 3600, Numazu, Shizuoka 410-8501 (Japan); Takiwaki, T. [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Matsufuru, H. [Computing Research Center, High Energy Accelerator Research Organization 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Yamada, S., E-mail: sumi@numazu-ct.ac.jp, E-mail: takiwaki.tomoya@nao.ac.jp, E-mail: hideo.matsufuru@kek.jp, E-mail: shoichi@heap.phys.waseda.ac.jp [Science and Engineering and Advanced Research Institute for Science and Engineering, Waseda University, Okubo, 3-4-1, Shinjuku, Tokyo 169-8555 (Japan)
2015-01-01
We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ~20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (~2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.
NASA Astrophysics Data System (ADS)
Alizadeh, M.; Schuh, H.; Schmidt, M. G.
2012-12-01
In the last decades the Global Navigation Satellite System (GNSS) has turned into a promising tool for probing the ionosphere. The classical input data for developing Global Ionosphere Maps (GIM) are obtained from dual-frequency GNSS observations. Simultaneous observations of GNSS code or carrier phase at each frequency are used to form a geometry-free linear combination which contains only the ionospheric refraction term and the differential inter-frequency hardware delays. To relate the ionospheric observable to the electron density, a model is used that represents an altitude-dependent distribution of the electron density. This study aims at developing a global multi-dimensional model of the electron density using simulated GNSS observations from about 150 International GNSS Service (IGS) ground stations. Because IGS stations are inhomogeneously distributed around the world, and the accuracy and reliability of the developed models are considerably lower in areas not well covered by IGS ground stations, the International Reference Ionosphere (IRI) model has been used as a background model. The correction term is estimated by applying a spherical harmonics expansion to the GNSS ionospheric observable. Within this study this observable is related to the electron density using different functions for the bottom-side and top-side ionosphere: the bottom-side ionosphere is represented by an alpha-Chapman function and the top-side ionosphere using the newly proposed Vary-Chap function. (Figure captions: maximum electron density, IRI background model (elec/m3), day 202, 2010, 0 UT; height of maximum electron density, IRI background model (km), day 202, 2010, 0 UT.)
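For reference, an alpha-Chapman layer of the kind used for the bottom-side profile can be written Ne(h) = Nm · exp(0.5 · (1 − z − e^(−z))) with z = (h − hm)/H, peaking at the height hm with peak density Nm. A minimal sketch follows; the parameter values are illustrative, not taken from the study.

```python
from math import exp

def chapman_alpha(h, nm, hm, H):
    # Alpha-Chapman electron density profile: peaks at height hm
    # with peak density nm; H is the neutral scale height.
    z = (h - hm) / H
    return nm * exp(0.5 * (1.0 - z - exp(-z)))

# Peak density is recovered exactly at the peak height (z = 0),
# and the density decays above the peak.
peak = chapman_alpha(300.0, 1e12, 300.0, 60.0)
above = chapman_alpha(400.0, 1e12, 300.0, 60.0)
```

The Vary-Chap top-side representation mentioned above generalizes this form by letting the scale height H vary with altitude.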
Multi-dimensional Conjunctive Operation Rule for the Water Supply System
NASA Astrophysics Data System (ADS)
Chiu, Y.; Tan, C. A.; Chen, Y.; Tung, C.
2011-12-01
In recent years, with the increase of floods and droughts in both number and intensity, floods have become more severe during the wet season and droughts more serious during the dry season. In order to reduce their impact on agriculture, industry, and even human beings, the conjunctive use of surface water and groundwater has received much attention and become a new direction for future research. Traditionally, reservoir operation follows the operation rule curve to satisfy the water demand and considers only water levels at the reservoirs and the time of year. The strategy used in the conjunctive-use management model is that the water demand is first satisfied by the reservoirs operated based on the rule curves, and the deficit between demand and supply, if it exists, is provided by groundwater. In this study, we propose a new operation rule, named the multi-dimensional conjunctive operation rule curve (MCORC), which extends the concept of the reservoir operation rule curve. The MCORC is a three-dimensional curve and is applied to both surface water and groundwater. Three sets of parameters are considered simultaneously in the curve: water levels and the supply percentage at reservoirs, groundwater levels and the supply percentage, and the time of year. The zonation method and a heuristic algorithm are applied to optimize the curve subject to the constraints of the reservoir operation rules and the safe yield of groundwater. The proposed conjunctive operation rule was applied to a water supply system analogous to the area in northern Taiwan. The results showed that the MCORC could increase the efficiency of water use and reduce the risk of serious water deficits.
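A single allocation step of a rule-curve-style conjunctive strategy can be sketched as follows: release surface water only from storage above the rule-curve level, route the remaining deficit to groundwater up to its safe yield, and record whatever is left as a shortage. The function name, the single-level rule, and the numbers are deliberate simplifications of the three-dimensional MCORC.

```python
def allocate(demand, storage, rule_level, gw_yield):
    # Surface water is released only from storage above the
    # rule-curve level; the remaining deficit is met from
    # groundwater, capped at the safe yield; the residual is
    # an unmet shortage.
    surface = min(demand, max(storage - rule_level, 0.0))
    deficit = demand - surface
    ground = min(deficit, gw_yield)
    return surface, ground, deficit - ground

# 10 units demanded, but only 5 units of storage sit above the
# rule level, so groundwater covers the remaining 5 units.
print(allocate(10.0, 50.0, 45.0, 8.0))  # (5.0, 5.0, 0.0)
```

The MCORC generalizes this by making both the surface and groundwater supply percentages functions of reservoir level, groundwater level, and time of year simultaneously.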
Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features. 
PMID:25505405
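The probabilistic-consensus step can be sketched as averaging the label distributions of the k most similar interneurons in morphometric space. Everything below (the function name, toy feature vectors, and label distributions) is illustrative; the paper's LBN machinery encodes full joint distributions over the five label variables rather than the single distribution used here.

```python
from math import dist

def lbn_consensus(query, bank, k=3):
    # bank: list of (morphometric_features, label_distribution) pairs.
    # Average the label distributions of the k nearest neighbours in
    # feature space and renormalize to a consensus distribution.
    nearest = sorted(bank, key=lambda item: dist(query, item[0]))[:k]
    m = len(nearest[0][1])
    avg = [sum(item[1][i] for item in nearest) / k for i in range(m)]
    z = sum(avg)
    return [a / z for a in avg]

# Three of the four toy neurons sit near the query; their label
# distributions dominate the consensus.
bank = [((0.0, 0.0), [0.9, 0.1]),
        ((0.1, 0.0), [0.8, 0.2]),
        ((1.0, 1.0), [0.1, 0.9]),
        ((0.0, 0.1), [0.7, 0.3])]
res = lbn_consensus((0.05, 0.05), bank, k=3)
```

A crisp prediction, as in the paper's evaluation, would then take the most probable label from the consensus distribution.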
Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis
Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.
2003-01-01
This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable aspects of a real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques that can analyse multi-dimensional process data using sophisticated visualisation can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysis and diagnosis tool for understanding system behaviour and extracting knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
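A toy version of the Kohonen map at the heart of this approach fits in a few dozen lines: a 1-D lattice of units is pulled toward each input, with a Gaussian neighbourhood that shrinks over training so that nearby units end up representing similar operating conditions. The lattice size, decay schedules, and two-cluster data below are illustrative assumptions, not the plant's configuration.

```python
import math
import random

def bmu(weights, x):
    # Index of the best-matching unit (closest weight vector).
    return min(range(len(weights)),
               key=lambda i: sum((w - xi) ** 2
                                 for w, xi in zip(weights[i], x)))

def train_som(data, n_units=4, iters=400, seed=1):
    # 1-D Kohonen lattice with linearly decaying learning rate and
    # Gaussian neighbourhood width.
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(iters):
        x = data[t % len(data)]
        frac = 1.0 - t / iters
        lr = 0.5 * frac
        sigma = 1.0 * frac + 0.01
        b = bmu(weights, x)
        for i in range(n_units):
            h = math.exp(-((i - b) ** 2) / (2.0 * sigma ** 2))
            for d in range(dim):
                weights[i][d] += lr * h * (x[d] - weights[i][d])
    return weights

# Two artificial "operating conditions" should map to different units.
data = [(0.0, 0.0), (0.1, 0.05), (1.0, 1.0), (0.95, 0.9)]
som = train_som(data)
```

Component planes, as used in the paper, are simply per-variable views of the trained weight lattice, which is what makes local variable relationships visible.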
NASA Astrophysics Data System (ADS)
Du, Wenbo
A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.
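The surrogate-then-search loop described above can be illustrated with a deliberately simple stand-in: an inverse-distance-weighted (Shepard) surrogate fitted to a handful of samples of a toy objective, then maximized on a fine grid. The dissertation's actual surrogates and gradient-based optimizers are more sophisticated; all names and numbers here are illustrative.

```python
def idw_surrogate(xs, ys, power=2.0):
    # Inverse-distance-weighted (Shepard) surrogate of y(x);
    # interpolates the sample points exactly.
    def predict(x):
        num = den = 0.0
        for xi, yi in zip(xs, ys):
            d = abs(x - xi)
            if d < 1e-12:
                return yi
            w = 1.0 / d ** power
            num += w * yi
            den += w
        return num / den
    return predict

# Toy "energy density" objective peaking at a design value of 0.3,
# sampled at 11 design points, then maximized on a finer grid.
xs = [i / 10 for i in range(11)]
ys = [-(x - 0.3) ** 2 for x in xs]
model = idw_surrogate(xs, ys)
best_x = max((i / 100 for i in range(101)), key=model)
```

In practice one would also quantify surrogate uncertainty and refine the sample set near the optimum, which is the role the combined surrogate- and gradient-based approach plays above.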
Multi-dimensional analysis of the chemical and physical properties of spiral galaxies
Rosales Ortega, Fernando Fabián
2010-02-09
The PPAK Integral Field Spectroscopy (IFS) Nearby Galaxies Survey (PINGS): a 2-dimensional spectroscopic mosaicking of 17 nearby disk galaxies in the optical wavelength range. This project represents the first attempt to obtain continuous coverage...
NASA Technical Reports Server (NTRS)
Meier, D. L.
1998-01-01
A new field of numerical astrophysics is introduced which addresses the solution of large, multidimensional structural or slowly-evolving problems (rotating stars, interacting binaries, thick advective accretion disks, four dimensional spacetimes, etc.), as well as rapidly evolving systems.
David L. Meier
1998-11-10
A new field of numerical astrophysics is introduced which addresses the solution of large, multidimensional structural or slowly-evolving problems (rotating stars, interacting binaries, thick advective accretion disks, four-dimensional spacetimes, etc.). The technique employed is the Finite Element Method (FEM), commonly used to solve engineering structural problems. The approach developed herein has the following key features: 1. The computational mesh can extend into the time dimension as well as space, perhaps only a few cells, or throughout spacetime. 2. Virtually all equations describing the astrophysics of continuous media, including the field equations, can be written in a compact form similar to that routinely solved by most engineering finite element codes. 3. The transformations that occur naturally in the four-dimensional FEM possess both coordinate and boost features, such that (a) although the computational mesh may have a complex, non-analytic, curvilinear structure, the physical equations still can be written in a simple coordinate system independent of the mesh geometry, and (b) if the mesh has a complex flow velocity with respect to coordinate space, the transformations will form the proper arbitrary Lagrangian-Eulerian advective derivatives automatically. 4. The complex difference equations on the arbitrary curvilinear grid are generated automatically from encoded differential equations. This first paper concentrates on developing a robust and widely-applicable set of techniques using the nonlinear FEM and presents some examples.
Building a foundation for human centric multi-dimensional data analysis
Ponto, Kevin
2010-01-01
transferred to the GPU to fully render a given viewpoint … where the render thread uploads it to the GPU. The texture … Render performance as a function of volume complexity, expressed by the number of modified voxels, using a CPU and GPU
Multi-dimensional analysis of hdl: an approach to understanding atherogenic hdl
Johnson, Jr., Jeffery Devoyne
2009-05-15
-MS), capillary electrophoresis (CE), isoelectric focusing (IEF) and apoptosis studies involving cell cultures. It is becoming clearer that cholesterol concentrations themselves do not provide sufficient data to assess the quality of cardiovascular health. As a...
Correlation network analysis for multi-dimensional data in stocks market
NASA Astrophysics Data System (ADS)
Kazemilari, Mansooreh; Djauhari, Maman Abdurachman
2015-07-01
This paper shows how the concept of vector correlation can appropriately measure the similarity among multivariate time series in a stock network. The motivations of this paper are (i) to apply the RV coefficient to define the network among stocks, where each stock is represented by a multivariate time series; (ii) to analyze that network in terms of the topological structure of the stocks across all minimum spanning trees; and (iii) to compare the network topology between the univariate correlation network based on r and the multivariate correlation network based on the RV coefficient.
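The RV coefficient at the heart of this approach generalizes the squared correlation to pairs of data matrices. A minimal sketch, assuming two column-centered matrices with the same number of observations (illustrative only, not the authors' code):

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two data matrices X (n, p) and Y (n, q).

    Returns a similarity in [0, 1]; 1 means the two multivariate
    configurations are proportional to each other.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxx = X.T @ X
    Syy = Y.T @ Y
    Sxy = X.T @ Y
    num = np.trace(Sxy @ Sxy.T)
    den = np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
print(round(rv_coefficient(X, 2.0 * X), 3))  # proportional configurations give RV = 1
```

In a stock-network setting, each stock's matrix would hold several observed variables (e.g. daily prices and volumes) over the same dates, and the pairwise RV values would define the edge weights from which a minimum spanning tree is extracted.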
Nonlinear Analysis of Multi-Dimensional Signals: Local Adaptive Estimation of Complex
Garbe, Christoph S.
, Ingo Stuke, Cicero Mota, Martin Böhme, Martin Haker, Tobias Schuchert, Hanno Scharr, Til … Lübeck, Germany {boehme,haker,barth}@inb.uni-luebeck.de … University of Lübeck, Institute
Convective scale weather analysis and forecasting
NASA Technical Reports Server (NTRS)
Purdom, J. F. W.
1984-01-01
How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated, with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to frequently observe the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.
On SCALE Validation for PBR Analysis
Ilas, Germina
2010-01-01
Studies were performed to assess the capabilities of the SCALE code system to provide accurate cross sections for analyses of pebble bed reactor configurations. The analyzed configurations are representative of fuel in the HTR-10 reactor in the first critical core and at full power operation conditions. Relevant parameters-multiplication constant, spectral indices, few-group cross sections-are calculated with SCALE for the considered
Scale-Specific Multifractal Medical Image Analysis
Braverman, Boris
2013-01-01
Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. ...
Scaling analysis on Indian foreign exchange market
NASA Astrophysics Data System (ADS)
Sarkar, A.; Barat, P.
2006-05-01
In this paper, we investigate the scaling behavior of the average daily exchange rate returns of the Indian Rupee against four foreign currencies: the US Dollar, Euro, Great Britain Pound and Japanese Yen. The average daily exchange rate return of the Indian Rupee against the US Dollar is found to exhibit persistent scaling behavior and to follow a Lévy stable distribution. On the contrary, the average daily exchange rate returns of the other three foreign currencies show neither persistence nor antipersistence and follow a Gaussian distribution.
Stochastic analysis on large scale interacting systems
Tadahisa Funaki
The evolution laws of physical phenomena, such as the dynamics of fluids, are in general described by nonlinear partial differential equations. Behind such physical phenomena lies a microscopic world composed of atoms or molecules. It is a system with an enormous number of degrees of freedom, evolving in time through very complex interactions among its constituents. We call it a large scale
Confirmatory Factor Analysis of the Resiliency Scale.
ERIC Educational Resources Information Center
Bennett, Ellen B.; Novotny, Jenny A.; Green, Kathy E.; Kluever, Raymond C.
The Resiliency Scale (C. Jew, 1992) is a recently developed measure intended to assess an individual's level of three facets of resiliency (optimism, skill acquisition, and risk-taking). Separate exploratory factor analyses with three diverse groups have led to the definition of subscales bearing some similarities. In this study, items comparable…
Multi-dimensional forward modeling of frequency-domain helicopter-borne electromagnetic data
NASA Astrophysics Data System (ADS)
Miensopust, M.; Siemon, B.; Börner, R.; Ansari, S.
2013-12-01
Helicopter-borne frequency-domain electromagnetic (HEM) surveys are used for fast, high-resolution, three-dimensional (3-D) resistivity mapping. Nevertheless, 3-D modeling and inversion of an entire HEM data set is in many cases impractical and, therefore, interpretation is commonly based on one-dimensional (1-D) modeling and inversion tools. Such an approach is valid for environments with horizontally layered targets and for groundwater applications, but areas of higher dimensionality are not recovered correctly by 1-D methods. The focus of this work is multi-dimensional forward modeling. As there is no analytic solution to verify (or falsify) the obtained numerical solutions, comparison with 1-D values as well as among various two-dimensional (2-D) and 3-D codes is essential. At the center of a large structure (a few hundred meters edge length), and above the background structure at some distance from the anomaly, 2-D and 3-D values should match the 1-D solution. Higher-dimensional conditions are present at the edges of the anomaly and, therefore, only a comparison of different 2-D and 3-D codes gives an indication of the reliability of the solution. The more codes agree - especially codes based on different methods and/or written by different programmers - the more reliable the obtained synthetic data set. Very simple structures, such as a conductive or resistive block embedded in a homogeneous or layered half-space without any topography and with a constant sensor height, were chosen to calculate synthetic data. For the comparison, one finite element 2-D code and numerous 3-D codes based on finite difference, finite element, and integral equation approaches were applied. Preliminary results of the comparison will be shown and discussed. Additionally, challenges that arose from this comparative study will be addressed, and further steps toward more realistic field data settings for forward modeling will be discussed.
As the driving engine of an inversion algorithm is its forward solver, applying inversion codes to HEM data is only sensible once the forward modeling results are reliable (and their limits and weaknesses are known and manageable).
2013-01-01
Background: Health assessment measurements for patients with rheumatoid arthritis (RA) have to be meaningful, valid and relevant. A commonly used questionnaire for patients with RA is the Stanford Health Assessment Questionnaire Disability Index (HAQ), which has been available in Swedish since 1988. The HAQ has been revised and improved several times and the latest version is the Multi Dimensional Health Assessment Questionnaire (MDHAQ). The aim of this study was to translate the MDHAQ to Swedish conditions and to test the validity and reliability of this version for persons with RA. Methods: Translation and adaptation of the MDHAQ were performed according to guidelines by Guillemin et al. The translated version was tested for face validity and test-retest reliability in a group of 30 patients with RA. Content validity, criterion validity and internal consistency were tested in a larger study group of 83 patients with RA. Reliability was tested with test-retest and Cronbach's alpha for internal consistency. Two aspects of validity were explored: content and criterion validity. Content validity was tested with a content validity index. Criterion validity was tested with concurrent validity by exploring the correlation between the MDHAQ-S and the AIMS2-SF. Floor and ceiling effects were explored. Results: Test-retest with intra-class correlation coefficient (ICC) gave a coefficient of 0.85 for physical function and 0.79 for psychological properties. Reliability testing with Cronbach's alpha gave an alpha of 0.65 for the psychological dimension and an alpha of 0.88 for the physical dimension of the MDHAQ-S. The average content validity index across the items of the MDHAQ-S was 0.94. The MDHAQ-S had mainly a moderate correlation with the AIMS2-SF, except for the social dimension of the AIMS2-SF, which had a very low correlation with the MDHAQ-S. Conclusions: The MDHAQ-S was considered to be reliable and valid, but further research is needed concerning sensitivity to change.
PMID:23734791
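Cronbach's alpha, used above for internal consistency, can be computed directly from an item-score matrix as α = k/(k-1) · (1 − Σ item variances / total-score variance). A small sketch with invented scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Four hypothetical respondents answering three Likert-type items.
scores = np.array([
    [3, 3, 2],
    [2, 2, 2],
    [4, 4, 3],
    [1, 2, 1],
])
print(round(cronbach_alpha(scores), 2))  # about 0.95 for these made-up scores
```

High alpha here simply reflects that the three toy items rank respondents almost identically; real questionnaire dimensions, as in the MDHAQ-S, rarely cohere this tightly.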
Tounge, Brett A; Pfahler, Lori B; Reynolds, Charles H
2002-01-01
Scaling is a difficult issue for any analysis of chemical properties or molecular topology when disparate descriptors are involved. To compare properties across different data sets, a common scale must be defined. Using several publicly available databases (ACD, CMC, MDDR, and NCI) as a basis, we propose to define chemically meaningful scales for a number of molecular properties and topology descriptors. These chemically derived scaling functions have several advantages. First, it is possible to define chemically relevant scales, greatly simplifying similarity and diversity analyses across data sets. Second, this approach provides a convenient method for setting descriptor boundaries that define chemically reasonable topology spaces. For example, descriptors can be scaled so that compounds with little potential for biological activity, bioavailability, or other drug-like characteristics are easily identified as outliers. We have compiled scaling values for 314 molecular descriptors. In addition the 10th and 90th percentile values for each descriptor have been calculated for use in outlier filtering. PMID:12132889
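The percentile-based scaling and outlier filtering described above can be sketched as follows. The molecular-weight values and the 10th/90th percentile choice are illustrative assumptions, not values from the cited databases:

```python
import numpy as np

def percentile_scale(values, lo_pct=10, hi_pct=90):
    """Scale a descriptor so its 10th/90th percentiles map to 0 and 1.

    After scaling, compounds falling outside [0, 1] are easy to flag
    as outliers relative to the chemically typical range.
    """
    lo, hi = np.percentile(values, [lo_pct, hi_pct])
    return (np.asarray(values, dtype=float) - lo) / (hi - lo)

mw = np.array([150.0, 250.0, 320.0, 410.0, 480.0, 900.0])  # hypothetical molecular weights
scaled = percentile_scale(mw)
outliers = (scaled < 0) | (scaled > 1)
print(mw[outliers])  # the extreme low and high values are flagged
```

With percentile limits tabulated per descriptor (as the authors do for 314 descriptors), the same transformation puts disparate descriptors on a common, chemically meaningful scale before similarity or diversity analysis.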
Lorenzo Iorio
2005-08-27
An unexpected secular increase of the Astronomical Unit, the length scale of the Solar System, has recently been reported by three different research groups (Krasinsky and Brumberg, Pitjeva, Standish). The latest JPL measurements amount to 7 ± 2 m cy^-1. At present, there are no explanations able to accommodate such an observed phenomenon, either in the realm of classical physics or in the usual four-dimensional framework of Einsteinian General Relativity. The Dvali-Gabadadze-Porrati braneworld scenario, a multi-dimensional model of gravity aimed at explaining the observed cosmic acceleration without dark energy, predicts, among other things, a perihelion secular shift, due to Lue and Starkman, of 5 × 10^-4 arcsec cy^-1 for all the planets of the Solar System. It yields a variation of about 6 m cy^-1 for the Earth-Sun distance, which is compatible at the 1-sigma level with the observed rate of the Astronomical Unit. The recently measured corrections to the secular motions of the perihelia of the inner planets of the Solar System agree, at the 1-sigma level, with the predicted value of the Lue-Starkman effect for Mercury and Mars, and at the 2-sigma level for the Earth.
Adsorption of random copolymers: A scaling analysis
NASA Astrophysics Data System (ADS)
Sumithra, K.; Baumgaertner, A.
1999-02-01
We report on results from Monte Carlo simulations of a single random copolymer adsorbed on a homogeneous planar surface. Although the critical crossover exponent is unaltered with respect to the case of homogeneous polymers, it is found that the scaling behavior is changed by the fraction of adsorptive monomers of the chain. In particular, we present some explicit expressions for energy and radius of gyration at low temperatures.
Rasch Analysis of the Geriatric Depression Scale--Short Form
ERIC Educational Resources Information Center
Chiang, Karl S.; Green, Kathy E.; Cox, Enid O.
2009-01-01
Purpose: The purpose of this study was to examine scale dimensionality, reliability, invariance, targeting, continuity, cutoff scores, and diagnostic use of the Geriatric Depression Scale-Short Form (GDS-SF) over time with a sample of 177 English-speaking U.S. elders. Design and Methods: An item response theory, Rasch analysis, was conducted with…
Mokken Scale Analysis for Dichotomous Items Using Marginal Models
ERIC Educational Resources Information Center
van der Ark, L. Andries; Croon, Marcel A.; Sijtsma, Klaas
2008-01-01
Scalability coefficients play an important role in Mokken scale analysis. For a set of items, scalability coefficients have been defined for each pair of items, for each individual item, and for the entire scale. Hypothesis testing with respect to these scalability coefficients has not been fully developed. This study introduces marginal modelling…
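For dichotomous items, the pairwise scalability coefficient (Loevinger's H) is the ratio of the observed item covariance to its maximum possible value given the item marginals. A small illustrative sketch with made-up item scores (this shows the coefficient itself, not the marginal-modelling inference the abstract introduces):

```python
import numpy as np

def pairwise_H(x, y):
    """Loevinger's H for two dichotomous items (0/1 score vectors).

    H = cov(x, y) / cov_max, where cov_max is the largest covariance
    attainable with the observed item proportions. H = 1 means no
    Guttman errors: nobody passes the harder item but fails the easier one.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cov = np.mean(x * y) - x.mean() * y.mean()
    cov_max = min(x.mean(), y.mean()) - x.mean() * y.mean()
    return cov / cov_max

# Hypothetical responses: everyone who solves the hard item also solves the easy one.
easy = np.array([1, 1, 1, 1, 0, 1, 1, 0])
hard = np.array([1, 0, 1, 0, 0, 1, 0, 0])
print(round(pairwise_H(easy, hard), 2))  # 1.0: a perfect Guttman pattern in this toy data
```

Item- and scale-level coefficients aggregate these pairwise quantities; the hypothesis tests the abstract develops concern exactly such aggregates.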
Logical Scaling in Formal Concept Analysis
Prediger, Susanne
.): Conceptual Structures: Fulfilling Peirce's Dream. Proceedings of the ICCS'97, LNAI 1257, Springer, Berlin 1997. Attribute-value relationships are a frequently used data structure to code real-world problems. In formal concept analysis, they are formalized
Large-scale latent semantic analysis
Andrew McGregor Olney
2011-01-01
Latent semantic analysis (LSA) is a statistical technique for representing word meaning that has been widely used for making semantic similarity judgments between words, sentences, and documents. In order to perform an LSA analysis, an LSA space is created in a two-stage procedure, involving the construction of a word frequency matrix and the dimensionality reduction of that matrix through singular
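The two-stage procedure can be sketched with a toy corpus. The documents are invented, and numpy's dense SVD stands in for the sparse, large-scale machinery a production LSA system would use:

```python
import numpy as np

docs = ["the cat sat", "the cat ran", "dogs ran fast"]  # hypothetical corpus
vocab = sorted({w for d in docs for w in d.split()})

# Stage 1: word-by-document frequency matrix.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Stage 2: dimensionality reduction via truncated SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # k-dimensional document vectors

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents sharing vocabulary land closer together in the reduced space.
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))
```

Real LSA spaces are built from term-document matrices with tens of thousands of rows and use a few hundred retained dimensions, but the frequency-matrix-then-SVD structure is the same.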
SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION
John Crepeau
2011-03-01
Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.
Kaethner, Christian; Ahlborg, Mandy; Buzug, Thorsten M.; Knopp, Tobias; Sattel, Timo F.
2014-01-28
Magnetic Particle Imaging (MPI) is a tomographic imaging modality capable of visualizing tracers using magnetic fields. A high magnetic gradient strength is mandatory to achieve reasonable image quality; therefore, power optimization of the coil configuration is essential. In order to realize a multi-dimensional, efficient gradient field generator, the following improvements over conventionally used Maxwell coil configurations are proposed: (i) curved rectangular coils, (ii) interleaved coils, and (iii) multi-layered coils. Combining these adaptations results in a total power reduction of three orders of magnitude, which is an essential step toward the feasibility of building full-body human MPI scanners.
NASA Astrophysics Data System (ADS)
Ducrot, Arnaud
2015-04-01
This paper is concerned with the study of the asymptotic behaviour of a multi-dimensional Fisher–KPP equation posed in an asymptotically homogeneous medium and supplemented with a compactly supported initial datum. We derive precise estimates for the location of the front before proving the convergence of the solutions towards the travelling front. In particular, we show that the location of the front depends drastically on the rate at which the medium becomes homogeneous at infinity. A fast rate of convergence only changes the location by a constant, while a slower rate induces an additional logarithmic delay.
Scaling analysis of coral reef systems: an approach to problems of scale
NASA Astrophysics Data System (ADS)
Hatcher, Bruce G.; Imberger, Jorg; Smith, Stephen V.
1987-04-01
Dimensional analysis and scaling are related, semi-formal procedures for capturing the essential process(es) controlling the behaviour of a complex system, and for describing the functional relationships between them. The techniques involve the parameterization of natural processes, the identification of the temporal and spatial scales of variation of processes, and the evaluation of potential interactions between processes referenced to those scales using non-dimensional (scaled) parameters. Scaling approaches are increasingly being applied to a broad range of marine ecological problems, with the aims of assessing the relative importance of physical and biological parameters in controlling variation in process rates, and placing limits on the ability of one process to affect another. The value of the approach to coral reef research lies in the conceptualization of relationships between discipline-specific processes, and the evaluation of scale-dependent processes across the large range of spatial and temporal scales which pertain to coral reefs. Characteristic scales of physical, geological and biological processes exhibit different patterns of distribution along the temporal dimension. Scaling arguments based on examples from reef systems indicate that a large group of biological and biogeochemical processes are strongly influenced by hydrodynamic processes occurring at similar time scales, within the range from about one hour to one year. We argue that scaling approaches to process-related problems are prerequisite to interdisciplinary research on coral reefs.
Metal analysis of scales taken from Arctic grayling.
Farrell, A P; Hodaly, A H; Wang, S
2000-11-01
This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca, Ni, Zn, As, Se, Cd, Sb, Hg, and Pb) were measured in 10 to 16 ablation sites (5 microm radius) on each scale sample from Arctic grayling (Thymallus arcticus) (n = 10 fish). Ca, Mg, and Zn were at physiological levels in all scale samples. Se, Hg, and As were also detected in all scale samples. Only Cd was below the detection limits of the LA-ICPMS for all samples, but some of the samples were below detection limits for Sb, Pb, and Ni. The mean scale concentrations for Se, Hg, and Pb were not significantly different from the muscle concentrations, and individual fish values were within fourfold of each other. Cd was not detected in either muscle or scale tissue, whereas Sb was detected at low levels in some scale samples but not in any of the muscle samples. Similarly, As was detected in all scale samples but not in muscle, and Ni was detected in almost all scale samples but in only one of the muscle samples. Therefore, there was good qualitative and quantitative agreement between the metal concentrations in scale and muscle tissues, with LA-ICPMS analysis of scales appearing to be a more sensitive method of detecting the body burden of Ni and As when compared with muscle tissue. Correlation analyses, performed for Pb, Hg, and Se concentrations, revealed that the scale concentrations for these three metals generally exceeded those of the muscle at low muscle concentrations. The LA-ICPMS analysis of scales was able to resolve significant spatial differences in metal concentrations within a fish scale.
We conclude that metal analysis of fish scales using LA-ICPMS shows considerable promise as a nonlethal analytical tool to assess metal body burden in fish that could possibly generate a historic record of metal exposure. However, comprehensive validation experiments are still needed. PMID:11031313
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Multiple Time Scale Complexity Analysis of Resting State FMRI
Smith, Robert X.; Yan, Lirong; Wang, Danny J.J.
2014-01-01
The present study explored multi-scale entropy (MSE) analysis to investigate the entropy of resting state fMRI signals across multiple time scales. MSE analysis was developed to distinguish random noise from complex signals, since the entropy of the former decreases with longer time scales while the latter maintains its entropy due to a "self-resemblance" across time scales. A long resting state BOLD fMRI (rs-fMRI) scan with 1000 data points was performed on five healthy young volunteers to investigate the spatial and temporal characteristics of entropy across multiple time scales. A shorter rs-fMRI scan with 240 data points was performed on a cohort of subjects consisting of healthy young (age 23±2 years, n=8) and aged volunteers (age 66±3 years, n=8) to investigate the effect of healthy aging on the entropy of rs-fMRI. The results showed that the MSE of gray matter, rather than white matter, closely resembles that of 1/f noise over multiple time scales. By filtering out high-frequency random fluctuations, MSE analysis is able to reveal enhanced contrast in entropy between gray and white matter, as well as between age groups, at longer time scales. Our data support the use of MSE analysis as a validation metric for quantifying the complexity of rs-fMRI signals. PMID:24242271
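The MSE procedure (coarse-graining followed by sample entropy at each scale) can be sketched as below. This is a naive O(n²) illustration with assumed parameters (m=2, r=0.2·SD), not the authors' implementation:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (MSE coarse-graining)."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

def sample_entropy(x, m, r):
    """Naive O(n^2) sample entropy with tolerance r (Chebyshev distance)."""
    x = np.asarray(x, dtype=float)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d <= r) - len(t)  # count matching pairs, excluding self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(1)
noise = rng.standard_normal(1000)
r = 0.2 * noise.std()  # tolerance fixed from the original series, as in standard MSE
mse = [sample_entropy(coarse_grain(noise, s), 2, r) for s in (1, 2, 4, 8)]
# White noise loses entropy at coarser scales; a complex signal would hold its level.
print(mse[0] > mse[-1])
```

Keeping the tolerance r fixed from the original series is the detail that makes white-noise entropy fall with scale; recomputing r per scale would largely flatten the curve.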
Some Fit Issues in Rating Scale Analysis.
ERIC Educational Resources Information Center
Masters, Geoff N.; Wright, Benjamin D.
The analysis of fit of data to a measurement model for graded responses is described. The model is an extension of Rasch's dichotomous model to formats which provide more than two levels of response to items. The model contains one parameter for each person and one parameter for each "step" in an item. A dichotomously-scored item provides only one…
Simple scaling analysis of active channel patterns in Fiumara environment
NASA Astrophysics Data System (ADS)
De Bartolo, Samuele; Fallico, Carmine; Ferrari, Ennio
2015-03-01
A simple scaling analysis was performed on experimental data relative to a riverbed reach of the Allaro Fiumara, a fluvial environment typical of Southern Italy. For this purpose, a simplified geometrical approach was followed to determine the spatial distribution of the number of active channels for the river stretch considered. In particular, section lines crossing the braided network skeleton at distances ranging from 5 to 200 m were considered. First, a probabilistic analysis of the experimental data was carried out using a truncated Poisson distribution to characterize the examined river morphologically. Afterward, a scaling analysis was performed to investigate the existence of a possible multimodal behaviour of the number of active channels and to identify the corresponding cutoff values. For this second approach, based on so-called standard coarse-graining analysis, we defined a power law giving the probability distribution of the number of active channels at varying spatial partitions (distance between consecutive sections). In this way, it was possible to verify the existence of a bimodal scaling behaviour. Moreover, the cutoff limits that characterize the bimodal behaviour of the active channels were found for all partition distances from 5 to 100 m, and the corresponding shape and scale parameters were also determined. A comparison of the results obtained by the statistical approach and the scaling analysis was carried out. The variability of the characteristic parameters of the Poisson and power-type laws with scale was also investigated.
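A truncated Poisson distribution suits channel counts because zero active channels is never observed on a section line. Assuming the zero-truncated form, the pmf is P(k) = λ^k e^{-λ} / (k! (1 − e^{-λ})) for k ≥ 1; a quick numerical check with an illustrative λ (not a fitted value from the study):

```python
import math

def trunc_poisson_pmf(k, lam):
    """Zero-truncated Poisson pmf P(K = k | K >= 1), e.g. for active-channel counts."""
    if k < 1:
        return 0.0
    return (lam ** k) * math.exp(-lam) / (math.factorial(k) * (1.0 - math.exp(-lam)))

lam = 2.3  # illustrative rate parameter
total = sum(trunc_poisson_pmf(k, lam) for k in range(1, 50))
print(round(total, 6))  # the pmf sums to 1 over k >= 1
```

Fitting λ to observed section counts (e.g. by maximum likelihood) would complete the probabilistic characterization before the coarse-graining step.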
Design and rigorous analysis of transformation-optics scaling devices.
Jiang, Wei Xiang; Xu, Bai Bing; Cheng, Qiang; Cui, Tie Jun; Yu, Guan Xia
2013-08-01
Scaling devices that can shrink or enlarge an object are designed using transformation optics. The electromagnetic scattering properties of such scaling devices with anisotropic parameters are rigorously analyzed using the eigenmode expansion method. If the radius of the virtual object is smaller than that of the real object, it is a shrinking device with positive material parameters; if the radius of the virtual object is larger than the real one, it is an enlarging device with positive or negative material parameters. Hence, a scaling device can make a dielectric or metallic object look smaller or larger. The rigorous analysis shows that the scattering coefficients of the scaling devices are the same as those of the equivalent virtual objects. When the radius of the virtual object approaches zero, the scaling device will be an invisibility cloak. In such a case, the scattering effect of the scaling device will be sensitive to material parameters of the device. PMID:24323231
Assessment of RELAP5-3D multi-dimensional component model using data from LOFT Test L2-5
Davis, C.B.
1998-07-01
The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the Loss-of-Fluid Test (LOFT) L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident (LOCA). Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D computer code. The calculations simulated the blowdown, refill, and reflood portions of the transient. The calculated thermal-hydraulic response of the primary coolant system was generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP5/MOD3.
Psychometric Analysis of Role Conflict and Ambiguity Scales in Academia
ERIC Educational Resources Information Center
Khan, Anwar; Yusoff, Rosman Bin Md.; Khan, Muhammad Muddassar; Yasir, Muhammad; Khan, Faisal
2014-01-01
A comprehensive Psychometric Analysis of Rizzo et al.'s (1970) Role Conflict & Ambiguity (RCA) scales were performed after its distribution among 600 academic staff working in six universities of Pakistan. The reliability analysis includes calculation of Cronbach Alpha Coefficients and Inter-Items statistics, whereas validity was determined by…
Vigelius, Matthias; Meyer, Bernd
2012-01-01
For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001
Miller, Gregory Kent; Petti, David Andrew; Maki, John Thomas; Varacalle, Dominic Joseph
2003-04-01
The fundamental design for a gas-cooled reactor relies on the behavior of the coated particle fuel. The coating layers, termed the TRISO coating, act as a mini-pressure vessel that retains fission products. Results of US irradiation experiments show that many more fuel particles have failed than can be attributed to one-dimensional pressure vessel failures alone. Post-irradiation examinations indicate that multi-dimensional effects, such as the presence of irradiation-induced shrinkage cracks in the inner pyrolytic carbon layer, contribute to these failures. To address these effects, the methods of prior one-dimensional models are expanded to capture the stress intensification associated with multi-dimensional behavior. An approximation of the stress levels enables the treatment of statistical variations in numerous design parameters and Monte Carlo sampling over a large number of particles. The approach is shown to make reasonable predictions when used to calculate failure probabilities for irradiation experiments of the New Production – Modular High Temperature Gas Cooled Reactor Program.
Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR
Ishii, M.; Ravankar, S.T.; Dowlati, R.
1996-04-01
The scaled facility design was based on a three-level scaling method. The first level is based on the well-established approach obtained from the integral response function, namely integral scaling; this level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second level scaling is for the boundary flow of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4 height and 1/100 area ratio scaling, which corresponds to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time runs twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all the major components of the SBWR (Simplified Boiling Water Reactor), including safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
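The quoted ratios are mutually consistent. As a quick check, assuming (as in integral scaling of gravity-driven flow) that time scales with the square root of the height ratio and that power scales as volume over time:

```python
from math import sqrt

height_ratio = 1 / 4     # model : prototype height
area_ratio = 1 / 100     # model : prototype flow area

volume_ratio = height_ratio * area_ratio   # 1/400, matching the stated volume scale
time_ratio = sqrt(height_ratio)            # 1/2: model time runs twice as fast
power_ratio = volume_ratio / time_ratio    # energy ~ volume, power = energy / time -> 1/200

print(volume_ratio, time_ratio, power_ratio)
```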
Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A; Philip, Bobby; Pannala, Sreekanth
2014-01-01
A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double-layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. This formulation, based on a volume-averaging process, is a widely used concept for multiphase flow equations ([28], [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22], [39], [12]. The formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for the explicit interfacial boundary conditions previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing upscaled parameters from lower-length-scale simulations and experiments. Charge transport and the resulting performance characteristics of the supercapacitor, including irregular geometric configurations such as porous electrodes, can be readily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multi-dimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrate how this new formulation can be applied to non-planar supercapacitors.
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles
NASA Technical Reports Server (NTRS)
Brown, Clifford; Bridges, James
2003-01-01
Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F
2009-01-01
Background: Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods: Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results: Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion: In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings. PMID:19545445
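For context, the dichotomous Rasch model underlying this kind of analysis gives the probability that a person of ability theta endorses an item of difficulty b. A minimal illustrative sketch (the study itself used dedicated Rasch software):

```python
from math import exp

def rasch_probability(theta, b):
    """Dichotomous Rasch model: P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return exp(theta - b) / (1 + exp(theta - b))

# A person whose ability equals the item difficulty endorses it with probability 0.5.
print(rasch_probability(1.0, 1.0))  # 0.5
```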
Geographical scale effects on the analysis of leptospirosis determinants.
Gracie, Renata; Barcellos, Christovam; Magalhães, Mônica; Souza-Santos, Reinaldo; Barrocas, Paulo Rubens Guimarães
2014-01-01
Leptospirosis displays a great diversity of routes of exposure, reservoirs, etiologic agents, and clinical symptoms. It occurs almost worldwide, but its pattern of transmission varies depending on where it happens. Climate change may increase the number of cases, especially in developing countries, like Brazil. Spatial analysis studies of leptospirosis have highlighted the importance of socioeconomic and environmental context. Hence, the choice of the geographical scale and unit of analysis used in the studies is pivotal, because it restricts the indicators available for the analysis and may bias the results. In this study, we evaluated which environmental and socioeconomic factors, typically used to characterize the risks of leptospirosis transmission, are more relevant at different geographical scales (i.e., regional, municipal, and local). Geographic Information Systems were used for data analysis. Correlations between leptospirosis incidence and several socioeconomic and environmental indicators were calculated at different geographical scales. At the regional scale, the strongest correlations were observed between leptospirosis incidence and the number of people living in slums, or the percent of the area densely urbanized. At the municipal scale, there were no significant correlations. At the local level, the percent of the area prone to flooding best correlated with leptospirosis incidence. PMID:25310536
Multi-scale analysis for environmental dispersion in wetland flow
NASA Astrophysics Data System (ADS)
Wu, Zi; Li, Z.; Chen, G. Q.
2011-08-01
Presented in this work is a multi-scale analysis for longitudinal evolution of contaminant concentration in a fully developed flow through a shallow wetland channel. An environmental dispersion model for the mean concentration is devised as an extension of Taylor's classical formulation by a multi-scale analysis. Corresponding environmental dispersivity is found identical to that determined by the method of concentration moments. For typical contaminant constituents of chemical oxygen demand, biochemical oxygen demand, total phosphorus, total nitrogen and heavy metal, the evolution of contaminant cloud is illustrated with the critical length and duration of the contaminant cloud with constituent concentration beyond some given environmental standard level.
Full-scale system impact analysis: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
The Digital Document Storage (DDS) Full Scale System can provide cost-effective electronic document storage, retrieval, hard-copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this impact analysis. It is highly recommended that NASA proceed with a phased communications requirements analysis to ensure that adequate communications service can be supplied at a reasonable cost, in order to validate recent working assumptions upon which the success of the DDS Full Scale System is dependent.
The arrow of time, complexity and the scale free analysis
Dhurjati Prasad Datta; Santanu Raut
2010-01-10
The origin of complex structures, randomness, and irreversibility is analyzed in the scale-free SL(2,R) analysis, an extension of ordinary analysis based on the recently uncovered scale-free $C^{2^n-1}$ solutions to linear ordinary differential equations. The role of intelligent decision making is discussed. We offer an explanation of the recently observed universal renormalization group dynamics at the edge of chaos in logistic maps. The present formalism is also applied to give a first-principles explanation of $1/f$ noise in electrical circuits and solid-state devices. Its relevance to heavy-tailed (hyperbolic) distributions is pointed out.
Scale analysis using X-ray microfluorescence and computed radiography
NASA Astrophysics Data System (ADS)
Candeias, J. P.; de Oliveira, D. F.; dos Anjos, M. J.; Lopes, R. T.
2014-02-01
Scale deposits are among the most common and most troublesome damage problems in the oil field and can occur in both production and injection wells. They occur because the minerals in produced water exceed their saturation limit as temperatures and pressures change. Scale can vary in appearance from hard crystalline material to soft, friable material, and the deposits can contain other minerals and impurities such as paraffin, salt and iron. In severe conditions, scale creates a significant restriction, or even a plug, in the production tubing. This study was conducted to identify the elements present in scale samples and to quantify the thickness of the scale layer using synchrotron radiation micro-X-ray fluorescence (SR-µXRF) and computed radiography (CR) techniques. The SR-µXRF results showed that the elements found in the scale samples were strontium, barium, calcium, chromium, sulfur and iron. The CR analysis showed that the thickness of the scale layer could be identified and quantified accurately. These results can help in decision making about removing the deposited scale.
Combined process automation for large-scale EEG analysis.
Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E
2012-01-01
Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. PMID:22136696
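The linked automation of the five steps above can be sketched as a simple function pipeline; the step functions below are hypothetical placeholders showing the chaining pattern, not the authors' implementation.

```python
from functools import reduce

def pipeline(*steps):
    """Compose processing steps left-to-right into one callable."""
    return lambda data: reduce(lambda d, step: step(d), steps, data)

# Hypothetical placeholders standing in for the analysis stages.
align = lambda d: {**d, "aligned": True}                     # (1) align pre/post intervals
band_filter = lambda d: {**d, "bands": ["theta", "gamma"]}   # (2) band-frequency waveforms
spike_sort = lambda d: {**d, "spikes": 42}                   # (3) spike sorting

analyze = pipeline(align, band_filter, spike_sort)
result = analyze({"raw": "eeg"})
print(result["spikes"])  # 42
```

Linking the steps this way means each recording passes through every stage in order with no manual hand-off between them.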
DEVELOPMENT OF MOTIVATION SCALE - CLINICAL VALIDATION WITH ALCOHOL DEPENDENTS
Neeliyara, Teresa; Nagalakshmi, S.V.
1994-01-01
This study focuses on the development of a comprehensive multi-dimensional scale for assessing motivation for change in the alcohol-dependent population. After establishing face validity, the items evolved were administered to a normal sample of 600 male subjects in whom psychiatric illness was ruled out. The data thus obtained were subjected to factor analysis. Six factors were obtained, which accounted for 55.2% of the variance. These together formed an 80-item, five-point scale, and norms were established on a sample of 600 normal subjects. Further clinical validation was established on 30 alcohol-dependent subjects and 30 normals. The status of motivation was found to be inadequate in alcohol-dependent individuals as compared to the normals. A split-half reliability analysis was carried out and the tool was found to be highly reliable. PMID:21743674
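Split-half reliability of the kind reported here is conventionally computed by correlating two half-test scores and stepping the correlation up with the Spearman-Brown formula; a minimal sketch (illustrative only, not the study's computation):

```python
import numpy as np

def split_half_reliability(item_scores):
    """Correlate odd- vs even-item half scores, then apply the Spearman-Brown step-up."""
    odd = item_scores[:, 0::2].sum(axis=1)
    even = item_scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)  # estimated reliability of the full-length test

# Sanity check: duplicating every item makes the two halves identical, so reliability -> 1.
rng = np.random.default_rng(1)
base = rng.normal(size=(50, 5))
items = np.repeat(base, 2, axis=1)
print(round(split_half_reliability(items), 3))  # 1.0
```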
NASA Astrophysics Data System (ADS)
Sielk, Jan; von Horsten, H. Frank; Hartke, Bernd; Rauhut, Guntram
2011-02-01
A multi-coordinate expansion of potential energy surfaces has been used to perform quantum dynamical calculations for reactions showing double-minimum potentials. Starting from the transition state, a fully automated algorithm for exploring the multi-dimensional potential energy surface, represented by arbitrary internal or normal coordinates, allows for an accurate description of the regions relevant to vibrational dynamics calculations. An interface to our multi-purpose quantum-dynamics program MrPropa enables routine calculations for simple chemical reactions. Illustrative calculations involving potential energy surfaces obtained from explicitly correlated coupled-cluster calculations, CCSD(T)-F12a, are provided for the tunneling splittings in the isotopologues of hydrogen peroxide and for reaction dynamics based on the enantiomeric inversion of PHDCl.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], combined with the fluxes presented in [3]. These interpolants use the weighted essentially non-oscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those in previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].
A Rasch analysis of the Statistical Anxiety Rating Scale
Teman, Eric D
2013-01-01
The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. The paper concludes with several recommendations for researchers considering using the STARS. PMID:24064581
NASA Astrophysics Data System (ADS)
Ren, Xiaodong; Xu, Kun; Shyy, Wei; Gu, Chunwei
2015-07-01
This paper presents a high-order discontinuous Galerkin (DG) method based on a multi-dimensional gas kinetic evolution model for viscous flow computations. Generally, DG methods for equations with higher-order derivatives must transform the equations into a first-order system in order to avoid the so-called "non-conforming problem". In the traditional DG framework, the inviscid and viscous fluxes are treated numerically in different ways. Unlike the traditional DG approaches, the current method adopts a kinetic evolution model for both inviscid and viscous flux evaluations uniformly. By using a multi-dimensional gas kinetic formulation, we can obtain a spatially and temporally dependent gas distribution function for the flux integration inside the cell and at the cell interface, which is distinguishable from the Gaussian quadrature point flux evaluation in the traditional DG method. Besides the initial higher-order non-equilibrium states inside each control volume, a Linear Least Square (LLS) method is used for the reconstruction of smooth distributions of macroscopic flow variables around each cell interface in order to construct the corresponding equilibrium state. Instead of separating the space and time integrations and using the multistage Runge-Kutta time-stepping method for time accuracy, the current method integrates the flux function in space and time analytically, which saves computational time. Many test cases in two and three dimensions, which include high Mach number compressible viscous and heat-conducting flows and low-speed high Reynolds number laminar flows, are presented to demonstrate the performance of the current scheme.
Rose, Donald; Bodor, J. Nicholas; Hutchinson, Paul L.; Swalm, Chris M.
2010-01-01
Research on neighborhood food access has focused on documenting disparities in the food environment and on assessing the links between the environment and consumption. Relatively few studies have combined in-store food availability measures with geographic mapping of stores. We review research that has used these multi-dimensional measures of access to explore the links between the neighborhood food environment and consumption or weight status. Early research in California found correlations between red meat, reduced-fat milk, and whole-grain bread consumption and shelf space availability of these products in area stores. Subsequent research in New York confirmed the low-fat milk findings. Recent research in Baltimore has used more sophisticated diet assessment tools and store-based instruments, along with controls for individual characteristics, to show that low availability of healthy food in area stores is associated with low-quality diets of area residents. Our research in southeastern Louisiana has shown that shelf space availability of energy-dense snack foods is positively associated with BMI after controlling for individual socioeconomic characteristics. Most of this research is based on cross-sectional studies. To assess the direction of causality, future research testing the effects of interventions is needed. We suggest that multi-dimensional measures of the neighborhood food environment are important to understanding these links between access and consumption. They provide a more nuanced assessment of the food environment. Moreover, given the typical duration of research project cycles, changes to in-store environments may be more feasible than changes to the overall mix of retail outlets in communities. PMID:20410084
Analysis of a scaling rate meter for geothermal systems
Kreid, D.K.
1980-03-01
A research project was conducted to investigate an experimental technique for measuring the rate of formation of mineral scale and corrosion in geothermal systems. A literature review was performed first to identify and evaluate available techniques for measuring scale in heat transfer equipment. As a result of these evaluations, a conceptual design was proposed for a geothermal Scaling Rate Meter (SRM) that would combine features of certain techniques used (or proposed for use) in other applications. An analysis was performed to predict the steady-state performance and expected experimental uncertainty of the proposed SRM. Sample computations were then performed to illustrate the system performance for conditions typical of a geothermal scaling application. Based on these results, recommendations are made regarding prototype SRM construction and testing.
Exploratory Factor Analysis of African Self-Consciousness Scale Scores
ERIC Educational Resources Information Center
Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.
2012-01-01
This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…
Data Mining: Data Analysis on a Grand Scale?
Smyth, Padhraic
Information and Computer Science. Statistical Methods in Medical Research, September 2000. Modern data mining has evolved largely as a result of efforts by computer scientists to address the needs of data "owners" in extracting useful…
Galaxy: A platform for interactive large-scale genome analysis
Miller, Webb
Giardine, Belinda; Riemer, Cathy; et al. Genomic and functional data pose a challenge for biomedical researchers, and many analyses (e.g., "… but not in the dog genome") still rely on programming and database skills. To solve this problem we designed Galaxy, an interactive system for large-scale genome analysis.
CRYSTAL DISSOLUTION AND PRECIPITATION IN POROUS MEDIA: PORE SCALE ANALYSIS
Eindhoven, Technische Universiteit
C. J. van Duijn and I. S.… We consider crystal dissolution and precipitation in porous media, first in general domains, for which existence of weak solutions is proven in the class of porous-media transport models with (non-)equilibrium adsorption. Such models have received much…
Large-scale data analysis using the Wigner function
NASA Astrophysics Data System (ADS)
Earnshaw, R. A.; Lei, C.; Li, J.; Mugassabi, S.; Vourdas, A.
2012-04-01
Large-scale data are analysed using the Wigner function. It is shown that the 'frequency variable' provides important information, which is lost with other techniques. The method is applied to 'sentiment analysis' in data from social networks and also to financial data.
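The discrete Wigner (Wigner-Ville) distribution behind this approach can be computed directly for a complex analytic signal; a minimal numpy sketch, not the authors' code. Note the characteristic lag doubling: a pure tone at frequency bin f0 concentrates at bin 2*f0.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a complex (analytic) signal."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)           # largest lag keeping n +/- m inside the signal
        lags = np.arange(-mmax, mmax + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[lags % N] = x[n + lags] * np.conj(x[n - lags])
        W[n] = np.fft.fft(kernel).real     # frequency profile at time n
    return W

# A pure tone at frequency bin f0 peaks at bin 2*f0 (lag doubling).
N, f0 = 64, 8
x = np.exp(2j * np.pi * f0 * np.arange(N) / N)
W = wigner_ville(x)
print(np.argmax(W[N // 2]))  # 16
```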
A System for Ranking Organizations using Social Scale Analysis
Davulcu, Hasan
Tikves, Sukru; Banerjee, Sujogya; et al. Muslim radical movements have complex origins and seek transformations that would change the existing social order fundamentally. Organizations in Muslim societies exhibit distinct combinations of discrete states comprising various social and political…
LARGE-SCALE NORMAL COORDINATE ANALYSIS ON DISTRIBUTED
Raghavan, Padma
…structures at the atomic level. In particular, molecular vibrations at low temperature can be characterized by normal coordinate analysis (NCA). Applications of NCA include characterizing the thermal stability of polymer materials (Fukui et al.) and assessing…
Dimensional Analysis, Scaling, and Similarity 1. Systems of units
Hunter, John K.
…introduce current as a fundamental unit (measured, for example, in Ampères in the SI system) or charge. The SI system is often less convenient for theoretical work than the cgs system, and both systems remain…
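The units bookkeeping these lecture notes describe can be mechanized by tracking exponent vectors over the base dimensions; a toy sketch (SI-flavoured, all names hypothetical):

```python
from collections import namedtuple

# Exponents over the base dimensions: length, mass, time, current.
Dim = namedtuple("Dim", "L M T I")

def mul(a, b):
    """Dimension of a product of two quantities."""
    return Dim(*(x + y for x, y in zip(a, b)))

def power(a, n):
    """Dimension of a quantity raised to the n-th power."""
    return Dim(*(x * n for x in a))

LENGTH = Dim(1, 0, 0, 0)
TIME = Dim(0, 0, 1, 0)
ACCEL = mul(LENGTH, power(TIME, -2))  # gravitational acceleration g: L T^-2

# Pendulum check: L / g has dimensions T^2, so sqrt(L/g) is a time.
print(mul(LENGTH, power(ACCEL, -1)))  # Dim(L=0, M=0, T=2, I=0)
```

Mechanical checks like this catch dimensionally inconsistent formulas before any numbers are plugged in.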
Amadi, Ovid Charles
2013-01-01
The requirement that individual cells be able to communicate with one another over a range of length scales is a fundamental prerequisite for the evolution of multicellular organisms. Often diffusible chemical molecules ...
Rasch Analysis of the Fullerton Advanced Balance (FAB) Scale
Fiedler, Roger C.; Rose, Debra J.
2011-01-01
ABSTRACT Purpose: This cross-sectional study explores the psychometric properties and dimensionality of the Fullerton Advanced Balance (FAB) Scale, a multi-item balance test for higher-functioning older adults. Methods: Participants (n=480) were community-dwelling adults able to ambulate independently. Data gathering consisted of survey and balance performance assessment. Psychometric properties were assessed using Rasch analysis. Results: Mean age of participants was 76.4 (SD=7.1) years. Mean FAB Scale scores were 24.7/40 (SD=7.5). Analyses for scale dimensionality showed that 9 of the 10 items fit a unidimensional measure of balance. Item 10 (Reactive Postural Control) did not fit the model. The reliability of the scale to separate persons was 0.81 out of 1.00; the reliability of the scale to separate items in terms of their difficulty was 0.99 out of 1.00. Cronbach's alpha for a 10-item model was 0.805. Items of differing difficulties formed a useful ordinal hierarchy for scaling patterns of expected balance ability scoring for a normative population. Conclusion: The FAB Scale appears to be a reliable and valid tool to assess balance function in higher-functioning older adults. The test was found to discriminate among participants of varying balance abilities. Further exploration of concurrent validity of Rasch-generated expected item scoring patterns should be undertaken to determine the test's diagnostic and prescriptive utility. PMID:22210989
Using Qualitative Methods to Inform Scale Development
ERIC Educational Resources Information Center
Rowan, Noell; Wulff, Dan
2007-01-01
This article describes the process by which one study utilized qualitative methods to create items for a multi-dimensional scale to measure twelve-step program affiliation. The process included interviewing fourteen addicted persons while in twelve-step-focused treatment about specific pros (things they like or would miss out on by not being…
Order Analysis: An Inferential Model of Dimensional Analysis and Scaling
ERIC Educational Resources Information Center
Krus, David J.
1977-01-01
Order analysis is discussed as a method for description of formal structures in multidimensional space. Its algorithm was derived using a combination of psychometric theory, formal logic theory, information theory, and graph theory concepts. The model provides for adjustment of its sensitivity to random variation. (Author/JKS)
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung
2013-01-01
Due to the scarcity of cross-cultural comparative studies in exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two…
McDonald, Richard; Nelson, Jonathan; Kinzel, Paul; Conaway, Jeff
2006-01-01
The Multi-Dimensional Surface-Water Modeling System (MD_SWMS) is a Graphical User Interface for surface-water flow and sediment-transport models. The capabilities of MD_SWMS for developing models include: importing raw topography and other ancillary data; building the numerical grid and defining initial and boundary conditions; running simulations; visualizing results; and comparing results with measured data.
ERIC Educational Resources Information Center
Kranz, Bella
Described are identification procedures, a multi-dimensional screening device (MDSD), and special curriculum for gifted and talented students in Fairfax County, Virginia. Noted are discriminatory aspects of intelligence testing and teacher nomination in identifying gifted children. Described is the development of the MDSD which involves rating…
Perry, Dewayne E.
Workshop on Multi-Dimensional Separation of Concerns in Software Engineering (Peri Tarr, William Harrison, Harold Ossher, IBM T. J. Watson Research Center), ACM SIGSOFT Software Engineering Notes, vol. 26, no. 1, January 2001, p. 78. One dimension of concern in object-oriented programming is data or class; each concern in this dimension is a data type defined and encapsulated…
Sensitivity in metric scaling and analysis of distance.
Krzanowski, W J
2006-03-01
Assessing the sensitivity or sampling variability of multivariate ordination methods is essential if inferences are to be drawn from the analysis, but such assessment has to date been notably absent in many applications of multidimensional scaling (MDS). The only available technique seems to be the one by De Leeuw and Meulman, who proposed a special jackknife in a general MDS setting, but this method does not appear to have been widely used to date. A possible reason for this is that it is perceived to be computationally daunting. However, if attention is focused on classical metric scaling (principal coordinate analysis), then known analytical results can be used and the apparent computational complexity disappears. The purpose of this article is to set out these results, to indicate their use in more general analysis of distance, and to illustrate the methodology on some biometric examples. PMID:16542251
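Classical metric scaling (principal coordinate analysis) is exactly the kind of closed-form computation the article builds on: double-centre the squared distance matrix and take its top eigenvectors. A minimal illustrative sketch (generic method, not the article's jackknife machinery):

```python
import numpy as np

def pcoa(D, n_axes=2):
    """Classical metric scaling (principal coordinate analysis).

    D : (n, n) matrix of pairwise distances.
    Returns point coordinates whose Euclidean distances approximate D.
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pos = eigvals[:n_axes].clip(min=0.0)         # drop small negative noise
    return eigvecs[:, :n_axes] * np.sqrt(pos)

# Three points on a line: one axis recovers the distances exactly.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = pcoa(D, n_axes=1)
```

Because this configuration is exactly Euclidean and one-dimensional, the recovered coordinates reproduce the input distances to machine precision.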
Handbook of Scaling Methods in Aquatic Ecology: Measurement, Analysis, Simulation
NASA Astrophysics Data System (ADS)
Marrasé, Celia
2004-03-01
Researchers in aquatic sciences have long been interested in describing temporal and biological heterogeneities at different observation scales. During the 1970s, scaling studies received a boost from the application of spectral analysis to ecological sciences. Since then, new insights have evolved in parallel with advances in observation technologies and computing power. In particular, during the last two decades, novel theoretical achievements were facilitated by the use of microstructure profilers, the application of mathematical tools derived from fractal and wavelet analyses, and the increase in computing power that allowed more complex simulations. The idea of publishing the Handbook of Scaling Methods in Aquatic Ecology arose out of a special session of the 2001 Aquatic Science Meeting of the American Society of Limnology and Oceanography. The publication of the book is timely, because it compiles much of the work done in these last two decades. The book comprises three sections: measurements, analysis, and simulation. Each contains some review chapters and a number of more specialized contributions. The contents are multidisciplinary and focus on biological and physical processes and their interactions over a broad range of scales, from micro-layers to ocean basins. The handbook topics include high-resolution observation methodologies, as well as applications of different mathematical tools for analysis and simulation of spatial structures, time variability of physical and biological processes, and individual organism behavior. The scientific background of the authors is highly diverse, ensuring broad interest for the scientific community.
Time scale analysis of a digital flight control system
NASA Technical Reports Server (NTRS)
Naidu, D. S.; Price, D. B.
1986-01-01
In this paper, consideration is given to the fifth order discrete model of an aircraft (longitudinal) control system which possesses three slow (velocity, pitch angle and altitude) and two fast (angle of attack and pitch angular velocity) modes and exhibits a two-time scale property. Using the recent results of the time scale analysis of discrete control systems, the high-order discrete model is decoupled into low-order slow and fast subsystems. The results of the decoupled system are found to be in excellent agreement with those of the original system.
SINEX: SCALE shielding analysis GUI for X-Windows
Browman, S.M.; Barnett, D.L.
1997-12-01
SINEX (SCALE Interface Environment for X-windows) is an X-Windows graphical user interface (GUI) that is being developed for performing SCALE radiation shielding analyses. SINEX enables the user to generate input for the SAS4/MORSE and QADS/QAD-CGGP shielding analysis sequences in SCALE. The code features will facilitate the use of both analytical sequences with a minimum of additional user input. Included in SINEX is the capability to check the geometry model by generating two-dimensional (2-D) color plots of the geometry model using a new version of the SCALE module, PICTURE. The most sophisticated feature, however, is the 2-D visualization display that provides a graphical representation on screen as the user builds a geometry model. This capability to interactively build a model will significantly increase user productivity and reduce user errors. SINEX will perform extensive error checking and will allow users to execute SCALE directly from the GUI. The interface will also provide direct on-line access to the SCALE manual.
Evidence for a Multi-Dimensional Latent Structural Model of Externalizing Disorders
ERIC Educational Resources Information Center
Witkiewitz, Katie; King, Kevin; McMahon, Robert J.; Wu, Johnny; Luk, Jeremy; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Greenberg, Mark T.; Lochman, John E.; Pinderhughes, Ellen E.
2013-01-01
Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the…
Efficient organization and access of multi-dimensional datasets on tertiary storage systems
Ling Tony Chen; R. Drach; M. Keating; S. Louis; Doron Rotem; Arie Shoshani
1995-01-01
This paper addresses the problem of urgently needed data management techniques for efficiently retrieving requested subsets of large datasets from mass storage devices. This problem is especially critical for scientific investigators who need ready access to the large volume of data generated by large-scale supercomputer simulations and physical experiments as well as the automated collection of observations by monitoring devices
Efficient High Order Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations: Talk Slides
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Brian R. (Technical Monitor)
2002-01-01
This viewgraph presentation presents information on the attempt to produce high-order, efficient, central methods that scale well to high dimension. The central philosophy is that the equations should evolve to the point where the data is smooth. This is accomplished by a cyclic pattern of reconstruction, evolution, and re-projection. One dimensional and two dimensional representational methods are detailed, as well.
A scaling analysis of ozone photochemistry: I Model development
NASA Astrophysics Data System (ADS)
Ainslie, B.; Steyn, D. G.
2005-12-01
A scaling analysis has been used to capture the integrated behaviour of several photochemical mechanisms for a wide range of precursor concentrations and a variety of environmental conditions. The Buckingham Pi method of dimensional analysis was used to express the relevant variables in terms of dimensionless groups. These groupings show that maximum ozone, initial NOx, and initial VOC concentrations are made non-dimensional by the average NO2 photolysis rate (jav) and the rate constant for the NO-O3 titration reaction (kNO); temperature by the NO-O3 activation energy (ENO) and the Boltzmann constant (k); and total irradiation time by the cumulative jav·Δt photolysis rate (π3). The analysis shows that dimensionless maximum ozone concentration can be described by a product of powers of dimensionless initial NOx concentration, dimensionless temperature, and a similarity curve directly dependent on the ratio of initial VOC to NOx concentration and implicitly dependent on the cumulative NO2 photolysis rate. When Weibull transformed, the similarity relationship shows a scaling break, with dimensionless model output clustering onto two straight line segments, parameterized using four variables: two describing the slopes of the line segments and two giving the location of their intersection. A fifth parameter is used to normalize the model output. The scaling analysis, similarity curve, and parameterization appear to be independent of the details of the chemical mechanism, hold for a variety of VOC species and mixtures, and hold over a wide range of temperatures and actinic fluxes.
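The non-dimensionalisation described can be made explicit. Since jav/kNO has units of concentration and ENO/k has units of temperature, plausible dimensionless groups consistent with the abstract's description are (a sketch; the paper's exact definitions and exponents are not reproduced here):

```latex
[\mathrm{O_3}]^{*} = \frac{k_{\mathrm{NO}}\,[\mathrm{O_3}]_{\max}}{j_{\mathrm{av}}},\qquad
[\mathrm{NO_x}]^{*} = \frac{k_{\mathrm{NO}}\,[\mathrm{NO_x}]_{0}}{j_{\mathrm{av}}},\qquad
T^{*} = \frac{kT}{E_{\mathrm{NO}}}
```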
A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.
Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale
Nykanen, V.; Raines, G.L.
2006-01-01
A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger-scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation, simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of aeromagnetic data representing differing flight heights, that is, spatial scales, decreases with decreasing resolution of the source data. The Moran autocorrelation coefficient seems to be another measure of the quality of the aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
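Upward continuation, used in the study to simulate increasing flight height, has a standard wavenumber-domain form: multiply the field's 2-D spectrum by exp(-|k|·Δz). A hedged sketch on a synthetic 30 m grid (illustrative parameters, not the study's data):

```python
import numpy as np

def upward_continue(field, dx, dz):
    """Upward-continue a gridded potential field by height dz (FFT method).

    Applies the standard continuation operator exp(-|k| * dz) in the
    wavenumber domain; dx is the grid spacing (assumed equal in x and y).
    """
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    spec = np.fft.fft2(field)
    return np.real(np.fft.ifft2(spec * np.exp(-k * dz)))

# Synthetic anomaly on a 64 x 64 grid with 30 m spacing; continuing from
# 30 m to 2000 m smooths the anomaly and lowers its amplitude.
y, x = np.mgrid[0:64, 0:64] * 30.0
anomaly = np.exp(-((x - 960.0) ** 2 + (y - 960.0) ** 2) / (2 * 150.0 ** 2))
high = upward_continue(anomaly, dx=30.0, dz=2000.0 - 30.0)
```

The k = 0 component is untouched by the operator, so the grid mean is preserved while amplitude and variance drop, which is the loss of predictive power the study measures.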
New Criticality Safety Analysis Capabilities in SCALE 5.1
Bowman, Stephen M; DeHart, Mark D; Dunn, Michael E; Goluoglu, Sedat; Horwedel, James E; Petrie Jr, Lester M; Rearden, Bradley T; Williams, Mark L
2007-01-01
Version 5.1 of the SCALE computer software system developed at Oak Ridge National Laboratory, released in 2006, contains several significant enhancements for nuclear criticality safety analysis. This paper highlights new capabilities in SCALE 5.1, including improved resonance self-shielding capabilities; ENDF/B-VI.7 cross-section and covariance data libraries; HTML output for KENO V.a; analytical calculations of KENO-VI volumes with GeeWiz/KENO3D; new CENTRMST/PMCST modules for processing ENDF/B-VI data in TSUNAMI; SCALE Generalized Geometry Package in NEWT; KENO Monte Carlo depletion in TRITON; and plotting of cross-section and covariance data in Javapeno.
Bridgman crystal growth in low gravity - A scaling analysis
NASA Technical Reports Server (NTRS)
Alexander, J. I. D.; Rosenberger, Franz
1990-01-01
The results of an order-of-magnitude or scaling analysis are compared with those of numerical simulations of the effects of steady low gravity on compositional nonuniformity in crystals grown by the Bridgman-Stockbarger technique. In particular, the results are examined of numerical simulations of the effect of steady residual acceleration on the transport of solute in a gallium-doped germanium melt during directional solidification under low-gravity conditions. The results are interpreted in terms of the relevant dimensionless groups associated with the process, and scaling techniques are evaluated by comparing their predictions with the numerical results. It is demonstrated that, when convective transport is comparable with diffusive transport, some specific knowledge of the behavior of the system is required before scaling arguments can be used to make reasonable predictions.
Multiple scales analysis of interface dynamics in ^4He.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Ranjan; Prasad, Anoop; Weichman, Peter B.; Miller, Jonathan
1997-03-01
We describe theoretically the slow dynamics of the superfluid-normal interface that develops when a uniform heat current is passed through near-critical ^4He (R. V. Duncan, G. Ahlers, and V. Steinberg, Phys. Rev. Lett. 60, 1522 (1988), and references therein). Using a multiple scales analysis, along with microscopically derived matching conditions that determine how heat transport is converted from conduction to superfluid counterflow as the interface is crossed, we derive an effective two-dimensional phase equation, resembling somewhat the KPZ equation, for the interface response to internal thermal and external vibrational noise sources, focusing especially on the question of large-scale wandering and roughness. We also compare this work with a linear stability analysis which we have carried out. The results are relevant to the proposed NASA microgravity DYNAMX project (Czech. J. Phys. 46, Suppl. 1, 87 (1996)). We acknowledge financial support from the DYNAMX project.
Large-Scale Candidate Gene Analysis of HDL Particle Features
Bernhard M. Kaess; Maciej Tomaszewski; Peter S. Braund; Klaus Stark; Suzanne Rafelt; Marcus Fischer; Robert Hardwick; Christopher P. Nelson; Radoslaw Debiec; Fritz Huber; Werner Kremer; Hans Robert Kalbitzer; Lynda M. Rose; Daniel I. Chasman; Jemma Hopewell; Robert Clarke; Paul R. Burton; Martin D. Tobin; Christian Hengstenberg; Nilesh J. Samani
2011-01-01
Background: HDL cholesterol (HDL-C) is an established marker of cardiovascular risk with significant genetic determination. However, HDL particles are not homogenous, and refined HDL phenotyping may improve insight into regulation of HDL metabolism. We therefore assessed HDL particles by NMR spectroscopy and conducted a large-scale candidate gene association analysis. Methodology/Principal Findings: We measured plasma HDL-C and determined mean HDL particle size and particle
Transient Analysis of Large-scale Stochastic Service Systems
Ko, Young Myoung
2012-07-16
Ph.D. dissertation, Texas A&M University (chair of advisory committee: Dr. Natarajan Gautam). The transient analysis of large-scale systems is often difficult even when the systems belong to the simplest M/M/n type of queues. To address analytical difficulties, previous studies have…
Bicoherence analysis of model-scale jet noise.
Gee, Kent L; Atchley, Anthony A; Falco, Lauren E; Shepherd, Micah R; Ukeiley, Lawrence S; Jansen, Bernard J; Seiner, John M
2010-11-01
Bicoherence analysis has been used to characterize nonlinear effects in the propagation of noise from a model-scale, Mach-2.0, unheated jet. Nonlinear propagation effects are predominantly limited to regions near the peak directivity angle for this jet source and propagation range. The analysis also examines the practice of identifying nonlinear propagation by comparing spectra measured at two different distances and assuming far-field, linear propagation between them. This spectral comparison method can lead to erroneous conclusions regarding the role of nonlinearity when the observations are made in the geometric near field of an extended, directional radiator, such as a jet. PMID:21110528
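The bicoherence (normalized bispectrum) used in the paper detects quadratic phase coupling: it is near one where components at f1, f2, and f1 + f2 are phase-locked, and near zero otherwise. A minimal segment-averaged sketch on a synthetic coupled signal (illustrative, not the jet data):

```python
import numpy as np

def bicoherence(x, nfft=128):
    """Segment-averaged bicoherence estimate b^2(f1, f2)."""
    x = np.asarray(x, dtype=float)
    segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
    X = np.fft.fft(segs * np.hanning(nfft), axis=1)
    n = nfft // 4                      # keep f1, f2 and f1+f2 below Nyquist
    f1 = np.arange(n)[:, None]
    f2 = np.arange(n)[None, :]
    triple = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
    num = np.abs(triple.mean(axis=0)) ** 2
    den = (np.abs(X[:, f1] * X[:, f2]) ** 2).mean(axis=0) * \
          (np.abs(X[:, f1 + f2]) ** 2).mean(axis=0)
    return num / (den + 1e-30)

# Synthetic signal: tones at bins 8 and 12 with random phases per segment,
# plus a quadratically coupled tone at bin 20 whose phase is their sum.
nfft, nseg = 128, 200
rng = np.random.default_rng(1)
ph1 = rng.uniform(0, 2 * np.pi, (nseg, 1))
ph2 = rng.uniform(0, 2 * np.pi, (nseg, 1))
t = np.arange(nfft)[None, :]
w1, w2 = 2 * np.pi * 8 / nfft, 2 * np.pi * 12 / nfft
sig = (np.cos(w1 * t + ph1) + np.cos(w2 * t + ph2)
       + 0.5 * np.cos((w1 + w2) * t + ph1 + ph2))
b2 = bicoherence(sig.ravel(), nfft=nfft)
```

The coupled pair b2[8, 12] comes out near one, while uncoupled frequency pairs average near zero.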
Empirical analysis of scaling and fractal characteristics of outpatients
NASA Astrophysics Data System (ADS)
Zhang, Li-Jiang; Liu, Zi-Xian; Guo, Jin-Li
2014-01-01
The paper uses power-law frequency distribution, power spectrum analysis, detrended fluctuation analysis, and surrogate data testing to evaluate outpatient registration data of two hospitals in China and to investigate the human dynamics of systems that use “first come, first served” protocols. The research results reveal that outpatient behavior follows scaling laws. The results also suggest that the time series of inter-arrival times exhibit 1/f noise and have positive long-range correlation. Our research may contribute to operational optimization and resource allocation in hospitals based on FCFS admission protocols.
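Detrended fluctuation analysis, one of the methods listed, estimates long-range correlation from the slope of log fluctuation versus log window size (alpha near 0.5 for uncorrelated noise, above 0.5 for positive long-range correlation, near 1 for 1/f noise). A compact DFA-1 sketch on synthetic data, not the hospital records:

```python
import numpy as np

def dfa(x, scales):
    """DFA-1 scaling exponent alpha of a 1-D series."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        coef = np.polyfit(t, segs.T, 1)            # linear fit per window
        trend = np.outer(coef[0], t) + coef[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(2)
white = rng.normal(size=20000)
scales = [16, 32, 64, 128, 256]
alpha_white = dfa(white, scales)            # expected near 0.5
alpha_walk = dfa(np.cumsum(white), scales)  # integrated noise: near 1.5
```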
Microbial community analysis of a full-scale DEMON bioreactor.
Gonzalez-Martinez, Alejandro; Rodriguez-Sanchez, Alejandro; Muñoz-Palazon, Barbara; Garcia-Ruiz, Maria-Jesus; Osorio, Francisco; van Loosdrecht, Mark C M; Gonzalez-Lopez, Jesus
2015-03-01
Full-scale applications of autotrophic nitrogen removal technologies for the treatment of digested sludge liquor have proliferated during the last decade. Among these technologies, the aerobic/anoxic deammonification process (DEMON) is one of the major applied processes. This technology achieves nitrogen removal from wastewater through anammox metabolism inside a single bioreactor due to alternating cycles of aeration. To date, the microbial community composition of full-scale DEMON bioreactors has never been reported. In this study, the bacterial community structure of a full-scale DEMON bioreactor located at the Apeldoorn wastewater treatment plant was analyzed using pyrosequencing. This technique provided a higher-resolution study of the bacterial assemblage of the system compared to other techniques used in lab-scale DEMON bioreactors. Results showed that the DEMON bioreactor was a complex ecosystem where ammonium-oxidizing bacteria, anammox bacteria, and many other bacterial phylotypes coexist. The potential ecological role of all phylotypes found is discussed. Thus, metagenomic analysis through pyrosequencing offered new perspectives on the functioning of the DEMON bioreactor by exhaustive identification of microorganisms that play a key role in the performance of bioreactors. In this way, pyrosequencing has been proven to be a helpful tool for the in-depth investigation of the functioning of bioreactors at the microbiological scale. PMID:25245398
On Multi-dimensional Steady Subsonic Flows Determined by Physical Boundary Conditions
NASA Astrophysics Data System (ADS)
Weng, Shangkun
In this thesis, we investigate an inflow-outflow problem for subsonic gas flows in a nozzle with finite length, aiming at finding intrinsic (physically acceptable) boundary conditions on the upstream and downstream ends. We first characterize a set of physical boundary conditions that ensure the existence and uniqueness of a subsonic irrotational flow in a rectangle. Our results show that if we prescribe the horizontal incoming flow angle at the inlet and an appropriate pressure at the exit, there exist two positive constants m0 and m1 with m0 < m1, such that a global subsonic irrotational flow exists uniquely in the nozzle, provided that the incoming mass flux m ∈ [m0, m1). The maximum speed will approach the sonic speed as the mass flux m tends to m1. The new difficulties arise from the nonlocal term involved in the mass flux and the pressure condition at the exit. We first introduce an auxiliary problem with the Bernoulli constant as a parameter to localize the nonlocal term, and then establish a monotonic relation between the mass flux and the Bernoulli constant to recover the original problem. To deal with the loss of obliqueness induced by the pressure condition at the exit, we employ a formulation in terms of the angular velocity and the density. A Moser iteration is applied to obtain the L∞ estimate of the angular velocity, which guarantees that the flow possesses a positive horizontal velocity in the whole nozzle. As a continuation, we investigate the influence of the incoming flow angle and the geometry of the nozzle walls on subsonic flows in a finitely long curved nozzle. It turns out, interestingly, that the incoming flow angle and the angles of inclination of the nozzle walls play the same role as the end pressure. The curvatures of the nozzle walls play an important role. We also extend our results to subsonic Euler flows in the 2-D and 3-D asymmetric cases.
Then we come to the most interesting and difficult case: the 3-D subsonic Euler flow in a bounded nozzle, which is also the essential part of this thesis. The boundary conditions we imposed in the 2-D case have a natural extension to the 3-D case. These important clues help us develop a new formulation that gives some insight into the coupling structure between hyperbolic and elliptic modes in the Euler equations. The key idea in our new formulation is to use the Bernoulli law to reduce the dimension of the velocity field by defining new variables (1, b2 = u2/u1, b3 = u3/u1) and replacing u1 by the Bernoulli function B through u1^2 = 2(B - h(ρ))/(1 + b2^2 + b3^2). In this way, we can explore the role of the Bernoulli law in greater depth, in the hope that it may simplify the Euler equations somewhat. We find a new conserved quantity for flows with a constant Bernoulli function, which behaves like the scaled vorticity in the 2-D case. More surprisingly, a system of new conservation laws can be derived, which has never been observed before, even in the two-dimensional case. We employ this formulation to construct a smooth subsonic Euler flow in a rectangular cylinder by assigning the incoming flow angles and the Bernoulli function at the inlet and the end pressure at the exit, which is also required to be adjacent to some special subsonic states. The same idea can be applied to obtain similar results for the incompressible Euler equations, the self-similar Euler equations, the steady Euler equations with damping, the steady Euler-Poisson equations, and the steady Euler-Maxwell equations. Last, we are concerned with the structural stability of some steady subsonic solutions for the Euler-Poisson system.
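The reduction sketched in this paragraph follows directly from the Bernoulli law; writing h(ρ) for the enthalpy, the steps are (a sketch consistent with the abstract, not quoted from the thesis):

```latex
B=\tfrac12\,|u|^{2}+h(\rho),\qquad b_{2}=\frac{u_{2}}{u_{1}},\quad b_{3}=\frac{u_{3}}{u_{1}}
\;\Longrightarrow\;
|u|^{2}=u_{1}^{2}\bigl(1+b_{2}^{2}+b_{3}^{2}\bigr),\qquad
u_{1}^{2}=\frac{2\bigl(B-h(\rho)\bigr)}{1+b_{2}^{2}+b_{3}^{2}}.
```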
A steady subsonic solution with subsonic background charge is proven to be structurally stable with respect to small perturbations of the background charge, the incoming flow angles, and the end pressure, provided the background solution has a low Mach number and a small electric field. The new ingredient in our mathematical analysis is the solvability of a new second-order elliptic system supplemented with oblique derivative conditions.
ERIC Educational Resources Information Center
Reinke, William A.
1999-01-01
Presents an algorithm for integrating evaluative concerns of cost effectiveness, equity, quality, and sustainability in program evaluation and offers suggestions for refining the system of measurement and analysis. (SLD)
Swan Jr., Colby Corson
Multi-Scale Unit-Cell Analysis of Textile Composites: slides covering micro-/macro-scale notation for a periodic medium represented by a unit cell, and deformation-controlled loading of the unit cell by specifying an average state.
Multi-dimensional HPLC/MS of the nucleolar proteome using HPLC-chip/MS.
Vollmer, Martin; Hörth, Patric; Rozing, Gerard; Couté, Yohann; Grimm, Rudi; Hochstrasser, Denis; Sanchez, Jean-Charles
2006-03-01
The proteome of the human nucleolus was investigated in a single analysis using off-line strong cation exchange chromatography and microfraction collection combined with HPLC-chip/MS. The analysis was conducted either as a 1-D workflow with HPLC-chip alone or as a 2-D workflow. Two hundred and six unique proteins were identified in the International Protein Index human database corresponding to 2024 unique tryptic peptides identified in the 2-D analysis. In contrast, only 34 proteins and 151 corresponding tryptic peptides were found by applying a 1-D separation strategy. This clearly indicated that the complexity of the samples required the combination of more than one orthogonal separation technique. Stringent database search criteria, including reversal of sequences and therefore better exclusion of false-positive identifications, were applied for reliable protein identification. PMID:16583688
Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)
1999-01-01
We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.
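For orientation, the density-gradient correction referred to here augments the classical carrier description with a term in derivatives of √n. A sketch of the commonly cited form from Ancona's density-gradient theory generally (not quoted from this abstract; r is a statistics-dependent factor often taken as 3):

```latex
\varphi_{n}=\varphi_{n}^{\mathrm{cl}}-2\,b_{n}\,\frac{\nabla^{2}\sqrt{n}}{\sqrt{n}},
\qquad b_{n}=\frac{\hbar^{2}}{4\,q\,m_{n}^{*}\,r}
```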
Counting components of multi-dimensional sub-level sets for shape comparison
Frosini, Patrizio
A metric approach to shape comparison; informal position of the problem: perception properties depend on the subjective interpretation, and persistence (in space, time, with respect to the analysis level…) is quite important.
Voice Dysfunction in Dysarthria: Application of the Multi-Dimensional Voice Program.
ERIC Educational Resources Information Center
Kent, R. D.; Vorperian, H. K.; Kent, J. F.; Duffy, J. R.
2003-01-01
Part 1 of this paper recommends procedures and standards for the acoustic analysis of voice in individuals with dysarthria. In Part 2, acoustic data are reviewed for dysarthria associated with Parkinson disease (PD), cerebellar disease, amytrophic lateral sclerosis, traumatic brain injury, unilateral hemispheric stroke, and essential tremor.…
Multi-dimensional Cosmological Radiative Transfer with a Variable Eddington Tensor Formalism
Nickolay Y. Gnedin; Tom Abel
2001-06-15
We present a new approach to numerically model continuum radiative transfer based on the Optically Thin Variable Eddington Tensor (OTVET) approximation. Our method ensures the exact conservation of the photon number and flux (in the explicit formulation) and automatically switches from the optically thick to the optically thin regime. It scales as N log N with the number of hydrodynamic resolution elements and is independent of the number of sources of ionizing radiation (i.e., it works equally fast for an arbitrary source function). We also describe an implementation of the algorithm in a Softened Lagrangian Hydrodynamics code (SLH) and a multi-frequency approach appropriate for hydrogen and helium continuum opacities. We present extensive tests of our method for single and multiple sources in homogeneous and inhomogeneous density distributions, as well as a realistic simulation of cosmological reionization.
Scaling analysis for the investigation of slip mechanisms in nanofluids
2011-01-01
The primary objective of this study is to investigate the effect of slip mechanisms in nanofluids through scaling analysis. The role of nanoparticle slip mechanisms in both water- and ethylene glycol-based nanofluids is analyzed by considering shape, size, concentration, and temperature of the nanoparticles. From the scaling analysis, it is found that all of the slip mechanisms are dominant in particles of cylindrical shape as compared to that of spherical and sheet particles. The magnitudes of slip mechanisms are found to be higher for particles of size between 10 and 80 nm. The Brownian force is found to dominate in smaller particles below 10 nm and also at smaller volume fraction. However, the drag force is found to dominate in smaller particles below 10 nm and at higher volume fraction. The effect of thermophoresis and Magnus forces is found to increase with the particle size and concentration. In terms of time scales, the Brownian and gravity forces act considerably over a longer duration than the other forces. For copper-water-based nanofluid, the effective contribution of slip mechanisms leads to a heat transfer augmentation which is approximately 36% over that of the base fluid. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it. PMID:21791036
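One slip mechanism weighed above, Brownian motion, can be sized quickly with the Stokes-Einstein relation D = k_B·T / (3·π·μ·d): diffusivity falls inversely with particle diameter, consistent with Brownian effects dominating for the smallest particles. A sketch with illustrative water properties (assumed values, not the paper's full force balance):

```python
import numpy as np

# Stokes-Einstein diffusivity for nanoparticles suspended in water.
# Property values are illustrative assumptions, not taken from the paper.
kB = 1.380649e-23               # Boltzmann constant, J/K
T = 300.0                       # temperature, K
mu = 8.9e-4                     # dynamic viscosity of water (~25 C), Pa*s
d = np.array([10e-9, 80e-9])    # particle diameters discussed above, m

D = kB * T / (3 * np.pi * mu * d)   # Brownian diffusivity, m^2/s
```

The 10 nm particle diffuses eight times faster than the 80 nm one, on the order of 5e-11 m^2/s.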
Scaling analysis for the investigation of slip mechanisms in nanofluids
NASA Astrophysics Data System (ADS)
Savithiri, S.; Pattamatta, Arvind; Das, Sarit K.
2011-07-01
Scaling Analysis of Surfactant Templated Polyacrylamide Gel Surfaces
Mukundan Chakrapani; S. J. Mitchell; D. H. Van Winkle; P. A. Rikvold
2001-12-13
Surfaces of surfactant-templated polyacrylamide hydrogels were imaged by atomic force microscopy (AFM), and the surface morphology was studied by numerical scaling analysis. The templated gels were formed by polymerizing acrylamide plus a cross-linker in the presence of surfactants, which were then removed by soaking in distilled water. Gels formed in the presence of over 20% surfactant (by weight) were clear when formed, but became opaque upon removal of the surfactants. Untemplated gels formed and remained clear. The surface morphology of the gels was studied by several one- and two-dimensional numerical scaling methods. The surfaces were found to be self-affine on short length scales, with a roughness (Hurst) exponent in the range 0.85 to 1, crossing over to a constant root-mean-square surface width on long scales. Both the crossover length between these two regimes and the saturation value of the surface width increased significantly with increasing surfactant concentration, coincident with the increase in opacity. We propose that the changes in the surface morphology are due to a percolation transition in the system of voids formed upon removal of the surfactants from the bulk.
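A minimal sketch of the kind of one-dimensional width-scaling analysis described here (root-mean-square surface width versus window size, w(L) ~ L^H below the crossover), assuming a simple non-detrended width estimator; the function name and the ramp test signal are illustrative, not from the paper:

```python
import numpy as np

def roughness_exponent(profile, window_sizes):
    """Estimate the Hurst (roughness) exponent H from the scaling of the
    root-mean-square surface width w(L) ~ L**H over windows of size L."""
    widths = []
    for L in window_sizes:
        n = len(profile) // L
        segs = profile[: n * L].reshape(n, L)          # non-overlapping windows
        widths.append(np.mean(np.std(segs, axis=1)))   # mean RMS width at scale L
    slope, _ = np.polyfit(np.log(window_sizes), np.log(widths), 1)
    return slope

# Sanity check on a linear ramp, whose window width grows ~ L (H = 1):
ramp = np.arange(4096, dtype=float)
H = roughness_exponent(ramp, [16, 64, 256, 1024])
print(f"estimated H = {H:.3f}")
```

Real AFM analyses typically detrend each window first; this sketch omits that step for brevity.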
NASA Astrophysics Data System (ADS)
Schmitt, Michael; Dietzek, Benjamin; Krafft, Christoph; Rösch, Petra; Popp, Jürgen
2009-08-01
Here we present challenges in applying spectroscopic imaging, and in particular Raman microspectroscopy, to the life sciences and biomedicine. We start with an introduction of a combined approach of fluorescence imaging, Raman microspectroscopy and innovative statistical Raman data analysis methods for rapid diagnosis and prevention of infectious diseases. Furthermore, we report on the multimodal application of Raman and CARS imaging for the early diagnosis of cancer.
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2008-04-01
Many systems and processes, both natural and artificial, may be described by parameter-driven mathematical and physical models. We introduce a generally applicable Stochastic Optimization Framework (SOF) that can be interfaced to or wrapped around such models to optimize model outcomes by effectively "inverting" them. The Visual and Autonomous Exploration Systems Research Laboratory (http://autonomy.caltech.edu) at the California Institute of Technology (Caltech) has long-term experience in the optimization of multi-dimensional systems and processes. Several examples of successful application of a SOF are reviewed and presented, including biochemistry, robotics, device performance, mission design, parameter retrieval, and fractal landscape optimization. Applications of a SOF are manifold, such as in science, engineering, industry, defense & security, and reconnaissance/exploration. Keywords: Multi-parameter optimization, design/performance optimization, gradient-based steepest-descent methods, local minima, global minimum, degeneracy, overlap parameter distribution, fitness function, stochastic optimization framework, Simulated Annealing, Genetic Algorithms, Evolutionary Algorithms, Genetic Programming, Evolutionary Computation, multi-objective optimization, Pareto-optimal front, trade studies.
Schoenberg, Poppy L A; Speckens, Anne E M
2015-02-01
To illuminate candidate neural working mechanisms of Mindfulness-Based Cognitive Therapy (MBCT) in the treatment of recurrent depressive disorder, parallel to the potential interplays between modulations in electro-cortical dynamics and depressive symptom severity and self-compassionate experience. Linear and nonlinear α and γ EEG oscillatory dynamics were examined concomitant to an affective Go/NoGo paradigm, pre-to-post MBCT or natural wait-list, in 51 recurrent depressive patients. Specific EEG variables investigated were: (1) induced event-related (de-)synchronisation (ERD/ERS), (2) evoked power, and (3) inter-/intra-hemispheric coherence. Secondary clinical measures included depressive severity and experiences of self-compassion. MBCT significantly downregulated α and γ power, reflecting increased cortical excitability. Enhanced α-desynchronisation/ERD was observed for negative material, as opposed to attenuated α-ERD towards positively valenced stimuli, suggesting activation of neural networks usually hypoactive in depression, related to positive emotion regulation. An MBCT-related increase in left-intra-hemispheric α-coherence of the fronto-parietal circuit aligned with these synchronisation dynamics. Ameliorated depressive severity and increased self-compassionate experience pre-to-post MBCT correlated with α-ERD change. The multi-dimensional neural mechanisms of MBCT pertain to task-specific linear and non-linear neural synchronisation and connectivity network dynamics. We propose that MBCT-related modulations in differing cortical oscillatory bands have discrete excitatory (enacting positive emotionality) and inhibitory (disengaging from negative material) effects, where mediation in the α and γ bands relates to the former. PMID:26052359
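For readers unfamiliar with ERD/ERS, the conventional band-power quantification (a Pfurtscheller-style percentage change relative to a pre-stimulus reference interval) can be sketched as follows; the function and the toy numbers are illustrative only, not the paper's pipeline:

```python
import numpy as np

def erd_percent(power_reference, power_event):
    """Event-related (de)synchronisation as percentage band-power change
    relative to a reference interval: negative values indicate
    desynchronisation (ERD), positive values synchronisation (ERS)."""
    a_ref = np.mean(power_reference)
    a_ev = np.mean(power_event)
    return 100.0 * (a_ev - a_ref) / a_ref

# A drop from 10 uV^2 baseline alpha power to 6 uV^2 after stimulus onset:
print(erd_percent(np.full(100, 10.0), np.full(100, 6.0)))  # -40.0 (ERD)
```

Sign conventions vary between labs; some report ERD as a positive percentage decrease.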
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two-dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two-dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, a flux contribution having been calculated at each point in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues considered in this two-dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary-value problems with solid walls.
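A minimal one-dimensional, second-order analogue of the ENO idea (adaptive stencil selection that avoids the larger divided difference) may help fix ideas; this sketch is not the paper's two-dimensional finite-volume reconstruction, and the function name is illustrative:

```python
import numpy as np

def eno2_right_interface(a):
    """Second-order ENO reconstruction of pointwise values at the right
    interface x_{i+1/2} of each interior cell, given cell averages a_i on a
    uniform grid (dx = 1). The slope is taken from whichever one-sided
    difference is smaller in magnitude, suppressing oscillations near jumps."""
    v = np.empty(len(a) - 2)
    for i in range(1, len(a) - 1):
        dl = a[i] - a[i - 1]       # left-biased difference
        dr = a[i + 1] - a[i]       # right-biased difference
        slope = dl if abs(dl) < abs(dr) else dr
        v[i - 1] = a[i] + 0.5 * slope
    return v

# For cell averages of a linear field the reconstruction is exact:
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # averages of u(x) = x
print(eno2_right_interface(a))             # [1.5 2.5 3.5]
```

The two-dimensional scheme in the paper additionally reconstructs via a primitive function and accounts for cross gradients, which this sketch omits.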
NASA Astrophysics Data System (ADS)
Verma, Sanjeet K.; Oliveira, Elson P.
2015-03-01
Fifteen multi-dimensional diagrams for basic and ultrabasic rocks, based on log-ratio transformations, were used to infer the tectonic setting for eight case studies of the Borborema Province, NE Brazil. The applications of these diagrams indicated the following results: (1) a mid-ocean ridge setting for the Forquilha eclogites (Central Ceará domain) during the Mesoproterozoic; (2) an oceanic plateau setting for the Algodões amphibolites (Central Ceará domain) during the Paleoproterozoic; (3) an island arc setting for the Brejo Seco amphibolites (Riacho do Pontal belt) during the Proterozoic; (4) an island arc to mid-ocean ridge setting for greenschists of the Monte Orebe Complex (Riacho do Pontal belt) during the Neoproterozoic; (5) a within-plate (continental) setting for the Vaza Barris domain mafic rocks (Sergipano belt) during the Neoproterozoic; (6) a less precisely resolved arc to continental rift setting for the Gentileza unit metadiorite/gabbro (Sergipano belt) during the Neoproterozoic; (7) an island arc setting for the Novo Gosto unit metabasalts (Sergipano belt) during the Neoproterozoic; and (8) a continental rift setting for the Rio Grande do Norte basic rocks during the Miocene.
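The log-ratio transformations underlying such diagrams can be sketched with the centred log-ratio (clr) of Aitchison's compositional data analysis; the specific transform and the made-up composition below are illustrative assumptions, not the exact coordinates of the fifteen diagrams:

```python
import numpy as np

def clr(composition):
    """Centred log-ratio transform of a compositional vector (parts summing
    to a constant): clr(x) = ln(x / g(x)), with g(x) the geometric mean.
    This frees major-element data from the constant-sum constraint before
    they enter discriminant-style tectonic diagrams."""
    x = np.asarray(composition, dtype=float)
    g = np.exp(np.mean(np.log(x)))
    return np.log(x / g)

# A made-up 4-part major-element composition (wt%), for illustration only:
y = clr([50.0, 15.0, 10.0, 25.0])
print(y, y.sum())  # clr coordinates always sum to 0
```

Zeros must be handled (e.g. replaced by small values) before taking logs, a standard issue in compositional analysis.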
Barrett, Louise; Henzi, S. Peter; Lusseau, David
2012-01-01
Understanding human cognitive evolution, and that of the other primates, means taking sociality very seriously. For humans, this requires the recognition of the sociocultural and historical means by which human minds and selves are constructed, and how this gives rise to the reflexivity and ability to respond to novelty that characterize our species. For other, non-linguistic, primates we can answer some interesting questions by viewing social life as a feedback process, drawing on cybernetics and systems approaches and using social network neo-theory to test these ideas. Specifically, we show how social networks can be formalized as multi-dimensional objects, and use entropy measures to assess how networks respond to perturbation. We use simulations and natural ‘knock-outs’ in a free-ranging baboon troop to demonstrate that changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalization of social networks provides a framework within which to predict network dynamics and evolution, helps us highlight how human and non-human social networks differ and has implications for theories of cognitive evolution. PMID:22734054
Matsuki, Yoh; Nakamura, Shinji; Fukui, Shigeo; Suematsu, Hiroto; Fujiwara, Toshimichi
2015-10-01
Magic-angle spinning (MAS) NMR is a powerful tool for studying molecular structure and dynamics, but suffers from its low sensitivity. Here, we developed a novel helium-cooling MAS NMR probe system adopting a closed-loop gas recirculation mechanism. In addition to the sensitivity gain due to low temperature, the present system has enabled highly stable MAS (νR = 4-12 kHz) at cryogenic temperatures (T = 35-120 K) for over a week without consuming helium, at an electricity cost of 16 kW/h. High-resolution 1D and 2D data were recorded for a crystalline tri-peptide sample at T = 40 K and B0 = 16.4 T, where an order of magnitude of sensitivity gain was demonstrated versus room-temperature measurement. The low-cost and long-term stable MAS strongly promotes broader application of brute-force sensitivity-enhanced multi-dimensional MAS NMR, as well as dynamic nuclear polarization (DNP)-enhanced NMR, in the temperature range below 100 K. PMID:26302269
Analysis of the Spatial Scaling Characteristics of Snow Depth
NASA Astrophysics Data System (ADS)
Trujillo, E.; Ramírez, J. A.; Elder, K. J.
2005-12-01
Directional spectral analyses were conducted for LIDAR (LIght Detection And Ranging) snow depths measured in six of the nine 1-km² Intensive Study Areas (ISAs) of NASA's Cold Land Processes Experiment (CLPX) in the spring of 2003 (8-9 April 2003). The six study areas analyzed are located in the Fraser and Rabbit Ears Mesoscale Study Areas of the project in the state of Colorado. The snow depth power spectra were compared to the spectra of bare-ground elevations (topography) and elevations filtered to the top of vegetation (topography + vegetation). The log power spectral density of snow depth versus log frequency (f) presents two distinct slopes, with scale breaks at wavelengths between 6 m and 45 m. The average fractal dimensions for the study areas range between 1.80 and 2.30 for the low-frequency intervals and between 0.79 and 1.03 for the high-frequency intervals, indicating spatial self-similarity in the snow depth fields. The scale breaks observed in the power spectra of snow depth are not present in the power spectra of topography and/or topography + vegetation, and the slopes of the snow depth spectra differ from the slopes of the power spectra of topography and topography + vegetation. The observed breaks in the power spectra of snow depth are therefore not explained by the power spectra of the underlying topography and vegetation; these scale breaks must be the product of a switch in the dominant process(es) driving the variability of the snow cover properties at these scales. Potential physical causes of the scale breaks will be presented based on further analysis of snow depth data and additional variables. The spatial variability of snow depth at scales smaller than the observed scale breaks is controlled, among other factors, by the interaction of wind, vegetation, and small topographic features. At larger scales, this variability is controlled by precipitation patterns, short- and long-wave radiation, aspect, slope, and wind, among others.
These differences are analyzed to explain the characteristics observed in the power spectra of snow depth.
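The link between a power-spectral slope and a fractal dimension used in such analyses can be sketched for a 1-D profile, where a spectrum P(f) ~ f^(-beta) corresponds to D = (5 - beta)/2 (a standard relation for self-affine profiles, not specific to this study); the synthetic profile below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize a 1-D profile with an exact power-law spectrum P(f) ~ f**(-beta)
n, beta = 4096, 2.4                      # beta = 5 - 2D  =>  D = 1.3
f = np.fft.rfftfreq(n, d=1.0)[1:]        # drop the zero frequency
spectrum = np.zeros(n // 2 + 1, dtype=complex)
spectrum[1:] = f ** (-beta / 2) * np.exp(2j * np.pi * rng.random(len(f)))
profile = np.fft.irfft(spectrum, n)

# Estimate beta from the log-log slope of the periodogram, then convert to D
# (exclude the Nyquist bin, which the real FFT forces to be real):
power = np.abs(np.fft.rfft(profile)[1:-1]) ** 2
slope, _ = np.polyfit(np.log(f[:-1]), np.log(power), 1)
D = (5.0 + slope) / 2.0                  # slope ~ -beta, so D = (5 - beta)/2
print(f"spectral slope = {slope:.2f}, fractal dimension D = {D:.2f}")
```

A break in the fitted slope at some frequency, as reported above for snow depth, would show up as two distinct regimes in the log-log fit.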
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
Reactor Physics Methods and Analysis Capabilities in SCALE
Mark D. DeHart; Stephen M. Bowman
2011-05-01
The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.
Two-field analysis of no-scale supergravity inflation
Ellis, John; García, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V.
2015-01-01
Since the building-blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values ≪ 0.1. We also calculate the non-Gaussianity measure f_NL, finding that it is well below the current experimental sensitivity.
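For orientation, the Planck-friendly end of this interpolation is the Starobinsky attractor, whose standard large-N predictions (textbook results, not specific to the two-field analysis above) are

```latex
n_s \simeq 1 - \frac{2}{N_*}, \qquad r \simeq \frac{12}{N_*^2},
```

so that N_* ≈ 55 e-folds gives n_s ≈ 0.964 and r ≈ 0.004, far below the BICEP2-friendly values towards which the two-field effects can interpolate.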
SCALE 6: Comprehensive Nuclear Safety Analysis Code System
Bowman, Stephen M [ORNL
2011-01-01
Version 6 of the Standardized Computer Analyses for Licensing Evaluation (SCALE) computer software system developed at Oak Ridge National Laboratory, released in February 2009, contains significant new capabilities and data for nuclear safety analysis and marks an important update for this software package, which is used worldwide. This paper highlights the capabilities of the SCALE system, including continuous-energy flux calculations for processing multigroup problem-dependent cross sections, ENDF/B-VII continuous-energy and multigroup nuclear cross-section data, continuous-energy Monte Carlo criticality safety calculations, Monte Carlo radiation shielding analyses with automated three-dimensional variance reduction techniques, one- and three-dimensional sensitivity and uncertainty analyses for criticality safety evaluations, two- and three-dimensional lattice physics depletion analyses, fast and accurate source terms and decay heat calculations, automated burnup credit analyses with loading curve search, and integrated three-dimensional criticality accident alarm system analyses using coupled Monte Carlo criticality and shielding calculations.
Scaling and dimensional analysis of acoustic streaming jets
NASA Astrophysics Data System (ADS)
Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H.; Garandet, J.-P.
2014-09-01
This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.
Dehazing method through polarimetric imaging and multi-scale analysis
NASA Astrophysics Data System (ADS)
Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin
2015-05-01
An approach for haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that haze weakens the interpretation of remote sensing imagery through poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and the object radiance in the imaging procedure have been considered. On the other hand, the fact that objects and haze possess different frequency-distribution properties has been exploited: multi-scale analysis through the wavelet transform allows the low-frequency components, in which haze dominates, and the high-frequency coefficients, which carry image details and edges, to be processed separately. Following the measurement of the polarization state by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) were taken in haze weather, from which the best polarized image I_min and the worst one I_max can be synthesized. Afterwards, these two haze-contaminated polarized images were decomposed into different spatial layers with wavelet analysis; the low-frequency images were processed via a polarization dehazing algorithm, while the high-frequency components were manipulated with a nonlinear transform. The final haze-free image is then reconstructed by inverse wavelet transform. Experimental results verify that the dehazing method proposed in this study can strongly improve image visibility and increase detection distance through haze for imaging warning and remote sensing systems.
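The low/high-frequency separation step can be sketched with a single-level 1-D Haar transform; the actual method uses a multi-level 2-D wavelet decomposition of the polarized images, so this simplification is only illustrative:

```python
import numpy as np

def haar_split(x):
    """One level of the Haar wavelet transform: a low-frequency approximation
    (where haze energy concentrates) and high-frequency detail coefficients
    (edges and fine structure), which can then be processed separately."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_merge(approx, detail):
    """Inverse of haar_split: perfect reconstruction."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])  # toy image row
a, d = haar_split(x)
assert np.allclose(haar_merge(a, d), x)   # lossless round trip
```

In the dehazing pipeline, `a` would be handled by the polarization dehazing algorithm and `d` by the nonlinear transform before merging.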
NASA Astrophysics Data System (ADS)
Zhou, Ning; Kolobov, Vladimir; Kudriavtsev, Vladimir
2001-10-01
The commercial CFD-ACE+ software has been extended to account for ion energy dependent surface reactions. The ion energy distribution function and the mean ion energy at a biased wafer were obtained using the Riley sheath model extended by the NASA group (Bose et al., J. Appl. Phys. v.87, 7176(2000)). The plasma chemistry model (by P. Ho et al., SAND2001-1292) consisting of 132-step gas-phase reactions and 55-step ion energy dependent surface reactions, was implemented to simulate the C2F6 plasma etching of silicon dioxide in an Inductively Coupled Plasma. Validation studies have been performed against the experimental data by Anderson et al. of UNM for a lab-scale GEC reactor. For a wide range of operating conditions (pressure: 5-25 mTorr; plasma power: 205-495 Watts; bias power: 22-148 Watts), the average etch rate calculated by CFD-ACE+ 2-D simulations agrees very well with those by 0-D AURORA predictions and the experimental data. The CFD-ACE+ simulations allow one to study the radial uniformity of the etch rate depending on discharge conditions.
NASA Astrophysics Data System (ADS)
Quezada, Cristhian R.; Clement, T. Prabhakar; Lee, Kang-Kun
2004-05-01
This paper presents a general method for solving coupled multi-dimensional, multi-species reactive transport equations. The new method can be used for solving multi-species transport problems involving first-order kinetic interactions and distinct retardation factors. The solution process employs Laplace transformation and linear transformation steps to uncouple the governing set of coupled partial differential equations. The uncoupled equations are solved using an elementary solution. The details of the solution algorithm are illustrated by deriving an explicit analytical solution to a two-species reactive transport problem. In addition, three one-dimensional problems and two three-dimensional problems are solved to illustrate the use of the method. The proposed solution scheme is a robust procedure for solving a variety of multi-dimensional, multi-species transport problems that are coupled with a first-order reaction network.
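The decoupling idea (a linear transformation that diagonalises the first-order reaction network, after which each mode is solved independently) can be sketched for a two-species chain without transport; the rates and the Bateman cross-check below are illustrative, not the paper's multi-dimensional transport solution:

```python
import numpy as np

# Coupled first-order network dc/dt = K c for two species A -> B
# (A decays at rate k1 into B, which decays at rate k2):
k1, k2 = 0.5, 0.1
K = np.array([[-k1, 0.0],
              [ k1, -k2]])
c0 = np.array([1.0, 0.0])

# Diagonalise K to uncouple the system, solve each mode, transform back:
lam, V = np.linalg.eig(K)
c = lambda t: V @ (np.exp(lam * t) * np.linalg.solve(V, c0))

# Compare with the classical Bateman solution at t = 3:
t = 3.0
cA = np.exp(-k1 * t)
cB = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
print(c(t), [cA, cB])
```

The paper applies the same uncoupling step (after a Laplace transform) to the full advection-dispersion operator with distinct retardation factors, which this ODE sketch omits.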
NASA Astrophysics Data System (ADS)
Tsai, Chin-Chung; Liu, Shiang-Yao
2005-10-01
The purpose of this study was to describe the development and validation of an instrument to identify various dimensions of scientific epistemological views (SEVs) held by high school students. The instrument included five SEV dimensions (subscales): the role of social negotiation on science, the invented and creative reality of science, the theory-laden exploration of science, the cultural impacts on science, and the changing features of science. Six hundred and thirteen high school students in Taiwan responded to the instrument. Data analysis indicated that the instrument developed in this study had satisfactory validity and reliability measures. Correlation analysis and in-depth interviews supported the legitimacy of using multiple dimensions in representing student SEVs. Significant differences were found between male and female students, and between students’ and their teachers’ responses on some SEV dimensions. Suggestions were made about the use of the instrument to examine complicated interplays between SEVs and science learning, to evaluate science instruction, and to understand the cultural differences in epistemological views of science.
Three decades of multi-dimensional change in global leaf phenology
NASA Astrophysics Data System (ADS)
Buitenwerf, Robert; Rose, Laura; Higgins, Steven I.
2015-04-01
Changes in the phenology of vegetation activity may accelerate or dampen rates of climate change by altering energy exchanges between the land surface and the atmosphere and can threaten species with synchronized life cycles. Current knowledge of long-term changes in vegetation activity is regional, or restricted to highly integrated measures of change such as net primary productivity, which mask details that are relevant for Earth system dynamics. Such details can be revealed by measuring changes in the phenology of vegetation activity. Here we undertake a comprehensive global assessment of changes in vegetation phenology. We show that the phenology of vegetation activity changed severely (by more than 2 standard deviations in one or more dimensions of phenological change) on 54% of the global land surface between 1981 and 2012. Our analysis confirms previously detected changes in the boreal and northern temperate regions. The adverse consequences of these northern phenological shifts for land-surface-climate feedbacks, ecosystems and species are well known. Our study reveals equally severe phenological changes in the southern hemisphere, where consequences for the energy budget and the likelihood of phenological mismatches are unknown. Our analysis provides a sensitive and direct measurement of ecosystem functioning, making it useful both for monitoring change and for testing the reliability of early warning signals of change.
Multi-scale analysis and simulation of powder blending in pharmaceutical manufacturing
Ngai, Samuel S. H
2005-01-01
A Multi-Scale Analysis methodology was developed and carried out for gaining fundamental understanding of the pharmaceutical powder blending process. Through experiment, analysis and computer simulations, microscopic ...
NASA Astrophysics Data System (ADS)
West, Ruth; Gossmann, Joachim; Margolis, Todd; Schulze, Jurgen P.; Lewis, J. P.; Hackbarth, Ben; Mostafavi, Iman
2009-02-01
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million-pixel barrier-strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation and a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude-, delay- and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment, developed in the context of this artwork. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
NASA Astrophysics Data System (ADS)
Kiyan, D.; Jones, A. G.; Fullea, J.; Ledo, J.; Siniscalchi, A.; Romano, G.
2014-12-01
The PICASSO (Program to Investigate Convective Alboran Sea System Overturn) project and the concomitant TopoMed (Plate re-organization in the western Mediterranean: Lithospheric causes and topographic consequences - an ESF EUROCORES TOPO-EUROPE project) project were designed to collect high-resolution, multi-disciplinary lithospheric-scale data in order to understand the tectonic evolution and lithospheric structure of the western Mediterranean. The over-arching objectives of the magnetotelluric (MT) component of the projects are (i) to provide new electrical conductivity constraints on the crustal and lithospheric structure of the Atlas Mountains, and (ii) to test the hypotheses for explaining the purported lithospheric cavity beneath the Middle and High Atlas inferred from potential-field lithospheric modeling. We present the results of an MT experiment we carried out in Morocco along two profiles: an approximately N-S oriented profile crossing the Middle Atlas, the High Atlas and the eastern Anti-Atlas to the east (called the MEK profile, for Meknes) and a NE-SW oriented profile through the western High Atlas to the west (called the MAR profile, for Marrakech). Our results are derived from three-dimensional (3-D) MT inversion of the MT data set employing the parallel version of the Modular system for Electromagnetic inversion (ModEM) code. The distinct conductivity difference between the Middle-High Atlas (conductive) and the Anti-Atlas (resistive) correlates with the South Atlas Front fault, the depth extent of which appears to be limited to the uppermost mantle (approx. 60 km). In all inverse solutions, the crust and the upper mantle show resistive signatures (approx. 1,000 Ωm) beneath the Anti-Atlas, which is part of the stable West African Craton.
Partial melt and/or exotic fluids enriched in volatiles produced by the melt can account for the high middle to lower crustal and uppermost mantle conductivity in the Folded Middle Atlas, the High Moulouya Plain and the central High Atlas.
Conservative-variable average states for equilibrium gas multi-dimensional fluxes
NASA Technical Reports Server (NTRS)
Iannelli, G. S.
1992-01-01
Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.
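The "fundamental eigenvalue consistency theorem" mentioned above builds on the standard conservation constraint for average states. As a reminder in generic textbook notation (not the author's exact symbols), a Roe-type average Jacobian $\bar{A}$ must satisfy

$$\bar{A}(U_L, U_R)\,(U_R - U_L) = F(U_R) - F(U_L),$$

where $U_L$, $U_R$ are the left and right conservative states and $F$ the flux. For an equilibrium gas with $p = p(\rho, \varepsilon)$, the corresponding compatibility relation constrains the two average pressure derivatives:

$$p_R - p_L = \overline{\left(\frac{\partial p}{\partial \rho}\right)}\,(\rho_R - \rho_L) + \overline{\left(\frac{\partial p}{\partial \varepsilon}\right)}\,(\varepsilon_R - \varepsilon_L).$$

This is the sense in which the abstract's two average partial derivatives of pressure are determined simultaneously and exactly, without auxiliary relations.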
Levant, Ronald F; Hall, Rosalie J; Weigold, Ingrid K; McCurdy, Eric R
2015-07-01
Focusing on a set of 3 multidimensional measures of conceptually related but different aspects of masculinity, we use factor analytic techniques to address 2 issues: (a) whether psychological constructs that are theoretically distinct but require fairly subtle discriminations by survey respondents can be accurately captured by self-report measures, and (b) how to better understand sources of variance in subscale and total scores developed from such measures. The specific measures investigated were the: (a) Male Role Norms Inventory-Short Form (MRNI-SF); (b) Conformity to Masculine Norms Inventory-46 (CMNI-46); and (c) Gender Role Conflict Scale-Short Form (GRCS-SF). Data (N = 444) were from community-dwelling and college men who responded to an online survey. EFA results demonstrated the discriminant validity of the 20 subscales comprising the 3 instruments, thus indicating that relatively subtle distinctions between norms, conformity, and conflict can be captured with self-report measures. CFA was used to compare 2 different methods of modeling a broad/general factor for each of the 3 instruments. For the CMNI-46 and MRNI-SF, a bifactor model fit the data significantly better than did a hierarchical factor model. In contrast, the hierarchical model fit better for the GRCS-SF. The discussion addresses implications of these specific findings for use of the measures in research studies, as well as broader implications for measurement development and assessment in other research domains of counseling psychology which also rely on multidimensional self-report instruments. (PsycINFO Database Record PMID:26167651
Yun He; H. Q. Ding
2002-01-01
We investigate remapping multi-dimensional arrays on clusters of SMP architectures under OpenMP, MPI, and hybrid paradigms. The traditional method of array transposition needs an auxiliary array of the same size and a copy-back stage. We recently developed an in-place method using vacancy tracking cycles. The vacancy tracking algorithm outperforms the traditional two-array method, as demonstrated by extensive comparisons. The independence
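The vacancy tracking idea can be sketched compactly: lifting one element out of the flattened array opens a vacancy, which is filled by the element that belongs in that slot, which opens a new vacancy, and so on until the permutation cycle closes, so no auxiliary array is needed. A minimal pure-Python sketch for the in-place transpose case (function name and cycle bookkeeping are illustrative, not the authors' code):

```python
def transpose_inplace(a, rows, cols):
    # In-place transpose of a rows x cols matrix stored row-major in the flat
    # list a, by following vacancy tracking cycles: each element is moved
    # directly into the current vacancy, which opens a new vacancy, until the
    # permutation cycle closes. Only one temporary element is held at a time.
    n = rows * cols
    if n <= 2:
        return a
    visited = [False] * n
    for start in range(1, n - 1):        # indices 0 and n-1 are fixed points
        if visited[start]:
            continue
        saved = a[start]                 # lift one element out: the vacancy
        i = start
        while True:
            visited[i] = True
            # the element that belongs in slot i comes from (i*cols) mod (n-1)
            src = (i * cols) % (n - 1)
            if src == start:
                a[i] = saved             # cycle closed: drop saved element in
                break
            a[i] = a[src]                # fill vacancy; src is the new vacancy
            i = src
    return a
```

The index arithmetic uses the fact that, for a row-major rows x cols matrix, the element destined for flat index i in the transposed layout sits at (i*cols) mod (rows*cols - 1).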
Technical note: a multi-dimensional description of knee laxity using radial basis functions.
Cyr, Adam J; Maletsky, Lorin P
2015-01-01
The net laxity of the knee is a product of individual ligament structures that provide constraint for multiple degrees of freedom (DOF). Clinical laxity assessments are commonly performed along a single axis of motion, and lack analyses of primary and coupled motions in terms of translations and rotations of the knee. Radial basis functions (RBFs) allow multiple DOF to be incorporated into a single method that accounts for all DOF equally. To evaluate this method, tibiofemoral kinematics were experimentally collected from a single cadaveric specimen during a manual laxity assessment. A radial basis function (RBF) analysis was used to approximate new points over a uniform grid space. The normalized root mean square errors of the approximated points were below 4% for all DOF. This method provides a unique approach to describing joint laxity that incorporates multiple DOF in a single model. PMID:25115564
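As an illustration of the RBF machinery described above, here is a minimal Gaussian-RBF interpolator in pure Python. The kernel choice, shape parameter `eps` and function names are assumptions for the sketch, not the authors' implementation; the weights solve the symmetric system Phi w = v, so the fitted surface reproduces the measured kinematics exactly at the sampled poses:

```python
import math

def _dist(p, q):
    # Euclidean distance between two DOF vectors (all DOF weighted equally)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def _solve(A, b):
    # Gaussian elimination with partial pivoting; adequate for the small
    # systems arising from a single laxity assessment.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(points, values, eps=1.0):
    # Interpolation weights w solving Phi w = values,
    # with Gaussian kernel Phi_ij = exp(-(eps * ||x_i - x_j||)^2).
    phi = [[math.exp(-(eps * _dist(p, q)) ** 2) for q in points] for p in points]
    return _solve(phi, list(values))

def rbf_eval(points, weights, x, eps=1.0):
    # Approximate the response at a new pose x over a uniform grid or otherwise.
    return sum(w * math.exp(-(eps * _dist(p, x)) ** 2)
               for p, w in zip(points, weights))
```

In practice a library routine (e.g. a dedicated RBF interpolator) would replace the hand-rolled solver, but the structure - one basis function per sampled pose, one linear solve - is the whole method.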
Automated Sholl analysis of digitized neuronal morphology at multiple scales.
Kutzing, Melinda K; Langhammer, Christopher G; Luo, Vincent; Lakdawala, Hersh; Firestein, Bonnie L
2010-01-01
Neuronal morphology plays a significant role in determining how neurons function and communicate. Specifically, it affects the ability of neurons to receive inputs from other cells and contributes to the propagation of action potentials. The morphology of the neurites also affects how information is processed. The diversity of dendrite morphologies facilitates local and long-range signaling and allows individual neurons or groups of neurons to carry out specialized functions within the neuronal network. Alterations in dendrite morphology, including fragmentation of dendrites and changes in branching patterns, have been observed in a number of disease states, including Alzheimer's disease, schizophrenia, and mental retardation. The ability both to understand the factors that shape dendrite morphologies and to identify changes in dendrite morphologies is essential to the understanding of nervous system function and dysfunction. Neurite morphology is often analyzed by Sholl analysis and by counting the number of neurites and the number of branch tips. This analysis is generally applied to dendrites, but it can also be applied to axons. Performing this analysis by hand is time consuming and inevitably introduces variability due to experimenter bias and inconsistency. The Bonfire program is a semi-automated approach to the analysis of dendrite and axon morphology that builds upon available open-source morphological analysis tools. Our program enables the detection of local changes in dendrite and axon branching behaviors by performing Sholl analysis on subregions of the neuritic arbor. For example, Sholl analysis is performed on both the neuron as a whole as well as on each subset of processes (primary, secondary, terminal, root, etc.). Dendrite and axon patterning is influenced by a number of intracellular and extracellular factors, many acting locally. 
Thus, the resulting arbor morphology is a result of specific processes acting on specific neurites, making it necessary to perform morphological analysis on a smaller scale in order to observe these local variations. The Bonfire program requires the use of two open-source analysis tools, the NeuronJ plugin to ImageJ and NeuronStudio. Neurons are traced in ImageJ, and NeuronStudio is used to define the connectivity between neurites. Bonfire contains a number of custom scripts written in MATLAB (MathWorks) that are used to convert the data into the appropriate format for further analysis, check for user errors, and ultimately perform Sholl analysis. Finally, data are exported into Excel for statistical analysis. A flow chart of the Bonfire program is shown in Figure 1. PMID:21113115
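The Sholl counting step itself is simple to state: intersect the traced arbor with concentric circles around the soma and count crossings per radius. A toy 2D version (illustrative only; the Bonfire pipeline works on NeuronJ/NeuronStudio traces, and this sketch registers a crossing only when a segment's endpoints straddle the circle, which is adequate for finely sampled traces):

```python
import math

def sholl_counts(segments, soma, radii):
    # segments: traced neurite segments as ((x1, y1), (x2, y2)) pairs;
    # soma: (x, y) of the cell body.
    # For each radius, count segments whose endpoints lie on opposite sides of
    # the circle of that radius, i.e. intersections of the arbor with it.
    cx, cy = soma
    def r_of(p):
        return math.hypot(p[0] - cx, p[1] - cy)
    return [sum(1 for p, q in segments if (r_of(p) - r) * (r_of(q) - r) < 0)
            for r in radii]
```

Running the same counter on a subset of segments (primary only, terminal only, and so on) is exactly the "Sholl analysis on subregions of the arbor" idea described above.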
Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century
NASA Astrophysics Data System (ADS)
Bartsch, H.; Erlebacher, G.
2003-12-01
amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.
A Multi-scale Approach to Urban Thermal Analysis
NASA Technical Reports Server (NTRS)
Gluch, Renne; Quattrochi, Dale A.
2005-01-01
An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.
Zhang, Zhiyong; Huang, Yuqing; Smith, Pieter E S; Wang, Kaiyu; Cai, Shuhui; Chen, Zhong
2014-05-01
Heteronuclear NMR spectroscopy is an extremely powerful tool for determining the structures of organic molecules and is of particular significance in the structural analysis of proteins. In order to leverage the method's potential for structural investigations, obtaining high-resolution NMR spectra is essential, and this is generally accomplished by using very homogeneous magnetic fields. However, there are several situations where magnetic field distortions, and thus line broadening, are unavoidable; for example, the samples under investigation may be inherently heterogeneous, or the magnet's homogeneity may be poor. This line broadening can hinder resonance assignment or even render it impossible. We put forth a new class of pulse sequences for obtaining high-resolution heteronuclear spectra in magnetic fields with unknown spatial variations based on distant dipolar field modulations. This strategy's capabilities are demonstrated with the acquisition of high-resolution 2D gHSQC and gHMBC spectra. These sequences' performances are evaluated on the basis of their sensitivities and acquisition efficiencies. Moreover, we show that by encoding and decoding NMR observables spatially, as is done in ultrafast NMR, an extra dimension containing J-coupling information can be obtained without increasing the time necessary to acquire a heteronuclear correlation spectrum. Since the new sequences relax the magnetic field homogeneity constraints imposed upon high-resolution NMR, they may be applied in portable NMR sensors and studies of heterogeneous chemical and biological materials. PMID:24607822
Multi-Dimensional Cfd-Transmission Matrix Modelling of IC Engine Intake and Exhaust Systems
NASA Astrophysics Data System (ADS)
Chiavola, O.
2002-10-01
A method for investigating the overall performance of internal combustion engine intake and exhaust systems is proposed. The method is based on the combination of a time-domain non-linear model (used to perform fluid and thermodynamic analysis in cylinders, valves and manifolds) and a linear acoustic method (used to predict the spectral characteristics of the remainder of the system). The time-domain approach is based on the simultaneous use of zero-, one- and three-dimensional fluid dynamic models, applied to different regions of the same geometry. The frequency-domain approach is based on acoustic theory and uses the transfer matrix technique. Procedures have been developed both to couple the different calculation domains analyzed by time-domain models and to interface the fluid dynamic and linear acoustic models. In this paper, the obtained results are compared with the predictions of another simulation technique and with experimental measurements, showing that the developed method is a reliable tool for the prediction of complete intake and exhaust systems.
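The transfer matrix technique used for the frequency-domain part can be illustrated with the textbook plane-wave duct element (generic linear acoustics, not the paper's specific formulation): each element maps inlet pressure and volume velocity to the outlet values, and elements in series simply multiply:

```python
import math

def duct_tm(length, area, k, rho_c=415.0):
    # 2x2 plane-wave transfer matrix of a uniform duct element relating
    # acoustic pressure and volume velocity at its two ends.
    # rho_c: characteristic impedance of air (kg m^-2 s^-1), k: wavenumber.
    kl = k * length
    z = rho_c / area                      # duct characteristic impedance
    return [[math.cos(kl), 1j * z * math.sin(kl)],
            [1j * math.sin(kl) / z, math.cos(kl)]]

def chain(a, b):
    # Elements in series multiply: T_total = T1 . T2
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]
```

A quick sanity check of the sketch is the group property: two ducts of equal cross-section chained together behave like a single duct of the combined length.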
Multi-Dimensional Density Estimation and Phase Space Structure Of Dark Matter Halos
Sanjib Sharma; Matthias Steinmetz
2008-03-03
We present a method to numerically estimate the density of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing roughly equal numbers of particles, until each node contains only one particle. The volume of such a leaf node provides an estimate of the local density. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric free and can be applied to an arbitrary number of dimensions. We apply this method to determine the phase space densities of dark matter halos obtained from cosmological N-body simulations. We find that, contrary to earlier studies, the volume distribution function $v(f)$ of phase space density $f$ does not have a constant slope but rather a small hump at high phase space densities. We demonstrate that a model in which a halo is made up of a superposition of Hernquist spheres is not capable of explaining the shape of the $v(f)$ vs $f$ relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behavior seen in simulations. The use of the presented method is not limited to the calculation of phase space densities: it can be used as a general-purpose data-mining tool and, due to its speed and accuracy, is ideally suited for the analysis of large multidimensional data sets.
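The tree construction described above (median splits until one particle per leaf, inverse leaf volume as the density estimate) can be sketched in a few lines. This illustrative version splits along the widest dimension of the current bounding box, assumes distinct coordinates, and omits the paper's entropy-based splitting criterion:

```python
def bsp_density(points, bounds):
    # points: list of coordinate tuples; bounds: list of (lo, hi) per dimension.
    # Recursively halve the particle set along the widest dimension until each
    # leaf holds one particle; the inverse leaf volume estimates local density.
    if len(points) == 1:
        vol = 1.0
        for lo, hi in bounds:
            vol *= hi - lo
        return [(points[0], 1.0 / vol)]
    axis = max(range(len(bounds)), key=lambda d: bounds[d][1] - bounds[d][0])
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    cut = 0.5 * (pts[mid - 1][axis] + pts[mid][axis])  # split between halves
    left = [(lo, cut) if d == axis else (lo, hi)
            for d, (lo, hi) in enumerate(bounds)]
    right = [(cut, hi) if d == axis else (lo, hi)
             for d, (lo, hi) in enumerate(bounds)]
    return bsp_density(pts[:mid], left) + bsp_density(pts[mid:], right)
```

Because splits are defined by particle order along each axis, not by distances, the estimator is metric free in the sense the abstract describes, and it generalizes to any number of dimensions unchanged.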
Investigation of Biogrout processes by numerical analysis at pore scale
NASA Astrophysics Data System (ADS)
Bergwerff, Luke; van Paassen, Leon A.; Picioreanu, Cristian; van Loosdrecht, Mark C. M.
2013-04-01
Biogrout is a soil improvement process that aims to increase the strength of sandy soils. The process is based on microbially induced calcite precipitation (MICP). In this study the main process is based on denitrification, facilitated by bacteria indigenous to the soil using substrates which can be derived from pretreated waste streams containing calcium salts of fatty acids and calcium nitrate, making it a cost-effective and environmentally friendly process. The goal of this research is to improve the understanding of the process by numerical analysis so that it may be applied properly for varying applications, such as borehole stabilization, liquefaction prevention, levee fortification and mitigation of beach erosion. During the denitrification process there are many phases present in the pore space, including a liquid phase containing solutes, crystals, bacteria forming biofilms, and gas bubbles. Due to the number of phases and their dynamic changes (multiphase flow with (non-linear) reactive transport), there are many interactions, making the process very complex. To understand this complexity, the interactions between these phases are studied in a reductionist approach, increasing the complexity of the system by one phase at a time. The model will initially include flow, solute transport, and crystal nucleation and growth in 2D at pore scale. The flow will be described by the Navier-Stokes equations. Initial studies and simulations have revealed that describing crystal growth for this application on a fixed grid can introduce significant fundamental errors. Therefore a level set method will be employed to better describe the interface of developing crystals in between sand grains. Afterwards the model will be expanded to 3D to provide more realistic flow, nucleation and clogging behaviour at pore scale. Next biofilms and lastly gas bubbles may be added to the model. 
From the results of these pore scale models the behaviour of the system may be studied and eventually observations may be extrapolated to a larger continuum scale.
Tera-scale astronomical data analysis and visualization
NASA Astrophysics Data System (ADS)
Hassan, A. H.; Fluke, C. J.; Barnes, D. G.; Kilborn, V. A.
2013-03-01
We present a high-performance, graphics processing unit (GPU) based framework for the efficient analysis and visualization of (nearly) terabyte (TB) sized 3D images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image (1) volume rendering using an arbitrary transfer function at 7-10 frames per second, (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s, (3) evaluation of the image histogram in 4 s and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching 1 teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows that the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array Pathfinder radio telescopes.
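The global mean and standard deviation are fast in this setting because each GPU only has to reduce its chunk of the image to a few partial sums, which are then merged on the host. A single-pass sketch of that reduction pattern, with plain Python lists standing in for per-GPU chunks (not the authors' GPU code):

```python
import math

def chunked_mean_std(chunks):
    # Each worker reduces its chunk to partial sums (n, sum, sum of squares);
    # merging partials is associative, which is what makes this kind of
    # reduction embarrassingly parallel across GPUs or nodes.
    n = 0
    s = 0.0
    ss = 0.0
    for chunk in chunks:
        n += len(chunk)
        s += sum(chunk)
        ss += sum(v * v for v in chunk)
    mean = s / n
    var = max(ss / n - mean * mean, 0.0)  # population variance; clamp fp noise
    return mean, math.sqrt(var)
```

The histogram merges the same way (per-chunk bin counts add elementwise); the median is the hard case, since it is not a simple reduction, which is consistent with it being the slowest of the four operations quoted above.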
Analysis of hydrologically triggered clayey landslides by small-scale experiments
NASA Astrophysics Data System (ADS)
Spickermann, A.; Malet, J.-P.; van Asch, T. W. J.; Schanz, T.
2010-05-01
Hydrological processes, such as slope saturation by water, are a primary cause of landslides. This effect can occur in the form of, e.g., intense rainfall, snowmelt or changes in ground-water levels. Hydrological processes can trigger a landslide and control subsequent movement. In order to forecast potential landslides, it is important to know both the mechanism leading to failure, to evaluate whether a slope will fail or not, and the mechanisms that control the movement of the failed mass, to estimate how much material will move and in what time. Despite the numerous studies that have been carried out, there is still uncertainty in the explanation of the processes determining failure and post-failure behaviour. The background and motivation of the study is the Barcelonnette area, part of the Ubaye Valley in the southern French Alps, which is highly affected by hydrologically controlled landslides in reworked black marls. Since landslide processes are too complex to understand by field observation alone, experiments and computer calculations are used. The main focus of this work is to analyse the initiation of failure and the post-failure behaviour of hydrologically triggered landslides in clays by small-scale experiments, namely small-scale flume tests and centrifuge tests. Although much effort has been devoted to investigating the landslide problem by either small-scale or even large-scale slope experiments, there is still no optimal solution. Small-scale flume tests are often criticised because of scale-effect problems, dominant in dense sands and cohesive material, and boundary problems. By means of centrifuge tests the scale problem with respect to stress conditions is overcome. But centrifuge testing, too, is accompanied by problems. 
The objectives of the work are 1) to review potential failure and post-failure mechanisms, 2) to evaluate small-scale experiments, namely flume and centrifuge tests, in the analysis of the failure behaviour of clayey slopes and 3) to interpret the failure behaviour and possible mechanisms in tests on Zoelen clay and black marls by numerical calculations. After a general overview of the mechanisms that might initiate failure and the mechanisms that might determine post-failure motion, relevant for landslides occurring in non-cohesive and cohesive slopes, the tests performed on reworked black marls are presented. The problems and restrictions of both test methods are explained, and strategies for future tests are given. The mechanisms assumed to trigger failure and to control post-failure motion that have been observed in the tests are examined by numerical modelling. It is shown that the results of the numerical simulation make an important contribution to the interpretation of the experimental observations and to the evaluation of the small-scale experiments.
NASA Astrophysics Data System (ADS)
Pandarinath, Kailasa
2014-12-01
Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and probability-based procedures have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. Numerous studies extensively evaluating these newly developed diagrams have indicated their successful application for inferring the original tectonic setting of younger and older, as well as seawater- and hydrothermally-altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) within-plate (continental rift-ocean island) and continental rift (CR) settings for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yield inconsistent inferences of tectonic setting. 
The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of Precambrian rocks.
MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 Å
Anusha, L. S.; Nagendra, K. N. E-mail: knn@iiap.res.in
2013-04-20
Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained from magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 Å using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed from a two-dimensional snapshot of three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by the MHD structuring in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, while in the line core only the line-center polarization can be reproduced using the Hanle effect. 
For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with inhomogeneities in the chromosphere are required.
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using the geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. 
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
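The average frequency scale factor defined above is just the mean ratio of wing to equivalent-plate modal frequencies, after which scaled-plate frequencies multiplied by the factor approximate the wing's. A sketch of that bookkeeping (function names illustrative, not from the paper):

```python
def avg_frequency_scale(wing_freqs, plate_freqs):
    # Average ratio of aircraft-wing modal frequencies to the corresponding
    # full-scale equivalent-plate frequencies, mode by mode.
    return sum(w / p for w, p in zip(wing_freqs, plate_freqs)) / len(wing_freqs)

def predict_wing_freqs(plate_freqs, scale):
    # Equivalent-plate frequencies rescaled to approximate the wing's.
    return [scale * f for f in plate_freqs]
```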
Political risk analysis in large-scale mineral investments
Proehl, T.S.
1985-01-01
This dissertation emphasizes problems encountered in applying current techniques within the framework of the expected-net-present-value investment evaluation paradigms commonly employed by mineral extraction firms. A method of political risk analysis consistent with expected-net-present-value paradigms is presented. This method of political risk analysis is grounded in the neoclassical tradition of economics which holds that economics should determine politics. The method of political risk analysis presented consists of direct and indirect portions. The direct portion of the method requires electoral polling to formulate support distributions for possible host nation policies toward foreign investors. It is applicable in freely politicized host nations. The indirect portion of the method presumes that abnormalities in economic trends produce political pressures intended to return a host nation economy to its normal state. Large-scale mineral investments are particularly vulnerable to political pressures and are at risk whenever economic abnormalities in a host nation manifest themselves. The degree of political risk present at any time is a direct function of the deviation of a host nation economy from its normal condition.
Tera-scale Astronomical Data Analysis and Visualization
Hassan, A H; Barnes, D G; Kilborn, V A
2012-01-01
We present a high-performance, graphics processing unit (GPU)-based framework for the efficient analysis and visualization of (nearly) terabyte (TB)-sized 3-dimensional images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image: (1) volume rendering using an arbitrary transfer function at 7-10 frames per second; (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s; (3) evaluation of the image histogram in 4 s; and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching one teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly, we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array Pathfinder radio telescopes.
Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis
ERIC Educational Resources Information Center
Chow, Kui Foon; Kennedy, Kerry John
2014-01-01
International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…
Large-scale dimension densities for heart rate variability analysis
NASA Astrophysics Data System (ADS)
Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen
2006-04-01
In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to completely separate the AF group (σls = 0.97±0.02) from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65±0.13; EH, 0.54±0.05; YH, 0.57±0.05; p<0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in σls (day, 0.65±0.13; night, 0.66±0.12; n.s.) in contrast to healthy controls (day, 0.54±0.05; night, 0.61±0.05; p=0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
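The LASDID estimate builds on the Grassberger-Procaccia correlation sum; the authors' normalized, boundary-corrected variant is not reproduced here, but a minimal sketch of the plain correlation sum, whose log-log slope estimates the correlation dimension, looks like this (illustrative parameters only):

```python
import numpy as np

def correlation_sum(x, r, delay=1, dim=2):
    # Delay-embed the series, then count the fraction of point pairs
    # closer than r (the Grassberger-Procaccia correlation sum C(r)).
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return np.mean(d[np.triu_indices(n, k=1)] < r)

rng = np.random.default_rng(1)
x = rng.normal(size=400)                 # stand-in for an RR-interval series
radii = np.array([0.5, 1.0, 2.0])
c = np.array([correlation_sum(x, r) for r in radii])
# The slope of log C(r) versus log r estimates the correlation dimension.
slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
```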
Multidimensional Scaling Analysis of the Dynamics of a Country Economy
Mata, Maria Eugénia
2013-01-01
This paper analyzes the Portuguese short-run business cycles over the last 150 years and applies multidimensional scaling (MDS) to visualize the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, revealing similarities and dissimilarities that mark periods of prosperity and crisis, growth and stagnation. Such features are major aspects of collective national achievement, with which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for Portuguese society in the rising globalization process. PMID:24294132
Cluster coarsening during polymer collapse: Finite-size scaling analysis
NASA Astrophysics Data System (ADS)
Majumder, Suman; Janke, Wolfhard
2015-06-01
We study the kinetics of the collapse of a single flexible polymer when it is quenched from a good solvent to a poor solvent. Results obtained from Monte Carlo simulations show that the collapse occurs through a sequence of events with the formation, growth and subsequent coalescence of clusters of monomers to a single compact globule. Particular emphasis is given in this work to the cluster growth during the collapse, analyzed via the application of finite-size scaling techniques. The growth exponent obtained in our analysis is suggestive of the universal Lifshitz-Slyozov mechanism of cluster growth. The methods used in this work could be of more general validity and applicable to other phenomena such as protein folding.
Large-Scale Quantitative Analysis of Painting Arts
NASA Astrophysics Data System (ADS)
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting art has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings and so build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images: the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877
Global Mapping Analysis: Stochastic Gradient Algorithm in Multidimensional Scaling
NASA Astrophysics Data System (ADS)
Matsuda, Yoshitatsu; Yamaguchi, Kazunori
In order to implement multidimensional scaling (MDS) efficiently, we propose a new method named “global mapping analysis” (GMA), which applies stochastic approximation to the minimization of MDS criteria. GMA can solve MDS more efficiently in both the linear case (classical MDS) and the non-linear one (e.g., ALSCAL), provided the MDS criteria are polynomial. GMA separates the polynomial criteria into local factors and global ones. Because the global factors need to be calculated only once in each iteration, GMA is of linear order in the number of objects. Numerical experiments on artificial data verify the efficiency of GMA. It is also shown that GMA can uncover various interesting structures from massive document collections.
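The idea of minimizing an MDS criterion by stochastic approximation can be sketched with plain stochastic gradient descent on the raw stress, sampling one pair of objects per step. This is not GMA's local/global factor separation, just a minimal illustration of stochastic-gradient MDS:

```python
import numpy as np

def mds_sgd(D, dim=2, steps=30000, lr=0.01, seed=0):
    # Minimize the raw stress sum_{i<j} (||x_i - x_j|| - D_ij)^2 by
    # stochastic approximation: sample one pair per step, move both points.
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        diff = X[i] - X[j]
        dist = np.linalg.norm(diff) + 1e-12
        g = 2.0 * (dist - D[i, j]) * diff / dist   # gradient w.r.t. X[i]
        X[i] -= lr * g
        X[j] += lr * g
    return X

# Recover a 3-4-5 right triangle from its pairwise distances.
P = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
X = mds_sgd(D)
E = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # recovered distances
```

The recovered configuration matches the target up to rotation and reflection, which is all any MDS solution determines.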
Parallel Index and Query for Large Scale Data Analysis
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
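FastBit-style bitmap indexing answers range queries by OR-ing precomputed per-bin bitmaps and element-checking only the boundary bin. Below is a toy Python sketch of that idea, with boolean NumPy masks standing in for bitmaps (FastBit itself compresses its bitmaps and operates at far larger scale):

```python
import numpy as np

# Toy "particle" table: energies for 200,000 particles.
rng = np.random.default_rng(2)
energy = rng.exponential(2.0, size=200_000)

# Bin the variable and precompute one bitmap (boolean mask) per bin,
# the core idea behind bitmap indexes such as FastBit.
bins = np.linspace(0.0, 10.0, 21)
bin_of = np.digitize(energy, bins)
bitmap = {k: bin_of == k for k in range(len(bins) + 1)}

def query_energy_above(threshold):
    # OR together the bitmaps for bins entirely above the threshold, then
    # element-check only the single boundary bin (the "candidate check").
    k0 = int(np.digitize(threshold, bins))
    mask = np.zeros(energy.size, dtype=bool)
    for k in range(k0 + 1, len(bins) + 1):
        mask |= bitmap[k]
    mask |= bitmap[k0] & (energy > threshold)
    return mask

hits = query_energy_above(3.0)
```

Only one bin's raw values are touched per query; the rest of the answer comes from cheap bitwise ORs.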
Confirmatory Factor Analysis of the Educators' Attitudes toward Educational Research Scale
ERIC Educational Resources Information Center
Ozturk, Mehmet Ali
2011-01-01
This article reports results of a confirmatory factor analysis performed to cross-validate the factor structure of the Educators' Attitudes Toward Educational Research Scale. The original scale had been developed by the author and revised based on the results of an exploratory factor analysis. In the present study, the revised scale was given to…
Interactive Analysis of Web-Scale Data
Olston, Christopher
Interactive Analysis of Web-Scale Data. Christopher Olston, Edward Bortnikov, Khaled Elmeleegy. ... interactive analysis over web-scale data. The basic approach is to view querying as a two-phase activity: first supply ... A web-scale data set has tables of cardinality 10^10 or more, with perhaps hundreds ...
Spatial data analysis for exploration of regional scale geothermal resources
NASA Astrophysics Data System (ADS)
Moghaddam, Majid Kiavarz; Noorollahi, Younes; Samadzadegan, Farhad; Sharifi, Mohammad Ali; Itoi, Ryuichi
2013-10-01
Defining a comprehensive conceptual model of the resources sought is one of the most important steps in geothermal potential mapping. In this study, Fry analysis as a spatial distribution method and 5% well existence, distance distribution, weights of evidence (WofE), and evidential belief function (EBF) methods as spatial association methods were applied comparatively to known geothermal occurrences, and to publicly available regional-scale geoscience data in Akita and Iwate provinces within the Tohoku volcanic arc, in northern Japan. Fry analysis and rose diagrams revealed similar directional patterns of geothermal wells and volcanoes, NNW-, NNE-, and NE-trending faults, hot springs and fumaroles. Among the spatial association methods, WofE defined a conceptual model consistent with real-world conditions, as confirmed by expert opinion. The results of the spatial association analyses quantitatively indicated that the known geothermal occurrences are strongly spatially associated with geological features such as volcanoes, craters and NNW-, NNE-, and NE-direction faults, and with geochemical features such as hot springs, hydrothermal alteration zones and fumaroles. The geophysical data contain temperature gradients over 100 °C/km and heat flow over 100 mW/m2. In general, geochemical and geophysical data were better evidence layers than geological data for exploring geothermal resources. The spatial analyses of the case study area suggest that quantitative knowledge of hydrothermal geothermal resources is highly useful for further exploration and for geothermal potential mapping in the region. The results can also be extended to regions with broadly similar characteristics.
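The weights-of-evidence method scores how strongly a binary evidence layer is associated with known occurrences via W+ = ln[P(B|D)/P(B|not D)] and the contrast C = W+ - W-. A minimal sketch on synthetic map cells (not the study's data):

```python
import numpy as np

def weights_of_evidence(evidence, deposits):
    # evidence, deposits: boolean arrays over map cells.
    # W+ = ln P(B|D)/P(B|~D), W- = ln P(~B|D)/P(~B|~D), contrast C = W+ - W-.
    b_d = np.mean(evidence[deposits])      # P(evidence | occurrence)
    b_nd = np.mean(evidence[~deposits])    # P(evidence | no occurrence)
    w_plus = np.log(b_d / b_nd)
    w_minus = np.log((1.0 - b_d) / (1.0 - b_nd))
    return w_plus, w_minus, w_plus - w_minus

# Synthetic map: occurrences are 10x likelier on cells with the evidence layer.
rng = np.random.default_rng(3)
evidence = rng.random(10_000) < 0.2
deposits = rng.random(10_000) < np.where(evidence, 0.05, 0.005)
w_plus, w_minus, contrast = weights_of_evidence(evidence, deposits)
```

A positive contrast marks the layer as a useful predictor; layers are then combined under an assumption of conditional independence.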
Pratt, Vaughan
`confession' text from the website ExperienceProject.com: "I have a crush on my boss! *blush* eeek*" ... As of this writing, the above confession had received the following distribution of reactions: `that rocks': 1 ... to elicit less weighty reactions as well. The comments on the confession reflect the summary offered
Gupta, S.K.; Kincaid, C.T.; Meyer, P.R.; Newbill, C.A.; Cole, C.R.
1982-08-01
The Seasonal Thermal Energy Storage Program is being conducted for the Department of Energy by Pacific Northwest Laboratory. A major thrust of this program has been the study of natural aquifers as hosts for thermal energy storage and retrieval. Numerical simulation of the nonisothermal response of the host media is fundamental to the evaluation of proposed experimental designs and field test results. This report represents the primary documentation for the coupled fluid, energy and solute transport (CFEST) code. Sections of this document are devoted to the conservation equations and their numerical analogues, the input data requirements, and the verification studies completed to date.
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...
ERIC Educational Resources Information Center
Sengupta, Atanu; Pal, Naibedya Prasun
2012-01-01
Primary education is essential for the economic development in any country. Most studies give more emphasis to the final output (such as literacy, enrolment etc.) rather than the delivery of the entire primary education system. In this paper, we study the school level data from an Indian district, collected under the official DISE statistics. We…
NASA Technical Reports Server (NTRS)
Cao, Yiding; Faghri, Amir; Chang, Won Soon
1989-01-01
An enthalpy transforming scheme is proposed to convert the energy equation into a nonlinear equation with the enthalpy, E, being the single dependent variable. The existing control-volume finite-difference approach is modified so it can be applied to the numerical solution of Stefan problems. The model is tested by applying it to a three-dimensional freezing problem. The numerical results are in agreement with those in the literature. The model and its algorithm are further applied to a three-dimensional moving heat source problem, showing that the methodology is capable of handling complicated phase-change problems with fixed grids.
Brabets, Timothy P.; Conaway, Jeffrey S.
2009-01-01
The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft3/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough. 
Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would likely fill in during high flows. Extending Bridge 339 would accommodate higher discharges and re-align flow to the bridge.
Saravanabhavan, Gurusankar; Helferty, Anjali; Hodson, Peter V; Brown, R Stephen
2007-07-13
We report an offline multi-dimensional high performance liquid chromatography (HPLC) technique for the group separation and analysis of PAHs in a heavy gas oil fraction (boiling range 287-481 degrees C). Waxes present in the heavy gas oil fraction were precipitated using cold acetone at -20 degrees C. Recovery studies showed that the extract contained 93% (+/-1%; n=3) of the PAHs that were originally present while the wax residue contained only 6% (+/-0.5%; n=3). PAHs present in the extract were fractionated, based on number of rings, into five fractions using a semi-preparative silica column (normal-phase HPLC). These fractions were analyzed using reverse-phase HPLC (RP-HPLC) coupled to a diode array detector (DAD). The method separated alkyl and un-substituted PAHs on two reverse-phase columns in series using an acetonitrile/water mobile phase. UV spectra of the chromatographic peaks were used to differentiate among PAH groups. Further characterization of PAHs within a given group to determine the substituent alkyl carbon number used retention time matching with a suite of alkyl-PAH standards. Naphthalene, dibenzothiophene, phenanthrene and fluorene and their C1-C4 alkyl isomers were quantified. The concentrations of these compounds obtained using the current method were compared with that of a GC-MS analysis obtained from an independent oil chemistry laboratory. PMID:17482627
Raychaudhuri, Soumya
Application of user-guided automated cytometric data analysis to large-scale immunoprofiling. ... analysis relies on manual gating, which is a major source of variability in large-scale studies. We devised an automated analysis ... the expression of phenotypic and functional markers correlates closely with CD4 expression.
Performance analysis of ultra-scaled InAs HEMTs
del Alamo, Jesus A.
The scaling behavior of ultra-scaled InAs HEMTs is investigated using a 2-dimensional real-space effective mass ballistic quantum transport simulator. The simulation methodology is first benchmarked against experimental ...
Measuring Mathematics Anxiety: Psychometric Analysis of a Bidimensional Affective Scale
ERIC Educational Resources Information Center
Bai, Haiyan; Wang, LihShing; Pan, Wei; Frey, Mary
2009-01-01
The purpose of this study is to develop a theoretically and methodologically sound bidimensional affective scale measuring mathematics anxiety with high psychometric quality. The psychometric properties of a 14-item Mathematics Anxiety Scale-Revised (MAS-R) adapted from Betz's (1978) 10-item Mathematics Anxiety Scale were empirically analyzed on a…
Diffusion entropy analysis on the scaling behavior of financial markets
NASA Astrophysics Data System (ADS)
Cai, Shi-Min; Zhou, Pei-Ling; Yang, Hui-Jie; Yang, Chun-Xia; Wang, Bing-Hong; Zhou, Tao
2006-07-01
In this paper the diffusion entropy technique is applied to investigate the scaling behavior of financial markets. The scaling behaviors of four representative stock markets, the Dow Jones Industrial Average, Standard & Poor's 500, Hang Seng Index, and Shanghai Stock Synthetic Index, are almost the same, with the scale-invariance exponents all in the interval [0.92, 0.95]. We also estimate the local scaling exponents, which indicate that the financial time series are perfectly homogeneous. In addition, a parsimonious percolation model for stock markets is proposed, whose scaling behavior agrees well with real-life markets.
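Diffusion entropy analysis turns a series of increments into diffusion trajectories (sums over windows of length t) and fits the Shannon entropy of the displacement distribution to S(t) = A + delta*ln(t), where delta is the scaling exponent. A minimal sketch on synthetic uncorrelated increments, for which delta should be about 0.5:

```python
import numpy as np

def diffusion_entropy(xi, window_sizes, bins=50):
    # For each window length t, "diffuse" the series (sum t consecutive
    # increments) and estimate the Shannon entropy of the displacement
    # distribution from a histogram.
    S = []
    for t in window_sizes:
        x = np.convolve(xi, np.ones(t), mode="valid")   # overlapping windows
        hist, edges = np.histogram(x, bins=bins, density=True)
        dx = edges[1] - edges[0]
        p = hist[hist > 0]
        S.append(-np.sum(p * dx * np.log(p)))           # differential entropy
    return np.array(S)

rng = np.random.default_rng(4)
xi = rng.normal(size=100_000)              # uncorrelated increments
ts = np.array([4, 8, 16, 32, 64])
S = diffusion_entropy(xi, ts)
# S(t) = A + delta * ln(t); delta ~ 0.5 for ordinary Gaussian diffusion.
delta = np.polyfit(np.log(ts), S, 1)[0]
```

Exponents in the 0.92-0.95 range, as reported for the stock indices, would indicate strongly anomalous diffusion rather than this Gaussian baseline.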
Analysis of small-scale rotor hover performance data
NASA Technical Reports Server (NTRS)
Kitaplioglu, Cahit
1990-01-01
Rotor hover-performance data from a 1/6-scale helicopter rotor are analyzed and the data sets compared for the effects of ambient wind, test stand configuration, differing test facilities, and scaling. The data are also compared to full-scale hover data. The data exhibited high scatter, not entirely due to ambient wind conditions. Effects of download on the test stand proved to be the most significant influence on the measured data. Small-scale data correlated reasonably well with full-scale data; the correlation did not improve with Reynolds number corrections.
MicroScale Thermophoresis: Interaction analysis and beyond
NASA Astrophysics Data System (ADS)
Jerabek-Willemsen, Moran; André, Timon; Wanner, Randy; Roth, Heide Marie; Duhr, Stefan; Baaske, Philipp; Breitsprecher, Dennis
2014-12-01
MicroScale Thermophoresis (MST) is a powerful technique to quantify biomolecular interactions. It is based on thermophoresis, the directed movement of molecules in a temperature gradient, which strongly depends on a variety of molecular properties such as size, charge, hydration shell or conformation. Thus, this technique is highly sensitive to virtually any change in molecular properties, allowing for a precise quantification of molecular events independent of the size or nature of the investigated specimen. During an MST experiment, a temperature gradient is induced by an infrared laser. The directed movement of molecules through the temperature gradient is detected and quantified using either covalently attached or intrinsic fluorophores. By combining the precision of fluorescence detection with the variability and sensitivity of thermophoresis, MST provides a flexible, robust and fast way to dissect molecular interactions. In this review, we present recent progress and developments in MST technology and focus on MST applications beyond standard biomolecular interaction studies. By using different model systems, we introduce alternative MST applications, such as the determination of binding stoichiometries and binding modes, the analysis of protein unfolding, thermodynamics and enzyme kinetics. In addition, we demonstrate the capability of MST to quantify high-affinity interactions with dissociation constants (Kds) in the low picomolar (pM) range as well as protein-protein interactions in pure mammalian cell lysates.
Integrated Water and Energy Analysis at Decision Relevant Scales
NASA Astrophysics Data System (ADS)
Yates, D. N.; Sieber, J.; Heaps, C.; Purkey, D.; Mehta, V. K.
2012-12-01
While the energy-water nexus information base has been growing, there remain few modeling tools able to evaluate the interactions and feedbacks between these sectors at decision-relevant scales. In particular, a spatially explicit coupled modeling system would facilitate a more complete and accurate evaluation of the workability and consequences of alternative climate mitigation and adaptation strategies. For example, in order to evaluate a region's changing water supply sources, it will be important to represent the water side of a coupled modeling system in some detail. It would also be valuable to simultaneously work through the energy use and carbon emission consequences of the region's water adaptation alternatives, as well as the potential feedbacks associated with new energy system developments on system-wide water demands. The Water Evaluation and Planning (WEAP) and Long-range Energy Alternatives Planning (LEAP) tools have been integrated to explicitly couple water and energy systems to conduct such analysis. We demonstrate the merits of this integration for the Southwestern U.S. [Figure: the integration of LEAP (left) and WEAP (right).]
MIXREGLS: A Program for Mixed-Effects Location Scale Analysis.
Hedeker, Donald; Nordgren, Rachel
2013-03-01
MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally distributed dependent variable. This model can be used for analysis of data in which subjects are measured on many occasions and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062
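The core location-scale idea, modeling both the mean and the log-variance as functions of covariates by maximum likelihood, can be sketched without MIXREGLS's random location and scale effects. The following simplified heteroscedastic regression in Python is only an illustration of that mean/variance structure, not the MIXREGLS model itself:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: the mean of y is b0 + b1*x and the log residual
# variance is a0 + a1*x (true values 1, 2, -1, 2).
rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, size=4000)
y = 1.0 + 2.0 * x + np.exp(0.5 * (-1.0 + 2.0 * x)) * rng.normal(size=x.size)

def negloglik(theta):
    # Gaussian negative log-likelihood (up to a constant) with covariates
    # entering both the mean and the variance model.
    b0, b1, a0, a1 = theta
    mu = b0 + b1 * x
    logvar = a0 + a1 * x
    return 0.5 * np.sum(logvar + (y - mu) ** 2 / np.exp(logvar))

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
b0, b1, a0, a1 = fit.x
```

A significant a1 here plays the same role as a within-subject variance covariate in MIXREGLS: the residual spread itself depends on x.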
Large-scale fault kinematic analysis in Noctis Labyrinthus (Mars)
NASA Astrophysics Data System (ADS)
Bistacchi, Nicola; Massironi, Matteo; Baggio, Paolo
2004-01-01
Noctis Labyrinthus (Mars) is characterized by many tectonic features, which represent brittle deformation of the crust. This tectonic setting was analysed by remote sensing of the Viking Mars Digital Image Model (MDIM) mosaic and Mars Orbiter Camera (MOC) global mosaic, in order to identify deformational events. The main features are normal faults producing horst-graben structures, strike-slip faults, and related en-echelon and pull-apart basins. Using the criterion of cross-cutting relationships and analysis of secondary structures, to infer sense of movement of faults, two deformational phases were identified in the Noctis Labyrinthus area. The first, D1, located mainly in the northern part, is characterized by transtensional faults (Noachian). The second, D2, recorded in the southern part of the Noctis Labyrinthus by an orthorhombic extensional fault pattern along NNE and WNW trends, is related to the Valles Marineris formation (Late Noachian-Early Hesperian). A third tectonic event, D3, represented by the partly known dextral NW strike-slip faults cross-cutting the Valles Marineris Canyon System (Late Hesperian?-Amazonian?), was not found in Noctis Labyrinthus at the scale and resolution considered.
Numerical Simulation and Scaling Analysis of Cell Printing
NASA Astrophysics Data System (ADS)
Qiao, Rui; He, Ping
2011-11-01
Cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use an inkjet printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely the impact of a cell-laden droplet into a pool of highly viscous liquid. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of this stress is very short, which helps explain why many cells survive the cell printing process. These studies also revealed that cell membranes can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.
Scaling Analysis of the Ganges-Brahmaputra River Discharge
NASA Astrophysics Data System (ADS)
Arulraj, Malarvizhi; Venugopal, V.; Papa, Fabrice; Bala, Sujit K.
2014-05-01
In this study, we characterize the scaling properties of the Ganges-Brahmaputra river discharge. Using 50 years (1950-2000) of in situ measurements of daily discharge at Hardinge (for the Ganges) and Bahadurabad (for the Brahmaputra), we first establish that there is no obvious evidence of an impact of climate change on the discharge of either river; specifically, we find no significant change in either the discharge seasonal cycle or the variance of the subseasonal fluctuations. Having established weak second-order stationarity, we show that power-law scaling exists between 2 days and 60 days for both rivers' normalized discharge fluctuations. The utility of this type of scale invariance is illustrated with a temporal disaggregation model, which relates small-scale to large-scale variability by just a ratio of scales and enables us to disaggregate 10-day or 35-day discharge estimates from satellite altimetry to the daily scale.
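The scale-invariance claim can be illustrated by measuring how the standard deviation of fluctuations between aggregated block means grows with aggregation scale, then transferring variability across scales by a ratio of scales raised to the fitted exponent. A sketch on a synthetic Brownian-like series (not the Ganges-Brahmaputra data):

```python
import numpy as np

def fluctuation_scaling(x, scales):
    # Standard deviation of the changes between consecutive block means,
    # one value per aggregation scale s.
    out = []
    for s in scales:
        n = (len(x) // s) * s
        agg = x[:n].reshape(-1, s).mean(axis=1)
        out.append(np.std(np.diff(agg)))
    return np.array(out)

rng = np.random.default_rng(6)
x = np.cumsum(rng.normal(size=50_000))     # Brownian-like synthetic "discharge"
scales = np.array([2, 5, 10, 20, 50])
sig = fluctuation_scaling(x, scales)
# Power law sigma(s) ~ s^H; a Brownian walk gives H ~ 0.5.
H = np.polyfit(np.log(scales), np.log(sig), 1)[0]
# Scale invariance transfers variability across scales by a ratio of scales:
sig_fine_pred = sig[-1] * (scales[0] / scales[-1]) ** H
```

In the disaggregation setting, the same ratio-of-scales relation maps the variability of a coarse (e.g. 10-day) estimate down to the daily scale.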
ERIC Educational Resources Information Center
Redfield, Joel
1978-01-01
TMFA, a FORTRAN program for three-mode factor analysis and individual-differences multidimensional scaling, is described. Program features include a variety of input options, extensive preprocessing of input data, and several alternative methods of analysis. (Author)
An Exploratory Factor Analysis of the Mathematics Self-Efficacy Scale Revised (MSES-R).
ERIC Educational Resources Information Center
Kranzler, John H.; Pajares, Frank
1997-01-01
Presents results of an exploratory factor analysis of the mathematics self-efficacy scale. Results, based on 522 undergraduates from three different colleges, indicate that the scale is a multidimensional measure of math self-efficacy with reliable subscales. Four first-order factors were identified, suggesting that the scale taps different…
Design and analysis of a scaled model of a high-rise, high-speed elevator
W. D. Zhu; L. J. Teppo
2003-01-01
A novel scaled model is developed to simulate the linear lateral dynamics of a hoist cable with variable length in a high-rise, high-speed elevator. The dimensionless groups used to formulate the scaling laws are derived through dimensional analysis. The model parameters are selected based on the scaling laws and are subject to the material, size, and hardware constraints. It is
Analysis of strong scattering at the micro-scale
Boise State University
Analysis of strong scattering at the micro-scale. Kasper van Wijk, Physical Acoustics Laboratory, Department of Geophysics, Colorado School of Mines, Golden. ... features, and investigate subtle details in the laboratory data on the scale of the individual scatterer
Decoupled Speed Scaling: Analysis and Evaluation
Williamson, Carey
Decoupled Speed Scaling: Analysis and Evaluation. Maryam Elahi, Carey Williamson, Philipp Woelfel {bmelahi, carey, woelfel}@ucalgary.ca. Abstract: In this paper, we introduce the notion of decoupled speed scaling, wherein the speed scaling function is completely decoupled from the scheduling policy used
Estimating Cognitive Profiles Using Profile Analysis via Multidimensional Scaling (PAMS)
ERIC Educational Resources Information Center
Kim, Se-Kang; Frisby, Craig L.; Davison, Mark L.
2004-01-01
Two of the most popular methods of profile analysis, cluster analysis and modal profile analysis, have limitations. First, neither technique is adequate when the sample size is large. Second, neither method will necessarily provide profile information in terms of both level and pattern. A new method of profile analysis, called Profile Analysis via…
NASA Astrophysics Data System (ADS)
Verma, Sanjeet K.; Oliveira, Elson P.
2013-08-01
In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013) obtained from linear discriminant analysis (LDA) of natural logarithm-transformed ratios of major elements and immobile major and trace elements in acid magmas to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide a more objective statistical method than the commonly used subjective method of determining boundaries by eye. The use of major element data readjusted to 100% on an anhydrous basis from the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites highlight the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was also clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), which were all previously suggested to have been emplaced in post-collision to within-plate settings.
The seventh case study, on the Cassiterita-Tabuões, Ritápolis, and São Tiago-Rezende Costa granites (south of São Francisco craton, Minas Gerais), showed a collision setting, which agrees reasonably well with the syn-collision tectonic setting indicated in the literature. A within-plate setting is suggested for the Serrinha magmatic suite, Mineiro belt (south of São Francisco craton, Minas Gerais), contrasting markedly with the arc setting suggested in the literature. The ninth case study, on the Rio Itapicuru granites and Rio Capim dacites (north of São Francisco craton, Serrinha block, Bahia), showed a continental arc setting. The tenth case study indicated a within-plate setting for the Rio dos Remédios volcanic rocks (São Francisco craton, Bahia), which is compatible with these rocks being the initial, rift-related igneous activity associated with the Chapada Diamantina cratonic cover. The eleventh, twelfth and thirteenth case studies, on the Bom Jesus-Areal granites, Rio Diamante-Rosilha dacite-rhyolite and Timbozal-Cantão granites (São Luís craton), showed continental arc, within-plate and collision settings, respectively. Finally, the last two case studies (fourteenth and fifteenth) showed a collision setting for the Caicó Complex and a continental arc setting for Algodões (Borborema province).
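The workflow described above (LDA on ln-transformed element ratios, yielding a most-probable tectonic setting with per-group probabilities) can be sketched as follows. This is a minimal illustration on synthetic data, not the published diagrams: the group means, the choice of ratios and the sample value are invented for the example, and the real diagrams use calibrated discriminant functions from Verma et al. (2013).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ln-transformed element ratios (two features) for three synthetic
# tectonic groups; the published diagrams use calibrated training sets instead.
groups = {"arc": [-2.0, 0.5], "collision": [-1.0, 1.5], "within-plate": [0.0, 0.0]}
data = {g: rng.normal(m, 0.3, size=(50, 2)) for g, m in groups.items()}

# Pooled within-group covariance (the shared-covariance assumption behind LDA)
pooled = sum(np.cov(d.T) * (len(d) - 1) for d in data.values())
pooled /= sum(len(d) - 1 for d in data.values())
inv = np.linalg.inv(pooled)

def discriminant_probs(x):
    """Linear discriminant score per group; a softmax over scores gives the
    posterior probabilities under equal priors."""
    s = {g: x @ inv @ np.mean(d, axis=0)
            - 0.5 * np.mean(d, axis=0) @ inv @ np.mean(d, axis=0)
         for g, d in data.items()}
    e = {g: np.exp(v - max(s.values())) for g, v in s.items()}
    z = sum(e.values())
    return {g: v / z for g, v in e.items()}

probs = discriminant_probs(np.array([-1.1, 1.4]))
print(max(probs, key=probs.get))  # most probable tectonic setting for the sample
```

The probability estimates are what allow the boundaries between fields to be drawn statistically rather than by eye.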
Regional Scale Analysis of Extremes in an SRM Geoengineering Simulation
NASA Astrophysics Data System (ADS)
Muthyala, R.; Bala, G.
2014-12-01
Only a few studies in the past have investigated the statistics of extreme events under geoengineering. In this study, a global climate model is used to investigate the impact of solar radiation management on extreme precipitation events on a regional scale. The solar constant was reduced by 2.25% to counteract the global mean surface temperature change caused by a doubling of CO2 (2XCO2) from its preindustrial control value. Using daily precipitation rates, extreme events are defined as those which exceed the 99.9th percentile precipitation threshold. Extremes are substantially reduced in the geoengineering simulation: the magnitude of change is much smaller than in a simulation with doubled CO2. Regional analysis over 22 Giorgi land regions is also performed. Doubling of CO2 leads to an increase in the intensity of extreme (99.9th percentile) precipitation of 17.7% on a global-mean basis, with a maximum increase in intensity of 37% over the South Asian region. In the geoengineering simulation, there is a global-mean reduction in intensity of 3.8%, with a maximum reduction of 8.9% over the Tropical Ocean. Further, we find that the doubled-CO2 simulation shows an increase in the frequency of extremes (>50 mm/day) of 50-200%, with a global mean increase of 80%. In contrast, in the geoengineered climate there is a decrease in the frequency of extreme events of 20% globally, with a larger decrease of 30% over the Tropical Ocean. In both climate states (2XCO2 and geoengineering), the change in extremes is always greater than the change in means over large domains. We conclude that changes in precipitation extremes are larger in the 2XCO2 scenario than in the preindustrial climate, while extremes decline slightly in the geoengineered climate. We are also investigating the changes in extreme statistics for daily maximum and minimum temperature, evapotranspiration and vegetation productivity. Results will be presented at the meeting.
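The extreme-event definition used above (daily rates exceeding the 99.9th percentile) is straightforward to compute. A minimal sketch with synthetic daily precipitation (the gamma distribution and its parameters are assumptions for illustration, not model output):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily precipitation rates (mm/day); a real analysis would use
# model output from the control and perturbed simulations.
precip = rng.gamma(shape=0.8, scale=4.0, size=36500)  # ~100 years of days

threshold = np.percentile(precip, 99.9)     # extreme-event threshold
extremes = precip[precip > threshold]
frequency = extremes.size / precip.size     # fraction of days that are extreme
intensity = extremes.mean()                 # mean intensity of extreme days
print(threshold, frequency, intensity)
```

Changes in `intensity` and `frequency` between two simulations are exactly the quantities the abstract compares between the 2XCO2 and geoengineering states.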
GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY
Lee, S
2008-05-28
Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns mainly driven by temperature gradients inside vapor space in a large-scaled Saltstone vault facility at Savannah River site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against the literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimations of the flow patterns within the vapor space. One is the reference nominal case. The other is for the negative temperature gradient between the roof inner and top grout surface temperatures intended for the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that the ambient air comes into the vapor space of the vault through the lower-end ventilation hole, and it gets heated up by the Benard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations will be discussed here.
A Guttman Scale Analysis of the Burger Court's Press Decisions.
ERIC Educational Resources Information Center
Stempel, Guido H., III
L. L. Guttman's scaling procedures were used to analyze United States Supreme Court decisions concerning the press for the period 1971-1981. Guttman scaling is essentially a means of determining whether a given set of responses is unidimensional, meaning that the responses are part of a single hierarchical continuum. The 44 press cases during the…
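The unidimensionality test at the heart of Guttman scaling can be illustrated with the coefficient of reproducibility: in a perfect scale, anyone endorsing a "harder" item also endorses all easier ones. The response matrix below is invented for the example, not the Burger Court data:

```python
import numpy as np

# Rows = respondents (e.g. justices), columns = items ordered easy -> hard,
# 1 = pro-press vote. A perfect Guttman pattern is a run of 1s then 0s.
responses = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],   # one deviant pattern: harder item endorsed, easier one not
    [1, 0, 0, 0],
])

totals = responses.sum(axis=1)
# Ideal Guttman pattern implied by each respondent's total score
ideal = (np.arange(responses.shape[1]) < totals[:, None]).astype(int)
errors = (responses != ideal).sum()
reproducibility = 1 - errors / responses.size
print(reproducibility)  # >= 0.90 is the conventional cutoff for scalability
```

Here the single deviant pattern produces two errors out of twenty responses, so the coefficient is 0.90, right at the conventional threshold.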
Reliability and Validity Analysis of the Multiple Intelligence Perception Scale
ERIC Educational Resources Information Center
Yesil, Rustu; Korkmaz, Ozgen
2010-01-01
This study mainly aims to develop a scale to determine individual intelligence profiles based on self-perceptions. The study group consists of 925 students studying in various departments of the Faculty of Education at Ahi Evran University. A logical and statistical approach was adopted in scale development. Expert opinion was obtained for the…
Confirmatory Factor Analysis of the Geriatric Depression Scale
ERIC Educational Resources Information Center
Adams, Kathryn Betts; Matto, Holly C.; Sanders, Sara
2004-01-01
Purpose: The Geriatric Depression Scale (GDS) is widely used in clinical and research settings to screen older adults for depressive symptoms. Although several exploratory factor analytic structures have been proposed for the scale, no independent confirmation has been made available that would enable investigators to confidently identify scores…
Guttman Facet Design and Analysis: A Technique for Attitude Scale Construction.
ERIC Educational Resources Information Center
Hamersma, Richard J.
The main import of the present paper is to discuss what Guttman facet design and analysis is and then to show how this technique can be used in attitude scale construction. Since Guttman is best known for his contribution to scaling theory known as scalogram analysis, a brief historical background is given to indicate how Guttman moved from a…
Large-Scale Cancer Genomics Data Analysis - David Haussler, TCGA Scientific Symposium 2011
QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web
Caverlee, James
Presents the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the Deep Web. Two unique features of the Thor framework are (1) the novel page clustering…
QA-Pagelet: Data Preparation Techniques for Large Scale Data Analysis of the Deep Web
Liu, Ling
Presents the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the Deep Web. Two unique features of the Thor framework are (1) the novel page clustering for grouping…
Scale Development Research: A Content Analysis and Recommendations for Best Practices
ERIC Educational Resources Information Center
Worthington, Roger L.; Whittaker, Tiffany A.
2006-01-01
The authors conducted a content analysis on new scale development articles appearing in the "Journal of Counseling Psychology" during 10 years (1995 to 2004). The authors analyze and discuss characteristics of the exploratory and confirmatory factor analysis procedures in these scale development studies with respect to sample characteristics,…
Cipriani, Daniel J; Hensen, Francine E; McPeck, Danielle L; Kubec, Gina L D; Thomas, Julie J
2012-11-01
Parents and caregivers faced with the challenges of transferring children with disability are at risk of musculoskeletal injuries and/or emotional stress. The Caregiver Self-Efficacy Scale for Transfers (CSEST) is a 14-item questionnaire that measures self-efficacy for transferring under common conditions. The CSEST yields reliable data and valid inferences; however, its rating scale structure had not been evaluated for utility. The aims of this study were to evaluate the category response structure of the CSEST, test the utility of a revised rating scale structure, and confirm its psychometric properties. The Rasch measurement model was used for all analyses. Subjects included 175 adult caregivers recruited from multiple communities. Results confirm that a revised five-category rating scale structure yields reliable data and valid inferences. Given the relationship between self-efficacy and the risk of physical and/or emotional stress, measuring parental self-efficacy for transfers is a proactive process in rehabilitation. PMID:22712478
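Rating-scale analysis of this kind typically rests on the Andrich rating scale model, in which all items share a common set of category thresholds. A minimal sketch of the category probabilities for one item (the ability, difficulty and threshold values are hypothetical, not CSEST estimates):

```python
import math

def rating_scale_probs(theta, item_difficulty, thresholds):
    """Andrich rating scale model: P(category k) for one item, where the
    step thresholds are shared across all items of the scale."""
    # Cumulative logits for categories 0..m
    logits = [0.0]
    for tau in thresholds:
        logits.append(logits[-1] + (theta - item_difficulty - tau))
    expvals = [math.exp(l) for l in logits]
    total = sum(expvals)
    return [e / total for e in expvals]

# Hypothetical person ability, item difficulty and ordered thresholds
probs = rating_scale_probs(theta=1.0, item_difficulty=0.0,
                           thresholds=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])  # probabilities over the 5 categories
```

Category-structure diagnostics of the kind reported in the study amount to checking that the estimated thresholds are ordered and that each category is the most probable response somewhere along the ability continuum.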
JOHNSON, M.D.
2000-03-13
Fairbanks Weight Scales are used at the Waste Receiving and Processing (WRAP) facility to determine the weight of waste drums as they are received, processed, and shipped. Due to recent problems discovered during calibration, the WRAP Engineering Department has completed this document, which outlines both the investigation of the infeed conveyor scale failure in September 1999 and recommendations for calibration procedure modifications designed to correct deficiencies in the current procedures.
Scaling parameters for PFBC cyclone separator system analysis
Gil, A.; Romeo, L.M.; Cortes, C.
1999-07-01
Laboratory-scale cold flow models have been used extensively to study the behavior of many installations. In particular, fluidized bed cold flow models have advanced the understanding of fluidized bed hydrodynamics. For the results of such research to be relevant to commercial power plants, cold flow models must be properly scaled. Many efforts have been made to understand the performance of fluidized beds, but until now little attention has been paid to cyclone separator systems. CIRCE has worked on the development of scaling parameters that enable laboratory-scale equipment operating at room temperature to simulate the performance of cyclone separator systems. This paper presents the simplified scaling parameters and an experimental comparison of a cyclone separator system with a cold flow model constructed on the basis of those parameters. The cold flow model has been used to establish the validity of the scaling laws for cyclone separator systems and permits detailed room-temperature studies (determining the filtration effects of varying operating parameters and cyclone design) to be performed in a rapid and cost-effective manner. This valuable and reliable design tool will contribute to a more rapid and concise understanding of hot gas filtration systems based on cyclones. The study of the behavior of the cold flow model, including observation and measurement of flow patterns in cyclones and diplegs, will allow the performance of the full-scale ash removal system to be characterized, safe limits of operation to be established, and design improvements to be tested.
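The idea of scaling a cold flow model can be illustrated with the simplest possible case, Froude-number matching: fixing the geometric ratio then fixes the model velocity. The sizes and velocities below are hypothetical, and the paper's actual simplified scaling parameters (which also involve particle and density ratios) are not reproduced here:

```python
import math

# Simplified Froude-number scaling: matching Fr = u^2 / (g L) between the hot
# unit and the cold model fixes the model velocity once the length ratio is set.
g = 9.81
L_hot, u_hot = 0.4, 6.0      # hypothetical hot cyclone inlet: size (m), velocity (m/s)
scale = 0.5                  # chosen geometric ratio of cold model to hot unit

L_cold = scale * L_hot
u_cold = u_hot * math.sqrt(L_cold / L_hot)   # from Fr_hot = Fr_cold
print(L_cold, u_cold)
```

A full set of scaling laws adds further dimensionless groups (density ratio, particle size ratio, etc.), each of which constrains another model parameter in the same way.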
A Critical Analysis of the Concept of Scale Dependent Macrodispersivity
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Cvetkovic, Vladimir; Dagan, Gedeon; Dietrich, Peter; Fiori, Aldo; Rubin, Yoram; Teutsch, Georg
2015-04-01
Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity α_L (half the rate of change of the second spatial moment divided by the mean velocity). It was found that generally α_L is scale dependent, increasing with the travel distance L of the plume centroid and eventually stabilizing at a constant value (Fickian regime). It was surmised in the literature that α_L scales up with travel distance L following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al., 1988; Neuman, 1990; Xu and Eckstein, 1995; Schulze-Makuch, 2005) by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al., 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992; Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999) because they presumably offer a convenient prediction tool with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g., Neuman, 1990 and 2010; Hunt et al., 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported α_L values. Our investigation concludes that transport, and particularly α_L, is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g., the spatial distribution of hydraulic conductivity, and the use of adequate models.
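The definition used above, α_L as half the rate of change of the second spatial moment divided by the mean velocity, can be evaluated directly from plume moment data. A sketch with invented observations (the numbers are illustrative, not taken from any of the cited tracer tests):

```python
import numpy as np

# Hypothetical plume observations: time (d), centroid position (m),
# and longitudinal second central moment sigma2 (m^2).
t = np.array([10.0, 20.0, 40.0, 80.0])
centroid = np.array([5.0, 10.0, 20.0, 40.0])   # mean velocity 0.5 m/d
sigma2 = np.array([2.0, 5.0, 12.0, 28.0])

v = np.gradient(centroid, t)                    # mean velocity at each time
alpha_L = 0.5 * np.gradient(sigma2, t) / v      # apparent dispersivity (m)
for L_i, a_i in zip(centroid, alpha_L):
    print(f"L = {L_i:5.1f} m  alpha_L = {a_i:.2f} m")
```

With these invented moments, α_L grows with travel distance L, which is exactly the apparent scale dependence whose universality the study disputes: the growth rate is a property of the particular formation's heterogeneity, not of a universal law.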
A Critical Analysis of the Concept of Scale Dependent Macrodispersivity
NASA Astrophysics Data System (ADS)
Zech, A.; Attinger, S.; Cvetkovic, V.; Dagan, G.; Dietrich, P.; Fiori, A.; Rubin, Y.; Teutsch, G.
2014-12-01
Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity α_L (half the rate of change of the second spatial moment divided by the mean velocity). It was found that generally α_L is scale dependent, increasing with the travel distance L of the plume centroid and eventually stabilizing at a constant value (Fickian regime). It was surmised in the literature that α_L(L) scales up with travel distance following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al., 1988; Neuman, 1990; Xu and Eckstein, 1995; Schulze-Makuch, 2005) by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al., 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992; Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999) because they presumably offer a convenient prediction tool with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g., Neuman, 1990 and 2010; Hunt et al., 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported α_L values. Our investigation concludes that transport, and particularly α_L(L), is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g., the spatial distribution of hydraulic conductivity, and the use of adequate models.
Asset-based poverty analysis in rural Bangladesh: A comparison of principal component analysis and
Mound, Jon
Asset-based poverty analysis in rural Bangladesh: a comparison of principal component analysis and multi-dimensional poverty assessment. The report covers the trend towards multi-dimensional poverty assessment and principal component analysis.
Data mining techniques for large-scale gene expression analysis
Palmer, Nathan Patrick
2011-01-01
Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression ...
Multi-resolution analysis for ENO schemes
NASA Technical Reports Server (NTRS)
Harten, Ami
1991-01-01
Given a function, u(x), which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time step. This is accomplished by decomposing the numerical solution at the beginning of each time step into levels of resolution and performing the computation in each locality on the appropriately coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
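A one-level version of the decomposition described above can be sketched with pairwise cell averaging on a uniform 1-D grid. This is a simplification: Harten's construction works on nested unstructured grids with general reconstruction operators, but the idea, coarse averages plus the details the coarse grid cannot see, is the same:

```python
import numpy as np

def decompose(cell_averages):
    """One level of multi-resolution decomposition of cell averages:
    coarse cell averages plus the details needed to recover the fine grid."""
    u = np.asarray(cell_averages, dtype=float)
    coarse = 0.5 * (u[0::2] + u[1::2])   # average of each pair of fine cells
    detail = u[0::2] - coarse            # what the coarse grid cannot resolve
    return coarse, detail

def reconstruct(coarse, detail):
    u = np.empty(2 * coarse.size)
    u[0::2] = coarse + detail
    u[1::2] = coarse - detail
    return u

u = np.array([1.0, 1.1, 1.0, 0.9, 3.0, 5.0, 2.0, 1.0])  # smooth, then rough
coarse, detail = decompose(u)
print(detail)  # small where u is smooth -> compute there on the coarse grid
```

Localities where the detail coefficients fall below a tolerance are exactly the places where the ENO computation can be performed on the coarser grid.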
Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer
NASA Astrophysics Data System (ADS)
Schnieders, Jana; Garbe, Christoph
2014-05-01
The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near-surface turbulence. Near-surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near-surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales: small-scale, short-lived structures termed fish scales, and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic features on these two scales. 2. The structure of the patterns changes with increasing wind stress and surface conditions. In [2], turbulent cell sizes have been shown to decrease systematically with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. Results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution a new model to estimate the heat flux is applied, which is based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed. 
In this contribution, we will quantify the effect of small scale processes on interfacial transport and relate it to gas transfer. References [1] T. G. Bell, W. De Bruyn, S. D. Miller, B. Ward, K. Christensen, and E. S. Saltzman. Air-sea dimethylsulfide (DMS) gas transfer in the North Atlantic: evidence for limited interfacial gas exchange at high wind speed. Atmos. Chem. Phys. , 13:11073-11087, 2013. [2] J Schnieders, C. S. Garbe, W.L. Peirson, and C. J. Zappa. Analyzing the footprints of near surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013.
A Study on Intermediate Scale Steam Explosion Experiments with Zirconia and Corium Melt
J. H. Kim; I. K. Park; Y. S. Shin; B. T. Min; S. W. Hong; J. H. Song; H. D. Kim
2002-01-01
Korea Atomic Energy Research Institute (KAERI) launched an intermediate scale steam explosion experiment using real reactor materials, named 'Test for Real corium Interaction with water (TROI)' to investigate the effect of material composition, multi-dimensional melt-water interaction, and hydrogen generation on a steam explosion. In the first series of the tests using several kg of molten zirconia where the melt was
Müller, Bernhard [Monash Center for Astrophysics, School of Mathematical Sciences, Building 28, Monash University, Victoria 3800 (Australia); Janka, Hans-Thomas, E-mail: bernhard.mueller@monash.edu, E-mail: bjmuellr@mpa-garching.mpg.de, E-mail: thj@mpa-garching.mpg.de [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany)
2014-06-10
Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M_⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M_⊙, as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission at the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ~10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
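The wavelet-based time-frequency analysis referred to above can be sketched with a hand-rolled Morlet transform on a synthetic signal whose modulation frequency drops at a mock "explosion" time. The signal, sampling rate and frequency grid are invented for illustration; the paper applies this kind of analysis to simulated IceCube rate signals:

```python
import numpy as np

# Synthetic luminosity-like signal: an 80 Hz SASI-type modulation that drops
# to 40 Hz at t = 0.5 s, mimicking the frequency shift at explosion onset.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
f_inst = np.where(t < 0.5, 80.0, 40.0)
signal = (np.sin(2 * np.pi * np.cumsum(f_inst) / fs)
          + 0.3 * np.random.default_rng(1).normal(size=t.size))

def morlet_power(sig, fs, freqs, w=6.0):
    """Wavelet power spectrogram using complex Morlet wavelets."""
    out = np.empty((len(freqs), sig.size))
    for i, f in enumerate(freqs):
        s = w * fs / (2 * np.pi * f)                  # wavelet scale in samples
        n = np.arange(-int(4 * s), int(4 * s) + 1)
        wavelet = np.exp(1j * w * n / s) * np.exp(-0.5 * (n / s) ** 2)
        wavelet /= np.sqrt(s)
        out[i] = np.abs(np.convolve(sig, wavelet, mode="same")) ** 2
    return out

freqs = np.arange(20, 120, 5.0)
power = morlet_power(signal, fs, freqs)
# Dominant modulation frequency before and after the mock "explosion"
early = freqs[power[:, 100:400].mean(axis=1).argmax()]
late = freqs[power[:, 600:900].mean(axis=1).argmax()]
print(early, late)
```

The drop of the dominant frequency below 50 Hz in the spectrogram is the signature the abstract describes for the onset of the explosion.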
NASA Astrophysics Data System (ADS)
Ivan, Lucian; De Sterck, Hans; Northrup, Scott A.; Groth, Clinton P. T.
2013-12-01
A scalable parallel and block-adaptive cubed-sphere grid simulation framework is described for solution of hyperbolic conservation laws in domains between two concentric spheres. In particular, the Euler and ideal magnetohydrodynamics (MHD) equations are considered. Compared to existing cubed-sphere grid algorithms, a novelty of the proposed approach involves the use of a fully multi-dimensional finite-volume method. This leads to important advantages when the treatment of boundaries and corners of the six sectors of the cubed-sphere grid is considered. Most existing finite-volume approaches use dimension-by-dimension differencing and require special interpolation or reconstruction procedures at ghost cells adjacent to sector boundaries in order to achieve an order of solution accuracy higher than unity. In contrast, in our multi-dimensional approach, solution blocks adjacent to sector boundaries can directly use physical cells from the adjacent sector as ghost cells while maintaining uniform second-order accuracy. This leads to important advantages in terms of simplicity of implementation for both parallelism and adaptivity at sector boundaries. Crucial elements of the proposed scheme are: unstructured connectivity of the six grid root blocks that correspond to the six sectors of the cubed-sphere grid, multi-dimensional k-exact reconstruction that automatically takes into account information from neighbouring cells isotropically and is able to automatically handle varying stencil size, and adaptive division of the solution blocks into smaller blocks of varying spatial resolution that are all treated exactly equally for inter-block communication, flux calculation, adaptivity and parallelization. 
The proposed approach is fully three-dimensional, whereas previous studies on cubed-sphere grids have been either restricted to two-dimensional geometries on the sphere or have grids and solution methods with limited capabilities in the third dimension in terms of adaptivity and parallelism. Numerical results for several problems, including systematic grid convergence studies, MHD bow-shock flows, and global modelling of solar wind flow are discussed to demonstrate the accuracy and efficiency of the proposed solution procedure, along with assessment of parallel computing scalability for up to thousands of computing cores.
Gibbons, Chris J.; Thornton, Everard W.; Ealing, John; Shaw, Pamela J.; Talbot, Kevin; Tennant, Alan; Young, Carolyn A.
2013-01-01
Objective Social withdrawal is described as the condition in which an individual experiences a desire to make social contact but is unable to satisfy that desire. It is an important issue for patients with motor neurone disease (MND), who are likely to experience severe physical impairment. This study aims to reassess the psychometric and scaling properties of the MND Social Withdrawal Scale (MND-SWS) domains and examine the feasibility of a summary scale, by applying scale data to the Rasch model. Methods The MND Social Withdrawal Scale was administered to 298 patients with a diagnosis of MND, alongside the Hospital Anxiety and Depression Scale. The factor structure of the MND Social Withdrawal Scale was assessed using confirmatory factor analysis. Model fit, category threshold analysis, differential item functioning (DIF), dimensionality and local dependency were evaluated. Results Factor analysis confirmed the suitability of the four-factor solution suggested by the original authors. Mokken scale analysis suggested the removal of item five. Rasch analysis removed a further three items, from the Community (one item) and Emotional (two items) withdrawal subscales. Following item reduction, each scale exhibited excellent fit to the Rasch model. A 14-item Summary scale was shown to fit the Rasch model after subtesting the items into three subtests corresponding to the Community, Family and Emotional subscales, indicating that items from these three subscales can be summed to create a total measure of social withdrawal. Conclusion Removal of four items from the Social Withdrawal Scale led to a four-factor solution with a 14-item hierarchical Summary scale, all of which were unidimensional, free of DIF and well fitted to the Rasch model. The scale is reliable and allows clinicians and researchers to measure social withdrawal in MND along a unidimensional construct. PMID:24011605
Scale Issues in Remote Sensing: A Review on Analysis, Processing and Modeling
Wu, Hua; Li, Zhao-Liang
2009-01-01
With the development of quantitative remote sensing, scale issues have attracted more and more attention from scientists. Research now suffers from a severe scale discrepancy between data sources and the models used. Consequently, both data interpretation and model application become difficult due to these scale issues. Therefore, effectively scaling remotely sensed information across scales has become one of the most important research focuses of remote sensing. The aim of this paper is to examine scale issues from the points of view of analysis, processing and modeling and to provide technical assistance for dealing with scale issues in remote sensing. The definition of scale and relevant terminology are given in the first part of this paper. Then, the main causes of scale effects and the effects of scaling on measurements, retrieval models and products are reviewed and discussed. Ways to describe the scale threshold and scale domain are briefly discussed. Finally, the general scaling methods, in particular up-scaling methods, are compared and summarized in detail. PMID:22573986
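The simplest up-scaling method the survey covers is linear aggregation (block averaging), and it already exhibits the scale effect discussed above: applying a nonlinear retrieval model before versus after aggregation gives different answers. The retrieval function and grid below are toy assumptions for illustration:

```python
import numpy as np

def upscale(image, factor):
    """Aggregate a fine-resolution grid to a coarser one by block averaging,
    the simplest linear up-scaling operator."""
    h, w = image.shape
    assert h % factor == 0 and w % factor == 0
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

fine = np.random.default_rng(0).uniform(0.0, 1.0, size=(8, 8))
retrieval = lambda x: x ** 2                 # toy nonlinear retrieval model

coarse_then_model = retrieval(upscale(fine, 4))   # aggregate, then retrieve
model_then_coarse = upscale(retrieval(fine), 4)   # retrieve, then aggregate
print(np.abs(coarse_then_model - model_then_coarse).max())  # nonzero: scale effect
```

For any convex retrieval function, retrieving first and aggregating second gives the larger value (Jensen's inequality), which is one formal way to see why scale effects are unavoidable with nonlinear models.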
Research article Effects of changing scale on landscape pattern analysis: scaling relations
Wu, Jianguo "Jingle"
From the Landscape Ecology and Modeling Laboratory (LEML). Landscape ecology has come of age with a distinctive emphasis on the spatial dimension of ecological patterns, which have distinctive "operational" scales. Landscape Ecology 19: 125-138, 2004.
ERIC Educational Resources Information Center
Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.
2010-01-01
The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…
Rating Scale Analysis and Psychometric Properties of the Caregiver Self-Efficacy Scale for Transfers
ERIC Educational Resources Information Center
Cipriani, Daniel J.; Hensen, Francine E.; McPeck, Danielle L.; Kubec, Gina L. D.; Thomas, Julie J.
2012-01-01
Parents and caregivers faced with the challenges of transferring children with disability are at risk of musculoskeletal injuries and/or emotional stress. The Caregiver Self-Efficacy Scale for Transfers (CSEST) is a 14-item questionnaire that measures self-efficacy for transferring under common conditions. The CSEST yields reliable data and valid…
CFD analysis of wind climate from human scale to urban scale
Shuzo Murakami; Ryozo Ooka; Akashi Mochida; Shinji Yoshida; Sangjin Kim
1999-01-01
The rapid growth of computational wind engineering (CWE) has led to an expansion of the research fields of wind engineering. CWE has made it possible to analyze various physical processes associated with wind climate around humans and in urban areas. This paper reviews recent achievements in CWE and its application to wind climate at scales ranging from human to urban.
Kim, K. S.; Boyer, L. L.; Degelman, L. O.
1985-01-01
In the first part of this study, daylighting levels in an actual classroom are compared to scale model measurements and to computer program predictions. Secondly, the daylighting effects in the building atrium are examined through the studies...
Field-aligned currents' scale analysis performed with the Swarm constellation
NASA Astrophysics Data System (ADS)
Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter
2015-01-01
We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistent period of small-scale FACs of order 10 s, while large-scale FACs can be regarded stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
Scale Free Analysis and the Prime Number Theorem
Dhurjati Prasad Datta; Anuja Roy Choudhuri
2010-08-13
We present an elementary proof of the prime number theorem. The relative error follows a golden ratio scaling law and respects the bound obtained from the Riemann hypothesis. The proof is derived in the framework of a scale free nonarchimedean extension of the real number system exploiting the concept of relative infinitesimals introduced recently in connection with ultrametric models of Cantor sets. The extended real number system is realized as a completion of the field of rational numbers $Q$ under a new nonarchimedean absolute value, which treats arbitrarily small and large numbers separately from a finite real number.
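As a hedged illustration of the quantity this abstract discusses (not of the paper's nonarchimedean framework): the classical prime number theorem says pi(x) ~ x/ln(x), and the relative error of that approximation can be checked numerically with a simple sieve. The error decays slowly, on the order of 1/ln(x).

```python
# Sketch: relative error of the classical PNT approximation pi(x) ~ x/ln x.
import math

def primes_upto(n):
    """Sieve of Eratosthenes; returns the list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [i for i, f in enumerate(sieve) if f]

def relative_error(x):
    """|pi(x) - x/ln x| / pi(x)."""
    pi_x = len(primes_upto(x))
    return abs(pi_x - x / math.log(x)) / pi_x

# The error tends to zero like 1/ln x, though not monotonically at small x:
for x in (10**2, 10**3, 10**4):
    print(x, round(relative_error(x), 4))
```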
NASA Astrophysics Data System (ADS)
Shahmansouri, M.; Mamun, A. A.
2015-07-01
The effects of strong electrostatic interaction among highly charged dust on the multi-dimensional instability of dust-acoustic (DA) solitary waves in a magnetized strongly coupled dusty plasma have been investigated by the small-k perturbation expansion method. We found that a Zakharov-Kuznetsov equation governs the evolution of obliquely propagating small-amplitude DA solitary waves in such a strongly coupled dusty plasma. The parametric regimes for which the obliquely propagating DA solitary waves become unstable are identified. The basic properties, viz., amplitude, width, instability criterion, and growth rate, of these obliquely propagating DA solitary structures are found to be significantly modified by the effects of different physical strongly coupled dusty plasma parameters. The implications of our results in some space/astrophysical plasmas and some future laboratory experiments are briefly discussed.
NASA Astrophysics Data System (ADS)
Hu, Zhiwen; Xu, Yongjian; Yu, Zengliang
2006-05-01
The single-particle microbeam is a powerful tool that can open a research field to find answers to many enigmas in radiobiology. A single-particle microbeam facility has been constructed at the Key Laboratory of Ion Beam Bioengineering (LIBB), Chinese Academy of Sciences (CAS), China. However, there has been little research activity in this field concerning the original process of the interaction between low-energy ions and complicated organisms. To address this challenge, an in situ multi-dimensional quantitative fluorescence microscopy system combined with the CAS-LIBB single-particle microbeam II endstation is proposed. In this article, the rationale, logistics and development of many aspects of the proposed system are discussed.
Feng, Liang; Zhang, Ming-Hua; Gu, Jun-Fei; Wang, Gui-You; Zhao, Zi-Yu; Jia, Xiao-Bin
2013-11-01
As traditional Chinese medicine (TCM) preparation products feature complex compounds and multiple preparation processes, the implementation of quality control in line with the characteristics of TCM preparation products provides a firm guarantee for the clinical efficacy and safety of TCM preparation products. Danshen infusion solution is a preparation commonly used in clinic, but its quality control is restricted to indexes of finished products, which cannot guarantee its inherent quality. Our study group has proposed a "multi-dimensional structure and process dynamics quality control system" on the basis of "component structure theory", for the purpose of controlling the quality of Danshen infusion solution at multiple levels and in multiple links, from the efficacy-related material basis, the safety-related material basis, and the characteristics of the dosage form to the preparation process. In this article, we bring forth new ideas and models for the quality control of TCM preparation products. PMID:24494543
Multidimensional scaling modelling approach to latent profile analysis in psychological research
Cody S. Ding
2006-01-01
Because profile analysis is widely used in studying types of people, we propose an alternative technique for such analysis in this article. As an application of the multidimensional scaling (MDS) model, MDS profile analysis is proposed as an approach for studying both group and/or individual profile patterns. This approach requires one to think of MDS solutions as profiles. The MDS…
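The article's MDS profile analysis is richer than this, but its core step, embedding a dissimilarity matrix into low-dimensional coordinates, can be sketched with classical (Torgerson) MDS in NumPy. The respondent profiles below are invented for illustration.

```python
# Minimal classical (Torgerson) MDS: double-center squared distances,
# then take the top-k eigenvectors of the resulting Gram matrix.
import numpy as np

def classical_mds(D, k=2):
    """Embed an n x n distance matrix D into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D**2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # pick the top k
    L = np.sqrt(np.maximum(vals[idx], 0))
    return vecs[:, idx] * L                  # n x k coordinates

# Four hypothetical respondents' score profiles on three subscales:
X = np.array([[1., 2., 3.],
              [1., 2., 3.1],
              [5., 1., 0.],
              [5., 1., 0.2]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
coords = classical_mds(D, k=2)
# Similar profiles land close together in the MDS solution:
print(np.linalg.norm(coords[0] - coords[1]) < np.linalg.norm(coords[0] - coords[2]))
```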
Introducing Scale Analysis by Way of a Pendulum
ERIC Educational Resources Information Center
Lira, Ignacio
2007-01-01
Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…
Confirmatory Factor Analysis of the Work Locus of Control Scale
ERIC Educational Resources Information Center
Oliver, Joseph E.; Jose, Paul E.; Brough, Paula
2006-01-01
Original formulations of the Work Locus of Control Scale (WLCS) proposed a unidimensional structure of this measure; however, more recently, evidence for a two-dimensional structure has been reported, with separate subscales for internal and external loci of control. The current study evaluates the one- and two-factor models with confirmatory…
Automating analysis of large-scale botnet probing events
Zhichun Li; Anup Goyal; Yan Chen; Vern Paxson
2009-01-01
Botnets dominate today's attack landscape. In this work we investigate ways to analyze collections of malicious probing traffic in order to understand the significance of large-scale "botnet probes". In such events, an entire collection of remote hosts together probes the address space monitored by a sensor in some sort of coordinated fashion. Our goal is to develop methodologies
A Multidimensional Scaling Analysis of Students' Attitudes about Science Careers
ERIC Educational Resources Information Center
Masnick, Amy M.; Valenti, S. Stavros; Cox, Brian D.; Osman, Christopher J.
2010-01-01
To encourage students to seek careers in Science, Technology, Engineering and Mathematics (STEM) fields, it is important to gauge students' implicit and explicit attitudes towards scientific professions. We asked high school and college students to rate the similarity of pairs of occupations, and then used multidimensional scaling (MDS) to create…
A Factor Analysis of the Research Self-Efficacy Scale.
ERIC Educational Resources Information Center
Bieschke, Kathleen J.; And Others
Counseling professionals' and counseling psychology students' interest in performing research seems to be waning. Identifying the impediments to graduate students' interest and participation in research is important if systematic efforts to engage them in research are to succeed. The Research Self-Efficacy Scale (RSES) was designed to measure…
The Asian Values Scale: Development, Factor Analysis, Validation, and Reliability.
ERIC Educational Resources Information Center
Kim, Bryan S. K.; Atkinson, Donald R.; Yang, Peggy H.
1999-01-01
Client adherence to culture-of-origin values plays an important role in the provision of culturally relevant psychological services. Lack of instruments that measure ethnic cultural values has been a shortcoming of past research. Describes development of an Asian Values Scale (AVS) used to measure psychometric measures in four studies. Results…
A Reliability Analysis of Goal Attainment Scaling (GAS) Weights
ERIC Educational Resources Information Center
Marson, Stephen M.; Wei, Guo; Wasserman, Deborah
2009-01-01
Goal attainment scaling (GAS) has been considered to be one of the most versatile and appealing evaluation protocols available for human services. Aspects of the protocol that make the method so appealing to practitioners--that is, collaboratively working with individual clients to identify and assign weights to goals they will work to…
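The weights this article examines feed into the standard Kiresuk-Sherman summary T-score that underlies GAS: attainment levels scored -2 to +2 are combined with goal weights. The weights and scores below are hypothetical.

```python
# Kiresuk-Sherman GAS summary score with the conventional rho = 0.3.
import math

def gas_t_score(scores, weights, rho=0.3):
    """T = 50 + 10*sum(w*x) / sqrt((1-rho)*sum(w^2) + rho*(sum w)^2)."""
    num = 10 * sum(w * x for w, x in zip(weights, scores))
    den = math.sqrt((1 - rho) * sum(w * w for w in weights)
                    + rho * sum(weights) ** 2)
    return 50 + num / den

# Three goals, all attained at the expected level (score 0) -> T = 50:
print(gas_t_score([0, 0, 0], [3, 2, 1]))  # 50.0
```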
APPLICATION OF MULTIDIMENSIONAL AND SCALE ANALYSIS TO INTEREST MEASUREMENT.
ERIC Educational Resources Information Center
RONNING, ROYCE R.; AND OTHERS
Research is directed toward finding a method to improve existing instruments for measurement of noncognitive processes by use of an equisection technique to develop scales of current educational-vocational interests. The factor structure developed was a modification of Cottle's interest items and was based on empirically or rationally developed keys.…
A Rasch Analysis of the Teachers Music Confidence Scale
ERIC Educational Resources Information Center
Yim, Hoi Yin Bonnie; Abd-El-Fattah, Sabry; Lee, Lai Wan Maria
2007-01-01
This article presents a new measure of teachers' confidence to conduct musical activities with young children; Teachers Music Confidence Scale (TMCS). The TMCS was developed using a sample of 284 in-service and pre-service early childhood teachers in Hong Kong Special Administrative Region (HKSAR). The TMCS consisted of 10 musical activities.…
The Multidimensional Fear of Death Scale: An Independent Analysis.
ERIC Educational Resources Information Center
Walkey, Frank H.
1982-01-01
Examined the factor structure and subscale reliabilities of an eight-dimensional measure of fear of death (the Multidimensional Fear of Death Scale) using a New Zealand sample. Comparison with the results of a United States study showed that both the subscale reliabilities and the factor structure were almost perfectly reproduced. (Author)
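Cross-sample factor comparisons like the one this abstract reports are often quantified with Tucker's congruence coefficient, where a value near 1 means the loading patterns are "almost perfectly reproduced". The loadings below are invented for illustration.

```python
# Tucker's congruence coefficient between two factor loading vectors.
import math

def congruence(a, b):
    """Tucker's phi = sum(a*b) / sqrt(sum(a^2) * sum(b^2))."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den

us_loadings = [0.71, 0.65, 0.80, 0.55]   # hypothetical US sample
nz_loadings = [0.69, 0.68, 0.77, 0.58]   # hypothetical NZ sample
print(round(congruence(us_loadings, nz_loadings), 3))
```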
Analysis of the time scales in time periodic Darcy flows
NASA Astrophysics Data System (ADS)
Zhu, T.; Waluga, C.; Wohlmuth, B.; Manhart, M.
2014-12-01
We investigate unsteady flow in a porous medium under a time-periodic (sinusoidal) pressure gradient. DNS were performed to benchmark the analytical solution of the unsteady Darcy equation with two different expressions of the time scale: one given by a consistent volume averaging of the Navier-Stokes equation [1] with a steady-state closure for the flow resistance term, another given by volume averaging of the kinetic energy equation [2] with a closure for the dissipation rate. For small and medium frequencies, the analytical solutions with the time scale obtained by the energy approach compare well with the DNS results in terms of amplitude and phase lag. For large frequencies (f > 100 Hz) we observe a slightly smaller damping of the amplitude. This study supports the use of the unsteady form of Darcy's equation with constant coefficients to solve time-periodic Darcy flows at low and medium frequencies. Our DNS simulations, however, indicate that the time scale predicted by the VANS approach together with a steady-state closure for the flow resistance term is too small. The one obtained by the energy approach matches the DNS results well. At large frequencies, the amplitudes deviate slightly from the analytical solution of the unsteady Darcy equation. Note that at those high frequencies, the flow amplitudes remain below 1% of those of steady-state flow. This result indicates that unsteady porous media flow can approximately be described by the unsteady Darcy equation with constant coefficients for a large range of frequencies, provided the proper time scale has been found.
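The unsteady Darcy equation with a constant time scale is a first-order linear system, so its response to a sinusoidal pressure gradient has a closed-form amplitude ratio and phase lag. This sketch, with an assumed time scale tau, reproduces the qualitative behaviour described above: little damping at low frequency, growing phase lag and damping at high frequency.

```python
# Frequency response of tau*du/dt + u = forcing (unsteady Darcy form).
import math

def darcy_response(f, tau):
    """Amplitude ratio and phase lag (rad) at frequency f in Hz."""
    omega_tau = 2 * math.pi * f * tau
    amplitude = 1 / math.sqrt(1 + omega_tau**2)
    phase_lag = math.atan(omega_tau)
    return amplitude, phase_lag

tau = 1e-4  # assumed time scale [s], for illustration only
for f in (1.0, 100.0, 10000.0):
    a, p = darcy_response(f, tau)
    print(f"f={f:8.1f} Hz  amplitude ratio={a:.4f}  phase lag={p:.4f} rad")
```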
NASA Technical Reports Server (NTRS)
Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)
2001-01-01
Predicting behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
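FBA, as described above, is a linear program: maximize an objective flux subject to steady-state stoichiometry S v = 0 and capacity bounds. The toy network below (A_ext -> A -> B -> B_ext) is invented; real models have thousands of reactions, and the article's EBA adds thermodynamic constraints on top of exactly this kind of LP.

```python
# Minimal flux balance analysis on a three-reaction toy network.
import numpy as np
from scipy.optimize import linprog

# Rows: internal metabolites A, B; columns: reactions v1, v2, v3.
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0, 10), (0, 5), (0, 100)]   # reaction capacity limits
c = [0.0, 0.0, -1.0]                   # linprog minimizes, so -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
# Steady state forces v1 = v2 = v3, so v2's cap of 5 is the bottleneck:
print(res.x)  # -> approximately [5, 5, 5]
```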
Williams, Dean; Doutriaux, Charles; Patchett, John; Williams, Sean; Shipman, Galen; Miller, Ross; Steed, Chad; Krishnan, Harinarayan; Silva, Claudio; Chaudhary, Aashish; Bremer, Peer-Timo; Pugmire, David; Bethel, E. Wes; Childs, Hank; Prabhat, Mr.; Geveci, Berk; Bauer, Andrew; Pletzer, Alexander; Poco, Jorge; Ellqvist, Tommy; Santos, Emanuele; Potter, Gerald; Smith, Brian; Maxwell, Thomas; Kindig, David; Koop, David
2013-05-01
To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.
Bohr model and dimensional scaling analysis of atoms and molecules
NASA Astrophysics Data System (ADS)
Svidzinsky, Anatoly; Chen, Goong; Chin, Siu; Kim, Moochan; Ma, Dongxia; Murawski, Robert; Sergeev, Alexei; Scully, Marlan; Herschbach, Dudley
It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H2 molecule. Here we review recent developments of the Bohr model that connect it with dimensional scaling procedures adapted from quantum chromodynamics. This approach treats electrons as point particles whose positions are determined by optimizing an algebraic energy function derived from the large-dimension limit of the Schrödinger equation. The calculations required are simple yet yield useful accuracy for molecular potential curves and bring out appealing heuristic aspects. We first examine the ground electronic states of H2, HeH, He2, LiH, BeH and Li2. Even a rudimentary Bohr model, employing interpolation between large and small internuclear distances, gives good agreement with potential curves obtained from conventional quantum mechanics. An amended Bohr version, augmented by constraints derived from Heitler-London or Hund-Mulliken results, dispenses with interpolation and gives substantial improvement for H2 and H3. The relation to D-scaling is emphasized. A key factor is the angular dependence of the Jacobian volume element, which competes with interelectron repulsion. Another version, incorporating principal quantum numbers in the D-scaling transformation, extends the Bohr model to excited S states of multielectron atoms. We also discuss kindred Bohr-style applications of D-scaling to the H atom subjected to superstrong magnetic fields or to atomic anions subjected to high frequency, superintense laser fields. In conclusion, we note correspondences to the prequantum bonding models of Lewis and Langmuir and to the later resonance theory of Pauling, and discuss prospects for joining D-scaling with other methods to extend its utility and scope.
Combinatorial Motif Analysis and Hypothesis Generation on a Genomic Scale
Kibler, Dennis F.
… binding of regulatory molecules (transcription factors) to combinations of short motifs. The goal of our analysis … The hypothesis is generated by applying standard inductive learning algorithms. Results: tests using ten … system for test application of DNA array genomic analysis algorithms to gene regulation. Not only has…
ERIC Educational Resources Information Center
Schaffhauser, Dian
2009-01-01
The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…
Lie group analysis for multi-scale plasma dynamics
Vladimir F. Kovalev
2011-12-19
An application of approximate transformation groups to study dynamics of a system with distinct time scales is discussed. The utilization of the Krylov-Bogoliubov-Mitropolsky method of averaging to find solutions of the Lie equations is considered. Physical illustrations from the plasma kinetic theory demonstrate the potentialities of the suggested approach. Several examples of invariant solutions for the system of the Vlasov-Maxwell equations for the two-component (electron-ion) plasma are presented.
Observation and analysis of large-scale human motion
Janez Pers; Marta Bon; Stanislav Kovacic; Branko Dezman
Many team sports include complex human movement, which can be observed at different levels of detail. Some aspects of the athlete's motion can be studied in detail using commercially available high-speed, high-accuracy biomechanical measurement systems. However, due to their limitations, these devices are not appropriate for studying large-scale motion during a game (for example, the motion of a player
On the analysis of large-scale genomic structures
Nestor Norio Oiwa; Carla Goldman
2005-01-01
We apply methods from statistical physics (histograms, correlation functions, fractal dimensions, and singularity spectra) to characterize large-scale structure of the distribution of nucleotides along genomic sequences. We discuss the role of the extension of noncoding segments ("junk DNA") for the genomic organization, and the connection between the coding segment distribution and the high-eukaryotic chromatin condensation. The following sequences taken from…
Wavelet multiscale analysis for Hedge Funds: Scaling and strategies
NASA Astrophysics Data System (ADS)
Conlon, T.; Crane, M.; Ruskin, H. J.
2008-09-01
The wide acceptance of Hedge Funds by Institutional Investors and Pension Funds has led to an explosive growth in assets under management. These investors are drawn to Hedge Funds due to the seemingly low correlation with traditional investments and the attractive returns. The correlations and market risk (the Beta in the Capital Asset Pricing Model) of Hedge Funds are generally calculated using monthly returns data, which may produce misleading results as Hedge Funds often hold illiquid exchange-traded securities or difficult to price over-the-counter securities. In this paper, the Maximum Overlap Discrete Wavelet Transform (MODWT) is applied to measure the scaling properties of Hedge Fund correlation and market risk with respect to the S&P 500. It is found that the level of correlation and market risk varies greatly according to the strategy studied and the time scale examined. Finally, the effects of scaling properties on the risk profile of a portfolio made up of Hedge Funds is studied using correlation matrices calculated over different time horizons.
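The paper uses the MODWT; as a rough stand-in that needs no wavelet library, this sketch measures how the correlation between two synthetic return series changes with the aggregation horizon, the same scaling question. The series, the AR(1) "market" factor, and all parameters are invented.

```python
# Horizon-dependent correlation: a persistent common factor plus i.i.d.
# idiosyncratic noise, so correlation rises with the aggregation scale.
import numpy as np

rng = np.random.default_rng(0)

def scale_correlation(x, y, horizon):
    """Correlation of returns aggregated over non-overlapping windows."""
    n = (len(x) // horizon) * horizon
    xa = x[:n].reshape(-1, horizon).sum(axis=1)
    ya = y[:n].reshape(-1, horizon).sum(axis=1)
    return np.corrcoef(xa, ya)[0, 1]

n = 4096
eps = rng.normal(size=n)
market = np.empty(n)                 # persistent AR(1) common factor
market[0] = eps[0]
for t in range(1, n):
    market[t] = 0.95 * market[t - 1] + eps[t]
fund = 0.1 * market + rng.normal(size=n)
index = 0.1 * market + rng.normal(size=n)
for h in (1, 4, 16, 64):
    print(h, round(scale_correlation(fund, index, h), 3))
```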
Modeling and Analysis of Large-Scale On-Chip Interconnects
Feng, Zhuo
2010-07-14
As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects becomes increasingly critical and difficult. VLSI systems impacted ...
Singha, Kamini
Singha, Kamini; Gorelick, Steven M. (Department of Geological and Environmental …). … -scale spatial moment analysis … than observed in field data of concentration breakthrough at the pumping well. Citation: Singha, K.…
Length Scale Analysis of Surface Energy Fluxes Derived from Remote Sensing
Brunsell, Nathaniel A.; Gillies, Robert R.
2003-01-01
Wavelet multiresolution analysis was used to examine the variation in dominant length scales determined from remotely sensed airborne- and satellite-derived surface energy flux data. The wavelet cospectra are computed between surface radiometric...
Supplementary Information Integrative genome-scale metabolic analysis of Vibrio vulnificus
Contents fragment: integrative genome-scale metabolic analysis of Vibrio vulnificus for drug …; tables of conserved genes in the Vibrio vulnificus and Vibrio parahaemolyticus genomes … 5; Supplementary Table III. Reactions …
Large-Scale Analysis of Formant Frequency Estimation Variability in Conversational Telephone Speech*
(… Laboratory) & Reva Schwartz (United States Secret Service). We quantitatively investigate how the telephone … in law enforcement. The telephone channel and regional dialect are important factors in forensic…
Brief Psychometric Analysis of the Self-Efficacy Teacher Report Scale
ERIC Educational Resources Information Center
Erford, Bradley T.; Duncan, Kelly; Savin-Murphy, Janet
2010-01-01
This study provides preliminary analysis of reliability and validity of scores on the Self-Efficacy Teacher Report Scale, which was designed to assess teacher perceptions of self-efficacy of students aged 8 to 17 years. (Contains 3 tables.)
Initial Economic Analysis of Utility-Scale Wind Integration in Hawaii
Not Available
2012-03-01
This report summarizes an analysis, conducted by the National Renewable Energy Laboratory (NREL) in May 2010, of the economic characteristics of a particular utility-scale wind configuration project that has been referred to as the 'Big Wind' project.
ORIGINAL ARTICLE A system for ranking organizations using social scale analysis
Davulcu, Hasan
ORIGINAL ARTICLE A system for ranking organizations using social scale analysis Sukru Tikves political, cultural and religious transforma- tions and change the existing social order fundamentally. Muslim radical movements have complex origins and depend on diverse factors that enable translation
An introduction and tutorial on multiple-scale analysis Harold S. Park *, Wing Kam Liu
Park, Harold S.
An introduction and tutorial on multiple-scale analysis in solids Harold S. Park *, Wing Kam Liu shearband of Zhou et al. [2] propagates in a curved path away from the notch tip. However, were the steel
Large-Scale Gene Expression Data Analysis: A New Challenge to Computational Biologists
Large-Scale Gene Expression Data Analysis: A New Challenge to Computational Biologists Michael Q challenge for computational biologists trying to extract functional information from such large-scale gene 40 years ago (Ja- cob and Monod 1961), biologists have been fascinated by how different genetic
Factor Analysis of the Nowicki-Strickland Locus of Control Scale: Why Is Replication So Difficult?
ERIC Educational Resources Information Center
Watters, Derek A.; And Others
1990-01-01
Factor analysis problems of scales with dichotomous items were illustrated using the Nowicki-Strickland Locus of Control Scale for Children. A different solution than that of R. E. Lindal and P. H. Venables (1983) was obtained, despite using similar samples--2 random samples of 671 and 674 male seventh and eighth graders. (SLD)
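One concrete reason replication is hard with dichotomous items, as this abstract notes: the phi (Pearson) correlation between two binary items cannot reach 1 unless their endorsement rates match, so the factor structure partly reflects item "difficulty". A direct computation with hypothetical endorsement rates:

```python
# Ceiling on phi for binary items: the best possible 2x2 table puts
# the joint endorsement at min(p1, p2), which caps the correlation.
import math

def max_phi(p1, p2):
    """Largest possible phi for binary items with rates p1 and p2."""
    p11 = min(p1, p2)                   # maximum possible joint endorsement
    num = p11 - p1 * p2
    den = math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    return num / den

print(round(max_phi(0.2, 0.6), 3))  # 0.408: far below 1
print(round(max_phi(0.5, 0.5), 3))  # 1.0: equal rates allow phi = 1
```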
Fed-Batch Microbioreactor Platform for Scale Down and Analysis of a Plasmid DNA
Ram, Rajeev J.
… for continuous monitoring of cell growth. To test our microbioreactor platform, we used production of a plasmid … concentrations, as well as plasmid copy number and quality obtained in a bench-scale bioreactor. The predictive…
ERIC Educational Resources Information Center
Smits, Iris A. M.; Timmerman, Marieke E.; Meijer, Rob R.
2012-01-01
The assessment of the number of dimensions and the dimensionality structure of questionnaire data is important in scale evaluation. In this study, the authors evaluate two dimensionality assessment procedures in the context of Mokken scale analysis (MSA), using a so-called fixed lower bound. The comparative simulation study, covering various…
De Micheli, Giovanni
Fast Process Variation Analysis in Nano-Scaled Technologies Using Column-Wise Sparse Parameter … process variation in deeply nano-scaled technologies, parameterized device and circuit modeling is becoming very … on Double-Gate Silicon NanoWire FET (DG-SiNWFET) technology indicate 2.5× speed up in timing variation…
Algebraic approach to time scale analysis of singularly perturbed linear systems
Willsky, Alan S.
…, Alan S. Willsky and George C. Verghese. In this paper we develop an algebraic approach to the multiple time scale analysis of perturbed linear systems based on the examination of the Smith form … time scale modification (i.e. invariant factor placement) via state feedback. We present a result along…
Monte Carlo Adaptive Technique for Sensitivity Analysis of a Large-scale Air Pollution Model
Dimov, Ivan
Ivan … of input parameters' contribution to output variability of a large-scale air pollution model. This model simulates the transport of air pollutants and has been developed by Dr. Z. Zlatev and his…
Measurement and analysis of a large scale commercial mobile internet TV system
Yuheng Li; Yiping Zhang; Ruixi Yuan
2011-01-01
Large scale, Internet based mobile TV deployment presents both tremendous opportunities and challenges for mobile operators and technology providers. This paper presents a measurement based study on a large scale mobile TV service offering in China. Within the one month measurement period, our dataset captured over 1 million unique mobile devices and more than 49 million video sessions. Analysis showed
Generalized singular value decomposition for comparative analysis of genome-scale expression
Botstein, David
We describe a comparative mathematical framework for two genome-scale expression data sets. This framework formulates expression as superposition of the effects of regulatory programs, biological processes…
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
ERIC Educational Resources Information Center
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
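MSA builds scales around Loevinger's scalability coefficient H (H >= 0.3 is the usual inclusion threshold), and outliers depress H by inflating Guttman errors, which is what the forward search guards against. A bare-bones H for binary item scores, on simulated data (not from the article):

```python
# Overall Loevinger H: sum of item-pair covariances divided by the sum
# of their maxima given the item margins.
import numpy as np

def mokken_H(X):
    """Overall H for an (n_persons x n_items) binary score matrix."""
    n_items = X.shape[1]
    p = X.mean(axis=0)
    cov = np.cov(X, rowvar=False, bias=True)
    num = den = 0.0
    for i in range(n_items):
        for j in range(i + 1, n_items):
            lo, hi = sorted((p[i], p[j]))
            num += cov[i, j]
            den += lo * (1 - hi)        # max covariance given the margins
    return num / den

rng = np.random.default_rng(1)
theta = rng.normal(size=500)                     # latent trait
difficulty = np.array([-1.0, 0.0, 1.0])          # hypothetical items
prob = 1 / (1 + np.exp(-(theta[:, None] - difficulty)))
X = (rng.uniform(size=prob.shape) < prob).astype(float)
print(round(mokken_H(X), 2))
```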
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Sijtsma, Klaas; Pedersen, Susanne S.
2012-01-01
The Hospital Anxiety and Depression Scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken…
Dual Scaling Analysis of Chinese Students' Conceptions of Learning.
ERIC Educational Resources Information Center
Sachs, John; Chan, Carol
2003-01-01
Using a descriptive quantitative methodology for categorical data analysis, this study investigates whether Chinese students' conceptions of learning included memorization. Explains that the University of Hong Kong students (n=25) ranked six conceptions of learning. Includes references. (CMK)
Cook, Robert Annan
1972-01-01
Scale Dependencies in Structural Analysis as Illustrated by Chevron Folds along the Beartooth Front, Wyoming. A thesis by Robert Annan Cook, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 1972. Major Subject: Geology.