McDonald, Richard R.; Nelson, Jonathan M.; Fosness, Ryan L.; Nelson, Peter O.
2016-01-01
Two- and three-dimensional morphodynamic simulations are becoming common in studies of channel form and process. The performance of these simulations is often validated against measurements from laboratory studies. Collecting channel change information in natural settings for model validation is difficult because it can be expensive and, under most channel-forming flows, the resulting channel change is generally small. Several channel restoration projects on the Kootenai River, ID, designed in part to armor large meanders with several large spurs constructed of wooden piles, have resulted in rapid bed elevation change following construction. Monitoring of these restoration projects includes post-restoration (as-built) Digital Elevation Models (DEMs) as well as additional channel surveys following high channel-forming flows post-construction. The resulting sequence of measured bathymetry provides excellent validation data for morphodynamic simulations at the reach scale of a real river. In this paper we test the performance of a quasi-three-dimensional morphodynamic simulation against the measured elevation change. The resulting simulations predict the pattern of channel change reasonably well, but many of the details, such as the maximum scour, are underpredicted.
NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website ( http://finance.yahoo.co.jp/ ). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set to several Gaussian densities using a mixture of Gaussians. By minimizing the Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It might naturally be assumed that all the two-dimensional data points of stocks shrink into a single small region when an economic crisis takes place. The justification of this assumption is numerically checked for the empirical Japanese stock data, for instance, around 11 March 2011.
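As a minimal sketch of the embedding step this abstract describes (not the authors' code), classical Torgerson MDS can be written in a few lines of NumPy; the toy correlation matrix `c` and the distance transform d_ij = sqrt(2(1 - c_ij)) are assumptions chosen for the example:

```python
import numpy as np

def classical_mds(d, k=2):
    """Embed a symmetric distance matrix d into k dimensions (Torgerson MDS)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centered Gram matrix
    w, v = np.linalg.eigh(b)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]              # keep the k largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# toy cross-correlation matrix for three hypothetical stocks
c = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
d = np.sqrt(2.0 * (1.0 - c))                   # correlation -> distance
pts = classical_mds(d, k=2)                    # one 2-D point per stock
```

Highly correlated stocks land close together in the resulting scatter plot, which is the property the clustering step then exploits.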
Multi-Scale Multi-Dimensional Ion Battery Performance Model
2007-05-07
The Multi-Scale Multi-Dimensional (MSMD) Lithium Ion Battery Model allows for computer prediction and engineering optimization of thermal, electrical, and electrochemical performance of lithium ion cells with realistic geometries. The model introduces separate simulation domains for different scale physics, achieving much higher computational efficiency compared to the single domain approach. It solves a one dimensional electrochemistry model in a micro sub-grid system, and captures the impacts of macro-scale battery design factors on cell performance and material usage by solving cell-level electron and heat transports in a macro grid system.
Development of multi-dimensional body image scale for malaysian female adolescents.
Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin
2008-01-01
The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected from 328 female adolescents at a secondary school in Kuantan district, state of Pahang, Malaysia, using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, the Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979), and the Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between the Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders, and self-esteem were assessed for construct validity. A seven-factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multi-dimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders, and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept of body image and provides new insight into its multi-dimensionality in Malaysian female adolescents, with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used in future intervention programs to identify female adolescents who are potentially at risk of developing body image disturbance. PMID:20126371
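The internal-consistency checks reported above conventionally rest on Cronbach's alpha. As an illustrative sketch (not the study's analysis code), the coefficient can be computed from a subjects-by-items score matrix; the tiny score matrix below is a made-up example:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) matrix of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# hypothetical scores: two perfectly correlated items give alpha = 1
scores = np.array([[1.0, 1.0],
                   [2.0, 2.0],
                   [3.0, 3.0]])
alpha = cronbach_alpha(scores)  # → 1.0
```

Values above roughly 0.7 are usually read as acceptable internal consistency, which is the kind of threshold a scale-development study like this one reports against.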
A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems
NASA Astrophysics Data System (ADS)
Rasekh, Ehsan
2011-11-01
Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reducing linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.
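For readers unfamiliar with the projection step whose cost this abstract discusses, here is a minimal sketch of one-sided Krylov projection for a linear ODE system x' = Ax + bu. It is a generic illustration, not the paper's regression algorithm, and the system matrices are synthetic:

```python
import numpy as np

def reduce_model(a, b, r):
    """Project x' = A x + b u onto the r-dim Krylov subspace
    span{A^-1 b, A^-2 b, ...} (one-sided moment matching at s = 0)."""
    n = a.shape[0]
    v = np.zeros((n, r))
    w = np.linalg.solve(a, b).ravel()
    v[:, 0] = w / np.linalg.norm(w)
    for k in range(1, r):
        w = np.linalg.solve(a, v[:, k - 1])
        w -= v[:, :k] @ (v[:, :k].T @ w)   # Gram-Schmidt orthogonalization
        v[:, k] = w / np.linalg.norm(w)
    return v.T @ a @ v, v.T @ b, v         # reduced A, reduced b, basis

# reduce a random stable 6-state system to 2 states
rng = np.random.default_rng(0)
m = rng.standard_normal((6, 6))
a = -(m @ m.T + np.eye(6))                 # symmetric negative definite
b = rng.standard_normal((6, 1))
ar, br, v = reduce_model(a, b, 2)
```

The reduced system reproduces the leading transfer-function moments of the full one; the "multiple dimensions" problem in the abstract arises because each extra parameter enlarges the basis `v` that such a projection must carry.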
Development of a Multi-Dimensional Scale for PDD and ADHD
ERIC Educational Resources Information Center
Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya
2011-01-01
A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…
Rübel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keränen, Soile V. E.; Knowles, David W.; Hendriks, Cris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng
2013-01-01
Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies —such as efficient data management— supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach. PMID:23762211
A revised Thai Multi-Dimensional Scale of Perceived Social Support.
Wongpakaran, Nahathai; Wongpakaran, Tinakon
2012-11-01
In order to ensure the construct validity of the three-factor model of the Multi-dimensional Scale of Perceived Social Support (MSPSS), and based on the assumption that it helps users differentiate between sources of social support, in this study a revised version was created and tested. The aim was to compare the level of model fit of the original version of the MSPSS against the revised version, which contains a minor change from the original. The study was conducted on 486 medical students who completed the original and revised versions of the MSPSS, as well as the Rosenberg Self-Esteem Scale (Rosenberg, 1965) and the Beck Depression Inventory II (Beck, Steer, & Brown, 1996). Confirmatory factor analysis was performed to compare the results, showing that the revised version of the MSPSS demonstrated good internal consistency, with a Cronbach's alpha of .92 for the full questionnaire, and a significant correlation with the other scales, as predicted. The revised version provided better internal consistency, increasing the Cronbach's alpha for the Significant Others sub-scale from 0.86 to 0.92. Confirmatory factor analysis revealed an acceptable model fit: chi-square = 128.11, df = 51, p < .001; TLI = 0.94; CFI = 0.95; GFI = 0.90; PNFI = 0.71; AGFI = 0.85; RMSEA = 0.093 (0.073-0.113); SRMR = 0.042, which is better than the original version. The new version tended to display a better level of fit with a larger sample size. The limitations of the study are discussed, as well as recommendations for further study. PMID:23156952
How Fitch-Margoliash Algorithm can Benefit from Multi Dimensional Scaling
Lespinats, Sylvain; Grando, Delphine; Maréchal, Eric; Hakimi, Mohamed-Ali; Tenaillon, Olivier; Bastien, Olivier
2011-01-01
Whatever the phylogenetic method, genetic sequences are often described as strings of characters, so molecular sequences can be viewed as elements of a multi-dimensional space. As a consequence, studying motion in this space (i.e., the evolutionary process) must contend with the striking features of high-dimensional spaces, such as the concentration of measure phenomenon. To study how these features might influence phylogeny reconstructions, we examined a particular popular method: the Fitch-Margoliash algorithm, which belongs to the Least Squares methods. We show that the Least Squares methods are closely related to Multi Dimensional Scaling. Indeed, the criteria for Fitch-Margoliash and Sammon's mapping are somewhat similar. However, prolific research in Multi Dimensional Scaling has since produced methods that clearly outperform Sammon's mapping, and Least Squares methods for tree reconstruction can now take advantage of these improvements. "False neighborhoods" and "tears" are the two main risks in the dimensionality reduction field: a "false neighborhood" corresponds to data that are widely separated in the original space but found close together in the representation space, while neighboring data displayed in remote positions constitute a "tear". To address this problem, we brought the concepts of "continuity" and "trustworthiness", which limit the risk of "false neighborhoods" and "tears", into the tree reconstruction field. We also point out the concentration of measure phenomenon as a source of error and introduce new criteria to build phylogenies with improved preservation of distances and robustness. The authors and the Evolutionary Bioinformatics journal dedicate this article to the memory of Professor W.M. Fitch (1929-2011). PMID:21697992
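The kinship between the two criteria can be made concrete: Fitch-Margoliash minimizes a weighted sum of (d - t)^2 / d^2 over taxon pairs, while Sammon's stress weights squared residuals by 1/d and normalizes by the total distance. A minimal sketch of the latter (illustrative notation, not the paper's code), applied to a toy distance matrix:

```python
import numpy as np

def sammon_stress(d_orig, d_emb):
    """Sammon's stress: sum over pairs of (d - e)^2 / d, normalized by sum of d."""
    iu = np.triu_indices_from(d_orig, k=1)   # each unordered pair once
    do, de = d_orig[iu], d_emb[iu]
    return np.sum((do - de) ** 2 / do) / np.sum(do)

# toy pairwise distances among three taxa (hypothetical values)
d = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])
perfect = sammon_stress(d, d)   # → 0.0, a lossless embedding
```

Swapping the 1/d weights for the 1/d^2 weights of Fitch-Margoliash turns this into the tree-fitting criterion the abstract compares it to, which is exactly the structural similarity the authors exploit.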
AstroMD: A Multi Dimensional Visualization and Analysis Toolkit for Astrophysics
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Gheller, C.; Calori, L.; Buonomo, F.; Imboden, S.
2010-10-01
Over the past few years, the role of visualization for scientific purposes has grown enormously. Astronomy makes extended use of visualization techniques to analyze data, and scientific visualization has become a fundamental part of modern research in Astronomy. With the evolution of high performance computers, numerical simulations have assumed a great role in scientific investigation, allowing the user to run simulations at higher and higher resolution. Data produced in these simulations are often multi-dimensional arrays with several physical quantities. These data are very hard to manage and to analyze efficiently. Consequently the data analysis and visualization tools must follow the new requirements of the research. AstroMD is a tool for data analysis and visualization of astrophysical data; it can manage different physical quantities and multi-dimensional data sets. The tool uses virtual reality techniques by which the user has the impression of travelling through a computer-based multi-dimensional model.
Development of a multi-dimensional scale for PDD and ADHD.
Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya
2011-01-01
A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of support required, since inter-individual differences are substantial and co-morbidity is common. The MSPA consists of 14 domains and each domain is rated by a nine-point quantitative scale. The clinical and behavioral features are projected onto a radar-chart, which facilitates understanding of the disorders both by the patients themselves and by those in their surroundings. We assessed 179 patients and analyzed features by six diagnostic subgroups, which showed relationships between features and diagnoses. The inter-rater reliability was satisfactory. PMID:21353761
Theme section: Multi-dimensional modelling, analysis and visualization
NASA Astrophysics Data System (ADS)
Guilbert, Éric; Çöltekin, Arzu; Castro, Francesc Antón; Pettit, Chris
2016-07-01
Spatial data are now collected and processed in larger amounts, and used by larger populations, than ever before. While most geospatial data have traditionally been recorded as two-dimensional data, the evolution of data collection methods and user demands have led to data beyond two dimensions, describing complex multidimensional phenomena. An example of the relevance of multidimensional modelling is seen in the development of urban modelling, where several dimensions have been added to the traditional 2D map representation (Sester et al., 2011). These obviously include the third spatial dimension (Biljecki et al., 2015) and the temporal dimension, but also the scale dimension (Van Oosterom and Stoter, 2010) and, as mentioned by Lu et al. (2016), multi-spectral and multi-sensor data. Such a view organises multidimensional data around these different axes, and it is time to explore each axis as the availability of unprecedented amounts of new data demands new solutions. The availability of such large amounts of data induces an acute need for new approaches to assist with their dissemination, visualisation, and analysis by end users. Several issues need to be considered in order to provide a meaningful representation and assist in data visualisation and mining, modelling and analysis, such as data structures allowing representation at different scales or in different contexts of thematic information.
ERIC Educational Resources Information Center
Chiou, Guo-Li; Anderson, O. Roger
2010-01-01
This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…
Visualizing the sedimentary response through the orogenic cycle using multi-dimensional scaling
NASA Astrophysics Data System (ADS)
Spencer, C. J.; Kirkland, C.
2015-12-01
Changing patterns in detrital provenance through time have the ability to resolve salient features of an orogenic cycle. Such changes in the age spectrum of detrital minerals can be attributed to fluctuations in the geodynamic regime (e.g. opening of seaways, initiation of subduction and arc magmatism, and transition from subduction to collisional tectonics with arrival of exotic crustal material). These processes manifest themselves through a variety of sedimentary responses due to basin formation, transition from rift to drift sedimentation, or inversion and basement unroofing. This generally is charted by the presence of older detrital zircon populations during basement unroofing events and is followed by a successive younging in the detrital zircon age signature either through arrival of young island arc terranes or the progression of subduction magmatism along a continental margin. The sedimentary response to the aforementioned geodynamic environment can be visualized using a multi-dimensional scaling approach to detrital zircon age spectra. This statistical tool characterizes the "dissimilarity" of age spectra of the various sedimentary successions, but importantly also charts this measure through time. We present three case studies in which multi-dimensional scaling reveals additional useful information on the style of basin evolution within the orogenic cycle. The Albany-Fraser Orogeny in Western Australia and Grenville Orogeny (sensu stricto) in Laurentia demonstrate clear patterns in which detrital zircon age spectra become more dissimilar with time. In stark contrast, sedimentary successions from the Meso- to Neoproterozoic North Atlantic Region reveal no consistent pattern. Rather, the North Atlantic Region reflects a signature consistent with significant zircon age communication due to a distal position from an orogenic front, oblique translation of terranes, and complexity of the continental margin. This statistical approach provides a mechanism to
Nguyen, Lan K.; Degasperi, Andrea; Cotter, Philip; Kholodenko, Boris N.
2015-01-01
Biochemical networks are dynamic and multi-dimensional systems, consisting of tens or hundreds of molecular components. Diseases such as cancer commonly arise due to changes in the dynamics of signalling and gene regulatory networks caused by genetic alterations. Elucidating the network dynamics in health and disease is crucial to better understand disease mechanisms and derive effective therapeutic strategies. However, current approaches to analyse and visualise system dynamics often provide only low-dimensional projections of the network dynamics, which fail to present the multi-dimensional picture of the system behaviour. More efficient and reliable methods for multi-dimensional systems analysis and visualisation are thus required. To address this issue, we here present an integrated analysis and visualisation framework for high-dimensional network behaviour which exploits the advantages provided by parallel coordinates graphs. We demonstrate the applicability of the framework, named "Dynamics Visualisation based on Parallel Coordinates" (DYVIPAC), to a variety of signalling networks ranging in topological wirings and dynamic properties. The framework proved useful in acquiring an integrated understanding of systems behaviour. PMID:26220783
A multi scale multi-dimensional thermo electrochemical modelling of high capacity lithium-ion cells
NASA Astrophysics Data System (ADS)
Tourani, Abbas; White, Peter; Ivey, Paul
2014-06-01
Lithium iron phosphate (LFP) and lithium manganese oxide (LMO) are competitive and complementary to each other as cathode materials for lithium-ion batteries, especially for use in electric vehicles. A multi-scale multi-dimensional physics-based model is proposed in this paper to study the thermal behaviour of the two lithium-ion chemistries. The model consists of two sub-models, a one-dimensional (1D) electrochemical sub-model and a two-dimensional (2D) thermo-electric sub-model, which are coupled and solved concurrently. The 1D model predicts the heat generation rate (Qh) and voltage (V) of the battery cell through different load cycles. The 2D model of the battery cell accounts for the temperature distribution and current distribution across the surface of the battery cell. The two cells are examined experimentally through 90 h load cycles including high/low charge/discharge rates. The experimental results are compared with the model results and they are in good agreement. The results presented in this paper verify the cells' temperature behaviour at different operating conditions, which will lead to the design of a cost-effective thermal management system for the battery pack.
Method of multi-dimensional moment analysis for the characterization of signal peaks
Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A
2012-10-23
A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
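A sketch of the one-dimensional building block of such a moment analysis (an illustrative example under stated assumptions, not the patented procedure): the zeroth, first, and second central moments of a signal peak, plus a Peclet-like sharpness ratio; the Gaussian test peak and the exact form of the score are assumptions made for the demo:

```python
import numpy as np

def peak_moments(t, y):
    """Zeroth, first, and second central moments of a signal peak y(t)."""
    m0 = np.trapz(y, t)                          # area under the peak
    mu = np.trapz(t * y, t) / m0                 # centroid (mean arrival time)
    var = np.trapz((t - mu) ** 2 * y, t) / m0    # variance (peak spread)
    return m0, mu, var

# synthetic Gaussian peak: centroid 4.0, sigma 0.5
t = np.linspace(0.0, 10.0, 2001)
y = np.exp(-0.5 * ((t - 4.0) / 0.5) ** 2)
m0, mu, var = peak_moments(t, y)
pe = mu ** 2 / var   # a Peclet-like figure of merit: sharp, late peaks score high
```

Scoring peaks this way in two dimensions (e.g., drift time and desorption temperature) is what lets an automated system rank large numbers of analyte peaks by quality.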
On-the-fly analysis of multi-dimensional rasters in a GIS
NASA Astrophysics Data System (ADS)
Abdul-Kadar, F.; Xu, H.; Gao, P.
2016-04-01
Geographic Information Systems and other mapping applications that specialize in image analysis routinely process high-dimensional gridded rasters as multivariate data cubes. Frameworks responsible for processing image data within these applications suffer from a combination of key shortcomings: inefficiencies stemming from intermediate results being stored on disk, a lack of versatility from disparate tools that don't work in unison, or poor scalability with increasing volume and dimensionality of the data. We present raster functions as a powerful mechanism for processing and analyzing multi-dimensional rasters, designed to overcome these crippling issues. A raster function accepts multivariate hypercubes and processing parameters as input and produces one output raster. Function chains and their parameterized form, function templates, represent a complex image processing operation constructed by composing simpler raster functions. We discuss extensibility of the framework via Python, portability of templates via XML, and dynamic filtering of data cubes using SQL. This paper highlights how ArcGIS employs raster functions in its mission to build actionable information from science and geographic data, by shrinking the lag between the acquisition of raw multi-dimensional raster data and the ultimate dissemination of derived image products. ArcGIS has a mature raster I/O pipeline based on GDAL, and it manages gridded multivariate multi-dimensional cubes in mosaic datasets stored within a geodatabase atop an RDBMS. Bundled with raster functions, we show how those capabilities enable up-to-date maps driven by distributed geoanalytics and powerful visualizations against large volumes of near real-time gridded data.
Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis
Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel
2016-01-01
An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that a statistical measure preserves the variation signal inherent in data more faithfully, across both multi-dimensional projections and multi-resolution representations, than a methodology based upon averaging.
Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.
2012-05-01
This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.
Multi-dimensional PARAFAC2 component analysis of multi-channel EEG data including temporal tracking.
Weis, Martin; Jannek, Dunja; Roemer, Florian; Guenther, Thomas; Haardt, Martin; Husar, Peter
2010-01-01
The identification of signal components in electroencephalographic (EEG) data originating from neural activities is a long-standing problem in neuroscience. This area has regained attention due to the possibilities of multi-dimensional signal processing. In this work we analyze measured visual-evoked potentials on the basis of the time-varying spectrum for each channel. Recently, parallel factor (PARAFAC) analysis has been used to identify signal components in the space-time-frequency domain. However, the PARAFAC decomposition is not able to cope with components appearing time-shifted over the different channels. Furthermore, it is not possible to track PARAFAC components over time. In this contribution we show how to overcome these problems by using the PARAFAC2 model, which renders it an attractive approach for processing EEG data with highly dynamic (moving) sources. PMID:21096263
Dynameomics: a multi-dimensional analysis-optimized database for dynamic protein data.
Kehl, Catherine; Simms, Andrew M; Toofanny, Rudesh D; Daggett, Valerie
2008-06-01
The Dynameomics project is our effort to characterize the native-state dynamics and folding/unfolding pathways of representatives of all known protein folds by way of molecular dynamics simulations, as described by Beck et al. (in Protein Eng. Des. Select., the first paper in this series). The data produced by these simulations are highly multidimensional in structure and multi-terabytes in size. Both of these features present significant challenges for storage, retrieval and analysis. For optimal data modeling and flexibility, we needed a platform that supported both multidimensional indices and hierarchical relationships between related types of data and that could be integrated within our data warehouse, as described in the accompanying paper directly preceding this one. For these reasons, we have chosen On-line Analytical Processing (OLAP), a multi-dimensional analysis optimized database, as an analytical platform for these data. OLAP is a mature technology in the financial sector, but it has not been used extensively for scientific analysis. Our project is furthermore unusual for its focus on the multidimensional and analytical capabilities of OLAP rather than its aggregation capacities. The dimensional data model and hierarchies are very flexible. The query language is concise for complex analysis and rapid data retrieval. OLAP shows great promise for dynamic protein analysis in bioengineering and biomedical applications. In addition, OLAP may have similar potential for other scientific and engineering applications involving large and complex datasets. PMID:18411222
Evaluating the use of HILIC in large-scale, multi-dimensional proteomics: Horses for courses?
Bensaddek, Dalila; Nicolas, Armel; Lamond, Angus I.
2015-01-01
Despite many recent advances in instrumentation, the sheer complexity of biological samples remains a major challenge in large-scale proteomics experiments, reflecting both the large number of protein isoforms and the wide dynamic range of their expression levels. However, while the dynamic range of expression levels for different components of the proteome is estimated to be ∼10⁷–10⁸, the equivalent dynamic range of LC–MS is currently limited to ∼10⁶. Sample pre-fractionation has therefore become routinely used in large-scale proteomics to reduce sample complexity during MS analysis and thus alleviate the problem of ion suppression and undersampling. There is currently a wide range of chromatographic techniques that can be applied as a first dimension separation. Here, we systematically evaluated the use of hydrophilic interaction liquid chromatography (HILIC), in comparison with hSAX, as a first dimension for peptide fractionation in a bottom-up proteomics workflow. The data indicate that in addition to its role as a useful pre-enrichment method for PTM analysis, HILIC can provide a robust, orthogonal and high-resolution method for increasing the depth of proteome coverage in large-scale proteomics experiments. The data also indicate that the choice of using either HILIC, hSAX, or other methods, is best made taking into account the specific types of biological analyses being performed. PMID:26869852
The use of multi-dimensional flow and morphodynamic models for restoration design analysis
NASA Astrophysics Data System (ADS)
McDonald, R.; Nelson, J. M.
2013-12-01
River restoration projects with the goal of restoring a wide range of morphologic and ecologic channel processes and functions have become common. The complex interactions between flow and sediment-transport make it challenging to design river channels that are both self-sustaining and improve ecosystem function. The relative immaturity of the field of river restoration and shortcomings in existing methodologies for evaluating channel designs contribute to this problem, often leading to project failures. The call for increased monitoring of constructed channels to evaluate which restoration techniques do and do not work is ubiquitous and may lead to improved channel restoration projects. However, an alternative approach is to detect project flaws before the channels are built by using numerical models to simulate hydraulic and sediment-transport processes and habitat in the proposed channel (Restoration Design Analysis). Multi-dimensional models provide spatially distributed quantities throughout the project domain that may be used to quantitatively evaluate restoration designs for such important metrics as (1) the change in water-surface elevation which can affect the extent and duration of floodplain reconnection, (2) sediment-transport and morphologic change which can affect the channel stability and long-term maintenance of the design; and (3) habitat changes. These models also provide an efficient way to evaluate such quantities over a range of appropriate discharges including low-probability events which often prove the greatest risk to the long-term stability of restored channels. Currently there are many free and open-source modeling frameworks available for such analysis including iRIC, Delft3D, and TELEMAC. In this presentation we give examples of Restoration Design Analysis for each of the metrics above from projects on the Russian River, CA and the Kootenai River, ID. These examples demonstrate how detailed Restoration Design Analysis can be used to
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla; Imre, D.; Mueller, Klaus
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
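For reference, the global low-dimensional embedding step that both the proposed structural metric and the Euclidean baseline feed into can be sketched as classical (Torgerson) MDS. A minimal numpy version, assuming the input is a symmetric distance matrix (illustrative, not the authors' implementation):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points in R^k from an n x n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

The paper's contribution amounts to replacing the Euclidean entries of `D` with a structure-aware (and, in the bi-scale framework, distance-regime-dependent) metric before this embedding step.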
Jarraya, Mohamed; Guermazi, Ali; Niu, Jingbo; Duryea, Jeffrey; Lynch, John A; Roemer, Frank W
2015-11-01
The aim of this study has been to test reproducibility of fractal signature analysis (FSA) in a young, active patient population taking into account several parameters including intra- and inter-reader placement of regions of interest (ROIs) as well as various aspects of projection geometry. In total, 685 patients were included (135 athletes and 550 non-athletes, 18-36 years old). Regions of interest (ROI) were situated beneath the medial tibial plateau. The reproducibility of texture parameters was evaluated using intraclass correlation coefficients (ICC). Multi-dimensional assessment included: (1) anterior-posterior (A.P.) vs. posterior-anterior (P.A.) (Lyon-Schuss technique) views on 102 knees; (2) unilateral (single knee) vs. bilateral (both knees) acquisition on 27 knees (acquisition technique otherwise identical; same A.P. or P.A. view); (3) repetition of the same image acquisition on 46 knees (same A.P. or P.A. view, and same unilateral or bilateral acquisition); and (4) intra- and inter-reader reliability with repeated placement of the ROIs in the subchondral bone area on 99 randomly chosen knees. ICC values on the reproducibility of texture parameters for A.P. vs. P.A. image acquisitions for horizontal and vertical dimensions combined were 0.72 (95% confidence interval (CI) 0.70-0.74), ranging from 0.47 to 0.81 for the different dimensions. For unilateral vs. bilateral image acquisitions, the ICCs were 0.79 (95% CI 0.76-0.82), ranging from 0.55 to 0.88. For the repetition of the identical view, the ICCs were 0.82 (95% CI 0.80-0.84), ranging from 0.67 to 0.85. Intra-reader reliability was 0.93 (95% CI 0.92-0.94) and inter-reader reliability was 0.96 (95% CI 0.88-0.99). A decrease in reliability was observed with increasing voxel sizes. Our study confirms excellent intra- and inter-reader reliability for FSA; however, results seem to be affected by acquisition technique, which has not been previously recognized. PMID:26343866
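The reliability figures above are intraclass correlation coefficients. As a hedged sketch (the study's exact ICC variant is not specified in the abstract; this shows the simple one-way random-effects form ICC(1) for a subjects-by-raters matrix):

```python
import numpy as np

def icc_oneway(X):
    """One-way random-effects ICC(1); X has shape (n_subjects, k_raters)."""
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)
    # between-subject and within-subject mean squares from one-way ANOVA
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((X - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect agreement between raters drives the within-subject mean square to zero and the ICC to 1; the confidence intervals quoted above would come from the associated F distribution.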
Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng
2014-01-01
Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this approach is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
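For context, a minimal item-based collaborative filtering step on a binary user-item matrix might look as follows; this uses plain cosine similarity between item columns rather than the MDS-derived similarity the paper proposes, and all names are illustrative:

```python
import numpy as np

def item_based_scores(R, u):
    """Score unseen items for user u from a binary user-item matrix R (users x items)."""
    # cosine similarity between item columns
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0
    S = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(S, 0.0)
    scores = S @ R[u]              # aggregate similarity to the user's collected items
    scores[R[u] > 0] = -np.inf     # mask items the user already has
    return scores
```

The paper's variant replaces `S` with similarities read off the MDS embedding of the bipartite network, which smooths out noise before the scoring step.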
Large-Scale Multi-Dimensional Document Clustering on GPU Clusters
Cui, Xiaohui; Mueller, Frank; Zhang, Yongpeng; Potok, Thomas E
2010-01-01
Document clustering plays an important role in data mining systems. Recently, a flocking-based document clustering algorithm has been proposed to solve the problem through simulation resembling the flocking behavior of birds in nature. This method is superior to other clustering algorithms, including k-means, in the sense that the outcome is not sensitive to the initial state. One limitation of this approach is that the algorithmic complexity is inherently quadratic in the number of documents. As a result, execution time becomes a bottleneck with a large number of documents. In this paper, we assess the benefits of exploiting the computational power of Beowulf-like clusters equipped with contemporary Graphics Processing Units (GPUs) as a means to significantly reduce the runtime of flocking-based document clustering. Our framework scales up to over one million documents processed simultaneously in a sixteen-node GPU cluster. Results are also compared to a four-node cluster with higher-end GPUs. On these clusters, we observe 30X-50X speedups, which demonstrates the potential of GPU clusters to efficiently solve massive data mining problems. Such speedups combined with the scalability potential and accelerator-based parallelization are unique in the domain of document-based data mining, to the best of our knowledge.
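The quadratic complexity noted above comes from all-pairs neighbor queries in the flocking simulation. A common way to cut that cost (a sketch of uniform-grid spatial binning, not the paper's GPU implementation; names are illustrative):

```python
import numpy as np
from collections import defaultdict

def grid_neighbors(pos, radius):
    """All pairs (i, j), i < j, within `radius`, via uniform-grid binning
    instead of an O(n^2) all-pairs scan. Cell size equals the interaction radius,
    so every neighbor lies in the same or an adjacent cell."""
    cell = defaultdict(list)
    keys = np.floor(pos / radius).astype(int)
    for i, k in enumerate(map(tuple, keys)):
        cell[k].append(i)
    pairs = set()
    for i, k in enumerate(map(tuple, keys)):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cell.get((k[0] + dx, k[1] + dy), ()):
                    if j > i and np.linalg.norm(pos[i] - pos[j]) <= radius:
                        pairs.add((i, j))
    return pairs
```

On a GPU, the same idea maps each cell's point list to a block, which is one reason the flocking kernel parallelizes well despite its nominally quadratic formulation.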
Nitrogen deposition and multi-dimensional plant diversity at the landscape scale
Roth, Tobias; Kohli, Lukas; Rihm, Beat; Amrhein, Valentin; Achermann, Beat
2015-01-01
Estimating effects of nitrogen (N) deposition is essential for understanding human impacts on biodiversity. However, studies relating atmospheric N deposition to plant diversity are usually restricted to small plots of high conservation value. Here, we used data on 381 randomly selected 1-km² plots covering most habitat types of Central Europe and an elevational range of 2900 m. We found that high atmospheric N deposition was associated with low values of six measures of plant diversity. The weakest negative relation to N deposition was found in the traditionally measured total species richness. The strongest relation to N deposition was in phylogenetic diversity, with an estimated loss of 19% due to atmospheric N deposition as compared with a homogeneously distributed historic N deposition without human influence, or of 11% as compared with a spatially varying N deposition for the year 1880, during industrialization in Europe. Because phylogenetic plant diversity is often related to ecosystem functioning, we suggest that atmospheric N deposition threatens functioning of ecosystems at the landscape scale. PMID:26064640
CTH: A software family for multi-dimensional shock physics analysis
Hertel, E.S. Jr.; Bell, R.L.; Elrick, M.G.; Farnsworth, A.V.; Kerley, G.I.; McGlaun, J.M.; Petney, S.V.; Silling, S.A.; Taylor, P.A.; Yarrington, L.
1992-12-31
CTH is a family of codes developed at Sandia National Laboratories for modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH includes models for material strength, fracture, porous materials, and high explosive detonation and initiation. Viscoplastic or rate-dependent models of material strength have been added recently. The formulations of Johnson-Cook, Zerilli-Armstrong, and Steinberg-Guinan-Lund are standard options within CTH. These models rely on using an internal state variable to account for the history dependence of material response. The implementation of internal state variable models will be discussed and several sample calculations will be presented. Comparisons with experimental data will be made for the various material strength models. The advancements made in modeling material response have significantly improved the ability of CTH to model complex large-deformation, plastic-flow dominated phenomena. Detonation of energetic material under shock loading conditions has been of great interest. A recently developed model of reactive burn for high explosives (HE) has been added to CTH. This model, along with newly developed tabular equations-of-state for the HE reaction by-products, has been compared to one- and two-dimensional explosive detonation experiments. These comparisons indicate excellent agreement of CTH predictions with experimental results. The new reactive burn model coupled with the advances in equation-of-state modeling make it possible to predict multi-dimensional burn phenomena without modifying the model parameters for different dimensionality. Examples of the features of CTH will be given. The emphasis in the simulations shown will be on comparison with well-characterized experiments covering key phenomena of shock physics.
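Of the strength formulations listed, Johnson-Cook is the most compact to state: the flow stress is σ = (A + Bεⁿ)(1 + C ln ε̇*)(1 − T*ᵐ), with ε the equivalent plastic strain, ε̇* the normalized strain rate, and T* the homologous temperature. A hedged numpy sketch of this evaluation (a textbook formula, not CTH's internal implementation; parameter names are the conventional ones):

```python
import numpy as np

def johnson_cook_flow_stress(eps_p, eps_dot, T, A, B, n, C, m,
                             T_room, T_melt, eps_dot0=1.0):
    """Johnson-Cook flow stress: (A + B*eps^n) * (1 + C*ln(edot*)) * (1 - T*^m)."""
    T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
    rate = 1.0 + C * np.log(max(eps_dot / eps_dot0, 1e-12))
    return (A + B * eps_p ** n) * rate * (1.0 - T_star ** m)
```

At zero plastic strain, the reference strain rate, and room temperature, the expression reduces to the yield constant A, which is a convenient sanity check; the internal-state-variable models discussed in the abstract replace this purely algebraic form with history-dependent evolution equations.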
magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation
NASA Astrophysics Data System (ADS)
Angleraud, Christophe
2014-06-01
The ever increasing amount of data and processing capabilities - following the well-known Moore's law - is challenging the way scientists and engineers are currently exploiting large datasets. The scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but they often get very confused as more layers are added. Moreover, the introduction of time as a fourth analysis dimension to allow analysis of time-dependent phenomena such as meteorological or climate models is encouraging real-time data exploration techniques that allow spatial-temporal points of interest to be detected by integration of moving images by the human brain. Magellium has been involved in high performance image processing chains for satellite image processing as well as scientific signal analysis and geographic information management since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium scale clusters with expansion capabilities to large cloud-based clusters.
Riordan, Daniel P.; Varma, Sushama; West, Robert B.; Brown, Patrick O.
2015-01-01
Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) for de novo feature prediction was obtained. These results suggest that high-dimensional profiling may advance the development of computer
2014-01-01
Background Lack of social support is an important risk factor for antenatal depression and anxiety in low- and middle-income countries. We translated, adapted and validated the Multi-dimensional Scale of Perceived Social Support (MSPSS) in order to study the relationship between perceived social support, intimate partner violence and antenatal depression in Malawi. Methods The MSPSS was translated and adapted into Chichewa and Chiyao. Five hundred and eighty-three women attending an antenatal clinic were administered the MSPSS, depression screening measures, and a risk factor questionnaire including questions about intimate partner violence. A sub-sample of participants (n = 196) were interviewed using the Structured Clinical Interview for DSM-IV to diagnose major depressive episode. Validity of the MSPSS was evaluated by assessment of internal consistency, factor structure, and correlation with Self Reporting Questionnaire (SRQ) score and major depressive episode. We investigated associations between perception of support from different sources (significant other, family, and friends) and major depressive episode, and whether intimate partner violence was a moderator of these associations. Results In both Chichewa and Chiyao, the MSPSS had high internal consistency for the full scale and significant other, family, and friends subscales. MSPSS full scale and subscale scores were inversely associated with SRQ score and major depression diagnosis. Using principal components analysis, the MSPSS had the expected 3-factor structure in analysis of the whole sample. On confirmatory factor analysis, goodness-of-fit indices were better for a 3-factor model than for a 2-factor model, and met standard criteria when correlation between items was allowed. Lack of support from a significant other was the only MSPSS subscale that showed a significant association with depression on multivariate analysis, and this association was moderated by experience of intimate partner violence.
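Internal consistency of a scale like the MSPSS is conventionally summarized by Cronbach's alpha, computed from the item variances and the variance of the total score. A minimal sketch (illustrative, not the authors' analysis code):

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for an item-response matrix X (n_respondents x k_items)."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)
```

Alpha approaches 1 when items covary strongly (as reported here for the full scale and each subscale) and falls toward 0 when items are unrelated.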
NASA Astrophysics Data System (ADS)
Carkin, Susan
The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning---and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures---more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose. A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being
Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol
2015-01-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states. PMID:26046669
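The ion counts reported above obey simple set arithmetic: 426 shared + 206 TOF-only = 632, and 426 shared + 533 MetA-only = 959. A toy consistency check with stand-in ion IDs (the IDs are arbitrary; only the set sizes mirror the paper):

```python
# stand-in ion identifiers chosen so the overlap structure matches the abstract
tof = set(range(632))                                   # 632 TOF-SIMS ions
meta = set(range(206, 632)) | set(range(2000, 2533))    # 426 shared + 533 unique

shared = tof & meta
tof_only = tof - meta
meta_only = meta - tof
combined = tof | meta    # the integrated (MD-SIMS) ion list
```

The integrated list (1165 distinct ions here) is what enables the larger pool of discriminatory ions between cancer and normal tissue.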
NASA Astrophysics Data System (ADS)
Okano, Toshiyuki
2001-05-01
Correlations between subjective acoustical ratings and hall-averaged values of acoustical measures are studied among existing worldwide major concert halls. It was shown that the classified acoustical ratings by Beranek [Concert and Opera Halls, How They Sound (ASA, 1996)] are discriminated correctly by combining the binaural quality index (BQI) with some other acoustical measures. BQI is determined by the arithmetic average of the inter-aural cross-correlation coefficient in the three octave bands of 500, 1000, and 2000 Hz, subtracted from unity, calculated from the early 80-ms part of the binaural impulse response. Considering that the upper limit value of BQI not to cause disturbing image shift is approximately 0.85 at an individual seat [Okano, J. Acoust. Soc. Am. 2219-2230 (2000)], the values of 0.6 or higher in the hall-averaged value of BQI, 0.85 or smaller in the individual-seat value of BQI, and approximately 5 dB or higher in the strength factor at middle frequencies are proposed as design objectives to attain high acoustical quality, provided that other acoustical measures are also optimized. These target values will be very effective in studying the room shape of halls, using scale models or computer models.
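The BQI described above can be sketched directly from its definition: one minus the mean early (0-80 ms) interaural cross-correlation coefficient over the 500, 1000, and 2000 Hz octave-band binaural impulse responses, with the IACC taken as the maximum normalized cross-correlation magnitude over small interaural lags. A minimal numpy version (illustrative; the ±1 ms lag window and normalization follow the usual IACC convention, not code from the paper):

```python
import numpy as np

def bqi(band_impulse_pairs, fs):
    """BQI = 1 - mean early IACC over octave-band binaural impulse response pairs.

    band_impulse_pairs: iterable of (left, right) impulse responses, one pair
    per octave band (500, 1000, 2000 Hz); fs: sampling rate in Hz."""
    iaccs = []
    for pl, pr in band_impulse_pairs:
        n = int(0.080 * fs)                      # early 80-ms window
        l, r = pl[:n], pr[:n]
        c = np.correlate(l, r, mode="full")      # cross-correlation at all lags
        mid = len(c) // 2                        # zero-lag index
        w = int(0.001 * fs)                      # +/- 1 ms lag search window
        denom = np.sqrt((l ** 2).sum() * (r ** 2).sum())
        iaccs.append(np.abs(c[mid - w:mid + w + 1]).max() / denom)
    return 1.0 - float(np.mean(iaccs))
```

Identical left and right responses give IACC = 1 in every band and hence BQI = 0; fully decorrelated ears push BQI toward the high values the paper associates with good halls.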
Multi-Dimensional Effective Field Theory Analysis for Direct Detection of Dark Matter
NASA Astrophysics Data System (ADS)
Rogers, Hannah; SuperCDMS Collaboration
2016-03-01
Experiments like the Cryogenic Dark Matter Search (CDMS) attempt to find dark matter (non-luminous matter that makes up approximately 80% of the matter in the universe) through direct detection of interactions between dark matter and a target material. The Effective Field Theory (EFT) approach increases the number of considered interactions between dark matter and the normal, target matter from two (spin-independent and spin-dependent interactions) to eleven operators with four possible interference terms. These additional operators allow for a more complete analysis of complementary direct dark matter searches; however, the higher-dimensional likelihoods necessary to span the increased number of operators require a clever computational tool such as MultiNest. I present here analyses of published and projected data from CDMS (Si and Ge targets) and LUX (liquid Xe target) assuming operator parameter spaces ranging from 3 - 5 dimensions and folding in information on energy-dependent backgrounds when possible.
2008-05-22
are the latest versions available from NEA-DB). o The memory and data management was updated as well as the language level (code was rewritten from Fortran-77 to Fortran-95). SUSD3D is coupled to several discrete‑ordinates codes via binary interface files. SUSD3D can use the flux moment files produced by discrete ordinates codes: ANISN, DORT, TORT, ONEDANT, TWODANT, and THREEDANT. In some of these codes minor modifications are required. Variable dimensions used in the TORT‑DORT system are supported. In 3D analysis the geometry and material composition is taken directly from the TORT produced VARSCL binary file, reducing in this way the user's input to SUSD3D. Multigroup cross‑section sets are read in the GENDF format of the NJOY/GROUPR code system, and the covariance data are expected in the COVFIL format of NJOY/ERRORR or the COVERX format of PUFF‑2. The ZZ‑VITAMIN‑J/COVA cross section covariance matrix library can be used as an alternative to the NJOY code system. The package includes the ANGELO code to produce the covariance data in the required energy structure in the COVFIL format. The following cross section processing modules to be added to the NJOY‑94 code system are included in the package: o ERR34: an extension of the ERRORR module of the NJOY code system for the File‑34 processing. It is used to prepare multigroup SAD cross sections covariance matrices. o GROUPSR: An additional code module for the preparation of partial cross sections for SAD sensitivity analysis. Updated version of the same code from SUSD, extended to the ENDF‑6 format. o SEADR: An additional code module to prepare group covariance matrices for SAD/SED uncertainty analysis.
Zheng, Xiwei; Yoo, Michelle J.; Hage, David S.
2013-01-01
A multi-dimensional chromatographic approach was developed to measure the free fractions of drug enantiomers in samples that also contained a binding protein or serum. This method, which combined ultrafast affinity extraction with a chiral stationary phase, was demonstrated using the drug warfarin and the protein human serum albumin. PMID:23979112
NASA Astrophysics Data System (ADS)
Tyobeka, Bismark Mzubanzi
A coupled neutron transport thermal-hydraulics code system with both diffusion and transport theory capabilities is presented. At the heart of the coupled code is a powerful neutronics solver, based on a neutron transport theory approach, powered by the time-dependent extension of the well-known DORT code, DORT-TD. DORT-TD uses a fully implicit time integration scheme and is coupled via a general interface to the thermal-hydraulics code THERMIX-DIREKT, an HTR-specific two-dimensional core thermal-hydraulics code. Feedback is accounted for by interpolating multigroup cross sections from pre-generated libraries which are structured for user-specified discrete sets of thermal-hydraulic parameters, e.g. fuel and moderator temperatures. The coupled code system is applied to two HTGR designs, the PBMR 400MW and the PBMR 268MW. Steady-state and several design basis transients are modeled in an effort to assess the adequacy of using neutron diffusion theory as opposed to the more accurate but computationally expensive neutron transport theory. It turns out that there are small but significant differences in the results from using either of the two theories. It is concluded that diffusion theory can be used with a higher degree of confidence in the PBMR as long as more than two energy groups are used and the result is checked against a lower-order transport solution, especially for safety analysis purposes. The end product of this thesis is a high-fidelity, state-of-the-art computer code system, with multiple capabilities to analyze all PBMR safety-related transients in an accurate and efficient manner.
Analysis of multi-dimensional contemporaneous EHR data to refine delirium assessments.
Corradi, John P; Chhabra, Jyoti; Mather, Jeffrey F; Waszynski, Christine M; Dicks, Robert S
2016-08-01
Delirium is a potentially lethal condition of altered mental status, attention, and level of consciousness with an acute onset and fluctuating course. Its causes are multi-factorial, and its pathophysiology is not well understood; therefore clinical focus has been on prevention strategies and early detection. One patient evaluation technique in routine use is the Confusion Assessment Method (CAM): a relatively simple test resulting in 'positive', 'negative' or 'unable-to-assess' (UTA) ratings. Hartford Hospital nursing staff use the CAM regularly on all non-critical care units, and a high frequency of UTA was observed after reviewing several years of records. In addition, patients with UTA ratings displayed poor outcomes such as in-hospital mortality, longer lengths of stay, and discharge to acute and long term care facilities. We sought to better understand the use of UTA, especially outside of critical care environments, in order to improve delirium detection throughout the hospital. An unsupervised clustering approach was used with additional, concurrent assessment data available in the EHR to categorize patient visits with UTA CAMs. The results yielded insights into the most common situations in which the UTA rating was used (e.g. impaired verbal communication, dementia), suggesting potentially inappropriate ratings that could be refined with further evaluation and remedied with updated clinical training. Analysis of the patient clusters also suggested that unrecognized delirium may contribute to the poor outcomes associated with the use of UTA. This method of using temporally related high dimensional EHR data to illuminate a dynamic medical condition could have wider applicability. PMID:27340924
Contributions to the computational analysis of multi-dimensional stochastic dynamical systems
NASA Astrophysics Data System (ADS)
Wojtkiewicz, Steven F., Jr.
2000-12-01
Several contributions in the area of computational stochastic dynamics are discussed; specifically, the response of stochastic dynamical systems by high order closure, the response of Poisson and Gaussian white noise driven systems by solution of a transformed generalized Kolmogorov equation, and control of nonlinear systems by response moment specification. Statistical moments of response are widely used in the analysis of stochastic dynamical systems of engineering interest. It is known that, if the inputs to the system are Gaussian or filtered Gaussian white noise, Ito's rule can be used to generate a system of first order linear differential equations governing the evolution of the moments. For nonlinear systems, the moment equations form an infinite hierarchy, necessitating the application of a closure procedure to truncate the system at some finite dimension at the expense of making the moment equations nonlinear. Various methods to close these moment equations have been developed. The efficacy of cumulant-neglect closure methods for complex dynamical systems is examined. Various methods have been developed to determine the response of dynamical systems subjected to additive and/or multiplicative Gaussian white noise excitations. While Gaussian white noise and filtered Gaussian white noise provide efficient and useful models of various environmental loadings, a broader class of random processes, filtered Poisson processes, are often more realistic in modeling disturbances that originate from impact-type loadings. The response of dynamical systems to combinations of Poisson and Gaussian white noise forms a Markov process whose transition density satisfies a pair of initial-boundary value problems termed the generalized Kolmogorov equations. A numerical solution algorithm for these IBVPs is developed and applied to several representative systems. Classical covariance control theory is extended to the case of nonlinear systems using the method of statistical
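To illustrate the moment-equation idea mentioned above: for the linear scalar SDE dx = -a x dt + sigma dW, Ito's rule yields a closed pair of first-order ODEs for the first two moments (no closure approximation is needed in the linear case; the hierarchy truncates only for nonlinear systems). The sketch below, with illustrative parameter values, integrates those moment equations with forward Euler and recovers the analytic steady-state variance sigma^2 / (2a).

```python
# Moment equations for dx = -a x dt + sigma dW (Ito):
#   d<m1>/dt = -a m1
#   d<m2>/dt = -2 a m2 + sigma^2
a, sigma = 1.5, 0.8          # illustrative coefficients
m1, m2 = 1.0, 1.0            # initial first and second moments
dt = 1e-3
for _ in range(20000):       # integrate to t = 20, long past the decay time 1/a
    m1 += dt * (-a * m1)
    m2 += dt * (-2 * a * m2 + sigma ** 2)

steady_m2 = sigma ** 2 / (2 * a)   # analytic stationary second moment
```

For a nonlinear drift, the m2 equation would involve higher moments, which is exactly where cumulant-neglect closure enters.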
NASA Astrophysics Data System (ADS)
De Masi, A.
2015-09-01
The paper describes reading criteria for the documentation of important buildings in Milan, Italy, as a case study of research on the integration of new technologies to obtain 3D multi-scale representations of architecture. In addition, it affords an overview of current optical 3D measurement sensors and techniques used for surveying, mapping, digital documentation and 3D modeling applications in the Cultural Heritage field. Today new opportunities for integrated management of data are given by multiresolution models, which can be employed for different scales of representation. The goal of multi-scale representations is to provide several representations, each adapted to a different information density and degree of detail. The Digital Representation Platform, along with the 3D City Model, is meant to be particularly useful to heritage managers who are developing recording, documentation, and information management strategies appropriate to territories, sites and monuments. The Digital Representation Platform and 3D City Model are central to the decision-making process for heritage conservation management and several urban-related problems. This research investigates the integration of the different levels of detail of a 3D City Model into one consistent 4D data model, with the creation of levels of detail using algorithms from a GIS perspective. In particular, the project is based on open-source smart systems, and conceptualizes a personalized and contextualized exploration of Cultural Heritage through an experiential analysis of the territory.
ERIC Educational Resources Information Center
Shim, Minsuk K.; Felner, Robert D.; Shim, Eunjae; Noonan, Nancy
This study examined the reliability and validity of self-reported survey data on instructional practices. It was based on a nationwide survey of more than 25,000 teachers in more than 1,000 schools across 5 years. The survey instrument was the Classroom Instructional Practice Scale (CIPS), which was based on the Classroom Information Sheet…
Data Mining in Multi-Dimensional Functional Data for Manufacturing Fault Diagnosis
Jeong, Myong K; Kong, Seong G; Omitaomu, Olufemi A
2008-09-01
Multi-dimensional functional data, such as time series data and images from manufacturing processes, have been used for fault detection and quality improvement in many engineering applications such as automobile manufacturing, semiconductor manufacturing, and nano-machining systems. Extracting interesting and useful features from multi-dimensional functional data for manufacturing fault diagnosis is more difficult than extracting the corresponding patterns from traditional numeric and categorical data because of the complexity of functional data types and the high correlation and nonstationary nature of the data. This chapter discusses accomplishments and research issues of multi-dimensional functional data mining in the following areas: dimensionality reduction for functional data, multi-scale fault diagnosis, misalignment prediction of rotating machinery, and agricultural product inspection based on hyperspectral image analysis.
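The chapter's specific reduction techniques are not detailed in this abstract; as a generic sketch of dimensionality reduction for functional data, the code below projects a set of simulated process curves onto their leading principal components via an SVD. The two latent basis signals and the noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
# 50 hypothetical process signals: random mixtures of two latent
# patterns plus small measurement noise.
signals = np.array([np.sin(2 * np.pi * t) * a + np.cos(4 * np.pi * t) * b
                    + 0.01 * rng.standard_normal(200)
                    for a, b in rng.standard_normal((50, 2))])

Xc = signals - signals.mean(axis=0)          # center the curves
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]                    # 2-D feature per curve
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
```

The low-dimensional `scores` can then feed a conventional classifier or clustering step for fault diagnosis, which is the usual motivation for this kind of reduction.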
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Murray, D.; McWhirter, J.
2004-12-01
Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive, allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and
NASA Astrophysics Data System (ADS)
Breunig, Martin; Kuper, Paul V.; Butwilowski, Edgar; Thomsen, Andreas; Jahn, Markus; Dittrich, André; Al-Doori, Mulhim; Golovko, Darya; Menninghaus, Mathias
2016-07-01
Multi-dimensional data analysis and visualization need efficient data handling to archive original data, to reproduce results on large data sets, and to retrieve space and time partitions just in time. This article tells the story of more than twenty years research resulting in the development of DB4GeO, a web service-based geo-database architecture for geo-objects to support the data handling of 3D/4D geo-applications. Starting from the roots and lessons learned, the concepts and implementation of DB4GeO are described in detail. Furthermore, experiences and extensions to DB4GeO are presented. Finally, conclusions and an outlook on further research also considering 3D/4D geo-applications for DB4GeO in the context of Dubai 2020 are given.
Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First-and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
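The paper's schemes are non-staggered, first- and second-order central schemes; as a minimal illustration of the central-differencing idea behind them, the sketch below applies a first-order Lax-Friedrichs-type central step to u_t + H(u_x) = 0. Choosing the linear Hamiltonian H(p) = p reduces the equation to advection with an exact traveling-wave solution, which makes the sketch checkable; the grid and CFL parameters are illustrative, not taken from the paper.

```python
import numpy as np

def lax_friedrichs_hj(u, H, dx, dt, steps):
    """First-order central (Lax-Friedrichs-type) step for u_t + H(u_x) = 0,
    periodic boundary conditions, central-difference gradient."""
    for _ in range(steps):
        up = np.roll(u, -1)   # u_{j+1}
        um = np.roll(u, 1)    # u_{j-1}
        u = 0.5 * (up + um) - dt * H((up - um) / (2.0 * dx))
    return u

N = 400
x = np.arange(N) / N
dx = 1.0 / N
dt = 0.4 * dx                 # CFL number 0.4
steps = 250
u0 = np.sin(2 * np.pi * x)

# With H(p) = p the exact solution is the initial profile translated by t.
u = lax_friedrichs_hj(u0, lambda p: p, dx, dt, steps)
exact = np.sin(2 * np.pi * (x - steps * dt))
err = np.max(np.abs(u - exact))
```

The same step applies unchanged to nonlinear convex H; the paper's contribution lies in sharper (second-order, dimension-scalable) variants of this basic central evolution.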
NASA Astrophysics Data System (ADS)
Kalenchuk, K. S.; Hutchinson, D.; Diederichs, M. S.
2013-12-01
Downie Slide, one of the world's largest landslides, is a massive, active, composite, extremely slow rockslide located on the west bank of the Revelstoke Reservoir in British Columbia. It is a 1.5 billion m3 rockslide measuring 2400 m along the river valley, 3300 m from toe to headscarp and up to 245 m thick. Significant contributions to the field of landslide geomechanics have been made by analyses of spatially and temporally discriminated slope deformations, and how these are controlled by complex geological and geotechnical factors. Downie Slide research demonstrates the importance of delineating massive landslides into morphological regions in order to characterize global slope behaviour and identify localized events, which may or may not influence the overall slope deformation patterns. Massive slope instabilities do not behave as monolithic masses; rather, different landslide zones can display specific landslide processes occurring at variable rates of deformation. The global deformation of Downie Slide is extremely slow; however, localized regions of the slope incur moderate to high rates of movement. Complex deformation processes and the composite failure mechanism arise from topography, non-uniform shear surfaces, and heterogeneous rockmass and shear-zone strength and stiffness characteristics. Further, analysis of temporal changes in landslide behaviour clearly shows that different regions of the slope respond differently to changing hydrogeological boundary conditions. State-of-the-art methodologies have been developed for numerical simulation of large landslides; these provide important tools for investigating dynamic landslide systems, accounting for complex three-dimensional geometries, heterogeneous shear-zone strength parameters, internal shear zones, the interaction of discrete landslide zones and piezometric fluctuations. Numerical models of Downie Slide have been calibrated to reproduce observed slope behaviour
Yang, Hyun-Jin; Ratnapriya, Rinki; Cogliati, Tiziana; Kim, Jung-Woong; Swaroop, Anand
2015-01-01
Genomics and genetics have invaded all aspects of biology and medicine, opening uncharted territory for scientific exploration. The definition of “gene” itself has become ambiguous, and the central dogma is continuously being revised and expanded. Computational biology and computational medicine are no longer intellectual domains of the chosen few. Next generation sequencing (NGS) technology, together with novel methods of pattern recognition and network analyses, has revolutionized the way we think about fundamental biological mechanisms and cellular pathways. In this review, we discuss NGS-based genome-wide approaches that can provide deeper insights into retinal development, aging and disease pathogenesis. We first focus on gene regulatory networks (GRNs) that govern the differentiation of retinal photoreceptors and modulate adaptive response during aging. Then, we discuss NGS technology in the context of retinal disease and develop a vision for therapies based on network biology. We should emphasize that basic strategies for network construction and analyses can be transported to any tissue or cell type. We believe that specific and uniform guidelines are required for generation of genome, transcriptome and epigenome data to facilitate comparative analysis and integration of multi-dimensional data sets, and for constructing networks underlying complex biological processes. As cellular homeostasis and organismal survival are dependent on gene-gene and gene-environment interactions, we believe that network-based biology will provide the foundation for deciphering disease mechanisms and discovering novel drug targets for retinal neurodegenerative diseases. PMID:25668385
A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system
NASA Astrophysics Data System (ADS)
Mix, Ken
The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor of streamflow to the Rio Grande Basin upstream of the confluence with the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852 and today generates more than 30% of the SLV revenue. The diversions of Rio Grande water for irrigation in the SLV have had a disproportionate impact on the downstream portion of the river. These diversions caused the flow to cease at Ciudad Juarez, Mexico, in the late 1880s, creating international conflict. Similarly, low flows in New Mexico and Texas led to interstate conflict. Understanding the changes in the URG-SLV that led to this event and the interactions among various drivers of change in the URG-SLV is a difficult task. One reason is that complex social-ecological systems are adaptive and contain feedbacks, emergent properties, cross-scale linkages, large-scale dynamics and non-linearities. Further, most analyses of SES to date have been qualitative, utilizing conceptual models to understand driver interactions. This study utilizes both qualitative and quantitative techniques to develop an innovative approach for analyzing driver interactions in the URG-SLV. Five drivers were identified for the URG-SLV social-ecological system: water (streamflow), water rights, climate, agriculture, and internal and external water policy. The drivers contained several longitudes (data aspect) relevant to the system, except water policy, for which only discrete events were present. Change point and statistical analyses were applied to the longitudes to identify quantifiable changes, allowing detection of cross-scale linkages between drivers and the presence of feedback cycles. Agriculture was identified as the driver signal. Change points for agricultural expansion defined four distinct periods: 1852--1923, 1924--1948, 1949--1978 and 1979
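The study's exact change-point procedure is not specified in this abstract; a common minimal version, sketched below on synthetic data, locates a single mean shift by choosing the split that minimizes the total within-segment sum of squared deviations. The series values and shift location are invented for illustration.

```python
import numpy as np

def single_changepoint(x):
    """Least-squares single change-point: return the split index that
    minimizes the combined within-segment sum of squares."""
    n = len(x)
    best_k, best_cost = None, np.inf
    for k in range(2, n - 1):
        cost = (((x[:k] - x[:k].mean()) ** 2).sum()
                + ((x[k:] - x[k:].mean()) ** 2).sum())
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(0)
# Hypothetical driver series: e.g. irrigated acreage with an expansion at t = 60.
series = np.concatenate([rng.normal(10, 1, 60), rng.normal(25, 1, 40)])
k = single_changepoint(series)
```

Applying such a detector to each driver's series, then comparing the detected dates across drivers, is one simple way to look for the cross-scale linkages the study describes.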
NASA Astrophysics Data System (ADS)
Ulrich, C.; Dafflon, B.; Wu, Y.; Kneafsey, T. J.; López, R. D.; Peterson, J.; Hubbard, S. S.
2015-12-01
Shallow permafrost distribution and characteristics are important for predicting ecosystem feedbacks to a changing climate over decadal to century timescales. These can drive active layer deepening and land surface deformation, which in turn can significantly affect hydrological and biogeochemical responses, including greenhouse gas dynamics. Investigating permafrost soil intrinsic properties generally involves time-consuming and expensive lab-based analysis of a few soil cores over a large area and extrapolating between points to characterize spatial variations in soil properties. Geophysical techniques provide lower-resolution data over a spatially large area and, when coupled with high-resolution point data, can potentially estimate with greater accuracy the spatial variation of investigated properties, thus limiting the difficulty of collecting many soil cores in remote areas. As part of the Next-Generation Ecosystem Experiment (NGEE-Arctic), we investigate multi-dimensional relationships between various permafrost intrinsic soil properties, and further linkages with geophysical parameters such as density from X-ray computed tomography (CT) and electrical conductivity from electrical resistance tomography (ERT), to evaluate how best to constrain estimation of properties such as soil organic carbon content, ice content and saturation across low- to high-centered polygon features in the arctic tundra. Results of this study enable the quantification of the multi-dimensional relationships between intrinsic properties, which can be further used to constrain estimation of such properties from geophysical data and/or where limited core-based information is available. This study also enables the identification of the key controls on soil electrical resistivity and density at the investigated permafrost site, including salinity, porosity, water content, ice content, soil organic matter, and lithological properties. Overall, inferred multi-dimensional relationships and related
ERIC Educational Resources Information Center
Papay, John P.; Willett, John B.; Murnane, Richard J.
2011-01-01
We ask whether failing one or more of the state-mandated high-school exit examinations affects whether students graduate from high school. Using a new multi-dimensional regression-discontinuity approach, we examine simultaneously scores on mathematics and English language arts tests. Barely passing both examinations, as opposed to failing them,…
Multi-dimensional laser radars
NASA Astrophysics Data System (ADS)
Molebny, Vasyl; Steinvall, Ove
2014-06-01
We introduce the term "multi-dimensional laser radar", where the dimensions include not only the coordinates of the object in space but also its velocity and orientation, and parameters of the medium: scattering, refraction, temperature, humidity, wind velocity, etc. The parameters can change in time and can be combined. For example, rendezvous and docking missions and autonomous planetary landers are expected to carry, along with laser ranging, laser altimetry and laser Doppler velocimetry, 3D ladar imaging as well. Operating in combination, they provide more accurate and safer navigation, docking or landing, and hazard-avoidance capabilities. Combination with Doppler-based measurements provides more accurate navigation for both space and cruise missile applications. Information for identifying snipers, based on combining polarization and fluctuation parameters with data from other sources, is critical. Combination of thermal imaging and vibrometry can unveil the functionality of detected targets. Hyperspectral probing with lasers reveals even more parameters. Different algorithms and architectures for ladar-based target acquisition, reconstruction of 3D images from point clouds, information fusion and display are discussed, with special attention to the technologies of flash illumination and single-photon focal-plane-array detection.
ICM: a web server for integrated clustering of multi-dimensional biomedical data.
He, Song; He, Haochen; Xu, Wenjian; Huang, Xin; Jiang, Shuai; Li, Fei; He, Fuchu; Bo, Xiaochen
2016-07-01
Large-scale efforts for parallel acquisition of multi-omics profiling continue to generate extensive amounts of multi-dimensional biomedical data. Thus, integrated clustering of multiple types of omics data is essential for developing individual-based treatments and precision medicine. However, while rapid progress has been made, methods for integrated clustering lack an intuitive web interface accessible to biomedical researchers without sufficient programming skills. Here, we present a web tool, named Integrated Clustering of Multi-dimensional biomedical data (ICM), that provides an interface from which to fuse, cluster and visualize multi-dimensional biomedical data and knowledge. With ICM, users can explore the heterogeneity of a disease or a biological process by identifying subgroups of patients. The results obtained can then be interactively modified through an intuitive user interface. Researchers can also exchange results from ICM with collaborators via a web link containing a Project ID number that will directly pull up the analysis results being shared. ICM also supports incremental clustering, which allows users to add new sample data to the data of a previous study and obtain an updated clustering result. Currently, the ICM web server is available with no login requirement and at no cost at http://biotech.bmi.ac.cn/icm/. PMID:27131784
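ICM's actual fusion algorithms are not spelled out in this abstract; the simplest form of integrated clustering, sketched below on synthetic data, standardizes each omics view, concatenates them ("early integration"), and clusters the fused matrix. The two sample subgroups, view dimensions, and noise level are all invented for illustration.

```python
import numpy as np

def zscore(X):
    """Standardize each feature (column) to zero mean, unit variance."""
    return (X - X.mean(0)) / X.std(0)

def two_means(X, iters=20, seed=0):
    """2-means with farthest-point initialization for stability."""
    rng = np.random.default_rng(seed)
    c0 = X[rng.integers(len(X))]
    c1 = X[np.argmax(((X - c0) ** 2).sum(1))]   # farthest sample from c0
    centers = np.stack([c0, c1]).astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), 1)
        centers = np.stack([X[labels == j].mean(0) for j in (0, 1)])
    return labels

rng = np.random.default_rng(5)
group = np.repeat([0.0, 5.0], 10)                    # two latent patient subgroups
omics1 = group[:, None] + rng.normal(0, 0.5, (20, 30))   # e.g. expression view
omics2 = group[:, None] + rng.normal(0, 0.5, (20, 25))   # e.g. methylation view

fused = np.hstack([zscore(omics1), zscore(omics2)])  # early-integration fusion
labels = two_means(fused)
```

More sophisticated integration (e.g. similarity-based fusion) differs in how the views are combined, but the downstream goal, subgroup discovery across omics types, is the same.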
Vargas, Sara E; Fava, Joseph L; Severy, Lawrence; Rosen, Rochelle K; Salomon, Liz; Shulman, Lawrence; Guthrie, Kate Morrow
2016-02-01
Currently available risk perception scales tend to focus on risk behaviors and overall risk (vs partner-specific risk). While these types of assessments may be useful in clinical contexts, they may be inadequate for understanding the relationship between sexual risk and motivations to engage in safer sex or one's willingness to use prevention products during a specific sexual encounter. We present the psychometric evaluation and validation of a scale that includes both general and specific dimensions of sexual risk perception. A one-time, audio computer-assisted self-interview was administered to 531 women aged 18-55 years. Items assessing sexual risk perceptions, both in general and in regard to a specific partner, were examined in the context of a larger study of willingness to use HIV/STD prevention products and preferences for specific product characteristics. Exploratory and confirmatory factor analyses yielded two subscales: general perceived risk and partner-specific perceived risk. Validity analyses demonstrated that the two subscales were related to many sociodemographic and relationship factors. We suggest that this risk perception scale may be useful in research settings where the outcomes of interest are related to motivations to use HIV and STD prevention products and/or product acceptability. Further, we provide specific guidance on how this risk perception scale might be utilized to understand such motivations with one or more specific partners. PMID:26621151
Spatial Indexing and Visualization of Large Multi-Dimensional Databases
NASA Astrophysics Data System (ADS)
Dobos, L.; Csabai, I.; Trencséni, M.; Herczegh, G.; Józsa, P.; Purger, N.
2007-10-01
Scientific endeavors such as large astronomical surveys generate databases on the terabyte scale. These databases, which are usually multi-dimensional, must be visualized and mined in order to find interesting objects or to extract meaningful and qualitatively new relationships. Many statistical algorithms required for these tasks run reasonably fast when operating on small sets of in-memory data, but take noticeable performance hits when operating on large databases that do not fit into memory. We utilize new software technologies to develop and evaluate fast multi-dimensional, spatial indexing schemes that inherently follow the underlying, highly non-uniform distribution of the data: one of them is hierarchical binary space partitioning; the other is sampled flat Voronoi partitioning of the data. Our working database is the 5-dimensional magnitude space of the Sloan Digital Sky Survey with more than 250 million data points. We show that these techniques can dramatically speed up data mining operations such as finding similar objects by example, classifying objects or comparing extensive simulation sets with observations. We are also developing tools to interact with the spatial database and visualize the data in real time at multiple resolutions and zoom levels in an adaptive manner.
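A k-d tree is the textbook special case of the hierarchical binary space partitioning the abstract mentions (axis-aligned median splits that adapt to the data distribution). The sketch below builds one over random points standing in for a 5-D magnitude space and answers a "find the most similar object" query, pruning subtrees the query hypersphere cannot reach; the data and sizes are illustrative.

```python
import numpy as np

def build_kdtree(pts, idx=None, depth=0):
    """Median-split k-d tree: one axis per level, balanced by construction."""
    if idx is None:
        idx = np.arange(len(pts))
    if len(idx) == 0:
        return None
    axis = depth % pts.shape[1]
    order = idx[np.argsort(pts[idx, axis])]
    mid = len(order) // 2
    return {"point": order[mid], "axis": axis,
            "left": build_kdtree(pts, order[:mid], depth + 1),
            "right": build_kdtree(pts, order[mid + 1:], depth + 1)}

def nearest(node, pts, q, best=None):
    """Nearest-neighbour search; prune branches beyond the current best radius."""
    if node is None:
        return best
    p = pts[node["point"]]
    d = np.sum((p - q) ** 2)
    if best is None or d < best[1]:
        best = (node["point"], d)
    diff = q[node["axis"]] - p[node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, pts, q, best)
    if diff ** 2 < best[1]:          # query hypersphere crosses the split plane
        best = nearest(far, pts, q, best)
    return best

rng = np.random.default_rng(7)
pts = rng.random((2000, 5))          # stand-in for 5-D magnitude space
tree = build_kdtree(pts)
q = rng.random(5)
i, _ = nearest(tree, pts, q)
```

The payoff is that each query touches only O(log n) nodes on average instead of scanning all points, which is what makes similarity search feasible at survey scale.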
Extended Darknet: Multi-Dimensional Internet Threat Monitoring System
NASA Astrophysics Data System (ADS)
Shimoda, Akihiro; Mori, Tatsuya; Goto, Shigeki
Internet threats caused by botnets/worms are among the most important security issues to be addressed. Darknet, also called a dark IP address space, is one of the best solutions for monitoring anomalous packets sent by malicious software. However, since darknet is deployed only on an inactive IP address space, it is inefficient for monitoring a working network that has a considerable number of active IP addresses. The present paper addresses this problem. We propose a scalable, lightweight malicious-packet monitoring system based on a multi-dimensional IP/port analysis. Our system significantly extends the monitoring scope of darknet. In order to extend the capacity of darknet, our approach leverages the active IP address space without affecting legitimate traffic. Multi-dimensional monitoring enables the monitoring of TCP ports with firewalls enabled on each of the IP addresses. We focus on delays of TCP syn/ack responses in the traffic. We locate syn/ack-delayed packets and forward them to sensors or honeypots for further analysis. We also propose a policy-based flow classification and forwarding mechanism and develop a prototype of a monitoring system that implements our proposed architecture. We deploy our system on a campus network and perform several experiments to evaluate it. We verify that our system can cover 89% of the IP addresses, while darknet-based monitoring covers only 46%. On our campus network, our system monitors twice as many IP addresses as darknet.
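The paper's forwarding policy is not reproduced here; the toy sketch below only illustrates the shape of the decision the abstract describes, treating flows whose syn/ack response is missing or delayed beyond a threshold as probe candidates to divert to a sensor. The threshold value and flow records are hypothetical.

```python
# Hypothetical cutoff: flows with no syn/ack, or one slower than this,
# are diverted for deeper inspection.
THRESHOLD_S = 3.0

def classify_flow(synack_delay_s):
    """Return the forwarding decision for one flow.

    synack_delay_s: seconds between SYN and SYN/ACK, or None if no
    response was observed (e.g. a scan of a firewalled port).
    """
    if synack_delay_s is None or synack_delay_s > THRESHOLD_S:
        return "forward_to_sensor"
    return "pass"

flows = {"web": 0.02, "scan_1": None, "slow_host": 4.2}
decisions = {name: classify_flow(d) for name, d in flows.items()}
```

In the real system this decision sits inside a policy-based classifier so that legitimate traffic on active addresses is never disturbed.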
The Extraction of One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, Robert A.; Gaffney, Richard L., Jr.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
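A small numerical illustration of why the choice of one-dimensionalization matters: on a hypothetical sheared exit-plane profile (all values invented), the area-weighted and mass-flux-weighted averages of the same temperature field disagree, because the flux weighting emphasizes the fast-moving fluid.

```python
import numpy as np

# Hypothetical profile across a duct exit plane, uniform grid in y:
y = np.linspace(0.0, 1.0, 201)
rho = np.full_like(y, 1.2)        # density, kg/m^3 (uniform for simplicity)
u = 100.0 * y                     # sheared axial velocity, m/s
T = 300.0 + 50.0 * y              # temperature rising toward the fast side, K

area_avg_T = T.mean()             # area-weighted average on the uniform grid
w = rho * u                       # local mass flux
flux_avg_T = (w * T).sum() / w.sum()   # mass-flux-weighted average
```

Here the flux-weighted mean exceeds the area mean by several kelvin; neither is "wrong", but each conserves a different integral quantity, which is exactly the ambiguity the paper examines.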
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
Statistical Downscaling in Multi-dimensional Wave Climate Forecast
NASA Astrophysics Data System (ADS)
Camus, P.; Méndez, F. J.; Medina, R.; Losada, I. J.; Cofiño, A. S.; Gutiérrez, J. M.
2009-04-01
Wave climate at a particular site is defined by the statistical distribution of sea state parameters, such as significant wave height, mean wave period, mean wave direction, wind velocity, wind direction and storm surge. Nowadays, long-term time series of these parameters are available from reanalysis databases obtained by numerical models. The Self-Organizing Map (SOM) technique is applied to characterize the multi-dimensional wave climate, obtaining the relevant "wave types" spanning the historical variability. This technique summarizes the multi-dimensional wave climate as a set of clusters projected onto a low-dimensional lattice with a spatial organization, providing Probability Density Functions (PDFs) on the lattice. On the other hand, wind and storm surge depend on the instantaneous local large-scale sea level pressure (SLP) fields, while waves depend on the recent history of these fields (say, 1 to 5 days). Thus, these variables are associated with large-scale atmospheric circulation patterns. In this work, a nearest-neighbors analog method is used to predict monthly multi-dimensional wave climate. This method establishes relationships between the large-scale atmospheric circulation patterns from numerical models (SLP fields as predictors) and local wave databases of observations (monthly wave climate SOM PDFs as predictand) to set up statistical models. A wave reanalysis database, developed by Puertos del Estado (Ministerio de Fomento), is considered as the historical time series of local variables. The simultaneous SLP fields calculated by the NCEP atmospheric reanalysis are used as predictors. Several applications with different sea-level-pressure grid sizes and different temporal resolutions are compared to obtain the optimal statistical model that best represents the monthly wave climate at a particular site. In this work we examine the potential skill of this downscaling approach considering perfect-model conditions, but we will also analyze the
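The nearest-neighbors analog method reduces to a simple rule: find the historical large-scale states most similar to the current one and average their local responses. The sketch below implements that rule on synthetic data; the 4-D predictor (standing in for a reduced SLP pattern) and the linear wave response are invented for illustration, not taken from the reanalysis databases named above.

```python
import numpy as np

def analog_predict(X_hist, y_hist, x_new, k=5):
    """k-nearest-neighbour analog method: average the predictand over the
    k historical predictor states closest to the new state."""
    d = np.sum((X_hist - x_new) ** 2, axis=1)
    nearest = np.argsort(d)[:k]
    return y_hist[nearest].mean()

rng = np.random.default_rng(3)
X = rng.random((2000, 4))            # stand-in for reduced SLP circulation patterns
y = X[:, 0] + 0.5 * X[:, 1]          # hypothetical local wave-height response
x_new = np.array([0.4, 0.6, 0.5, 0.5])
hs = analog_predict(X, y, x_new)     # true response at x_new is 0.4 + 0.3 = 0.7
```

In the actual downscaling the predictand would be a monthly SOM PDF rather than a scalar, but the neighbor-search-and-average structure is the same.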
T. Downar
2009-03-31
The overall objective of this work has been to eliminate the approximations used in current resonance treatments by developing continuous-energy multi-dimensional transport calculations for problem-dependent self-shielding calculations. The work builds on the existing resonance treatment capabilities in the ORNL SCALE code system.
Vlasov multi-dimensional model dispersion relation
Lushnikov, Pavel M.; Rose, Harvey A.; Silantyev, Denis A.; Vladimirova, Natalia
2014-07-15
A hybrid model of the Vlasov equation in multiple spatial dimensions D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi-dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.
Anonymous voting for multi-dimensional CV quantum system
NASA Astrophysics Data System (ADS)
Rong-Hua, Shi; Yi, Xiao; Jin-Jing, Shi; Ying, Guo; Moon-Ho, Lee
2016-06-01
We investigate the design of anonymous voting protocols, a CV-based binary-valued ballot and a CV-based multi-valued ballot with continuous variables (CV), in a multi-dimensional quantum cryptosystem to ensure the security of the voting procedure and data privacy. Quantum entangled states are employed in the continuous-variable quantum system to carry the voting information and assist information transmission, which takes advantage of GHZ-like states in terms of improving the utilization of quantum states by decreasing the number of required quantum states. It provides a potential approach to achieving efficient quantum anonymous voting with high transmission security, especially in large-scale votes. Project supported by the National Natural Science Foundation of China (Grant Nos. 61272495, 61379153, and 61401519), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20130162110012), and the MEST-NRF of Korea (Grant No. 2012-002521).
Some theorems and properties of multi-dimensional fractional Laplace transforms
NASA Astrophysics Data System (ADS)
Ahmood, Wasan Ajeel; Kiliçman, Adem
2016-06-01
The aim of this work is to study theorems and properties of the one-dimensional fractional Laplace transform, to generalize some of its properties to the multi-dimensional fractional Laplace transform, and to give a definition of the multi-dimensional fractional Laplace transform. This study dedicates the one-dimensional fractional Laplace transform to functions of a single independent variable, together with several important theorems and properties, and develops some of those properties for the multi-dimensional fractional Laplace transform. We also obtain a fractional Laplace inversion theorem after a short survey of fractional analysis based on the modified Riemann-Liouville derivative.
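For orientation, one form of the fractional Laplace transform used in the literature that builds on the modified Riemann-Liouville calculus is the following; conventions differ between authors, so this should be read as one possible definition rather than the paper's exact one:

```latex
\[
F_\alpha(s) \;=\; L_\alpha\{f(t)\}(s)
  \;=\; \int_0^\infty E_\alpha\!\left(-s^\alpha t^\alpha\right) f(t)\,(\mathrm{d}t)^\alpha,
  \qquad 0 < \alpha \le 1,
\]
where $E_\alpha$ is the Mittag-Leffler function,
\[
E_\alpha(z) \;=\; \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(1+\alpha k)},
\]
which reduces to the classical Laplace transform as $\alpha \to 1$, since $E_1(z) = e^z$.
```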
NASA Astrophysics Data System (ADS)
Zhong, Zhaopeng
In the past twenty years considerable progress has been made in developing new methods for solving the multi-dimensional transport problem. However, the effort devoted to the resonance self-shielding calculation has lagged, and much less progress has been made in enhancing resonance-shielding techniques for generating problem-dependent multi-group cross sections (XS) for multi-dimensional transport calculations. In several applications, the error introduced by self-shielding methods exceeds that due to uncertainties in the basic nuclear data, and it can often be the limiting factor on the accuracy of the final results. This work improves the accuracy of the resonance self-shielding calculation by developing continuous-energy multi-dimensional transport calculations for problem-dependent self-shielding. A new method has been developed that can calculate the continuous-energy neutron fluxes for the whole two-dimensional domain; these fluxes can be utilized as the weighting function to process the self-shielded multi-group cross sections for reactor analysis and criticality calculations, and in this process the two-dimensional heterogeneous effect in the resonance self-shielding calculation is fully included. A new code, GEMINEWTRN (Group and Energy-Pointwise Methodology Implemented in NEWT for Resonance Neutronics), has been developed in the development version of SCALE [1]; it combines the energy-pointwise (PW) capability of CENTRM [2] with the two-dimensional discrete ordinates transport capability of the lattice physics code NEWT [14]. Considering the large number of energy points in the resonance region (typically more than 30,000), the computational burden and memory requirement of GEMINEWTRN are tremendously large, so some efforts have been made to improve the computational efficiency: parallel computation has been implemented in GEMINEWTRN, which substantially reduces the computation time and memory requirement; some energy points reducing
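The end use of such pointwise fluxes, a flux-weighted collapse to multi-group cross sections, can be sketched in a few lines. This is a generic rectangle-rule illustration of the weighting step, not GEMINEWTRN's actual procedure:

```python
import numpy as np

def collapse_multigroup(E, sigma, flux, group_edges):
    """Flux-weighted multigroup collapse on a uniform energy grid:
        sigma_g = sum_g sigma(E)*phi(E)*dE / sum_g phi(E)*dE
    (rectangle rule; a production code integrates on the pointwise mesh)."""
    h = E[1] - E[0]                       # uniform grid spacing
    sig_g = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        m = (E >= lo) & (E < hi)          # points belonging to this group
        sig_g.append(np.sum(sigma[m] * flux[m] * h) / np.sum(flux[m] * h))
    return np.array(sig_g)

# sanity check: a constant cross section must collapse to itself for any flux
E = np.linspace(1.0, 100.0, 1000)
sigma = np.full_like(E, 2.0)
flux = 1.0 / E                            # 1/E-like slowing-down spectrum
sig_g = collapse_multigroup(E, sigma, flux, [1.0, 50.0, 100.0])
print(sig_g)                              # both groups collapse to 2.0
```

The point of using the problem-dependent pointwise flux as the weight is exactly what the abstract describes: a self-shielded flux depression at a resonance reduces that resonance's contribution to the group-averaged cross section.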
ERIC Educational Resources Information Center
Strijbos, Jan-Willem; Stahl, Gerry
2007-01-01
In CSCL research, collaboration through chat has primarily been studied in dyadic settings. This article discusses three issues that emerged during the development of a multi-dimensional coding procedure for small-group chat communication: (a) the unit of analysis and unit fragmentation, (b) the reconstruction of the response structure and (c)…
Multi-dimensionally encoded magnetic resonance imaging
Lin, Fa-Hsuan
2013-01-01
Magnetic resonance imaging typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here we propose the multi-dimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel RF coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled. PMID:22926830
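The idea of mapping a q-dimensional object into a higher-dimensional encoding space can be illustrated with a toy least-squares reconstruction, where measurement rows come from both a linear and a nonlinear (quadratic) phase pattern. This is a drastically simplified 1D numerical analog; the field shapes, sample counts, and pseudoinverse reconstruction below are illustrative choices, not the paper's formulation:

```python
import numpy as np

# Toy generalized spatial encoding: an unknown 1D "image" rho(x) is measured
# through phase patterns produced by a linear SEM (phase ~ x) and a nonlinear
# SEM (phase ~ x^2). Stacking both families gives an overdetermined encoding
# operator that least squares inverts - a 1D caricature of MDE MRI's p > q idea.
n = 16
x = np.linspace(-0.9, 0.9, n)
rho = np.exp(-((x - 0.3) ** 2) / 0.05)           # unknown object

ks = np.arange(-10, 11)                          # encoding "moments"
A_lin = np.exp(-1j * np.pi * np.outer(ks, x))        # linear SEM phase rows
A_non = np.exp(-1j * np.pi * np.outer(ks, x ** 2))   # nonlinear SEM phase rows
A = np.vstack([A_lin, A_non])                    # joint encoding operator

s = A @ rho                                      # simulated measurements
rho_hat = np.linalg.lstsq(A, s, rcond=None)[0].real
err = np.max(np.abs(rho_hat - rho))
print(err)                                       # near machine precision
```

Because the stacked operator has full column rank, the joint linear-plus-nonlinear encoding recovers the object exactly (up to round-off), mirroring the claim that a higher-dimensional encoding space can reconstruct more efficiently for a fixed sampling budget.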
Woods, Carl T; Raynor, Annette J; Bruce, Lyndell; McDonald, Zane; Robertson, Sam
2016-07-01
This study investigated whether a multi-dimensional assessment could assist with talent identification in junior Australian football (AF). Participants were recruited from an elite under 18 (U18) AF competition and classified into two groups: talent identified (State U18 Academy representatives; n = 42; 17.6 ± 0.4 y) and non-talent identified (non-State U18 Academy representatives; n = 42; 17.4 ± 0.5 y). Both groups completed a multi-dimensional assessment, which consisted of physical (standing height, dynamic vertical jump height and 20 m multistage fitness test), technical (kicking and handballing tests) and perceptual-cognitive (video decision-making task) performance outcome tests. A multivariate analysis of variance tested the main effect of status on the test criteria, whilst a receiver operating characteristic curve assessed the discrimination provided by the full assessment. The talent identified players outperformed their non-talent identified peers in each test (P < 0.05). The receiver operating characteristic curve reflected near perfect discrimination (AUC = 95.4%), correctly classifying 95% and 86% of the talent identified and non-talent identified participants, respectively. When compared to single assessment approaches, this multi-dimensional assessment reflects a more comprehensive means of talent identification in AF. This study further highlights the importance of assessing multi-dimensional performance qualities when identifying talented athletes in team sports. PMID:26862858
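The discrimination statistic reported here (area under the ROC curve) has a simple rank interpretation that can be computed directly via the Mann-Whitney U statistic. The scores below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a randomly chosen
    positive (talent-identified) score exceeds a randomly chosen negative
    (non-identified) score, with ties counting one half."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size

# hypothetical composite scores (e.g., summed z-scores across the physical,
# technical and perceptual-cognitive tests), 42 players per group as in the study
rng = np.random.default_rng(1)
talent = rng.normal(1.0, 1.0, size=42)      # talent-identified group
other = rng.normal(0.0, 1.0, size=42)       # non-identified group
print(auc(talent, other))                   # > 0.5: positives tend to outscore
```

An AUC of 0.954, as reported, means a randomly drawn talent-identified player outscores a randomly drawn non-identified player about 95% of the time on the combined assessment.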
Towards a genuinely multi-dimensional upwind scheme
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Vanleer, Bram; Roe, Philip L.
1990-01-01
Methods of incorporating multi-dimensional ideas into algorithms for the solution of Euler equations are presented. Three schemes are developed and tested: a scheme based on a downwind distribution, a scheme based on a rotated Riemann solver and a scheme based on a generalized Riemann solver. The schemes show an improvement over first-order, grid-aligned upwind schemes, but the higher-order performance is less impressive. An outlook for the future of multi-dimensional upwind schemes is given.
Multi-dimensional modelling of gas turbine combustion using a flame sheet model in KIVA II
NASA Technical Reports Server (NTRS)
Cheng, W. K.; Lai, M.-C.; Chue, T.-H.
1991-01-01
A flame sheet model for heat release is incorporated into a multi-dimensional fluid mechanical simulation for gas turbine application. The model assumes that the chemical reaction takes place in thin sheets compared to the length scale of mixing, which is valid for the primary combustion zone in a gas turbine combustor. In this paper, the details of the model are described and computational results are discussed.
Multi-Dimensional Calibration of Impact Dynamic Models
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.
2011-01-01
NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
High-value energy storage for the grid: a multi-dimensional look
Culver, Walter J.
2010-12-15
The conceptual attractiveness of energy storage in the electrical power grid has grown in recent years with Smart Grid initiatives. But cost is a problem, interwoven with the complexity of quantifying the benefits of energy storage. This analysis builds toward a multi-dimensional picture of storage that is offered as a step toward identifying and removing the gaps and "friction" that permeate the delivery chain from research laboratory to grid deployment. (author)
Fast Packet Classification Using Multi-Dimensional Encoding
NASA Astrophysics Data System (ADS)
Huang, Chi Jia; Chen, Chien
Internet routers need to classify incoming packets quickly into flows in order to support features such as Internet security, virtual private networks and Quality of Service (QoS). Packet classification uses information contained in the packet header and a predefined rule table in the routers. Packet classification on multiple fields is generally a difficult problem; hence, researchers have proposed various algorithms. This study proposes a multi-dimensional encoding method in which parameters such as the source IP address, destination IP address, source port, destination port and protocol type are placed in a multi-dimensional space. Similar to the previously best known algorithm, i.e., bitmap intersection, multi-dimensional encoding is based on the multi-dimensional range lookup approach, in which rules are divided into several multi-dimensional collision-free rule sets. These sets are then used to form the new coding vector to replace the bit vector of the bitmap intersection algorithm. The average memory storage of this encoding is Θ(L · N · log N) for each dimension, where L denotes the number of collision-free rule sets and N represents the number of rules. Multi-dimensional encoding requires much less memory in practice than the bitmap intersection algorithm, and the computation it needs is as simple as that of the bitmap intersection algorithm. The low memory requirement of the proposed scheme not only decreases the cost of the packet classification engine but also increases classification performance, since memory represents the performance bottleneck in packet classification engine implementations using a network processor.
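The bitmap-intersection baseline that the encoding method builds on can be sketched as follows. This is a toy two-field version with hypothetical rules; a real implementation precomputes one bit vector per elementary range in each dimension rather than scanning the rule list at lookup time:

```python
# Each dimension keeps, for a header value, a bit vector of the rules that
# match there; a lookup ANDs one bit vector per dimension, and the lowest
# set bit is the highest-priority matching rule.
rules = [                       # (src_range, dst_range); priority = list index
    ((0, 63), (0, 127)),
    ((32, 95), (64, 255)),
    ((0, 255), (0, 255)),       # default rule matches everything
]

def bitvector(value, dim):
    """Bit i is set iff rule i matches `value` in dimension `dim`."""
    bv = 0
    for i, rule in enumerate(rules):
        lo, hi = rule[dim]
        if lo <= value <= hi:
            bv |= 1 << i
    return bv

def classify(src, dst):
    match = bitvector(src, 0) & bitvector(dst, 1)   # intersect per-dimension sets
    if match == 0:
        return None
    return (match & -match).bit_length() - 1        # lowest set bit = best rule

print(classify(40, 70))   # rules 0, 1, 2 all match -> rule 0 wins
print(classify(200, 10))  # only the default rule -> rule 2
```

The proposed scheme replaces these N-bit vectors with compact codes over L collision-free rule sets, which is where the Θ(L · N · log N) memory figure comes from.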
Application of Multi-Dimensional Sensing Technologies in Production Systems
NASA Astrophysics Data System (ADS)
Shibuya, Hisae; Kimachi, Akira; Suwa, Masaki; Niwakawa, Makoto; Okuda, Haruhisa; Hashimoto, Manabu
Multi-dimensional sensing has been used for various purposes in the field of production systems. The members of the IEEJ MDS committee investigated the trends in sensing technologies and their applications. In this paper, the result of investigations of auto-guided vehicles, cell manufacturing robots, safety, maintenance, worker monitoring, and sensor networks are discussed.
The Multi-Dimensional Demands of Reading in the Disciplines
ERIC Educational Resources Information Center
Lee, Carol D.
2014-01-01
This commentary addresses the complexities of reading comprehension with an explicit focus on reading in the disciplines. The author proposes reading as entailing multi-dimensional demands of the reader and posing complex challenges for teachers. These challenges are intensified by restrictive conceptions of relevant prior knowledge and experience…
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
Image matrix processor for fast multi-dimensional computations
Roberson, George P.; Skeate, Michael F.
1996-01-01
An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
Towards Semantic Web Services on Large, Multi-Dimensional Coverages
NASA Astrophysics Data System (ADS)
Baumann, P.
2009-04-01
Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-D time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known as advantageous from databases: declarativeness (describe results rather than the algorithms), safety in evaluation (no request can keep a server busy infinitely), and optimizability (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This makes it possible to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters, while the semantics is encoded in the (only human readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning à la the Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. It is abstract in that it
Multi-Dimensional Perception of Parental Involvement
ERIC Educational Resources Information Center
Fisher, Yael
2016-01-01
The main purpose of this study was to define and conceptualize the term parental involvement. A questionnaire was administrated to parents (140), teachers (145), students (120) and high ranking civil servants in the Ministry of Education (30). Responses were analyzed through Smallest Space Analysis (SSA). The SSA solution among all groups rendered…
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F., Jr.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
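The sensitivity and elasticity quantities being compared follow a standard recipe: the sensitivity of λ to matrix entry a_ij is v_i w_j / ⟨v, w⟩ (w and v the right and left dominant eigenvectors), and the elasticity rescales that to a proportional change, e_ij = (a_ij/λ) s_ij. A sketch with a hypothetical stage-structured matrix (not the killer whale data):

```python
import numpy as np

A = np.array([[0.0, 1.5, 2.0],    # top row: stage fecundities (hypothetical)
              [0.5, 0.0, 0.0],    # below: survival / transition rates
              [0.0, 0.8, 0.9]])

vals, vecs = np.linalg.eig(A)
lam = vals.real.max()                          # dominant (Perron) eigenvalue
w = vecs[:, vals.real.argmax()].real           # right eigenvector: stable structure
valsT, vecsT = np.linalg.eig(A.T)
v = vecsT[:, valsT.real.argmax()].real         # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)                   # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                              # elasticities e_ij
print(lam, E.sum())                            # elasticities always sum to 1
```

The fact that elasticities sum to one (since Σ a_ij v_i w_j = λ ⟨v, w⟩) is exactly why they are attractive for cross-parameter comparison, and also why their interpretation across demographic rates on different natural scales needs the care this paper argues for.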
Multi-dimensional Indoor Location Information Model
NASA Astrophysics Data System (ADS)
Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.
2013-11-01
Aiming at the increasing requirements of seamless indoor and outdoor navigation and location services, a Chinese standard, the Multidimensional Indoor Location Information Model, is being developed, which defines an ontology of indoor location. The model is complementary to 3D concepts like CityGML and IndoorGML. The goal of the model is to provide a GML-based exchange format for the location information needed for indoor routing and navigation. An elaborate user requirements analysis and an investigation of state-of-the-art technology for expressing indoor location, at home and abroad, were completed to identify the manner in which humans specify location. The ultimate goal is to provide an ontology that allows absolute and relative specification of location, such as "in room 321" and "on the second floor", as well as "two meters from the second window" and "12 steps from the door".
Advanced numerics for multi-dimensional fluid flow calculations
NASA Technical Reports Server (NTRS)
Vanka, S. P.
1984-01-01
In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.
Numerical Solution of Multi-Dimensional Hyperbolic Conservation Laws on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Kwak, Dochan (Technical Monitor)
1995-01-01
The lecture material will discuss the application of one-dimensional approximate Riemann solutions and high order accurate data reconstruction as building blocks for solving multi-dimensional hyperbolic equations. This building block procedure is well-documented in the nationally available literature. The relevant stability and convergence theory using positive operator analysis will also be presented. All participants in the minisymposium will be asked to solve one or more generic test problems so that a critical comparison of accuracy can be made among differing approaches.
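The one-dimensional building block described here can be made concrete with a minimal finite-volume update using an approximate (Rusanov / local Lax-Friedrichs) Riemann flux at each interface; the test problem and parameters below are illustrative, and multi-dimensional unstructured-mesh schemes reuse this interface solve along face normals:

```python
import numpy as np

def step(u, a, dx, dt):
    """One forward-Euler finite-volume step for periodic linear advection
    u_t + a u_x = 0 with piecewise-constant data and a Rusanov interface flux."""
    ul, ur = np.roll(u, 1), u                               # states at each face
    flux = 0.5 * a * (ul + ur) - 0.5 * abs(a) * (ur - ul)   # Rusanov flux
    return u - dt / dx * (np.roll(flux, -1) - flux)         # conservative update

n, a = 100, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)           # initial Gaussian pulse
u = u0.copy()
dx, dt = 1.0 / n, 0.5 / n                      # CFL number 0.5
for _ in range(2 * n):                         # advect for one full period
    u = step(u, a, dx, dt)
print(abs(u.sum() - u0.sum()))                 # flux form conserves mass exactly
```

For a > 0 the Rusanov flux reduces to first-order upwinding, so the pulse returns to its starting position diffused but bounded; higher-order data reconstruction, the second building block mentioned above, is what sharpens this result.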
Portable laser synthesizer for high-speed multi-dimensional spectroscopy
Demos, Stavros G.; Shverdin, Miroslav Y.; Shirk, Michael D.
2012-05-29
Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
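The core sampling step of such a Monte Carlo analysis can be illustrated with a gray-medium toy: launch photon bundles into a uniform absorbing slab and tally the transmitted fraction, which should approach exp(-τ). The nongray narrow-band treatment in the paper replaces the single absorption coefficient with spectrally correlated band parameters, but the sampling skeleton is the same:

```python
import numpy as np

rng = np.random.default_rng(42)
tau = 1.5                                   # slab optical depth (normal incidence)
n = 200_000                                 # number of photon bundles
path = -np.log(rng.random(n))               # sampled optical path to absorption
transmitted = np.mean(path > tau)           # fraction of bundles escaping the slab
print(transmitted, np.exp(-tau))            # Monte Carlo estimate vs. Beer's law
```

The statistical error shrinks like 1/sqrt(n), which is why multi-dimensional nongray problems, where each bundle also carries a sampled wavenumber and direction, become expensive and motivate the careful treatment of spectral correlation discussed above.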
NASA Astrophysics Data System (ADS)
Liu, Chein-Shan; Kuo, Chung-Lun
2016-09-01
In this paper we first express the wave equation in terms of the Minkowskian polar coordinates and generate a set of complete hyperbolic-type Trefftz bases, r^k cosh(kθ) and r^k sinh(kθ), which are further transformed to wave polynomials as the trial solution bases for the one-dimensional wave equation. In order to stably solve wave propagation problems over long times we develop a multiple-scale Trefftz method (MSTM), of which the scales are determined a priori by the collocation points. Then we derive a very simple method of multi-dimensional wave polynomials, equipped with different spatial directions given by the normalized wavenumber vectors, as the polynomial Trefftz bases for solving the multi-dimensional wave equations; this is named the multiple-direction Trefftz method (MDTM). Several numerical examples of two- and three-dimensional wave equations demonstrate that the present method is efficient and stable.
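As a sanity check on why r^k cosh(kθ) and r^k sinh(kθ) qualify as Trefftz bases, one can verify directly that they satisfy the wave equation in Minkowskian polar coordinates. The change of variables below is the standard one inside the light cone and is reconstructed here, not quoted from the paper:

```latex
With $t = r\cosh\theta$, $x = r\sinh\theta$ (so $r^2 = t^2 - x^2$), the wave
operator transforms as
\[
\partial_t^2 u - \partial_x^2 u
  \;=\; u_{rr} + \frac{1}{r}\,u_r - \frac{1}{r^2}\,u_{\theta\theta}.
\]
Substituting $u = r^k\cosh(k\theta)$ gives
\[
u_{rr} + \frac{1}{r}u_r - \frac{1}{r^2}u_{\theta\theta}
  \;=\; \bigl[\,k(k-1) + k - k^2\,\bigr]\, r^{k-2}\cosh(k\theta) \;=\; 0,
\]
and the same cancellation holds for $r^k\sinh(k\theta)$, exactly mirroring the
harmonic polynomials $r^k\cos(k\theta)$, $r^k\sin(k\theta)$ of the Laplace
equation in Euclidean polar coordinates.
```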
Multi-Dimensional Damage Detection for Surfaces and Structures
NASA Technical Reports Server (NTRS)
Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah
2013-01-01
Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it will be very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that will provide real-time data regarding the health of the inflatable shell or rigidized structures, and information related to the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, two-dimensional thin-film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and the composite can include as many detection layers as are necessary for the structure construction or to afford the detection detail level required. The damage is detected using a detector or
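The localization logic such a layered system might use can be sketched abstractly: each detection layer reports which of its row and column traces have gone open-circuit, the intersection of broken rows and columns gives the in-plane footprint, and the deepest layer reporting breaks bounds the penetration depth. All names and structures below are hypothetical, not the disclosed hardware:

```python
def locate_damage(layers):
    """layers: list of (broken_rows, broken_cols) sets per detection layer,
    ordered outermost first. Returns None if intact, else the deepest layer
    hit and the in-plane footprint (rows x cols) reported at each layer."""
    hits = []
    for depth, (rows, cols) in enumerate(layers):
        if rows and cols:                       # breaks in both trace directions
            hits.append((depth, sorted(rows), sorted(cols)))
    if not hits:
        return None
    return {"max_depth": hits[-1][0],           # deepest layer with breaks
            "footprints": hits}                 # per-layer damage footprint

# an impact pierces layers 0 and 1, narrowing with depth; layer 2 is intact
report = locate_damage([({3, 4}, {7, 8}), ({4}, {7}), (set(), set())])
print(report["max_depth"])      # damage stopped before the third layer
```

A shrinking footprint across successive layers, as in the example, is also what would let the system estimate the impact direction mentioned in the disclosure.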
Multi-dimensional structure of accreting young stars
NASA Astrophysics Data System (ADS)
Geroux, C.; Baraffe, I.; Viallet, M.; Goffrey, T.; Pratt, J.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.
2016-04-01
This work is the first attempt to describe the multi-dimensional structure of accreting young stars based on fully compressible time implicit multi-dimensional hydrodynamics simulations. One major motivation is to analyse the validity of accretion treatment used in previous 1D stellar evolution studies. We analyse the effect of accretion on the structure of a realistic stellar model of the young Sun. Our work is inspired by the numerical work of Kley & Lin (1996, ApJ, 461, 933) devoted to the structure of the boundary layer in accretion disks, which provides the outer boundary conditions for our simulations. We analyse the redistribution of accreted material with a range of values of specific entropy relative to the bulk specific entropy of the material in the accreting object's convective envelope. Low specific entropy accreted material characterises the so-called cold accretion process, whereas high specific entropy is relevant to hot accretion. A primary goal is to understand whether and how accreted energy deposited onto a stellar surface is redistributed in the interior. This study focusses on the high accretion rates characteristic of FU Ori systems. We find that the highest entropy cases produce a distinctive behaviour in the mass redistribution, rms velocities, and enthalpy flux in the convective envelope. This change in behaviour is characterised by the formation of a hot layer on the surface of the accreting object, which tends to suppress convection in the envelope. We analyse the long-term effect of such a hot buffer zone on the structure and evolution of the accreting object with 1D stellar evolution calculations. We study the relevance of the assumption of redistribution of accreted energy into the stellar interior used in the literature. We compare results obtained with the latter treatment and those obtained with a more physical accretion boundary condition based on the formation of a hot surface layer suggested by present multi-dimensional
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tsai, Chin-Chung
2013-01-01
In the past, students' science learning self-efficacy (SLSE) was usually measured by questionnaires that consisted of only a single scale, which might be insufficient to fully understand their SLSE. In this study, a multi-dimensional instrument, the SLSE instrument, was developed and validated to assess students' SLSE based on the…
A Multi-Dimensional Classification Model for Scientific Workflow Characteristics
Ramakrishnan, Lavanya; Plale, Beth
2010-04-05
Workflows have been used to model repeatable tasks or operations in manufacturing, business process, and software. In recent years, workflows are increasingly used for orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedicine, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows.
The Multi-Dimensional Character of Core-Collapse Supernovae
Hix, William Raphael; Lentz, E. J.; Bruenn, S. W.; Mezzacappa, Anthony; Messer, Bronson; Endeve, Eirik; Blondin, J. M.; Harris, James Austin; Marronetti, Pedro; Yakunin, Konstantin N
2016-01-01
Core-collapse supernovae, the culmination of massive stellar evolution, are spectacular astronomical events and the principal actors in the story of our elemental origins. Our understanding of these events, while still incomplete, centers around a neutrino-driven central engine that is highly hydrodynamically unstable. Increasingly sophisticated simulations reveal a shock that stalls for hundreds of milliseconds before reviving. Though brought back to life by neutrino heating, the development of the supernova explosion is inextricably linked to multi-dimensional fluid flows. In this paper, the outcomes of three-dimensional simulations that include sophisticated nuclear physics and spectral neutrino transport are juxtaposed to learn about the nature of the three-dimensional fluid flow that shapes the explosion. Comparison is also made between the results of simulations in spherical symmetry from several groups, to give ourselves confidence in the understanding derived from this juxtaposition.
On Multi-Dimensional Vocabulary Teaching Mode for College English Teaching
ERIC Educational Resources Information Center
Zhou, Li-na
2010-01-01
This paper analyses the major approaches in EFL (English as a Foreign Language) vocabulary teaching from historical perspective and puts forward multi-dimensional vocabulary teaching mode for college English. The author stresses that multi-dimensional approaches of communicative vocabulary teaching, lexical phrase teaching method, the grammar…
A one-dimensional shock capturing finite element method and multi-dimensional generalizations
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Mallet, M.; Zanutta, R.; Taki, Y.; Tezduyar, T. E.
1985-01-01
Multi-dimensional generalizations of a one-dimensional finite element shock capturing scheme are proposed. A scalar model problem is used to emphasize that 'preferred directions' are important in multi-dimensional applications. Schemes are developed for the two-dimensional Euler equations. One, based upon characteristics, employs the Mach lines and streamlines as preferred directions.
Wildfire Detection Using a Multi-Dimensional Histogram in Boreal Forests
NASA Astrophysics Data System (ADS)
Honda, K.; Kimura, K.; Honma, T.
2008-12-01
forest in Kalimantan, Indonesia and around Chiang Mai, Thailand. However, the ground-truth data in these areas are sparser than in Alaska. Our method needs a large amount of accurately observed data to build a multi-dimensional histogram for the same area. In this study, we present a system that efficiently selects wildfire data from satellite imagery. Furthermore, building a multi-dimensional histogram from past fire data makes it possible to detect wildfires accurately.
Pointwise estimates of solutions for the multi-dimensional bipolar Euler-Poisson system
NASA Astrophysics Data System (ADS)
Wu, Zhigang; Li, Yeping
2016-06-01
In the paper, we consider a multi-dimensional bipolar hydrodynamic model from semiconductor devices and plasmas. This system takes the form of Euler-Poisson equations with an electric field and frictional damping added to the momentum equations. By making a new analysis of Green's functions for the Euler system with damping and the Euler-Poisson system with damping, we obtain pointwise estimates of the solution for the multi-dimensional bipolar Euler-Poisson system. As a by-product, we extend the decay rates of the densities ρ_i (i = 1, 2) in the usual L^2-norm to the L^p-norm with p ≥ 1, and the time-decay rates of the momenta m_i (i = 1, 2) in the L^2-norm to the L^p-norm with p > 1; all of the decay rates here are optimal.
Convergence of a discretized self-adaptive evolutionary algorithm on multi-dimensional problems.
Hart, William Eugene; DeLaurentis, John Morse
2003-08-01
We consider the convergence properties of a non-elitist self-adaptive evolutionary strategy (ES) on multi-dimensional problems. In particular, we apply our recent convergence theory for a discretized (1,λ)-ES to design a related (1,λ)-ES that converges on a class of separable, unimodal multi-dimensional problems. The distinguishing feature of self-adaptive evolutionary algorithms (EAs) is that the control parameters (like mutation step lengths) are evolved by the evolutionary algorithm. Thus the control parameters are adapted in an implicit manner that relies on the evolutionary dynamics to ensure that more effective control parameters are propagated during the search. Self-adaptation is a central feature of EAs like evolutionary strategies (ES) and evolutionary programming (EP), which are applied to continuous design spaces. Rudolph summarizes theoretical results concerning self-adaptive EAs and notes that the theoretical underpinnings for these methods are essentially unexplored. In particular, convergence theories that ensure convergence to a limit point on continuous spaces have only been developed by Rudolph, Hart, DeLaurentis and Ferguson, and Auger et al. In this paper, we illustrate how our analysis of a (1,λ)-ES for one-dimensional unimodal functions can be used to ensure convergence of a related ES on multi-dimensional functions. This (1,λ)-ES randomly selects a search dimension in each iteration, along which points are generated. For a general class of separable functions, our analysis shows that the ES searches along each dimension independently, and thus this ES converges to the (global) minimum.
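The abstract's (1,λ)-ES, which self-adapts its step length and mutates along one randomly chosen dimension per iteration, can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the authors' algorithm; the function and parameter names (including the log-normal learning rate 0.3) are my own.

```python
import numpy as np

def one_lambda_es(f, x0, sigma0=1.0, lam=10, iters=300, seed=0):
    """Sketch of a non-elitist (1,lambda)-ES with self-adaptive step length.

    Each iteration picks one search dimension at random and generates all
    lambda offspring along it; comma selection keeps only the best offspring
    (the parent is always discarded, i.e. non-elitist)."""
    rng = np.random.default_rng(seed)
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(iters):
        d = rng.integers(x.size)                                 # random search dimension
        sigmas = sigma * np.exp(0.3 * rng.standard_normal(lam))  # self-adapted step sizes
        offspring = np.tile(x, (lam, 1))
        offspring[:, d] += sigmas * rng.standard_normal(lam)     # mutate along dimension d
        best = np.argmin([f(c) for c in offspring])
        x, sigma = offspring[best], sigmas[best]                 # comma selection
    return x

# Separable, unimodal test function (sphere); the ES should approach the origin.
x_best = one_lambda_es(lambda v: float(np.sum(v**2)), x0=[3.0, -2.0, 1.0])
```

Because each mutation is one-dimensional and the sphere is separable, progress along one coordinate never disturbs the others, which is the independence property the analysis exploits.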
Multi-Dimensional Analysis of Dynamic Human Information Interaction
ERIC Educational Resources Information Center
Park, Minsoo
2013-01-01
Introduction: This study aims to understand the interactions of perception, effort, emotion, time and performance during the performance of multiple information tasks using Web information technologies. Method: Twenty volunteers from a university participated in this study. Questionnaires were used to obtain general background information and…
Multi-Dimensional Dynamics of Human Electromagnetic Brain Activity
Kida, Tetsuo; Tanaka, Emi; Kakigi, Ryusuke
2016-01-01
Magnetoencephalography (MEG) and electroencephalography (EEG) are invaluable neuroscientific tools for unveiling human neural dynamics in three dimensions (space, time, and frequency), which are associated with a wide variety of perceptions, cognition, and actions. MEG/EEG also provides different categories of neuronal indices including activity magnitude, connectivity, and network properties along the three dimensions. In the last 20 years, interest has increased in inter-regional connectivity and complex network properties assessed by various sophisticated scientific analyses. We herein review the definition, computation, short history, and pros and cons of connectivity and complex network (graph-theory) analyses applied to MEG/EEG signals. We briefly describe recent developments in source reconstruction algorithms essential for source-space connectivity and network analyses. Furthermore, we discuss a relatively novel approach used in MEG/EEG studies to examine the complex dynamics represented by human brain activity. The correct and effective use of these neuronal metrics provides a new insight into the multi-dimensional dynamics of the neural representations of various functions in the complex human brain. PMID:26834608
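Among the complex network (graph-theory) metrics the review above discusses, one of the simplest is the global clustering coefficient computed from a connectivity matrix. The sketch below is a generic illustration of that metric on a binary adjacency matrix, not code from the review; in MEG/EEG practice such matrices would come from thresholded connectivity estimates.

```python
import numpy as np

def global_clustering(a):
    """Global clustering coefficient (transitivity) of an undirected graph:
    ratio of closed triplets to connected triplets, from a binary,
    symmetric adjacency matrix with zero diagonal."""
    a = np.asarray(a, dtype=float)
    a2 = a @ a
    closed = np.trace(a2 @ a)           # trace(A^3) = 6 x number of triangles
    triplets = a2.sum() - np.trace(a2)  # number of paths of length two
    return closed / triplets if triplets else 0.0

# Complete graph on four nodes: every connected triplet closes into a triangle.
k4 = np.ones((4, 4)) - np.eye(4)
```

For `k4` the coefficient is exactly 1, while a chain of three nodes, having no triangles, yields 0.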
Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions
NASA Astrophysics Data System (ADS)
Kwak, Kyujin; Yang, Seungwon
2015-08-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations of the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected into databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.
Linsen, Lars; Van Long, Tran; Rosenthal, Paul; Rosswog, Stephan
2008-01-01
Data sets resulting from physical simulations typically contain a multitude of physical variables. It is, therefore, desirable that visualization methods take into account the entire multi-field volume data rather than concentrating on one variable. We present a visualization approach based on surface extraction from multi-field particle volume data. The surfaces segment the data with respect to the underlying multi-variate function. Decisions on segmentation properties are based on the analysis of the multi-dimensional feature space. The feature space exploration is performed by an automated multi-dimensional hierarchical clustering method, whose resulting density clusters are shown in the form of density level sets in a 3D star coordinate layout. In the star coordinate layout, the user can select clusters of interest. A selected cluster in feature space corresponds to a segmenting surface in object space. Based on the segmentation property induced by the cluster membership, we extract a surface from the volume data. Our driving applications are Smoothed Particle Hydrodynamics (SPH) simulations, where each particle carries multiple properties. The data sets are given in the form of unstructured point-based volume data. We directly extract our surfaces from such data without prior resampling or grid generation. The surface extraction computes individual points on the surface, which is supported by an efficient neighborhood computation. The extracted surface points are rendered using point-based rendering operations. Our approach combines methods in scientific visualization for object-space operations with methods in information visualization for feature-space operations. PMID:18989000
An information model for managing multi-dimensional gridded data in a GIS
NASA Astrophysics Data System (ADS)
Xu, H.; Abdul-Kadar, F.; Gao, P.
2016-04-01
Earth observation agencies like NASA and NOAA produce huge volumes of historical, near real-time, and forecasting data representing terrestrial, atmospheric, and oceanic phenomena. The data drives climatological and meteorological studies, and underpins operations ranging from weather pattern prediction and forest fire monitoring to global vegetation analysis. These gridded data sets are distributed mostly as files in HDF, GRIB, or netCDF format and quantify variables like precipitation, soil moisture, or sea surface temperature, along one or more dimensions like time and depth. Although the data cube is a well-studied model for storing and analyzing multi-dimensional data, the GIS community remains in need of a solution that simplifies interactions with the data, and elegantly fits with existing database schemas and dissemination protocols. This paper presents an information model that enables Geographic Information Systems (GIS) to efficiently catalog very large heterogeneous collections of geospatially-referenced multi-dimensional rasters—towards providing unified access to the resulting multivariate hypercubes. We show how the implementation of the model encapsulates format-specific variations and provides unified access to data along any dimension. We discuss how this framework lends itself to familiar GIS concepts like image mosaics, vector field visualization, layer animation, distributed data access via web services, and scientific computing. Global data sources like MODIS from USGS and HYCOM from NOAA illustrate how one would employ this framework for cataloging, querying, and intuitively visualizing such hypercubes. ArcGIS—an established platform for processing, analyzing, and visualizing geospatial data—serves to demonstrate how this integration brings the full power of GIS to the scientific community.
Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory
NASA Astrophysics Data System (ADS)
Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.
2014-01-01
Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an
Contrast Analysis for Scale Differences.
ERIC Educational Resources Information Center
Olejnik, Stephen F.; And Others
Research on tests for scale equality has focused exclusively on an overall test statistic and has not examined procedures for identifying specific differences in multiple group designs. The present study compares four contrast analysis procedures for scale differences in the single factor four-group design: (1) Tukey HSD; (2) Kramer-Tukey; (3)…
Spiritual Competency Scale: Further Analysis
ERIC Educational Resources Information Center
Dailey, Stephanie F.; Robertson, Linda A.; Gill, Carman S.
2015-01-01
This article describes a follow-up analysis of the Spiritual Competency Scale, which initially validated ASERVIC's (Association for Spiritual, Ethical and Religious Values in Counseling) spiritual competencies. The study examined whether the factor structure of the Spiritual Competency Scale would be supported by participants (i.e., ASERVIC…
Chemistry and Transport in a Multi-Dimensional Model
NASA Technical Reports Server (NTRS)
Yung, Yuk L.
2004-01-01
Our work has two primary scientific goals, the interannual variability (IAV) of stratospheric ozone and the hydrological cycle of the upper troposphere and lower stratosphere. Our efforts are aimed at integrating new information obtained by spacecraft and aircraft measurements to achieve a better understanding of the chemical and dynamical processes that are needed for realistic evaluations of human impact on the global environment. A primary motivation for studying the ozone layer is to separate the anthropogenic perturbations of the ozone layer from natural variability. Using the recently available merged ozone data (MOD), we have carried out an empirical orthogonal function (EOF) study of the temporal and spatial patterns of the IAV of total column ozone in the tropics. The outstanding problem about water in the stratosphere is its secular increase in the last few decades. The Caltech/JPL multi-dimensional chemical transport model (CTM) is used to simulate the processes that control the water vapor and its isotopic composition in the stratosphere. Datasets we will use for comparison with model results include those obtained by the Total Ozone Mapping Spectrometer (TOMS), the Solar Backscatter Ultraviolet (SBUV and SBUV/2), Stratosphere Aerosol and Gas Experiment (SAGE I and II), the Halogen Occultation Experiment (HALOE), the Atmospheric Trace Molecular Spectroscopy (ATMOS) and those soon to be obtained by the Cirrus Regional Study of Tropical Anvils and Cirrus Layers-Florida Area Cirrus Experiment (CRYSTAL-FACE) mission. The focus of the investigations is the exchange between the stratosphere and the troposphere, and between the troposphere and the biosphere.
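An EOF analysis of the kind mentioned above is commonly computed via a singular value decomposition of the anomaly field. The sketch below illustrates the generic technique on a synthetic space-time field; the field, dimensions, and variable names are my own and not drawn from the study.

```python
import numpy as np

# Illustrative EOF analysis via SVD: decompose a space-time anomaly field into
# spatial patterns (EOFs), their amplitude time series (PCs), and the fraction
# of variance explained by each mode.
rng = np.random.default_rng(1)
ntime, nspace = 120, 50                     # e.g. 120 monthly maps on 50 grid points
pattern = np.sin(np.linspace(0.0, np.pi, nspace))
signal = np.outer(np.sin(np.linspace(0.0, 12.0 * np.pi, ntime)), pattern)
field = signal + 0.1 * rng.standard_normal((ntime, nspace))

anom = field - field.mean(axis=0)           # remove the time mean at each point
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                                   # rows: spatial patterns (EOFs)
pcs = u * s                                 # columns: principal-component time series
explained = s**2 / np.sum(s**2)             # variance fraction per mode
```

Here the oscillating sine pattern dominates the noise, so the leading mode should carry most of the variance, mirroring how a few EOFs typically summarize the IAV of a geophysical field.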
Surface diagnostics for scale analysis.
Dunn, S; Impey, S; Kimpton, C; Parsons, S A; Doyle, J; Jefferson, B
2004-01-01
Stainless steel, polymethylmethacrylate and polytetrafluoroethylene coupons were analysed for surface topographical and adhesion force characteristics using tapping mode atomic force microscopy and force-distance microscopy techniques. The two polymer materials were surface modified by polishing with silicon carbide papers of known grade. The struvite scaling rate was determined for each coupon and related to the data gained from the surface analysis. The scaling rate correlated well with adhesion force measurements indicating that lower energy materials scale at a lower rate. The techniques outlined in the paper provide a method for the rapid screening of materials in potential scaling applications. PMID:14982180
Design and Analysis of a Formation Flying System for the Cross-Scale Mission Concept
NASA Technical Reports Server (NTRS)
Cornara, Stefania; Bastante, Juan C.; Jubineau, Franck
2007-01-01
The ESA-funded "Cross-Scale" Technology Reference Study has been carried out with the primary aim to identify and analyse a mission concept for the investigation of fundamental space plasma processes that involve dynamical non-linear coupling across multiple length scales. To fulfill this scientific mission goal, a constellation of spacecraft is required, flying in loose formations around the Earth and sampling three characteristic plasma scale distances simultaneously, with at least two satellites per scale: electron kinetic (10 km), ion kinetic (100-2000 km), magnetospheric fluid (3000-15000 km). The key Cross-Scale mission drivers identified are the number of S/C, the space segment configuration, the reference orbit design, the transfer and deployment strategy, the inter-satellite localization and synchronization process and the mission operations. This paper presents a comprehensive overview of the mission design and analysis for the Cross-Scale concept and outlines a technically feasible mission architecture for a multi-dimensional investigation of space plasma phenomena. The main effort has been devoted to apply a thorough mission-level trade-off approach and to accomplish an exhaustive analysis, so as to allow the characterization of a wide range of mission requirements and design solutions.
Mohamadirizi, Soheila; Kordi, Masoumeh
2016-01-01
Background: Multi-dimensional self-compassion is one of the important factors predicting fetal-maternal attachment, and it varies among different cultures and countries. The aim of this study was therefore to determine the relationship between multi-dimensional self-compassion and fetal-maternal attachment in the prenatal period. Subjects and Methods: This cross-sectional study was carried out on 394 primigravida women referred to Mashhad Health Care Centers, using a two-stage (cluster-convenience) sampling method, in 2014. Demographic/prenatal characteristics, multi-dimensional self-compassion (26 items) with its dimensions (self-kindness, self-judgment, common humanity, isolation, mindfulness, and over-identification), and fetal-maternal attachment (21 items) were completed by the participants. The statistical analysis was performed with the Pearson correlation coefficient, t-test, one-way ANOVA, and linear regression using SPSS statistical software (version 14). Results: Based on the findings, the mean (standard deviation) value for multi-dimensional self-compassion was 59.81 (6.4) and for fetal-maternal attachment was 81.63 (9.5). There was a positive correlation between fetal-maternal attachment and total self-compassion (P = 0.005, r = 0.30) and its dimensions, including self-kindness (P = 0.003, r = 0.24), self-judgment (P = 0.001, r = 0.18), common humanity (P = 0.004, r = 0.28), isolation (P = 0.006, r = 0.17), mindfulness (P = 0.002, r = 0.15), and over-identification (P = 0.001, r = 0.15). Conclusions: There was a correlation between multi-dimensional self-compassion and fetal-maternal attachment in pregnant women. Hence, education of caregivers by community health midwives regarding psychological problems during pregnancy can be effective in the early diagnosis and identification of such disorders. PMID:27500174
Relaxation-time limit in the multi-dimensional bipolar nonisentropic Euler-Poisson systems
NASA Astrophysics Data System (ADS)
Li, Yeping; Zhou, Zhiming
2015-05-01
In this paper, we consider the multi-dimensional bipolar nonisentropic Euler-Poisson systems, which model various physical phenomena in semiconductor devices, plasmas and channel proteins. We mainly study the relaxation-time limit of the initial value problem for the bipolar full Euler-Poisson equations with well-prepared initial data. Inspired by the Maxwell iteration, we construct different approximation states for the cases τσ = 1 and σ = 1, respectively, and show that periodic initial-value problems of the corresponding scaled bipolar nonisentropic Euler-Poisson systems have unique smooth solutions in the time interval where the classical energy-transport equation and the drift-diffusion equation have smooth solutions. Moreover, we also show that the smooth solutions converge to those of the energy-transport models at the rate τ^2 and to those of the drift-diffusion models at the rate τ, respectively. The proof of these results is based on the continuation principle and the error estimates.
NASA Astrophysics Data System (ADS)
Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang
2016-06-01
In production industries, parameter identification, sensitivity analysis, and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help identify the most influential parameters, quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. This work consists of three main parts. The first part substitutes the numerical, physical model with an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
Multi-dimensional high-order numerical schemes for Lagrangian hydrodynamics
Dai, William W; Woodward, Paul R
2009-01-01
An approximate solver for multi-dimensional Riemann problems at grid points of unstructured meshes, and a numerical scheme for multi-dimensional hydrodynamics, have been developed in this paper. The solver is simple, and is developed only for use in numerical schemes for hydrodynamics. The scheme is truly multi-dimensional, is second-order accurate in both space and time, and satisfies conservation laws exactly for mass, momentum, and total energy. The scheme has been tested through numerical examples involving strong shocks. It has been shown that the scheme offers the principal advantages of high-order Godunov schemes: robust operation in the presence of very strong shocks and thin shock fronts.
Sinclair, Ian; Parry, Elizabeth; Biehal, Nina; Fresen, John; Kay, Catherine; Scott, Stephen; Green, Jonathan
2016-08-01
Multi-dimensional Treatment Foster Care (MTFC), recently renamed Treatment Foster Care Oregon for Adolescents (TFCO-A) is an internationally recognised intervention for troubled young people in public care. This paper seeks to explain conflicting results with MTFC by testing the hypotheses that it benefits antisocial young people more than others and does so through its effects on their behaviour. Hard-to-manage young people in English foster or residential homes were assessed at entry to a randomised and case-controlled trial of MTFC (n = 88) and usual care (TAU) (n = 83). Primary outcome was the Children's Global Assessment Scale (CGAS) at 12 months analysed according to high (n = 112) or low (n = 59) baseline level of antisocial behaviour on the Health of the Nation Outcome Scales for Children and Adolescents. After adjusting for covariates, there was no overall treatment effect on CGAS. However, the High Antisocial Group receiving MTFC gained more on the CGAS than the Low group (mean improvement 9.36 points vs. 5.33 points). This difference remained significant (p < 0.05) after adjusting for propensity and covariates and was statistically explained by the reduced antisocial behaviour ratings in MTFC. These analyses support the use of MTFC for youth in public care but only for those with higher levels of antisocial behaviour. Further work is needed on whether such benefits persist, and on possible negative effects of this treatment for those with low antisocial behaviour. Trial Registry Name: ISRCTN Registry. Registry identification number: ISRCTN 68038570. Registry URL: www.isrctn.com. PMID:26662809
Psychometric properties and confirmatory factor analysis of the Jefferson Scale of Physician Empathy
2011-01-01
Background Empathy towards patients is considered to be associated with improved health outcomes. Many scales have been developed to measure empathy in health care professionals and students. The Jefferson Scale of Physician Empathy (JSPE) has been widely used. This study was designed to examine the psychometric properties and the theoretical structure of the JSPE. Methods A total of 853 medical students responded to the JSPE questionnaire. A hypothetical model was evaluated by structural equation modelling to determine the adequacy of goodness-of-fit to sample data. Results The model showed excellent goodness-of-fit. Further analysis showed that the hypothesised three-factor model of the JSPE structure fits well across the gender differences of medical students. Conclusions The results supported the scale's multi-dimensionality. The 20-item JSPE provides a valid and reliable scale to measure empathy not only in undergraduate and graduate medical education programmes but also among practising doctors. The limitations of the study are discussed and some recommendations are made for future practice. PMID:21810268
Continuation and bifurcation analysis of large-scale dynamical systems with LOCA.
Salinger, Andrew Gerhard; Phipps, Eric Todd; Pawlowski, Roger Patrick
2010-06-01
Dynamical systems theory provides a powerful framework for understanding the behavior of complex evolving systems. However, applying these ideas to large-scale dynamical systems such as discretizations of multi-dimensional PDEs is challenging. Such systems can easily give rise to problems with billions of dynamical variables, requiring specialized numerical algorithms implemented on high performance computing architectures with thousands of processors. This talk will describe LOCA, the Library of Continuation Algorithms, a suite of scalable continuation and bifurcation tools optimized for these types of systems that is part of the Trilinos software collection. In particular, we will describe continuation and bifurcation analysis techniques designed for large-scale dynamical systems that are based on specialized parallel linear algebra methods for solving augmented linear systems. We will also discuss several other Trilinos tools providing nonlinear solvers (NOX), eigensolvers (Anasazi), iterative linear solvers (AztecOO and Belos), preconditioners (Ifpack, ML, Amesos) and parallel linear algebra data structures (Epetra and Tpetra) that LOCA can leverage for efficient and scalable analysis of large-scale dynamical systems.
Steps Toward a Large-Scale Solar Image Data Analysis to Differentiate Solar Phenomena
NASA Astrophysics Data System (ADS)
Banda, J. M.; Angryk, R. A.; Martens, P. C. H.
2013-11-01
We detail the first application of several dissimilarity measures to large-scale solar image data analysis. Using a solar-domain-specific benchmark dataset that contains multiple types of phenomena, we analyzed combinations of image parameters with different dissimilarity measures to determine the combinations that allow us to differentiate between the multiple solar phenomena from both intra-class and inter-class perspectives, where a class comprises events of the same type of solar phenomenon. We also investigate the problem of reducing data dimensionality by applying multi-dimensional scaling (MDS) to the dissimilarity matrices produced by these combinations: as an early investigation, we examine how many MDS components are needed to maintain a good representation of our data (in a new artificial data space) and how many can be discarded to enhance querying performance. Finally, we present a comparative analysis of several classifiers to determine the quality of the dimensionality reduction achieved with this combination of image parameters, dissimilarity measures, and MDS.
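The MDS step can be illustrated with classical (Torgerson) MDS, which double-centers the squared dissimilarity matrix and eigendecomposes it; the eigenvalue spectrum then indicates how many components carry real structure and how many can be discarded. A minimal sketch on toy data (not the solar benchmark; the function name is our own):

```python
import numpy as np

def classical_mds(D, k):
    """Embed n points in k dimensions from an n x n dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]              # keep the k largest components
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0.0))     # n x k coordinates

# toy example: pairwise Euclidean distances of 2-D points are recovered exactly
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, 2)
D_hat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

For non-Euclidean dissimilarity measures the smallest eigenvalues may be negative; the share of total eigenvalue mass captured by the leading components is a simple guide to how many MDS dimensions to retain.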
Scaling analysis of stock markets
NASA Astrophysics Data System (ADS)
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series; LSDFA reveals more local properties through local scaling exponents; DCCA quantifies the cross-correlation of two non-stationary time series. We report auto-correlation and cross-correlation behaviors in three western and three Chinese stock markets over the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis) using the DFA, LSDFA, and DCCA methods. The findings are that correlations of stocks are influenced by the economic systems of different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in western stocks in every period, and stronger auto-correlations after the crisis for every stock except Shen Cheng. LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the economic integration of China and the world after the crisis. The cross-correlations differ across the six stock markets; for the three Chinese stocks, the cross-correlations are weakest during the crisis.
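A minimal order-1 DFA sketch follows: integrate the mean-subtracted series, detrend it linearly in windows of size n, and fit the log-log slope of the fluctuation function F(n) to obtain the scaling exponent α (≈ 0.5 for uncorrelated noise). The scale range and detrending order below are illustrative choices, not the paper's settings:

```python
import numpy as np

def dfa(x, scales):
    """Return the fluctuation F(n) for each window size n (order-1 detrending)."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        res = []
        for seg in segs:                       # detrend each window with a line
            a, b = np.polyfit(t, seg, 1)
            res.append(np.mean((seg - (a * t + b)) ** 2))
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(8192)                  # white noise: expect alpha near 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

DCCA follows the same template with two integrated profiles and window-wise detrended covariances in place of the variance.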
A Multi-Dimensional Approach to Measuring News Media Literacy
ERIC Educational Resources Information Center
Vraga, Emily; Tully, Melissa; Kotcher, John E.; Smithson, Anne-Bennett; Broeckelman-Post, Melissa
2015-01-01
Measuring news media literacy is important in order for it to thrive in a variety of educational and civic contexts. This research builds on existing measures of news media literacy and presents two new scales that measure self-perceived media literacy (SPML) and perceptions of the value of media literacy (VML). Research with a larger sample…
Criterion-Sensitive Measurement: A Study in Multi-Dimensionality.
ERIC Educational Resources Information Center
Linacre, John Michael; And Others
The Functional Independence Measure (FIM) of S. Forer and others (1987) records the degree of disability of rehabilitation patients between "Total Dependence" and "Complete Independence." Using the FIM, ratings on a seven-point scale are made by therapists and other expert care-providers at the time of patient admission to rehabilitation, at the…
NASA Astrophysics Data System (ADS)
Baumann, P.
2009-04-01
Recent progress in hardware and software technology opens up vistas where flexible services on large, multi-dimensional coverage data become a commodity. Interactive data browsing as in Virtual Globes, selective download, and ad-hoc analysis services are about to become routinely available, as several sites already demonstrate. However, for easy access and true machine-machine communication, Semantic Web concepts, currently being investigated for vector and meta data, need to be extended to raster data and other coverage types. It will then be even more important to rely on open standards for data and service interoperability. The Open Geospatial Consortium (OGC), following a modular approach to specifying geo service interfaces, has issued the Web Coverage Service (WCS) Implementation Standard for accessing coverages or parts thereof. In contrast to the Web Map Service (WMS), which delivers imagery, WCS preserves data semantics and thus allows further processing. Together with the Web Catalog Service (CS-W) and the Web Feature Service (WFS), WCS completes the classical triad of meta, vector, and raster data. As such, they represent the core data services on which other services build. The current version of WCS is 1.1 with Corrigendum 2, also referred to as WCS 1.1.2. The WCS Standards Working Group (WCS.SWG) is continuing development of WCS in various directions. One work item is to extend WCS, which currently is confined to regularly gridded data, with support for further coverage types, such as those specified in ISO 19123. Two recently released extensions to WCS are WCS-T ("T" standing for "transactional"), which adds upload capabilities to coverage servers, and WCPS (Web Coverage Processing Service), which offers a coverage processing language, thereby bridging the gap to the generic WPS (Web Processing Service). All this is embedded into OGC's current initiative to achieve modular topical specification suites through so-called "extensions" which add focused
ERIC Educational Resources Information Center
Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd
2012-01-01
The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions…
ERIC Educational Resources Information Center
Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen
2011-01-01
Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These…
Kullback-Leibler Information and Its Applications in Multi-Dimensional Adaptive Testing
ERIC Educational Resources Information Center
Wang, Chun; Chang, Hua-Hua; Boughton, Keith A.
2011-01-01
This paper first discusses the relationship between Kullback-Leibler information (KL) and Fisher information in the context of multi-dimensional item response theory; the relationship is then interpreted for the two-dimensional case from a geometric perspective. This explication should allow for a better understanding of the various item selection methods…
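The relationship the abstract refers to is commonly stated as follows in the adaptive-testing literature (this is the standard form for a dichotomous item, not necessarily the paper's exact notation): the KL item information between a provisional ability θ₀ and a candidate θ is locally a quadratic form in the Fisher information matrix.

```latex
% KL item information for a dichotomous item j, with Q_j = 1 - P_j
KL_j(\boldsymbol{\theta} \,\|\, \boldsymbol{\theta}_0)
  = P_j(\boldsymbol{\theta}_0)\,
      \ln\frac{P_j(\boldsymbol{\theta}_0)}{P_j(\boldsymbol{\theta})}
  + Q_j(\boldsymbol{\theta}_0)\,
      \ln\frac{Q_j(\boldsymbol{\theta}_0)}{Q_j(\boldsymbol{\theta})}

% second-order Taylor expansion about \theta_0 links KL to the Fisher
% information matrix I_j and yields the geometric (elliptical-contour) picture
KL_j(\boldsymbol{\theta} \,\|\, \boldsymbol{\theta}_0)
  \approx \tfrac{1}{2}\,
    (\boldsymbol{\theta}-\boldsymbol{\theta}_0)^{\mathsf T}
    I_j(\boldsymbol{\theta}_0)\,
    (\boldsymbol{\theta}-\boldsymbol{\theta}_0)
```

In two dimensions the contours of the quadratic form are ellipses whose axes are set by the eigenvectors of I_j(θ₀), which is the geometric interpretation the paper develops.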
Evidencing Learning Outcomes: A Multi-Level, Multi-Dimensional Course Alignment Model
ERIC Educational Resources Information Center
Sridharan, Bhavani; Leitch, Shona; Watty, Kim
2015-01-01
This conceptual framework proposes a multi-level, multi-dimensional course alignment model to implement a contextualised constructive alignment of rubric design that authentically evidences and assesses learning outcomes. By embedding quality control mechanisms at each level for each dimension, this model facilitates the development of an aligned…
Developing a Hypothetical Multi-Dimensional Learning Progression for the Nature of Matter
ERIC Educational Resources Information Center
Stevens, Shawn Y.; Delgado, Cesar; Krajcik, Joseph S.
2010-01-01
We describe efforts toward the development of a hypothetical learning progression (HLP) for the growth of grade 7-14 students' models of the structure, behavior and properties of matter, as it relates to nanoscale science and engineering (NSE). This multi-dimensional HLP, based on empirical research and standards documents, describes how students…
Stability of shock waves for multi-dimensional hyperbolic-parabolic conservation laws
NASA Astrophysics Data System (ADS)
Li, Dening
1988-01-01
The uniform linear stability of shock waves is considered for quasilinear hyperbolic-parabolic coupled conservation laws in multi-dimensional space. As an example, the stability condition and its dynamic meaning for an isothermal shock wave in radiative hydrodynamics are analyzed.
ERIC Educational Resources Information Center
Andreev, Valentin I.
2014-01-01
The main aim of this research is to disclose the essence of students' multi-dimensional thinking, also to reveal the rating of factors which stimulate the raising of effectiveness of self-development of students' multi-dimensional thinking in terms of subject-oriented teaching. Subject-oriented learning is characterized as a type of learning where…
Zhao, Hongya; Wang, Debby D; Chen, Long; Liu, Xinyu; Yan, Hong
2016-01-01
Co-clustering, often called biclustering for two-dimensional data, has found many applications, such as gene expression data analysis and text mining. Nowadays, a variety of multi-dimensional arrays (tensors) frequently occur in data analysis tasks, and co-clustering techniques play a key role in dealing with such datasets. Co-clusters represent coherent patterns and exhibit important properties along all the modes. Development of robust co-clustering techniques is important for the detection and analysis of these patterns. In this paper, a co-clustering method based on hyperplane detection in singular vector spaces (HDSVS) is proposed. Specifically, in this method, higher-order singular value decomposition (HOSVD) transforms a tensor into a core part and a singular vector matrix along each mode, whose row vectors can be clustered by a linear grouping algorithm (LGA). Meanwhile, hyperplanar patterns are extracted and successfully support the identification of multi-dimensional co-clusters. To validate HDSVS, a number of synthetic and biological tensors were adopted. The synthetic tensors attested to the favorable performance of this algorithm on noisy or overlapped data. Experiments with gene expression data and lineage data of embryonic cells further verified the reliability of HDSVS on practical problems. Moreover, the detected co-clusters are well consistent with important genetic pathways and gene ontology annotations. Finally, a series of comparisons between HDSVS and state-of-the-art methods on synthetic tensors and a yeast gene expression tensor were implemented, verifying the robust and stable performance of our method. PMID:27598575
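The HOSVD step itself is standard: one SVD per mode unfolding yields the factor matrices, and the core is the tensor multiplied by their transposes along each mode. A minimal sketch (the LGA clustering and hyperplane-detection stages of HDSVS are not shown; helper names are our own):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a tensor of the given target shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_multiply(T, M, mode):
    """n-mode product: multiply tensor T by matrix M along `mode`."""
    shape = list(T.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(T, mode), mode, tuple(shape))

def hosvd(T):
    """Return the core tensor S and one orthogonal factor matrix per mode."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
          for n in range(T.ndim)]
    S = T
    for n, U in enumerate(Us):
        S = mode_multiply(S, U.T, n)       # project onto each mode's basis
    return S, Us

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 3))
S, Us = hosvd(T)
# exact reconstruction: T = S x_1 U_1 x_2 U_2 x_3 U_3
R = S
for n, U in enumerate(Us):
    R = mode_multiply(R, U, n)
```

In HDSVS the row vectors of each factor matrix U_n are what the linear grouping algorithm clusters to find hyperplanar patterns.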
Quantum enhanced estimation of a multi-dimensional field
NASA Astrophysics Data System (ADS)
Datta, Animesh; Baumgratz, Tillmann
We present a framework for the quantum-enhanced estimation of multiple parameters corresponding to non-commuting unitary generators. We derive the quantum Fisher information matrix to put a lower bound on the total variance of all the parameters involved. We present the conditions for the attainment of the multi-parameter bound, which, unlike in the quantum metrology of single parameters, is not guaranteed. Our study also reveals that too much quantum entanglement may be detrimental to attaining the Heisenberg scaling in the estimation of unitarily generated parameters. One particular case of our framework is the simultaneous estimation of all three components of a magnetic field. We propose a probe state demonstrating that simultaneous estimation of the three components achieves better precision than estimating the three components individually. We provide realistic measurements that come close to attaining the quantum limit, exhibiting the advantage of simultaneous quantum estimation even in the case of non-commuting generators. Our work applies to the precision estimation of any Hamiltonian, and may be employed in efficient process tomography and verification. Our theoretical proposal can be implemented in any finite-dimensional quantum system, such as trapped ions and nitrogen vacancy centres in diamond. Acknowledgement: UK EPSRC.
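The bound the abstract derives is, in its standard textbook form, the multi-parameter quantum Cramér-Rao bound (the notation below is the conventional one, not necessarily the authors'):

```latex
% quantum Cramer-Rao bound for simultaneous estimation of
% \theta = (\theta_1, ..., \theta_p) from \nu repetitions
\mathrm{Cov}(\hat{\boldsymbol\theta}) \;\succeq\;
  \frac{1}{\nu}\, F_Q(\boldsymbol\theta)^{-1},
\qquad
\sum_{j=1}^{p} \mathrm{Var}(\hat\theta_j) \;\ge\;
  \frac{1}{\nu}\,\mathrm{Tr}\!\left[F_Q(\boldsymbol\theta)^{-1}\right]

% quantum Fisher information matrix for a pure probe state |\psi_\theta\rangle
[F_Q]_{jk} = 4\,\mathrm{Re}\!\left(
  \langle \partial_j \psi | \partial_k \psi \rangle
  - \langle \partial_j \psi | \psi \rangle
    \langle \psi | \partial_k \psi \rangle \right)
```

For pure states the matrix bound is attainable by a single measurement only under the weak commutativity condition Im⟨∂_j ψ|∂_k ψ⟩ = 0, which is the non-trivial attainment condition the abstract alludes to.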
Anusha, L. S.; Nagendra, K. N.
2011-01-01
The solution of the polarized line radiative transfer (RT) equation in multi-dimensional geometries has rarely been addressed, and only under the approximation that the changes of frequencies at each scattering are uncorrelated (complete frequency redistribution). With the increase in the resolving power of telescopes, being able to handle RT in multi-dimensional structures becomes absolutely necessary. In the present paper, our first aim is to formulate the polarized RT equation for resonance scattering in multi-dimensional media, using the elegant technique of irreducible spherical tensors T^K_Q(i, Ω). Our second aim is to develop a numerical solution method based on the polarized approximate lambda iteration (PALI) approach. We consider both complete frequency redistribution and partial frequency redistribution (PRD) in the line scattering. In a multi-dimensional geometry, the radiation field is non-axisymmetric even in the absence of a symmetry-breaking mechanism such as an oriented magnetic field. We generalize here to the three-dimensional (3D) case the decomposition technique developed for the Hanle effect in a one-dimensional (1D) medium, which allows one to represent the Stokes parameters I, Q, U by a set of six cylindrically symmetrical functions. The scattering phase matrix is expressed in terms of T^K_Q(i, Ω) (i = 0, 1, 2; K = 0, 1, 2; -K ≤ Q ≤ +K), with Ω being the direction of the outgoing ray. Starting from the definition of the source vector, we show that it can be represented in terms of six components S^K_Q independent of Ω. The formal solution of the multi-dimensional transfer equation shows that the Stokes parameters can also be expanded in terms of T^K_Q(i, Ω). Because of the 3D geometry, the expansion coefficients I^K_Q remain Ω-dependent. We show that each I^K_Q satisfies a simple transfer equation with a source term S^K_Q and that
Flexible optofluidic waveguide platform with multi-dimensional reconfigurability
Parks, Joshua W.; Schmidt, Holger
2016-01-01
Dynamic reconfiguration of photonic function is one of the hallmarks of optofluidics. A number of approaches have been taken to implement optical tunability in microfluidic devices. However, a device architecture that allows for simultaneous high-performance microfluidic fluid handling as well as dynamic optical tuning has not been demonstrated. Here, we introduce such a platform based on a combination of solid- and liquid-core polydimethylsiloxane (PDMS) waveguides that also provides fully functioning microvalve-based sample handling. A combination of these waveguides forms a liquid-core multimode interference waveguide that allows for multi-modal tuning of waveguide properties through core liquids and pressure/deformation. We also introduce a novel lifting-gate lightvalve that simultaneously acts as a fluidic microvalve and optical waveguide, enabling mechanically reconfigurable light and fluid paths and seamless incorporation of controlled particle analysis. These new functionalities are demonstrated by an optical switch with >45 dB extinction ratio and an actuatable particle trap for analysis of biological micro- and nanoparticles. PMID:27597164
Mokken Scale Analysis Using Hierarchical Clustering Procedures
ERIC Educational Resources Information Center
van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries
2004-01-01
Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale…
Publishing and sharing multi-dimensional image data with OMERO.
Burel, Jean-Marie; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Moore, Josh; Swedlow, Jason R
2015-10-01
Imaging data are used in the life and biomedical sciences to measure the molecular and structural composition and dynamics of cells, tissues, and organisms. Datasets range in size from megabytes to terabytes and usually contain a combination of binary pixel data and metadata that describe the acquisition process and any derived results. The OMERO image data management platform allows users to securely share image datasets according to specific permissions levels: data can be held privately, shared with a set of colleagues, or made available via a public URL. Users control access by assigning data to specific Groups with defined membership and access rights. OMERO's Permission system supports simple data sharing in a lab, collaborative data analysis, and even teaching environments. OMERO software is open source and released by the OME Consortium at www.openmicroscopy.org. PMID:26223880
Multi-Dimensional Hydrocode Analyses of Penetrating Hypervelocity Impacts
NASA Astrophysics Data System (ADS)
Bessette, G. C.; Lawrence, R. J.; Chhabildas, L. C.; Reinhart, W. D.; Thornhill, T. F.; Saul, W. V.
2004-07-01
The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.
Multi-dimensional hydrocode analyses of penetrating hypervelocity impacts.
Saul, W. Venner; Reinhart, William Dodd; Thornhill, Tom Finley, III; Lawrence, Raymond Jeffery Jr.; Chhabildas, Lalit Chandra; Bessette, Gregory Carl
2003-08-01
The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.
Hitchhiker's guide to multi-dimensional plant pathology.
Saunders, Diane G O
2015-02-01
Filamentous pathogens pose a substantial threat to global food security. One central question in plant pathology is how pathogens cause infection and manage to evade or suppress plant immunity to promote disease. With many technological advances over the past decade, including DNA sequencing technology, an array of new tools has become embedded within the toolbox of next-generation plant pathologists. By employing a multidisciplinary approach, plant pathologists can fully leverage these technical advances to answer key questions in plant pathology, aimed at achieving global food security. This review discusses the impact of: cell biology and genetics on progressing our understanding of infection structure formation on the leaf surface; biochemical and molecular analysis to study how pathogens subdue plant immunity and manipulate plant processes through effectors; genomics and DNA sequencing technologies on all areas of plant pathology; and new forms of collaboration on accelerating exploitation of big data. As we embark on the next phase in plant pathology, the integration of systems biology promises to provide a holistic perspective of plant-pathogen interactions from big data; only once we fully appreciate these complexities can we design truly sustainable solutions to preserve our resources. PMID:25729800
A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube
NASA Astrophysics Data System (ADS)
Zou, Shuzhi; Zhao, Li; Hu, Kongfa
The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
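To see why full cube materialization becomes impractical, and hence why shell/minimal cubing matters, note that d dimensions generate 2^d cuboids (one per group-by subset). A toy full materializer, with hypothetical column names and a SUM measure only:

```python
from itertools import combinations
from collections import defaultdict

def compute_cuboids(rows, dims, measure):
    """Pre-compute every group-by (cuboid) over the given dimensions.

    rows    : list of dicts, one per fact-table row
    dims    : dimension column names (2**len(dims) cuboids are built)
    measure : column to aggregate with SUM
    """
    cube = {}
    for k in range(len(dims) + 1):
        for group in combinations(dims, k):        # one cuboid per subset
            agg = defaultdict(float)
            for r in rows:
                key = tuple(r[d] for d in group)
                agg[key] += r[measure]
            cube[group] = dict(agg)
    return cube

rows = [
    {"city": "Oslo",   "year": 2023, "item": "A", "sales": 10.0},
    {"city": "Oslo",   "year": 2024, "item": "B", "sales": 5.0},
    {"city": "Bergen", "year": 2023, "item": "A", "sales": 7.0},
]
cube = compute_cuboids(rows, ("city", "year", "item"), "sales")
total = cube[()][()]       # the apex cuboid holds the grand total
```

With tens of dimensions the 2^d growth is untenable, which is exactly what shell-fragment approaches avoid by materializing only low-dimensional cuboid groups and assembling the rest on demand.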
An extension of the TV-HLL scheme for multi-dimensional compressible flows
NASA Astrophysics Data System (ADS)
Tiam Kapen, Pascalin; Tchuen, Ghislain
2015-03-01
This paper investigates a very simple method to numerically approximate the solution of the multi-dimensional Riemann problem for gas dynamics, using a direct extension of the Toro-Vazquez Harten-Lax-van Leer (TV-HLL) scheme as its basis. Indeed, the present scheme is obtained by following the Toro-Vazquez splitting and using the HLL algorithm with modified wave speeds for the pressure system. An essential feature of the TV-HLL scheme is its simplicity and its accuracy in computing multi-dimensional flows. The proposed scheme is carefully designed to simplify its eventual numerical implementation. It has been applied to numerical tests, and its performance is demonstrated on some two-dimensional and three-dimensional test problems.
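For reference, the standard HLL intercell flux that such schemes build on (the TV-HLL scheme applies this form, with modified wave-speed estimates, to the pressure subsystem of the Toro-Vazquez splitting) is:

```latex
% HLL flux with left/right wave-speed estimates S_L \le S_R,
% states U_L, U_R and physical fluxes F_L, F_R
F^{\mathrm{HLL}} =
\begin{cases}
  F_L, & 0 \le S_L,\\[4pt]
  \dfrac{S_R F_L - S_L F_R + S_L S_R\,(U_R - U_L)}{S_R - S_L},
       & S_L \le 0 \le S_R,\\[4pt]
  F_R, & S_R \le 0.
\end{cases}
```

The quality of the wave-speed estimates S_L and S_R governs both the robustness and the dissipation of the resulting scheme, which is why the paper's modification targets them.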
Multi-dimensional option pricing using radial basis functions and the generalized Fourier transform
NASA Astrophysics Data System (ADS)
Larsson, Elisabeth; Ahlander, Krister; Hall, Andreas
2008-12-01
We show that the generalized Fourier transform can be used for reducing the computational cost and memory requirements of radial basis function methods for multi-dimensional option pricing. We derive a general algorithm, including a transformation of the Black-Scholes equation into the heat equation, that can be used in any number of dimensions. Numerical experiments in two and three dimensions show that the gain is substantial even for small problem sizes. Furthermore, the gain increases with the number of dimensions.
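The transformation of the Black-Scholes equation into the heat equation that the authors generalize can be stated, in its classical one-dimensional form, as follows (strike K, volatility σ, risk-free rate r, expiry T):

```latex
% Black-Scholes equation for the option value V(S, t)
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - rV = 0

% substitute S = K e^{x}, \; t = T - 2\tau/\sigma^2, \; V = K\,v(x,\tau),
% and write k = 2r/\sigma^2:
\frac{\partial v}{\partial \tau}
  = \frac{\partial^2 v}{\partial x^2}
  + (k-1)\,\frac{\partial v}{\partial x} - k\,v

% the further substitution v = e^{\alpha x + \beta \tau} u with
% \alpha = -(k-1)/2, \; \beta = -(k+1)^2/4 removes the lower-order terms:
\frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2}
```

Working in the heat-equation variables is what makes the generalized Fourier transform applicable and yields the computational savings reported for the radial basis function discretization.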
Collard, L.B.
2001-09-21
A suite of multi-dimensional computer models was developed in 1999 (Collard and Flach) to analyze the transport of residual contamination from high-level waste tanks through the subsurface to seeplines. Enhancements to those models in 2000 include investigating the effect of numerical dispersion, developing a solubility-limited case for U and Pu, and developing a plan for a database as part of the Rapid Screening Tool, with implementation of that plan begun.
Multi-dimensional hybrid Fourier continuation-WENO solvers for conservation laws
NASA Astrophysics Data System (ADS)
Shahbazi, Khosro; Hesthaven, Jan S.; Zhu, Xueyu
2013-11-01
We introduce a multi-dimensional point-wise multi-domain hybrid Fourier-Continuation/WENO technique (FC-WENO) that enables high-order and non-oscillatory solution of systems of nonlinear conservation laws, an essentially dispersionless, spectral solution away from discontinuities, and mild CFL constraints for explicit time stepping schemes. The hybrid scheme conjugates the expensive, shock-capturing WENO method in small regions containing discontinuities with the efficient FC method in the rest of the computational domain, yielding a highly effective overall scheme for applications with a mix of discontinuities and complex smooth structures. The smooth and discontinuous solution regions are distinguished using the multi-resolution procedure of Harten [A. Harten, Adaptive multiresolution schemes for shock computations, J. Comput. Phys. 115 (1994) 319-338]. We consider a WENO scheme of formal order nine and a FC method of order five. The accuracy, stability and efficiency of the new hybrid method for conservation laws are investigated for problems with both smooth and non-smooth solutions. The Euler equations for gas dynamics are solved for the Mach 3 and Mach 1.25 shock wave interaction with a small, plain, oblique entropy wave using the hybrid FC-WENO, the pure WENO and the hybrid central difference-WENO (CD-WENO) schemes. We demonstrate considerable computational advantages of the new FC-based method over the two alternatives. Moreover, in solving a challenging two-dimensional Richtmyer-Meshkov instability (RMI), the hybrid solver results in seven-fold speedup over the pure WENO scheme. Thanks to the multi-domain formulation of the solver, the scheme is straightforwardly implemented on parallel processors using message passing interface as well as on Graphics Processing Units (GPUs) using CUDA programming language. The performance of the solver on parallel CPUs yields almost perfect scaling, illustrating the minimal communication requirements of the multi
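Harten's multi-resolution idea can be reduced, for illustration, to a single-level interpolatory detail coefficient: a cell whose value deviates strongly from the linear interpolant of its neighbours is flagged as non-smooth and routed to the shock-capturing (WENO) branch. A deliberately simplified sketch (one level only, and the threshold eps is an illustrative choice; the paper uses the full multi-level procedure):

```python
import numpy as np

def smoothness_flags(u, eps):
    """Flag cells whose interpolatory detail coefficient exceeds eps.

    The detail d_j = u_j - (u_{j-1} + u_{j+1}) / 2 is O(h^2) for smooth
    data but O(1) at a discontinuity, so a fixed threshold separates them.
    """
    d = np.zeros_like(u)
    d[1:-1] = u[1:-1] - 0.5 * (u[:-2] + u[2:])
    return np.abs(d) > eps

x = np.linspace(0.0, 1.0, 201)
smooth = np.sin(2.0 * np.pi * x)             # well-resolved smooth field
step = np.where(x < 0.5, 0.0, 1.0)           # discontinuity at x = 0.5

flags_smooth = smoothness_flags(smooth, 1e-2)   # expect: nothing flagged
flags_step = smoothness_flags(step, 1e-2)       # expect: cells at the jump
```

In the hybrid solver, flagged cells (plus a safety buffer) get the WENO treatment while the remaining domain is advanced with the cheap, spectrally accurate FC method.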
Beyond the Child-Langmuir Law: The Physics of Multi-dimensional Space-Charge-Limited Emission
NASA Astrophysics Data System (ADS)
Luginsland, John
2001-10-01
Space-Charge-Limited (SCL) flows in diodes have been an area of active research since the pioneering work of Child and Langmuir in the early part of the twentieth century. Indeed, the scaling of current density with voltage to the 3/2 power is one of the best-known limits in the fields of non-neutral plasma physics, accelerator physics, sheath physics, vacuum electronics, and high power microwaves (HPM). In the past five years, there has been renewed interest in the physics and characteristics of space-charge-limited emission in physically realizable configurations. This research has focused on characterizing the current and current density enhancement possible from two- and three-dimensional geometries, such as field-emitting arrays. In 1996, computational efforts led to the development of a scaling law that described the increased current drawn due to two-dimensional effects. Recently, this scaling has been analytically derived from first principles. In parallel efforts, computational work has characterized the edge enhancement of the current density, leading to a better understanding of the physics of explosive emission cathodes. In this talk, the analytic and computational extensions to the one-dimensional Child-Langmuir law will be reviewed, the accuracy of SCL emission algorithms will be assessed, and the experimental implications of multi-dimensional SCL flows will be discussed.
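The one-dimensional limit being extended is the classical Child-Langmuir law for a planar vacuum diode of gap d and voltage V:

```latex
% space-charge-limited current density for a planar diode
% (electron charge e, electron mass m, vacuum permittivity \varepsilon_0)
J_{\mathrm{CL}} = \frac{4\,\varepsilon_0}{9}\,
                  \sqrt{\frac{2e}{m}}\;\frac{V^{3/2}}{d^{2}}
```

Emitters of finite transverse extent draw a mean current density exceeding J_CL by a geometry-dependent factor, with the largest enhancement concentrated at the emitter edges; quantifying that factor is the subject of the two- and three-dimensional scaling laws the talk reviews.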
Fernandes, Michelle; Stein, Alan; Newton, Charles R.; Cheikh-Ismail, Leila; Kihara, Michael; Wulff, Katharina; de León Quintana, Enrique; Aranzeta, Luis; Soria-Frisch, Aureli; Acedo, Javier; Ibanez, David; Abubakar, Amina; Giuliani, Francesca; Lewis, Tamsin; Kennedy, Stephen; Villar, Jose
2014-01-01
Background The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. Methods An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. Results The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35–45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. Conclusions The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts. PMID:25423589
NASA Astrophysics Data System (ADS)
Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang
2015-10-01
For numerical simulation of detonation, the computational cost of uniform meshes is large due to the vast separation of time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper proposes an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes; it makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.
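The workload balancing step above orders AMR cells along a Hilbert space-filling curve so that contiguous index ranges correspond to spatially compact patches. A minimal sketch of the classical index-to-coordinate conversion (an illustration of the general technique, not the authors' parallel code; the grid size and rank count are arbitrary):

```python
def d2xy(n, d):
    """Map Hilbert-curve index d to (x, y) on an n x n grid (n a power of two),
    using the classical iterative conversion."""
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the sub-quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Partition an 8 x 8 mesh among 4 ranks: consecutive Hilbert indices are
# neighbouring cells, so each contiguous chunk is a spatially compact patch.
n = 8
cells = [d2xy(n, d) for d in range(n * n)]
chunks = [cells[r * 16:(r + 1) * 16] for r in range(4)]
```

Because the curve visits every cell exactly once and each step moves to an adjacent cell, splitting the index range evenly gives each MPI rank a contiguous, roughly equal share of cells with small inter-rank boundaries.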
Multi-dimensional Simulations of Core Collapse Supernovae employing Ray-by-Ray Neutrino Transport
NASA Astrophysics Data System (ADS)
Hix, W. R.; Mezzacappa, A.; Liebendoerfer, M.; Messer, O. E. B.; Blondin, J. M.; Bruenn, S. W.
2001-12-01
Decades of research on the mechanism which causes core collapse supernovae have produced a paradigm wherein the shock that results from the formation of the proto-neutron star stalls, failing to produce an explosion. Only when the shock is re-energized by the tremendous neutrino flux that is carrying off the binding energy of this proto-neutron star can it drive off the star's envelope, creating a supernova. Work in recent years has demonstrated the importance of multi-dimensional hydrodynamic effects like convection to successful simulation of an explosion. Further work has established the necessity of accurately characterizing the distribution of neutrinos in energy and direction. This requires discretizing the neutrino distribution into multiple groups, adding greatly to the computational cost. However, no supernova simulations to date have combined self-consistent multi-group neutrino transport with multi-dimensional hydrodynamics. We present preliminary results of our efforts to combine these important facets of the supernova mechanism by coupling self-consistent ray-by-ray multi-group Boltzmann and flux-limited diffusion neutrino transport schemes to multi-dimensional hydrodynamics. This research is supported by NASA under contract NAG5-8405, by the NSF under contract AST-9877130, and under a SciDAC grant from the DoE Office of Science High Energy and Nuclear Physics Program. Work at Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2001-01-01
A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigenvalue limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1 to 17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.
Structural diversity: a multi-dimensional approach to assess recreational services in urban parks.
Voigt, Annette; Kabisch, Nadja; Wurster, Daniel; Haase, Dagmar; Breuste, Jürgen
2014-05-01
Urban green spaces provide important recreational services for urban residents. In general, when park visitors enjoy "the green," they are in actuality appreciating a mix of biotic, abiotic, and man-made park infrastructure elements and qualities. We argue that these three dimensions of structural diversity have an influence on how people use and value urban parks. We present a straightforward approach for assessing urban parks that combines multi-dimensional landscape mapping and questionnaire surveys. We discuss the method as well as the results from its application to differently sized parks in Berlin and Salzburg. PMID:24740619
Axial expansion methods for solution of the multi-dimensional neutron diffusion equation
Beaklini Filho, J.F.
1984-01-01
The feasibility and practical implementation of axial expansion methods for the solution of the multi-dimensional multigroup neutron diffusion (MGD) equations is investigated. The theoretical examination which is applicable to the general MGD equations in arbitrary geometry includes the derivation of a new weak (reduced) form of the MGD equations by expanding the axial component of the neutron flux in a series of known trial functions and utilizing the Galerkin weighting. A general two-group albedo boundary condition is included in the weak form as a natural boundary condition. The application of different types of trial functions is presented.
Fawley, William M.
2002-03-25
We discuss the underlying reasoning behind and the details of the numerical algorithm used in the GINGER free-electron laser(FEL) simulation code to load the initial shot noise microbunching on the electron beam. In particular, we point out that there are some additional subtleties which must be followed for multi-dimensional codes which are not necessary for one-dimensional formulations. Moreover, requiring that the higher harmonics of the microbunching also be properly initialized with the correct statistics leads to additional complexities. We present some numerical results including the predicted incoherent, spontaneous emission as tests of the shot noise algorithm's correctness.
NASA Astrophysics Data System (ADS)
Doicu, Adrian; Efremenko, Dmitry; Trautmann, Thomas
2013-03-01
The multi-dimensional scalar Spherical Harmonics Discrete Ordinate Method (SHDOM) is extended to the vector case. The vector model uses complex and real generalized spherical harmonics in the energetic representation of the Stokes vector, and retains some powerful features of the scalar model, as for example, the combination of the generalized spherical harmonic and the discrete ordinate representations of the radiance field, the use of a linear short characteristic method for computing the corner-point values of the Stokes vector, and the application of the adaptive grid technique. Results illustrating the accuracy of the vector model are shown for realistic simulated clouds.
Multi-Dimensional Asymptotically Stable 4th Order Accurate Schemes for the Diffusion Equation
NASA Technical Reports Server (NTRS)
Abarbanel, Saul; Ditkowski, Adi
1996-01-01
An algorithm is presented which solves the multi-dimensional diffusion equation on complex shapes to 4th-order accuracy and is asymptotically stable in time. This bounded-error result is achieved by constructing, on a rectangular grid, a differentiation matrix whose symmetric part is negative definite. The differentiation matrix accounts for the Dirichlet boundary condition by imposing penalty-like terms. Numerical examples in 2-D show that the method is effective even where standard schemes, stable by traditional definitions, fail.
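The stability argument hinges on the differentiation matrix having a negative-definite symmetric part, which bounds the energy of the semi-discrete system in time. A minimal numerical check of that criterion, using the standard 2nd-order Dirichlet Laplacian as a simple stand-in for the authors' 4th-order penalty-based operator:

```python
import numpy as np

def dirichlet_laplacian(m, h):
    """2nd-order finite-difference Laplacian on m interior nodes with Dirichlet
    boundaries folded in (a stand-in for the paper's 4th-order penalty operator)."""
    A = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    return A / h ** 2

A = dirichlet_laplacian(20, 1.0 / 21)
S = 0.5 * (A + A.T)                      # symmetric part of the differentiation matrix
eigs = np.linalg.eigvalsh(S)
stable = bool(eigs.max() < 0)            # negative definite => du/dt = A u has bounded energy
```

For any such A, d/dt ||u||^2 = u^T (A + A^T) u, so a negative-definite symmetric part immediately gives the asymptotic stability the abstract claims for its 4th-order construction.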
NASA Astrophysics Data System (ADS)
Zeng, Wen; Xie, Maozhao
2006-12-01
The detailed surface reaction mechanism of methane on rhodium catalyst was analyzed. Comparisons between numerical simulation and experiments showed a basic agreement. The combustion process of a homogeneous charge compression ignition (HCCI) engine whose piston surface has been coated with catalyst (rhodium and platinum) was numerically investigated. A multi-dimensional model with detailed chemical kinetics was built. The effects of catalytic combustion on the ignition timing, the temperature and CO concentration fields, and the HC, CO and NOx emissions of the HCCI engine were discussed. The results showed that the ignition timing of the HCCI engine was advanced and that the emissions of HC and CO were decreased by the catalysis.
Scaling analysis of affinity propagation.
Furtlehner, Cyril; Sebag, Michèle; Zhang, Xiangliang
2010-06-01
We analyze and exploit some scaling properties of the affinity propagation (AP) clustering algorithm proposed by Frey and Dueck [Science 315, 972 (2007)]. Following a divide-and-conquer strategy we set up an exact renormalization-based approach to address the question of clustering consistency, in particular, how many clusters are present in a given data set. We first observe that the divide-and-conquer strategy, used on a large data set, hierarchically reduces the complexity O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d=2. In fact, for d larger than 2 the relative loss in precision scales as N^((2-d)/(h+1)d). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent phase (for s > s*) of the underlying hidden cluster structure. At this precise point holds a self-similarity property which can be exploited by the hierarchical strategy to actually locate its position, as a result of an exact decimation procedure. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given data set. PMID:20866473
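For reference, the AP message-passing updates being renormalized above can be sketched in a few lines of NumPy. This is a bare-bones rendering of Frey and Dueck's algorithm on a toy one-dimensional data set, not the hierarchical scheme of the paper; the damping factor, iteration count, and median preference are illustrative choices:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Bare-bones affinity propagation (Frey & Dueck 2007).  S is a similarity
    matrix with the preference values on its diagonal; returns exemplar indices."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        M = A + S
        idx = M.argmax(axis=1)
        first = M[np.arange(n), idx]
        M[np.arange(n), idx] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        colsum = Rp.sum(axis=0)
        Anew = np.minimum(0, colsum[None, :] - Rp)
        np.fill_diagonal(Anew, colsum - R.diagonal())
        A = damping * A + (1 - damping) * Anew
    return np.flatnonzero(R.diagonal() + A.diagonal() > 0)

x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])      # two obvious clusters
S = -(x[:, None] - x[None, :]) ** 2                # negative squared distance
np.fill_diagonal(S, np.median(S[S < 0]))           # median preference
exemplars = affinity_propagation(S)                # one exemplar per cluster
```

Raising or lowering the diagonal preference changes the number of exemplars, which is exactly the role of the penalty coefficient s whose critical value s* the abstract analyzes.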
The power of correlative microscopy: multi-modal, multi-scale, multi-dimensional.
Caplan, Jeffrey; Niethammer, Marc; Taylor, Russell M; Czymmek, Kirk J
2011-10-01
Correlative microscopy is a sophisticated approach that combines the capabilities of typically separate, but powerful microscopy platforms: often including, but not limited, to conventional light, confocal and super-resolution microscopy, atomic force microscopy, transmission and scanning electron microscopy, magnetic resonance imaging and micro/nano CT (computed tomography). When targeting rare or specific events within large populations or tissues, correlative microscopy is increasingly being recognized as the method of choice. Furthermore, this multi-modal assimilation of technologies provides complementary and often unique information, such as internal and external spatial, structural, biochemical and biophysical details from the same targeted sample. The development of a continuous stream of cutting-edge applications, probes, preparation methodologies, hardware and software developments will enable realization of the full potential of correlative microscopy. PMID:21782417
Nucleosynthesis in self-consistent, multi-dimensional simulations of CCSNe
NASA Astrophysics Data System (ADS)
Harris, J. Austin; Hix, W. Raphael; Chertkow, Merek; Bruenn, Stephen; Lentz, Eric; Kasen, Daniel
2016-03-01
Observations of nuclear abundances in core-collapse supernova ejecta, highlighted by γ-ray observations of the 44Ti spatial distribution in the nearby supernova remnants Cas A and SN 1987A, allow nucleosynthesis calculations to place powerful constraints on conditions deep in the interiors of supernovae and their progenitor stars. This ability to probe where direct observations cannot makes such calculations an invaluable tool for understanding the CCSN mechanism. Unfortunately, despite knowing for two decades that supernovae are intrinsically multi-dimensional events, discussions of CCSN nucleosynthesis have been predominantly based on spherically symmetric models, which employ a contrived energy source to launch an explosion and often ignore important neutrino effects. As part of the effort to bridge the gap between first-principles simulations of the explosion mechanism and observations of both supernovae and SNRs, we investigate CCSN nucleosynthesis with self-consistent, 2D simulations using a multi-dimensional radiation-hydrodynamics code. We present nucleosynthesis results for several axisymmetric CCSN models, which exhibit qualitative differences from their parameterized counterparts in their ejecta composition and spatial distribution.
A lock-free priority queue design based on multi-dimensional linked lists
Dechev, Damian; Zhang, Deli
2015-04-03
The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees a worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over state-of-the-art approaches under high concurrency.
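The key step of MDList insertion is the injective mapping from a scalar key to a D-dimensional coordinate vector whose prefix (lexicographic) order preserves key order. One such mapping, sketched below, simply takes the base-b digits of the key, most significant first; the specific base choice is an illustrative assumption, not necessarily the authors':

```python
def key_to_coord(key, D, N):
    """Injectively map key in [0, N) to a D-dimensional coordinate vector:
    the base-b digits of the key, most significant first, so that tuple
    (prefix) order coincides with numeric key order."""
    b = 1
    while b ** D < N:                 # smallest base b with b**D >= N
        b += 1
    coord = []
    for _ in range(D):
        coord.append(key % b)
        key //= b
    return tuple(reversed(coord))

# 16-bit key universe in 8 dimensions -> base-4 digit per dimension
coords = [key_to_coord(k, 8, 1 << 16) for k in range(1000)]
```

Because coordinate order equals key order, locating an insertion point reduces to walking at most D child links (one per dimension), which is where the O(log N) worst-case search bound comes from.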
Multi-dimensional self-esteem and magnitude of change in the treatment of anorexia nervosa.
Collin, Paula; Karatzias, Thanos; Power, Kevin; Howard, Ruth; Grierson, David; Yellowlees, Alex
2016-03-30
Self-esteem improvement is one of the main targets of inpatient eating disorder programmes. The present study sought to examine multi-dimensional self-esteem and magnitude of change in eating psychopathology among adults participating in a specialist inpatient treatment programme for anorexia nervosa. A standardised assessment battery, including multi-dimensional measures of eating psychopathology and self-esteem, was completed pre- and post-treatment for 60 participants (all white Scottish female, mean age=25.63 years). Statistical analyses indicated that self-esteem improved with eating psychopathology and weight over the course of treatment, but that improvements were domain-specific and small in size. Global self-esteem was not predictive of treatment outcome. Dimensions of self-esteem at baseline (Lovability and Moral Self-approval), however, were predictive of magnitude of change in dimensions of eating psychopathology (Shape and Weight Concern). Magnitude of change in Self-Control and Lovability dimensions were predictive of magnitude of change in eating psychopathology (Global, Dietary Restraint, and Shape Concern). The results of this study demonstrate that the relationship between self-esteem and eating disorder is far from straightforward, and suggest that future research and interventions should focus less exclusively on self-esteem as a uni-dimensional psychological construct. PMID:26837476
Multi-dimensional NMR without coherence transfer: Minimizing losses in large systems
Liu, Yizhou; Prestegard, James H.
2011-01-01
Most multi-dimensional solution NMR experiments connect one dimension to another using coherence transfer steps that involve evolution under scalar couplings. While experiments of this type have been a boon to biomolecular NMR the need to work on ever larger systems pushes the limits of these procedures. Spin relaxation during transfer periods for even the most efficient 15N–1H HSQC experiments can result in more than an order of magnitude loss in sensitivity for molecules in the 100 kDa range. A relatively unexploited approach to preventing signal loss is to avoid coherence transfer steps entirely. Here we describe a scheme for multi-dimensional NMR spectroscopy that relies on direct frequency encoding of a second dimension by multi-frequency decoupling during acquisition, a technique that we call MD-DIRECT. A substantial improvement in sensitivity of 15N–1H correlation spectra is illustrated with application to the 21 kDa ADP ribosylation factor (ARF) labeled with 15N in all alanine residues. Operation at 4 °C mimics observation of a 50 kDa protein at 35 °C. PMID:21835658
Visualization of Multi-dimensional MISR Datasets Using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Li, P.; Jacob, J.; Braverman, A.; Block, G.
2003-12-01
Many techniques exist for visualization of high dimensional datasets including Parallel Coordinates, Projection Pursuit, and Self-Organizing Map (SOM), but none of these are particularly well suited to satellite data. Remote sensing datasets are typically highly multivariate, but also have spatial structure. In analyzing such data, it is critical to maintain the spatial context within which multivariate relationships exist. Only then can we begin to investigate how those relationships change spatially, and connect observed phenomena to physical processes that may explain them. We present an analysis and visualization system called SOM_VIS that applies an enhanced SOM algorithm proposed by Todd & Kirby [1] to multi-dimensional image datasets in a way that maintains spatial context. We first use SOM to project high-dimensional data into a non-uniform 3D lattice structure. The lattice structure is then mapped to a color space to serve as a colormap for the image. The Voronoi cell refinement algorithm is then used to map the SOM lattice structure to various levels of color resolution. The final result is a false color image with similar colors representing similar characteristics across all its data dimensions. We demonstrate this system using data from JPL's Multi-angle Imaging Spectro-Radiometer (MISR), which looks at Earth and its atmosphere in 36 channels: all combinations of four spectral bands and nine view angles. The SOM_VIS tool consists of a data control panel for users to select a subset from MISR's Level 1B Radiance data products, and a training control panel for users to choose various parameters for SOM training. These include the size of the SOM lattice, the method used to modify the control vectors towards the input training vector, convergence rate, and number of Voronoi regions. Also, the SOM_VIS system contains a multi-window display system allowing users to view false color SOM images and the corresponding color maps for trained SOM lattices.
Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed
2016-07-01
Condition monitoring of electric drives is of paramount importance since it contributes to enhance the system reliability and availability. Moreover, the knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing faults detection based on bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing faults detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach, for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. PMID:27038887
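The paper's MD MUSIC searches over several frequency dimensions at once; the underlying subspace idea is easiest to see in the classic one-dimensional MUSIC pseudo-spectrum, sketched below on a synthetic two-tone signal standing in for a stator current (the window length, model order, sampling rate and frequencies are illustrative assumptions):

```python
import numpy as np

def music_spectrum(x, p, M, freqs, fs):
    """Classic 1-D MUSIC pseudo-spectrum.  p = number of complex exponentials
    (two per real sinusoid), M = covariance window length."""
    N = len(x)
    X = np.array([x[i:i + M] for i in range(N - M + 1)]).T   # sliding-window data matrix
    Rxx = X @ X.T / X.shape[1]                               # sample covariance
    w, V = np.linalg.eigh(Rxx)                               # eigenvalues ascending
    En = V[:, :M - p]                                        # noise subspace
    m = np.arange(M)
    a = np.exp(2j * np.pi * np.outer(m, freqs) / fs)         # steering vectors
    return 1.0 / np.linalg.norm(En.T @ a, axis=0) ** 2       # peaks at signal frequencies

fs = 1000.0
n = np.arange(512)
rng = np.random.default_rng(0)
# "fundamental" at 50 Hz plus a weaker fault-like component at 120 Hz
x = (np.sin(2 * np.pi * 50 * n / fs) + 0.5 * np.sin(2 * np.pi * 120 * n / fs)
     + 0.05 * rng.standard_normal(n.size))
freqs = np.arange(10.0, 200.0, 0.5)
P = music_spectrum(x, p=4, M=32, freqs=freqs, fs=fs)
```

The pseudo-spectrum is large where the steering vector is nearly orthogonal to the noise subspace, giving sharp peaks near the two tones even with a short covariance window.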
Barth, Jens; Oberndorfer, Cäcilia; Pasluosta, Cristian; Schülein, Samuel; Gassner, Heiko; Reinfelder, Samuel; Kugler, Patrick; Schuldhaus, Dominik; Winkler, Jürgen; Klucken, Jochen; Eskofier, Björn M
2015-01-01
Changes in gait patterns provide important information about individuals' health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson's disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measure of 98% (recall 98%, precision 98%) for 40 m walk tests and of 97% (recall 97%, precision 97%) for free walk tests were obtained for the three groups. Compared to conventional peak detection methods up to 15% F-measure improvement was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living. PMID:25789489
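The core of the method is subsequence DTW: an open-begin/open-end dynamic program that lets a short template match anywhere inside a long signal. A minimal single-template, one-dimensional sketch (the paper's msDTW is multi-dimensional and uses a stride template learned from 25 controls; the sine-bump "stride" below is purely illustrative):

```python
import numpy as np

def subsequence_dtw(template, signal):
    """Open-begin/open-end DTW: locate the subsequence of `signal` that best
    matches `template`.  Returns (cost, start, end) indices into `signal`."""
    n, m = len(template), len(signal)
    D = np.empty((n, m))
    D[0, :] = np.abs(template[0] - signal)        # a match may start anywhere
    for i in range(1, n):
        D[i, 0] = D[i - 1, 0] + abs(template[i] - signal[0])
        for j in range(1, m):
            c = abs(template[i] - signal[j])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    end = int(D[-1].argmin())                     # ...and end anywhere
    i, j = n - 1, end                             # backtrack to find the start
    while i > 0:
        steps = [(D[i - 1, j], i - 1, j)]
        if j > 0:
            steps += [(D[i, j - 1], i, j - 1), (D[i - 1, j - 1], i - 1, j - 1)]
        _, i, j = min(steps)
    return D[-1, end], j, end

template = np.sin(np.linspace(0, np.pi, 20))      # one idealized "stride"
signal = np.concatenate([np.zeros(20), template, np.zeros(20)])
cost, start, end = subsequence_dtw(template, signal)   # locates the embedded bump
```

Repeatedly matching and masking found segments extends this to segmenting many strides from a continuous recording, which is what makes the approach robust to tempo variation compared with fixed-lag peak detection.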
Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, Surendra N.
1994-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional one, which provides only non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, the approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations.
Akhter, T.; Hossain, M. M.; Mamun, A. A.
2012-09-15
Dust-acoustic (DA) solitary structures and their multi-dimensional instability in a magnetized dusty plasma (containing inertial negatively and positively charged dust particles, and Boltzmann electrons and ions) have been theoretically investigated by the reductive perturbation method, and the small-k perturbation expansion technique. It has been found that the basic features (polarity, speed, height, thickness, etc.) of such DA solitary structures, and their multi-dimensional instability criterion or growth rate are significantly modified by the presence of opposite polarity dust particles and external magnetic field. The implications of our results in space and laboratory dusty plasma systems have been briefly discussed.
Singh, Brajesh K.; Srivastava, Vineet K.
2015-01-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639
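As a concrete instance, for the one-dimensional time-fractional heat equation D_t^α u = u_xx with u(x, 0) = sin x, the FRDTM recurrence U_{k+1} = [Γ(kα+1)/Γ((k+1)α+1)] ∂²U_k/∂x² gives U_k = (−1)^k sin(x)/Γ(kα+1), i.e. the Mittag-Leffler series u = sin(x) Σ_k (−t^α)^k/Γ(kα+1). A sketch of the truncated series; the test problem is our own choice and not necessarily one of the paper's four:

```python
from math import gamma, sin, exp, pi

def frdtm_heat(x, t, alpha, K=30):
    """Truncated FRDTM series for D_t^alpha u = u_xx, u(x, 0) = sin(x):
    u(x, t) = sin(x) * sum_{k<K} (-t**alpha)**k / Gamma(k*alpha + 1)."""
    return sin(x) * sum((-t ** alpha) ** k / gamma(k * alpha + 1)
                        for k in range(K))

# alpha = 1 must reproduce the classical heat-equation solution e^{-t} sin(x)
u = frdtm_heat(pi / 2, 0.5, 1.0)
exact = exp(-0.5) * sin(pi / 2)
```

Since Γ(k+1) = k!, at α = 1 the series collapses to the exponential, confirming the sketch against the known integer-order solution while remaining valid for 0 < α < 1.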
Incorporating scale into digital terrain analysis
NASA Astrophysics Data System (ADS)
Dragut, L. D.; Eisank, C.; Strasser, T.
2009-04-01
Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, the standard 3 X 3 window-based algorithms are used for this purpose, thus tying the scale of resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between scale domains of terrain information and soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood size. The degree of association between each terrain derivative and crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be further retained. While in a standard 3 X 3 window-based analysis mean curvature was one of the poorest correlated terrain attribute, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered, to 0.16 when the curvature was filtered within 55 X 55 m neighborhood size. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield. Hence, we introduce terrain segmentation as an alternative
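The workflow above (generalize a terrain derivative with focal statistics over growing neighborhoods, correlate each level with the response, and keep the first correlation peak) can be sketched on synthetic data. The grid size, window sizes, and noise level below are illustrative assumptions, not the study's DEM or crop-yield data:

```python
import numpy as np

def focal_mean(a, k):
    """Focal (moving-window) mean over a k x k neighborhood, k odd; the
    r-cell border is cropped, mimicking GIS focal statistics."""
    r = k // 2
    n0, n1 = a.shape
    out = np.zeros((n0 - 2 * r, n1 - 2 * r))
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out += a[r + di:n0 - r + di, r + dj:n1 - r + dj]
    return out / k ** 2

def centered(a, k, R):
    """Focal mean cropped to the centers [R, n-R) so that layers computed
    at different scales align cell-for-cell."""
    r, n = k // 2, a.shape[0]
    return focal_mean(a, k)[R - r:n - r - R, R - r:n - r - R]

rng = np.random.default_rng(1)
terrain = rng.standard_normal((60, 60))            # stand-in terrain derivative
# synthetic "crop yield" driven by the 5 x 5 scale of the terrain attribute
yield_ = centered(terrain, 5, 12) + 0.05 * rng.standard_normal((36, 36))

# correlate each generalized level with the response; the correlation peak
# identifies the operative scale (here the 5 x 5 neighborhood, by construction)
corrs = {k: np.corrcoef(centered(terrain, k, 12).ravel(), yield_.ravel())[0, 1]
         for k in (1, 3, 5, 7, 9)}
best_k = max(corrs, key=corrs.get)
```

This mirrors the abstract's finding that an attribute poorly correlated at the raw 3 x 3 scale can become the best predictor once it is generalized to the neighborhood size at which the process actually operates.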
Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests
Niles, A M; Garcia, F; Greenwood, D W; Forbes, J W; Tarver, C M; Chidester, S K; Garza, R G; Swizter, L L
2001-05-31
The Steven Test was developed to determine the relative impact sensitivity of metal-encased solid high explosives and also to be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200-540 µs after projectile impact, creating 0.39-2.00 kb peak shocks centered in PBX 9501 explosive discs and a 0.60 kb peak shock in an LX-04 disc. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). Modeling results are presented and compared to experimental data.
Han, Xianlin; Yang, Kui; Gross, Richard W.
2011-01-01
Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems. PMID:21755525
Generation and entanglement of multi-dimensional multi-mode coherent fields in cavity QED
NASA Astrophysics Data System (ADS)
Maleki, Y.
2016-08-01
We introduce generalized multi-mode superposition of multi-dimensional coherent field states and propose a generation scheme of such states in a cavity QED scenario. An appropriate encoding of information on these states is employed, which maps the states to the Hilbert space of some multi-qudit states. The entanglement of these states is characterized based on such proper encodings. A detailed study of entanglement in general multi-qudit coherent states is presented, and in addition to establishing some explicit expressions for quantifying entanglement of such systems, several important features of entanglement in these system states are exposed. Furthermore, the effects of both cavity decay and channel noise on these system states are studied and their properties are illustrated.
Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip
2013-01-29
A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
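The dimension-by-dimension routing the claim describes can be sketched as follows (a hedged toy version with names of our choosing, not the patented implementation): a packet advances hop by hop along the current dimension until it reaches the root's coordinate there, then switches to the next dimension, terminating at the root.

```python
def route_to_root(node, root):
    """Return the sequence of nodes visited from `node` to `root`, inclusive,
    routing along one dimension at a time toward the root's coordinate."""
    pos = list(node)
    path = [tuple(pos)]
    for dim in range(len(pos)):          # handle one dimension at a time
        while pos[dim] != root[dim]:     # advance toward the root's coordinate
            pos[dim] += 1 if root[dim] > pos[dim] else -1
            path.append(tuple(pos))
    return path
```

For example, `route_to_root((2, 0), (0, 3))` first moves along dimension 0 from 2 to 0, then along dimension 1 from 0 to 3, yielding six nodes in total.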
Multi-dimensional on-particle detection technology for multi-category disease classification.
Tan, Jie; Chen, Xiaomin; Du, Guansheng; Luo, Qiaohui; Li, Xiao; Liu, Yaqing; Liang, Xiao; Wu, Jianmin
2016-02-28
A serum peptide profile contains important bio-information, which may help disease classification. The motivation of this study is to take advantage of porous silicon microparticles with multiple surface chemistries to reduce the loss of peptide information and simplify the sample pretreatment. We developed a multi-dimensional on-particle MALDI-TOF technology to acquire high fidelity and cross-reactive molecular fingerprints for mining disease information. The peptide fingerprints of serum samples from colorectal cancer patients, liver cancer patients and healthy volunteers were measured with this technology. The featured mass spectral peaks can successfully discriminate and predict the multiple disease categories. Data visualization for future clinical application was also demonstrated. PMID:26839921
Ionizing shocks in argon. Part II: Transient and multi-dimensional effects
Kapper, M. G.; Cambier, J.-L.
2011-06-01
We extend the computations of ionizing shocks in argon to the unsteady and multi-dimensional, using a collisional-radiative model and a single-fluid, two-temperature formulation of the conservation equations. It is shown that the fluctuations of the shock structure observed in shock-tube experiments can be reproduced by the numerical simulations and explained on the basis of the coupling of the nonlinear kinetics of the collisional-radiative model with wave propagation within the induction zone. The mechanism is analogous to instabilities of detonation waves and also produces a cellular structure commonly observed in gaseous detonations. We suggest that detailed simulations of such unsteady phenomena can yield further information for the validation of nonequilibrium kinetics.
On the canonical forms of the multi-dimensional averaged Poisson brackets
NASA Astrophysics Data System (ADS)
Maltsev, A. Ya.
2016-05-01
We consider here special Poisson brackets given by the "averaging" of local multi-dimensional Poisson brackets in the Whitham method. For brackets of this kind it is natural to ask about their canonical forms, which can be obtained after transformations preserving the "physical meaning" of the field variables. We show here that the averaged bracket can always be written in canonical form after a transformation of "hydrodynamic type" when the initial bracket has no annihilators. In the general case, however, the situation is more complicated: as we show, the averaged bracket can then be transformed to a "pseudo-canonical" form only under certain special ("physical") requirements on the initial bracket.
Improved radial basis function methods for multi-dimensional option pricing
NASA Astrophysics Data System (ADS)
Pettersson, Ulrika; Larsson, Elisabeth; Marcusson, Gunnar; Persson, Jonas
2008-12-01
In this paper, we have derived a radial basis function (RBF) based method for the pricing of financial contracts by solving the Black-Scholes partial differential equation. As an example of a financial contract that can be priced with this method we have chosen the multi-dimensional European basket call option. We have shown numerically that our scheme is second-order accurate in time and spectrally accurate in space for constant shape parameter. For other non-optimal choices of shape parameter values, the resulting convergence rate is algebraic. We propose an adapted node point placement that improves the accuracy compared with a uniform distribution. Compared with an adaptive finite difference method, the RBF method is 20-40 times faster in one and two space dimensions and has approximately the same memory requirements.
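As a minimal illustration of the RBF machinery such solvers build on (an interpolation sketch under assumed parameters, not the paper's Black-Scholes scheme), one can assemble the Gaussian-RBF collocation matrix at the nodes and solve for the expansion weights:

```python
import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, eps=5.0):
    """Interpolate with Gaussian RBFs phi(r) = exp(-(eps*r)^2); eps is the
    shape parameter, whose choice trades accuracy against conditioning."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(np.abs(x_nodes[:, None] - x_nodes[None, :]))  # collocation matrix
    w = np.linalg.solve(A, f_nodes)                       # expansion weights
    return phi(np.abs(x_eval[:, None] - x_nodes[None, :])) @ w
```

Decreasing `eps` flattens the basis functions and worsens the conditioning of the collocation matrix, which is one reason the choice of shape parameter affects the observed convergence rate.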
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
Catley, Christina; McGregor, Carolyn; Percival, Jennifer; Curry, Joanne; James, Andrew
2008-01-01
This paper presents a multi-dimensional approach to knowledge translation, enabling results obtained from a survey evaluating the uptake of Information Technology within Neonatal Intensive Care Units to be translated into knowledge, in the form of health informatics capacity audits. Survey data, which spans multiple roles, patient care scenarios, levels, and hospitals, is translated into patient journey models using a structured data modeling approach. The data model is defined such that users can develop queries to generate patient journey models based on a pre-defined Patient Journey Model architecture (PaJMa). PaJMa models are then analyzed to build capacity audits. Capacity audits offer a sophisticated view of health informatics usage, providing not only details of what IT solutions a hospital utilizes, but also answering the questions when, how and why: by determining when the IT solutions are integrated into the patient journey, how they support the patient information flow, and why they improve the patient journey. PMID:19162956
Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai
2014-01-01
RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately-sized protein molecules, have been implemented using dual sequential 1H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of reducing experimental time as compared to data collection via two separate experiments but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of the sequential 1H data acquisition procedure in the study of RNA is also demonstrated here. PMID:24671105
Gattol, Valentin; Sääksjärvi, Maria; Carbon, Claus-Christian
2011-01-01
Background The authors present a procedural extension of the popular Implicit Association Test (IAT; [1]) that allows for indirect measurement of attitudes on multiple dimensions (e.g., safe–unsafe; young–old; innovative–conventional, etc.) rather than on a single evaluative dimension only (e.g., good–bad). Methodology/Principal Findings In two within-subjects studies, attitudes toward three automobile brands were measured on six attribute dimensions. Emphasis was placed on evaluating the methodological appropriateness of the new procedure, providing strong evidence for its reliability, validity, and sensitivity. Conclusions/Significance This new procedure yields detailed information on the multifaceted nature of brand associations that can add up to a more abstract overall attitude. Just as the IAT, its multi-dimensional extension/application (dubbed md-IAT) is suited for reliably measuring attitudes consumers may not be consciously aware of, able to express, or willing to share with the researcher [2], [3]. PMID:21246037
Multi-dimensional multi-species modeling of transient electrodeposition in LIGA microfabrication.
Evans, Gregory Herbert; Chen, Ken Shuang
2004-06-01
This report documents the efforts and accomplishments of the LIGA electrodeposition modeling project which was headed by the ASCI Materials and Physics Modeling Program. A multi-dimensional framework based on GOMA was developed for modeling time-dependent diffusion and migration of multiple charged species in a dilute electrolyte solution with reduction electro-chemical reactions on moving deposition surfaces. By combining the species mass conservation equations with the electroneutrality constraint, a Poisson equation that explicitly describes the electrolyte potential was derived. The set of coupled, nonlinear equations governing species transport, electric potential, velocity, hydrodynamic pressure, and mesh motion were solved in GOMA, using the finite-element method and a fully-coupled implicit solution scheme via Newton's method. By treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and by repeatedly performing re-meshing with CUBIT and re-mapping with MAPVAR, the moving deposition surfaces were tracked explicitly from start of deposition until the trenches were filled with metal, thus enabling the computation of local current densities that potentially influence the microstructure and frictional/mechanical properties of the deposit. The multi-dimensional, multi-species, transient computational framework was demonstrated in case studies of two-dimensional nickel electrodeposition in single and multiple trenches, without and with bath stirring or forced flow. Effects of buoyancy-induced convection on deposition were also investigated. To further illustrate its utility, the framework was employed to simulate deposition in microscreen-based LIGA molds. Lastly, future needs for modeling LIGA electrodeposition are discussed.
Optimization of resolution and sensitivity of 4D NOESY using multi-dimensional decomposition.
Luan, T; Jaravine, V; Yee, A; Arrowsmith, C H; Orekhov, V Yu
2005-09-01
Highly resolved multi-dimensional NOE data are essential for rapid and accurate determination of spatial protein structures, such as in structural genomics projects. Four-dimensional spectra contain almost none of the spectral overlap inherently present in lower dimensionality spectra and are highly amenable to application of automated routines for spectral resonance location and assignment. However, a high resolution 4D data set using conventional uniform sampling usually requires unacceptably long measurement time. Recently we have reported that the use of non-uniform sampling and multi-dimensional decomposition (MDD) can remedy this problem. Here we validate the accuracy and robustness of the method, and demonstrate its usefulness for fully protonated protein samples. The method was applied to the 11 kDa protein PA1123 from a structural genomics pipeline. A systematic evaluation of spectral reconstructions obtained using 15-100% subsets of the complete reference 4D 1H-13C-13C-1H NOESY spectrum has been performed. With experimental time savings of up to six-fold, the resolution and the sensitivity per unit time are shown to be similar to those of the fully recorded spectrum. For the 30% data subset we demonstrate that the intensities in the reconstructed and reference 4D spectra correspond with a correlation coefficient of 0.997 over the full range of spectral amplitudes. Intensities of the strong, middle and weak cross-peaks correlate with coefficients 0.9997, 0.9965, and 0.83. The method does not produce false peaks. The 2% of weak peaks lost in the 30% reconstruction is in line with the theoretically expected noise increase for the shorter measurement time. Together with good accuracy in the relative line-widths, these results translate to reliable distance constraints derived from sparsely sampled, high resolution 4D NOESY data. PMID:16222553
Two-dimensional Core-collapse Supernova Models with Multi-dimensional Transport
NASA Astrophysics Data System (ADS)
Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun
2015-02-01
We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant {O}(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate {O}(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.
Scale-Specific Multifractal Medical Image Analysis
Braverman, Boris
2013-01-01
Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
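The integral-image speedup described above can be sketched as follows (our toy version with assumed names, not the authors' implementation): a summed-area table makes any block sum four lookups, so box counts at every scale reuse one precomputation.

```python
import numpy as np

def box_counts(mask, sizes):
    """For each box size, count boxes containing any foreground pixel, using
    a summed-area table (partial boxes at the edges are ignored)."""
    sat = np.pad(mask.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    counts = {}
    for s in sizes:
        n = 0
        for i in range(0, mask.shape[0] - s + 1, s):
            for j in range(0, mask.shape[1] - s + 1, s):
                # block sum from four table lookups
                if sat[i + s, j + s] - sat[i, j + s] - sat[i + s, j] + sat[i, j] > 0:
                    n += 1
        counts[s] = n
    return counts
```

The box-counting dimension is then the negative slope of log N(s) against log s; treating that slope as a function of the scale range, rather than a single fitted number, gives the scale-dependent dimension discussed above.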
Scale Free Reduced Rank Image Analysis.
ERIC Educational Resources Information Center
Horst, Paul
In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…
ERIC Educational Resources Information Center
Basantia, Tapan Kumar; Panda, B. N.; Sahoo, Dukhabandhu
2012-01-01
Cognitive development of the learners is the prime task of each and every stage of our school education, and its importance, especially at the elementary stage, is quite worth mentioning. The present study investigated the effectiveness of a new and innovative strategy (i.e., MAI (multi-dimensional activity based integrated approach)) for the development of…
Bengtsson, Henrik; Hössjer, Ola
2006-01-01
Background Low-level processing and normalization of microarray data are most important steps in microarray analysis, which have profound impact on downstream analysis. Multiple methods have been suggested to date, but it is not clear which is the best. It is therefore important to further study the different normalization methods in detail and the nature of microarray data in general. Results A methodological study of affine models for gene expression data is carried out. Focus is on two-channel comparative studies, but the findings generalize also to single- and multi-channel data. The discussion applies to spotted as well as in-situ synthesized microarray data. Existing normalization methods such as curve-fit ("lowess") normalization, parallel and perpendicular translation normalization, and quantile normalization, but also dye-swap normalization are revisited in the light of the affine model and their strengths and weaknesses are investigated in this context. As a direct result from this study, we propose a robust non-parametric multi-dimensional affine normalization method, which can be applied to any number of microarrays with any number of channels either individually or all at once. A high-quality cDNA microarray data set with spike-in controls is used to demonstrate the power of the affine model and the proposed normalization method. Conclusion We find that an affine model can explain non-linear intensity-dependent systematic effects in observed log-ratios. Affine normalization removes such artifacts for non-differentially expressed genes and assures that symmetry between negative and positive log-ratios is obtained, which is fundamental when identifying differentially expressed genes. In addition, affine normalization makes the empirical distributions in different channels more equal, which is the purpose of quantile normalization, and may also explain why dye-swap normalization works or fails. All methods are made available in the aroma package, which is
Resolution or Analysis Scale: What Matters Most?
NASA Astrophysics Data System (ADS)
Miller, Bradley
2016-04-01
Identifying the scale at which different covariates best explain the variation of soil properties reflects the geographic strategy of using map generalization (relative size of map delineations) to identify the scale at which phenomena occur. The size of map delineations corresponds to resolution in raster data models. Although not always considered in digital soil mapping studies, resolution is widely recognized as an important factor in identifying covariates in digital spatial analysis. However, many variables that are useful as predictors in digital soil mapping are dependent upon spatial context. For example, the slope gradient at a specific location can only be calculated by considering the surrounding area. In these cases, an analysis neighborhood is used when calculating such variables using a raster data model. The context or area considered is then dependent upon both the resolution and the number of cells (window size) used to define the neighborhood. This presentation explores the difference between resolution and analysis scale, then tests which concept is most important for identifying optimal scales of correlation for digital soil informatics.
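The resolution-versus-window distinction can be made concrete with a small sketch (our construction, not from the abstract): for neighborhood-based derivatives such as slope, the ground extent considered is the product of cell size and window size, so the same analysis scale can arise from fine resolution with a large window or coarse resolution with a small one.

```python
def analysis_extent(cell_size_m, window_cells):
    """Ground extent (in meters) spanned by a square analysis neighborhood."""
    return cell_size_m * window_cells

# A 5 m DEM with an 11x11 window and a 55 m DEM with a single cell both
# summarize a 55 m extent, yet at different resolutions.
```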
NASA Astrophysics Data System (ADS)
Güçlü, Y.; Hitchon, W. N. G.
2012-04-01
The term 'Convected Scheme' (CS) refers to a family of algorithms, most usually applied to the solution of Boltzmann's equation, which uses a method of characteristics in an integral form to project an initial cell forward to a group of final cells. As such the CS is a 'forward-trajectory' semi-Lagrangian scheme. For multi-dimensional simulations of neutral gas flows, the cell-centered version of this semi-Lagrangian (CCSL) scheme has advantages over other options due to its implementation simplicity, low memory requirements, and easier treatment of boundary conditions. The main drawback of the CCSL-CS to date has been its high numerical diffusion in physical space, because of the 2nd order remapping that takes place at the end of each time step. By means of a modified equation analysis, it is shown that a high order estimate of the remapping error can be obtained a priori, and a small correction to the final position of the cells can be applied upon remapping, in order to achieve full compensation of this error. The resulting scheme is 4th order accurate in space while retaining the desirable properties of the CS: it is conservative and positivity-preserving, and the overall algorithm complexity is not appreciably increased. Two monotone (i.e. non-oscillating) versions of the fourth order CCSL-CS are also presented: one uses a common flux-limiter approach; the other uses a non-polynomial reconstruction to evaluate the derivatives of the density function. The method is illustrated in simple one- and two-dimensional examples, and in a fully 3D solution of the Boltzmann equation describing the expansion of a gas into vacuum through a cylindrical tube.
Luong, J; Gras, R; Cortes, H; Shellie, R A
2012-09-14
Oxygenated compounds like methanol, ethanol, 1-propanol, 2-propanol, 1-butanol, acetaldehyde, crotonaldehyde, ethylene oxide, tetrahydrofuran, 1,4-dioxane, 1,3-dioxolane, and 2-chloromethyl-1,3-dioxolane are commonly encountered in industrial manufacturing processes. Despite the availability of a variety of column stationary phases for chromatographic separation, it is difficult to separate these solutes from their respective matrices using single-dimension gas chromatography. Conventional two-dimensional gas chromatography, implemented with a planar microfluidic device and employing chromatographic columns with dissimilar separation mechanisms (a selective wall-coated open tubular column and an ionic sorbent column), has been successfully applied to resolve twelve industrially significant volatile oxygenated compounds in both gas and aqueous matrices. A Large Volume Gas Injection System (LVGIS) was also employed for sample introduction to enhance system automation and precision. By integrating these concepts, in addition to the capability to separate all twelve components in a single analysis, features associated with multi-dimensional gas chromatography were realized, such as dual retention time capability and the ability to quarantine undesired chromatographic contaminants or matrix components in the first-dimension column to enhance overall system cleanliness. With this technique, a complete separation of all the compounds mentioned can be carried out in less than 15 min. The compounds cited can be analyzed over a range of 250 ppm (v/v) to 100 ppm (v/v) with a relative standard deviation of less than 5% (n=20) and a high degree of reliability. PMID:22410155
Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.
2008-01-01
Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10⁻⁴ samples in range and 2.2 × 10⁻³ samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10⁻³ samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. While our validation of the algorithm was performed using ultrasound data, MUSE
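A one-dimensional toy version of the spline-based idea (our sketch, not the authors' MUSE implementation; it relies on SciPy's `CubicSpline` and bounded `minimize_scalar`) fits a cubic spline to the reference signal and searches for the continuous shift minimizing the sum of squared differences against the delayed signal:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def estimate_delay(reference, delayed, max_shift=2.0):
    """Sub-sample shift d such that reference(t + d) best matches delayed(t)."""
    n = len(reference)
    spline = CubicSpline(np.arange(n), reference)  # continuous reference model
    m = int(np.ceil(max_shift)) + 1
    idx = np.arange(m, n - m)                      # interior samples only
    cost = lambda d: np.sum((spline(idx + d) - delayed[idx]) ** 2)
    return minimize_scalar(cost, bounds=(-max_shift, max_shift),
                           method="bounded").x
```

MUSE generalizes this idea to multiple dimensions, minimizing an analytical matching function over vector displacements rather than a scalar shift.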
Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping
2000-01-01
We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion or quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.
Preparation of 13C and 15N labelled RNAs for heteronuclear multi-dimensional NMR studies.
Nikonowicz, E P; Sirr, A; Legault, P; Jucker, F M; Baer, L M; Pardi, A
1992-09-11
A procedure is described for the efficient preparation of isotopically enriched RNAs of defined sequence. Uniformly labelled nucleotide 5'-triphosphates (NTPs) were prepared from E. coli grown on 13C and/or 15N isotopically enriched media. These procedures routinely yield 180 μmoles of labelled NTPs per gram of 13C-enriched glucose. The labelled NTPs were then used to synthesize RNA oligomers by in vitro transcription. Several 13C- and/or 15N-labelled RNAs have been synthesized for the sequence r(GGCGCUUGCGUC). Under conditions of high salt or low salt, this RNA forms either a symmetrical duplex with two U.U base pairs or a hairpin containing a CUUG loop, respectively. These procedures were used to synthesize uniformly labelled RNAs and an RNA labelled only on the G and C residues. The ability to generate milligram quantities of isotopically labelled RNAs allows application of multi-dimensional heteronuclear magnetic resonance experiments that enormously simplify the resonance assignment and solution structure determination of RNAs. Examples of several such heteronuclear NMR experiments are shown. PMID:1383927
Operationalising the Sustainable Knowledge Society Concept through a Multi-dimensional Scorecard
NASA Astrophysics Data System (ADS)
Dragomirescu, Horatiu; Sharma, Ravi S.
Since the early 21st century, building a Knowledge Society has been an aspiration not only for developed countries but for developing ones too. There is increasing concern worldwide for rendering this process manageable towards a sustainable, equitable and ethically sound societal system. As proper management, including at the societal level, requires both wisdom and measurement, the operationalisation of the Knowledge Society concept encompasses a qualitative side, related to vision-building, and a quantitative one, pertaining to designing and using dedicated metrics. The endeavour of enabling policy-makers to map, steer and monitor the sustainable development of the Knowledge Society at national level, in a world increasingly based on creativity, learning and open communication, has led researchers to devise a wide range of composite indexes. However, as such indexes are generated through weighting and aggregation, their usefulness is limited to retrospectively assessing and comparing levels and states already attained; to better serve policy-making purposes, composite indexes should therefore be complemented by other instruments. Complexification, inspired by the systemic paradigm, allows obtaining "rich pictures" of the Knowledge Society; to this end, a multi-dimensional scorecard of Knowledge Society development is suggested here, one that seeks a more contextual orientation towards sustainability. It is assumed that, in the case of the Knowledge Society, the sustainability condition goes well beyond the "greening" desideratum and should be of a higher order, relying upon the conversion of natural and productive life-cycles into virtuous circles of self-sustainability.
Kitsiou, Dimitra; Coccossis, Harry; Karydis, Michael
2002-02-01
Coastal ecosystems are increasingly threatened by short-sighted management policies that focus on human activities rather than the systems that sustain them. The early assessment of the impacts of human activities on the quality of the environment in coastal areas is important for decision-making, particularly in cases of environment/development conflicts, such as environmental degradation and saturation in tourist areas. In the present study, a methodology was developed for the multi-dimensional evaluation and ranking of coastal areas using a set of criteria, based on the combination of multiple criteria choice methods and Geographical Information Systems (GIS). The northeastern part of the island of Rhodes in the Aegean Sea, Greece was the case study area. The area was divided into sub-areas, which were ranked according to socio-economic and environmental parameters. The robustness of the proposed methodology was assessed using different configurations of the initial criteria and reapplication of the process. The advantages and disadvantages, as well as the usefulness of this methodology for comparing the status of coastal areas and evaluating their potential for further development based on various criteria, are further discussed. PMID:11846155
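A minimal weighted-sum illustration of multi-criteria ranking (scores, weights and criteria are invented; the study combines full multiple-criteria choice methods with GIS rather than a plain weighted sum):

```python
import numpy as np

# Hypothetical scores for four coastal sub-areas on three criteria
# (rows: sub-areas; columns could be water quality, tourist pressure,
# habitat value -- all invented for illustration).
scores = np.array([
    [0.8, 0.3, 0.7],
    [0.5, 0.9, 0.4],
    [0.6, 0.5, 0.8],
    [0.2, 0.7, 0.3],
])
weights = np.array([0.5, 0.2, 0.3])   # criterion weights, summing to 1

# Weighted-sum aggregation; outranking methods are more elaborate but
# produce the same kind of ordering over sub-areas.
composite = scores @ weights
ranking = np.argsort(-composite)      # best sub-area first
```

Re-running the aggregation with perturbed weights is exactly the robustness check the abstract describes: a ranking that survives several weight configurations is more trustworthy.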
Multi-dimensional permutation-modulation format for coherent optical communications.
Ishimura, Shota; Kikuchi, Kazuro
2015-06-15
We introduce the multi-dimensional permutation-modulation format in coherent optical communication systems and analyze its performance, focusing on the power efficiency and the spectral efficiency. In the case of four-dimensional (4D) modulation, the polarization-switched quadrature phase-shift keying (PS-QPSK) format and the polarization quadrature-amplitude modulation (POL-QAM) format can be classified as permutation-modulation formats. Beyond these well-known formats, we find novel modulation formats that trade off power efficiency against spectral efficiency. As the dimension increases, the spectral efficiency can more closely approach the channel capacity predicted by Shannon's theory. We verify these theoretical characteristics through computer simulations of the symbol-error rate (SER) and bit-error rate (BER) performance. For example, the newly found eight-dimensional (8D) permutation-modulation format can improve the spectral efficiency up to 2.75 bit/s/Hz/pol/channel, while the power penalty against QPSK is about 1 dB at BER = 10^-3. PMID:26193538
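The codebook construction can be sketched by enumerating the sign-and-order variants of an initial vector (Slepian-style "variant II" permutation modulation; the 24-word example below corresponds to the POL-QAM set mentioned in the abstract, and the initial vector is the usual textbook choice, not a value taken from this paper):

```python
from itertools import permutations, product
from math import log2

def permutation_codebook(initial):
    """All distinct sign-and-order variants of an initial vector
    (variant-II permutation modulation); duplicates from repeated
    components and sign flips on zeros are removed by the set."""
    words = set()
    for perm in permutations(initial):
        for signs in product((1, -1), repeat=len(perm)):
            words.add(tuple(s * x for s, x in zip(signs, perm)))
    return sorted(words)

# 4D initial vector (1, 1, 0, 0): C(4,2) = 6 position patterns times
# 2^2 sign patterns gives 24 codewords, i.e. log2(24) ≈ 4.58 bit/4D symbol.
book = permutation_codebook((1, 1, 0, 0))
```

Changing the initial vector (and the dimension) sweeps out exactly the power-efficiency versus spectral-efficiency trade-off the abstract explores.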
Polarized multi-dimensional radiative transfer using the discrete ordinates method
Haferman, J.L.; Smith, T.F.; Krajewski, W.F.
1996-11-01
A polarized multi-dimensional radiative transfer model based on the discrete-ordinates method is developed. The model solves the monochromatic vector radiative transfer equation (VRTE) that considers polarization using the four Stokes parameters. For the VRTE, the intensity of the scalar radiative transfer equation is replaced by the Stokes intensity vector; the position-dependent scalar extinction coefficient is replaced by a direction- and position-dependent 4 x 4 extinction matrix; the position-dependent scalar absorption coefficient is replaced by a direction- and position-dependent emission vector; and the scalar phase function is replaced by a scattering phase matrix. The model is capable of solving the VRTE in anisotropically scattering one-, two-, or three-dimensional Cartesian geometries. The model is validated for one-dimensional polarized radiative transfer by comparing its results to several benchmark cases available in the literature. The model results are accurate so long as a quadrature set is chosen so that all phase functions used for a given problem normalize to unity. The model has been developed using a parallel computing paradigm, where each Stokes parameter is solved for on a separate computer processing unit.
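The normalization condition the benchmark accuracy depends on can be checked directly for a candidate quadrature set. A small sketch for the scalar, azimuthally averaged Rayleigh phase function, assuming the common convention that the phase function integrates to unity over the sphere (the paper's quadrature sets and phase matrices are not reproduced here):

```python
import numpy as np

def rayleigh_phase(mu):
    """Azimuthally averaged Rayleigh phase function of mu = cos(theta)."""
    return (3.0 / 4.0) * (1.0 + mu ** 2)

# 8-point Gauss-Legendre quadrature in mu over [-1, 1].
mu, w = np.polynomial.legendre.leggauss(8)

# With azimuthal symmetry, (1/2) * integral_{-1}^{1} p(mu) dmu must be 1
# for a properly normalized phase function under this quadrature set.
norm = 0.5 * np.sum(w * rayleigh_phase(mu))
```

Because the integrand is a low-order polynomial, Gauss-Legendre quadrature normalizes it exactly; a quadrature set that fails this check is precisely the failure mode the abstract warns about.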
Vogt, Stefan; Ralle, Martina
2012-01-01
Copper plays an important role in numerous biological processes across all living systems, predominantly because of its versatile redox behavior. Cellular copper homeostasis is tightly regulated, and disturbances lead to severe disorders such as Wilson disease (WD) and Menkes disease. Age-related changes of copper metabolism have been implicated in other neurodegenerative disorders such as Alzheimer's disease (AD). The role of copper in these diseases has been the topic of mostly bioinorganic research efforts for more than a decade; metal-protein interactions have been characterized and cellular copper pathways have been described. Despite these efforts, crucial aspects of how copper is associated with AD, for example, are still only poorly understood. To take metal-related disease research to the next level, emerging multi-dimensional imaging techniques are now revealing the copper metallome as the basis to better understand disease mechanisms. This review describes how recent advances in X-ray fluorescence microscopy and fluorescent copper probes have started to contribute to this field, specifically to WD and AD. It furthermore provides an overview of current developments and future applications in X-ray microscopic methodologies. PMID:23079951
Quantum-rod dispersed photopolymers for multi-dimensional photonic applications.
Li, Xiangping; Chon, James W M; Evans, Richard A; Gu, Min
2009-02-16
Nanocrystal quantum rods (QRs) have been identified as an important potential key to future photonic devices because of their unique two-photon (2P) excitation, large 2P absorption cross section and polarization sensitivity. 2P excitation in a conventional solid photosensitive medium has driven all-optical devices towards three-dimensional (3D) platform architectures such as 3D photonic crystals, optical circuits and optical memory. The development of a QR-sensitized medium should allow for a polarization-dependent change in refractive index. Such localized polarization control inside the focus can confine light not only in 3D but also in an additional polarization domain. Here we report the first 2P excitation of QR-dispersed photopolymers and its application to the fabrication of polarization-switched waveguides, multi-dimensional optical patterning and optical memory. This fabrication was achieved by a 2P-excited energy transfer process between QRs and azo dyes, which facilitated 3D localized polarization sensitivity, resulting in the control of light in four dimensions. PMID:19219199
Hierarchical multi-dimensional limiting strategy for correction procedure via reconstruction
NASA Astrophysics Data System (ADS)
Park, Jin Seok; Kim, Chongam
2016-03-01
Hierarchical multi-dimensional limiting process (MLP) is improved and extended for flux reconstruction or correction procedure via reconstruction (FR/CPR) on unstructured grids. MLP was originally developed in finite volume method (FVM) and it provides an accurate, robust and efficient oscillation-control mechanism in multiple dimensions for linear reconstruction. This limiting philosophy can be hierarchically extended into higher-order Pn approximation or reconstruction. The resulting algorithm is referred to as the hierarchical MLP and facilitates detailed capture of flow structures while maintaining formal order-of-accuracy in a smooth region and providing accurate non-oscillatory solutions across a discontinuous region. This algorithm was developed within modal DG framework, but it can also be formulated into a nodal framework, most notably the FR/CPR framework. Troubled-cells are detected by applying the MLP concept, and the final accuracy is determined by a projection procedure and the hierarchical MLP limiting step. Extensive numerical analyses and computations, ranging from two-dimensional to three-dimensional fluid systems, have demonstrated that the proposed limiting approach yields outstanding performances in capturing compressible inviscid and viscous flow features.
New methodology for multi-dimensional spinal joint testing with a parallel robot.
Walker, Matthew R; Dickey, James P
2007-03-01
Six degree-of-freedom (6DOF) robots can be used to examine joints and their mechanical properties with the spatial freedom encountered physiologically. Parallel robots are capable of 6DOF motion under large payloads, making them ideal for joint testing. This study developed and assessed novel methods for spinal joint testing with a custom-built parallel robot implementing hybrid load-position control. We hypothesized these methods would allow multi-dimensional control of joint loading scenarios, resulting in physiological joint motions. Tests were performed in 3DOF and 6DOF. The 3DOF methods controlled the forces and the principal moment to within +/-10 N and 0.25 N m under combined bending and compressive loads. The 6DOF tests required larger tolerances for convergence due to machine compliance; however, the expected motion patterns were still observed. The unique mechanism and control approaches show promise for enabling complex three-dimensional loading patterns for in vitro joint biomechanics, and could facilitate research using specimens with unknown, changing, or nonlinear load-deformation properties. PMID:17235615
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Ji, Yuefeng; Zhang, Jie; Li, Hui; Xiong, Qianjin; Qiu, Shaofeng
2014-08-01
Ultrahigh throughput capacity requirements are challenging current optical switching nodes, given the fast development of data center networks. Pbit/s-level all-optical switching networks will need to be deployed soon, which will greatly increase the complexity of node architectures. How to control the future network and node equipment together will become a new problem. An enhanced Software Defined Networking (eSDN) control architecture is proposed in this paper, consisting of a Provider NOX (P-NOX) and a Node NOX (N-NOX). With the cooperation of the P-NOX and N-NOX, flexible control of the entire network can be achieved. An all-optical switching network testbed has been experimentally demonstrated under efficient eSDN control. Pbit/s-level all-optical switching nodes in the testbed are implemented based on a multi-dimensional switching architecture, i.e. multi-level and multi-planar. Due to space and cost limitations, each optical switching node is equipped with only four input line boxes and four output line boxes. Experimental results verify the performance of the proposed control and switching architecture.
A Second-order Godunov Method for Multi-dimensional Relativistic Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Beckwith, Kris; Stone, James M.
2011-03-01
We describe a new Godunov algorithm for relativistic magnetohydrodynamics (RMHD) that combines a simple, unsplit second-order accurate integrator with the constrained transport (CT) method for enforcing the solenoidal constraint on the magnetic field. A variety of approximate Riemann solvers are implemented to compute the fluxes of the conserved variables. The methods are tested with a comprehensive suite of multi-dimensional problems. These tests have helped us develop a hierarchy of correction steps that are applied when the integration algorithm predicts unphysical states due to errors in the fluxes, or errors in the inversion between conserved and primitive variables. Although used exceedingly rarely, these corrections dramatically improve the stability of the algorithm. We present preliminary results from the application of these algorithms to two problems in RMHD: the propagation of supersonic magnetized jets and the amplification of magnetic field by turbulence driven by the relativistic Kelvin-Helmholtz instability (KHI). Both of these applications reveal important differences between the results computed with Riemann solvers that adopt different approximations for the fluxes. For example, we show that the use of Riemann solvers that include both contact and rotational discontinuities can increase the strength of the magnetic field within the cocoon by a factor of 10 in simulations of RMHD jets and can increase the spectral resolution of three-dimensional RMHD turbulence driven by the KHI by a factor of two. This increase in accuracy far outweighs the associated increase in computational cost. Our RMHD scheme is publicly available as part of the Athena code.
Multi-dimensional SAR tomography for monitoring the deformation of newly built concrete buildings
NASA Astrophysics Data System (ADS)
Ma, Peifeng; Lin, Hui; Lan, Hengxing; Chen, Fulong
2015-08-01
Deformation often occurs in buildings at early ages, and constant inspection of deformation is of significant importance to discover possible cracking and avoid wall failure. This paper exploits the multi-dimensional SAR tomography technique to monitor the deformation behavior of two newly built buildings (B1 and B2), with a special focus on the effects of concrete creep and shrinkage. To separate nonlinear thermal expansion from the total deformations, the extended 4-D SAR technique is exploited. The thermal map estimated from 44 TerraSAR-X images demonstrates that the derived thermal amplitude is highly related to building height, due to the upward accumulative effect of thermal expansion. The linear deformation velocity map reveals that B1 was subject to settlement during the construction period; in addition, the creep and shrinkage of B1 led to wall shortening, a height-dependent movement in the downward direction, and the asymmetrical creep of B2 triggered wall deflection, a height-dependent movement in the deflection direction. It is also validated that the extended 4-D SAR can rectify the bias in wall shortening and wall deflection as estimated by 4-D SAR.
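The separation of a linear deformation velocity from a temperature-driven thermal term can be illustrated with a toy least-squares fit. All acquisition parameters and rates below are invented, and real multi-dimensional SAR tomography works on wrapped phase with a tomographic inversion rather than this simple regression:

```python
import numpy as np

rng = np.random.default_rng(1)
wavelength = 0.031                        # m; TerraSAR-X is X-band (~3.1 cm)
t = np.linspace(0, 1.5, 44)               # years, 44 acquisitions
temp = 10 + 15 * np.sin(2 * np.pi * t)    # deg C, idealized seasonal cycle

v_true, k_true = -0.004, 0.0005           # m/yr settlement, m/degC thermal term
phase = (4 * np.pi / wavelength) * (v_true * t + k_true * temp)
phase += rng.normal(0, 0.1, t.size)       # phase noise, radians

# Two-column design matrix: one regressor per deformation mechanism.
A = (4 * np.pi / wavelength) * np.column_stack([t, temp])
v_est, k_est = np.linalg.lstsq(A, phase, rcond=None)[0]
```

Because the settlement regressor grows monotonically while the thermal regressor oscillates with temperature, the two contributions are nearly orthogonal and both rates are recovered accurately, which is the essence of separating thermal expansion from linear deformation.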
Pharmacy Information Systems in Teaching Hospitals: A Multi-dimensional Evaluation Study
Kazemi, Alireza; Moghaddasi, Hamid; Deimazar, Ghasem
2016-01-01
Objectives In hospitals, the pharmacy information system (PIS) is usually a sub-system of the hospital information system (HIS). The PIS supports the distribution and management of drugs, shows drug and medical device inventory, and facilitates preparing needed reports. In this study, pharmacy information systems implemented in general teaching hospitals affiliated to medical universities in Tehran (Iran) were evaluated using a multi-dimensional tool. Methods This was an evaluation study conducted in 2015. To collect data, a checklist was developed by reviewing the relevant literature; this checklist included both general and specific criteria to evaluate pharmacy information systems. The checklist was then validated by medical informatics experts and pharmacists. The sample of the study included five PIS in general-teaching hospitals affiliated to three medical universities in Tehran (Iran). Data were collected using the checklist and through observing the systems. The findings were presented as tables. Results Five PIS were evaluated in the five general-teaching hospitals that had the highest bed numbers. The findings showed that the evaluated pharmacy information systems lacked some important general and specific criteria. Among the general evaluation criteria, it was found that only two of the PIS studied were capable of restricting repeated attempts made for unauthorized access to the systems. With respect to the specific evaluation criteria, no attention was paid to the patient safety aspect. Conclusions The PIS studied were mainly designed to support financial tasks; little attention was paid to clinical and patient safety features. PMID:27525164
On SCALE Validation for PBR Analysis
Ilas, Germina
2010-01-01
Studies were performed to assess the capabilities of the SCALE code system to provide accurate cross sections for analyses of pebble bed reactor configurations. The analyzed configurations are representative of fuel in the HTR-10 reactor in the first critical core and at full-power operating conditions. Relevant parameters (multiplication constant, spectral indices, few-group cross sections) are calculated with SCALE for the considered configurations. The results are compared to results obtained with corresponding consistent MCNP models. The code-to-code comparison shows good agreement at both room and operating temperatures, indicating a good performance of SCALE for analysis of doubly heterogeneous fuel configurations. The development of advanced methods and computational tools for the analysis of pebble bed reactor (PBR) configurations has been a research area of renewed interest for the international community during recent decades. The PBR, which is a High Temperature Gas Cooled Reactor (HTGR) system, represents one of the potential candidates for future deployment throughout the world of reactor systems that would meet increased requirements of efficiency, safety, and proliferation resistance and would support other applications such as hydrogen production or nuclear waste recycling. In the U.S., the pebble bed design is one of the two designs under consideration by the Next Generation Nuclear Plant (NGNP) Program.
NASA Astrophysics Data System (ADS)
EL-Shamy, E. F.
2014-08-01
The propagation and stability of multi-dimensional ion-acoustic solitary waves (IASWs) are investigated in magnetoplasmas consisting of electrons, positrons and ions, with high-energy (superthermal) electrons and positrons. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs is studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability are derived. It is shown that the instability criterion and growth rate depend on the parameter measuring superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. The model under consideration is thus helpful for explaining the propagation and instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.
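The derived equation presumably takes the generic Zakharov-Kuznetsov form, a standard outcome of reductive perturbation theory for magnetized plasmas (the precise coefficients are given in the paper; here A and B are left symbolic, depending on the superthermality index, density ratio, and temperature ratio):

```latex
% Zakharov-Kuznetsov equation for the first-order potential \phi_1,
% with \xi along the external magnetic field and (\eta, \zeta) across it:
\frac{\partial \phi_1}{\partial \tau}
  + A\,\phi_1 \frac{\partial \phi_1}{\partial \xi}
  + B\,\frac{\partial}{\partial \xi}
    \left( \frac{\partial^2}{\partial \xi^2}
         + \frac{\partial^2}{\partial \eta^2}
         + \frac{\partial^2}{\partial \zeta^2} \right) \phi_1 = 0
```

The transverse Laplacian inside the dispersive term is what makes the soliton genuinely multi-dimensional and is the source of the obliquely perturbed instability analyzed with the small-k expansion.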
Pauly, Anne; Wolf, Carolin; Mayr, Andreas; Lenz, Bernd; Kornhuber, Johannes; Friedland, Kristina
2015-01-01
Background In psychiatry, hospital stays and transitions to the ambulatory sector are susceptible to major changes in drug therapy that lead to complex medication regimens and common non-adherence among psychiatric patients. A multi-dimensional and inter-sectoral intervention is hypothesized to improve the adherence of psychiatric patients to their pharmacotherapy. Methods 269 patients from a German university hospital were included in a prospective, open, clinical trial with consecutive control and intervention groups. Control patients (09/2012-03/2013) received usual care, whereas intervention patients (05/2013-12/2013) underwent a program to enhance adherence during their stay and up to three months after discharge. The program consisted of therapy simplification and individualized patient education (multi-dimensional component) during the stay and at discharge, as well as subsequent phone calls after discharge (inter-sectoral component). Adherence was measured by the “Medication Adherence Report Scale” (MARS) and the “Drug Attitude Inventory” (DAI). Results The improvement in the MARS score between admission and three months after discharge was 1.33 points (95% CI: 0.73–1.93) higher in the intervention group compared to controls. In addition, the DAI score improved 1.93 points (95% CI: 1.15–2.72) more for intervention patients. Conclusion These two findings indicate significantly higher medication adherence following the investigated multi-dimensional and inter-sectoral program. Trial Registration German Clinical Trials Register DRKS00006358 PMID:26437449
Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow
NASA Astrophysics Data System (ADS)
Wood, William Alfred, III
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. The scalar test cases include advected shear, circular advection, non-linear advection with coalescing shock and expansion fans, and advection-diffusion. For all scalar cases the fluctuation splitting scheme is more accurate, and the primary mechanism for the improved fluctuation splitting performance is shown to be the reduced production of artificial dissipation relative to DMFDSFV. The most significant scalar result is for combined advection-diffusion, where the present fluctuation splitting scheme is able to resolve the physical dissipation from the artificial dissipation on a much coarser mesh than DMFDSFV is able to, allowing order-of-magnitude reductions in solution time. Among the inviscid test cases the converging supersonic streams problem is notable in that the fluctuation splitting scheme exhibits superconvergent third-order spatial accuracy. For the inviscid cases of a supersonic diamond airfoil, supersonic slender cone, and incompressible circular bump the fluctuation splitting drag coefficient errors are typically half the DMFDSFV drag errors. However, for the incompressible inviscid sphere the fluctuation splitting drag error is larger than for DMFDSFV. A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and the reduced artificial dissipation
Scaling, dimensional analysis, and hardness measurements
NASA Astrophysics Data System (ADS)
Cheng, Yang-Tse; Cheng, Che-Min; Li, Zhiyong
2000-03-01
Hardness is one of the most frequently used concepts in tribology. For nearly one hundred years, indentation experiments have been performed to obtain the hardness of materials. Recent years have seen significant improvements in indentation equipment and a growing need to measure the mechanical properties of materials on small scales. However, questions remain, including what properties can be measured using instrumented indentation techniques, and what is hardness? We discuss these basic questions using dimensional analysis together with finite element calculations. We derive scaling relationships for loading and unloading curves, initial unloading slope, contact depth, and hardness. Hardness is shown to depend on elastic as well as plastic properties of materials. The conditions for "piling-up" and "sinking-in" of surface profiles in indentation are obtained. The methods for estimating contact area are examined. The work done during indentation is also studied. A relationship between hardness, elastic modulus, and the work of indentation is revealed. This relationship offers a new method for obtaining hardness and elastic modulus. In addition, we demonstrate that stress-strain relationships may not be uniquely determined from loading/unloading curves alone using a conical or pyramidal indenter. The dependence of hardness on indenter geometry is also studied. Finally, a scaling theory for indentation in power-law creep solids using self-similar indenters is developed. A connection between creep and the "indentation size effect" is established.
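For contrast with the paper's dimensional-analysis route, the quantities it discusses (initial unloading slope, contact depth, hardness, elastic modulus) are conventionally extracted from a load-displacement curve with the widely used Oliver-Pharr relations. A minimal sketch with invented numbers (this is the standard alternative analysis, not the paper's method):

```python
import math

P_max = 10e-3      # N, peak indentation load (invented)
h_max = 250e-9     # m, depth at peak load (invented)
S = 1.2e5          # N/m, initial unloading slope dP/dh (invented)
eps = 0.75         # geometry factor for a Berkovich-type indenter

h_c = h_max - eps * P_max / S                 # contact depth
A = 24.5 * h_c ** 2                           # ideal Berkovich area function
H = P_max / A                                 # hardness, Pa
E_r = (math.sqrt(math.pi) / 2) * S / math.sqrt(A)  # reduced modulus, Pa
```

With these numbers H comes out near 12 GPa and E_r near 115 GPa; the paper's point is that such contact-area-based estimates can be biased by piling-up or sinking-in, which its work-of-indentation relationship sidesteps.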
Multi-dimensional Conjunctive Operation Rule for the Water Supply System
NASA Astrophysics Data System (ADS)
Chiu, Y.; Tan, C. A.; CHEN, Y.; Tung, C.
2011-12-01
In recent years, floods and droughts have increased in both number and intensity; floods have become more severe during the wet season and droughts more serious during the dry season. To reduce their impact on agriculture, industry, and human life, the conjunctive use of surface water and groundwater has received much attention and become a new direction for future research. Traditionally, reservoir operation follows an operation rule curve to satisfy the water demand, considering only the water levels at the reservoirs and the time of year. The strategy used in conjunctive-use management models is that the water demand is first satisfied by the reservoirs, operated according to the rule curves, and any deficit between demand and supply is provided by groundwater. In this study, we propose a new operation rule, named the multi-dimensional conjunctive operation rule curve (MCORC), which extends the concept of the reservoir operation rule curve. The MCORC is a three-dimensional curve applied to both surface water and groundwater. Three sets of parameters are considered simultaneously in the curve: water levels and the supply percentage at the reservoirs, groundwater levels and the supply percentage, and the time of year. The zonation method and a heuristic algorithm are applied to optimize the curve subject to the constraints of the reservoir operation rules and the safe yield of groundwater. The proposed conjunctive operation rule was applied to a water supply system analogous to an area in northern Taiwan. The results showed that the MCORC can increase the efficiency of water use and reduce the risk of serious water deficits.
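The zone-based supply logic behind a rule curve can be sketched as follows (thresholds and supply fractions are invented; the actual MCORC couples reservoir level, groundwater level, and time of year in a three-dimensional curve rather than this single-reservoir caricature):

```python
def allocate(demand, res_level, lower_curve, upper_curve):
    """Return (surface_supply, groundwater_supply) for one period.

    The reservoir supplies a fraction of demand set by its rule-curve
    zone; the remaining deficit is routed to groundwater pumping.
    Zone thresholds and fractions are illustrative only.
    """
    if res_level >= upper_curve:
        frac = 1.0        # normal zone: full supply from the reservoir
    elif res_level >= lower_curve:
        frac = 0.75       # first restriction zone
    else:
        frac = 0.5        # severe restriction zone
    surface = frac * demand
    return surface, demand - surface

s, g = allocate(demand=100.0, res_level=42.0,
                lower_curve=40.0, upper_curve=60.0)
```

In the multi-dimensional version, the groundwater share would itself be capped by a groundwater-level rule rather than absorbing the deficit unconditionally.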
Numerical simulation of multi-dimensional NMR response in tight sandstone
NASA Astrophysics Data System (ADS)
Guo, Jiangfeng; Xie, Ranhong; Zou, Youlong; Ding, Yejiao
2016-06-01
Conventional logging methods have limitations in the evaluation of tight sandstone reservoirs. The multi-dimensional nuclear magnetic resonance (NMR) logging method has the advantage that it can simultaneously measure transverse relaxation time (T2), longitudinal relaxation time (T1) and diffusion coefficient (D). In this paper, we simulate NMR measurements of tight sandstone with different wettability and saturations by the random walk method and obtain the magnetization decays of Carr–Purcell–Meiboom–Gill pulse sequences with different wait times (TW) and echo spacings (TE) under a magnetic field gradient, resulting in D-T2-T1 maps by the multiple echo trains joint inversion method. We also study the effects of wettability, saturation, signal-to-noise ratio (SNR) of data and restricted diffusion on the D-T2-T1 maps in tight sandstone. The results show that with decreasing wetting fluid saturation, the surface relaxation rate of the wetting fluid gradually increases and restricted diffusion becomes more and more pronounced, which moves the wetting fluid signal toward shorter relaxation times and smaller diffusion coefficients in the D-T2-T1 maps. Meanwhile, the non-wetting fluid position in the D-T2-T1 maps does not change with saturation variation. With decreasing SNR, the ability to identify water and oil signals based on NMR maps gradually decreases. The wetting fluid D-T1 and D-T2 correlations in NMR diffusion-relaxation maps of tight sandstone are obtained by extending the wetting fluid restricted diffusion models, and are further applied to recognize the wetting fluid in simulated D-T2 maps and D-T1 maps.
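The forward model behind such CPMG simulations can be sketched, for unrestricted diffusion, with the standard free-diffusion attenuation term; this closed form is an assumption for illustration, since the paper's random-walk simulations capture restricted diffusion and surface relaxation that it does not.

```python
import numpy as np

GAMMA_H = 2.675e8  # 1H gyromagnetic ratio (rad s^-1 T^-1)

def cpmg_decay(t, t2, d, g, te, m0=1.0):
    """Free-diffusion CPMG echo envelope in a constant gradient g:
    bulk/surface relaxation (T2) plus the standard diffusion attenuation
    rate (gamma*g*TE)**2 * D / 12. Restricted diffusion, as in tight
    sandstone, would reduce the effective D; that is not modeled here."""
    r2_diff = (GAMMA_H * g * te) ** 2 * d / 12.0
    return m0 * np.exp(-t / t2 - r2_diff * t)

t = np.arange(0.0, 0.2, 0.001)  # 200 ms of echo times (s)
# Longer echo spacing -> stronger diffusion attenuation for the same T2 and D
short_te = cpmg_decay(t, t2=0.1, d=2.5e-9, g=0.3, te=0.2e-3)
long_te = cpmg_decay(t, t2=0.1, d=2.5e-9, g=0.3, te=1.0e-3)
```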
Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
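A much-simplified sketch of the consensus idea: predict a probability distribution over labels by averaging the label distributions of the nearest neighbors in morphometric space. The paper encodes joint distributions over all five variables as label Bayesian networks; here a plain per-sample categorical distribution stands in for that, and all data are invented for illustration.

```python
import numpy as np

def consensus_label_distribution(x, neighbors_x, neighbors_p, k=3):
    """Probabilistic consensus by k-nearest-neighbor averaging of
    label distributions (a stand-in for the paper's LBN consensus)."""
    d = np.linalg.norm(neighbors_x - x, axis=1)   # distances in feature space
    nearest = np.argsort(d)[:k]                   # indices of the k closest
    return neighbors_p[nearest].mean(axis=0)      # averaged label distribution

# 4 training interneurons, 2 morphometric features, 3 hypothetical types
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
P = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.0],
              [0.0, 0.2, 0.8], [0.1, 0.2, 0.7]])
p_hat = consensus_label_distribution(np.array([0.05, 0.0]), X, P, k=2)
```

A crisp prediction, as evaluated in the paper, would then be `p_hat.argmax()`.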
Multi-dimensional modeling of atmospheric copper-sulfidation corrosion on non-planar substrates.
Chen, Ken Shuang
2004-11-01
This report documents the author's efforts in the deterministic modeling of copper-sulfidation corrosion on non-planar substrates such as diodes and electrical connectors. A new framework based on Goma was developed for multi-dimensional modeling of atmospheric copper-sulfidation corrosion on non-planar substrates. In this framework, the moving sulfidation front is explicitly tracked by treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and repeatedly performing re-meshing using CUBIT and re-mapping using MAPVAR. Three one-dimensional studies were performed to verify the framework in asymptotic regimes. Limited model validation was also carried out by comparing computed copper-sulfide thickness with experimental data. The framework was first demonstrated in modeling one-dimensional copper sulfidation with charge separation. It was found that both the thickness of the space-charge layers and the electrical potential at the sulfidation surface decrease rapidly as the Cu₂S layer initially thickens, but eventually reach equilibrium values as the Cu₂S layer becomes sufficiently thick; it was also found that electroneutrality is a reasonable approximation and that the electro-migration flux may be estimated by using the equilibrium potential difference between the sulfidation and annihilation surfaces when the Cu₂S layer is sufficiently thick. The framework was then employed to model copper sulfidation in the solid-state-diffusion controlled regime (i.e. stage II sulfidation) on a prototypical diode until a continuous Cu₂S film was formed on the diode surface. The framework was also applied to model copper sulfidation on an intermittent electrical contact between a gold-plated copper pin and a gold-plated copper pad; the presence of Cu₂S was found to raise the effective electrical resistance drastically. Lastly, future research needs in modeling atmospheric copper sulfidation are discussed.
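The stage II (solid-state-diffusion controlled) regime mentioned above is classically approximated by a parabolic growth law; the sketch below rests on that textbook assumption, with an arbitrary rate constant, and omits the charge-separation physics the report models explicitly.

```python
import numpy as np

def film_thickness(t, k_p):
    """Parabolic growth law x(t) = sqrt(2*k_p*t) for a film whose growth
    is rate-limited by solid-state diffusion across its own thickness.
    k_p is a lumped parabolic rate constant (arbitrary here)."""
    return np.sqrt(2.0 * k_p * np.asarray(t, dtype=float))

t = np.array([0.0, 1.0, 4.0, 9.0])   # arbitrary time units
x = film_thickness(t, k_p=0.5)       # -> [0, 1, 2, 3]: thickness grows as sqrt(t)
```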
TimeSpan: Using Visualization to Explore Temporal Multi-dimensional Data of Stroke Patients.
Loorak, Mona Hosseinkhani; Perin, Charles; Kamal, Noreen; Hill, Michael; Carpendale, Sheelagh
2016-01-01
We present TimeSpan, an exploratory visualization tool designed to gain a better understanding of the temporal aspects of the stroke treatment process. Working with stroke experts, we seek to provide a tool to help improve outcomes for stroke victims. Time is of critical importance in the treatment of acute ischemic stroke patients. Every minute that the artery stays blocked, an estimated 1.9 million neurons and 12 km of myelinated axons are destroyed. Consequently, there is a critical need for efficiency of stroke treatment processes. Optimizing time to treatment requires a deep understanding of interval times. Stroke health care professionals must analyze the impact of procedures, events, and patient attributes on time, ultimately to save lives and improve quality of life after stroke. First, we interviewed eight domain experts, and closely collaborated with two of them to inform the design of TimeSpan. We classify the analytical tasks that a visualization tool should support and extract design goals from the interviews and field observations. Based on these tasks and the understanding gained from the collaboration, we designed TimeSpan, a web-based tool for exploring multi-dimensional and temporal stroke data. We describe how TimeSpan incorporates stacked bar graphs, line charts, histograms, and a matrix visualization to create an interactive hybrid view of temporal data. From feedback collected from domain experts in a focus group session, we reflect on the lessons we learned from abstracting the tasks and iteratively designing TimeSpan. PMID:26390482
Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David
2002-01-01
A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifold and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of the MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this
Anusha, L. S.; Nagendra, K. N.
2011-09-01
In two previous papers, we solved the polarized radiative transfer (RT) equation in multi-dimensional (multi-D) geometries with partial frequency redistribution as the scattering mechanism. We assumed Rayleigh scattering as the only source of linear polarization (Q/I, U/I) in both these papers. In this paper, we extend these previous works to include the effect of weak oriented magnetic fields (Hanle effect) on line scattering. We generalize the technique of Stokes vector decomposition in terms of the irreducible spherical tensors T^K_Q, developed by Anusha and Nagendra, to the case of RT with the Hanle effect. A fast iterative method of solution (based on the Stabilized Preconditioned Bi-Conjugate-Gradient technique), developed by Anusha et al., is now generalized to the case of RT in magnetized three-dimensional media. We use the efficient short-characteristics formal solution method for multi-D media, generalized appropriately to the present context. The main results of this paper are the following: (1) a comparison of emergent (I, Q/I, U/I) profiles formed in one-dimensional (1D) media, with the corresponding emergent, spatially averaged profiles formed in multi-D media, shows that in spatially resolved structures, the assumption of 1D may lead to large errors in linear polarization, especially in the line wings. (2) The multi-D RT in semi-infinite non-magnetic media causes a strong spatial variation of the emergent (Q/I, U/I) profiles, which is more pronounced in the line wings. (3) The presence of a weak magnetic field modifies the spatial variation of the emergent (Q/I, U/I) profiles in the line core, by producing significant changes in their magnitudes.
Multi-dimensional Features of Neutrino Transfer in Core-collapse Supernovae
NASA Astrophysics Data System (ADS)
Sumiyoshi, K.; Takiwaki, T.; Matsufuru, H.; Yamada, S.
2015-01-01
We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ~20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (~2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.
Correlation network analysis for multi-dimensional data in stocks market
NASA Astrophysics Data System (ADS)
Kazemilari, Mansooreh; Djauhari, Maman Abdurachman
2015-07-01
This paper shows how the concept of vector correlation can appropriately measure the similarity among multivariate time series in a stock network. The motivation of this paper is (i) to apply the RV coefficient to define the network among stocks, where each stock is represented by a multivariate time series; (ii) to analyze that network in terms of the topological structure of the stocks across the minimum spanning trees; and (iii) to compare the network topology between the univariate correlation network based on r and the multivariate correlation network based on the RV coefficient.
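A sketch of how such an RV-coefficient stock network might be assembled. The Mantegna-style distance transform sqrt(2*(1-RV)) and the toy data are assumptions for illustration; the paper's exact transform and data differ.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def rv_coefficient(x, y):
    """RV coefficient between two multivariate series
    (rows = time, columns = variables, e.g. OHLC prices per stock)."""
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    sxy = xc.T @ yc
    sxx = xc.T @ xc
    syy = yc.T @ yc
    return np.trace(sxy @ sxy.T) / np.sqrt(np.trace(sxx @ sxx) * np.trace(syy @ syy))

# Toy "market": 3 stocks, 50 days, 4 variables each; stock 1 closely tracks stock 0
rng = np.random.default_rng(0)
s0 = rng.standard_normal((50, 4))
s1 = s0 + 0.1 * rng.standard_normal((50, 4))
s2 = rng.standard_normal((50, 4))
stocks = [s0, s1, s2]

# Distance d = sqrt(2*(1 - RV)), then the MST of the distance matrix
n = len(stocks)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        rv = rv_coefficient(stocks[i], stocks[j])
        dist[i, j] = dist[j, i] = np.sqrt(2.0 * (1.0 - rv))
mst = minimum_spanning_tree(dist).toarray()  # n-1 edges of the stock network
```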
Multi-dimensional, multiphase flow analysis of flamespreading in a stick propellant charge
NASA Astrophysics Data System (ADS)
Horst, A. W.; Robbins, F. W.; Gough, P. S.
1983-10-01
The interior ballistic performance of propelling charges employing stick propellant often cannot be simulated using either lumped-parameter or two-phase flow models. Much of this disparity is usually attributed to enhanced burning within the long perforations, perhaps accompanied by splitting or fracture of the sticks to yield additional burning surface. Unusually low (or even reversed) sensitivity of performance to propellant conditioning temperature has also been noted, a factor that, if controllable, may have significant impact on the acceptability of new stick propellant charges. Moreover, the mechanisms responsible for all the above behavior may well be exploitable in high-progressivity, high-density (HPD) propelling charge concepts. A state-of-the-art version of TDNOVA, a two-dimensional, two-phase flow interior ballistic code, is employed to probe the ignition and flamespreading processes in stick propellant charges. Calculations of flame propagation on exterior and interior surfaces, as well as pressurization profiles both within the perforations and in the interstices, are described for typical and simplified stick charge configurations. Reconciliation of predicted behavior with experimental observation is discussed, and further specific studies using TDNOVA are identified in order to verify a postulated explanation for ballistic data exhibiting an anomalous temperature sensitivity.
Detection and analysis of multi-dimensional pulse wave based on optical coherence tomography
NASA Astrophysics Data System (ADS)
Shen, Yihui; Li, Zhifang; Li, Hui; Chen, Haiyu
2014-11-01
Pulse diagnosis is an important method of traditional Chinese medicine (TCM). Doctors diagnose a patient's physiological and pathological status through palpation of the radial artery for pulse information. Optical coherence tomography (OCT) is a useful tool for medical optical research. Current conventional diagnostic devices function only as a pressure sensor to detect the pulse wave, which only partially reflects what doctors feel and loses a large amount of useful information. In this paper, the microscopic changes of the surface skin above the radial artery were studied in the form of images based on OCT. The deformation of the surface skin over a cardiac cycle, caused by the arterial pulse, is detected by OCT. The patient's pulse wave is calculated through image processing and is found to be in good agreement with the result obtained by a pulse analyzer. The patient's physiological and pathological status can thus be monitored in real time. This research provides a new method for the pulse diagnosis of traditional Chinese medicine.
Multi-dimensional forward modeling of frequency-domain helicopter-borne electromagnetic data
NASA Astrophysics Data System (ADS)
Miensopust, M.; Siemon, B.; Börner, R.; Ansari, S.
2013-12-01
Helicopter-borne frequency-domain electromagnetic (HEM) surveys are used for fast, high-resolution, three-dimensional (3-D) resistivity mapping. Nevertheless, 3-D modeling and inversion of an entire HEM data set is in many cases impractical and, therefore, interpretation is commonly based on one-dimensional (1-D) modeling and inversion tools. Such an approach is valid for environments with horizontally layered targets and for groundwater applications, but there are areas of higher dimension that are not recovered correctly by 1-D methods. The focus of this work is multi-dimensional forward modeling. As there is no analytic solution to verify (or falsify) the obtained numerical solutions, comparison with 1-D values as well as among various two-dimensional (2-D) and 3-D codes is essential. At the center of a large structure (a few hundred meters edge length) and above the background structure at some distance from the anomaly, 2-D and 3-D values should match the 1-D solution. Higher-dimensional conditions are present at the edges of the anomaly and, therefore, only a comparison of different 2-D and 3-D codes gives an indication of the reliability of the solution. The more codes agree, especially codes based on different methods and/or written by different programmers, the more reliable the obtained synthetic data set. Very simple structures, such as a conductive or resistive block embedded in a homogeneous or layered half-space without any topography and using a constant sensor height, were chosen to calculate synthetic data. For the comparison, one finite element 2-D code and numerous 3-D codes based on finite difference, finite element and integral equation approaches were applied. Preliminary results of the comparison will be shown and discussed. Additionally, challenges that arose from this comparative study will be addressed, and further steps to approach more realistic field data settings for forward modeling will be discussed. As the driving
Evolutionary artificial neural networks by multi-dimensional particle swarm optimization.
Kiranyaz, Serkan; Ince, Turker; Yildirim, Alper; Gabbouj, Moncef
2009-12-01
In this paper, we propose a novel technique for the automatic design of Artificial Neural Networks (ANNs) by evolving to the optimal network configuration(s) within an architecture space. It is entirely based on a multi-dimensional Particle Swarm Optimization (MD PSO) technique, which re-forms the native structure of swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multidimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually removes the necessity of setting a fixed dimension a priori, which is a common drawback for the family of swarm optimizers. With the proper encoding of the network configurations and parameters into particles, MD PSO can then seek the positional optimum in the error space and the dimensional optimum in the architecture space. The optimum dimension converged at the end of a MD PSO process corresponds to a unique ANN configuration where the network parameters (connections, weights and biases) can then be resolved from the positional optimum reached on that dimension. In addition to this, the proposed technique generates a ranked list of network configurations, from the best to the worst. This is indeed a crucial piece of information, indicating what potential configurations can be alternatives to the best one, and which configurations should not be used at all for a particular problem. In this study, the architecture space is defined over feed-forward, fully-connected ANNs so as to use the conventional techniques such as back-propagation and some other evolutionary methods in this field. The proposed technique is applied over the most challenging synthetic problems to test its optimality on evolving networks and over the benchmark problems to test its generalization capability as well as to make comparative evaluations with the several competing techniques. The experimental
Event-based Recession Analysis across Scales
NASA Astrophysics Data System (ADS)
Chen, B.; Krajewski, W. F.
2012-12-01
Hydrograph recessions have long been a window to investigate hydrological processes and their interactions. The authors conducted an exploratory analysis of about 1000 individual hydrograph recessions over a period of around 15 years (1995-2010) from time series of hourly discharge (USGS IDA stream flow data set) at 27 USGS gauges located in the Iowa and Cedar River basins, with drainage areas ranging from 6.7 to around 17,000 km². They calculated recession exponents with the same recession length but different time lags from the hydrograph peak, ranging from ~0 to 96 hours, and then plotted them against time lags to construct the evolution of the recession exponent. The result shows that, as recession continues, the recession exponent first increases quickly, then decreases quickly, and finally stays constant. Occasionally, and for different reasons, the decreasing portion is missing due to negligible contribution from soil water storage. The increasing part of the evolution can be related to fast response to rainfall, including overland flow and quick subsurface flow through macropores (or tiles), and the decreasing portion can be connected to the delayed soil water response. Lastly, the constant segment can be attributed to groundwater storage with the slowest response. The points where the recession exponent reaches its maximum and begins to plateau are the times at which the fast response and the soil water response end, respectively. The authors conducted further theoretical analysis, combining mathematical derivation and literature results, to explain the observed evolution path of the recession exponent. Their results have a direct application in hydrograph separation and important implications for dynamic basin storage-discharge relation analysis and hydrological process understanding across scales.
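The recession exponent referred to above, b in -dQ/dt = a*Q**b, can be estimated from a single recession limb by log-log regression; this is one common convention, assumed here, and the paper's exact estimator may differ.

```python
import numpy as np

def recession_exponent(q, dt=1.0):
    """Estimate b in -dQ/dt = a*Q**b from a recession limb by ordinary
    least squares of log(-dQ/dt) against log(Q) at interval midpoints."""
    dq_dt = np.diff(q) / dt
    q_mid = 0.5 * (q[1:] + q[:-1])
    mask = dq_dt < 0                      # keep only receding intervals
    b, _log_a = np.polyfit(np.log(q_mid[mask]), np.log(-dq_dt[mask]), 1)
    return b

# Synthetic limb generated with b = 2: Q(t) = Q0 / (1 + a*Q0*t) solves dQ/dt = -a*Q**2
t = np.arange(0.0, 100.0)                 # hourly steps
q = 100.0 / (1.0 + 0.01 * 100.0 * t)
b_hat = recession_exponent(q)             # should recover b close to 2
```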
NASA Astrophysics Data System (ADS)
Kaethner, Christian; Ahlborg, Mandy; Knopp, Tobias; Sattel, Timo F.; Buzug, Thorsten M.
2014-01-01
Magnetic Particle Imaging (MPI) is a tomographic imaging modality capable of visualizing tracers using magnetic fields. A high magnetic gradient strength is mandatory to achieve a reasonable image quality. Therefore, a power optimization of the coil configuration is essential. In order to realize a multi-dimensional efficient gradient field generator, the following improvements over conventionally used Maxwell coil configurations are proposed: (i) curved rectangular coils, (ii) interleaved coils, and (iii) multi-layered coils. Combining these adaptations results in a total power reduction of three orders of magnitude, which is an essential step toward the feasibility of building full-body human MPI scanners.
NASA Astrophysics Data System (ADS)
Delzanno, G. L.
2015-11-01
A spectral method for the numerical solution of the multi-dimensional Vlasov-Maxwell equations is presented. The plasma distribution function is expanded in Fourier (for the spatial part) and Hermite (for the velocity part) basis functions, leading to a truncated system of ordinary differential equations for the expansion coefficients (moments) that is discretized with an implicit, second-order accurate Crank-Nicolson time discretization. The discrete non-linear system is solved with a preconditioned Jacobian-Free Newton-Krylov method. It is shown analytically that the Fourier-Hermite method features exact conservation laws for total mass, momentum and energy in discrete form. Standard tests involving plasma waves and the whistler instability confirm the validity of the conservation laws numerically. The whistler instability test also shows that we can step over the fastest time scale in the system without incurring numerical instabilities. Some preconditioning strategies are presented, showing that the number of linear iterations of the Krylov solver can be drastically reduced and a significant gain in performance can be obtained.
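The velocity-space half of such a discretization can be sketched with probabilists' Hermite polynomials; the spatial Fourier coupling, the Maxwell solver, and the implicit time integrator are all omitted, and the asymmetric-weight convention used below is an assumption (Hermite Vlasov solvers vary in this choice).

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def hermite_coeffs(f_vals, v, n_max):
    """Coefficients C_n in f(v) ~= sum_n C_n * He_n(v) * exp(-v**2/2),
    via the orthogonality relation C_n = integral(f*He_n) / (sqrt(2*pi)*n!),
    evaluated here by simple quadrature on a uniform grid."""
    dv = v[1] - v[0]
    return np.array([
        (f_vals * He.hermeval(v, [0.0] * n + [1.0])).sum() * dv
        / (np.sqrt(2.0 * np.pi) * factorial(n))
        for n in range(n_max + 1)
    ])

v = np.linspace(-10.0, 10.0, 2001)
f = np.exp(-0.5 * (v - 0.5) ** 2) / np.sqrt(2.0 * np.pi)  # drifting Maxwellian, drift u = 0.5
c = hermite_coeffs(f, v, n_max=15)

# For this f, C_n = u**n / (n! * sqrt(2*pi)), so the low-order coefficients
# carry the fluid moments; 16 terms reconstruct f essentially exactly
f_rec = sum(c[n] * He.hermeval(v, [0.0] * n + [1.0])
            for n in range(16)) * np.exp(-0.5 * v**2)
```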
The New Environmental Paradigm and Further Scale Analysis.
ERIC Educational Resources Information Center
Noe, Francis P.; Snow, Rob
1990-01-01
Examined were the responses of park visitors to the New Environmental Paradigm scale. Research methods and results, including reliabilities and factor analysis of the scales on the survey, are discussed. (CW)
NASA Astrophysics Data System (ADS)
Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung
2013-05-01
Due to the scarcity of cross-cultural comparative studies exploring students' self-efficacy in science learning, this study developed a multi-dimensional science learning self-efficacy (SLSE) instrument to measure the SLSE of 316 Singaporean and 303 Taiwanese eighth graders and further examined the differences between the two student groups. Moreover, within-culture comparisons were made in terms of gender. The results showed that, first, the SLSE instrument was valid and reliable for measuring the Singaporean and Taiwanese students' SLSE. Second, through a two-way multivariate analysis of variance (nationality by gender), the main result indicated that the SLSE held by the Singaporean eighth graders was significantly higher than that of their Taiwanese counterparts in all dimensions, including 'conceptual understanding and higher-order cognitive skills', 'practical work (PW)', 'everyday application', and 'science communication'. In addition, the within-culture gender comparisons indicated that the male Singaporean students tended to possess higher SLSE than the female students in all SLSE dimensions except for the 'PW' dimension. However, no gender differences were found in the Taiwanese sample. The findings of this study were interpreted from a socio-cultural perspective in terms of the curriculum differences, societal expectations of science education, and educational policies in Singapore and Taiwan.
NASA Astrophysics Data System (ADS)
de Lima, Isabel; Lovejoy, Shaun
2016-04-01
The characterization of precipitation scaling regimes represents a key contribution to the improved understanding of space-time precipitation variability, which is the focus here. We conduct space-time scaling analyses of spectra and Haar fluctuations in precipitation, using three global-scale precipitation products (one instrument based, one reanalysis based, one satellite and gauge based), from monthly to centennial time scales and from planetary down to several hundred kilometers in spatial scale. Results show the presence - similarly to other atmospheric fields - of an intermediate "macroweather" regime between the familiar weather and climate regimes: we systematically characterize the temporal, spatial, and joint space-time statistics and variability of macroweather precipitation, and the outer scale limit of temporal scaling. These regimes qualitatively and quantitatively alternate in the way fluctuations vary with scale. In the macroweather regime, fluctuations diminish with time scale (this is important for seasonal, annual, and decadal forecasts) while anthropogenic effects increase with time scale. Our approach determines the time scale at which the anthropogenic signal can be detected above the natural variability noise: the critical scale is about 20-40 years (depending on the product and the spatial scale). This explains, for example, why studies that use data covering only a few decades do not easily give evidence of anthropogenic changes in precipitation as a consequence of warming: the period is too short. Overall, while showing that precipitation can be modeled with space-time scaling processes, our results clarify the different precipitation scaling regimes and further allow us to quantify the agreement (and lack of agreement) of the precipitation products as a function of space and time scales. Moreover, this work contributes to clarifying a basic problem in hydro-climatology, which is to measure precipitation trends at decadal and longer scales and to
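The Haar-fluctuation statistic behind this kind of analysis can be sketched in a few lines (a minimal illustration on synthetic white noise, for which averaged fluctuations diminish with scale with exponent H = -1/2; macroweather behaves qualitatively like this, with H < 0):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(2**14)   # stand-in series; white noise has H = -1/2

def haar_fluct(x, lag):
    """Mean absolute Haar fluctuation at scale `lag` (lag even):
    |mean of second half-window - mean of first half-window|."""
    n = (len(x) // lag) * lag
    blocks = x[:n].reshape(-1, lag)
    half = lag // 2
    return np.mean(np.abs(blocks[:, half:].mean(axis=1)
                          - blocks[:, :half].mean(axis=1)))

lags = np.array([2, 4, 8, 16, 32, 64, 128, 256])
S1 = np.array([haar_fluct(x, lag) for lag in lags])
H = np.polyfit(np.log(lags), np.log(S1), 1)[0]   # fluctuation exponent
```

A negative slope H means fluctuations diminish with time scale, the signature of the macroweather regime described above; a positive slope would signal weather- or climate-like behaviour.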
Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR
Ishii, M.; Ravankar, S.T.; Dowlati, R.
1996-04-01
The scaled facility design was based on the three-level scaling method. The first level is based on the well-established approach obtained from the integral response function, namely integral scaling; this level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second level of scaling is for the boundary flow of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4 height and 1/100 area ratio scaling, which corresponds to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time runs twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all major components of the SBWR (Simplified Boiling Water Reactor) and the safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
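The quoted ratios are mutually consistent, which a few lines of arithmetic make explicit (a sketch of the bookkeeping only, not of the scaling derivation): volume follows from height times area, model time scales as the square root of the height ratio, and the stated 1/200 power scale equals the volume ratio divided by the time ratio.

```python
# Consistency check of the PUMA scaling ratios quoted in the abstract.
height_ratio = 1 / 4
area_ratio = 1 / 100
volume_ratio = height_ratio * area_ratio          # 1/400, as stated
time_ratio = height_ratio ** 0.5                  # sqrt(1/4) = 1/2 -> model time runs 2x faster
power_ratio = 1 / 200                             # stated integral-scaling result
power_density_ratio = power_ratio / volume_ratio  # = 2 = 1 / time_ratio
```

So the 1/200 power scale is exactly what the 1/400 volume and the factor-of-two time compression require.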
Lee, Jiwon; Zhou, Menglian; Zhu, Hongbo; Nidetz, Robert; Kurabayashi, Katsuo; Fan, Xudong
2016-06-20
A photoionization detector (PID) is widely used as a gas chromatography (GC) detector. By virtue of its non-destructive nature, multiple PIDs can be used in multi-dimensional GC. However, different PIDs have different responsivities towards the same chemical compound at the same concentration or mass due to different aging conditions of the PID lamps and windows. Here, we carried out a systematic study of the response of 5 krypton μPIDs in a 1 × 4-channel 2-dimensional μGC system to 7 different volatile organic compounds (VOCs) with ionization potentials ranging from 8.45 eV to 10.08 eV and injected masses ranging from ∼1 ng to ∼2000 ng. We used one of the PIDs as the reference detector and calculated the calibration factor for each of the remaining 4 PIDs against the first PID, which we found to be quite uniform regardless of the analyte, its concentration, or chromatographic peak width. Based on the above observation, we were able to quantitatively reconstruct the coeluted peaks in the first dimension using the signal obtained with a PID array in the second dimension. Our work will enable rapid and in situ calibration of PIDs in a GC system using a single analyte at a single concentration. It will also lead to the development of multi-channel multi-dimensional GC where multiple PIDs are employed. PMID:27152367
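The calibration idea can be sketched in a few lines (all peak areas below are hypothetical numbers; the essential assumption, which the study supports, is that the factor for a given detector is the same for every analyte and amount):

```python
import numpy as np

# Hypothetical peak areas (arbitrary units) for the same injected masses
# of three analytes, measured on a reference PID and on a second PID.
ref = np.array([120.0, 480.0, 950.0])     # reference PID
pid2 = np.array([132.0, 528.0, 1045.0])   # ~10% more responsive detector

k2 = np.mean(pid2 / ref)                  # calibration factor vs. reference

# A coeluted first-dimension peak split between two second-dimension
# channels: dividing each channel by its factor puts both on the
# reference scale, so the areas can be summed into one reconstructed peak.
area_ref_channel, area_pid2_channel = 200.0, 220.0
reconstructed = area_ref_channel + area_pid2_channel / k2
```

Because the factor is analyte-independent, a single injection of one compound at one concentration suffices to re-derive `k2` in situ, which is the rapid-calibration claim of the abstract.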
Developmental Work Personality Scale: An Initial Analysis.
ERIC Educational Resources Information Center
Strauser, David R.; Keim, Jeanmarie
2002-01-01
The research reported in this article involved using the Developmental Model of Work Personality to create a scale to measure work personality, the Developmental Work Personality Scale (DWPS). Overall, results indicated that the DWPS may have potential applications for assessing work personality prior to client involvement in comprehensive…
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Allu, Srikanth; Velamur Asokan, Badri; Shelton, William A; Philip, Bobby; Pannala, Sreekanth
2014-01-01
A generalized three-dimensional computational model based on a unified formulation of the electrode-electrolyte-electrode system of an electric double-layer supercapacitor has been developed. The model accounts for charge transport across the solid-liquid system. The formulation, based on a volume-averaging process, is a widely used concept for multiphase flow equations ([28], [36]) and is analogous to the porous media theory typically employed for electrochemical systems [22], [39], [12]. This formulation is extended to the electrochemical equations for a supercapacitor in a consistent fashion, which allows for a single-domain approach with no need for explicit interfacial boundary conditions as previously employed ([38]). In this model it is easy to introduce spatio-temporal variations and anisotropies of physical properties, and it is also conducive to introducing upscaled parameters from lower length-scale simulations and experiments. Because of the irregular geometric configurations, including the porous electrodes, the charge transport and subsequent performance characteristics of the supercapacitor can be easily captured in higher dimensions. A generalized model of this nature also provides insight into the applicability of 1D models ([38]) and where multi-dimensional effects need to be considered. In addition, a simple sensitivity analysis on key input parameters is performed in order to ascertain the dependence of the charge and discharge processes on these parameters. Finally, we demonstrated how this new formulation can be applied to non-planar supercapacitors.
Dynamical scaling analysis of plant callus growth
NASA Astrophysics Data System (ADS)
Galeano, J.; Buceta, J.; Juarez, K.; Pumariño, B.; de la Torre, J.; Iriondo, J. M.
2003-07-01
We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in time. We expect our work to be relevant for the understanding and characterization of other systems that undergo growth due to cell division and differentiation, such as, for example, tumor development.
Failure Analysis of a Pilot Scale Melter
Imrich, K J
2001-09-14
Failure of the pilot-scale test melter resulted from severe overheating of the Inconel 690 jacketed molybdenum electrode. Extreme temperatures were required to melt the glass during this campaign because the feed material contained a very high waste loading.
A Scale Analysis of the Effects of US Federal Policy
ERIC Educational Resources Information Center
Pandya, Jessica Zacher
2012-01-01
In this essay I argue that the effects of federal policy can be examined through a scale analysis that helps deconstruct the effect of the current widespread accountability movement in the US educational system. I first discuss the concept of scale, including its thus-far limited use in educational research. I define scales not only as…
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
2013-01-01
Background A common characteristic of environmental epidemiology is the multi-dimensional aspect of exposure patterns, frequently reduced to a cumulative exposure for simplicity of analysis. By adopting a flexible Bayesian clustering approach, we explore the risk function linking exposure history to disease. This approach is applied here to study the relationship between different smoking characteristics and lung cancer in the framework of a population-based case-control study. Methods Our study includes 4658 males (1995 cases, 2663 controls) with full smoking history (intensity, duration, time since cessation, pack-years) from the ICARE multi-centre study conducted from 2001 to 2007. We extend Bayesian clustering techniques to explore predictive risk surfaces for covariate profiles of interest. Results We were able to partition the population into 12 clusters with different smoking profiles and lung cancer risk. Our results confirm that, when compared to intensity, duration is the predominant driver of risk. On the other hand, using pack-years of cigarette smoking as a single summary leads to a considerable loss of information. Conclusions Our method estimates the disease risk associated with a specific exposure profile by robustly accounting for the different dimensions of exposure and will be helpful in general to give further insight into the effect of exposures that are accumulated through different time patterns. PMID:24152389
Convective scale weather analysis and forecasting
NASA Technical Reports Server (NTRS)
Purdom, J. F. W.
1984-01-01
How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated, with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to frequently observe the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eye-balling or subjective observation. Crossover time scales so determined may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eye-balling observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
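A simplified version of such a scaling-identification fit can be sketched as a continuous two-segment ("hinge") regression in log-log coordinates, with the breakpoint chosen by least squares (synthetic data with a known crossover at scale 64; the paper's procedure additionally provides inference and confidence intervals, which this sketch omits):

```python
import numpy as np

# Synthetic log-log fluctuation curve: slope 0.5 below the crossover at
# scale 64, slope 1.0 above it, plus small noise.
rng = np.random.default_rng(1)
s = np.logspace(1, 3, 40)          # scales 10..1000
x = np.log(s)
xc_true = np.log(64.0)
y = np.where(x < xc_true, 0.5 * x, 0.5 * xc_true + 1.0 * (x - xc_true))
y += rng.normal(0.0, 0.01, x.size)

def sse_for_break(x, y, xb):
    """SSE and coefficients of a continuous two-segment fit hinged at xb."""
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - xb, 0.0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r), beta

breaks = x[2:-2]                   # candidate breakpoints (interior scales)
errs = [sse_for_break(x, y, xb)[0] for xb in breaks]
xb_hat = breaks[int(np.argmin(errs))]
crossover_scale = float(np.exp(xb_hat))   # estimated crossover time scale
```

Here `beta[1]` estimates the slope of the first regime and `beta[1] + beta[2]` that of the second, so the fit recovers both the crossover location and the two scaling exponents without any eye-balling.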
Technical Aspects for the Creation of a Multi-Dimensional Land Information System
NASA Astrophysics Data System (ADS)
Ioannidis, Charalabos; Potsiou, Chryssy; Soile, Sofia; Verykokou, Styliani; Mourafetis, George; Doulamis, Nikolaos
2016-06-01
The complexity of modern urban environments and civil demands for fast, reliable, and affordable decision-making require not only a 3D Land Information System, which tends to replace traditional 2D LIS architectures, but also addressing the time and scale parameters, that is, the 3D geometry of buildings at various time instances (4th dimension) and at various levels of detail (LoDs - 5th dimension). This paper describes and proposes solutions for technical aspects that need to be addressed in the 5D modelling pipeline. Such solutions include the creation of a 3D model, the application of a selective modelling procedure between various time instances and at various LoDs, enriched with cadastral and other spatial data, and a procedural modelling approach for the representation of the inner parts of the buildings. The methodology is based on automatic change detection algorithms for spatial-temporal analysis of the changes that took place in subsequent time periods, using dense image matching and structure-from-motion algorithms. The selective modelling approach allows detailed modelling only for the areas where spatial changes are detected. The procedural modelling techniques use programming languages for the textual semantic description of a building; they require the modeller to describe its part-to-whole relationships. Finally, a 5D viewer is developed in order to tackle existing limitations that accompany the use of global systems, such as Google Earth or Google Maps, as visualization software. An application of the proposed methodology in an urban area is presented and provides satisfactory results.
Multidimensional Scaling versus Components Analysis of Test Intercorrelations.
ERIC Educational Resources Information Center
Davison, Mark L.
1985-01-01
Considers the relationship between coordinate estimates in components analysis and multidimensional scaling. Reports three small Monte Carlo studies comparing nonmetric scaling solutions to components analysis. Results are related to other methodological issues surrounding research on the general ability factor, response tendencies in…
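The algebraic kinship between the two techniques that this comparison probes can be sketched directly: for distances d_ij = sqrt(2(1 - r_ij)) derived from a correlation matrix R, the double-centered matrix that classical (Torgerson) MDS eigendecomposes is exactly J R J, the same centered matrix a components analysis of R operates on (a numpy sketch on a hypothetical one-factor correlation matrix; the nonmetric scaling studied in the article differs in detail):

```python
import numpy as np

# Correlation matrix from a hypothetical one-factor model with loadings l.
l = np.array([0.8, 0.7, 0.6, 0.75, 0.65])
R = np.outer(l, l)
np.fill_diagonal(R, 1.0)

n = R.shape[0]
J = np.eye(n) - np.ones((n, n)) / n        # centering matrix

# Classical (Torgerson) MDS on distances d_ij = sqrt(2 (1 - r_ij)):
D2 = 2.0 * (1.0 - R)                       # squared distances
B = -0.5 * J @ D2 @ J                      # double-centered Gram matrix

# ...which equals the doubly centered correlation matrix itself.
assert np.allclose(B, J @ R @ J)

w, V = np.linalg.eigh(B)
coords = V[:, ::-1] * np.sqrt(np.maximum(w[::-1], 0.0))  # MDS coordinates
```

Because R is positive semi-definite, B is too, and the full-rank coordinates reproduce the squared distances exactly; differences between the methods in practice come from rank reduction, the nonmetric loss, and estimation error, which is what the Monte Carlo studies examine.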
Longitudinal Network Analysis Using Multidimensional Scaling.
ERIC Educational Resources Information Center
Barnett, George A.; Palmer, Mark T.
The Galileo System, a variant of metric multidimensional scaling, is used in this paper to analyze over-time changes in social networks. The paper first discusses the theoretical necessity for the use of this procedure and the methodological problems associated with its use. It then examines the air traffic network among 31 major cities in the…
NASA Astrophysics Data System (ADS)
Haider, M. M.; Rahman, O.
2016-07-01
An attempt has been made to study the multi-dimensional instability of dust-ion-acoustic (DIA) solitary waves (SWs) in magnetized multi-ion plasmas containing opposite-polarity ions, opposite-polarity dusts, and non-thermal electrons. We first derived the Zakharov-Kuznetsov (ZK) equation for DIA SWs using the reductive perturbation method, together with its solution. The small-k perturbation technique was then employed to find the instability criterion and growth rate of such waves, which can serve as a guideline for understanding space and laboratory plasmas, such as those in the D-region of the Earth's ionosphere, the mesosphere, and the solar photosphere, as well as in microelectronics plasma processing reactors.
NASA Astrophysics Data System (ADS)
Wu, Dapeng; He, Jinjin; Zhang, Shuo; Cao, Kun; Gao, Zhiyong; Xu, Fang; Jiang, Kai
2015-05-01
Multi-dimensional TiO2 hierarchical structures (MD-THS), assembled from mesoporous nanoribbons consisting of oriented, aligned nanocrystals, are prepared via thermal decomposition of a Ti-containing gelatin-like precursor. A unique bridge-linking mechanism is proposed to illustrate the formation process of the precursor. Moreover, the as-prepared MD-THS possesses a high surface area of ∼10^6 cm^2 g^-1, a broad pore size distribution from several nanometers to ∼100 nm, and oriented, assembled primary nanocrystals, which gives rise to a high CdS/CdSe quantum dot loading and inhibits carrier recombination in the photoanode. Thanks to these structural advantages, the cell derived from MD-THS demonstrates a power conversion efficiency (PCE) of 4.15%, representing a ∼36% improvement compared with that of the nanocrystal-based cell, which points to the promising application of MD-THS as a photoanode material in quantum-dot-sensitized solar cells.
NASA Astrophysics Data System (ADS)
Grenga, Temistocle
The aim of this research is to further develop a dynamically adaptive, wavelet-based algorithm able to efficiently solve multi-dimensional compressible reactive flow problems. This work demonstrates the great potential of the method for performing direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine advanced numerical techniques with modern computational resources.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bran R. (Technical Monitor)
2002-01-01
We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], combined with the fluxes presented in [3]. These interpolants use the weighted essentially non-oscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those of previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two, and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity, and examine the connection between this method and that in [2].
NASA Astrophysics Data System (ADS)
Walton, Jay R.; Rivera-Rivera, Luis A.; Lucchese, Robert R.; Bevan, John W.
2016-05-01
A canonical approach is used to investigate prototypical multi-dimensional intermolecular interaction potentials characteristic of categories in van der Waals, hydrogen-bonded, and halogen-bonded intermolecular interactions. It is demonstrated that well-characterized potentials in Ar·HI, OC·HI, OC·HF, and OC·BrCl, can be canonically transformed to a common dimensionless potential with relative error less than 0.010. The results indicate common intrinsic bonding properties despite other varied characteristics in the systems investigated. The results of these studies are discussed in the context of the previous statement made by Slater (1972) concerning fundamental bonding properties in the categories of interatomic interactions analyzed.
EL-Shamy, E. F.
2014-08-15
The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) are investigated in magnetoplasmas consisting of electrons, positrons, and ions, with high-energy (superthermal) electrons and positrons. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. Clearly, the study of the model under consideration is helpful for explaining the propagation and instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.
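For reference, the ZK equation produced by this kind of reductive perturbation analysis has the generic form below (a sketch; the coefficients a, b, c stand in for the nonlinearity and dispersion coefficients, which in the paper depend on the superthermality parameter, the ion gyrofrequency, the density and temperature ratios, and the direction cosine):

```latex
\frac{\partial \phi}{\partial t}
  + a\,\phi\,\frac{\partial \phi}{\partial \xi}
  + b\,\frac{\partial^{3} \phi}{\partial \xi^{3}}
  + c\,\frac{\partial}{\partial \xi}\!\left(
      \frac{\partial^{2} \phi}{\partial \eta^{2}}
      + \frac{\partial^{2} \phi}{\partial \zeta^{2}}
    \right) = 0 ,
\qquad
\phi_{0}(\xi,t) = \phi_{m}\,\operatorname{sech}^{2}\!\frac{\xi - u_{0}t}{\Delta},
\quad
\phi_{m} = \frac{3u_{0}}{a}, \qquad \Delta = \sqrt{\frac{4b}{u_{0}}} .
```

The small-k method then perturbs the plane solitary solution with long-wavelength transverse modes and asks under what conditions they grow, which yields the instability criterion and growth rate quoted above.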
Discrete implementations of scale transform
NASA Astrophysics Data System (ADS)
Djurdjanovic, Dragan; Williams, William J.; Koh, Christopher K.
1999-11-01
Scale as a physical quantity is a recently developed concept. The scale transform can be viewed as a special case of the more general Mellin transform, and its mathematical properties are very applicable in the analysis and interpretation of signals subject to scale changes. A number of one-dimensional applications of the scale concept have been made in speech analysis, processing of biological signals, machine vibration analysis, and other areas. Recently, the scale transform was also applied in multi-dimensional signal processing and used for image filtering and denoising. Discrete implementation of the scale transform can be carried out using logarithmic sampling and the well-known fast Fourier transform. Nevertheless, in the case of uniformly sampled signals, this implementation involves resampling. An algorithm not involving resampling of uniformly sampled signals has also been derived. In this paper, a modification of the latter algorithm for discrete implementation of the direct scale transform is presented. In addition, a similar concept was used to improve a recently introduced discrete implementation of the inverse scale transform. Estimation of the absolute discretization errors showed that the modified algorithms have the desirable property of yielding a smaller region of possible error magnitudes. Experimental results are obtained using artificial signals as well as signals evoked from the temporomandibular joint. In addition, discrete implementations for the separable two-dimensional direct and inverse scale transforms are derived. Experiments with image restoration and scaling through the two-dimensional scale domain using the novel implementation of the separable two-dimensional scale transform pair are presented.
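The logarithmic-sampling implementation mentioned above can be sketched directly from the definition: the scale transform is the Mellin transform restricted to p = 1/2 + jc, and the substitution u = ln t turns it into a Fourier transform of f(e^u) e^{u/2}. The sketch below (illustrative signal and parameters, not the paper's resampling-free algorithm) checks the defining property that a time scaling changes only the phase, not the magnitude, of D(c):

```python
import numpy as np

def scale_transform(f, t_min=1e-8, t_max=50.0, n=2**13):
    """Samples of the scale transform
    D(c) = (2*pi)^(-1/2) * integral_0^inf f(t) t^(-jc - 1/2) dt,
    via logarithmic sampling: with u = ln t this is the Fourier
    transform of g(u) = f(e^u) * e^(u/2), evaluated here by FFT."""
    u = np.linspace(np.log(t_min), np.log(t_max), n)
    du = u[1] - u[0]
    g = f(np.exp(u)) * np.exp(u / 2)
    return np.fft.fft(g) * du / np.sqrt(2 * np.pi)  # bin m <-> c = 2*pi*m/(n*du)

f = lambda t: np.exp(-t)                 # test signal
a = 2.0
fa = lambda t: np.sqrt(a) * f(a * t)     # time-scaled, energy-normalized copy

D1, D2 = scale_transform(f), scale_transform(fa)
# Scale invariance: D2(c) = a^{jc} D1(c), so the magnitudes agree.
err = float(np.max(np.abs(np.abs(D1) - np.abs(D2))))
```

The phase factor a^{jc} is a unit complex number, so |D(c)| is invariant under time scaling; this is the property that makes the scale domain useful for scale-robust filtering and image registration.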
Rose, Donald; Bodor, J Nicholas; Hutchinson, Paul L; Swalm, Chris M
2010-06-01
Research on neighborhood food access has focused on documenting disparities in the food environment and on assessing the links between the environment and consumption. Relatively few studies have combined in-store food availability measures with geographic mapping of stores. We review research that has used these multi-dimensional measures of access to explore the links between the neighborhood food environment and consumption or weight status. Early research in California found correlations between red meat, reduced-fat milk, and whole-grain bread consumption and shelf space availability of these products in area stores. Subsequent research in New York confirmed the low-fat milk findings. Recent research in Baltimore has used more sophisticated diet assessment tools and store-based instruments, along with controls for individual characteristics, to show that low availability of healthy food in area stores is associated with low-quality diets of area residents. Our research in southeastern Louisiana has shown that shelf space availability of energy-dense snack foods is positively associated with BMI after controlling for individual socioeconomic characteristics. Most of this research is based on cross-sectional studies. To assess the direction of causality, future research testing the effects of interventions is needed. We suggest that multi-dimensional measures of the neighborhood food environment are important to understanding these links between access and consumption. They provide a more nuanced assessment of the food environment. Moreover, given the typical duration of research project cycles, changes to in-store environments may be more feasible than changes to the overall mix of retail outlets in communities. PMID:20410084
FACTOR ANALYSIS OF THE ELKINS HYPNOTIZABILITY SCALE
Elkins, Gary; Johnson, Aimee K.; Johnson, Alisa J.; Sliwinski, Jim
2015-01-01
Assessment of hypnotizability can provide important information for hypnosis research and practice. The Elkins Hypnotizability Scale (EHS) consists of 12 items and was developed to provide a time-efficient measure for use in both clinical and laboratory settings. The EHS has been shown to be a reliable measure with support for convergent validity with the Stanford Hypnotic Susceptibility Scale, Form C (r = .821, p < .001). The current study examined the factor structure of the EHS, which was administered to 252 adults (51.3% male; 48.7% female). Average time of administration was 25.8 minutes. Four factors selected on the basis of the best theoretical fit accounted for 63.37% of the variance. The results of this study provide an initial factor structure for the EHS. PMID:25978085
Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis
Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.
2003-01-01
This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable nature of a real WTP, its diagnosis is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by the clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
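The KSOFM machinery can be sketched in plain numpy (a minimal self-organising map trained on synthetic three-regime "process" data; the variables and regimes are illustrative, not the plant's actual sensors). After training, each column of the weight matrix reshaped onto the map grid is a component plane for one variable, and the drop in quantization error indicates the map has organised around the operating regimes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical multi-dimensional process data: three operating regimes
# in 4 variables (illustrative stand-ins for plant measurements).
centers = np.array([[0, 0, 0, 0], [3, 3, 0, 1], [0, 3, 3, -1]], float)
data = np.vstack([c + 0.3 * rng.standard_normal((60, 4)) for c in centers])

# A minimal Kohonen SOM: 6x6 map, Gaussian neighbourhood, decaying rates.
gx, gy = np.meshgrid(np.arange(6), np.arange(6), indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)  # node coords
W = rng.standard_normal((36, 4))                                # node weights

def qerror(W, data):
    """Mean distance from each sample to its best-matching unit."""
    d = np.linalg.norm(data[:, None, :] - W[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

e0 = qerror(W, data)
n_iter = 2000
for it in range(n_iter):
    x = data[rng.integers(len(data))]
    bmu = int(np.argmin(((W - x) ** 2).sum(axis=1)))  # best-matching unit
    frac = it / n_iter
    sigma = 3.0 * (0.05 / 3.0) ** frac                # neighbourhood decay
    lr = 0.5 * (0.01 / 0.5) ** frac                   # learning-rate decay
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)

e1 = qerror(W, data)            # quantization error after training
planes = W.reshape(6, 6, 4)     # planes[:, :, k] = component plane of variable k
```

Inspecting `planes[:, :, k]` side by side is the component-plane analysis of the abstract: correlated variables produce similar planes, and clusters of map nodes correspond to operating conditions.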
Computational methods for criticality safety analysis within the scale system
Parks, C.V.; Petrie, L.M.; Landers, N.F.; Bucholz, J.A.
1986-01-01
The criticality safety analysis capabilities within the SCALE system are centered around the Monte Carlo codes KENO IV and KENO V.a, which are both included in SCALE as functional modules. The XSDRNPM-S module is also an important tool within SCALE for obtaining multiplication factors for one-dimensional system models. This paper reviews the features and modeling capabilities of these codes along with their implementation within the Criticality Safety Analysis Sequences (CSAS) of SCALE. The CSAS modules provide automated cross-section processing and user-friendly input that allow criticality safety analyses to be done in an efficient and accurate manner. 14 refs., 2 figs., 3 tabs.
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
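Steps 1)-4) of the LV method translate directly into code. Below is a toy numpy sketch, with a synthetic correlated surface standing in for a DEM and block-averaging standing in for the resampling step; it is an illustration of the workflow, not the authors' software:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_variance(grid):
    """Step 2: mean standard deviation within a 3x3 moving window."""
    win = sliding_window_view(grid, (3, 3))       # (H-2, W-2, 3, 3) view
    return win.std(axis=(2, 3)).mean()

def upscale(grid, factor):
    """Step 1 (crudely): up-scale by block-averaging cells."""
    h = (grid.shape[0] // factor) * factor
    w = (grid.shape[1] // factor) * factor
    blocks = grid[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# steps 1-4: LV across scale levels, then its rate of change (ROC-LV);
# peaks in roc_lv would mark characteristic scales
rng = np.random.default_rng(0)
dem = rng.normal(size=(128, 128)).cumsum(0).cumsum(1)  # toy correlated surface
levels = [1, 2, 4, 8]
lv = np.array([local_variance(upscale(dem, f)) for f in levels])
roc_lv = 100.0 * np.diff(lv) / lv[:-1]  # percent change between levels
```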
Lee, Chin-Pang; Chen, Yu; Jiang, Kun-Hao; Chu, Chun-Lin; Chiu, Yu-Wen; Chen, Jiun-Liang; Chen, Ching-Yen
2016-06-01
The aim of this study was to develop a psychometrically sound short version of the 17-item Aging Males' Symptoms (AMS) scale using Mokken scale analysis (MSA) and Rasch analysis. We recruited a convenience sample of 1787 men (age: mean (SD) = 43.8 (11.5) years) who visited a men's health polyclinic in Taiwan and completed the AMS scale. The scale was first assessed using MSA. The remaining items were assessed using Rasch analysis. We used a stepwise approach to remove items based on χ² item statistics and mean square values while monitoring unidimensionality. The item reduction process resulted in a 6-item version of the AMS scale (AMS-6). The AMS-6 scale included a 5-item psychosomatic subscale (original items 1, 4, 5, 8, and 9) and a 1-item sexual subscale (original item 16). Analyses confirmed that the 5-item psychosomatic subscale was a Rasch scale. The AMS-6 correlated well with the original AMS scale: the 5-item psychosomatic subscale correlated with the AMS scale (r between 0.50 and 0.92), and the 1-item sexual subscale correlated with the sexual subscale of the AMS scale (r = 0.81). A 6-item short form of the AMS scale had satisfactory measurement properties. This version may be useful for estimating psychosomatic and sexual symptoms as well as health-related quality of life with a minimal burden on respondents. PMID:26984738
Efficient High Order Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations: Talk Slides
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Brian R. (Technical Monitor)
2002-01-01
This viewgraph presentation presents information on the attempt to produce high-order, efficient, central methods that scale well to high dimension. The central philosophy is that the equations should evolve to the point where the data is smooth. This is accomplished by a cyclic pattern of reconstruction, evolution, and re-projection. One-dimensional and two-dimensional representation methods are also detailed.
Evidence for a Multi-Dimensional Latent Structural Model of Externalizing Disorders
ERIC Educational Resources Information Center
Witkiewitz, Katie; King, Kevin; McMahon, Robert J.; Wu, Johnny; Luk, Jeremy; Bierman, Karen L.; Coie, John D.; Dodge, Kenneth A.; Greenberg, Mark T.; Lochman, John E.; Pinderhughes, Ellen E.
2013-01-01
Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the…
Ding, Cody S
2005-02-01
Although multidimensional scaling (MDS) profile analysis is widely used to study individual differences, there is no objective way to evaluate the statistical significance of the estimated scale values. In the present study, a resampling technique (bootstrapping) was used to construct confidence limits for scale values estimated from MDS profile analysis. These bootstrap confidence limits were used, in turn, to evaluate the significance of marker variables of the profiles. The results from analyses of both simulation data and real data suggest that the bootstrap method may be valid and may be used to evaluate hypotheses about the statistical significance of marker variables of MDS profiles. PMID:16097342
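The bootstrap idea in this abstract can be sketched generically. Here classical (Torgerson) MDS stands in for the profile-analysis model, persons are resampled with replacement, and axis reflections are crudely aligned to the reference solution before percentile limits are taken; the published procedure differs in detail, so treat this as an illustration only:

```python
import numpy as np

def var_sqdist(X):
    """Squared Euclidean distances between variables (columns) across persons."""
    return ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)

def classical_mds(D2, k=2):
    """Classical (Torgerson) MDS from a squared-distance matrix."""
    n = len(D2)
    J = np.eye(n) - 1.0 / n                  # centering matrix
    B = -0.5 * J @ D2 @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                # persons x variables
ref = classical_mds(var_sqdist(X))           # reference scale values

# bootstrap: resample persons, re-embed the variables, align reflections to
# the reference, then take percentile confidence limits for each scale value
boots = []
for _ in range(200):
    Xb = X[rng.integers(0, len(X), len(X))]
    Yb = classical_mds(var_sqdist(Xb))
    sign = np.sign((Yb * ref).sum(axis=0))   # crude reflection alignment
    boots.append(Yb * np.where(sign == 0, 1.0, sign))
lo, hi = np.percentile(np.stack(boots), [2.5, 97.5], axis=0)
```

A marker variable would then be judged significant when its confidence interval excludes zero on the relevant dimension.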
ERIC Educational Resources Information Center
Lin, Tzung-Jin; Tan, Aik Ling; Tsai, Chin-Chung
2013-01-01
Due to the scarcity of cross-cultural comparative studies in exploring students' self-efficacy in science learning, this study attempted to develop a multi-dimensional science learning self-efficacy (SLSE) instrument to measure 316 Singaporean and 303 Taiwanese eighth graders' SLSE and further to examine the differences between the two…
Component Cost Analysis of Large Scale Systems
NASA Technical Reports Server (NTRS)
Skelton, R. E.; Yousuff, A.
1982-01-01
The ideas of cost decomposition are summarized to aid in the determination of the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.
Rasch Analysis of the Geriatric Depression Scale--Short Form
ERIC Educational Resources Information Center
Chiang, Karl S.; Green, Kathy E.; Cox, Enid O.
2009-01-01
Purpose: The purpose of this study was to examine scale dimensionality, reliability, invariance, targeting, continuity, cutoff scores, and diagnostic use of the Geriatric Depression Scale-Short Form (GDS-SF) over time with a sample of 177 English-speaking U.S. elders. Design and Methods: An item response theory, Rasch analysis, was conducted with…
A variational principle for compressible fluid mechanics: Discussion of the multi-dimensional theory
NASA Technical Reports Server (NTRS)
Prozan, R. J.
1982-01-01
The variational principle for compressible fluid mechanics previously introduced is extended to two dimensional flow. The analysis is stable, exactly conservative, adaptable to coarse or fine grids, and very fast. Solutions for two dimensional problems are included. The excellent behavior and results lend further credence to the variational concept and its applicability to the numerical analysis of complex flow fields.
SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION
John Crepeau
2011-03-01
Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.
Scaling analysis of multi-variate intermittent time series
NASA Astrophysics Data System (ADS)
Kitt, Robert; Kalda, Jaan
2005-08-01
The scaling properties of the time series of asset prices and trading volumes of stock markets are analysed. It is shown that, similar to the asset prices, the trading volume data obey a multi-scaling length distribution of low-variability periods. In the case of asset prices, such scaling behaviour can be used for risk forecasts: the probability of observing a large price movement the next day is (super-universally) inversely proportional to the length of the ongoing low-variability period. Finally, a method is devised for a multi-factor scaling analysis. We apply the simplest, two-factor model to equity index and trading volume time series.
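The conditioning on the ongoing low-variability period can be sketched as simple run-length bookkeeping. The thresholds and the heavy-tailed toy series below are illustrative assumptions; with an i.i.d. series the conditional probability is roughly flat, and only real market data would show the inverse-proportionality the abstract reports:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.standard_t(df=3, size=20_000) * 0.01   # toy heavy-tailed daily returns
low = np.abs(r) < 0.01                         # "low-variability" day
big = np.abs(r) > 0.03                         # "large movement" day

# length of the low-variability run ongoing just before each day
run = np.zeros(len(r), dtype=int)
for t in range(1, len(r)):
    run[t] = run[t - 1] + 1 if low[t - 1] else 0

# empirical P(large move today | ongoing run length = L)
p_given_L = {L: big[run == L].mean() for L in (1, 2, 4, 8) if (run == L).any()}
```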
Scaling Limit Analysis of Borromean Halos
NASA Astrophysics Data System (ADS)
Souza, L. A.; Bellotti, F. F.; Frederico, T.; Yamashita, M. T.; Tomio, Lauro
2016-05-01
The analysis of the core recoil momentum distribution of neutron-rich isotopes of light exotic nuclei is performed within a model of halo nuclei described by a core and two neutrons dominated by the s-wave channel. We adopt the renormalized three-body model with a zero-range force, which accounts for the Efimov physics. This model is applicable to nuclei with large two-neutron halos compared to the core size, and a neutron-core scattering length larger than the interaction range. The halo wave function in momentum space is obtained by using as inputs the two-neutron separation energy and the energies of the singlet neutron-neutron and neutron-core virtual states. Within our model, we obtain the momentum probability densities for the Borromean exotic nuclei Lithium-11 (^{11}Li), Beryllium-14 (^{14}Be) and Carbon-22 (^{22}C). A fair reproduction of the experimental data was obtained for the core recoil momentum distributions of ^{11}Li and ^{14}Be, without free parameters. By extending the model to ^{22}C, the combined analysis of the core momentum distribution and matter radius suggests (i) a ^{21}C virtual state well below 1 MeV; (ii) an overestimation of the extracted ^{22}C matter radius; and (iii) a two-neutron separation energy between 100 and 400 keV.
Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model
NASA Technical Reports Server (NTRS)
Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)
1999-01-01
We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.
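For reference, the density-gradient correction mentioned above is commonly written in the literature as a gradient term added to the carrier equation of state (this is the standard textbook form, not quoted from the abstract; conventions for the sign and a possible empirical factor in b_n vary):

```latex
% density-gradient quantum correction: the electron equation of state gains a
% gradient term, which produces the carrier-profile smoothing and tunneling
\Lambda_n \;=\; 2\,b_n\,\frac{\nabla^{2}\sqrt{n}}{\sqrt{n}},
\qquad b_n \;=\; \frac{\hbar^{2}}{12\,q\,m_n}
```

Here n is the electron density, m_n an effective mass, and Λ_n enters the drift-diffusion equations as an added potential.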
Scaling Analysis of Nanoelectromechanical Memory Devices
NASA Astrophysics Data System (ADS)
Nagami, Tasuku; Tsuchiya, Yoshishige; Uchida, Ken; Mizuta, Hiroshi; Oda, Shunri
2010-04-01
Numerical simulation of electromechanical switching for bistable bridges in non-volatile nanoelectromechanical (NEM) memory devices suggests that the memory performance is enhanced by decreasing the suspended floating-gate length. By conducting a two-dimensional finite-element electromechanical simulation combined with a drift-diffusion analysis, we analyze the electromechanical switching operation of miniaturized structures. By shrinking the NEM floating-gate length from 1000 to 100 nm, the switching (set/reset) voltage is reduced from 7.2 to 2.8 V, the switching time from 63 to 4.6 ns, and the power consumption from 16.9 to 0.13 fJ. This indicates the advantage of fast, low-power memory characteristics.
NASA Astrophysics Data System (ADS)
Selcuk, Nevin
1993-02-01
Four flux-type models for radiative heat transfer in cylindrical configurations were applied to the prediction of the radiative flux density and source term of a cylindrical enclosure problem, based on data reported previously for a pilot-scale experimental combustor with steep temperature gradients. The models, namely the Schuster-Hamaker-type four-flux model derived by Lockwood and Spalding, the two Schuster-Schwarzschild-type four-flux models derived by Siddall and Selcuk and by Richter and Quack, and the spherical harmonics approximation, were evaluated for predictive accuracy by comparing their predictions with exact solutions produced previously. The comparisons showed that the spherical harmonics approximation produces more accurate results than the other models with respect to the radiative energy source term, and that the four-flux models of Lockwood and Spalding and of Siddall and Selcuk for an isotropic radiation field are more accurate with respect to the prediction of the radiative flux density to the side wall.
Ultralow-velocity zone geometries resolved by multi-dimensional waveform modeling
NASA Astrophysics Data System (ADS)
Vanacore, E. A.; Rost, S.; Thorne, M. S.
2016-03-01
Ultra-low velocity zones (ULVZs) are thin patches of material with strongly reduced seismic wave speeds situated on top of the core-mantle boundary (CMB). A common phase used to detect ULVZs is SPdKS (SKPdS), an SKS wave with a short diffracted P leg along the CMB. Most previous efforts have examined ULVZ properties using 1D waveform modeling approaches. We present waveform modeling results using the 2.5D finite difference algorithm PSVaxi allowing us better insight into ULVZ structure and location. We characterize ULVZ waveforms based on ULVZ elastic properties, shape, and position along the SPdKS raypath. In particular, we vary the ULVZ location (e.g. source or receiver side), ULVZ topographical profiles (e.g. boxcar, trapezoidal, or Gaussian) and ULVZ lateral scale along great circle path (2.5°, 5°, 10°). We observe several waveform effects absent in 1D ULVZ models and show evidence for waveform effects allowing the differentiation between source and receiver side ULVZs. Early inception of the SPdKS/SKPdS phase is difficult to detect for receiver-side ULVZs with maximum shifts in SKPdS initiation of ~3° in epicentral distance, whereas source-side ULVZs produce maximum shifts of SPdKS initiation of ~5°, allowing clear separation of source- versus receiver-side structure. We present a case study using data from up to 300 broadband stations in Turkey recorded between 2005 and 2010. We observe a previously undetected ULVZ in the southern Atlantic Ocean region centered near 45°S, 12.5°W, with a lateral scale of ~3°, VP reduction of 10%, VS reduction of 30%, and density increase of 10% relative to PREM.
NASA Astrophysics Data System (ADS)
Guo, Z.; Wei, W.; Egbert, G. D.
2015-12-01
Although electrical anisotropy is likely at various scales in the Earth, present 3D inversion codes only allow for isotropic models. In fact, any effects of anisotropy present in real data can always be accommodated by (possibly fine-scale) isotropic structures. This suggests that some complex structures found in 3D inverse solutions (e.g., the alternating elongate conductive and resistive "streaks" of Meqbel et al. (2014)) may actually represent anisotropic layers. As a step towards better understanding how anisotropy is manifest in 3D inverse models, and to better incorporate anisotropy in 3D MT interpretations, we have implemented new 1D, 2D, and 3D forward modeling codes which allow for general anisotropy and are implemented in MATLAB using an object-oriented (OO) approach. The 1D code is used primarily to provide boundary conditions (BCs). For the 2D case we have used the OO approach to quickly develop and compare several variants, including different formulations (three coupled electric field components; one electric and one magnetic component coupled) and different discretizations (staggered and fixed grids). The 3D case is implemented in integral form on a staggered grid, using either 1D or 2D BCs. Iterative solvers, including divergence correction, allow solution for large model grids. As an initial application of these codes we are conducting synthetic inversion tests. We construct test models by replacing streaky conductivity layers, as found at the top of the mantle in the EarthScope models of Meqbel et al. (2014), with simpler smoothly varying anisotropic layers. The modeling process is iterated to obtain a reasonable match to actual data. Synthetic data generated from these 3D anisotropic models can then be inverted with a 3D code (ModEM) and compared to the inversions obtained with actual data. Results will be assessed, taking into account the diffusive nature of EM imaging, to better understand how actual anisotropy is mapped to structure by 3D
Multi-Dimensional Evaluation for Module Improvement: A Mathematics-Based Case Study
ERIC Educational Resources Information Center
Ellery, Karen
2006-01-01
Due to a poor module evaluation, mediocre student grades and a difficult teaching experience in lectures, the Data Analysis section of a first year core module, Research Methods for Social Sciences (RMSS), offered at the University of KwaZulu-Natal in South Africa, was completely revised. In order to review the effectiveness of these changes in…
Voice Dysfunction in Dysarthria: Application of the Multi-Dimensional Voice Program.
ERIC Educational Resources Information Center
Kent, R. D.; Vorperian, H. K.; Kent, J. F.; Duffy, J. R.
2003-01-01
Part 1 of this paper recommends procedures and standards for the acoustic analysis of voice in individuals with dysarthria. In Part 2, acoustic data are reviewed for dysarthria associated with Parkinson disease (PD), cerebellar disease, amytrophic lateral sclerosis, traumatic brain injury, unilateral hemispheric stroke, and essential tremor.…
Psychometric Analysis of Role Conflict and Ambiguity Scales in Academia
ERIC Educational Resources Information Center
Khan, Anwar; Yusoff, Rosman Bin Md.; Khan, Muhammad Muddassar; Yasir, Muhammad; Khan, Faisal
2014-01-01
A comprehensive psychometric analysis of Rizzo et al.'s (1970) Role Conflict & Ambiguity (RCA) scales was performed after their distribution among 600 academic staff working in six universities of Pakistan. The reliability analysis includes calculation of Cronbach's alpha coefficients and inter-item statistics, whereas validity was determined by…
A Statistical Analysis of the Charles F. Kettering Climate Scale.
ERIC Educational Resources Information Center
Johnson, William L.; Dixon, Paul N.
A statistical analysis was performed on the Charles F. Kettering (CFK) Scale, a popular four-section measure of school climate. The study centered on a multivariate analysis of Part A, the General Climate Factors section of the instrument, using data gathered from several elementary, junior high, and high school campuses in a large school district…
Honeycomb: Visual Analysis of Large Scale Social Networks
NASA Astrophysics Data System (ADS)
van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.
The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of this data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not well suited to the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger-scale data (with millions of connections), which we illustrate by using a large corporate social networking site as an example. Additionally, we introduce a new probability-based network metric to guide users to potentially interesting or anomalous patterns, and discuss lessons learned during design and implementation.
An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles
NASA Technical Reports Server (NTRS)
Brown, Clifford; Bridges, James
2003-01-01
Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.
NASA Astrophysics Data System (ADS)
Verma, Sanjeet K.; Oliveira, Elson P.
2015-03-01
Fifteen multi-dimensional diagrams for basic and ultrabasic rocks, based on log-ratio transformations, were used to infer the tectonic setting for eight case studies of the Borborema Province, NE Brazil. The applications of these diagrams indicated the following results: (1) a mid-ocean ridge setting for the Forquilha eclogites (Central Ceará domain) during the Mesoproterozoic; (2) an oceanic plateau setting for the Algodões amphibolites (Central Ceará domain) during the Paleoproterozoic; (3) an island arc setting for the Brejo Seco amphibolites (Riacho do Pontal belt) during the Proterozoic; (4) an island arc to mid-ocean ridge setting for greenschists of the Monte Orebe Complex (Riacho do Pontal belt) during the Neoproterozoic; (5) a within-plate (continental) setting for the Vaza Barris domain mafic rocks (Sergipano belt) during the Neoproterozoic; (6) a less precise arc to continental rift setting for the Gentileza unit metadiorite/gabbro (Sergipano belt) during the Neoproterozoic; (7) an island arc setting for the Novo Gosto unit metabasalts (Sergipano belt) during the Neoproterozoic; and (8) a continental rift setting for the Rio Grande do Norte basic rocks during the Miocene.
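The log-ratio transformation underlying such discrimination diagrams is simple to compute. A sketch using the centred log-ratio (clr) follows; the published diagrams use specific ilr/alr combinations of major elements, and the composition below is purely illustrative:

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform of a composition (parts summing to any constant)."""
    x = np.asarray(x, float)
    g = np.exp(np.log(x).mean(axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

# hypothetical basalt major-element composition (wt%), illustrative values only:
# SiO2, TiO2, Al2O3, Fe2O3t, MgO, CaO, Na2O, K2O
comp = np.array([49.5, 1.8, 14.2, 11.3, 7.1, 10.9, 2.6, 0.4])
z = clr(comp)
```

A key property exploited by discriminant functions on log-ratio coordinates is that clr coordinates sum to zero, removing the closure constraint of percentage data.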
NASA Astrophysics Data System (ADS)
Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young
2016-07-01
Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, the optical network and the processing unit cloud have been decoupled from each other, so their resources are controlled independently. With the growing number of mobile internet users, the traditional architecture cannot implement the resource optimization and scheduling needed for high-level service guarantees, owing to the communication obstacles among the three domains. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load is also quantitatively evaluated in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes.
Yang, Hui; Zhang, Jie; Ji, Yuefeng; He, Yongqi; Lee, Young
2016-01-01
Cloud radio access network (C-RAN) has become a promising scenario for accommodating high-performance services with ubiquitous user coverage and real-time cloud computing in the 5G era. However, the radio network, the optical network and the processing unit cloud have been decoupled from each other, so their resources are controlled independently. With the growing number of mobile internet users, the traditional architecture cannot implement the resource optimization and scheduling needed for high-level service guarantees, owing to the communication obstacles among the three domains. In this paper, we report a study on multi-dimensional resources integration (MDRI) for service provisioning in a cloud radio over fiber network (C-RoFN). A resources integrated provisioning (RIP) scheme using an auxiliary graph is introduced based on the proposed architecture. The MDRI can enhance the responsiveness to dynamic end-to-end user demands and globally optimize radio frequency, optical network and processing resources to maximize radio coverage. The feasibility of the proposed architecture is experimentally verified on an OpenFlow-based enhanced SDN testbed. The performance of the RIP scheme under a heavy traffic load is also quantitatively evaluated in terms of resource utilization, path blocking probability, network cost and path provisioning latency, compared with other provisioning schemes. PMID:27465296
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite-volume approach to developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two-dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two-dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, a flux contribution having been calculated at each point in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues considered in this two-dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary-value problems with solid walls.
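The cell-average reconstruction this abstract builds on can be sketched in one dimension. Below is a minimal third-order ENO interface reconstruction using the standard three-cell stencil coefficients (Shu's tables) and undivided differences for the adaptive stencil choice; it illustrates the technique generically, not the paper's two-dimensional finite-volume code:

```python
import numpy as np

# interface-value coefficients for the three 3-cell stencils, keyed by the
# stencil's left shift r relative to cell i
COEF = {2: (1/3, -7/6, 11/6),   # cells {i-2, i-1, i}
        1: (-1/6, 5/6, 1/3),    # cells {i-1, i, i+1}
        0: (1/3, 5/6, -1/6)}    # cells {i, i+1, i+2}

def eno3_interface(v, i):
    """ENO reconstruction of the point value at x_{i+1/2} from cell averages v."""
    l = i                                 # stencil currently spans cells [l, l+m]
    for m in range(2):                    # grow the stencil from 1 to 3 cells
        d = np.diff(v, n=m + 1)           # undivided differences of order m+1
        if abs(d[l - 1]) < abs(d[l]):     # extend toward the smoother side
            l -= 1
    r = i - l
    return sum(c * v[l + j] for j, c in enumerate(COEF[r]))

# the 3-cell reconstruction is exact for quadratics: recover x^2 at an
# interface from its exact cell averages on a uniform grid
dx = 0.1
edges = np.linspace(0.0, 1.0, 11)
avg = (edges[1:] ** 3 - edges[:-1] ** 3) / (3 * dx)  # cell averages of x^2
val = eno3_interface(avg, 5)                          # value at x = edges[6]
```

Near a discontinuity the difference test steers the stencil away from the jump, which is what suppresses the oscillations of fixed-stencil interpolation.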
NASA Astrophysics Data System (ADS)
Chaudhury, Pinaki; Bhattacharyya, S. P.
1999-03-01
It is demonstrated that a Genetic Algorithm in a floating-point realisation can be a viable tool for locating critical points on a multi-dimensional potential energy surface (PES). For small clusters, the standard algorithm works well. For bigger ones, the search for the global minimum becomes more efficient when used in conjunction with coordinate stretching, and partitioning of the strings into a core part and an outer part which are alternately optimized. The method works with equal facility for locating minima, local as well as global, and saddle points (SPs) of arbitrary orders. The search for minima requires computation of the gradient vector, but not the Hessian, while that for SPs requires the gradient vector and the Hessian, the latter only at some specific points on the path. The method proposed is tested on (i) a model 2D PES, (ii) argon clusters (Ar4-Ar30) in which the argon atoms interact via a Lennard-Jones potential, and (iii) ArmX (m = 12) clusters, where X may be a neutral atom or a cation. We also explore whether the method could be used to construct what may be called a stochastic representation of the reaction path on a given PES, with reference to conformational changes in Arn clusters.
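A floating-point GA searching for minima on a model 2-D surface can be sketched as follows. This is a generic real-coded GA with blend crossover, Gaussian mutation and elitism, with Himmelblau's function standing in for a PES; it does not reproduce the authors' algorithm or their coordinate-stretching and core/outer partitioning scheme:

```python
import numpy as np

def himmelblau(p):
    """Model 2-D surface with four global minima of value 0."""
    x, y = p[..., 0], p[..., 1]
    return (x**2 + y - 11) ** 2 + (x + y**2 - 7) ** 2

def ga_minimize(f, bounds, pop_size=60, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(gens):
        pop = pop[np.argsort(f(pop))]                  # sort by fitness (elitist)
        kids = []
        while len(kids) < pop_size - 10:
            i, j = rng.integers(0, pop_size // 2, 2)   # parents from best half
            w = rng.uniform()
            child = w * pop[i] + (1 - w) * pop[j]      # blend crossover
            child += rng.normal(0.0, 0.1, 2)           # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        pop = np.vstack([pop[:10], kids])              # keep 10 elites
    return pop[np.argmin(f(pop))]

best = ga_minimize(himmelblau, (-5.0, 5.0))
```

Saddle-point searches, as the abstract notes, additionally need gradient (and occasionally Hessian) information folded into the fitness, which this sketch omits.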
Spanne, Anton; Jörntell, Henrik
2013-01-01
Why are sensory signals and motor command signals combined in the neurons of origin of the spinocerebellar pathways and why are the granule cells that receive this input thresholded with respect to their spike output? In this paper, we synthesize a number of findings into a new hypothesis for how the spinocerebellar systems and the cerebellar cortex can interact to support coordination of our multi-segmented limbs and bodies. A central idea is that recombination of the signals available to the spinocerebellar neurons can be used to approximate a wide array of functions including the spatial and temporal dependencies between limb segments, i.e. information that is necessary in order to achieve coordination. We find that random recombination of sensory and motor signals is not a good strategy since, surprisingly, the number of granule cells severely limits the number of recombinations that can be represented within the cerebellum. Instead, we propose that the spinal circuitry provides useful recombinations, which can be described as linear projections through aspects of the multi-dimensional sensorimotor input space. Granule cells, potentially with the aid of differentiated thresholding from Golgi cells, enhance the utility of these projections by allowing the Purkinje cell to establish piecewise-linear approximations of non-linear functions. Our hypothesis provides a novel view on the function of the spinal circuitry and cerebellar granule layer, illustrating how the coordinating functions of the cerebellum can be crucially supported by the recombinations performed by the neurons of the spinocerebellar systems. PMID:23516353
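The core idea above, thresholded linear projections of a multi-dimensional input supporting piecewise-linear approximation of non-linear functions, can be illustrated generically. The random projections, thresholds and target function below are illustrative assumptions, not a model of the actual spinocerebellar circuitry:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))            # toy sensorimotor input state
y = np.sin(3 * X[:, 0]) * X[:, 1] + X[:, 2] ** 2  # non-linear target

# granule-like units: linear projections of the input, thresholded at zero
W = rng.normal(size=(4, 200))
theta = rng.uniform(-1, 1, 200)
G = np.maximum(X @ W - theta, 0.0)               # rectified "spike output"

# linear readout (Purkinje-cell-like weighting) fitted by least squares
A = np.c_[G, np.ones(len(G))]
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
err_thresh = np.mean((A @ beta - y) ** 2)

# baseline: a purely linear readout of the raw input cannot do as well
B = np.c_[X, np.ones(len(X))]
lin, *_ = np.linalg.lstsq(B, y, rcond=None)
err_linear = np.mean((B @ lin - y) ** 2)
```

The thresholded expansion fits the non-linear target far better than the linear baseline, which is the computational benefit the hypothesis attributes to the granule layer.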
NASA Astrophysics Data System (ADS)
Matsuki, Yoh; Nakamura, Shinji; Fukui, Shigeo; Suematsu, Hiroto; Fujiwara, Toshimichi
2015-10-01
Magic-angle spinning (MAS) NMR is a powerful tool for studying molecular structure and dynamics, but suffers from low sensitivity. Here, we developed a novel helium-cooling MAS NMR probe system adopting a closed-loop gas recirculation mechanism. In addition to the sensitivity gain due to low temperature, the present system has enabled highly stable MAS (νR = 4-12 kHz) at cryogenic temperatures (T = 35-120 K) for over a week without consuming helium, at a cost for electricity of 16 kW/h. High-resolution 1D and 2D data were recorded for a crystalline tri-peptide sample at T = 40 K and B0 = 16.4 T, where an order-of-magnitude sensitivity gain was demonstrated versus room-temperature measurement. The low-cost and long-term stable MAS strongly promotes broader application of brute-force sensitivity-enhanced multi-dimensional MAS NMR, as well as dynamic nuclear polarization (DNP)-enhanced NMR, in the temperature range below 100 K.
Labenski, Matthew T.; Fisher, Ashley A.; Monks, Terrence J.; Lau, Serrine S.
2014-01-01
Recent technological advancements in mass spectrometry facilitate the detection of chemical-induced posttranslational modifications (PTMs) that may alter cell signaling pathways or alter the structure and function of the modified proteins. To identify such protein adducts (Kleiner et al., Chem Res Toxicol 11:1283–1290, 1998), multi-dimensional protein identification technology (MuDPIT) has been utilized. MuDPIT was first described by Link et al. as a new technique useful for protein identification from a complex mixture of proteins (Link et al., Nat Biotechnol 17:676–682, 1999). MuDPIT utilizes two different HPLC columns to further enhance peptide separation, increasing the number of peptide hits and protein coverage. The technology is extremely useful for proteomes such as the urine proteome, samples from immunoprecipitations, and 1D gel bands resolved from a tissue homogenate or lysate. In particular, MuDPIT has enhanced the field of adduct hunting, since it is more capable than standard LC–MS/MS of identifying less-abundant peptides, such as those that are adducted. The site-specific identification of covalently adducted proteins is a prerequisite for understanding the biological significance of chemical-induced PTMs and the subsequent toxicological response they elicit. PMID:20972764
Barrett, Louise; Henzi, S Peter; Lusseau, David
2012-08-01
Understanding human cognitive evolution, and that of the other primates, means taking sociality very seriously. For humans, this requires the recognition of the sociocultural and historical means by which human minds and selves are constructed, and how this gives rise to the reflexivity and ability to respond to novelty that characterize our species. For other, non-linguistic, primates we can answer some interesting questions by viewing social life as a feedback process, drawing on cybernetics and systems approaches and using social network neo-theory to test these ideas. Specifically, we show how social networks can be formalized as multi-dimensional objects, and use entropy measures to assess how networks respond to perturbation. We use simulations and natural 'knock-outs' in a free-ranging baboon troop to demonstrate that changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalization of social networks provides a framework within which to predict network dynamics and evolution, helps us highlight how human and non-human social networks differ and has implications for theories of cognitive evolution. PMID:22734054
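The entropy measure mentioned above can be illustrated with a toy weighted network. The sketch below, using hypothetical interaction matrices, computes the mean Shannon entropy of each individual's interaction-partner distribution; a drop in entropy after a perturbation corresponds to the "more certain" network described in the abstract. This is an illustration of the general idea, not the authors' formalization:

```python
import numpy as np

def interaction_entropy(weights):
    """Shannon entropy (bits) of one individual's interaction-partner distribution.
    Lower entropy means that individual's interactions are more predictable."""
    w = np.asarray(weights, dtype=float)
    p = w[w > 0] / w[w > 0].sum()
    return float(-(p * np.log2(p)).sum())

def network_entropy(adj):
    """Mean per-individual entropy of a weighted adjacency matrix."""
    return float(np.mean([interaction_entropy(row) for row in adj]))

# hypothetical grooming networks for four individuals, before and after a perturbation
before = np.array([[0, 5, 5, 5],
                   [5, 0, 5, 5],
                   [5, 5, 0, 5],
                   [5, 5, 5, 0]])   # interactions spread evenly: maximally uncertain
after = np.array([[0, 12, 2, 1],
                  [12, 0, 2, 1],
                  [2, 2, 0, 11],
                  [1, 1, 11, 0]])   # interactions concentrated on one partner

print(network_entropy(before), network_entropy(after))
```

With evenly spread interactions each row attains the maximum entropy log2(3); concentrating interactions on a preferred partner lowers it, i.e. outcomes become easier to predict.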
ITQ-54: a multi-dimensional extra-large pore zeolite with 20 × 14 × 12-ring channels
Jiang, Jiuxing; Yun, Yifeng; Zou, Xiaodong; Jorda, Jose Luis; Corma, Avelino
2015-01-01
A multi-dimensional extra-large pore silicogermanate zeolite, named ITQ-54, has been synthesised by in situ decomposition of the N,N-dicyclohexylisoindolinium cation into the N-cyclohexylisoindolinium cation. Its structure was solved by 3D rotation electron diffraction (RED) from crystals of ca. 1 μm in size. The structure of ITQ-54 contains straight intersecting 20 × 14 × 12-ring channels along the three crystallographic axes and it is one of the few zeolites with extra-large channels in more than one direction. ITQ-54 has a framework density of 11.1 T atoms per 1000 Å^3, which is one of the lowest among the known zeolites. ITQ-54 was obtained together with GeO_2 as an impurity. A heavy liquid separation method was developed and successfully applied to remove this impurity from the zeolite. ITQ-54 is stable up to 600 °C and exhibits permanent porosity. The structure was further refined using powder X-ray diffraction (PXRD) data for both as-made and calcined samples.
Jin, Renyao; Li, Linqiu; Feng, Junli; Dai, Zhiyuan; Huang, Yao-Wen; Shen, Qing
2017-02-01
Zwitterionic hydrophilic interaction liquid chromatography (ZIC-HILIC) material was used as a solid-phase extraction sorbent for purification of phospholipids from Hypophthalmichthys nobilis. The conditions were optimized to be pH 6, flow rate 2.0 mL·min^-1, loading breakthrough volume ≤ 5 mL, and eluting solvent 5 mL. Afterwards, the extracts were analyzed by multi-dimensional mass spectrometry (MDMS)-based shotgun lipidomics; 20 species of phosphatidylcholine (PC), 22 species of phosphatidylethanolamine (PE), 15 species of phosphatidylserine (PS), and 5 species of phosphatidylinositol (PI) were identified, with contents of 224.1, 124.1, 27.4, and 34.7 μg·g^-1, respectively. The MDMS method was validated in terms of linearity (0.9963-0.9988), LOD (3.7 ng·mL^-1), LOQ (9.8 ng·mL^-1), intra-day precision (<3.64%), inter-day precision (<5.31%), and recovery (78.8-85.6%). ZIC-HILIC and MDMS shotgun lipidomics are efficient for studying phospholipids in H. nobilis. PMID:27596430
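The validation figures quoted above (linearity, LOD, LOQ) follow from a standard calibration-curve calculation. The sketch below uses invented calibration data, not the authors' measurements, together with the common ICH-style estimates LOD = 3.3·σ/slope and LOQ = 10·σ/slope, where σ is the residual standard deviation of the regression:

```python
import numpy as np

# hypothetical calibration series for one phospholipid standard: ng/mL vs peak area
conc = np.array([10.0, 25.0, 50.0, 100.0, 250.0, 500.0])
area = np.array([1.02e3, 2.48e3, 5.10e3, 9.95e3, 2.51e4, 4.98e4])

# ordinary least-squares calibration line and its coefficient of determination
slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

# residual standard deviation (n - 2 degrees of freedom for a straight line)
sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
lod = 3.3 * sd / slope   # limit of detection, ng/mL
loq = 10.0 * sd / slope  # limit of quantification, ng/mL

print(f"r^2 = {r2:.4f}, LOD = {lod:.2f} ng/mL, LOQ = {loq:.2f} ng/mL")
```

By construction LOQ/LOD = 10/3.3, which matches the roughly 3:1 ratio between the LOQ and LOD reported in the abstract.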
Wang, H.; Man, S.; Ewing, R.E.; Qin, G.; Lyons, S.L.; Al-Lawatia, M.
1999-06-10
Many difficult problems arise in the numerical simulation of fluid flow processes within porous media in petroleum reservoir simulation and in subsurface contaminant transport and remediation. The authors develop a family of Eulerian-Lagrangian localized adjoint methods (ELLAM) for the solution of initial-boundary value problems for first-order advection-reaction equations on general multi-dimensional domains. Different tracking algorithms, including the Euler and Runge-Kutta algorithms, are used. The derived schemes, which are fully mass conservative, naturally incorporate inflow boundary conditions into their formulations and do not need any artificial outflow boundary conditions. Moreover, they have regularly structured, well-conditioned, symmetric, positive-definite coefficient matrices, which can be solved efficiently by the conjugate gradient method in an optimal number of iterations without any preconditioning. Numerical results are presented to compare the performance of the ELLAM schemes with many well-studied and widely used methods, including the upwind finite difference method, the Galerkin and Petrov-Galerkin finite element methods with backward-Euler or Crank-Nicolson temporal discretization, the streamline diffusion finite element methods, the monotonic upstream-centered scheme for conservation laws (MUSCL), and the Minmod scheme.
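The characteristic-tracking idea underlying such Eulerian-Lagrangian schemes can be sketched in one dimension. The code below is a minimal semi-Lagrangian step (Euler back-tracking of characteristics plus linear interpolation) for u_t + v u_x = 0 on a periodic grid; it illustrates only the tracking component, not the full mass-conservative ELLAM formulation, and all names are assumptions:

```python
import numpy as np

def advect_semilagrangian(u0, v, dx, dt, steps):
    """Advance u_t + v u_x = 0 on a periodic 1-D grid by tracking
    characteristics backward with an Euler step and interpolating."""
    n = len(u0)
    x = np.arange(n) * dx
    L = n * dx
    u = u0.copy()
    for _ in range(steps):
        # foot of the characteristic through each grid point (Euler tracking)
        xb = (x - v * dt) % L
        # linear interpolation of u at the departure points
        i = np.floor(xb / dx).astype(int) % n
        frac = (xb / dx) - np.floor(xb / dx)
        u = (1.0 - frac) * u[i] + frac * u[(i + 1) % n]
    return u

n, L, v = 100, 1.0, 1.0
dx = L / n
x = np.arange(n) * dx
u0 = np.exp(-200.0 * (x - 0.3) ** 2)   # smooth pulse
# advect for exactly one period: the profile should return to where it started
u1 = advect_semilagrangian(u0, v, dx, dt=L / (v * n), steps=n)
```

Because the departure points are interpolated rather than restricted to grid nodes, the step is unconditionally stable in the Courant number, one of the practical attractions of characteristic-based methods.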
Amir, El-Ad David; Kalisman, Nir; Keasar, Chen
2008-07-01
Rotatable torsion angles are the major degrees of freedom in proteins. Adjacent angles are highly correlated and energy terms that rely on these correlations are intensively used in molecular modeling. However, the utility of torsion based terms is not yet fully exploited. Many of these terms do not capture the full scale of the correlations. Other terms, which rely on lookup tables, cannot be used in the context of force-driven algorithms because they are not fully differentiable. This study aims to extend the usability of torsion terms by presenting a set of high-dimensional and fully-differentiable energy terms that are derived from high-resolution structures. The set includes terms that describe backbone conformational probabilities and propensities, side-chain rotamer probabilities, and an elaborate term that couples all the torsion angles within the same residue. The terms are constructed by cubic spline interpolation with periodic boundary conditions that enable full differentiability and high computational efficiency. We show that the spline implementation does not compromise the accuracy of the original database statistics. We further show that the side-chain relevant terms are compatible with established rotamer probabilities. Despite their very local characteristics, the new terms are often able to identify native and native-like structures within decoy sets. Finally, force-based minimization of NMR structures with the new terms improves their torsion angle statistics with minor structural distortion (0.5 A RMSD on average). The new terms are freely available in the MESHI molecular modeling package. The spline coefficients are also available as a documented MATLAB file. PMID:18186478
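The spline construction with periodic boundary conditions can be sketched with SciPy. The data below are an invented torsion-angle energy profile, not the database-derived statistics of the paper; the point is that a periodic cubic spline is smooth across ±180° and fully differentiable, so it can supply forces for force-based minimization:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical torsion-angle energy term sampled every 10 degrees over [-pi, pi]
phi = np.linspace(-np.pi, np.pi, 37)            # includes both endpoints
energy = 1.0 + np.cos(phi) + 0.3 * np.cos(2.0 * phi + 0.5)
energy[-1] = energy[0]                          # enforce exact periodicity

# periodic cubic spline: C2-continuous across the -pi/+pi seam
spline = CubicSpline(phi, energy, bc_type='periodic')
force = spline.derivative()                     # analytic first derivative

# with bc_type='periodic', evaluation outside [-pi, pi] wraps around
print(spline(0.7), spline(0.7 + 2.0 * np.pi))
```

Full differentiability is exactly the property that lookup-table torsion terms lack and that the abstract identifies as necessary for force-driven algorithms.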
NASA Astrophysics Data System (ADS)
Kiyan, D.; Jones, A. G.; Fullea, J.; Ledo, J.; Siniscalchi, A.; Romano, G.
2014-12-01
The PICASSO (Program to Investigate Convective Alboran Sea System Overturn) project and the concomitant TopoMed (Plate re-organization in the western Mediterranean: Lithospheric causes and topographic consequences - an ESF EUROSCORES TOPO-EUROPE project) project were designed to collect high-resolution, multi-disciplinary lithospheric-scale data in order to understand the tectonic evolution and lithospheric structure of the western Mediterranean. The over-arching objectives of the magnetotelluric (MT) component of the projects are (i) to provide new electrical conductivity constraints on the crustal and lithospheric structure of the Atlas Mountains, and (ii) to test the hypotheses for explaining the purported lithospheric cavity beneath the Middle and High Atlas inferred from potential-field lithospheric modeling. We present the results of an MT experiment carried out in Morocco along two profiles: an approximately N-S oriented profile crossing the Middle Atlas, the High Atlas and the eastern Anti-Atlas to the east (called the MEK profile, for Meknes) and a NE-SW oriented profile through the western High Atlas to the west (called the MAR profile, for Marrakech). Our results are derived from three-dimensional (3-D) MT inversion of the MT data set employing the parallel version of the Modular system for Electromagnetic inversion (ModEM) code. The distinct conductivity differences between the Middle-High Atlas (conductive) and the Anti-Atlas (resistive) correlate with the South Atlas Front fault, the depth extent of which appears to be limited to the uppermost mantle (approx. 60 km). In all inverse solutions, the crust and the upper mantle show resistive signatures (approx. 1,000 Ωm) beneath the Anti-Atlas, which is part of the stable West African Craton. Partial melt and/or exotic fluids enriched in volatiles produced by the melt can account for the high middle-to-lower crustal and uppermost mantle conductivity in the Folded Middle Atlas, the High Moulouya Plain and the
Conservative-variable average states for equilibrium gas multi-dimensional fluxes
NASA Technical Reports Server (NTRS)
Iannelli, G. S.
1992-01-01
Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.
Philip E. Wannamaker
2007-12-31
The overall goal of this project has been to develop desktop capability for 3-D EM inversion as a complement or alternative to existing massively parallel platforms. We have been fortunate in having a uniquely productive cooperative relationship with Kyushu University (Y. Sasaki, P.I.), which supplied a base-level 3-D inversion source code for MT data over a half-space based on staggered-grid finite differences. Storage efficiency was greatly increased in this algorithm by implementing a symmetric L-U parameter step solver and by loading the parameter step matrix one frequency at a time. Rules were established for achieving sufficient Jacobian accuracy versus mesh discretization, and regularization was much improved by scaling the damping terms according to the influence of parameters upon the measured response. The modified program was applied to 101 five-channel MT stations taken over the Coso East Flank area supported by the DOE and the Navy. Inversion of these data on a 2 Gb desktop PC using a half-space starting model recovered the main features of the subsurface resistivity structure seen in a massively parallel inversion which used a series of stitched 2-D inversions as a starting model. In particular, a steeply west-dipping, N-S trending conductor was resolved under the central-west portion of the East Flank. It may correspond to a highly saline magmatic fluid component, residual fluid from boiling, or, less likely, cryptic acid-sulphate alteration, all in a steep fracture mesh. This work earned student Virginia Maris the Best Student Presentation award at the 2006 GRC annual meeting.
NASA Astrophysics Data System (ADS)
West, Ruth; Gossmann, Joachim; Margolis, Todd; Schulze, Jurgen P.; Lewis, J. P.; Hackbarth, Ben; Mostafavi, Iman
2009-02-01
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics, including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier 60-tile, 100-million-pixel barrier-strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation and a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality that combines amplitude-, delay- and physical-modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment, developed in the context of this artwork. The desire to represent a combination of qualitative and quantitative multi-dimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
Three decades of multi-dimensional change in global leaf phenology
NASA Astrophysics Data System (ADS)
Buitenwerf, Robert; Rose, Laura; Higgins, Steven I.
2015-04-01
Changes in the phenology of vegetation activity may accelerate or dampen rates of climate change by altering energy exchanges between the land surface and the atmosphere and can threaten species with synchronized life cycles. Current knowledge of long-term changes in vegetation activity is regional, or restricted to highly integrated measures of change such as net primary productivity, which mask details that are relevant for Earth system dynamics. Such details can be revealed by measuring changes in the phenology of vegetation activity. Here we undertake a comprehensive global assessment of changes in vegetation phenology. We show that the phenology of vegetation activity changed severely (by more than 2 standard deviations in one or more dimensions of phenological change) on 54% of the global land surface between 1981 and 2012. Our analysis confirms previously detected changes in the boreal and northern temperate regions. The adverse consequences of these northern phenological shifts for land-surface-climate feedbacks, ecosystems and species are well known. Our study reveals equally severe phenological changes in the southern hemisphere, where consequences for the energy budget and the likelihood of phenological mismatches are unknown. Our analysis provides a sensitive and direct measurement of ecosystem functioning, making it useful both for monitoring change and for testing the reliability of early warning signals of change.
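The change criterion used above (more than 2 standard deviations in one or more dimensions of phenological change) can be sketched on synthetic data. Everything below — the three phenology dimensions, the pixel counts, and the noise levels — is invented purely for illustration and does not reproduce the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical per-pixel phenology metrics (rows = pixels, columns = dimensions,
# e.g. season onset, peak greenness, season length), for two time periods
early = rng.normal(0.0, 1.0, size=(1000, 3))
late = early + rng.normal(0.0, 0.3, size=(1000, 3))
late[:50] += 3.0                     # give 5% of pixels a strong synthetic shift

# standardize the change in each dimension and flag pixels exceeding 2 SD
delta = late - early
z = (delta - delta.mean(axis=0)) / delta.std(axis=0)
changed = np.any(np.abs(z) > 2.0, axis=1)   # > 2 SD in one or more dimensions
frac = changed.mean()

print(f"fraction of pixels flagged as severely changed: {frac:.3f}")
```

The multi-dimensional "any dimension exceeds the threshold" test is what lets subtler shifts, visible in one phenological dimension but averaged away in integrated measures, register as change.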
Analysis of Reduced-Scale Nova Hohlraum Experiments
NASA Astrophysics Data System (ADS)
Powers, L. V.; Berger, R. L.; Kirkwood, R. K.; Kruer, W. L.; Langdon, A. B.; MacGowan, B. J.; Orzechowski, T. J.; Rosen, M. D.; Springer, P. T.; Still, C. H.; Suter, L. J.; Williams, E. A.; Blain, M. A.
1996-11-01
Establishing the practical limit on achievable radiation temperature in high-Z hohlraums is of interest both for ignition targets (S.M. Haan et al., Phys. Plasmas 2, 2480 (1995)) for the National Ignition Facility (NIF) and for high energy density physics experiments (S.B. Libby, Energy and Technology Review, UCRL-52000-94-12, 23 (1994)). Two related efforts are underway to define the physics issues of high energy density hohlraum targets: 1) experiments on the Nova laser in reduced-scale hohlraums, and 2) evaluation of high-temperature hohlraum designs for the NIF. Reduced-scale Nova hohlraums approach conditions relevant to NIF high-temperature designs, albeit at smaller scale. Analysis of reduced-scale experiments on Nova therefore provides valuable physics information for evaluating the capabilities of NIF for producing high energy density in hohlraums. Simulations of Nova reduced-scale hohlraum experiments will be presented, and the relevance to a range of NIF hohlraum target designs will be discussed.
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
[Factorial analysis of the Hamilton depression scale, II].
Dreyfus, J F; Guelfi, J D; Ruschel, S; Blanchard, C; Pichot, P
1981-04-01
A factorial analysis (principal components with Varimax rotation) was performed on 85 ratings of the Hamilton Depression Rating Scale obtained in 1979-1980 on inpatients with a major depressive illness. Using a replicable statistical technique, 4 factors were obtained. These factors overlap neither with those obtained on a similar sample with a similar technique nor with those obtained by other authors. It thus appears that there is no such thing as a factorial structure of this scale. PMID:7305179
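The analysis method (principal components followed by Varimax rotation) can be sketched in NumPy. The SVD-based Varimax iteration below is the standard Kaiser procedure; the two-factor rating data are synthetic, not the 85 clinical ratings analysed in the paper:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-8):
    """Orthogonal Varimax rotation of a p x k loading matrix (Kaiser criterion,
    standard SVD-based iteration)."""
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - L @ np.diag(np.sum(L**2, axis=0)) / p))
        R = u @ vt
        new_var = s.sum()
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ R

rng = np.random.default_rng(0)
# synthetic ratings: 85 patients x 17 items driven by two latent factors
f = rng.normal(size=(85, 2))
W = np.zeros((17, 2))
W[:9, 0] = 1.0                        # items 1-9 load on factor 1
W[9:, 1] = 1.0                        # items 10-17 load on factor 2
X = f @ W.T + 0.3 * rng.normal(size=(85, 17))

# principal components of the item correlation matrix
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
corr = Xs.T @ Xs / len(Xs)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1][:2]
loadings = eigvec[:, order] * np.sqrt(eigval[order])
rotated = varimax(loadings)
```

After rotation each item loads mainly on a single factor ("simple structure"), which is what makes the rotated factors interpretable, and what makes non-replication across samples, as reported above, notable.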
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project is about the development of high-order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters that recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1992-01-01
The nonlinear stability of compact schemes for shock calculations is investigated. In recent years compact schemes have been used in various numerical simulations, including direct numerical simulation of turbulence. However, to apply them to problems containing shocks, one has to resolve the problem of spurious numerical oscillation and nonlinear instability. A framework to apply nonlinear limiting to a local mean is introduced. The resulting scheme can be proven total-variation stable (1D) or maximum-norm stable (multi-D) and produces good numerical results in the test cases. The result is summarized in the preprint entitled 'Nonlinearly Stable Compact Schemes for Shock Calculations', which was submitted to SIAM Journal on Numerical Analysis. Research was continued on issues related to two- and three-dimensional essentially non-oscillatory (ENO) schemes. The main research topics include: parallel implementation of ENO schemes on Connection Machines; boundary conditions; shock interaction with hydrogen bubbles, in preparation for the full combustion simulation; and direct numerical simulation of compressible sheared turbulence.
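The idea of nonlinear limiting to obtain a total-variation-stable scheme can be illustrated with the classic minmod-limited second-order upwind scheme for 1-D linear advection. This is a textbook TVD scheme used here only as a stand-in, not the compact-scheme framework of the report:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree, else zero."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_step(u, c):
    """One step for u_t + u_x = 0 on a periodic grid (Courant number 0 <= c <= 1),
    with minmod-limited linear reconstruction and upwind fluxes."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited local slopes
    flux = u + 0.5 * (1.0 - c) * s                      # upwind face value at i+1/2
    return u - c * (flux - np.roll(flux, 1))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)           # square pulse with shocks
mass0 = u.sum()
tv0 = np.abs(u - np.roll(u, 1)).sum()                   # initial total variation
for _ in range(300):
    u = muscl_step(u, c=0.5)
tv1 = np.abs(u - np.roll(u, 1)).sum()
```

The limiter suppresses the spurious oscillations a naive second-order scheme would generate at the discontinuities: the total variation never grows and no new extrema appear, which is precisely the stability property the abstract proves for the limited compact schemes.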
Multi-dimensional models of circumstellar shells around evolved massive stars
NASA Astrophysics Data System (ADS)
van Marle, A. J.; Keppens, R.
2012-11-01
Context. Massive stars shape their surrounding medium through the force of their stellar winds, which collide with the circumstellar medium. Because the characteristics of these stellar winds vary over the course of the evolution of the star, the circumstellar matter becomes a reflection of the stellar evolution and can be used to determine the characteristics of the progenitor star. In particular, whenever a fast wind phase follows a slow wind phase, the fast wind sweeps up its predecessor in a shell, which is observed as a circumstellar nebula. Aims: We make 2D and 3D numerical simulations of fast stellar winds sweeping up their slow predecessors to investigate whether numerical models of these shells have to be 3D, or whether 2D models are sufficient to reproduce the shells correctly. Methods: We use the MPI-AMRVAC code, using hydrodynamics with optically thin radiative losses included, to make numerical models of circumstellar shells around massive stars in 2D and 3D and compare the results. We focus on those situations where a fast Wolf-Rayet star wind sweeps up the slower wind emitted by its predecessor, being either a red supergiant or a luminous blue variable. Results: As the fast Wolf-Rayet wind expands, it creates a dense shell of swept up material that expands outward, driven by the high pressure of the shocked Wolf-Rayet wind. These shells are subject to a fair variety of hydrodynamic-radiative instabilities. If the Wolf-Rayet wind is expanding into the wind of a luminous blue variable phase, the instabilities will tend to form a fairly small-scale, regular filamentary lattice with thin filaments connecting knotty features. If the Wolf-Rayet wind is sweeping up a red supergiant wind, the instabilities will form larger interconnected structures with less regularity. The numerical resolution must be high enough to resolve the compressed, swept-up shell and the evolving instabilities, which otherwise may not even form. Conclusions: Our results show that 3D
Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh
2014-01-01
This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis are conducted, and a sub-scale flexible wind tunnel model based on the full-scale model's undeformed jig-shape is developed. The flexible wind tunnel model's undeflected twist along the span (i.e., pre-twist or wash-out) is then optimized for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.
Proteomics beyond large-scale protein expression analysis.
Boersema, Paul J; Kahraman, Abdullah; Picotti, Paola
2015-08-01
Proteomics is commonly referred to as the application of high-throughput approaches to protein expression analysis. Typical results of proteomics studies are inventories of the protein content of a sample or lists of differentially expressed proteins across multiple conditions. Recently, however, an explosion of novel proteomics workflows has significantly expanded proteomics beyond the analysis of protein expression. Targeted proteomics methods, for example, enable the analysis of the fine dynamics of protein systems, such as a specific pathway or a network of interacting proteins, and the determination of protein complex stoichiometries. Structural proteomics tools allow extraction of restraints for structural modeling and identification of structurally altered proteins on a proteome-wide scale. Other variations of the proteomic workflow can be applied to the large-scale analysis of protein activity, location, degradation and turnover. These exciting developments provide new tools for multi-level 'omics' analysis and for the modeling of biological networks in the context of systems biology studies. PMID:25636126
A generic system for integrated modelling of multi-dimensional spatial data
NASA Astrophysics Data System (ADS)
Brown, I. M.
A generic approach to computer modelling of earth science data is presented, utilising a state-of-the-art scientific visualisation environment (AVS/Express). The greater flexibility of such an approach allows us to handle a wide variety of different data types, including geophysical data as well as other earth science data (e.g. stratigraphy, geomorphology, palaeontology), which often differ in comprising generally discrete bodies rather than continuous fields. Application of volume visualisation techniques generally demonstrates that the sparse nature of sampling favours surface-extraction techniques, such as isosurfaces and slicing, rather than direct volume rendering techniques. These techniques have also been applied to temporal 4D data-sets by incorporating time-slices into animation. However, all these procedures require a high-performance workstation to be effective. Therefore, to allow greater desktop analysis of complex models, we are using the Virtual Reality Modelling Language (VRML), which provides considerable scope for increased access to 3D/4D data for education and collaboration.
NASA Astrophysics Data System (ADS)
Zhang, Zhiyong; Huang, Yuqing; Smith, Pieter E. S.; Wang, Kaiyu; Cai, Shuhui; Chen, Zhong
2014-05-01
Heteronuclear NMR spectroscopy is an extremely powerful tool for determining the structures of organic molecules and is of particular significance in the structural analysis of proteins. In order to leverage the method’s potential for structural investigations, obtaining high-resolution NMR spectra is essential, and this is generally accomplished by using very homogeneous magnetic fields. However, there are several situations where magnetic field distortions, and thus line broadening, are unavoidable: for example, the samples under investigation may be inherently heterogeneous, or the magnet’s homogeneity may be poor. This line broadening can hinder resonance assignment or even render it impossible. We put forth a new class of pulse sequences for obtaining high-resolution heteronuclear spectra in magnetic fields with unknown spatial variations based on distant dipolar field modulations. This strategy’s capabilities are demonstrated with the acquisition of high-resolution 2D gHSQC and gHMBC spectra. These sequences’ performances are evaluated on the basis of their sensitivities and acquisition efficiencies. Moreover, we show that by encoding and decoding NMR observables spatially, as is done in ultrafast NMR, an extra dimension containing J-coupling information can be obtained without increasing the time necessary to acquire a heteronuclear correlation spectrum. Since the new sequences relax magnetic field homogeneity constraints imposed upon high-resolution NMR, they may be applied in portable NMR sensors and studies of heterogeneous chemical and biological materials.
Multi-dimensional combustor flowfield analyses in gas-gas rocket engine
NASA Technical Reports Server (NTRS)
Tsuei, Hsin-Hua; Merkle, Charles L.
1994-01-01
The objectives of the present research are to improve design capabilities for low thrust rocket engines through understanding of the detailed mixing and combustion processes. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as a coordinated part of an on-going experimental program at NASA LeRC. Detailed computational modeling requires the application of the full three-dimensional Navier-Stokes equations, coupled with species diffusion equations. The numerical procedure is based on both time-marching and time-accurate algorithms, using an LU approximate factorization in time and flux-split upwind differencing in space. The emphasis in this paper is on using numerical analysis to understand detailed combustor flowfields, including the shear layer dynamics created between the fuel film cooling and the core gas in the vicinity of the combustor wall; the integrity and effectiveness of the coolant film; three-dimensional fuel jet injection/mixing/combustion characteristics; and their impacts on global engine performance.
Using Qualitative Methods to Inform Scale Development
ERIC Educational Resources Information Center
Rowan, Noell; Wulff, Dan
2007-01-01
This article describes the process by which one study utilized qualitative methods to create items for a multi-dimensional scale to measure twelve-step program affiliation. The process included interviewing fourteen addicted persons while in twelve-step-focused treatment about specific pros (things they like or would miss out on by not being…
Geographical Scale Effects on the Analysis of Leptospirosis Determinants
Gracie, Renata; Barcellos, Christovam; Magalhães, Mônica; Souza-Santos, Reinaldo; Barrocas, Paulo Rubens Guimarães
2014-01-01
Leptospirosis displays a great diversity of routes of exposure, reservoirs, etiologic agents, and clinical symptoms. It occurs almost worldwide but its pattern of transmission varies depending on where it happens. Climate change may increase the number of cases, especially in developing countries, like Brazil. Spatial analysis studies of leptospirosis have highlighted the importance of socioeconomic and environmental context. Hence, the choice of the geographical scale and unit of analysis used in the studies is pivotal, because it restricts the indicators available for the analysis and may bias the results. In this study, we evaluated which environmental and socioeconomic factors, typically used to characterize the risks of leptospirosis transmission, are more relevant at different geographical scales (i.e., regional, municipal, and local). Geographic Information Systems were used for data analysis. Correlations between leptospirosis incidence and several socioeconomic and environmental indicators were calculated at different geographical scales. At the regional scale, the strongest correlations were observed between leptospirosis incidence and the amount of people living in slums, or the percent of the area densely urbanized. At the municipal scale, there were no significant correlations. At the local level, the percent of the area prone to flooding best correlated with leptospirosis incidence. PMID:25310536
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F
2009-01-01
Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings. PMID:19545445
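The abstract does not give the model equations, so the following is only a minimal illustrative sketch (all function names are mine) of the Rasch family it refers to: the dichotomous Rasch model, and the Andrich rating-scale model commonly used in Rasch analyses of Likert-type items such as the MSIS-29, including a 3-point response format like the one adopted for MSIS-29-PHYS.

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(X=1 | theta, b) = 1 / (1 + exp(-(theta - b))),
    where theta is person ability and b is item difficulty."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def rating_scale_probs(theta, b, taus):
    """Andrich rating-scale model for polytomous items.
    P(X=k) is proportional to exp(sum_{j<=k} (theta - b - tau_j)),
    with the empty sum for k=0; taus are the shared threshold parameters."""
    steps = np.concatenate(([0.0], np.cumsum(theta - b - np.asarray(taus, float))))
    p = np.exp(steps - steps.max())  # subtract max for numerical stability
    return p / p.sum()

# A 3-point response scale (as retained for MSIS-29-PHYS) has two thresholds:
probs = rating_scale_probs(theta=0.5, b=0.0, taus=[-1.0, 1.0])
```

The category probabilities always sum to one, and shifting `theta` upward moves probability mass toward the higher response categories.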
Scale analysis using X-ray microfluorescence and computed radiography
NASA Astrophysics Data System (ADS)
Candeias, J. P.; de Oliveira, D. F.; dos Anjos, M. J.; Lopes, R. T.
2014-02-01
Scale deposits are the most common and most troublesome damage problems in the oil field and can occur in both production and injection wells. They occur because the minerals in produced water exceed their saturation limit as temperatures and pressures change. Scale can vary in appearance from hard crystalline material to soft, friable material and the deposits can contain other minerals and impurities such as paraffin, salt and iron. In severe conditions, scale creates a significant restriction, or even a plug, in the production tubing. This study was conducted to identify the elements present in scale samples and to quantify the thickness of the scale layer using synchrotron radiation micro-X-ray fluorescence (SRμXRF) and computed radiography (CR) techniques. The SRμXRF results showed that the elements found in the scale samples were strontium, barium, calcium, chromium, sulfur and iron. The CR analysis showed that the thickness of the scale layer could be identified and quantified accurately. These results can help in the decision making about removing the deposited scale.
Bully-Victimization Scale: Using Rasch Modeling in the Analysis of a Qualitative Scale
ERIC Educational Resources Information Center
Lehto, Marybeth
2009-01-01
The primary purpose of this study was to determine whether the data from the qualitative study fit Rasch model requirements for the definition of a measure, as well as to address concern in the extant literature regarding the appropriate number of items needed in analysis to assure unidimensionality. The self-report victimization scale was…
Raje, Satyajeet; Kite, Bobbie; Ramanathan, Jay; Payne, Philip
2016-01-01
Systems designed to expedite data preprocessing tasks such as data discovery, interpretation, and integration that are required before data analysis drastically impact the pace of biomedical informatics research. Current commercial interactive and real-time data integration tools are designed for large-scale business analytics requirements. In this paper we identify the need for end-to-end data fusion platforms from the researcher's perspective, supporting ad-hoc data interpretation and integration. PMID:26262406
NASA Astrophysics Data System (ADS)
Mann, David; Caban, Jesus J.; Stolka, Philipp J.; Boctor, Emad M.; Yoo, Terry S.
2011-03-01
The low-cost and minimum health risks associated with ultrasound (US) have made ultrasonic imaging a widely accepted method to perform diagnostic and image-guided procedures. Despite the existence of 3D ultrasound probes, most analysis and diagnostic procedures are done by studying the B-mode images. Currently, multiple ultrasound probes include 6-DOF sensors that can provide positioning information. Such tracking information can be used to reconstruct a 3D volume from a set of 2D US images. Recent advances in ultrasound imaging have also shown that, directly from the streaming radio frequency (RF) data, it is possible to obtain additional information of the anatomical region under consideration including the elasticity properties. This paper presents a generic framework that takes advantage of current graphics hardware to create a low-latency system to visualize streaming US data while combining multiple tissue attributes into a single illustration. In particular, we introduce a framework that enables real-time reconstruction and interactive visualization of streaming data while enhancing the illustration with elasticity information. The visualization module uses two-dimensional transfer functions (2D TFs) to more effectively fuse and map B-mode and strain values into specific opacity and color values. On commodity hardware, our framework can simultaneously reconstruct, render, and provide user interaction at over 15 fps. Results with phantom and real-world medical datasets show the advantages and effectiveness of our technique with ultrasound data. In particular, our results show how two-dimensional transfer functions can be used to more effectively identify, analyze and visualize lesions in ultrasound images.
Full-scale system impact analysis: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
The Digital Document Storage Full Scale System can provide cost effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this Impact Analysis. It is highly recommended that NASA proceed with a phased, communications requirement analysis to ensure that adequate communications service can be supplied at a reasonable cost in order to validate recent working assumptions upon which the success of the DDS Full Scale System is dependent.
Shielding analysis methods available in the scale computational system
Parks, C.V.; Tang, J.S.; Hermann, O.W.; Bucholz, J.A.; Emmett, M.B.
1986-01-01
Computational tools have been included in the SCALE system to allow shielding analysis to be performed using both discrete-ordinates and Monte Carlo techniques. One-dimensional discrete-ordinates analyses are performed with the XSDRNPM-S module, and point dose rates outside the shield are calculated with the XSDOSE module. Multidimensional analyses are performed with the MORSE-SGC/S Monte Carlo module. This paper will review the above modules and the four Shielding Analysis Sequences (SAS) developed for the SCALE system. 7 refs., 8 figs.
Multi-scale analysis for environmental dispersion in wetland flow
NASA Astrophysics Data System (ADS)
Wu, Zi; Li, Z.; Chen, G. Q.
2011-08-01
Presented in this work is a multi-scale analysis for longitudinal evolution of contaminant concentration in a fully developed flow through a shallow wetland channel. An environmental dispersion model for the mean concentration is devised as an extension of Taylor's classical formulation by a multi-scale analysis. Corresponding environmental dispersivity is found identical to that determined by the method of concentration moments. For typical contaminant constituents of chemical oxygen demand, biochemical oxygen demand, total phosphorus, total nitrogen and heavy metal, the evolution of contaminant cloud is illustrated with the critical length and duration of the contaminant cloud with constituent concentration beyond some given environmental standard level.
NASA Astrophysics Data System (ADS)
Rice, Stuart A.; Toda, Mikito; Komatsuzaki, Tamiki; Konishi, Tetsuro; Berry, R. Stephen
2005-01-01
Edited by Nobel Prize winner Ilya Prigogine and renowned authority Stuart A. Rice, the Advances in Chemical Physics series provides a forum for critical, authoritative evaluations in every area of the discipline. In a format that encourages the expression of individual points of view, experts in the field present comprehensive analyses of subjects of interest. Advances in Chemical Physics remains the premier venue for presentations of new findings in its field. Volume 130 consists of three parts: Part I: Phase Space Geometry of Multi-dimensional Dynamical Systems and Reaction Processes; Part II: Complex Dynamical Behavior in Clusters and Proteins, and Data Mining to Extract Information on Dynamics; Part III: New Directions in Multi-Dimensional Chaos and Evolutionary Reactions.
NASA Astrophysics Data System (ADS)
Liu, Hao; Chen, Luyi; Liang, Yeru; Fu, Ruowen; Wu, Dingcai
2015-11-01
A novel active yolk@conductive shell nanofiber web with a unique synergistic advantage of various hierarchical nanodimensional objects including the 0D monodisperse SiO2 yolks, the 1D continuous carbon shell and the 3D interconnected non-woven fabric web has been developed by an innovative multi-dimensional construction method, and thus demonstrates excellent electrochemical properties as a self-standing LIB anode. Electronic supplementary information (ESI) available: Experimental details and additional information about material characterization. See DOI: 10.1039/c5nr06531c
Tools for Large-Scale Mobile Malware Analysis
Bierma, Michael
2014-01-01
Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems)[1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.
NASA Astrophysics Data System (ADS)
Huang, X.; Bandilla, K.; Celia, M. A.; Bachu, S.
2013-12-01
Geological carbon sequestration can significantly contribute to climate-change mitigation only if it is deployed at a very large scale. This means that injection scenarios must occur, and be analyzed, at the basin scale. Various mathematical models of different complexity may be used to assess the fate of injected CO2 and/or resident brine. These models span the range from multi-dimensional, multi-phase numerical simulators to simple single-phase analytical solutions. In this study, we consider a range of models, all based on vertically-integrated governing equations, to predict the basin-scale pressure response to specific injection scenarios. The Canadian section of the Basal Aquifer is used as a test site to compare the different modeling approaches. The model domain covers an area of approximately 811,000 km2, and the total injection rate is 63 Mt/yr, corresponding to 9 locations where large point sources have been identified. Predicted areas of critical pressure exceedance are used as a comparison metric among the different modeling approaches. Comparison of the results shows that single-phase numerical models may be good enough to predict the pressure response over a large aquifer; however, a simple superposition of semi-analytical or analytical solutions is not sufficiently accurate because spatial variability of formation properties plays an important role in the problem, and these variations are not captured properly with simple superposition. We consider two different injection scenarios: injection at the source locations and injection at locations with more suitable aquifer properties. Results indicate that in formations with significant spatial variability of properties, strong variations in injectivity among the different source locations can be expected, leading to the need to transport the captured CO2 to suitable injection locations, thereby necessitating development of a pipeline network. We also consider the sensitivity of porosity and
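The study's own vertically-integrated simulators are not reproduced here, but the "simple superposition of analytical solutions" it benchmarks against can be sketched. Below is a minimal, hypothetical Python sketch (all names are mine) superposing Theis-type single-phase pressure responses from several injection wells; as the abstract notes, this is valid only for spatially uniform formation properties, which is exactly where the approach breaks down in heterogeneous basins.

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1, the Theis well function

def theis_rise(r, t, Q, T, S):
    """Theis solution for the head/pressure rise at distance r and time t
    from a well injecting at constant rate Q into an aquifer with
    transmissivity T and storativity S: rise = Q/(4*pi*T) * W(u),
    u = r^2 * S / (4*T*t)."""
    u = (r ** 2) * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

def superposed_rise(x, y, t, wells, T, S):
    """Linear superposition over injection wells given as (xw, yw, Q) tuples.
    Assumes uniform T and S across the whole domain."""
    total = 0.0
    for xw, yw, Q in wells:
        r = np.hypot(x - xw, y - yw)
        total += theis_rise(r, t, Q, T, S)
    return total

# Two equal injectors symmetric about the origin (illustrative values only).
wells = [(-1000.0, 0.0, 0.01), (1000.0, 0.0, 0.01)]
t = 365.0 * 86400.0           # one year, in seconds
rise_mid = superposed_rise(0.0, 0.0, t, wells, T=1e-3, S=1e-4)
```

Mapping where `superposed_rise` exceeds a critical pressure threshold gives the kind of exceedance-area metric the study uses to compare modeling approaches.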
Complexity of carbon market from multi-scale entropy analysis
NASA Astrophysics Data System (ADS)
Fan, Xinghua; Li, Shasha; Tian, Lixin
2016-06-01
The complexity of the carbon market is the consequence of economic dynamics and extreme social-political events in global carbon markets. Multi-scale entropy can measure the long-term structures in the daily price return time series. By using multi-scale entropy analysis, we explore the complexity of the carbon market and the mean reversion trend of daily price returns. The logarithmic difference of the Dec16 data from August 6, 2010 to May 22, 2015 is selected as the sample. The entropy is higher at small time scales and lower at large ones. The dependence of the entropy on the time scale reveals the mean reversion of carbon price returns in the long run. A relatively great fluctuation over some short time periods indicates that the complexity of the carbon market evolves consistently with the economic development track and the events of international climate conferences.
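The abstract does not spell out the algorithm, but multi-scale entropy is conventionally computed by coarse-graining the return series at each time scale and evaluating the sample entropy of the coarse-grained series. A minimal illustrative Python sketch under those standard assumptions (function names are mine; parameter choices m=2, r=0.15*sigma are common defaults, not taken from the paper):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy: -ln(A/B), where B counts pairs of templates of
    length m within tolerance r (Chebyshev distance) and A counts pairs
    of length m+1."""
    x = np.asarray(x, dtype=float)
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    B = count_matches(m)
    A = count_matches(m + 1)
    return np.inf if A == 0 or B == 0 else -np.log(A / B)

def multiscale_entropy(returns, scales=range(1, 11), m=2):
    """Sample entropy of the coarse-grained return series at each scale,
    with the tolerance fixed from the original series."""
    r = 0.15 * np.std(returns)
    return [sample_entropy(coarse_grain(returns, s), m=m, r=r) for s in scales]
```

Plotting the returned entropies against scale gives the entropy-vs-scale curve from which the mean-reversion interpretation in the abstract is drawn.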
MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 A
Anusha, L. S.; Nagendra, K. N. E-mail: knn@iiap.res.in
2013-04-20
Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in the observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 A using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed by a two-dimensional snapshot of the three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore the chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by MHD in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with the observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, while in the line core only the line-center polarization can be reproduced using the Hanle effect. For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with
NASA Astrophysics Data System (ADS)
Pandarinath, Kailasa
2014-12-01
Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and a probability-based procedure have been developed during the last 10 years for basic-ultrabasic, intermediate and acid igneous rocks. There are numerous studies extensively evaluating these newly developed diagrams, which have indicated their successful application in inferring the original tectonic setting of younger and older, as well as sea-water and hydrothermally altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) within-plate (continental rift-ocean island) and continental rift (CR) settings for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and of 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yield inconsistent inferences of tectonic setting. The present study confirms the importance of these newly developed discriminant-function-based diagrams in inferring the original tectonic setting of
Schoenberg, Poppy L A; Speckens, Anne E M
2015-02-01
To illuminate candidate neural working mechanisms of Mindfulness-Based Cognitive Therapy (MBCT) in the treatment of recurrent depressive disorder, parallel to the potential interplays between modulations in electro-cortical dynamics and depressive symptom severity and self-compassionate experience. Linear and nonlinear α and γ EEG oscillatory dynamics were examined concomitant to an affective Go/NoGo paradigm, pre-to-post MBCT or natural wait-list, in 51 recurrent depressive patients. Specific EEG variables investigated were; (1) induced event-related (de-) synchronisation (ERD/ERS), (2) evoked power, and (3) inter-/intra-hemispheric coherence. Secondary clinical measures included depressive severity and experiences of self-compassion. MBCT significantly downregulated α and γ power, reflecting increased cortical excitability. Enhanced α-desynchronisation/ERD was observed for negative material opposed to attenuated α-ERD towards positively valenced stimuli, suggesting activation of neural networks usually hypoactive in depression, related to positive emotion regulation. MBCT-related increase in left-intra-hemispheric α-coherence of the fronto-parietal circuit aligned with these synchronisation dynamics. Ameliorated depressive severity and increased self-compassionate experience pre-to-post MBCT correlated with α-ERD change. The multi-dimensional neural mechanisms of MBCT pertain to task-specific linear and non-linear neural synchronisation and connectivity network dynamics. We propose MBCT-related modulations in differing cortical oscillatory bands have discrete excitatory (enacting positive emotionality) and inhibitory (disengaging from negative material) effects, where mediation in the α and γ bands relates to the former. PMID:26052359
A Confirmatory Factor Analysis of the Professional Opinion Scale
ERIC Educational Resources Information Center
Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.
2007-01-01
The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…
The Hong Psychological Reactance Scale: A Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Thomas, Adrian; Donnell, Alison J.; Buboltz, Walter C., Jr.
2001-01-01
Study uses confirmatory factor analysis to assess four models of the Hong Psychological Reactance Scale (HPRS) and attempts to provide psychometric information about the subscales. Results found inadequate fit for Hong's four orthogonal models but sufficient fit for two nonorthogonal models. (Contains 29 references and 3 tables.) (GCP)
Exploratory Factor Analysis of African Self-Consciousness Scale Scores
ERIC Educational Resources Information Center
Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.
2012-01-01
This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…
Carter, Stephen R
2016-06-01
Background: Confirmatory factor analysis (CFA) and structural equation modelling (SEM) are increasingly used in social pharmacy research. One of the key benefits of CFA is that it allows researchers to provide evidence for the validity of the internal factor structure of measurement scales. In particular, CFA can be used to provide evidence for the validity of the assertion that a hypothesized multi-dimensional scale discriminates between sub-scales. Aim: This manuscript aims to provide guidance for researchers who wish to use CFA to provide evidence for the internal factor structure of measurement scales. Methods: The manuscript places discriminant validity in the context of providing overall validity evidence for measurement scales. Four examples from the recent social pharmacy literature are used to critically examine the various methods which are used to establish discriminant validity. Using a hypothetical scenario, the manuscript demonstrates how commonly used output from CFA computer programs can be used to provide evidence for separateness of sub-scales within a multi-dimensional scale. Conclusion: The manuscript concludes with recommendations for the conduct and reporting of studies which use CFA to provide evidence of internal factor structure of measurement scales. PMID:27147255
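One commonly used check of sub-scale separateness of the kind discussed here is the Fornell-Larcker criterion: each factor's average variance extracted (AVE) should exceed its squared correlation with every other factor. A minimal sketch in Python, with invented loadings and an invented inter-factor correlation (none of these numbers come from the manuscript):

```python
import numpy as np

# Hypothetical standardized CFA estimates for a two-factor scale
# (all numbers invented for illustration): item loadings on their own
# factor and the estimated inter-factor correlation phi_12.
loadings_f1 = np.array([0.78, 0.81, 0.70])
loadings_f2 = np.array([0.72, 0.69, 0.75])
phi_12 = 0.55

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    return float(np.mean(loadings ** 2))

# Fornell-Larcker criterion: each factor's AVE should exceed the squared
# inter-factor correlation for the sub-scales to be considered distinct.
ave1, ave2 = ave(loadings_f1), ave(loadings_f2)
shared = phi_12 ** 2
discriminant_ok = ave1 > shared and ave2 > shared
print(ave1, ave2, shared, discriminant_ok)
```

With these toy values both AVEs exceed the shared variance, so the two sub-scales would pass this particular discriminant-validity check.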
Exploratory Data analysis ENvironment eXtreme scale (EDENx)
Steed, Chad Allen
2015-07-01
EDENx is a multivariate data visualization tool that allows interactive, user-driven analysis of large-scale data sets with high dimensionality. EDENx builds on our earlier system, EDEN, to enable analysis of more dimensions and larger scale data sets. EDENx provides an initial overview of summary statistics for each variable in the data set under investigation. EDENx allows the user to interact with graphical summary plots of the data to investigate subsets and their statistical associations. These plots include histograms, binned scatterplots, binned parallel coordinate plots, timeline plots, and graphical correlation indicators. From the EDENx interface, a user can select a subsample of interest and launch a more detailed data visualization via the EDEN system. EDENx is best suited for high-level, aggregate analysis tasks, while EDEN is more appropriate for detailed data investigations.
Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale
Nykanen, V.; Raines, G.L.
2006-01-01
A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation, simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of aeromagnetic data representing differing flight heights, that is, spatial scales, decreases with decreasing resolution of source data. The Moran autocorrelation coefficient seems to be another measure of the quality of the aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
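The weights-of-evidence contrast used above as the measure of predictive power can be computed from simple cell counts. A hedged sketch with toy counts (not the Finnish data): for a binary pattern B (e.g. an aeromagnetic low present in a cell) and deposit cells D, W+ = ln[P(B|D)/P(B|not D)], W- is the analogue for the pattern's absence, and the contrast is C = W+ - W-.

```python
import math

# Toy cell counts (illustrative only, not the Lapland data):
# n_bd: cells with both pattern and deposit; n_b: cells with pattern;
# n_d: deposit cells; n_total: all cells in the study area.
def weights_of_evidence(n_bd, n_b, n_d, n_total):
    p_b_given_d = n_bd / n_d                       # P(B | D)
    p_b_given_nd = (n_b - n_bd) / (n_total - n_d)  # P(B | not D)
    w_plus = math.log(p_b_given_d / p_b_given_nd)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_nd))
    return w_plus, w_minus, w_plus - w_minus       # contrast C = W+ - W-

w_plus, w_minus, contrast = weights_of_evidence(30, 200, 50, 10_000)
print(w_plus, w_minus, contrast)  # positive contrast: the pattern is predictive
```

Degrading the data (here, upward continuation to greater flight heights) smears the pattern, moving P(B|D) toward P(B|not D) and driving the contrast toward zero.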
Handbook of Scaling Methods in Aquatic Ecology: Measurement, Analysis, Simulation
NASA Astrophysics Data System (ADS)
Marrasé, Celia
2004-03-01
Researchers in aquatic sciences have long been interested in describing temporal and biological heterogeneities at different observation scales. During the 1970s, scaling studies received a boost from the application of spectral analysis to ecological sciences. Since then, new insights have evolved in parallel with advances in observation technologies and computing power. In particular, during the last 2 decades, novel theoretical achievements were facilitated by the use of microstructure profilers, the application of mathematical tools derived from fractal and wavelet analyses, and the increase in computing power that allowed more complex simulations. The idea of publishing the Handbook of Scaling Methods in Aquatic Ecology arose out of a special session of the 2001 Aquatic Science Meeting of the American Society of Limnology and Oceanography. The publication of the book is timely, because it compiles a good amount of the work done in these last 2 decades. The book comprises three sections: measurements, analysis, and simulation. Each contains some review chapters and a number of more specialized contributions. The contents are multidisciplinary and focus on biological and physical processes and their interactions over a broad range of scales, from micro-layers to ocean basins. The handbook topics include high-resolution observation methodologies, as well as applications of different mathematical tools for analysis and simulation of spatial structures, time variability of physical and biological processes, and individual organism behavior. The scientific background of the authors is highly diverse, ensuring broad interest for the scientific community.
Multi-scale statistical analysis of coronal solar activity
Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.
2016-07-08
Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.
Scaled-particle theory analysis of cylindrical cavities in solution.
Ashbaugh, Henry S
2015-04-01
The solvation of hard spherocylindrical solutes is analyzed within the context of scaled-particle theory, which takes the view that the free energy of solvating an empty cavitylike solute is equal to the pressure-volume work required to inflate a solute from nothing to the desired size and shape within the solvent. Based on our analysis, an end cap approximation is proposed to predict the solvation free energy as a function of the spherocylinder length from knowledge regarding only the solvent density in contact with a spherical solute. The framework developed is applied to extend Reiss's classic implementation of scaled-particle theory and a previously developed revised scaled-particle theory to spherocylindrical solutes. To test the theoretical descriptions developed, molecular simulations of the solvation of infinitely long cylindrical solutes are performed. In hard-sphere solvents classic scaled-particle theory is shown to provide a reasonably accurate description of the solvent contact correlation and resulting solvation free energy per unit length of cylinders, while the revised scaled-particle theory fitted to measured values of the contact correlation provides a quantitative free energy. Applied to the Lennard-Jones solvent at a state-point along the liquid-vapor coexistence curve, however, classic scaled-particle theory fails to correctly capture the dependence of the contact correlation. Revised scaled-particle theory, on the other hand, provides a quantitative description of cylinder solvation in the Lennard-Jones solvent with a fitted interfacial free energy in good agreement with that determined for purely spherical solutes. The breakdown of classical scaled-particle theory does not result from the failure of the end cap approximation, however, but is indicative of neglected higher-order curvature dependences on the solvation free energy. PMID:25974499
Bridgman crystal growth in low gravity - A scaling analysis
NASA Technical Reports Server (NTRS)
Alexander, J. I. D.; Rosenberger, Franz
1990-01-01
The results of an order-of-magnitude or scaling analysis are compared with those of numerical simulations of the effects of steady low gravity on compositional nonuniformity in crystals grown by the Bridgman-Stockbarger technique. In particular, the results are examined of numerical simulations of the effect of steady residual acceleration on the transport of solute in a gallium-doped germanium melt during directional solidification under low-gravity conditions. The results are interpreted in terms of the relevant dimensionless groups associated with the process, and scaling techniques are evaluated by comparing their predictions with the numerical results. It is demonstrated that, when convective transport is comparable with diffusive transport, some specific knowledge of the behavior of the system is required before scaling arguments can be used to make reasonable predictions.
Moderated regression analysis and Likert scales: too coarse for comfort.
Russell, C J; Bobko, P
1992-06-01
One of the most commonly accepted models of relationships among three variables in applied industrial and organizational psychology is the simple moderator effect. However, many authors have expressed concern over the general lack of empirical support for interaction effects reported in the literature. We demonstrate in the current sample that use of a continuous dependent-response scale instead of a discrete Likert-type scale causes moderated regression analysis effect sizes to increase an average of 93%. We suggest that use of relatively coarse Likert scales to measure fine dependent responses causes information loss that, although varying widely across subjects, greatly reduces the probability of detecting true interaction effects. Specific recommendations for alternate research strategies are made. PMID:1601825
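The attenuation mechanism described in this abstract can be illustrated with a small Monte Carlo sketch on synthetic data (not the authors' sample): simulate a true interaction effect, coarsen the continuous response onto five Likert-type categories, and compare the incremental R² of the product term in a moderated regression.

```python
import numpy as np

# Synthetic illustration: a true x*m interaction is simulated, then the
# continuous response is coarsened onto a 5-point Likert-type scale to
# show attenuation of the measured interaction effect.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                          # predictor
m = rng.normal(size=n)                          # moderator
y = x + m + 0.3 * x * m + rng.normal(size=n)    # true interaction present

def interaction_delta_r2(y_obs):
    """R^2 gain from adding the x*m product term in moderated regression."""
    X_main = np.column_stack([np.ones(n), x, m])
    X_full = np.column_stack([X_main, x * m])
    def r2(X):
        beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
        resid = y_obs - X @ beta
        return 1.0 - resid.var() / y_obs.var()
    return r2(X_full) - r2(X_main)

# Coarsen y onto five ordered categories at its quintiles.
edges = np.quantile(y, [0.2, 0.4, 0.6, 0.8])
y_likert = np.digitize(y, edges) + 1.0

d_cont = interaction_delta_r2(y)
d_likert = interaction_delta_r2(y_likert)
print(d_cont, d_likert)  # the coarsened response typically yields a smaller gain
```

The particular coarsening rule (quintile cut points) is an assumption made for the sketch; the qualitative point, that discretizing the response discards interaction-relevant variance, does not depend on it.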
Kenney, Terry A.
2005-01-01
A multi-dimensional hydrodynamic model was applied to aid in the assessment of the potential hazard posed to the uranium mill tailings near Moab, Utah, by flooding in the Colorado River as it flows through Moab Valley. Discharge estimates for the 100- and 500-year recurrence interval and for the Probable Maximum Flood (PMF) were evaluated with the model for the existing channel geometry. These discharges also were modeled for three other channel-deepening configurations representing hypothetical scour of the channel at the downstream portal of Moab Valley. Water-surface elevation, velocity distribution, and shear-stress distribution were predicted for each simulation. The hydrodynamic model was developed from measured channel topography and over-bank topographic data acquired from several sources. A limited calibration of the hydrodynamic model was conducted. The extensive presence of tamarisk or salt cedar in the over-bank regions of the study reach presented challenges for determining roughness coefficients. Predicted water-surface elevations for the current channel geometry indicated that the toe of the tailings pile would be inundated by about 4 feet by the 100-year discharge and 25 feet by the PMF discharge. A small area at the toe of the tailings pile was characterized by velocities of about 1 to 2 feet per second for the 100-year discharge. Predicted velocities near the toe for the PMF discharge increased to between 2 and 4 feet per second over a somewhat larger area. The manner in which velocities progress from the 100-year discharge to the PMF discharge in the area of the tailings pile indicates that the tailings pile obstructs the over-bank flow of flood discharges. The predicted path of flow for all simulations along the existing Colorado River channel indicates that the current distribution of tamarisk in the over-bank region affects how flood-flow velocities are spatially distributed. Shear-stress distributions were predicted throughout the study reach
Brabets, Timothy P.; Conaway, Jeffrey S.
2009-01-01
The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft3/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough. Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would
Analysis of Reynolds number scaling for viscous vortex reconnection
NASA Astrophysics Data System (ADS)
Ni, Qionglin; Hussain, Fazle; Wang, Jianchun; Chen, Shiyi
2012-10-01
A theoretical analysis of viscous vortex reconnection is developed based on scale separation, and the Reynolds number, Re (= circulation/viscosity), scaling for the reconnection time T_rec is derived. The scaling varies continuously as Re increases, from T_rec ~ Re^(-1) to T_rec ~ Re^(-1/2). This theoretical prediction agrees well with direct numerical simulations by Garten et al. [J. Fluid Mech. 426, 1 (2001)], 10.1017/S0022112000002251 and Hussain and Duraisamy [Phys. Fluids 23, 021701 (2011)], 10.1063/1.3532039. Moreover, our analysis yields two Reynolds numbers, namely, a characteristic Re_0.75 in [O(10^2), O(10^3)] for the T_rec ~ Re^(-0.75) scaling given by Hussain and Duraisamy, and the critical Re_c ~ O(10^4) for the transition after which the first reconnection is completed. For Re > Re_c, a quiescent state follows, and then a second reconnection may occur.
Empirical analysis of scaling and fractal characteristics of outpatients
NASA Astrophysics Data System (ADS)
Zhang, Li-Jiang; Liu, Zi-Xian; Guo, Jin-Li
2014-01-01
The paper uses power-law frequency distribution, power spectrum analysis, detrended fluctuation analysis, and surrogate data testing to evaluate outpatient registration data of two hospitals in China and to investigate the human dynamics of systems that use "first come, first served" (FCFS) protocols. The research results reveal that outpatient behavior follows scaling laws. The results also suggest that the time series of inter-arrival times exhibit 1/f noise and have positive long-range correlation. Our research may contribute to operational optimization and resource allocation in hospitals based on FCFS admission protocols.
Bicoherence analysis of model-scale jet noise.
Gee, Kent L; Atchley, Anthony A; Falco, Lauren E; Shepherd, Micah R; Ukeiley, Lawrence S; Jansen, Bernard J; Seiner, John M
2010-11-01
Bicoherence analysis has been used to characterize nonlinear effects in the propagation of noise from a model-scale, Mach-2.0, unheated jet. Nonlinear propagation effects are predominantly limited to regions near the peak directivity angle for this jet source and propagation range. The analysis also examines the practice of identifying nonlinear propagation by comparing spectra measured at two different distances and assuming far-field, linear propagation between them. This spectral comparison method can lead to erroneous conclusions regarding the role of nonlinearity when the observations are made in the geometric near field of an extended, directional radiator, such as a jet. PMID:21110528
Floodplain management in Africa: Large scale analysis of flood data
NASA Astrophysics Data System (ADS)
Padi, Philip Tetteh; Baldassarre, Giuliano Di; Castellarin, Attilio
2011-01-01
To mitigate a continuously increasing flood risk in Africa, sustainable actions are urgently needed. In this context, we describe a comprehensive statistical analysis of flood data in the African continent. The study refers to quality-controlled, large and consistent databases of flood data, i.e. maximum discharge values and time series of annual maximum flows. Probabilistic envelope curves are derived for the African continent by means of a large scale regional analysis. Moreover, some initial insights on the statistical characteristics of African floods are provided. The results of this study are relevant and provide initial indications to support flood management in Africa.
NASA Technical Reports Server (NTRS)
Cao, Yiding; Faghri, Amir; Chang, Won Soon
1989-01-01
An enthalpy transforming scheme is proposed to convert the energy equation into a nonlinear equation with the enthalpy, E, as the single dependent variable. The existing control-volume finite-difference approach is modified so that it can be applied to the numerical solution of Stefan problems. The model is tested by applying it to a three-dimensional freezing problem. The numerical results are in agreement with those existing in the literature. The model and its algorithm are further applied to a three-dimensional moving heat source problem, showing that the methodology is capable of handling complicated phase-change problems with fixed grids.
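The appeal of the enthalpy formulation is that the phase front never has to be tracked explicitly: a single conserved variable carries both sensible and latent heat, and temperature is recovered by inverting the enthalpy-temperature relation. A one-dimensional explicit sketch of this idea, with toy nondimensional parameters (the paper itself uses a control-volume finite-difference scheme in three dimensions):

```python
import numpy as np

# 1D two-phase Stefan problem via the enthalpy method on a fixed grid.
# Toy explicit sketch of the idea only, with made-up parameters.
L = 1.0            # latent heat
c = 1.0            # heat capacity (both phases)
k = 1.0            # thermal conductivity
Tm = 0.0           # melting temperature
nx = 50
dx = 1.0 / nx
dt = 1e-4          # dt*k/dx^2 = 0.25, within the explicit stability limit

def temperature(H):
    """Invert the enthalpy-temperature relation:
    H < 0 solid, 0 <= H <= L mushy (T = Tm), H > L liquid."""
    T = np.where(H < 0, Tm + H / c, Tm)
    return np.where(H > L, Tm + (H - L) / c, T)

H = np.full(nx, -c)        # start fully solid at T = Tm - 1
T = temperature(H)
for _ in range(2000):
    T[0] = 1.0             # hot Dirichlet boundary drives melting
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    lap[-1] = (T[-2] - T[-1]) / dx**2   # insulated far end
    H = H + dt * k * lap   # update the conserved variable, not T
    T = temperature(H)

melted = int(np.count_nonzero(H > L))   # cells that have fully melted
print(melted, float(T.max()), float(T.min()))
```

The melting front advances wherever cells accumulate enough enthalpy to cross the latent-heat plateau, with no interface-tracking logic anywhere in the update.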
Not Available
1982-01-01
The Department of Energy, Morgantown Energy Technology Center, has been supporting the development of flow models for Devonian shale gas reservoirs. The broad objectives of this modeling program are: (1) To develop and validate a mathematical model which describes gas flow through Devonian shales. (2) To determine the sensitive parameters that affect deliverability and recovery of gas from Devonian shales. (3) To recommend laboratory and field measurements for determination of those parameters critical to the productivity and timely recovery of gas from the Devonian shales. (4) To analyze pressure and rate transient data from observation and production gas wells to determine reservoir parameters and well performance. (5) To study and determine the overall performance of Devonian shale reservoirs in terms of well stimulation, well spacing, and resource recovery as a function of gross reservoir properties such as anisotropy, porosity and thickness variations, and boundary effects. The flow equations that are the mathematical basis of the two-dimensional model are presented. It is assumed that gas transport to producing wells in Devonian shale reservoirs occurs through a natural fracture system into which matrix blocks of contrasting physical properties deliver contained gas. That is, the matrix acts as a uniformly distributed gas source in a fracture medium. Gas desorption from pore walls is treated as a uniformly distributed source within the matrix blocks. 24 references.
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...
ERIC Educational Resources Information Center
Sengupta, Atanu; Pal, Naibedya Prasun
2012-01-01
Primary education is essential for the economic development in any country. Most studies give more emphasis to the final output (such as literacy, enrolment etc.) rather than the delivery of the entire primary education system. In this paper, we study the school level data from an Indian district, collected under the official DISE statistics. We…
A research on analysis method of land environment big data storage based on air-earth-life
NASA Astrophysics Data System (ADS)
Lu, Yanling; Li, Jingwen
2015-12-01
Urban development raises many land-environment problems, and with the support of 3S technology, research on the land environment has evolved into the stage of spatial-temporal scales. Combining the spatial, temporal, and attribute features of land-environment change within an "air-earth-life" framework for the study of patterns, this paper investigates analysis methods for land-environment big data storage, because traditional processing methods are limited in handling land-environment spatial-temporal data. The aim is to reflect the organic coupling relationships among the multi-dimensional elements of the land environment and to provide a theoretical basis for data storage in a big-data analysis and application platform for the land environment.
Perceptual security of encrypted images based on wavelet scaling analysis
NASA Astrophysics Data System (ADS)
Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.
2016-08-01
The scaling behavior of the pixel fluctuations of encrypted images is evaluated by using the detrended fluctuation analysis based on wavelets, a modern technique that has been successfully used recently for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that the encrypted images in which no understandable information can be visually appreciated and whose pixels look totally random present a persistent scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the DFA with wavelets is applied. This suggests that the scaling exponents of the encrypted images can be used as a perceptual security criterion in the sense that when their values are close to 0.5 (the white noise value) the encrypted images are more secure also from the perceptual point of view.
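The α ≈ 0.5 benchmark for uncorrelated data can be reproduced with ordinary polynomial-detrending DFA. A minimal sketch on synthetic random "pixels" (standard DFA-1, not the wavelet-based variant used in the paper):

```python
import numpy as np

# Minimal DFA-1: integrate the demeaned series, detrend it linearly in
# windows of size s, and fit the log-log slope of the RMS fluctuation.
def dfa_exponent(x, scales):
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)          # remove linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# Uncorrelated synthetic "pixels": alpha should come out near 0.5.
rng = np.random.default_rng(1)
pixels = rng.integers(0, 256, size=4096).astype(float)
alpha = dfa_exponent(pixels, np.array([16, 32, 64, 128, 256]))
print(alpha)
```

For real encrypted images the input would be a one-dimensional scan of pixel values rather than this synthetic stream; the window sizes chosen here are also illustrative.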
Scaling analysis for the investigation of slip mechanisms in nanofluids
NASA Astrophysics Data System (ADS)
Savithiri, S.; Pattamatta, Arvind; Das, Sarit K.
2011-07-01
The primary objective of this study is to investigate the effect of slip mechanisms in nanofluids through scaling analysis. The role of nanoparticle slip mechanisms in both water- and ethylene glycol-based nanofluids is analyzed by considering shape, size, concentration, and temperature of the nanoparticles. From the scaling analysis, it is found that all of the slip mechanisms are dominant in particles of cylindrical shape as compared to that of spherical and sheet particles. The magnitudes of slip mechanisms are found to be higher for particles of size between 10 and 80 nm. The Brownian force is found to dominate in smaller particles below 10 nm and also at smaller volume fraction. However, the drag force is found to dominate in smaller particles below 10 nm and at higher volume fraction. The effect of thermophoresis and Magnus forces is found to increase with the particle size and concentration. In terms of time scales, the Brownian and gravity forces act considerably over a longer duration than the other forces. For copper-water-based nanofluid, the effective contribution of slip mechanisms leads to a heat transfer augmentation which is approximately 36% over that of the base fluid. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it.
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and
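The standard way to estimate the fractal dimension the abstract discusses is box counting: cover the feature with boxes of decreasing size and fit the slope of log(count) against log(1/size). A self-contained sketch on a synthetic binary mask (illustrative only, not tied to any remote-sensing product); a plain filled square, being a non-fractal 2-D set, should recover the Euclidean value of 2:

```python
import numpy as np

# Box-counting estimate of fractal dimension for a binary image mask.
def box_count_dimension(mask, sizes):
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # dimension = slope of log(count) versus log(1/box_size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes, float)),
                          np.log(counts), 1)
    return slope

# Sanity check on a filled 128x128 square inside a 256x256 image.
img = np.zeros((256, 256), dtype=bool)
img[64:192, 64:192] = True
d = box_count_dimension(img, [2, 4, 8, 16, 32])
print(d)
```

Applied to a thresholded remote-sensing band, a non-integer slope would indicate the stochastic-fractal texture behavior described above, and repeating the fit over windows yields the locally varying dimension the abstract mentions.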
Reactor Physics Methods and Analysis Capabilities in SCALE
DeHart, Mark D; Bowman, Stephen M
2011-01-01
The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.
SCALE 6: Comprehensive Nuclear Safety Analysis Code System
Bowman, Stephen M
2011-01-01
Version 6 of the Standardized Computer Analyses for Licensing Evaluation (SCALE) computer software system developed at Oak Ridge National Laboratory, released in February 2009, contains significant new capabilities and data for nuclear safety analysis and marks an important update for this software package, which is used worldwide. This paper highlights the capabilities of the SCALE system, including continuous-energy flux calculations for processing multigroup problem-dependent cross sections, ENDF/B-VII continuous-energy and multigroup nuclear cross-section data, continuous-energy Monte Carlo criticality safety calculations, Monte Carlo radiation shielding analyses with automated three-dimensional variance reduction techniques, one- and three-dimensional sensitivity and uncertainty analyses for criticality safety evaluations, two- and three-dimensional lattice physics depletion analyses, fast and accurate source terms and decay heat calculations, automated burnup credit analyses with loading curve search, and integrated three-dimensional criticality accident alarm system analyses using coupled Monte Carlo criticality and shielding calculations.
Scaling and dimensional analysis of acoustic streaming jets
Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H.
2014-09-15
This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.
Dehazing method through polarimetric imaging and multi-scale analysis
NASA Astrophysics Data System (ADS)
Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin
2015-05-01
An approach for haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that haze degrades the interpretation of remote sensing imagery through poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and the object radiance in the imaging procedure have been considered. On the other hand, the fact that objects and haze possess different frequency distribution properties has been emphasized, so multi-scale analysis through the wavelet transform has been employed to process separately the low-frequency components dominated by haze and the high-frequency coefficients carrying image details and edges. According to the measure of the polarization feature by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) have been taken in haze weather, from which the best polarized image I_min and the worst one I_max can be synthesized. Afterwards, those two haze-contaminated polarized images have been decomposed into different spatial layers with wavelet analysis; the low-frequency images have been processed via a polarization dehazing algorithm, while the high-frequency components are manipulated with a nonlinear transform. The ultimate haze-free image can then be reconstructed by inverse wavelet reconstruction. Experimental results verify that the dehazing method proposed in this study can strongly improve image visibility and increase detection distance through haze for imaging warning and remote sensing systems.
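The I_min/I_max synthesis described above follows directly from the Stokes parameters for linear polarization. A minimal per-pixel sketch in Python (the scalar interface and function name are illustrative; a real implementation would operate on whole image arrays):

```python
import math

def stokes_extrema(i0, i45, i90):
    """Synthesize the best (I_min) and worst (I_max) linearly polarized
    intensities from measurements at 0, 45, and 90 degrees, via the
    standard Stokes relations for linear polarization."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # 0/90-degree preference
    s2 = 2.0 * i45 - s0      # +45/-45-degree preference
    p = math.hypot(s1, s2)   # linearly polarized part of the intensity
    return (s0 - p) / 2.0, (s0 + p) / 2.0   # (I_min, I_max)

# Unpolarized light gives I_min == I_max; fully polarized light pushes
# all intensity into I_max.
unpolarized = stokes_extrema(0.5, 0.5, 0.5)
polarized = stokes_extrema(1.0, 0.5, 0.0)
```

For dehazing, I_max concentrates the partially polarized airlight, which is why the pair brackets the haze contribution.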
Comparative context analysis of codon pairs on an ORFeome scale
Moura, Gabriela; Pinheiro, Miguel; Silva, Raquel; Miranda, Isabel; Afreixo, Vera; Dias, Gaspar; Freitas, Adelaide; Oliveira, José L; Santos, Manuel AS
2005-01-01
Codon context is an important feature of gene primary structure that modulates mRNA decoding accuracy. We have developed an analytical software package and a graphical interface for comparative codon context analysis of all the open reading frames in a genome (the ORFeome). Using the complete ORFeome sequences of Saccharomyces cerevisiae, Schizosaccharomyces pombe, Candida albicans and Escherichia coli, we show that this methodology permits large-scale codon context comparisons and provides new insight on the rules that govern the evolution of codon-pair context. PMID:15774029
Irregularities and scaling in signal and image processing: multifractal analysis
NASA Astrophysics Data System (ADS)
Abry, Patrice; Jaffard, Stéphane; Wendt, Herwig
2015-03-01
B. Mandelbrot gave a new birth to the notions of scale invariance, self-similarity and non-integer dimensions, gathering them as the founding corner-stones used to build up fractal geometry. The first purpose of the present contribution is to review and relate together these key notions, explore their interplay and show that they are different facets of a single intuition. Second, we will explain how these notions lead to the derivation of the mathematical tools underlying multifractal analysis. Third, we will reformulate these theoretical tools into a wavelet framework, hence enabling their better theoretical understanding as well as their efficient practical implementation. B. Mandelbrot used his concept of fractal geometry to analyze real-world applications of very different natures. As a tribute to his work, applications of various origins, and where multifractal analysis proved fruitful, are revisited to illustrate the theoretical developments proposed here.
NASA Technical Reports Server (NTRS)
Wood, William A., III
2002-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.
A Multi-scale Approach to Urban Thermal Analysis
NASA Technical Reports Server (NTRS)
Gluch, Renne; Quattrochi, Dale A.
2005-01-01
An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.
Investigation of Biogrout processes by numerical analysis at pore scale
NASA Astrophysics Data System (ADS)
Bergwerff, Luke; van Paassen, Leon A.; Picioreanu, Cristian; van Loosdrecht, Mark C. M.
2013-04-01
Biogrout is a soil improving process that aims to improve the strength of sandy soils. The process is based on microbially induced calcite precipitation (MICP). In this study the main process is based on denitrification facilitated by bacteria indigenous to the soil using substrates, which can be derived from pretreated waste streams containing calcium salts of fatty acids and calcium nitrate, making it a cost-effective and environmentally friendly process. The goal of this research is to improve the understanding of the process by numerical analysis so that it may be improved and applied properly for varying applications, such as borehole stabilization, liquefaction prevention, levee fortification and mitigation of beach erosion. During the denitrification process there are many phases present in the pore space, including a liquid phase containing solutes, crystals, bacteria forming biofilms and gas bubbles. Due to the number of phases and their dynamic changes (multiphase flow with (non-linear) reactive transport), there are many interactions making the process very complex. To understand this complexity in the system, the interactions between these phases are studied in a reductionist approach, increasing the complexity of the system by one phase at a time. The model will initially include flow, solute transport, crystal nucleation and growth in 2D at pore scale. The flow will be described by the Navier-Stokes equations. Initial studies and simulations have revealed that describing crystal growth for this application on a fixed grid can introduce significant fundamental errors. Therefore a level set method will be employed to better describe the interface of developing crystals in between sand grains. Afterwards the model will be expanded to 3D to provide more realistic flow, nucleation and clogging behaviour at pore scale. Next biofilms and lastly gas bubbles may be added to the model. From the results of these pore scale models the behaviour of the system may be
Problems of allometric scaling analysis: examples from mammalian reproductive biology.
Martin, Robert D; Genoud, Michel; Hemelrijk, Charlotte K
2005-05-01
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best
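The abstract does not specify the authors' new non-parametric line-fitting technique; the Theil-Sen estimator sketched below is a classic robust alternative in the same spirit (median of pairwise slopes, no normality assumption) and illustrates why such fits resist outliers:

```python
from itertools import combinations
from statistics import median

def theil_sen(xs, ys):
    """Robust non-parametric line fit: slope is the median of all pairwise
    slopes, intercept is the median residual.  Outliers barely move the
    median, and no distributional assumptions are required."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2)
              if x2 != x1]
    m = median(slopes)
    b = median(y - m * x for x, y in zip(xs, ys))
    return m, b

# A gross outlier at x = 5 barely moves the fitted line y ~ 2x:
xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 5.9, 8.0, 30.0]   # last point corrupted
m, b = theil_sen(xs, ys)
```

An ordinary least-squares fit on the same data would be dragged far from slope 2 by the single corrupted point.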
Flux Coupling Analysis of Genome-Scale Metabolic Network Reconstructions
Burgard, Anthony P.; Nikolaev, Evgeni V.; Schilling, Christophe H.; Maranas, Costas D.
2004-01-01
In this paper, we introduce the Flux Coupling Finder (FCF) framework for elucidating the topological and flux connectivity features of genome-scale metabolic networks. The framework is demonstrated on genome-scale metabolic reconstructions of Helicobacter pylori, Escherichia coli, and Saccharomyces cerevisiae. The analysis allows one to determine whether any two metabolic fluxes, v1 and v2, are (1) directionally coupled, if a non-zero flux for v1 implies a non-zero flux for v2 but not necessarily the reverse; (2) partially coupled, if a non-zero flux for v1 implies a non-zero, though variable, flux for v2 and vice versa; or (3) fully coupled, if a non-zero flux for v1 implies not only a non-zero but also a fixed flux for v2 and vice versa. Flux coupling analysis also enables the global identification of blocked reactions, which are all reactions incapable of carrying flux under a certain condition; equivalent knockouts, defined as the set of all possible reactions whose deletion forces the flux through a particular reaction to zero; and sets of affected reactions denoting all reactions whose fluxes are forced to zero if a particular reaction is deleted. The FCF approach thus provides a novel and versatile tool for aiding metabolic reconstructions and guiding genetic manipulations. PMID:14718379
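The three coupling classes can be read off from the feasible range [r_min, r_max] of the flux ratio v1/v2; in the FCF framework these bounds come from linear programs over the stoichiometric constraints. A toy Python classifier assuming the bounds are already computed (the thresholding logic is a simplification of the published framework, not its implementation):

```python
def classify_coupling(r_min, r_max, tol=1e-9):
    """Classify the coupling between fluxes v1 and v2 from the feasible
    range [r_min, r_max] of the ratio v1/v2.  The bounds are assumed
    precomputed (in FCF they come from linear programs)."""
    inf = float("inf")
    if r_min > tol and abs(r_max - r_min) < tol:
        return "fully coupled"          # ratio fixed and non-zero
    if r_min > tol and r_max < inf:
        return "partially coupled"      # ratio variable, bounded away from 0 and inf
    if r_min > tol or r_max < inf:
        return "directionally coupled"  # one flux forces the other, not vice versa
    return "uncoupled"

examples = [classify_coupling(2.0, 2.0),
            classify_coupling(0.5, 5.0),
            classify_coupling(0.0, 5.0),
            classify_coupling(0.0, float("inf"))]
```

A bounded ratio with r_min = 0 means v1 can vanish while v2 flows but not the reverse, which is exactly the directional case defined in the abstract.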
Analysis of hydrological triggered clayey landslides by small scale experiments
NASA Astrophysics Data System (ADS)
Spickermann, A.; Malet, J.-P.; van Asch, T. W. J.; Schanz, T.
2010-05-01
Hydrological processes, such as slope saturation by water, are a primary cause of landslides. Triggers include intense rainfall, snowmelt, and changes in groundwater levels. Hydrological processes can trigger a landslide and control subsequent movement. In order to forecast potential landslides, it is important to know both the mechanism leading to failure, to evaluate whether a slope will fail or not, and the mechanisms that control the movement of the failed mass, to estimate how much material will move and in what time. Despite numerous studies, there is still uncertainty in the explanation of the processes determining failure and post-failure behaviour. The background and motivation of the study is the Barcelonnette area, part of the Ubaye Valley in the southern French Alps, which is highly affected by hydrologically controlled landslides in reworked black marls. Because landslide processes are too complex to understand from field observation alone, experiments and computer simulations are used. The main focus of this work is to analyse the initiation of failure and the post-failure behaviour of hydrologically triggered landslides in clays by small-scale experiments, namely small-scale flume tests and centrifuge tests. Although much effort has been devoted to investigating the landslide problem with small-scale or even large-scale slope experiments, there is still no optimal solution. Small-scale flume tests are often criticised because of scale effects, which dominate in dense sands and cohesive materials, and boundary problems. By means of centrifuge tests the scale problem with respect to stress conditions is overcome, but centrifuge testing is accompanied by its own difficulties. The objectives of the work are 1) to review potential failure and post-failure mechanisms, 2) to evaluate small-scale experiments, namely flume and centrifuge tests, in the analysis of the failure behaviour in clayey slopes and 3) to interpret the
Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Eczema , ringworm , and psoriasis ...
NASA Astrophysics Data System (ADS)
Verma, Sanjeet K.; Oliveira, Elson P.
2013-08-01
In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013) obtained from linear discriminant analysis (LDA) of natural logarithm-transformed ratios of major elements and immobile major and trace elements in acid magmas to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide a more objective statistical method than the commonly used subjective method of determining boundaries by eye judgment. The use of major element data readjusted to 100% on an anhydrous basis from the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites highlight the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was also clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), which were all previously suggested to have been emplaced in post-collision to within-plate settings. The seventh case
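The natural-log ratio transform feeding such discriminant functions is simple to state. A hypothetical Python sketch (the oxide list, reference element, and values are illustrative only; the published diagrams use specific element sets and discriminant coefficients from Verma et al., 2013 that are not reproduced here):

```python
import math

def log_ratios(oxides, reference="SiO2"):
    """Natural-log ratios of major-element concentrations relative to a
    reference oxide, the kind of transform fed into LDA.  The element
    names and reference choice are illustrative, not the published scheme."""
    ref = oxides[reference]
    return {k: math.log(v / ref) for k, v in oxides.items() if k != reference}

# Hypothetical major-element analysis in wt%:
sample = {"SiO2": 70.0, "TiO2": 0.35, "Al2O3": 14.0, "MgO": 0.7}
lr = log_ratios(sample)
```

Working with log-ratios removes the unit-sum constraint of compositional data, which is one reason such transforms precede the discriminant analysis.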
Reliability analysis of a utility-scale solar power plant
NASA Astrophysics Data System (ADS)
Kolb, G. J.
1992-10-01
This paper presents the results of a reliability analysis for a solar central receiver power plant that employs a salt-in-tube receiver. Because reliability data for a number of critical plant components have only recently been collected, this is the first time a credible analysis can be performed. This type of power plant will be built by a consortium of western US utilities led by the Southern California Edison Company. The 10 MW plant is known as Solar Two and is scheduled to be on-line in 1994. It is a prototype which should lead to the construction of 100 MW commercial-scale plants by the year 2000. The availability calculation was performed with the UNIRAM computer code. The analysis predicted a forced outage rate of 5.4 percent and an overall plant availability, including scheduled outages, of 91 percent. The code also identified the most important contributors to plant unavailability. Control system failures were identified as the most important cause of forced outages. Receiver problems were rated second with turbine outages third. The overall plant availability of 91 percent exceeds the goal identified by the US utility study. This paper discusses the availability calculation and presents evidence why the 91 percent availability is a credible estimate.
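The quoted figures are mutually consistent under the common convention that forced and scheduled outages combine multiplicatively. A sketch (the 3.8% scheduled-outage fraction is a hypothetical value chosen to reproduce the reported numbers; the actual UNIRAM bookkeeping may differ):

```python
def overall_availability(forced_outage_rate, scheduled_outage_rate):
    """Availability when forced and scheduled outages combine
    multiplicatively (one common convention, assumed here)."""
    return (1.0 - forced_outage_rate) * (1.0 - scheduled_outage_rate)

# Quoted 5.4% forced outage rate plus a hypothetical ~3.8% scheduled
# outage fraction reproduces the reported ~91% overall availability.
a = overall_availability(0.054, 0.038)
```

The spread between 94.6% (forced outages only) and 91% overall is what the scheduled-maintenance allowance absorbs.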
13C metabolic flux analysis at a genome-scale.
Gopalakrishnan, Saratram; Maranas, Costas D
2015-11-01
Metabolic models used in 13C metabolic flux analysis generally include a limited number of reactions primarily from central metabolism. They typically omit degradation pathways, complete cofactor balances, and atom transition contributions for reactions outside central metabolism. This study addresses the impact on prediction fidelity of scaling up mapping models to a genome scale. The core mapping model employed in this study accounts for 75 reactions and 65 metabolites, primarily from central metabolism. The genome-scale metabolic mapping model (GSMM) (697 reactions and 595 metabolites) is constructed using as a basis the iAF1260 model upon eliminating reactions guaranteed not to carry flux based on growth and fermentation data for a minimal glucose growth medium. Labeling data for 17 amino acid fragments obtained from cells fed with glucose labeled at the second carbon were used to obtain fluxes and ranges. Metabolic fluxes and confidence intervals are estimated, for both core and genome-scale mapping models, by minimizing the sum of squares of differences between predicted and experimentally measured labeling patterns using the EMU decomposition algorithm. Overall, we find that both the topology and the estimated values of the metabolic fluxes remain largely consistent between the core and genome-scale models. Stepping up to a genome-scale mapping model leads to wider flux inference ranges for 20 key reactions present in the core model. The glycolysis flux range doubles due to the possibility of active gluconeogenesis, the TCA flux range expands by 80% due to the availability of a bypass through arginine consistent with labeling data, and the transhydrogenase reaction flux is essentially unresolved due to the presence of as many as five routes for the inter-conversion of NADPH to NADH afforded by the genome-scale model. By globally accounting for ATP demands in the GSMM, the unused ATP decreases drastically, with the lower bound matching the maintenance ATP requirement. A non
Parallel Index and Query for Large Scale Data Analysis
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
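The core idea behind FastBit is the bitmap index: one bitmask per value bin, with range queries answered by OR-ing bin bitmaps instead of scanning rows. A toy Python version (class and method names are illustrative; the real FastBit additionally compresses its bitmaps):

```python
class BitmapIndex:
    """Toy bitmap index in the spirit of FastBit: one bitmask per bin,
    range queries answered by OR-ing bin bitmaps.  Names illustrative."""

    def __init__(self, values, bin_edges):
        self.edges = bin_edges
        self.bitmaps = [0] * (len(bin_edges) - 1)
        for row, v in enumerate(values):
            for b in range(len(bin_edges) - 1):
                if bin_edges[b] <= v < bin_edges[b + 1]:
                    self.bitmaps[b] |= 1 << row   # set this row's bit in bin b
                    break

    def query_ge(self, threshold):
        """Row ids with value >= threshold (bin-aligned for simplicity)."""
        mask = 0
        for b in range(len(self.bitmaps)):
            if self.edges[b] >= threshold:
                mask |= self.bitmaps[b]           # OR in every qualifying bin
        return [r for r in range(mask.bit_length()) if (mask >> r) & 1]

idx = BitmapIndex([0.2, 3.5, 7.8, 5.1], bin_edges=[0, 2, 4, 6, 8])
hits = idx.query_ge(4)   # rows whose bin starts at >= 4
```

Because the query is a few bitwise ORs over precomputed masks, its cost scales with the number of bins touched rather than the number of rows, which is what makes searches over billions of particles feasible.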
Anomaly Detection in Multiple Scale for Insider Threat Analysis
Kim, Yoohwan; Sheldon, Frederick T; Hively, Lee M
2012-01-01
We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided with semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to events that are more significant and suspicious. Then we build personal activity profiles in the form of score tables. Profiles are created in multiple scales, where the low-level profiles are aggregated toward more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created in different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally the subjects are ranked by their anomaly scores to help the analysts focus on high-scored subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user and including any additional insights to help the analysts. In this paper we show the framework of the proposed system and the operational algorithms.
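The comparison of a current activity profile against historical profiles can be sketched as a per-category deviation score. The z-score-style scoring below is a simple stand-in for the paper's semantic scoring rules, with hypothetical activity categories:

```python
from statistics import mean, stdev

def anomaly_score(history, current):
    """Deviation of today's activity profile from the historical baseline:
    sum of per-category z-scores.  A simple stand-in for the paper's
    semantic scoring rules; category names are hypothetical."""
    score = 0.0
    for cat, value in current.items():
        past = [day.get(cat, 0.0) for day in history]
        mu = mean(past)
        sigma = stdev(past) if len(past) > 1 else 1.0
        score += abs(value - mu) / (sigma or 1.0)   # guard against zero spread
    return score

history = [{"logins": 5, "files": 20}, {"logins": 6, "files": 22},
           {"logins": 5, "files": 18}]
normal = anomaly_score(history, {"logins": 5, "files": 20})
spike = anomaly_score(history, {"logins": 40, "files": 300})
```

Ranking subjects by this score, as the paper does, concentrates analyst attention on profiles like `spike` rather than ordinary days like `normal`.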
Application of wavelet transforms to reservoir data analysis and scaling
Panda, M.N.; Mosher, C.; Chopra, A.K.
1996-12-31
General characterization of physical systems uses two aspects of data analysis methods: decomposition of empirical data to determine model parameters and reconstruction of the image using these characteristic parameters. Spectral methods, involving a frequency based representation of data, usually assume stationarity. These methods, therefore, extract only the average information and hence are not suitable for analyzing data with isolated or deterministic discontinuities, such as faults or fractures in reservoir rocks or image edges in computer vision. Wavelet transforms provide a multiresolution framework for data representation. They are a family of orthogonal basis functions that separate a function or a signal into distinct frequency packets that are localized in the time domain. Thus, wavelets are well suited for analyzing non-stationary data. In other words, a projection of a function or a discrete data set onto a time-frequency space using wavelets shows how the function behaves at different scales of measurement. Because wavelets have compact support, it is easy to apply this transform to large data sets with minimal computations. We apply the wavelet transforms to one-dimensional and two-dimensional permeability data to determine the locations of layer boundaries and other discontinuities. By binning in the time-frequency plane with wavelet packets, permeability structures of arbitrary size are analyzed. We also apply orthogonal wavelets for scaling up of spatially correlated heterogeneous permeability fields.
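The localization property that makes wavelets suited to discontinuity detection is easy to demonstrate with the Haar wavelet: its detail coefficients vanish on constant stretches and spike at jumps. A sketch on a toy permeability profile (note that a jump falling exactly between two analysis pairs needs a shifted transform to be seen, one reason multiple scales and shifts are used in practice):

```python
def haar_detail(signal):
    """One level of the Haar wavelet transform: detail coefficients are
    zero where the signal is locally constant and large at jumps, so they
    flag layer boundaries in permeability data."""
    return [(signal[2 * i] - signal[2 * i + 1]) / 2.0
            for i in range(len(signal) // 2)]

# Two-layer permeability profile; the boundary falls inside the second pair:
perm = [100.0] * 3 + [5.0] * 5
details = haar_detail(perm)
boundary = max(range(len(details)), key=lambda i: abs(details[i]))
```

All the "energy" of the discontinuity lands in a single coefficient, in contrast with a Fourier representation, which would smear it across all frequencies.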
Two-field analysis of no-scale supergravity inflation
Ellis, John; García, Marcos A.G.; Olive, Keith A.; Nanopoulos, Dimitri V.
2015-01-01
Since the building-blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values ≪ 0.1. We also calculate the non-Gaussianity measure f_NL, finding that it is well below the current experimental sensitivity.
Müller, Bernhard; Janka, Hans-Thomas
2014-06-10
Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M_☉, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies ⟨E⟩ of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M_☉ as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of the mean ν̄_e energy with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
Large scale rigidity-based flexibility analysis of biomolecules
Streinu, Ileana
2016-01-01
KINematics And RIgidity (KINARI) is an ongoing project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large-scale experiments, in particular, for (a) very large molecules, including bioassemblies with high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) is intended to work as seamlessly as possible on the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583
Cluster coarsening during polymer collapse: Finite-size scaling analysis
NASA Astrophysics Data System (ADS)
Majumder, Suman; Janke, Wolfhard
2015-06-01
We study the kinetics of the collapse of a single flexible polymer when it is quenched from a good solvent to a poor solvent. Results obtained from Monte Carlo simulations show that the collapse occurs through a sequence of events with the formation, growth and subsequent coalescence of clusters of monomers to a single compact globule. Particular emphasis is given in this work to the cluster growth during the collapse, analyzed via the application of finite-size scaling techniques. The growth exponent obtained in our analysis is suggestive of the universal Lifshitz-Slyozov mechanism of cluster growth. The methods used in this work could be of more general validity and applicable to other phenomena such as protein folding.
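The growth exponent extracted from such data is the slope of cluster size versus time on log-log axes. A minimal Python fit on synthetic data obeying the Lifshitz-Slyozov law S(t) ∝ t^(1/3) (the data here are fabricated for illustration, not simulation output):

```python
import math

def growth_exponent(times, sizes):
    """Least-squares slope of log(size) vs log(time), i.e. the exponent
    alpha in S(t) ~ t^alpha.  Lifshitz-Slyozov coarsening gives alpha = 1/3."""
    xs = [math.log(t) for t in times]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic cluster sizes growing as t^(1/3):
times = [10, 100, 1000, 10000]
sizes = [t ** (1 / 3) for t in times]
alpha = growth_exponent(times, sizes)
```

In a finite-size scaling analysis the same slope is estimated for several chain lengths and extrapolated, which removes the finite-size bias a single fit like this would carry.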
Large-Scale Quantitative Analysis of Painting Arts
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-01-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances. PMID:25501877
Multidimensional Scaling Analysis of the Dynamics of a Country Economy
Mata, Maria Eugénia
2013-01-01
This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely clusters of similarities and dissimilarities that distinguish periods of prosperity and crisis, growth and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for Portuguese society in the rising globalization process. PMID:24294132
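The MDS embedding referred to above can be illustrated with classical (Torgerson) scaling, which recovers low-dimensional coordinates from a matrix of pairwise dissimilarities by double-centering and eigendecomposition. This is a minimal sketch on toy data, not the paper's data or its specific MDS variant:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions
    from an n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]             # take the top-k eigenpairs
    L = np.sqrt(np.maximum(w[idx], 0.0))      # clamp tiny negative eigenvalues
    return V[:, idx] * L                      # n x k coordinate matrix

# Toy example: 4 points on a line; MDS recovers distance-preserving coordinates.
X = np.array([[0.0], [1.0], [2.0], [4.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=2)
```

For exactly Euclidean dissimilarities the embedded pairwise distances reproduce D; for empirical time-series dissimilarities the embedding is an approximation in the least-squares sense.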
Large-Scale Quantitative Analysis of Painting Arts
NASA Astrophysics Data System (ADS)
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paintings to build a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety in the medieval period. Interestingly, moreover, the increase in the roughness exponent as painting techniques such as chiaroscuro and sfumato advanced is consistent with historical circumstances.
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load, and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. Both of the problems show that the scaled
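The average frequency scale factor defined in the abstract is simply the mean ratio of wing frequencies to equivalent-plate frequencies, which is then used to map plate-analysis frequencies back to the wing. A numerical sketch with entirely hypothetical frequencies (the values below are illustrative, not from the report):

```python
import numpy as np

# Hypothetical first few natural frequencies (Hz) of the aircraft wing
# and of the full-scale equivalent plate model.
f_wing = np.array([3.1, 8.7, 14.2, 22.5])
f_plate = np.array([3.0, 8.2, 13.5, 21.0])

# Average frequency scale factor: mean ratio of wing to plate frequencies.
s_f = np.mean(f_wing / f_plate)

# Predict the wing frequencies from the equivalent plate analysis.
f_wing_pred = s_f * f_plate
```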
Spatial data analysis for exploration of regional scale geothermal resources
NASA Astrophysics Data System (ADS)
Moghaddam, Majid Kiavarz; Noorollahi, Younes; Samadzadegan, Farhad; Sharifi, Mohammad Ali; Itoi, Ryuichi
2013-10-01
Defining a comprehensive conceptual model of the resources sought is one of the most important steps in geothermal potential mapping. In this study, Fry analysis as a spatial distribution method and 5% well existence, distance distribution, weights of evidence (WofE), and evidential belief function (EBF) methods as spatial association methods were applied comparatively to known geothermal occurrences and to publicly available regional-scale geoscience data in Akita and Iwate provinces within the Tohoku volcanic arc, in northern Japan. Fry analysis and rose diagrams revealed similar directional patterns of geothermal wells and volcanoes, NNW-, NNE-, NE-trending faults, hot springs, and fumaroles. Among the spatial association methods, WofE defined a conceptual model consistent with real-world conditions, as confirmed by expert opinion. The results of the spatial association analyses quantitatively indicated that the known geothermal occurrences are strongly spatially associated with geological features such as volcanoes, craters, and NNW-, NNE-, NE-direction faults and with geochemical features such as hot springs, hydrothermal alteration zones, and fumaroles. Geophysical evidence includes temperature gradients over 100 °C/km and heat flow over 100 mW/m². In general, geochemical and geophysical data were better evidence layers than geological data for exploring geothermal resources. The spatial analyses of the case study area suggested that quantitative knowledge of hydrothermal geothermal resources is significantly useful for further exploration and for geothermal potential mapping in the region. The results can also be extended to regions with nearly similar characteristics.
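Of the spatial-association methods compared above, weights of evidence has a simple closed form: a positive weight W+ = ln[P(B|D)/P(B|D̄)] and a negative weight W− for a binary evidence layer B relative to known occurrences D, with their difference (the contrast C) measuring spatial association. A sketch with made-up cell counts; the counts are illustrative, not from the study:

```python
import math

def weights_of_evidence(n_bd, n_b, n_d, n_total):
    """Weights of evidence for a binary evidence layer B and occurrences D.
    n_bd: cells with both B present and a known occurrence
    n_b : cells with B present; n_d: cells with an occurrence; n_total: all cells."""
    p_b_given_d = n_bd / n_d                            # P(B | D)
    p_b_given_notd = (n_b - n_bd) / (n_total - n_d)     # P(B | not D)
    p_nb_given_d = 1.0 - p_b_given_d                    # P(not B | D)
    p_nb_given_notd = 1.0 - p_b_given_notd              # P(not B | not D)
    w_plus = math.log(p_b_given_d / p_b_given_notd)
    w_minus = math.log(p_nb_given_d / p_nb_given_notd)
    return w_plus, w_minus, w_plus - w_minus            # contrast C = W+ - W-

# Illustrative counts: 40 of 50 occurrences fall inside the evidence layer,
# which covers 200 of 10000 grid cells.
wp, wm, c = weights_of_evidence(n_bd=40, n_b=200, n_d=50, n_total=10000)
```

A large positive contrast indicates that the evidence layer is strongly spatially associated with the known occurrences.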
NASA Astrophysics Data System (ADS)
Shahmansouri, M.; Mamun, A. A.
2015-07-01
The effects of strong electrostatic interaction among highly charged dust on the multi-dimensional instability of dust-acoustic (DA) solitary waves in a magnetized strongly coupled dusty plasma have been investigated by the small-k perturbation expansion method. We found that a Zakharov-Kuznetsov equation governs the evolution of obliquely propagating small-amplitude DA solitary waves in such a strongly coupled dusty plasma. The parametric regimes for which the obliquely propagating DA solitary waves become unstable are identified. The basic properties, viz., amplitude, width, instability criterion, and growth rate, of these obliquely propagating DA solitary structures are found to be significantly modified by the effects of different physical strongly coupled dusty plasma parameters. The implications of our results in some space/astrophysical plasmas and some future laboratory experiments are briefly discussed.
Secondary Analysis of Large-Scale Assessment Data: An Alternative to Variable-Centred Analysis
ERIC Educational Resources Information Center
Chow, Kui Foon; Kennedy, Kerry John
2014-01-01
International large-scale assessments are now part of the educational landscape in many countries and often feed into major policy decisions. Yet, such assessments also provide data sets for secondary analysis that can address key issues of concern to educators and policymakers alike. Traditionally, such secondary analyses have been based on a…
Yu, Ping; Revis, Joana; Wuyts, Floris L; Zanaret, Michel; Giovanni, Antoine
2002-01-01
Various rating scales have been used for perceptual voice analysis including ordinal (ORD) scales and visual analog (VA) scales. The purpose of this study was to determine the most suitable scale for studies using perceptual voice analysis as a gold standard for validation of objective analysis protocols. The study was carried out on 74 female voice samples from 68 dysphonic patients and 6 controls. A panel of 4 raters with experience in perceptual analysis was asked to score voices according to the G component (overall quality) of the GRBAS system. Two rating scales were used. The first was a conventional 4-point ORD scale. The second was a modified VA (mVA) scale obtained by transforming the VA scale into an ORD scale using a weighted conversion scheme. Objective voice evaluation was performed using the EVA workstation. Objective measurements included acoustic, aerodynamic, and physiologic parameters as well as parameters based on nonlinear mathematics (e.g., Lyapunov coefficient). Instrumental measurements were compared with results of perceptual analysis using either the conventional ORD scale or the mVA scale. Results demonstrate that correlation between perceptual and objective voice judgments is better using the mVA scale than the conventional ORD scale (concordance, 88% vs. 64%). Data also indicate that the mVA scale described herein improves the correlation between objective and perceptual voice analysis. PMID:12417797
Large-scale reconstruction and phylogenetic analysis of metabolic environments
Borenstein, Elhanan; Kupiec, Martin; Feldman, Marcus W.; Ruppin, Eytan
2008-01-01
The topology of metabolic networks may provide important insights not only into the metabolic capacity of species, but also into the habitats in which they evolved. Here we introduce the concept of a metabolic network's “seed set”—the set of compounds that, based on the network topology, are exogenously acquired—and provide a methodological framework to computationally infer the seed set of a given network. Such seed sets form ecological “interfaces” between metabolic networks and their surroundings, approximating the effective biochemical environment of each species. Analyzing the metabolic networks of 478 species and identifying the seed set of each species, we present a comprehensive large-scale reconstruction of such predicted metabolic environments. The seed sets' composition significantly correlates with several basic properties characterizing the species' environments and agrees with biological observations concerning major adaptations. Species whose environments are highly predictable (e.g., obligate parasites) tend to have smaller seed sets than species living in variable environments. Phylogenetic analysis of the seed sets reveals the complex dynamics governing gain and loss of seeds across the phylogenetic tree and the process of transition between seed and non-seed compounds. Our findings suggest that the seed state is transient and that seeds tend either to be dropped completely from the network or to become non-seed compounds relatively fast. The seed sets also permit a successful reconstruction of a phylogenetic tree of life. The “reverse ecology” approach presented lays the foundations for studying the evolutionary interplay between organisms and their habitats on a large scale. PMID:18787117
Age Differences on Alcoholic MMPI Scales: A Discriminant Analysis Approach.
ERIC Educational Resources Information Center
Faulstich, Michael E.; And Others
1985-01-01
Administered the Minnesota Multiphasic Personality Inventory to 91 male alcoholics after detoxification. Results indicated that the Psychopathic Deviant and Paranoia scales declined with age, while the Responsibility scale increased with age. (JAC)
Sensitivity analysis and scale issues in landslide susceptibility mapping
NASA Astrophysics Data System (ADS)
Catani, Filippo; Lagomarsino, Daniela; Segoni, Samuele; Tofani, Veronica
2013-04-01
random forest enables estimation of the relative importance of the single input parameters and selection of the optimal configuration of the regression model. The model was initially applied using the complete set of input parameters, then with progressively smaller subsamples of the parameter space. Considering the best set of parameters, we also studied the impact of scale and accuracy of input variables and the influence of the RF model random component on the susceptibility results. We apply the model statistics to a test area in central Italy, the hydrographic basin of the Arno river (ca. 9000 km²), and present and discuss the obtained results. We also use the outcomes of the parameter sensitivity analysis to investigate the different roles of environmental factors in the test area.
Large-scale dimension densities for heart rate variability analysis
NASA Astrophysics Data System (ADS)
Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen
2006-04-01
In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF (ρ_ls^μ=0.97±0.02) group from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65±0.13 ; EH, 0.54±0.05 ; YH, 0.57±0.05 ; p<0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in ρ_ls^μ (day, 0.65±0.13 ; night, 0.66±0.12 ; n.s.) in contrast to healthy controls (day, 0.54±0.05 ; night, 0.61±0.05 ; p=0.002 ). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification.
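The Grassberger-Procaccia correlation sum underlying the LASDID estimate counts the fraction of embedded point pairs closer than a radius r; the slope of log C(r) versus log r estimates the correlation dimension. This sketch omits the paper's normalization and boundary-effect correction, and the embedding parameters are assumptions:

```python
import numpy as np

def correlation_sum(x, r, m=2, tau=1):
    """Grassberger-Procaccia correlation sum C(r) for a scalar series x,
    using delay embedding with dimension m and delay tau."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)            # each pair counted once
    return np.mean(d[iu] < r)

# For uniform noise in a 2-D embedding, C(r) ~ r^2 at small r, so the
# log-log slope approximates the correlation dimension (about 2 here).
rng = np.random.default_rng(0)
x = rng.random(1500)
rs = np.array([0.05, 0.1, 0.2])
cs = np.array([correlation_sum(x, r) for r in rs])
slope = np.polyfit(np.log(rs), np.log(cs), 1)[0]
```

Boundary effects at larger r flatten C(r), which is exactly the systematic error the normalized version of the algorithm described above is designed to correct.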
Large-scale dimension densities for heart rate variability analysis.
Raab, Corinna; Wessel, Niels; Schirdewan, Alexander; Kurths, Jürgen
2006-04-01
In this work, we reanalyze the heart rate variability (HRV) data from the 2002 Computers in Cardiology (CiC) Challenge using the concept of large-scale dimension densities and additionally apply this technique to data of healthy persons and of patients with cardiac diseases. The large-scale dimension density (LASDID) is estimated from the time series using a normalized Grassberger-Procaccia algorithm, which leads to a suitable correction of systematic errors produced by boundary effects in the rather large scales of a system. This way, it is possible to analyze rather short, nonstationary, and unfiltered data, such as HRV. Moreover, this method allows us to analyze short parts of the data and to look for differences between day and night. The circadian changes in the dimension density enable us to distinguish almost completely between real data and computer-generated data from the CiC 2002 challenge using only one parameter. In the second part we analyzed the data of 15 patients with atrial fibrillation (AF), 15 patients with congestive heart failure (CHF), 15 elderly healthy subjects (EH), as well as 18 young and healthy persons (YH). With our method we are able to separate completely the AF (rho (mu/ls) = 0.97 +/- 0.02) group from the others and, especially during daytime, the CHF patients show significant differences from the young and elderly healthy volunteers (CHF, 0.65 +/- 0.13; EH, 0.54 +/- 0.05; YH, 0.57 +/- 0.05; p < 0.05 for both comparisons). Moreover, for the CHF patients we find no circadian changes in rho (mu/ls) (day, 0.65 +/- 0.13; night, 0.66 +/- 0.12; n.s.) in contrast to healthy controls (day, 0.54 +/- 0.05; night, 0.61 +/- 0.05; p=0.002). Correlation analysis showed no statistically significant relation between standard HRV and circadian LASDID, demonstrating a possibly independent application of our method for clinical risk stratification. PMID:16711836
The Effect of Data Scaling on Dual Prices and Sensitivity Analysis in Linear Programs
ERIC Educational Resources Information Center
Adlakha, V. G.; Vemuganti, R. R.
2007-01-01
In many practical situations scaling the data is necessary to solve linear programs. This note explores the relationships in translating the sensitivity analysis between the original and the scaled problems.
Acoustic modal analysis of a full-scale annular combustor
NASA Technical Reports Server (NTRS)
Karchmer, A. M.
1982-01-01
An acoustic modal decomposition of the measured pressure field in a full scale annular combustor installed in a ducted test rig is described. The modal analysis, utilizing a least squares optimization routine, is facilitated by the assumption of randomly occurring pressure disturbances which generate equal amplitude clockwise and counter-clockwise pressure waves, and the assumption of statistical independence between modes. These assumptions are fully justified by the measured cross spectral phases between the various measurement points. The resultant modal decomposition indicates that higher order modes compose the dominant portion of the combustor pressure spectrum in the range of frequencies of interest in core noise studies. A second major finding is that, over the frequency range of interest, each individual mode which is present exists in virtual isolation over significant portions of the spectrum. Finally, a comparison between the present results and a limited amount of data obtained in an operating turbofan engine with the same combustor is made. The comparison is sufficiently favorable to warrant the conclusion that the structure of the combustor pressure field is preserved between the component facility and the engine.
MIXREGLS: A Program for Mixed-Effects Location Scale Analysis.
Hedeker, Donald; Nordgren, Rachel
2013-03-01
MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally-distributed dependent variable. This model can be used for analysis of data in which subjects may be measured at many observations and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062
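The core idea of a location scale model, modeling the log variance as well as the mean as a function of covariates, can be sketched in a simplified single-level form. This is not MIXREGLS itself (a Fortran program using EM and Newton-Raphson, with random effects), but a minimal maximum-likelihood illustration on simulated data, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(size=500)
# Simulate data where the mean AND the log variance both depend on x:
# mu = 1 + 2x, log(var) = 0.2 + 0.6x.
y = 1.0 + 2.0 * x + np.exp(0.5 * (0.2 + 0.6 * x)) * rng.normal(size=500)

def negloglik(theta):
    b0, b1, a0, a1 = theta
    mu = b0 + b1 * x          # location submodel
    logvar = a0 + a1 * x      # scale submodel (log-linear variance)
    # Gaussian negative log-likelihood up to an additive constant.
    return 0.5 * np.sum(logvar + (y - mu) ** 2 / np.exp(logvar))

fit = minimize(negloglik, x0=np.zeros(4), method="BFGS")
b0, b1, a0, a1 = fit.x
```

The fitted coefficients recover both the mean effect (b1 near 2) and the variance effect (a1 near 0.6), which a homoscedastic regression would miss.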
Heterogeneous reactions over fractal surfaces: A multifractal scaling analysis
Lee, Shyi-Long; Lee, Chung-Kung
1996-12-31
Monte Carlo simulations of modified Eley-Rideal mechanisms possessing decay-type and enhance-type sticking probabilities, as well as a three-step catalytic reaction over fractal surfaces, were performed to examine the morphological effect on the above-mentioned surface reactions. Effects of decay and enhancing profiles on the reaction probability distribution for Eley-Rideal reactions, as well as effects of varying the probability of reaction steps and cluster sizes on the normalized selectivity distribution for the three-step reaction, were then analyzed by multifractal scaling techniques. For the Eley-Rideal mechanism, the reaction probability distribution is found to be spatially uniform at fast decay rates and rather concentrated at fast enhancement rates. For the three-step reaction, increasing the cluster size is found to lower the position sensitivity of the normalized selectivity distribution. A large dimerization-to-isomerization ratio increases position distinction among active sites when the adsorption probability equals 1. At small adsorption probability, the dimerization/isomerization ratio has no effect on the normalized selectivity distribution. Heterogeneity of surfaces as reflected in the multifractal analysis will also be discussed.
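Multifractal scaling of a distribution over sites is commonly quantified through the partition function Z(q, ε) = Σᵢ pᵢ^q ~ ε^τ(q), with generalized dimensions D_q = τ(q)/(q−1). A sketch on a binomial multiplicative cascade, a standard test measure; the cascade and its parameters are illustrative, not the simulated reaction-probability data of the paper:

```python
import numpy as np

def generalized_dimensions(p_fine, qs, levels):
    """Box-counting multifractal analysis of a 1-D probability measure.
    p_fine: probabilities on the finest dyadic grid (length 2**levels).
    Returns D_q = tau(q)/(q-1), where tau(q) is the slope of
    log Z(q, eps) vs log eps across successive coarsenings."""
    Ds = []
    for q in qs:
        logZ, logeps = [], []
        p = p_fine.copy()
        for k in range(levels):
            logZ.append(np.log(np.sum(p ** q)))
            logeps.append(np.log(2.0 ** -(levels - k)))
            p = p.reshape(-1, 2).sum(axis=1)   # merge sibling boxes
        tau = np.polyfit(logeps, logZ, 1)[0]
        Ds.append(tau / (q - 1))
    return np.array(Ds)

# Binomial cascade with weight m = 0.7: exact D_q = -log2(m^q + (1-m)^q)/(q-1).
levels, m = 10, 0.7
p = np.array([1.0])
for _ in range(levels):
    p = np.concatenate([(p * m)[:, None], (p * (1 - m))[:, None]], axis=1).ravel()
qs = np.array([2.0, 4.0])
D = generalized_dimensions(p, qs, levels)
```

A spectrum of D_q that varies with q (here D_2 ≈ 0.786, D_4 ≈ 0.670) is the signature of multifractality; a uniform distribution would give D_q = 1 for all q.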
Full-scale testing and analysis of fuselage structure
NASA Technical Reports Server (NTRS)
Miller, M.; Gruber, M. L.; Wilkins, K. E.; Worden, R. E.
1994-01-01
This paper presents recent results from a program in the Boeing Commercial Airplane Group to study the behavior of cracks in fuselage structures. The goal of this program is to improve methods for analyzing crack growth and residual strength in pressurized fuselages, thus improving new airplane designs and optimizing the required structural inspections for current models. The program consists of full-scale experimental testing of pressurized fuselage panels in both wide-body and narrow-body fixtures and finite element analyses to predict the results. The finite element analyses are geometrically nonlinear with material and fastener nonlinearity included on a case-by-case basis. The analysis results are compared with the strain gage, crack growth, and residual strength data from the experimental program. Most of the studies reported in this paper concern the behavior of single or multiple cracks in the lap joints of narrow-body airplanes (such as 727 and 737 commercial jets). The phenomenon in which the crack trajectory curves, creating a 'flap' and resulting in a controlled decompression, is discussed.
Acoustic modal analysis of a full-scale annular combustor
NASA Technical Reports Server (NTRS)
Karchmer, A. M.
1983-01-01
An acoustic modal decomposition of the measured pressure field in a full scale annular combustor installed in a ducted test rig is described. The modal analysis, utilizing a least squares optimization routine, is facilitated by the assumption of randomly occurring pressure disturbances which generate equal amplitude clockwise and counter-clockwise pressure waves, and the assumption of statistical independence between modes. These assumptions are fully justified by the measured cross spectral phases between the various measurement points. The resultant modal decomposition indicates that higher order modes compose the dominant portion of the combustor pressure spectrum in the range of frequencies of interest in core noise studies. A second major finding is that, over the frequency range of interest, each individual mode which is present exists in virtual isolation over significant portions of the spectrum. Finally, a comparison between the present results and a limited amount of data obtained in an operating turbofan engine with the same combustor is made. The comparison is sufficiently favorable to warrant the conclusion that the structure of the combustor pressure field is preserved between the component facility and the engine. Previously announced in STAR as N83-21896
Full-scale testing and analysis of fuselage structure
NASA Astrophysics Data System (ADS)
Miller, M.; Gruber, M. L.; Wilkins, K. E.; Worden, R. E.
1994-09-01
This paper presents recent results from a program in the Boeing Commercial Airplane Group to study the behavior of cracks in fuselage structures. The goal of this program is to improve methods for analyzing crack growth and residual strength in pressurized fuselages, thus improving new airplane designs and optimizing the required structural inspections for current models. The program consists of full-scale experimental testing of pressurized fuselage panels in both wide-body and narrow-body fixtures and finite element analyses to predict the results. The finite element analyses are geometrically nonlinear with material and fastener nonlinearity included on a case-by-case basis. The analysis results are compared with the strain gage, crack growth, and residual strength data from the experimental program. Most of the studies reported in this paper concern the behavior of single or multiple cracks in the lap joints of narrow-body airplanes (such as 727 and 737 commercial jets). The phenomenon in which the crack trajectory curves, creating a 'flap' and resulting in a controlled decompression, is discussed.
MIXREGLS: A Program for Mixed-Effects Location Scale Analysis
Hedeker, Donald; Nordgren, Rachel
2013-01-01
MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally-distributed dependent variable. This model can be used for analysis of data in which subjects may be measured at many observations and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062
MicroScale Thermophoresis: Interaction analysis and beyond
NASA Astrophysics Data System (ADS)
Jerabek-Willemsen, Moran; André, Timon; Wanner, Randy; Roth, Heide Marie; Duhr, Stefan; Baaske, Philipp; Breitsprecher, Dennis
2014-12-01
MicroScale Thermophoresis (MST) is a powerful technique to quantify biomolecular interactions. It is based on thermophoresis, the directed movement of molecules in a temperature gradient, which strongly depends on a variety of molecular properties such as size, charge, hydration shell or conformation. Thus, this technique is highly sensitive to virtually any change in molecular properties, allowing for a precise quantification of molecular events independent of the size or nature of the investigated specimen. During a MST experiment, a temperature gradient is induced by an infrared laser. The directed movement of molecules through the temperature gradient is detected and quantified using either covalently attached or intrinsic fluorophores. By combining the precision of fluorescence detection with the variability and sensitivity of thermophoresis, MST provides a flexible, robust and fast way to dissect molecular interactions. In this review, we present recent progress and developments in MST technology and focus on MST applications beyond standard biomolecular interaction studies. By using different model systems, we introduce alternative MST applications - such as determination of binding stoichiometries and binding modes, analysis of protein unfolding, thermodynamics and enzyme kinetics. In addition, we demonstrate the capability of MST to quantify high-affinity interactions with dissociation constants (Kds) in the low picomolar (pM) range as well as protein-protein interactions in pure mammalian cell lysates.
Scaling analysis of baseline dual-axis cervical accelerometry signals.
Sejdić, Ervin; Steele, Catriona M; Chau, Tom
2011-09-01
Dual-axis cervical accelerometry is an emerging approach for the assessment of swallowing difficulties. However, the baseline signals, i.e., vibration signals with only quiet breathing or apnea but without swallowing, are not well understood. In particular, to comprehend the contaminant effects of head motion on cervical accelerometry, we need to study the scaling behavior of these baseline signals. Dual-axis accelerometry data were collected from 50 healthy adult participants under conditions of quiet breathing, apnea and selected head motions, all in the absence of swallowing. The denoised cervical vibrations were subjected to detrended fluctuation analysis with empirically determined first-order detrending. Strong persistence was identified in cervical vibration signals in both anterior-posterior (A-P) and superior-inferior (S-I) directions, under all the above experimental conditions. Vibrations in the A-P axis exhibited stronger correlations than those in the S-I axis, possibly as a result of axis-specific effects of vasomotion. In both axes, stronger correlations were found in the presence of head motion than without, suggesting that head movement significantly impacts baseline cervical accelerometry. No gender or age effects were found on statistical persistence in either vibration axis. Future developments of cervical accelerometry-based medical devices should actively mitigate the effects of head movement. PMID:20708292
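Detrended fluctuation analysis with first-order detrending, as used above, integrates the signal, removes a linear trend within windows of each size n, and fits the scaling exponent α from log F(n) versus log n; α > 0.5 indicates persistence. A minimal sketch on synthetic white noise rather than accelerometry data:

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: returns the fluctuation
    function F(n) for each window size n in `scales`."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        seg = y[: m * n].reshape(m, n)         # non-overlapping windows
        t = np.arange(n)
        msq = []
        for s in seg:
            c = np.polyfit(t, s, 1)            # linear (first-order) detrend
            msq.append(np.mean((s - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=4000)                      # white noise: alpha near 0.5
scales = np.array([8, 16, 32, 64, 128])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

For an uncorrelated signal α stays near 0.5; the strongly persistent baseline vibrations reported above correspond to α well above 0.5.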
Numerical Simulation and Scaling Analysis of Cell Printing
NASA Astrophysics Data System (ADS)
Qiao, Rui; He, Ping
2011-11-01
Cell printing, i.e., printing three-dimensional (3D) structures of cells held in a tissue matrix, is gaining significant attention in the biomedical community. The key idea is to use an inkjet printer or similar device to print cells into 3D patterns with a resolution comparable to the size of mammalian cells. Achieving such a resolution in vitro can lead to breakthroughs in areas such as organ transplantation. Although the feasibility of cell printing has been demonstrated recently, the printing resolution and cell viability remain to be improved. Here we investigate a unit operation in cell printing, namely, the impact of a cell-laden droplet into a pool of highly viscous liquid. The droplet and cell dynamics are quantified using both direct numerical simulation and scaling analysis. These studies indicate that although cells experience significant stress during droplet impact, the duration of such stress is very short, which helps explain why many cells can survive the cell printing process. These studies also revealed that cell membranes can be temporarily ruptured during cell printing, which is supported by indirect experimental evidence.
ERIC Educational Resources Information Center
Martin, Andrew J.; Yu, Kai; Papworth, Brad; Ginns, Paul; Collie, Rebecca J.
2015-01-01
This study explored motivation and engagement among North American (the United States and Canada; n = 1,540), U.K. (n = 1,558), Australian (n = 2,283), and Chinese (n = 3,753) secondary school students. Motivation and engagement were assessed via students' responses to the Motivation and Engagement Scale-High School (MES-HS). Confirmatory…
ERIC Educational Resources Information Center
Schaffhauser, Dian
2009-01-01
The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…
JOHNSON, M.D.
2000-03-13
Fairbanks Weight Scales are used at the Waste Receiving and Processing (WRAP) facility to determine the weight of waste drums as they are received, processed, and shipped. Due to recent problems, discovered during calibration, the WRAP Engineering Department has completed this document which outlines both the investigation of the infeed conveyor scale failure in September of 1999 and recommendations for calibration procedure modifications designed to correct deficiencies in the current procedures.
Confirmatory Factor Analysis of the Geriatric Depression Scale
ERIC Educational Resources Information Center
Adams, Kathryn Betts; Matto, Holly C.; Sanders, Sara
2004-01-01
Purpose: The Geriatric Depression Scale (GDS) is widely used in clinical and research settings to screen older adults for depressive symptoms. Although several exploratory factor analytic structures have been proposed for the scale, no independent confirmation has been made available that would enable investigators to confidently identify scores…
Reliability and Validity Analysis of the Multiple Intelligence Perception Scale
ERIC Educational Resources Information Center
Yesil, Rustu; Korkmaz, Ozgen
2010-01-01
This study mainly aims to develop a scale to determine individual intelligence profiles based on self-perceptions. The study group consists of 925 students studying in various departments of the Faculty of Education at Ahi Evran University. A logical and statistical approach was adopted in scale development. Expert opinion was obtained for the…
Huerta, M.
1981-06-01
This report describes the mathematical analysis, the physical scale modeling, and a full-scale crash test of a railcar spent-nuclear-fuel shipping system. The mathematical analysis utilized a lumped-parameter model to predict the structural response of the railcar and the shipping cask. The physical scale modeling analysis consisted of two crash tests that used 1/8-scale models to assess railcar and shipping cask damage. The full-scale crash test, conducted with retired railcar equipment, was carefully monitored with onboard instrumentation and high-speed photography. Results of the mathematical and scale modeling analyses are compared with the full-scale test. 29 figures.
GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY
Lee, S
2008-05-28
Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns, mainly driven by temperature gradients, inside the vapor space of a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimation of the flow patterns within the vapor space. One is the reference nominal case. The other assumes a negative temperature gradient between the roof inner surface and the top grout surface temperatures, intended as a potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that ambient air comes into the vapor space of the vault through the lower-end ventilation hole and is heated by a Bénard-cell-type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations are discussed here.
A theoretical analysis of basin-scale groundwater temperature distribution
NASA Astrophysics Data System (ADS)
An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong
2015-03-01
The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins. Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814
Regional Scale Analysis of Extremes in an SRM Geoengineering Simulation
NASA Astrophysics Data System (ADS)
Muthyala, R.; Bala, G.
2014-12-01
Only a few studies in the past have investigated the statistics of extreme events under geoengineering. In this study, a global climate model is used to investigate the impact of solar radiation management on extreme precipitation events at the regional scale. The solar constant was reduced by 2.25% to counteract the global mean surface temperature change caused by a doubling of CO2 (2XCO2) from its preindustrial control value. Using daily precipitation rates, extreme events are defined as those which exceed the 99.9th percentile precipitation threshold. Extremes are substantially reduced in the geoengineering simulation: the magnitude of change is much smaller than in a simulation with doubled CO2. Regional analysis over 22 Giorgi land regions is also performed. Doubling of CO2 leads to an increase in the intensity of extreme (99.9th percentile) precipitation of 17.7% on a global-mean basis, with a maximum increase in intensity over the South Asian region of 37%. In the geoengineering simulation, there is a global-mean reduction in intensity of 3.8%, with a maximum reduction over the Tropical Ocean of 8.9%. Further, we find that the doubled-CO2 simulation shows an increase in the frequency of extremes (>50 mm/day) of 50-200%, with a global mean increase of 80%. In contrast, in the geoengineered climate there is a decrease in the frequency of extreme events of 20% globally, with a larger decrease over the Tropical Ocean of 30%. In both climate states (2XCO2 and geoengineering), the change in "extremes" is always greater than the change in "means" over large domains. We conclude that changes in precipitation extremes are larger in the 2XCO2 scenario compared to the preindustrial climate, while extremes decline slightly in the geoengineered climate. We are also investigating the changes in extreme statistics for daily maximum and minimum temperature, evapotranspiration and vegetation productivity. Results will be presented at the meeting.
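The percentile-threshold definition of extremes used above is easy to state precisely. The sketch below applies it to synthetic gamma-distributed daily precipitation; the distribution, the 1.1x intensification factor, and the record length are illustrative assumptions, not output from the study's climate model.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily precipitation (mm/day); a gamma distribution is a
# common stand-in for skewed daily rainfall. Values are illustrative.
control = rng.gamma(shape=0.5, scale=8.0, size=20 * 365)
perturbed = control * 1.1  # crude stand-in for an intensified climate

# Extremes defined as exceedances of the control 99.9th percentile,
# as in the abstract.
threshold = np.percentile(control, 99.9)
freq_control = np.mean(control > threshold)
freq_perturbed = np.mean(perturbed > threshold)
change_pct = 100 * (freq_perturbed - freq_control) / freq_control
```

Holding the threshold fixed at the control climate's 99.9th percentile is what allows frequency changes (such as the 50-200% increases quoted above) to be compared across climate states.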
Impact and fracture analysis of fish scales from Arapaima gigas.
Torres, F G; Malásquez, M; Troncoso, O P
2015-06-01
Fish scales from the Amazonian fish Arapaima gigas have been characterised to study their impact and fracture behaviour under three different environmental conditions. Scales were cut in two different directions to analyse the influence of the orientation of the collagen layers. The energy absorbed during impact tests was measured for each sample, and SEM images were taken after each test in order to analyse the failure mechanisms. The results showed that scales tested at cryogenic temperatures display brittle behaviour, while scales tested at room temperature did not fracture. Different failure mechanisms have been identified, analysed and compared with the failure modes that occur in bone. The impact energy obtained for fish scales was two to three times higher than the values reported for bone in the literature. PMID:25842120
ERIC Educational Resources Information Center
Reid, Corinne; Davis, Helen; Horlin, Chiara; Anderson, Mike; Baughman, Natalie; Campbell, Catherine
2013-01-01
Empathy is an essential building block for successful interpersonal relationships. Atypical empathic development is implicated in a range of developmental psychopathologies. However, assessment of empathy in children is constrained by a lack of suitable measurement instruments. This article outlines the development of the Kids' Empathic…
Wu, Hui-Chun; Hegelich, B.M.; Fernandez, J.C.; Shah, R.C.; Palaniyappan, S.; Jung, D.; Yin, L; Albright, B.J.; Bowers, K.; Huang, C.; Kwan, T.J.
2012-06-19
Two new experimental technologies enabled realization of the Break-out Afterburner (BOA): the high-quality Trident laser and free-standing carbon nanometer-scale targets. VPIC is a powerful tool for fundamental research on relativistic laser-matter interaction. Predictions from VPIC have been validated, including the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates the computation) has been optimized extensively for many different supercomputers. Laser-driven ions bring promising applications closer to realization, including ion-based fast ignition, active interrogation, and hadron therapy.
Estimating Cognitive Profiles Using Profile Analysis via Multidimensional Scaling (PAMS)
ERIC Educational Resources Information Center
Kim, Se-Kang; Frisby, Craig L.; Davison, Mark L.
2004-01-01
Two of the most popular methods of profile analysis, cluster analysis and modal profile analysis, have limitations. First, neither technique is adequate when the sample size is large. Second, neither method will necessarily provide profile information in terms of both level and pattern. A new method of profile analysis, called Profile Analysis via…
Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer
NASA Astrophysics Data System (ADS)
Schnieders, Jana; Garbe, Christoph
2014-05-01
The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near-surface turbulence. Near-surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near-surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic features at distinct scales. 2. The structure of these patterns changes with increasing wind stress and changing surface conditions. In [2], turbulent cell sizes have been shown to systematically decrease with increasing wind speed until saturation at u* = 0.7 cm/s is reached. Results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution a new model to estimate the heat flux is applied which is based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed. In this contribution, we will quantify the effect of small scale
Scaling parameters for PFBC cyclone separator system analysis
Gil, A.; Romeo, L.M.; Cortes, C.
1999-07-01
Laboratory-scale cold flow models have been used extensively to study the behavior of many installations. In particular, fluidized bed cold flow models have advanced the understanding of fluidized bed hydrodynamics. In order for the results of the research to be relevant to commercial power plants, cold flow models must be properly scaled. Many efforts have been made to understand the performance of fluidized beds, but up to now no attention has been paid to developing a comparable understanding of cyclone separator systems. CIRCE has worked on the development of scaling parameters to enable laboratory-scale equipment operating at room temperature to simulate the performance of cyclone separator systems. This paper presents the simplified scaling parameters and an experimental comparison of a cyclone separator system and a cold flow model constructed on the basis of those parameters. The cold flow model has been used to establish the validity of the scaling laws for cyclone separator systems and permits detailed room-temperature studies (determining the filtration effects of varying operating parameters and cyclone design) to be performed in a rapid and cost-effective manner. This valuable and reliable design tool will contribute to a more rapid and concise understanding of hot gas filtration systems based on cyclones. The study of the behavior of the cold flow model, including observation and measurement of flow patterns in cyclones and diplegs, will allow characterizing the performance of the full-scale ash removal system, establishing safe limits of operation and testing design improvements.
Differential rotation and cloud texture: Analysis using generalized scale invariance
Pflug, K.; Lovejoy, S. ); Schertzer, D. )
1993-02-14
The standard picture of atmospheric dynamics is that of an isotropic two-dimensional large scale and an isotropic three-dimensional small scale, the two separated by a dimensional transition called the "mesoscale gap." Evidence now suggests that, on the contrary, atmospheric fields, while strongly anisotropic, are nonetheless scale invariant right through the mesoscale. Using visible and infrared satellite cloud images and the formalism of generalized scale invariance (GSI), the authors attempt to quantify the anisotropy of cloud radiance fields in the range 1-1000 km. To do this, the statistical translational invariance of the fields is exploited by studying the anisotropic scaling of lines of constant Fourier amplitude. This allows the investigation of the change in shape and orientation of average structures with scale. For the three texturally and meteorologically very different images analyzed, three different generators of anisotropy are found that generally reproduce the Fourier-space anisotropy well. Although three cases are a small number from which to infer ensemble-averaged properties, the authors conclude that while cloud radiances are not isotropic (self-similar), they are nonetheless scaling. Since elsewhere (with the help of simulations) it is shown that the generator of the anisotropy is related to the texture, it is argued here that GSI could potentially provide a quantitative basis for cloud classification and modeling. 59 refs., 21 figs., 2 tabs.
A Critical Analysis of the Concept of Scale Dependent Macrodispersivity
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Cvetkovic, Vladimir; Dagan, Gedeon; Dietrich, Peter; Fiori, Aldo; Rubin, Yoram; Teutsch, Georg
2015-04-01
Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity αL (half the rate of change of the second spatial moment divided by the mean velocity). It was found that αL is generally scale dependent, increasing with the travel distance L of the plume centroid and eventually stabilizing at a constant value (Fickian regime). It was surmised in the literature that αL scales up with travel distance L following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al., 1988; Neuman, 1990; Xu and Eckstein, 1995; Schulze-Makuch, 2005) by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al., 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992; Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999) because they presumably offer a convenient prediction tool with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g., Neuman, 1990 and 2010; Hunt et al., 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported αL values. Our investigation concludes that transport, and particularly αL, is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g. the spatial distribution of hydraulic conductivity, and the use of adequate models.
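The parenthetical definition of αL above translates directly into a computation. The sketch below evaluates it for a synthetic plume whose second spatial moment grows linearly in time (a Fickian regime); the velocity and moment-growth values are illustrative assumptions, not data from any of the cited field experiments.

```python
import numpy as np

# Synthetic example: second spatial moment S(t) of a plume and mean velocity v.
v = 0.5                           # mean velocity, m/day (assumed)
t = np.linspace(1.0, 100.0, 100)  # time, days
S = 2 * 0.8 * v * t               # Fickian regime: S grows linearly, alpha_L = 0.8 m

# alpha_L = half the rate of change of the second moment, divided by v
dSdt = np.gradient(S, t)
alpha_L = dSdt / (2 * v)
```

In a pre-asymptotic (scale-dependent) regime S would grow faster than linearly and this same formula would return an αL that increases with travel distance, which is exactly the behavior whose universality the study disputes.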
Murray Gibson
2010-01-08
Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
Murray Gibson
2007-04-27
Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
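The integer-ratio claim in the two entries above is easy to verify numerically. The sketch below uses equal-tempered frequencies, which only approximate the just ratios 3/2 and 5/4, hence the tolerances; the note names and the A4 = 440 Hz reference are standard conventions, not details from the talks.

```python
# Equal-tempered note frequencies only approximate the just ratios
# 3/2 (perfect fifth) and 5/4 (major third), so the checks use tolerances.
A4 = 440.0

def freq(semitones_from_a4):
    """Frequency of a note a given number of semitones away from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

C4 = freq(-9)   # C below A4
E4 = freq(-5)
G4 = freq(-2)

fifth = G4 / C4   # ~3/2 (G/C)
third = E4 / C4   # ~5/4 (E/C)
```

The small mismatch (the equal-tempered fifth is 2^(7/12) ≈ 1.498 rather than exactly 1.5) is the compromise equal temperament makes so that all twelve keys share the same interval sizes.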
An item response theory analysis of the Olweus Bullying scale.
Breivik, Kyrre; Olweus, Dan
2014-12-01
In the present article, we used IRT (graded response) modeling as a useful technology for a detailed and refined study of the psychometric properties of the various items of the Olweus Bullying scale and the scale itself. The sample consisted of a very large number of Norwegian 4th-10th grade students (n = 48,926). The IRT analyses revealed that the scale was essentially unidimensional and had excellent reliability in the upper ranges of the latent bullying tendency trait, as intended and desired. Gender DIF effects were identified with regard to girls' use of indirect bullying by social exclusion and boys' use of physical bullying by hitting and kicking, but these effects were small and worked in opposite directions, having negligible effects at the scale level. Also, scale scores adjusted for DIF effects differed very little from non-adjusted scores. In conclusion, the empirical data were well characterized by the chosen IRT model, and the Olweus Bullying scale was considered well suited for the conduct of fair and reliable comparisons involving different gender-age groups. Aggr. Behav., 2014. © 2014 Wiley Periodicals, Inc. PMID:25460720
Rating Scale Analysis and Psychometric Properties of the Caregiver Self-Efficacy Scale for Transfers
ERIC Educational Resources Information Center
Cipriani, Daniel J.; Hensen, Francine E.; McPeck, Danielle L.; Kubec, Gina L. D.; Thomas, Julie J.
2012-01-01
Parents and caregivers faced with the challenges of transferring children with disability are at risk of musculoskeletal injuries and/or emotional stress. The Caregiver Self-Efficacy Scale for Transfers (CSEST) is a 14-item questionnaire that measures self-efficacy for transferring under common conditions. The CSEST yields reliable data and valid…
ERIC Educational Resources Information Center
Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.
2010-01-01
The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…
Refining a self-assessment of informatics competency scale using Mokken scaling analysis.
Yoon, Sunmoo; Shaffer, Jonathan A; Bakken, Suzanne
2015-01-01
Healthcare environments are increasingly implementing health information technology (HIT) and those from various professions must be competent to use HIT in meaningful ways. In addition, HIT has been shown to enable interprofessional approaches to health care. The purpose of this article is to describe the refinement of the Self-Assessment of Nursing Informatics Competencies Scale (SANICS) using analytic techniques based upon item response theory (IRT) and discuss its relevance to interprofessional education and practice. In a sample of 604 nursing students, the 93-item version of SANICS was examined using non-parametric IRT. The iterative modeling procedure included 31 steps comprising: (1) assessing scalability, (2) assessing monotonicity, (3) assessing invariant item ordering, and (4) expert input. SANICS was reduced to an 18-item hierarchical scale with excellent reliability. Fundamental skills for team functioning and shared decision making among team members (e.g. "using monitoring systems appropriately," "describing general systems to support clinical care") had the highest level of difficulty, and "demonstrating basic technology skills" had the lowest difficulty level. Most items reflect informatics competencies relevant to all health professionals. Further, the approaches can be applied to construct a new hierarchical scale or refine an existing scale related to informatics attitudes or competencies for various health professions. PMID:26652630
ERIC Educational Resources Information Center
Redfield, Joel
1978-01-01
TMFA, a FORTRAN program for three-mode factor analysis and individual-differences multidimensional scaling, is described. Program features include a variety of input options, extensive preprocessing of input data, and several alternative methods of analysis. (Author)
Three-dimensional analysis of scale morphology in bluegill sunfish, Lepomis macrochirus.
Wainwright, Dylan K; Lauder, George V
2016-06-01
Fish scales are morphologically diverse among species, within species, and on individuals. Scales of bony fishes are often categorized into three main types: cycloid scales have smooth edges; spinoid scales have spines protruding from the body of the scale; ctenoid scales have interdigitating spines protruding from the posterior margin of the scale. For this study, we used two- and three-dimensional (2D and 3D) visualization techniques to investigate scale morphology of bluegill sunfish (Lepomis macrochirus) on different regions of the body. Micro-CT scanning was used to visualize individual scales taken from different regions, and a new technique called GelSight was used to rapidly measure the 3D surface structure and elevation profiles of in situ scale patches from different regions. We used these data to compare the surface morphology of scales from different regions, using morphological measurements and surface metrology metrics to develop a set of shape variables. We performed a discriminant function analysis to show that bluegill scales differ across the body - scales are cycloid on the opercle but ctenoid on the rest of the body, and the proportion of ctenii coverage increases ventrally on the fish. Scales on the opercle and just below the anterior spinous dorsal fin were smaller in height, length, and thickness than scales elsewhere on the body. Surface roughness did not appear to differ over the body of the fish, although scales at the start of the caudal peduncle had higher skew values than other scales, indicating they have a surface that contains more peaks than valleys. Scale shape also differs along the body, with scales near the base of the tail having a more elongated shape. This study adds to our knowledge of scale structure and diversity in fishes, and the 3D measurement of scale surface structure provides the basis for future testing of functional hypotheses relating scale morphology to locomotor performance. PMID:27062451
SU-E-T-472: A Multi-Dimensional Measurements Comparison to Analyze a 3D Patient Specific QA Tool
Ashmeg, S; Jackson, J; Zhang, Y; Oldham, M; Yin, F; Ren, L
2014-06-01
Purpose: To quantitatively evaluate a 3D patient-specific QA tool using 2D film and 3D Presage dosimetry. Methods: A brain IMRT case was delivered to Delta4, EBT2 film and a Presage plastic dosimeter. The film was inserted in solid water slabs at 7.5 cm depth for measurement. The Presage dosimeter was inserted into a head phantom for 3D dose measurement. Delta4's Anatomy software was used to calculate the corresponding dose to the film in the solid water slabs and to Presage in the head phantom. The results from Anatomy were compared to both the calculated results from Eclipse and the measured dose from film and Presage to evaluate its accuracy. Using RIT software, we compared the “Anatomy” dose to the EBT2 film measurement and the film measurement to the ECLIPSE calculation. For 3D analysis, the DICOM file from “Anatomy” was extracted and imported into the CERR software, which was used to compare the Presage dose to both the “Anatomy” calculation and the ECLIPSE calculation. Gamma criteria of 3% - 3mm and 5% - 5mm were used for comparison. Results: Gamma passing rates of film vs “Anatomy”, “Anatomy” vs ECLIPSE and film vs ECLIPSE were 82.8%, 70.9% and 87.6% respectively when the 3% - 3mm criterion is used. When the criterion is changed to 5% - 5mm, the passing rates became 87.8%, 76.3% and 90.8% respectively. For 3D analysis, Anatomy vs ECLIPSE showed gamma passing rates of 86.4% and 93.3% for 3% - 3mm and 5% - 5mm respectively. The rate is 77.0% for the Presage vs ECLIPSE analysis. The Anatomy vs ECLIPSE comparisons were absolute dose comparisons, whereas the film and Presage analyses were relative comparisons. Conclusion: The results show a higher passing rate in 3D than in 2D with the “Anatomy” software. This could be due to the higher degrees of freedom in 3D than in 2D for gamma analysis.
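The gamma criteria quoted above (e.g. 3% - 3mm) combine a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail index per point. The following is a minimal 1D global gamma sketch in Python; clinical tools such as RIT and CERR operate on interpolated 2D/3D grids, so this is only a conceptual illustration, and the Gaussian test profile is an assumed example.

```python
import numpy as np

def gamma_pass_rate(ref, evalu, x, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma analysis.

    dose_tol is a fraction of the maximum reference dose (e.g. 0.03 for 3%);
    dist_tol is in the same units as x (e.g. mm). Returns the percentage of
    reference points with gamma <= 1.
    """
    dmax = ref.max()
    gammas = []
    for xi, di in zip(x, ref):
        dose_diff = (evalu - di) / (dose_tol * dmax)  # normalized dose error
        dist = (x - xi) / dist_tol                    # normalized distance
        gamma = np.sqrt(dose_diff**2 + dist**2).min() # best trade-off point
        gammas.append(gamma)
    return 100 * np.mean(np.array(gammas) <= 1.0)
```

A distribution identical to the reference passes at 100%, and a spatial shift smaller than the distance tolerance still passes, which is why loosening the criterion from 3% - 3mm to 5% - 5mm raises the quoted passing rates.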
Guttman Facet Design and Analysis: A Technique for Attitude Scale Construction.
ERIC Educational Resources Information Center
Hamersma, Richard J.
The main import of the present paper is to discuss what Guttman facet design and analysis is and then to show how this technique can be used in attitude scale construction. Since Guttman is best known for his contribution to scaling theory known as scalogram analysis, a brief historical background is given to indicate how Guttman moved from a…
NASA Astrophysics Data System (ADS)
Jain, G.; Sharma, S.; Vyas, A.; Rajawat, A. S.
2014-11-01
This study attempts to measure and characterize urban sprawl using its multiple dimensions in the Jamnagar city, India. The study utilized multi-date satellite images acquired by the CORONA, IRS 1D PAN & LISS-3, IRS P6 LISS-4 and Resourcesat-2 LISS-4 sensors. The extent of urban growth in the study area was mapped at 1 : 25,000 scale for the years 1965, 2000, 2005 and 2011. The growth of urban areas was further categorized into infill growth, expansion and leapfrog development. The city witnessed growth of 1.60 % per annum during the period 2000-2011, whereas the population growth during the same period was observed at less than 1.0 % per annum. The new development in the city during the 2000-2005 period comprised 22 % infill development, 60 % extension of the peripheral urbanized areas, and 18 % leapfrog development. However, during the 2005-2011 timeframe the proportion of leapfrog development increased to 28 %, whereas, owing to a decrease in the availability of developable area within the city, infill development declined to 9 %. The urban sprawl in Jamnagar city was further characterized on the basis of five dimensions of sprawl, viz. population density, continuity, clustering, concentration and centrality, by integrating the population data with sprawl for the years 2001 and 2011. The study characterised the growth of Jamnagar as low-density, low-concentration outward expansion.
'Scaling' analysis of the ice accretion process on aircraft surfaces
NASA Technical Reports Server (NTRS)
Keshock, E. G.; Tabrizi, A. H.; Missimer, J. R.
1982-01-01
A comprehensive set of scaling parameters is developed for the ice accretion process by analyzing the energy equations of the dynamic freezing zone and the already-frozen ice layer, together with the continuity equation associated with supercooled liquid droplets entering into and impacting within the dynamic freezing zone. No initial arbitrary judgments are made regarding the relative magnitudes of the terms. The method of intrinsic reference variables is employed in order to develop the appropriate scaling parameters and their relative significance in rime icing conditions in an orderly process, rather than through empiricism. The significance of these parameters is examined, and the parameters are combined with scaling criteria related to droplet trajectory similitude.
Scale analysis of equatorial plasma irregularities derived from Swarm constellation
NASA Astrophysics Data System (ADS)
Xiong, Chao; Stolle, Claudia; Lühr, Hermann; Park, Jaeheung; Fejer, Bela G.; Kervalishvili, Guram N.
2016-07-01
In this study, we investigated the scale sizes of equatorial plasma irregularities (EPIs) using measurements from the Swarm satellites during their early mission and final constellation phases. We found that once the longitudinal separation between Swarm satellites exceeded 0.4°, no significant correlation remained. This result suggests that EPI structures include plasma density scale sizes of less than 44 km in the zonal direction. During the Swarm early mission phase, clearly better EPI correlations were obtained in the northern hemisphere, implying more fragmented irregularities in the southern hemisphere, where the ambient magnetic field is weak. The previously reported inverted-C shell structure of EPIs is generally confirmed by the Swarm observations in the northern hemisphere, but with various tilt angles. From the Swarm spacecraft with zonal separations of about 150 km, we conclude that larger zonal scale sizes of irregularities exist during the early evening hours (around 1900 LT).
Norman, Matthew R
2014-01-01
The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
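The single-step, strictly positive, flux-limited update described above can be illustrated in one spatial dimension. The sketch below is not the authors' ADER-DT/WENO scheme; it is a minimal flux-corrected transport (FCT) step of my own construction (upwind low-order flux plus a Zalesak-limited Lax-Wendroff correction, periodic grid) that exhibits the same single-stage, positivity-preserving, conservative character the abstract emphasizes:

```python
import numpy as np

def fct_advect(q, c):
    """One single-step FCT update for 1-D linear advection (u > 0, periodic),
    with Courant number c = u*dt/dx in (0, 1]. Low-order flux: upwind.
    High-order flux: Lax-Wendroff. The antidiffusive correction is limited
    Zalesak-style so the result stays within local bounds of the low-order
    solution (hence positivity for q >= 0) while remaining conservative."""
    qp = np.roll(q, -1)                                  # q[i+1]
    # fluxes at the right face i+1/2, in units of q (dt/dx folded into c)
    f_low = c * q                                        # upwind
    f_high = c * q + 0.5 * c * (1.0 - c) * (qp - q)      # Lax-Wendroff
    a = f_high - f_low                                   # antidiffusive flux
    # low-order transported-diffused solution (positivity preserving)
    qtd = q - (f_low - np.roll(f_low, 1))
    qmax = np.maximum.reduce([np.roll(qtd, 1), qtd, np.roll(qtd, -1)])
    qmin = np.minimum.reduce([np.roll(qtd, 1), qtd, np.roll(qtd, -1)])
    am = np.roll(a, 1)                                   # flux at face i-1/2
    p_plus = np.maximum(am, 0.0) - np.minimum(a, 0.0)    # antidiffusion into cell i
    p_minus = np.maximum(a, 0.0) - np.minimum(am, 0.0)   # antidiffusion out of cell i
    r_plus = np.where(p_plus > 0.0,
                      np.minimum(1.0, (qmax - qtd) / np.where(p_plus > 0, p_plus, 1.0)),
                      0.0)
    r_minus = np.where(p_minus > 0.0,
                       np.minimum(1.0, (qtd - qmin) / np.where(p_minus > 0, p_minus, 1.0)),
                       0.0)
    # limiter at face i+1/2 depends on the sign of the antidiffusive flux
    coef = np.where(a >= 0.0,
                    np.minimum(np.roll(r_plus, -1), r_minus),
                    np.minimum(r_plus, np.roll(r_minus, -1)))
    a = coef * a
    return qtd - (a - np.roll(a, 1))
```

Advecting a square wave for many steps keeps the solution non-negative and conserves total mass, the two properties a multi-stage unlimited scheme would sacrifice.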
NASA Astrophysics Data System (ADS)
Ren, Xiaodong; Xu, Kun; Shyy, Wei
2016-07-01
This paper presents a multi-dimensional high-order discontinuous Galerkin (DG) method in an arbitrary Lagrangian-Eulerian (ALE) formulation to simulate flows over variable domains with moving and deforming meshes. It is an extension of the gas-kinetic DG method proposed by the authors for static domains (X. Ren et al., 2015 [22]). A moving mesh gas-kinetic DG method is proposed for both inviscid and viscous flow computations. A flux integration method across a translating and deforming cell interface has been constructed. Unlike the previous ALE-type gas-kinetic method, which assumed a piecewise constant mesh velocity at each cell interface within each time step, the present scheme accounts for the mesh velocity variation inside a cell and for mesh translation and rotation at a cell interface within the finite element framework. As a result, the current scheme is applicable to any kind of mesh movement, such as translation, rotation, and deformation. The accuracy and robustness of the scheme have been improved significantly in the oscillating airfoil calculations. All computations are conducted in a physical domain rather than in a reference domain, and the basis functions move with the grid movement. Therefore, the numerical scheme can preserve the uniform flow automatically and satisfy the geometric conservation law (GCL). The numerical accuracy can be maintained even for a largely moving and deforming mesh. Several test cases are presented to demonstrate the performance of the gas-kinetic DG-ALE method.
HyFinBall: a two-handed, hybrid 2D/3D desktop VR interface for multi-dimensional visualization
NASA Astrophysics Data System (ADS)
Cho, Isaac; Wang, Xiaoyu; Wartell, Zachary J.
2013-12-01
This paper presents the concept, working prototype and design space of a two-handed, hybrid spatial user interface for minimally immersive desktop VR targeted at multi-dimensional visualizations. The user interface supports dual button balls (6DOF isotonic controllers with multiple buttons) which automatically switch between 6DOF mode (xyz + yaw, pitch, roll) and planar-3DOF mode (xy + yaw) upon contacting the desktop. The mode switch automatically switches a button ball's visual representation between a 3D cursor and a mouse-like 2D cursor while also switching the available user interaction techniques (ITs) between 3D and 2D ITs. Further, the small form factor of the button ball allows the user to engage in 2D multi-touch or 3D gestures without releasing and re-acquiring the device. We call the device and hybrid interface the HyFinBall interface, an abbreviation for `Hybrid Finger Ball.' We describe the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations and interactions.
Scaling Analysis of Repository Heat Load for Reduced Dimensionality Models
Itamua, Michael T.; Ho, Clifford K.
1998-06-04
The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment.
Analysis and testing of similarity and scale effects in hybrid rocket motors
NASA Astrophysics Data System (ADS)
Dayal Swami, Rajeshwar; Gany, Alon
2003-04-01
In order to derive proper scaling rules in hybrid rocket motors, a theoretical similarity analysis is presented. By taking account of the main phenomena and effects, the similarity analysis defines the following three main conditions for testing a laboratory-scale hybrid rocket motor that can simulate a full-scale motor: (1) geometric similarity, (2) same fuel and oxidizer combination, and (3) scaling the oxidizer mass flow rate in proportion to the motor port diameter. To verify the analysis, tests are conducted on different-size polymethylmethacrylate/gaseous oxygen hybrid rocket motors. These motors are scaled as per the similarity analysis and tested under similarity conditions. A fairly good agreement between the test results and the theoretical predictions verifies the similarity model. This also indicates that the main processes and effects associated with hybrid rocket combustion have been adequately considered in the analysis.
THE USEFULNESS OF SCALE ANALYSIS: EXAMPLES FROM EASTERN MASSACHUSETTS
Many water system managers and operators are curious about the value of analyzing the scales of drinking water pipes. Approximately 20 sections of lead service lines were removed in 2002 from various locations throughout the greater Boston distribution system, and were sent to ...
An Exploratory Factor Analysis of the Differential Ability Scales.
ERIC Educational Resources Information Center
Dunham, Mardis D.; McIntosh, David E.
The primary goal of this study was to investigate the underlying structure of the Differential Ability Scales (DAS) using Exploratory Principal Axis Factoring (PAF) with 62 nonclinical preschoolers. While previous factor analyses of the DAS Core subtests revealed the derivation of two distinct factors, the current results revealed only one factor,…
A Factor Analysis of the Research Self-Efficacy Scale.
ERIC Educational Resources Information Center
Bieschke, Kathleen J.; And Others
Counseling professionals' and counseling psychology students' interest in performing research seems to be waning. Identifying the impediments to graduate students' interest and participation in research is important if systematic efforts to engage them in research are to succeed. The Research Self-Efficacy Scale (RSES) was designed to measure…
Mental Models of Text and Film: A Multidimensional Scaling Analysis.
ERIC Educational Resources Information Center
Rowell, Jack A.; Moss, Peter D.
1986-01-01
Reports results of experiment to determine whether mental models are constructed of interrelationships and cross-relationships of character attributions drawn in themes of novels and films. The study used "Animal Farm" in print and cartoon forms. Results demonstrated validity of multidimensional scaling for representing both media. Proposes use of…
A Rasch Analysis of the Teachers Music Confidence Scale
ERIC Educational Resources Information Center
Yim, Hoi Yin Bonnie; Abd-El-Fattah, Sabry; Lee, Lai Wan Maria
2007-01-01
This article presents a new measure of teachers' confidence to conduct musical activities with young children; Teachers Music Confidence Scale (TMCS). The TMCS was developed using a sample of 284 in-service and pre-service early childhood teachers in Hong Kong Special Administrative Region (HKSAR). The TMCS consisted of 10 musical activities.…
Analysis of the time scales in time periodic Darcy flows
NASA Astrophysics Data System (ADS)
Zhu, T.; Waluga, C.; Wohlmuth, B.; Manhart, M.
2014-12-01
We investigate unsteady flow in a porous medium under a time-periodic (sinusoidal) pressure gradient. DNS were performed to benchmark the analytical solution of the unsteady Darcy equation with two different expressions for the time scale: one given by a consistent volume averaging of the Navier-Stokes equation [1] with a steady-state closure for the flow resistance term, the other given by volume averaging of the kinetic energy equation [2] with a closure for the dissipation rate. For small and medium frequencies, the analytical solutions with the time scale obtained by the energy approach compare well with the DNS results in terms of amplitude and phase lag. For large frequencies (f > 100 Hz) we observe a slightly smaller damping of the amplitude. This study supports the use of the unsteady form of Darcy's equation with constant coefficients to solve time-periodic Darcy flows at low and medium frequencies. Our DNS simulations, however, indicate that the time scale predicted by the VANS approach together with a steady-state closure for the flow resistance term is too small; the one obtained by the energy approach matches the DNS results well. At large frequencies, the amplitudes deviate slightly from the analytical solution of the unsteady Darcy equation. Note that at those high frequencies, the flow amplitudes remain below 1% of those of steady-state flow. This result indicates that unsteady porous media flow can approximately be described by the unsteady Darcy equation with constant coefficients over a large range of frequencies, provided the proper time scale is used.
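The amplitude damping and phase lag discussed above follow directly from the unsteady Darcy model with a single constant time scale. The sketch below assumes the common relaxation form tau*du/dt + u = u_ss*cos(2*pi*f*t) (the specific closures for tau are in refs. [1] and [2], not reproduced here):

```python
import numpy as np

# Unsteady Darcy model with one constant time scale tau (assumed form):
#   tau * du/dt + u = u_ss * cos(2*pi*f*t),
# where u_ss is the steady-state Darcy velocity for the same pressure
# gradient. The periodic solution is damped in amplitude and lags the forcing.

def amplitude_ratio(f, tau):
    """|u| / u_ss for forcing frequency f [Hz] and time scale tau [s]."""
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

def phase_lag(f, tau):
    """Phase lag [rad] of the Darcy velocity behind the pressure gradient."""
    return np.arctan(2.0 * np.pi * f * tau)
```

Two candidate values of tau (e.g. from the VANS and energy closures) can be compared against DNS simply by evaluating these two functions at the forcing frequencies: a too-small tau underpredicts both the damping and the lag.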
The Analysis of Dichotomous Test Data Using Nonmetric Multidimensional Scaling.
ERIC Educational Resources Information Center
Koch, William R.
The technique of nonmetric multidimensional scaling (MDS) was applied to real item response data obtained from a multiple-choice achievement test of unknown dimensionality. The goal was to classify the 50 items into the various subtests from which they were drawn originally, the latter being unknown to the investigator. Issues addressed in the…
Psychometric Analysis of Computer Science Help-Seeking Scales
ERIC Educational Resources Information Center
Pajares, Frank; Cheong, Yuk Fai; Oberman, Paul
2004-01-01
The purpose of this study was to develop scales to assess instrumental help seeking, executive help seeking, perceived benefits of help seeking, and avoidance of help seeking and to examine their psychometric properties by conducting factor and reliability analyses. As this is the first attempt to examine the latent structures underlying the…
Bohr model and dimensional scaling analysis of atoms and molecules
NASA Astrophysics Data System (ADS)
Svidzinsky, Anatoly; Chen, Goong; Chin, Siu; Kim, Moochan; Ma, Dongxia; Murawski, Robert; Sergeev, Alexei; Scully, Marlan; Herschbach, Dudley
It is generally believed that the old quantum theory, as presented by Niels Bohr in 1913, fails when applied to few electron systems, such as the H2 molecule. Here we review recent developments of the Bohr model that connect it with dimensional scaling procedures adapted from quantum chromodynamics. This approach treats electrons as point particles whose positions are determined by optimizing an algebraic energy function derived from the large-dimension limit of the Schrödinger equation. The calculations required are simple yet yield useful accuracy for molecular potential curves and bring out appealing heuristic aspects. We first examine the ground electronic states of H2, HeH, He2, LiH, BeH and Li2. Even a rudimentary Bohr model, employing interpolation between large and small internuclear distances, gives good agreement with potential curves obtained from conventional quantum mechanics. An amended Bohr version, augmented by constraints derived from Heitler-London or Hund-Mulliken results, dispenses with interpolation and gives substantial improvement for H2 and H3. The relation to D-scaling is emphasized. A key factor is the angular dependence of the Jacobian volume element, which competes with interelectron repulsion. Another version, incorporating principal quantum numbers in the D-scaling transformation, extends the Bohr model to excited S states of multielectron atoms. We also discuss kindred Bohr-style applications of D-scaling to the H atom subjected to superstrong magnetic fields or to atomic anions subjected to high frequency, superintense laser fields. In conclusion, we note correspondences to the prequantum bonding models of Lewis and Langmuir and to the later resonance theory of Pauling, and discuss prospects for joining D-scaling with other methods to extend its utility and scope.
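The "algebraic energy function" idea can be made concrete for a two-electron atom. The sketch below uses the simplest symmetric Bohr/large-D configuration (both electrons at the same radius r on opposite sides of the nucleus, atomic units); this particular configuration and the grid minimization are assumptions of the sketch, not details from the paper. The analytic optimum of this function is E = -(Z - 1/4)^2, i.e. -3.0625 hartree for helium, versus the experimental ground state of about -2.904 hartree:

```python
import numpy as np

def energy(r, Z=2.0):
    """Bohr-style algebraic energy (hartree) for a two-electron atom with both
    electrons at radius r (bohr) on opposite sides of the nucleus:
    kinetic 2*(1/(2 r^2)) = 1/r^2, attraction -2Z/r, and electron-electron
    repulsion 1/(2r), since the interelectron separation is 2r."""
    return 1.0 / r**2 - 2.0 * Z / r + 1.0 / (2.0 * r)

# Minimize on a fine grid; the optimum is r* = 1/(Z - 1/4), E* = -(Z - 1/4)^2.
r = np.linspace(0.05, 3.0, 300_000)
E = energy(r)
i = int(np.argmin(E))
print(r[i], E[i])
```

The few-line calculation illustrates the abstract's point: an algebraic energy function, optimized over electron positions, yields useful (if not spectroscopic) accuracy with almost no machinery.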
Scale Issues in Remote Sensing: A Review on Analysis, Processing and Modeling
Wu, Hua; Li, Zhao-Liang
2009-01-01
With the development of quantitative remote sensing, scale issues have attracted increasing attention from scientists. Research is now suffering from a severe scale discrepancy between data sources and the models used. Consequently, both data interpretation and model application become difficult due to these scale issues. Therefore, effectively scaling remotely sensed information at different scales has already become one of the most important research focuses of remote sensing. The aim of this paper is to demonstrate scale issues from the points of view of analysis, processing and modeling and to provide technical assistance when facing scale issues in remote sensing. The definition of scale and relevant terminologies are given in the first part of this paper. Then, the main causes of scale effects and the scaling effects on measurements, retrieval models and products are reviewed and discussed. Ways to describe the scale threshold and scale domain are briefly discussed. Finally, the general scaling methods, in particular up-scaling methods, are compared and summarized in detail. PMID:22573986
Large-scale computations in analysis of structures
McCallen, D.B.; Goudreau, G.L.
1993-09-01
Computer hardware and numerical analysis algorithms have progressed to a point where many engineering organizations and universities can perform nonlinear analyses on a routine basis. Though much remains to be done in terms of advancing nonlinear analysis techniques and characterizing nonlinear material constitutive behavior, the technology exists today to perform useful nonlinear analysis for many structural systems. In the current paper, a survey of nonlinear analysis technologies developed and employed for many years on programmatic defense work at the Lawrence Livermore National Laboratory is provided, and ongoing nonlinear numerical simulation projects relevant to the civil engineering field are described.
Ogrodnik, Peter J; Moorcroft, C Ian; Thomas, Peter B M
2007-12-01
An automated loading and measurement device has been developed for assessment of the mechanical properties of a healing human tibial fracture. The characteristics of the device are presented with assessments of errors. This paper constitutes a small part of a long term research project determining a clinically quantifiable end point for fracture healing in humans; hence a sample of results is presented to demonstrate the potential application of the device. A more detailed analysis of the results will be the basis of further publications. The initial results confirm that the non-linear behaviour of callus cannot be ignored in fracture assessment methodologies. They further reinforce the requirement to measure load-rate when measuring fracture stiffness. Polar plots of stiffness demonstrate that when measuring fracture stiffness not only should load-rate be considered, but also the orientation of measurement. The results from this work support the view that fracture stiffness should be measured in at least two planes. A new material property for the assessment of fracture healing, the gamma ratio (γ), is examined and preliminary results are shown. The paper also demonstrates how creep properties of a healing tibia can be assessed and proposes that this property may form the basis for future fracture assessment investigations. PMID:17875395
NASA Astrophysics Data System (ADS)
Sahbaee, Pooyan; Abadi, Ehsan; Sanders, Jeremiah; Becchetti, Marc; Zhang, Yakun; Agasthya, Greeshma; Segars, Paul; Samei, Ehsan
2016-03-01
The purpose of this study was to substantiate the interdependency of image quality, radiation dose, and contrast material dose in CT towards the patient-specific optimization of the imaging protocols. The study deployed two phantom platforms. First, a variable sized phantom containing an iodinated insert was imaged on a representative CT scanner at multiple CTDI values. The contrast and noise were measured from the reconstructed images for each phantom diameter. The contrast-to-noise ratio (CNR), which is linearly related to iodine concentration, was calculated for different iodine-concentration levels. Second, the analysis was extended to a recently developed suite of 58 virtual human models (5D-XCAT) with added contrast dynamics. Emulating a contrast-enhanced abdominal imaging procedure and targeting a peak enhancement in the aorta, each XCAT phantom was "imaged" using a CT simulation platform. 3D surfaces for each patient/size established the relationship between iodine concentration, dose, and CNR. The sensitivity ratio (SR), defined as the ratio of the change in iodine concentration to the change in dose yielding a constant change in CNR, was calculated and compared at high and low radiation dose for both phantom platforms. The results show that the sensitivity of CNR to iodine concentration is larger at high radiation dose (by up to 73%). The SR results were highly affected by the radiation dose metric used (CTDI or organ dose). Furthermore, the results showed that the presence of contrast material can have a profound impact on optimization results (up to 45%).
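The interdependency of CNR, iodine concentration, and dose can be illustrated with a toy model. The sketch below assumes contrast scales linearly with iodine (as the abstract states) and noise scales as 1/sqrt(dose); the calibration constant and the specific functional forms are my assumptions, not the paper's measured phantom data:

```python
import numpy as np

K = 0.05   # hypothetical calibration constant [CNR per (mgI/mL * sqrt(mGy))]

def cnr(iodine, dose):
    """Toy model: contrast proportional to iodine concentration,
    noise proportional to 1/sqrt(dose), so CNR = K * iodine * sqrt(dose)."""
    return K * iodine * np.sqrt(dose)

def sensitivity_ratio(iodine, dose, d_cnr=1.0):
    """Iodine increase versus dose increase that each buy the same CNR gain,
    mirroring the abstract's sensitivity-ratio idea within this toy model."""
    d_iodine = d_cnr / (K * np.sqrt(dose))
    # dose increase solving K * iodine * (sqrt(dose + d) - sqrt(dose)) = d_cnr
    d_dose = (np.sqrt(dose) + d_cnr / (K * iodine)) ** 2 - dose
    return d_iodine / d_dose
```

Even this crude model reproduces the qualitative trend that the sensitivity of CNR to iodine concentration (the partial derivative K*sqrt(dose)) grows with radiation dose; the quantitative 73% and 45% figures come from the paper's phantom measurements, not from this sketch.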
Boularas, A.; Baudoin, F.; Villeneuve-Faure, C.; Clain, S.; Teyssedre, G.
2014-08-28
Electric force-distance curves (EFDC) are one of the ways whereby electrical charges trapped at the surface of dielectric materials can be probed. To reach a quantitative analysis of stored charge quantities, measurements using an Atomic Force Microscope (AFM) must be accompanied by an appropriate simulation of the electrostatic forces at play in the method. This is the objective of this work, where simulation results for the electrostatic force between an AFM sensor and the dielectric surface are presented for different bias voltages on the tip. The aim is to analyse the force-distance curve modifications induced by electrostatic charges. The sensor is composed of a cantilever supporting a pyramidal tip terminated by a spherical apex; the contribution to the force from the cantilever is neglected here. A model of the force curve has been developed using the Finite Volume Method, with a scheme based on the Polynomial Reconstruction Operator (PRO scheme). First results of the computation of the electrostatic force for different tip-sample distances (from 0 to 600 nm) and for different DC voltages applied to the tip (6 to 20 V) are shown and compared with experimental data in order to validate our approach.
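For orientation, the small-gap limit of the tip-sample force can be estimated from a standard closed-form sphere-plane approximation, F ≈ pi*eps0*R*V^2/z for gap z much smaller than the apex radius R. This is a textbook estimate, not the paper's finite-volume PRO computation, and the apex radius used below is a hypothetical value:

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]

def sphere_plane_force(z, V, R=20e-9):
    """Approximate attractive force [N] between a biased spherical tip apex
    (radius R) at gap z above a grounded plane, valid for z << R:
        F ~ pi * eps0 * R * V**2 / z
    (from the small-gap sphere-plane capacitance C ~ 2*pi*eps0*R*ln(R/z))."""
    return np.pi * EPS0 * R * V**2 / z
```

The quadratic dependence on bias and the 1/z divergence at small gaps are the features a measured or simulated force-distance curve should reproduce before surface charge is added to the picture.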
Frequently Used Coping Scales: A Meta-Analysis.
Kato, Tsukasa
2015-10-01
This article reports the frequency of the use of coping scales in academic journals published from 1998 to 2010. Two thousand empirical journal articles were selected from the EBSCO database. The COPE, Ways of Coping Questionnaire, Coping Strategies Questionnaire, Coping Inventory for Stressful Situations, Religious-COPE and Coping Response Inventory were frequently mentioned. In particular, the COPE (20.2%) and Ways of Coping Questionnaire (13.6%) were used the most frequently. In the literature reviewed, coping scales were most often used to assess coping with health issues (e.g. illness, pain and medical diagnoses) over other types of stressors, and patients were the most frequent participants. Further, alpha coefficients were estimated for the COPE subscales, and correlations between the COPE subscales and coping outcomes were calculated, including depressive symptoms, anxiety, negative affect, psychological distress, physical symptoms and well-being. PMID:24338955
Crater ejecta scaling laws - Fundamental forms based on dimensional analysis
NASA Technical Reports Server (NTRS)
Housen, K. R.; Schmidt, R. M.; Holsapple, K. A.
1983-01-01
Self-consistent scaling laws are developed for meteoroid impact crater ejecta. Attention is given to the ejection velocity of material as a function of the impact point, the volume of ejecta with a threshold velocity, and the thickness of ejecta deposit in terms of the distance from the impact. Use is made of recently developed equations for energy and momentum coupling in cratering events. Consideration is given to scaling of laboratory trials up to real-world events and formulations are developed for calculating the ejection velocities and ejecta blanket profiles in the gravity and strength regimes of crater formation. It is concluded that, in the gravity regime, the thickness of an ejecta blanket is the same in all directions if the thickness and range are expressed in terms of the crater radius. In the strength regime, however, the ejecta velocities are independent of crater size, thereby allowing for asymmetric ejecta blankets. Controlled experiments are recommended for the gravity/strength transition.
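The gravity-regime scaling form can be sketched in a few lines. The power-law shape (ejection velocity normalized by sqrt(gR), falling off with launch position) follows the abstract; the numerical constants, gravity, crater radius, and the 45-degree ballistic assumption below are illustrative choices, not values from the paper:

```python
import numpy as np

G = 1.62        # surface gravity [m/s^2] (Moon; illustrative)
R = 100.0       # final crater radius [m] (illustrative)
C_V, EX = 0.7, 2.55   # power-law constants: illustrative, not fitted values

def ejection_velocity(x):
    """Gravity-regime scaling form v(x) = C_v * sqrt(g*R) * (x/R)**(-ex):
    material launched nearer the impact point leaves much faster."""
    return C_V * np.sqrt(G * R) * (x / R) ** (-EX)

def ballistic_range(x):
    """Landing distance from the crater centre for 45-degree ejection over
    flat terrain (projectile range v**2/g), plus the launch position x."""
    return x + ejection_velocity(x) ** 2 / G
```

Because velocity in the gravity regime scales with sqrt(gR), ranges expressed in crater radii collapse onto one profile for all crater sizes, which is exactly the abstract's conclusion that the ejecta blanket thickness is size-independent once normalized by the crater radius.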
Analysis plan for 1985 large-scale tests. Technical report
McMullan, F.W.
1983-01-01
The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.
Wavelet multiscale analysis for Hedge Funds: Scaling and strategies
NASA Astrophysics Data System (ADS)
Conlon, T.; Crane, M.; Ruskin, H. J.
2008-09-01
The wide acceptance of Hedge Funds by Institutional Investors and Pension Funds has led to an explosive growth in assets under management. These investors are drawn to Hedge Funds due to the seemingly low correlation with traditional investments and the attractive returns. The correlations and market risk (the Beta in the Capital Asset Pricing Model) of Hedge Funds are generally calculated using monthly returns data, which may produce misleading results as Hedge Funds often hold illiquid exchange-traded securities or difficult-to-price over-the-counter securities. In this paper, the Maximum Overlap Discrete Wavelet Transform (MODWT) is applied to measure the scaling properties of Hedge Fund correlation and market risk with respect to the S&P 500. It is found that the level of correlation and market risk varies greatly according to the strategy studied and the time scale examined. Finally, the effects of scaling properties on the risk profile of a portfolio made up of Hedge Funds are studied using correlation matrices calculated over different time horizons.
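Scale-by-scale correlation of two return series can be sketched as follows. The decomposition below is a simple moving-average cascade standing in for the MODWT (it is not the MODWT itself, and the dyadic windows are an assumption of this sketch), but it shows the same idea: split each series into detail components at dyadic scales, then correlate the components scale by scale:

```python
import numpy as np

def multiscale_details(x, levels=4):
    """Split a series into detail components at dyadic scales using a
    moving-average cascade (a crude stand-in for the Haar MODWT):
    detail_j = smooth_{j-1} - smooth_j, smoothing window 2**j samples."""
    details, smooth = [], np.asarray(x, dtype=float)
    for j in range(1, levels + 1):
        w = 2 ** j
        smoother = np.convolve(smooth, np.ones(w) / w, mode="same")
        details.append(smooth - smoother)
        smooth = smoother
    return details

def scale_correlations(x, y, levels=4):
    """Pearson correlation between two series, scale by scale."""
    dx, dy = multiscale_details(x, levels), multiscale_details(y, levels)
    return [np.corrcoef(a, b)[0, 1] for a, b in zip(dx, dy)]
```

Applied to a hedge fund return series and the S&P 500, a vector of such scale-wise correlations makes the paper's point visible: the headline monthly correlation can hide very different co-movement at short and long horizons.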
Two scale analysis applied to low permeability sandstones
NASA Astrophysics Data System (ADS)
Davy, Catherine; Song, Yang; Nguyen Kim, Thang; Adler, Pierre
2015-04-01
Low permeability materials are often composed of several pore structures of various scales, superposed one on another, and it is often impossible to measure and determine the macroscopic properties in one step. In the low permeability sandstones that we consider, the pore space is essentially made of micro-cracks between grains. These fissures are two-dimensional structures whose aperture is roughly of the order of one micron. On the grain scale, i.e., on the scale of 1 mm, the fissures form a network. These two structures can be measured using two different tools [1]. The density of the fissure network is estimated by trace measurements on two-dimensional images provided by classical 2D Scanning Electron Microscopy (SEM) with a pixel size of 2.2 microns. The three-dimensional geometry of the fissures is measured by laboratory X-ray micro-tomography (micro-CT) with a voxel size of 0.6 × 0.6 × 0.6 microns³. The macroscopic permeability is calculated in two steps. On the small scale, the fracture transmissivity is calculated by solving the Stokes equation on several portions of the fissures measured by micro-CT. On the large scale, the density of the fissures is estimated by three different means, based on the number of intersections with scanlines, on the surface density of fissures, and on the number of intersections between fissures per unit surface. These three means show that the network is relatively isotropic, and they provide very close estimations of the density. Then, a general formula derived from systematic numerical computations [2] is used to derive the macroscopic dimensionless permeability, which is proportional to the fracture transmissivity. The combination of the two previous results yields the dimensional macroscopic permeability, which is found to be in acceptable agreement with the experimental measurements. Some extensions of these preliminary works will be presented as a tentative conclusion. References [1] Z. Duan, C. A. Davy, F
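The two-step upscaling logic can be sketched with elementary formulas. The cubic law for transmissivity and the scanline density estimate are standard; the final combination function is a deliberately simplified stand-in for the general formula of ref. [2] (its network factor and characteristic length are hypothetical inputs here, not the paper's computed values):

```python
def transmissivity(aperture_m):
    """Cubic law for a plane fissure: the flow per unit width per unit
    pressure gradient scales as a**3 / 12 (divided by fluid viscosity)."""
    return aperture_m ** 3 / 12.0

def trace_density(n_intersections, scanline_length_m):
    """Scanline estimate of fissure density: trace intersections per unit
    length, one of the three density measures named in the abstract."""
    return n_intersections / scanline_length_m

def macroscopic_permeability(aperture_m, k_network, grain_size_m=1e-3):
    """Two-scale estimate k ~ K' * sigma / L: K' is a dimensionless network
    permeability (here a supplied number standing in for the formula of
    ref. [2]) and L a characteristic length (grain scale). Illustrative
    form only; it yields units of m**2 as a permeability should."""
    return k_network * transmissivity(aperture_m) / grain_size_m
```

The key structural point survives the simplification: the macroscopic permeability is proportional to the single-fissure transmissivity, with the fissure density entering only through the dimensionless network factor.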
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Sijtsma, Klaas; Pedersen, Susanne S.
2012-01-01
The Hospital Anxiety and Depression Scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken…
A quality assessment of 3D video analysis for full scale rockfall experiments
NASA Astrophysics Data System (ADS)
Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.
2012-04-01
The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture the velocity of rock boulders in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated; they differ mainly in the way the scaling factors between the video frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed along the slope in advance. The individual scaling approaches are briefly described as follows. (i) The image raster is scaled to the distant fixed scale bar and then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera; the distances between the camera, the scale bar, and the passing boulder are surveyed. (ii) The image raster is scaled using the four targets (identified using the frontal video) nearest to the trajectory to be analyzed, and the average of the scaling factors is taken as the final scaling factor. (iii) As in (ii), but the scaling factor for a trajectory is calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) As in (iii), but with scaling factors that vary along the trajectory. A direct measurement of the scaling target and the nearest impact zone has proven the most accurate. If a constant plane is assumed, the lateral deviations of the rock boulder from the fall line are not accounted for, which adds error to the analysis. A combination of scaling methods (i) and (iv) is therefore considered to give the best results.
[Development of the Trait Respect-Related Emotions Scale for late adolescence].
Muto, Sera
2016-02-01
This study developed a scale to measure respect-related emotional traits (the Trait Respect-Related Emotions Scale) for late adolescence and examined its reliability and validity. In Study 1, 368 university students completed the items of the Trait Respect-Related Emotions Scale and other scales of theoretically important personality constructs, including adult attachment style, the "Big Five," self-esteem, and two types of narcissistic personality. Factor analysis indicated that there are three factors of trait respect-related emotions: (a) trait (prototypical) respect; (b) trait idolatry (worship and adoration); and (c) trait awe. The three traits associated differentially with the daily experience (frequency) of the five basic respect-related emotions (prototypical respect, idolatry, awe, admiration, and wonder) and with the other constructs. In Study 2, a test-retest correlation of the new scale with 60 university students indicated good reliability. Both studies generally supported the reliability and validity of the new scale. These findings suggest that, at least in late adolescence, there are large individual differences in respect-related emotion experiences and the trait of respect should be considered as a multidimensional structure. PMID:26964371
NASA Astrophysics Data System (ADS)
Mutter, J. C.; Deraniyagala, S.; Mara, V.; Marinova, S.
2011-12-01
informal economy and will not register disaster setbacks in GDP accounts. The alterations to their lives can include loss of livelihood, loss of key assets such as livestock, loss of property and loss of savings, reduced life expectancy among survivors, increased poverty rates, increased inequality, greater subsequent maternal and child mortality (due to destruction of health care facilities), reduced educational attainment (lack of school buildings), increased gender-based violence, and psychological ailments. Our study enhances this literature in two ways. Firstly, it examines the effects of disasters on human development and poverty using cross-country econometric analysis with indicators of welfare that go beyond GDP; we aim to assess the impact of disasters on human development and absolute poverty. Secondly, we use peak ground acceleration for earthquakes, a modified Palmer Drought Severity Index, and hurricane energy, rather than disaster event occurrence, to account for the severity of the disaster.
NASA Astrophysics Data System (ADS)
Pachepsky, Y. A.
2009-12-01
Advances in sensor physics and technology create opportunities for explicit consideration of patterns in soil-vegetation-atmosphere systems (SVAS). The purpose of this talk is to provoke discussion on the current status of pattern analysis and interpretation in SVAS. The explicit consideration of patterns requires observations and analysis at scales that are both coarser and finer than the scale of interest. Within-scale scaling relationships are often observed in SVAS components. However, direct scaling relationships have not been discovered between scales, possibly because the different scales provide different types of information about the SVAS, use different variables to characterize SVAS, and exhibit different variability of the system. To transcend the scales, models are needed that explicitly treat the fine-scale heterogeneity and rare occurrences that control processes at the coarser scale. As patterns are generated from simulations and/or observations, methods are needed for pattern characterization and comparison. One promising direction here is the symbolic representation of patterns, which leads to the exploitation of methods developed in the bioinformatics community. Examples drawn from soil hydrology and micrometeorology will be used to make the argument that observation and analysis of patterns is an important part of understanding and quantifying relationships between structure, functioning and self-organization in SVAS and their components.
Multi-resolution analysis for ENO schemes
NASA Technical Reports Server (NTRS)
Harten, Ami
1991-01-01
Given a function, u(x), which is represented by its cell averages on cells formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to advance the solution by one time step. This is accomplished by decomposing the numerical solution at the beginning of each time step into levels of resolution, and performing the computation in each locality on the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
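The nested-grid decomposition of cell averages can be sketched as follows. This is a minimal Haar-like (first-order) version for a dyadic 1-D grid; Harten's framework allows higher-order recovery, and only details above a tolerance would be retained in practice.

```python
import numpy as np

def decompose(cell_avgs, levels):
    """Decompose fine-grid cell averages into coarse averages plus
    per-level details (the information lost by each coarsening step)."""
    u = np.asarray(cell_avgs, dtype=float)
    details = []
    for _ in range(levels):
        coarse = 0.5 * (u[0::2] + u[1::2])   # averages on next coarser grid
        details.append(u[0::2] - coarse)     # deviation of fine from coarse
        u = coarse
    return u, details

def reconstruct(coarse, details):
    """Invert the decomposition exactly, level by level."""
    u = coarse
    for d in reversed(details):
        fine = np.empty(2 * u.size)
        fine[0::2] = u + d
        fine[1::2] = u - d
        u = fine
    return u

u0 = np.sin(np.linspace(0, np.pi, 16))       # 16 fine-grid cell averages
c, ds = decompose(u0, 3)                     # down to a 2-cell grid
assert np.allclose(reconstruct(c, ds), u0)   # lossless round trip
```

In an ENO solver, localities whose details fall below the prescribed accuracy would be advanced on the coarser grid, saving flux evaluations where the solution is smooth.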
Stochastic analysis of a field-scale unsaturated transport experiment
NASA Astrophysics Data System (ADS)
Severino, G.; Comegna, A.; Coppola, A.; Sommella, A.; Santini, A.
2010-10-01
Modelling of field-scale transport of chemicals is of deep interest to the public as well as private sectors, and it represents an area of active theoretical research in many environmentally-based disciplines. However, the experimental data needed to validate field-scale transport models are very limited due to the numerous logistical difficulties that one faces. In the present paper, the migration of a tracer (Cl⁻) was monitored during its movement in the unsaturated zone beneath the surface of an 8 m × 50 m sandy soil. A flux-controlled, steady-state water flow (Jw = 10 mm/day) was maintained by twice-daily sprinkler irrigation. A pulse of 105 g/m² KCl was applied uniformly to the surface, and subsequently leached downward by the same (chloride-free) flux Jw over the successive two months. Chloride concentration monitoring was carried out in seven measurement campaigns (each one corresponding to a given time) along seven (parallel) transects. The mass recovery was near 100%, underlining the very good quality of the concentration data set. The chloride concentrations are used to test two field-scale models of unsaturated transport: (i) the Advection-Dispersion Equation (ADE), which models transport far from the zone of solute entry, and (ii) the Stochastic-Convective Lognormal transfer function (CLT) model, which instead accounts for transport near the release zone. Both models provided an excellent representation of the solute spreading at z > 0.45 m (z = 0.45 m being the calibration depth). As a consequence, below a depth of z ≈ 50 cm one can regard transport as Fickian. The ADE model dramatically underestimates solute spreading at shallow depths, due to boundary effects that the ADE does not capture. The CLT model appears to be a more robust tool for mimicking transport at every depth.
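The ADE's behavior for a surface pulse can be sketched with its classical Gaussian solution. The velocity and dispersion values below are hypothetical placeholders, not the paper's fitted parameters.

```python
import numpy as np

def ade_pulse(z, t, v, D, M=1.0):
    """1-D Advection-Dispersion Equation solution for an instantaneous
    pulse of mass M per unit area: a Gaussian advecting at pore-water
    velocity v and spreading with dispersion coefficient D."""
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(z - v * t) ** 2 / (4 * D * t))

# Hypothetical parameters (NOT the paper's values):
v, D = 0.02, 0.001           # m/day, m^2/day
z = np.linspace(0, 1, 101)   # depth below surface, m
c30 = ade_pulse(z, 30.0, v, D)
print("peak depth ~", z[np.argmax(c30)])  # peak near v*t = 0.6 m
```

The CLT alternative replaces this Gaussian depth profile with a lognormal travel-time distribution calibrated at a reference depth, which is why it handles the near-surface (non-Fickian) zone better.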
Analysis of world economic variables using multidimensional scaling.
Machado, J A Tenreiro; Mata, Maria Eugénia
2015-01-01
Waves of globalization reflect the historical technical progress and modern economic growth. The dynamics of this process are here approached using the multidimensional scaling (MDS) methodology to analyze the evolution of GDP per capita, international trade openness, life expectancy, and tertiary education enrollment in 14 countries. MDS provides the appropriate theoretical concepts and the exact mathematical tools to describe the joint evolution of these indicators of economic growth, globalization, welfare and human development of the world economy from 1977 up to 2012. The polarization dance of countries illuminates the convergence paths, potential warfare and present-day rivalries in the global geopolitical scene. PMID:25811177
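The core MDS step can be illustrated with classical (Torgerson) scaling, here implemented directly in numpy on a toy correlation matrix (hypothetical data, not the paper's indicators): correlations are converted to distances and embedded in two dimensions.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in R^k from a distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy example: correlations between four series -> distances -> 2-D map
rho = np.array([[1.0, 0.9, 0.1, 0.0],
                [0.9, 1.0, 0.2, 0.1],
                [0.1, 0.2, 1.0, 0.8],
                [0.0, 0.1, 0.8, 1.0]])
D = np.sqrt(2 * (1 - rho))                 # a standard correlation distance
X = classical_mds(D)
print(X.shape)                             # one 2-D point per series
```

Highly correlated series land close together in the 2-D map, which is exactly how country or stock clusters become visible in MDS scatter plots.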
Wavelet analysis and scaling properties of time series
NASA Astrophysics Data System (ADS)
Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.
2005-10-01
We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.
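The scale-wise variance idea behind such wavelet analyses can be sketched as follows. A Haar wavelet is used here instead of the Daubechies family for brevity, and the random-walk input is synthetic; the point is that detail-coefficient variance versus scale reveals the scaling exponent.

```python
import numpy as np

def haar_detail_variances(x, max_level):
    """Variance of orthonormal Haar detail coefficients per scale; for a
    self-similar process the log2-variance grows linearly with level."""
    u = np.asarray(x, dtype=float)
    var = []
    for _ in range(max_level):
        d = (u[0::2] - u[1::2]) / np.sqrt(2)   # Haar details
        u = (u[0::2] + u[1::2]) / np.sqrt(2)   # Haar approximations
        var.append(d.var())
    return np.array(var)

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(4096))    # random walk, H = 0.5
v = haar_detail_variances(walk, 6)
slope = np.polyfit(np.arange(1, 7), np.log2(v), 1)[0]
print(round(slope, 2))  # theory for H = 0.5: slope = 2H + 1 = 2
```

For multifractal series the single slope is replaced by a spectrum of exponents, obtained by repeating this analysis for different moments of the detail coefficients.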
On the analysis of large-scale genomic structures.
Oiwa, Nestor Norio; Goldman, Carla
2005-01-01
We apply methods from statistical physics (histograms, correlation functions, fractal dimensions, and singularity spectra) to characterize the large-scale structure of the distribution of nucleotides along genomic sequences. We discuss the role of the extension of noncoding segments ("junk DNA") for the genomic organization, and the connection between the coding segment distribution and the high-eukaryotic chromatin condensation. The following sequences taken from GenBank were analyzed: complete genome of Xanthomonas campestris, complete genome of yeast, chromosome V of Caenorhabditis elegans, and human chromosome XVII around gene BRCA1. The results are compared with random and periodic sequences and with those generated by simple and generalized fractal Cantor sets. PMID:15858230
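One of the paper's reference constructions, the fractal Cantor set, lends itself to a compact box-counting sketch (a generic illustration, not the authors' code): count occupied boxes at shrinking scales and read the dimension off the log-log slope.

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Estimate the box-counting dimension of a 1-D point set."""
    counts = [len(np.unique(np.floor(points / eps))) for eps in eps_list]
    slope, _ = np.polyfit(np.log(1 / np.array(eps_list)), np.log(counts), 1)
    return slope

def cantor_points(level):
    """Endpoints of the middle-thirds Cantor set after `level` iterations."""
    pts = np.array([0.0, 1.0])
    for _ in range(level):
        pts = np.concatenate([pts / 3, pts / 3 + 2 / 3])
    return pts

dim = box_counting_dimension(cantor_points(8), [3.0 ** -k for k in range(1, 7)])
print(round(dim, 2))  # close to ln 2 / ln 3 ≈ 0.63
```

Applied to the positions of coding segments along a chromosome, the same estimator distinguishes Cantor-like clustering from random or periodic placement.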
Enabling Large-Scale Biomedical Analysis in the Cloud
Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen
2013-01-01
Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. These planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is a compelling alternative because it simultaneously addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable. PMID:24288665
Magnetohydrodynamic generator scaling analysis for baseload commercial powerplants
NASA Astrophysics Data System (ADS)
Swallom, D. W.; Pian, C. C. P.
1983-08-01
MHD generator channel scaling analyses have been performed to quantify the effect of generator size and oxygen enrichment on channel performance. These studies have shown that MHD generator channels can be designed to operate efficiently over the range of 250 to 2135 thermal megawatts. The optimum design conditions for each of the thermal inputs were established by investigating various combinations of electrical load parameters, pressure ratios, magnetic field profiles, and channel lengths. These results provide design flexibility for the baseload combined-cycle MHD/steam power plant.
NASA Technical Reports Server (NTRS)
Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)
2001-01-01
Predicting the behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
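The FBA step that EBA builds on is a linear program: maximize an objective flux subject to steady-state stoichiometry, S v = 0, and flux bounds. A toy network (hypothetical, not from the paper) makes the setup concrete:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites; columns: reactions)
#   R1: -> A     R2: A -> B     R3: B -> biomass     R4: B -> (export)
S = np.array([[1, -1,  0,  0],    # metabolite A
              [0,  1, -1, -1]])   # metabolite B
c = np.array([0, 0, -1, 0])       # maximize biomass flux v3 (minimize -v3)
bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)  # optimal flux distribution; v3 = 10 here
```

EBA would add thermodynamic constraints on top of this LP, ruling out flux distributions that imply a positive Gibbs energy change around any reaction loop.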
Small-Scale Smart Grid Construction and Analysis
NASA Astrophysics Data System (ADS)
Surface, Nicholas James
The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show that data acquisition was three times more expensive than the grid itself, largely because 70% of data-acquisition costs could not be scaled down. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investing in pure-sine-wave hardware for future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this projection exposes disproportionalities between the SSSG and previous SG investigations, and changes are recommended to remedy this issue in future iterations. Other ideas investigated in the literature, and their suitability for incorporation into the SSSG, are also discussed. Developing a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure is highly recommended. Smart homes, BEV swap stations and pumped hydroelectric storage could also be researched on future iterations of the SSSG.
Genome-scale thermodynamic analysis of Escherichia coli metabolism.
Henry, Christopher S; Jankowski, Matthew D; Broadbelt, Linda J; Hatzimanikatis, Vassily
2006-02-15
Genome-scale metabolic models are an invaluable tool for analyzing metabolic systems as they provide a more complete picture of the processes of metabolism. We have constructed a genome-scale metabolic model of Escherichia coli based on the iJR904 model developed by the Palsson Laboratory at the University of California at San Diego. Group contribution methods were utilized to estimate the standard Gibbs free energy change of every reaction in the constructed model. Reactions in the model were classified based on the activity of the reactions during optimal growth on glucose in aerobic media. The most thermodynamically unfavorable reactions involved in the production of biomass in E. coli were identified as ATP phosphoribosyltransferase, ATP synthase, methylenetetrahydrofolate dehydrogenase, and tryptophanase. The effect of a knockout of these reactions on the production of biomass and the production of individual biomass precursors was analyzed. Changes in the distribution of fluxes in the cell after knockout of these unfavorable reactions were also studied. The methodologies and results discussed can be used to facilitate the refinement of the feasible ranges for cellular parameters such as species concentrations and reaction rate constants. PMID:16299075
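The group-contribution estimate works by summing per-group formation energies and then weighting compounds by reaction stoichiometry. The group energies, metabolites and reaction below are made-up placeholders; real analyses use published contribution tables.

```python
# Hypothetical group-contribution energies in kJ/mol (NOT real table values)
GROUP_DG = {"-OH": -120.0, "-CH2-": 8.0, "-COO-": -300.0}

def compound_dg(groups):
    """Standard Gibbs energy of formation as a sum of group contributions."""
    return sum(GROUP_DG[g] * n for g, n in groups.items())

def reaction_dg(stoich, compounds):
    """Reaction Gibbs energy change: stoichiometry-weighted formation energies
    (negative coefficients for substrates, positive for products)."""
    return sum(nu * compound_dg(compounds[name]) for name, nu in stoich.items())

compounds = {"X": {"-OH": 1, "-CH2-": 2},   # hypothetical metabolites
             "Y": {"-COO-": 1, "-CH2-": 1}}
dg = reaction_dg({"X": -1, "Y": 1}, compounds)
print(dg)  # -188.0 kJ/mol for this made-up reaction X -> Y
```

Running such an estimate over every reaction in a genome-scale model is what lets unfavorable (large positive ΔG) reactions be flagged, as done in the paper.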
Bench-scale Analysis of Surrogates for Anaerobic Digestion Processes.
Carroll, Zachary S; Long, Sharon C
2016-05-01
Frequent monitoring of anaerobic digestion processes for pathogen destruction is both cost and time prohibitive. The use of surrogates to supplement regulatory monitoring may be one solution. To evaluate surrogates, a semi-batch bench-scale anaerobic digester design was tested. Bench-scale reactors were operated under mesophilic (36 °C) and thermophilic (53-55 °C) conditions, with a 15 day solids retention time. Biosolids from different facilities and during different seasons were examined. USEPA regulated pathogens and surrogate organisms were enumerated at different times throughout each experiment. The surrogate organisms included fecal coliforms, E. coli, enterococci, male-specific and somatic coliphages, Clostridium perfringens, and bacterial spores. Male-specific coliphages tested well as a potential surrogate organism for virus inactivation. None of the tested surrogate organisms correlated well with helminth inactivation under the conditions studied. There were statistically significant differences in the inactivation rates between the facilities in this study, but not between seasons. PMID:27131309
Economic analysis of small-scale fuel alcohol plants
Schafer, J.J. Jr.
1980-01-01
To plan Department of Energy support programs, it is essential to understand the fundamental economics of both the large industrial size plants and the small on-farm size alcohol plants. EG and G Idaho, Inc., has designed a 25 gallon per hour anhydrous ethanol plant for the Department of Energy's Alcohol Fuels Office. This is a state-of-the-art reference plant, which will demonstrate the cost and performance of currently available equipment. The objective of this report is to examine the economics of the EG and G small-scale alcohol plant design and to determine the conditions under which a farm plant is a financially sound investment. The reference EG and G Small-Scale Plant is estimated to cost $400,000. Given the baseline conditions defined in this report, it is calculated that this plant will provide an annual after-tax return on equity of 15%, with alcohol selling at $1.62 per gallon. It is concluded that this plant is an excellent investment in today's market, where 200 proof ethanol sells for between $1.80 and $2.00 per gallon. The baseline conditions which have a significant effect on the economics include plant design parameters, cost estimates, financial assumptions and economic forecasts. Uncertainty associated with operational variables will be eliminated when EG and G's reference plant begins operation in the fall of 1980. Plant operation will verify alcohol yield per bushel of corn, labor costs, maintenance costs, plant availability and by-product value.
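The price sensitivity behind these conclusions can be sketched with a simple annual-margin calculation. Only the plant cost, throughput and price points come from the report; the availability and per-gallon operating cost below are assumed values for illustration.

```python
# Rough sensitivity sketch: report figures plus ASSUMED operating values.
PLANT_COST = 400_000     # $ (from the report)
RATE_GPH = 25            # gallons per hour (from the report)
AVAILABILITY = 0.90      # assumed fraction of 8760 h/yr on-line
OPEX_PER_GAL = 1.20      # assumed feedstock + operating cost, $/gal

def annual_margin(price_per_gal):
    """Annual gross margin before financing and taxes, $/yr."""
    gallons = RATE_GPH * 8760 * AVAILABILITY
    return gallons * (price_per_gal - OPEX_PER_GAL)

for price in (1.62, 1.80, 2.00):
    print(f"${price:.2f}/gal -> margin ${annual_margin(price):,.0f}/yr")
```

Even this crude sketch shows why a market price of $1.80-$2.00 against a $1.62 break-even makes the investment attractive: margin scales linearly with the price spread.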
NASA Astrophysics Data System (ADS)
Müller, Bernhard; Janka, Hans-Thomas; Marek, Andreas
2012-09-01
We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer, with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities, and is likely to contribute to an ultimately successful explosion mechanism no less than hydrodynamical differences between dimensions do.
NASA Astrophysics Data System (ADS)
Günther, Uwe; Zhuk, Alexander; Bezerra, Valdir B.; Romero, Carlos
2005-08-01
We study multi-dimensional gravitational models with scalar curvature nonlinearities of types R^-1 and R^4. It is assumed that the corresponding higher-dimensional spacetime manifolds undergo a spontaneous compactification to manifolds with a warped product structure. Special attention has been paid to the stability of the extra-dimensional factor spaces. It is shown that for certain parameter regions the systems allow for a freezing stabilization of these spaces. In particular, we find for the R^-1 model that configurations with stabilized extra dimensions do not provide a late-time acceleration (they are AdS), whereas the solution branch which allows for accelerated expansion (the dS branch) is incompatible with stabilized factor spaces. In the case of the R^4 model, we obtain that the stability region in parameter space depends on the total dimension D = dim(M) of the higher-dimensional spacetime M. For D > 8 the stability region consists of a single (absolutely stable) sector which is shielded from a conformal singularity (and an antigravity sector beyond it) by a potential barrier of infinite height and width. This sector is smoothly connected with the stability region of a curvature-linear model. For D < 8 an additional (metastable) sector exists which is separated from the conformal singularity by a potential barrier of finite height and width, so that systems in this sector are prone to collapse into the conformal singularity. This second sector is not smoothly connected with the first (absolutely stable) one. Several limiting cases and the possibility of inflation are discussed for the R^4 model.